Hey Homelab Heroes – We Need Your Input!

We’ve been tossing around some ideas for future 45HomeLab server builds, and we want to hear from the real experts: YOU, the community.

We’re exploring whether we should offer:

  1. A high-performance tri-mode backplane (yes, it would support SAS/SATA/NVMe… but it would bump up the cost)
  2. A hybrid backplane that lets you mix SSDs and HDDs while keeping things budget-friendly and simple

We’re listening—and your feedback helps shape the future of our products.

So tell us…

What would you rather see in future HL units?

Vote below and let us know why!
We’re building for the homelabbers, with homelabbers.

  • Tri-mode Backplane (even if it costs more!)
  • Hybrid Backplane (HDD + SSD combo for the win!)
  • Both
  • None

I would like the flexibility of having a SAS/SATA pool and an NVMe pool, with the NVMe pool connected directly to the motherboard via MCIO connections rather than through an HBA.
The SAS/SATA pool should support both HDDs and SSDs.

The NVMe pool should support both 15 mm and 7 mm drives (not just M.2 drives).
Something like:

  • CP104_8 x U.2 NVMe SSD Backplane Cage in 2 x External 5.25" Drive Bay
  • ASRock Rack 4U8G TURIN2 NVMe Backplane MCIO - ServeTheHome
  • Chenbro - Products

Just designed with the usual 45Drives awesomeness for airflow between the drives, and keeping the top-loading approach for these drives as well.

@Vikram-45HomeLab I didn’t vote for the tri-mode backplane, but I figured I’d put it out there that I’m probably enough of an enthusiast to still buy one.

I do think timing is a factor here. I assume that internally you have a target for how new (or conversely, how old) the enterprise gear or features you offer should be. NVMe in the form of U.2 or U.3 is growing, but I see it being a few years out still before going “mainstream” in the homelab. You’ll want to offer it at some point, but in my opinion hybrid seems to be the way to go for the nearer term.

My concern with the tri-mode backplane, the way the options were presented, was the “(even if it costs more!)”, giving the impression that it would be the baseline offering, increasing the price of entry for an already expensive chassis, rather than some sort of option. Although I appreciate the opportunity for input (and I’m trying to avoid a long rant), I think this is one of those surveys that misses out on proper feedback about how the product could or should be developed.

As I see it:

  • There is going to continue to be a core group of prospective customers who are cost-conscious and mostly interested in just a large pool of spinning rust; they don’t need or want tri-mode at a higher cost.
  • There are people who would like to add some 2.5" (7 or 15 mm) SSDs, but aren’t enamored with the 3D-printable bracket (although it’s cool that was thought about). These people would be better served by a hybrid combo like the one I believe was offered on the AV15 for a while.

    Although U.2/U.3 has the benefit for people who are labbing for work, consumer 2.5" SATA SSDs still provide a more cost-conscious option for the segment looking for a faster random disk cache for media serving and light random-IOPS workloads. The industry is moving on from the 2.5-inch form factor and SAS for flash storage, though. NVMe is great for speed, but it requires PCIe lanes that a lot of builds aren’t going to have.
  • Although more of an investment for the buyer, a more forward-looking approach would probably be the combined HDD/NVMe configuration

    although I definitely think it needs to be an option, not the single configuration.

All of those still miss another opportunity: people who want to add long GPUs but don’t want a 5U 24" beast. I’d like to see a version developed that has only 8 or 9 3.5" drive bays (or some configuration in the same space including some SSD/NVMe bays) but leaves the left side of the chassis open for long GPUs. This would be an AI-type configuration, allowing for a couple of AMD Radeon Pro W7900s or something.

One of the nice things about the 45Drives cases is their relative simplicity, so I’m not looking for a lot of little fiddly parts and configuration options, but if we are talking about the backplane direction, I think there are some better alternatives to discuss than just “$$$tri-mode or hybrid?”

2 Likes

I completely agree on this point. I doubt it’ll make sense for 45HomeLab to offer two different backplanes in the same product line. It’ll also take time for the cost to come down, which I’d prefer to wait for, so it can be included one day while keeping a lower entry point for people.

I would pay a premium for an E-ATX HL30 version. As for drive backplane configuration, sell the backplanes separately on the store and let people build the lab they want.

3 Likes

Just curious, as much as I like the Q30: what is the use case for a machine with both E-ATX and 30 drive bays? With 100 Gb+ networking available now at not-unreasonable pricing, wouldn’t you rather have a compute node separate from a storage node? You also have to get a lot of airflow through the drives to cool the high-TDP CPU(s) you’re going to put on an E-ATX board.

For those of us who don’t work in enterprise data centers but are only exposed through L1Techs and STH, a bit of an explanation of these two choices might be helpful. The most I know is:

I assumed, I guess incorrectly, that this folded in support for the EDSFF ruler drives. The tri-mode backplane, based on what Brett showed, seems to refer only to SFF-8639 connectors with either 2.5" (7-15 mm high) or 3.5" drive form-factor spacing. Is that correct? Do the EDSFF drives use a different connector?

I’m not sure, maybe it depends on the size of the flash, but I thought the enterprise world was moving in the direction of EDSFF. I’m not sure what’s more readily available new or second-hand, but I do know there is at least one consumer NAS, the QNAP TBS-h574TX, that has EDSFF bays. I figured there was a parallel shift similar to how M.2 NVMe has become dominant over 2.5" SATA SSDs on the consumer side.

Compute, storage, and rack-space consolidation, and I will be able to reuse my existing E-ATX server parts.

The different backplane options are nice and, as was mentioned earlier, would be perfect to have on the store. This way there is some flexibility in what you can do.

What I would like to see is a workstation option that uses full ATX or larger motherboards. My current desktop is in an O11D, and with the GPU, USB, serial/parallel, and NIC cards there are no more slots. This would not work on a mini-ITX board.

Also, I would be interested in seeing a more compute-optimized option, or even HCI options. While a collection of HL4s would work for compute, it limits the platform options greatly compared with SSI-EEB systems that have just a few drive mounts.

1 Like

DigitalGarden, always good chatting, and thanks for helping to flesh this out more. So the answer is that there are legitimate use cases for both configurations. To give some examples for your question: when you start digging into, say, enterprise / hyperscaler builds, yes, they would often split out a disk-heavy build (a storage POD) from a compute-focused one (say a compute POD or an AI POD), because they are often trying to really optimize a specific server for storage, GPUs, etc.

But on the other hand, that’s not always universal for them, or even for a high-end workstation use case. You may want, say, 30x disks paired with an E-ATX motherboard with 1-2 CPUs so that you can run workloads 100% locally to the machine and not have to pop up and out over the network. Every time you jump out across the network, even on, say, a 100 Gb NIC, you are going to pay a heavy latency penalty (relative to processing the data locally) vs. making a call from a CPU across the motherboard via PCIe links to, say, an HBA card that then fans out across your drives. The pro of this setup is that all processing is done locally to the server. The con is that it only works if your entire raw dataset fits within a Q30 (including all of the raw drives you’re losing to redundancy), and obviously many hyperscaler workloads wouldn’t fit.
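To put rough numbers on that latency point, here’s a back-of-the-napkin sketch. Every figure in it is a ballpark assumption on my part (not measured, and not anything official), just to show the shape of the local vs. network tradeoff:

```python
# Back-of-the-napkin per-IO latency: local HBA path vs. a network hop.
# All microsecond figures below are ballpark assumptions for illustration.

LOCAL_PATH_US = {
    "cpu_to_hba_pcie": 1,   # CPU -> HBA over PCIe links
    "flash_read": 90,       # typical NAND read, tens of microseconds
}

NETWORK_PATH_US = {
    "cpu_to_nic_pcie": 1,   # CPU -> NIC over PCIe
    "network_rtt": 50,      # 100 Gb link + TCP storage stack round trip
    "remote_stack": 20,     # remote server's block/filesystem handling
    "flash_read": 90,       # the flash itself is the same either way
}

local = sum(LOCAL_PATH_US.values())
remote = sum(NETWORK_PATH_US.values())
print(f"local:  ~{local} us per IO")
print(f"remote: ~{remote} us per IO ({remote / local:.1f}x the local path)")
```

Even with generous assumptions for the network path, the remote hop roughly doubles the per-IO latency; RDMA or slower flash shifts the ratio, which is exactly why “it depends on the workload.”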

Then, to answer your question on airflow: yeah, you do have a lot going on thermally in a server case with 30x hard drives plus 1-2 CPUs. But on the other hand, if it’s a Q30, and if we assume front-to-back airflow, you also have a 4U server, which means you can easily get 3x 120mm fans across the front. And 3x 120mm fans can push a lot of air if you buy the right fan SKUs, are mindful of static pressure, etc. You could even put some fans on the back to help if needed. So thermals shouldn’t hold this config back at all as long as you have a decent fan design installed.
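If anyone wants to sanity-check the fan math, here’s a quick rule-of-thumb calculation. The heat load, fan rating, and restriction factor are all illustrative guesses on my part:

```python
# Rule-of-thumb airflow check for ~30 drives + dual CPUs in a 4U chassis.
# CFM ~= 1.76 * watts / delta-T(C) is the standard air-cooling estimate;
# the wattage, fan rating, and restriction factor are my own guesses.

HEAT_LOAD_W = 600        # e.g. 2 CPUs + 30 spinning drives, ballpark
DELTA_T_C = 10           # allowed intake-to-exhaust temperature rise

required_cfm = 1.76 * HEAT_LOAD_W / DELTA_T_C

FAN_FREE_AIR_CFM = 70    # a decent 120mm fan's free-air rating
RESTRICTION = 0.5        # assume drive cages roughly halve effective flow
fans = 3

effective_cfm = fans * FAN_FREE_AIR_CFM * RESTRICTION
print(f"need ~{required_cfm:.0f} CFM, 3x 120mm deliver ~{effective_cfm:.0f} CFM")
```

So three decent 120mm fans are plausibly in the right ballpark even with restrictive drive cages, which is why the “buy the right fan SKUs” caveat matters.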

So yes, there are very legit reasons for someone to want a Q30 with an E-ATX motherboard. There is no one-size-fits-all here, which is why I agree with your idea that they should give us options. I will expand on that, but let me break it into a separate message…

1 Like

I agree with several of the folks above who said to just build the case for us and then let us buy the backplane options we want in the store. Why? Because the storage industry is actively transitioning to EDSFF, and yet the majority of us will still want to run SATA, U.2, etc. for many years. So, given all of the industry churn in storage form factors, the most important thing that the Homelabs team can give us right now is storage form factor “flexibility.” Pure and simple.

And I think the way to do that (which was not offered in the original voting) would be to make the backplanes fully modular. How would that work for, say, a Q30? Well, a standard Q30 has 2 backplanes with 15x drives each. So break each row of 15x drives into 3x modularly connected backplane plates, and then allow us to swap in and out whatever 3x plates we want per row.

Backplane plates could be (there’s a little sketch of the idea right after this list):

  • SATA (simple plain-Jane SATA, or SATA/SAS, whatever keeps the cost as low as possible)
  • Tri-Mode (SAS, SATA, NVMe)
  • EDSFF (there are multiple form factors here; EDSFF is not one single size)
  • Blanks (if we wanted to run long GPUs)
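And here’s a toy sketch of the “LEGOs” idea, just to make it concrete. The plate names, widths, and row size are my own made-up illustration, not anything 45HomeLab has committed to:

```python
# Toy model of a modular ("LEGO") backplane row. Plate kinds, widths,
# and the 15-bay row are made-up illustrations, not a real spec.

from dataclasses import dataclass

@dataclass(frozen=True)
class Plate:
    kind: str   # "sata", "trimode", "edsff", or "blank"
    bays: int   # drive slots this plate occupies

ROW_BAYS = 15   # bays per Q30 row (two rows per chassis)

def validate_row(plates: list[Plate]) -> None:
    used = sum(p.bays for p in plates)
    if used != ROW_BAYS:
        raise ValueError(f"row uses {used} bays; the chassis row has {ROW_BAYS}")

# Example: cheap bulk storage + a few NVMe-capable bays + room for a long GPU.
row = [Plate("sata", 5), Plate("trimode", 5), Plate("blank", 5)]
validate_row(row)
print("row OK:", [(p.kind, p.bays) for p in row])
```

Swap a plate type in that list and the same chassis becomes a different product. That’s the whole pitch.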

The benefit of this idea is that everyone wins. If you want to go cheap, great. If you want to go hardcore mode, you will pay more, but great. Yes, my idea takes more work upfront from 45 Homelabs / Protocase, but this way everyone gets what they want. And Homelabs sells way more units, since the installed base for this product will, by design, be much wider than with the ideas listed in the original voting above: the case isn’t solely dependent on selling into one single part of the market (a lowest-cost-at-all-costs enthusiast vs. a moderate-needs user vs. someone with very high-end needs). This case could instead bridge all use cases for this product.

TLDR… it’s time that 45 Homelabs gave us “LEGOs” for Backplanes :wink:

Because again, as I’ve said in other posts, I’ve looked at a lot of rackmount server cases and, IMHO, the quality of the backplane was the most standout piece of the HL15 server case. It’s the feature that other server case manufacturers have yet to fully replicate. So I would love to see the larger 45 Homelabs / Protocase team take that lead and really build on it to get this next-gen case up to an 11-out-of-10-reviewed kind of case.

2 Likes

This doesn’t work well with the 4-channel nature of most SAS cabling (15/3 = 5 drives per plate, which doesn’t divide evenly into 4-lane connectors). You’re down to either 12 3.5" drives per row, or a web of single cables for every slot like the Pod 2.0 design. It also means there are more cables, and more considerations (for the builder) around power distribution to all these plates.

1 Like

Good feedback. 2 things.

(1) I’m OK if it can’t be exactly 3x plates of 5x connectors each for a total of 15 drives. It could totally be an alternative arrangement. So, I don’t know, maybe it’s 4 plates of 4x drives, with the understanding that you can’t use the 4th bay on the last plate next to the left wall of the case. That way you’d still get a total of 15 drives (or 1 plate x 3 drives + 3 plates x 4 drives, going left to right across the front of the case). There are just lots of ways these plates could be laid out: 3x drives per plate, 4x, 5x, etc.
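Just for fun, here’s a quick brute-force sketch of those layout combinatorics. The assumption that each plate needs its own whole 4-lane connector(s) is mine, purely to illustrate why 4-wide plates are the tidy option:

```python
# Quick brute-force enumeration of plate layouts for one 15-bay row.
# Assumption (mine, not a known constraint): each plate gets its own
# whole 4-lane SAS connector(s), so a 5-bay plate burns 8 lanes.

from itertools import combinations_with_replacement
from math import ceil

PLATE_SIZES = (3, 4, 5)   # plausible plate widths, in drive bays
ROW_BAYS = 15             # one Q30 row

for n in range(1, ROW_BAYS // min(PLATE_SIZES) + 1):
    for combo in combinations_with_replacement(PLATE_SIZES, n):
        if sum(combo) == ROW_BAYS:
            lanes = sum(ceil(bays / 4) * 4 for bays in combo)
            print(f"plates {combo}: {lanes} lanes provisioned, "
                  f"{lanes - ROW_BAYS} stranded")
```

Running it, (3, 4, 4, 4) strands a single lane while (5, 5, 5) strands nine, which is basically the cabling concern raised above.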

I’d also say that when doing product design, there is an almost infinite number of factors that have to be juggled simultaneously when first starting out, which can easily lead to paralysis if we’re not careful. So we may or may not want cabling to be our #1 top-of-mind driver when we as a group are doing early-stage product design brainstorming. At least not for a rackmount server case. It may instead be better for us to figure out the core specs and features we want first. Then the 45 Homelabs team can go back, huddle for a bit, and come back out of the huddle and say, “OK, well, if we were to design the backplane modularly, then here is how it might work in terms of the raw drive layouts. Here are the options we can actually give you. What do you guys think?”

Then, practically speaking, I like the idea of them making more raw drive capacity available, even if that means I have to buy an extra cable or two, or even if there are a few more cables I have to jiggle around to make the power distribution work. Also, given how the HL15 v1.0 is built today, the Homelabs team has already done a pretty good job of running all of the backplane cables underneath the backplane. They give you enough access to the cables when you really need it, but they also make sure the cabling is out of the way when you don’t.

(2) All of the above being said, I don’t want to miss the forest for the trees here. To me, the exact cable layouts, backplane plate layouts, etc. are all totally negotiable. My uber point was that a modular backplane should be one of the core features they build into the product specs for the HL15 / HL30 and similar cases moving forward.

  • If they build this modularly, it’s just a far more scalable long-term design.
  • Best of all, a modular design allows them to hit all of the different price points and market segments that people are asking for in terms of what future HL15s / HL30s might look like.

In closing, I think a modular backplane addresses the excellent point that you (DigitalGarden) initially raised above: that there is a real danger in making this design a “one size fits all” type of thing. The more we talk about this, the more I realize you were very right about that. While we are all excited about the idea of a new backplane design, it’s already very obvious from this thread that there are many, many ways people want to tailor it.

So I’ll close by saying it again, we need 45 Homelabs to give us “LEGOs for Backplanes” :wink:

2 Likes

I’d easily pay extra for a tri-mode backplane, though ideally it’d be something like Supermicro does (where some subset of the slots are tri-mode capable). With the example linked, a 12-bay backplane has 4 tri-mode slots, keeping manufacturing costs down.

It’s a happy medium, one that’s perfectly suited to homelab use IMO, and it’s what I use in my 743 chassis (4 SAS/SATA + 4 tri-mode on the built-in backplane, plus another 5 SAS/SATA via a mobile rack in the 5.25” bays). Just based on current sale prices for the ‘standard’ vs. hybrid backplanes, doing this only appears to add ~25% to the cost of the backplane, making it an easy win IMO.
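To show how that pencils out, here’s a toy per-slot cost model. The dollar figures are invented placeholders (I picked the per-slot premium so the hybrid case lands near the ~25% I’m seeing); only the ratios matter:

```python
# Toy per-slot cost model for partial vs. full tri-mode backplanes.
# Dollar figures are invented placeholders; only the ratios matter.

SATA_SLOT_COST = 10.0      # hypothetical cost per plain SAS/SATA slot
TRIMODE_SLOT_COST = 17.5   # hypothetical tri-mode slot (~75% per-slot premium)

def backplane_cost(total_slots: int, trimode_slots: int) -> float:
    plain = total_slots - trimode_slots
    return plain * SATA_SLOT_COST + trimode_slots * TRIMODE_SLOT_COST

standard = backplane_cost(12, 0)
hybrid = backplane_cost(12, 4)    # 4 of 12 slots tri-mode, like the SM example
full = backplane_cost(12, 12)

print(f"hybrid premium over standard: {hybrid / standard - 1:.0%}")
print(f"full tri-mode premium:        {full / standard - 1:.0%}")
```

A ~25% premium for 4 NVMe-capable bays vs. a ~75% premium to make every bay tri-mode is an easy call for most homelab builds.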

3 Likes