hello -
I have a fully populated HL15 with an LSI-9400-16i and, of course, I want to squeeze a few more drives into the HL15 using 2.5" SAS drives. Are there things I should look for to make sure they'll work with the chassis and its current HBA, existing expander, etc.? I want to make sure everything remains 12Gb as well.
Not sure I understand your setup from the post. You have a custom build using an LSI-9400-16i? What motherboard are you using? What PCIe slots are populated and with what cards?
The HL15 has a direct attached backplane, not an expander backplane. Each of the SFF-8643 cables you connected between the backplane and the 9400-16i supports 4x 12gbps SAS channels (so with 15x drives, one goes unused).
A SAS expander card is a multiplexer/switch, so its purpose is to add storage capacity but at the potential cost of raw throughput. If you want to ensure 12gbps to all drives at all times you would want to install a second HBA, or sell the -16i card and buy a -24i card.
You also need to consider power: make sure your PSU will support however many additional SAS drives you wish to add.
Here is a bit about expanders and HBAs;
The cleanest and technically least compromising option would be to get a -24i card for direct connect. You are not very clear on what these 2.5" SAS drives are (HDD vs SSD), but you might also want to consider power requirements/PSU.
I personally struggled quite a bit to find appropriate PSUs with at least 25A at 5V. The only options at a “standard” length that I was able to find were the Vertex (PX and GX) from Seasonic at 1000-1200W. Other options from other manufacturers, including Corsair, are longer, making fitting them and having space to connect and manage the cables very difficult or impossible, considering the location of the backplane power board.
Broadcom/LSI cards generally support staggered spin-up, which helps with peak power demand during power-on and in turn should avoid tripping any protections on your PSU, assuming it is adequate for the additional load to begin with. It all depends on what else you have running - GPUs etc. for virtualisation server passthrough or similar, which will have their own power demands. In my case this is purely used as a NAS.
You will also need to consider mounting options where applicable. I designed and 3D printed my own 2.5" bracket kit(s), and planned their mounting positions with cooling in mind.
If you are all about storage density, I can't speak for others' results, but all in all I have 27 storage devices currently in my HL15 and can fit quite a few more (expect another 12 if I were to take it to the extreme), though I have no such need at the moment.
Yup - custom build with an ASUS PRIME H770-PLUS D4, and I'm wanting to use some 2.5" SAS SSDs, assuming it doesn't get cost prohibitive. I've got the Corsair 750W PSU that was provided with the HL15, so I should be fine for power. This system is basically a NAS with some light transcoding. I'm just using the onboard GPU, as 90% of the time playback doesn't require any transcoding.
You might find that at peak load, and depending of course on the SSDs you choose and their number, the bundled 750W PSU might not be able to cut it.
SSDs utilise the +5V rail, and most typically require up to 1A each at full load (spec-wise, a lot go up to 1.8-2A each). The standard PSU can manage up to 20A total on the 5V rail.
HDDs utilise both 12V and 5V, with 5V feeding the electronics and 12V typically handling the motors/spin-up. Considering the limits on each rail, SSDs can be more demanding.
Might be worth doing the math for your use case. You need to account for peak power, not just average/idle, or your PSU’s protections will trip.
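If it helps anyone do that math, here is a minimal sketch of a per-rail peak-current estimate. All the per-device amp figures below are illustrative assumptions, not measured values; substitute the peak numbers from your own drives' datasheets:

```python
# Rough per-rail peak-current estimate for a mixed drive population.
# Every amp figure here is an assumed, illustrative value -
# replace with your drives' datasheet peak numbers.

HDD_5V_PEAK_A = 0.9    # assumed: 3.5" HDD electronics on +5V
HDD_12V_PEAK_A = 2.0   # assumed: 3.5" HDD spin-up draw on +12V
SSD_5V_PEAK_A = 1.8    # assumed: 2.5" SAS SSD worst case on +5V

RAIL_5V_LIMIT_A = 20.0  # e.g. a typical 750W ATX PSU's +5V rating

def rail_load(n_hdd: int, n_ssd: int) -> tuple[float, float]:
    """Return (amps on +5V, amps on +12V) at simultaneous peak."""
    amps_5v = n_hdd * HDD_5V_PEAK_A + n_ssd * SSD_5V_PEAK_A
    amps_12v = n_hdd * HDD_12V_PEAK_A  # SSDs assumed 5V-only
    return amps_5v, amps_12v

a5, a12 = rail_load(n_hdd=15, n_ssd=4)
print(f"+5V: {a5:.1f} A (limit {RAIL_5V_LIMIT_A} A), +12V: {a12:.1f} A")
```

With these assumed figures, 15 HDDs plus 4 SSDs lands around 20.7A on the +5V rail at simultaneous worst case, i.e. right at or over a 20A rail, which is exactly why peak (not idle) numbers and staggered spin-up matter.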
This is also consistent with the advice in the FAQ for anything requiring additional power draw.
So you have;
• 1 x PCIe 5.0 x16 SafeSlot Core+
• 2 x PCIe 4.0 electrical x4 in a physical x16
• 2 x PCIe 3.0 x1
I assume the 9400-16i is in the electrical x16 slot, and the other three slots are free then.
To answer the original question as asked, the only concerns from the HL15 chassis perspective, since you are not using the full build, are how you would mount and power the 2.5" drives. That means not just the total amp load per rail on the PSU, but also how you will rejigger the four connections from the PSU to the PDU board to free one up for these 2.5" drives. I'm not saying it's hard (there are other posts here on doing this), only that it is something to consider, since that was your request: things to look out for.
Given your board's PCIe layout, you would want to be sure any additional HBA you got would run in an electrical x4 slot. I think another 94xx or higher series card will, but I'm not sure the earlier 93xx series negotiate down to x4; it wasn't as explicitly stated in the docs I could find for earlier HBAs as it is for more recent ones. Not all HBAs I've used would run in an x4 slot; most of them have x8 lanes on the edge connector.
In the larger picture, it might be worth posting and discussing what you are actually trying to accomplish, i.e. what functional problem you are trying to solve, rather than jumping to a solution. For example, if you are trying to set up caching, you could add an M.2 NVMe drive in each of the two x4 PCIe slots if you've already populated the three on the motherboard.
I would expand on this excellent response as well. It's worth checking whether you really need SAS SSDs.
You have mentioned that this is a NAS, but gave no details about network adapters. If you have anything less than 25Gbit networking, you can saturate it with SATA SSDs, assuming a few mirrored vdevs. Your motherboard already has 4 SATA ports (if you don't need more disks), or you can find much cheaper and less demanding HBAs for SATA, which would solve the above-mentioned PCIe challenges much more easily.
Not sure if this is something worth considering for you.
Thanks for the feedback from everyone here. I only have 3-4 2.5" SAS SSDs, and this topic has shown me it’s cost prohibitive to add them to the NAS. I don’t have a solid use case for them, so adding them was more of a want than a true need.
If you relax the 12gbps requirement a bit, you might be able to do something for $100-$110 USD.
You can get a version of a 9300-4i for about $65 including shipping;
You would need a forward breakout cable something like this;
You would disconnect one of the cables going from the PSU to the PDU and replace it with the 4x SATA power cable that came in the extra parts box/bag with the HL15. If I remember the calculations, the amp draw for the 15 HDDs across three molex connectors to the PDU instead of four would be ok.
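As a sanity check on that rebalancing, a back-of-the-envelope split can be sketched like this. The per-drive spin-up draw and per-cable rating are assumed, illustrative figures; check your actual PSU cable gauge and ratings:

```python
# Back-of-the-envelope: current per PSU-to-PDU cable when 15 HDDs
# are fed by 3 cables instead of 4. Figures below are assumptions.

HDD_12V_SPINUP_A = 2.0   # assumed worst-case spin-up draw per HDD
N_DRIVES = 15
CABLE_RATING_A = 11.0    # assumed per-cable rating; verify yours

for n_cables in (4, 3):
    per_cable = N_DRIVES * HDD_12V_SPINUP_A / n_cables
    verdict = "ok" if per_cable <= CABLE_RATING_A else "OVER"
    print(f"{n_cables} cables: {per_cable:.1f} A each ({verdict})")
```

Under these assumptions the load rises from 7.5A to 10A per cable, still within the assumed rating but with less margin; staggered spin-up (mentioned above) would reduce the simultaneous peak further.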
You’d need one of the SSD mounting brackets, the official one or one of the many other 3D models floating around;
https://store.45homelab.com/products/125
Although the HBA will connect to the drives at 12 gbps, the PCIe bus would limit the total concurrent throughput across all four drives to 32 gbps rather than the theoretical 48 gbps. Depending on the capabilities of the SSDs, they may not be able to sustain 12 gbps anyway, and only do 4 or 5 gbps each sustained.
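The 32 vs 48 gbps figure falls straight out of the PCIe math; here is a quick sketch (assuming the card negotiates PCIe 3.0 in the x4 slot, and simplifying away protocol overhead beyond the line encoding):

```python
# Why a PCIe 3.0 x4 link caps four 12 Gb/s SAS channels at ~32 Gb/s.
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding.

lanes = 4
gt_per_lane = 8.0        # PCIe 3.0 transfer rate per lane (GT/s)
encoding = 128 / 130     # 128b/130b line-code efficiency

pcie_gbps = lanes * gt_per_lane * encoding
sas_gbps = 4 * 12        # four drives at 12 Gb/s each

print(f"PCIe 3.0 x4: {pcie_gbps:.1f} Gb/s usable")
print(f"4x SAS-3   : {sas_gbps} Gb/s theoretical")
```

That works out to roughly 31.5 Gb/s usable on the slot side versus 48 Gb/s of aggregate SAS bandwidth, so the slot, not the HBA, is the ceiling.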
For power, the general numbers are kind of fuzzy ranges, but I think 15 3.5 inch HDDs plus 4 2.5 inch SSDs is on the edge of what the 20A +5V rail on the RM750e is supposed to deliver. Someone else may be able to comment better. You may be fine, or it may work most of the time, but glitch out when the system is under heavy load like doing a scrub.