New Server, HBA, SAS Cables

Hey All,

I am purchasing the HL15 BareBones.

I have the following parts and am looking at the correct HBA to purchase.

|Component|Part|
|---|---|
|MB|SuperMicro H13SAE-MF|
|CPU|AMD EPYC 4584PX|
|CPU Cooling|Dynatron A49|
|RAM|128GB DDR5 - 4 x 32GB 4800MHz (Crucial CT32G48C40U5.M16A1)|
|GPU|PNY NVIDIA RTX A4000|
|NVMe-1|Kioxia KXG80ZN84T09 4TB PCIe Gen4 NVMe (OS)|
|NVMe-2|Kioxia KXG80ZN84T09 4TB PCIe Gen4 NVMe (Transcode, Docker)|
|NVMe-3 (HBA)|Kioxia CD6-V 2.5" 1.6TB KCD61VUL1T60 SSD (L2ARC/Cache)|
|PCI SSD Mount|JMT Metal PCI Slot 2.5inch|
|PSU|Corsair AX1000 80PLUS Titanium|
|Case|45Drives - HL15|
|Networking|10G Mellanox ConnectX-3 CX311A SFP+|
|HD Array|15 x 22TB IronWolf Pro 7200RPM|

I am looking at the Broadcom 9500-16i 12Gb/s HBA TriMode Card.

With that card, can I plug in all 4 backplane ports to cover all 15 drives and also plug in an NVMe SSD that I will have in a caddy?

It is a two-port SlimSAS card and I'm not sure what cables are required.

The backplane uses four SFF-8643 connectors, so if you use the HBA for all 15 HDDs, you will not be able to use the SSD. Unfortunately, the 16th drive channel is not usable.


You would select;

Set F
(2x) 2x SFF 8643 → SFF 8654 [2x Mini-SAS-HD (Backplane) to Slim-SAS]

You won’t be able to “also plug in an NVMe SSD I will have in a caddy” to the HBA. There are 15 drive bays on the backplane and no way to access the 16th SAS channel. You would have to plug that drive into the motherboard.
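Once all four backplane ports are cabled to the HBA, an easy sanity check is to count what the OS actually enumerates. A minimal sketch, assuming a Linux host where the bays show up as `sd*` block devices:

```python
# Count the rotational (spinning) block devices presented to the OS.
# Assumes Linux: /sys/block and the queue/rotational attribute are standard sysfs.
import os

spinners = []
for dev in sorted(os.listdir("/sys/block")):
    if not dev.startswith("sd"):
        continue  # skip NVMe, loop devices, etc.
    try:
        with open(f"/sys/block/{dev}/queue/rotational") as f:
            if f.read().strip() == "1":
                spinners.append(dev)
    except OSError:
        pass

print(f"Spinning drives detected: {len(spinners)} -> {', '.join(spinners)}")
# Expect 15 once all four backplane ports are connected to the HBA.
```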

You don’t mention the motherboard. Are you sure it will support a GPU and two PCIe electrical x8 slots for the NIC and HBA? I realize it’s an Epyc CPU, so it probably does, but you’ve got a lot of PCIe bandwidth going on in that build.

Hey Mate,

I am using the following Motherboard:

MB: SuperMicro H13SAE-MF
CPU: AMD EPYC 4584PX

I have 28 PCIe Gen 5 lanes

Devices:
2 x Gen4 NVMe drives (motherboard)
GPU: PNY NVIDIA RTX A4000
HBA Card: Broadcom 9500-16i 12Gb/s HBA TriMode (15 Drives)
Networking: 10G Mellanox ConnectX-3 CX311A SFP+

Do I have enough PCIe lanes… I can't figure it out :slight_smile:

++++Update:++++
I have re-read the MB manual multiple times and here is how many lanes I believe I'm consuming.

28 PCIe Gen 5 Lanes.

SLOT 6: Gen5 x16 - GPU: RTX A4000 (PCIe Gen4). I think the RTX will only use x8 because it is a Gen4 card in a Gen5 slot.

SLOT 4: Gen5 x8 - HBA: Broadcom 9500-16i 12Gb/s (Gen4). I believe this will use only x4 as it's a Gen4 card in a Gen5 slot.

SLOT 7: Gen4 x4 - Networking: 10G Mellanox ConnectX-3 CX311A SFP+ (Gen3). I believe this will only use a couple of lanes.

Here is a link to the MB Manual.
https://www.supermicro.com/en/products/motherboard/h13sae-mf

Page 10 is where I got a bunch of this information.

Am I right in these thoughts?

–
D

You’ll be ok, yes.

FYI, the number of lanes doesn't change based on the PCIe generation. A card will use a certain number of lanes (1, 4, 8, or 16) based on a combination of the physical pins on the edge connector and the processor and chipset architecture (page 15).

The PCIe lanes of SLOT 4 and SLOT 6 are shared. If you were to put your GPU in SLOT 6 and leave SLOT 4 empty, the GPU would use 16 lanes (the edge connector and the physical slot have pins for 16 lanes). When you put your GPU in SLOT 6 and the HBA in SLOT 4, the GPU will only be able to use 8 lanes, not because of PCIe generation, but because those two slots share the 16 available lanes to the CPU, so SLOT 6 will negotiate as 8 lanes and the HBA will negotiate as 8 lanes.

Also, although SLOT 4 is physically x16 long, only half the pins are electrically connected, so it is only ever an electrical x8 slot. This is why the manual describes SLOT 4 as “PCIe 5.0 x8 (IN x16)”. Your NIC has an x4 edge connector and your HBA has an x8, so both should operate at their maximum rated bandwidths in their respective slots.
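If you want to verify what actually negotiates once the build is together, the kernel exposes the live link width and speed for each PCIe device through sysfs. A minimal sketch, assuming a Linux host (the sysfs attributes used are standard and not specific to this board):

```python
# List the negotiated PCIe link width/speed for every device that reports one.
# Assumes Linux: /sys/bus/pci/devices and the current_link_* attributes are standard sysfs.
import os

PCI_ROOT = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI_ROOT)):
    width_file = os.path.join(PCI_ROOT, dev, "current_link_width")
    speed_file = os.path.join(PCI_ROOT, dev, "current_link_speed")
    try:
        with open(width_file) as f:
            width = f.read().strip()
        with open(speed_file) as f:
            speed = f.read().strip()
    except OSError:
        continue  # devices without an active link don't expose these
    print(f"{dev}: x{width} @ {speed}")
```

With the GPU in SLOT 6 and the HBA in SLOT 4 you'd expect the A4000 and the 9500-16i to each report x8, and the ConnectX-3 to report x4.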

Also, just FYI, an LSI 95xx-series HBA is overkill for spinning rust; you could use a 93xx-series card for probably 1/3 the cost and see no performance difference. You may have other reasons for selecting it (already purchased, on sale, a future move to an NVMe array, etc.); just saying that for your build as-is, it's an extra $200 or so and extra heat in the case with no performance benefit.
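To put rough numbers on that, here's a back-of-the-envelope sketch. The per-drive figure is an assumed sequential rate for 7200 RPM drives of this class, not a measured value:

```python
# Back-of-the-envelope: aggregate HDD throughput vs. HBA host-link bandwidth.
drives = 15
per_drive_mb_s = 270          # assumed sequential throughput per 7200 RPM HDD

array_gb_s = drives * per_drive_mb_s / 1000
print(f"Aggregate array throughput: ~{array_gb_s:.1f} GB/s")   # ~4.1 GB/s

# Approximate usable host-link bandwidth after protocol overhead:
pcie3_x8_gb_s = 7.9    # 9300-16i is PCIe 3.0 x8
pcie4_x8_gb_s = 15.8   # 9500-16i is PCIe 4.0 x8
print(f"PCIe 3.0 x8 (9300-16i): ~{pcie3_x8_gb_s} GB/s")
print(f"PCIe 4.0 x8 (9500-16i): ~{pcie4_x8_gb_s} GB/s")
```

Either card has comfortable headroom over ~4 GB/s of spinning disks; the Gen4 host link only starts to matter if you later hang fast SAS/NVMe SSDs off the HBA.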


Hey Mate,

Thank you for this detailed response. It was a great help.

Ordered. Can't wait :slight_smile:
