HL15 fully built, looking for U.2 solutions that fit within available PCIe slots/lanes

Hello all,

I just received my HL15 fully built; I went with maxed-out specs (26-core Xeon Gold 6230R, 512 GB DDR4). I’m going to be installing 15x 28 TB HDDs.

Planned additions:
-Mellanox ConnectX-5 100GbE dual-port NIC (for a direct link to my primary workstation)
-NVIDIA RTX A4500 GPU (for running AI LLMs and Plex transcoding)

I am trying to figure out what my options are for adding SSD storage. I’d like a couple of drives in a mirror for ZFS metadata (special allocation devices), an L2ARC cache drive, and, if possible, a general-purpose SSD pool for hosting VMs. I was initially planning on using NVMe M.2 drives in a carrier card, but I’m worried about running into write-endurance issues, so I think enterprise U.2 drives would be a better option.
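
To make the SSD plan concrete, here’s a rough Python sketch of the roles I have in mind and the ZFS vdev types I think they map to (the drive counts are placeholders, not final choices):

```python
# Rough plan for the SSD side of the build (placeholders / assumptions, not a
# final layout). Each entry maps a role to the ZFS vdev type I think it
# corresponds to and the number of U.2 drives I'd dedicate to it.
planned_ssds = {
    "metadata": {"zfs_vdev": "special (mirror)",   "drives": 2},  # special allocation class
    "l2arc":    {"zfs_vdev": "cache",              "drives": 1},  # read cache, no redundancy needed
    "vm_pool":  {"zfs_vdev": "separate SSD pool",  "drives": 2},  # assumed mirror for VM storage
}

total_u2 = sum(v["drives"] for v in planned_ssds.values())
print(f"Planned U.2 drives: {total_u2}")
for role, v in planned_ssds.items():
    print(f"  {role:10s} -> {v['zfs_vdev']} ({v['drives']} drive(s))")
```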

Looking at the spec sheet for the motherboard, the Supermicro X11SPH-nCTF:
I have four physical PCIe slots to work with. If slot 1 is set to run at x8, then slot 2 can run at x8; if slot 1 is running at x16, then slot 2 is disabled. Slot 3 is always x8 and slot 4 is always x4. I don’t think I need the full x16 for either the GPU or the NIC given my use case, so I should have an effective total of 3x PCIe 3.0 x8 slots plus an x4 slot.

Looking at the spec sheet for the CPU:
https://www.intel.com/content/www/us/en/products/sku/199346/intel-xeon-gold-6230r-processor-35-75m-cache-2-10-ghz/specifications.html
I have a maximum of 48 PCIe lanes to work with.

If my math is correct, with the existing components I have…
4 lanes in use by on-board NVMe boot drive
8 lanes for GPU in slot 1
8 lanes for NIC in slot 2
48 - (4+8+8) = 28 lanes remaining

That leaves me with 2x OCuLink ports, a PCIe x8 slot, and a PCIe x4 slot available.

Each U.2 drive will take 4 lanes.

I’d like to do:
2x U.2 connected to the OCuLink ports
2x U.2 on a PCIe carrier card in the x8 slot
1x U.2 on a PCIe carrier card in the x4 slot

This will use up all of my physical PCIe slots and 40 of my 48 PCIe lanes.

Proposed lane usage:
4 lanes - boot NVMe
8 lanes - GPU
8 lanes - NIC
8 lanes - both Oculink ports
8 lanes - PCIe x8 U.2 carrier card
4 lanes - PCIe x4 U.2 carrier card
Total lanes used: 40 out of 48 (quick tally below)
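
Here’s the little Python script I used to double-check that math; the per-device lane counts are my own assumptions from the spec sheets, not anything verified on the board:

```python
# Sanity check of the CPU PCIe lane budget (assumed allocations from my plan).
cpu_lanes = 48

allocations = {
    "on-board NVMe boot drive": 4,
    "GPU (slot 1 @ x8)":        8,
    "NIC (slot 2 @ x8)":        8,
    "2x OCuLink U.2 (x4 each)": 8,
    "x8 carrier card (2x U.2)": 8,
    "x4 carrier card (1x U.2)": 4,
}

used = sum(allocations.values())
for device, lanes in allocations.items():
    print(f"{device:28s} {lanes:2d} lanes")
print(f"{'Total used':28s} {used:2d} of {cpu_lanes} (remaining: {cpu_lanes - used})")
```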

Did I calculate this correctly? Will this work? Am I missing anything? Would there be a better way of accomplishing this than what I’ve proposed? I’ve been discussing this plan with ChatGPT but would like a human to sanity check this before I go out and make some big purchases.

Thanks!

You should review the system block diagram on page 18 of the X11SPH manual. There may be PCIe lanes in use that you are not accounting for, such as the on-board LSI SAS3008 controller; the PCH also both consumes and provides some PCIe lanes (not all of the slots connect directly to the CPU). Also be aware of which slots you need to bifurcate and how, since your ability to bifurcate certain slots may be limited. For example, based on posts here, if you run Slots 5 and 6 at x8 each, you can’t then bifurcate either of them to x4x4, so the PCIe x8 U.2 carrier card will likely need to go in Slot 3.

Anyway, I think your plan is fine, just saying the analysis can be a bit more complicated than “The CPU has 48 lanes.”
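
If it helps, here’s a rough Python sketch of how you might redo the tally once you’ve looked at the diagram. The CPU-vs-PCH assignments and the PCH lane count below are placeholders for illustration, not the actual X11SPH wiring; fill them in from page 18:

```python
# Placeholder lane accounting split by root: CPU-attached vs PCH-attached.
# Which device hangs off which root must be taken from the board's block
# diagram -- everything below is an illustrative guess, not the real wiring.
budget = {
    "CPU": {
        "total": 48,
        "devices": {
            "GPU slot":           8,
            "NIC slot":           8,
            "x8 U.2 carrier":     8,
            "on-board SAS3008":   8,   # guess: often CPU-attached at x8
            "OCuLink NVMe ports": 8,   # guess
        },
    },
    "PCH": {
        "total": 20,                   # guess at the PCH's own PCIe lane budget
        "devices": {
            "x4 slot":            4,   # guess: may be PCH-attached
            "M.2 boot NVMe":      4,   # guess
        },
    },
}

for root, info in budget.items():
    used = sum(info["devices"].values())
    print(f"{root}: {used} of {info['total']} lanes allocated")
    for dev, lanes in info["devices"].items():
        print(f"   {dev:22s} x{lanes}")
```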

Agree with @DigitalGarden. As an alternative, you could use HBA cards.

For example, an LSI 9600-24i would give you 3x SFF-8654 (x8) connectors from a single PCIe Gen 4.0 x8 slot.
U.2/U.3 uses an SFF-8639 connector.
You could get SFF-8654 (HBA side) to SFF-8639 (U.2/U.3 side) adapter cables to attach drives directly to that HBA in a specific PCIe slot, ideally a CPU-attached slot if one is available.

The 8x SFF-8654 to SFF-8639 cables come in a variety of lengths and configurations. I’ve seen cables that use a single PCIe lane per drive, giving you up to 24 drives on that HBA, as well as versions that give each drive 2 lanes or 4 lanes (full intended speed).

As I understand it, other HBAs like the 9500 and maybe even the 9400 series have “tri-mode” support and are compatible with U.2/U.3, but I am less familiar with those. A while back it was mentioned that one of them may support U.2/U.3 on only some of its ports, for instance.

Since you are asking about adding 5+ U.2 drives, I am assuming the price difference between these cards is negligible and am pointing you at the “best” option.

As a general note, you didn’t describe your usage apart from possible LLM work. That may matter because my proposed solution routes 5x U.2 drives (20 PCIe Gen ? lanes) through an x8 Gen 4 slot. The HBA approach DOES limit the peak performance of the U.2 drives, but in practice that would only show up in some specific scenarios and/or for a few moments in the life of the system. In other words, it may or may not matter, depending on the workload you plan to run.
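
To put rough numbers on that bottleneck, here’s a quick back-of-the-envelope Python calc. The per-lane throughput figures are approximate, and I’m assuming Gen 4 x4 drives behind the HBA’s x8 uplink; swap in whatever drives you actually buy:

```python
# Rough check of how much an x8 HBA uplink could throttle 5 U.2 drives.
# Per-lane figures are approximate usable bandwidth in GB/s, and the drive
# generation is an assumption -- adjust for the actual drives.
PER_LANE_GBPS = {"gen3": 0.985, "gen4": 1.969}

def oversubscription(num_drives, lanes_per_drive, drive_gen, slot_lanes, slot_gen):
    drive_total = num_drives * lanes_per_drive * PER_LANE_GBPS[drive_gen]
    slot_total = slot_lanes * PER_LANE_GBPS[slot_gen]
    return drive_total, slot_total, drive_total / slot_total

# 5x Gen 4 x4 U.2 drives behind an x8 Gen 4 uplink
d, s, r = oversubscription(5, 4, "gen4", 8, "gen4")
print(f"Gen 4 slot: drives ~{d:.1f} GB/s aggregate vs slot ~{s:.1f} GB/s ({r:.1f}:1)")

# Same drives if the slot only runs at Gen 3
d, s, r = oversubscription(5, 4, "gen4", 8, "gen3")
print(f"Gen 3 slot: drives ~{d:.1f} GB/s aggregate vs slot ~{s:.1f} GB/s ({r:.1f}:1)")
```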

For this post I focused on the question asked: how to add more drives in a PCIe-constrained system.

You are putting together a BEAST of a system, and that is fine, even if it is triple overkill, as long as you are aware that some of these components are not needed and may never be used to their full potential.

Please be sure to keep us updated during the build and deployment.

I think all the PCIe on the X11SPH is gen 3.