PCI-E slot configuration on the X11SPH-nCTPF / X11SPH-nCTF motherboard

Currently the prebuilt HL15 ships with either the X11SPH-nCTPF or the X11SPH-nCTF, depending on whether you choose SFP+ or RJ45 as the connector for the 10GbE ports. Both are Supermicro boards:

X11SPH-nCTPF website: X11SPH-nCTPF | Motherboards | Products | Supermicro

X11SPH-nCTF website: X11SPH-nCTF | Motherboards | Products | Supermicro

Both of them have a common manual that is downloadable from the “Resources” section of either webpage or via this link: https://www.supermicro.com/manuals/motherboard/C620/MNL-1949.pdf

On page 18 of the manual there is a block diagram of the system features. I am now trying to understand that block diagram and would appreciate some feedback as to whether my interpretation is correct. I got interested in this in the context of where/how to expand my system in the future with NVMe carrier cards. I currently have the Supermicro AOC-SLG3-2M2 2-Port NVMe carrier card that I ordered as an accessory with my HL15.

  • Slot 3 is connected to the CPU via port 2C with a PCI-E x8 connection, so with x4x4 bifurcation a 2-Port NVMe carrier card should work there, correct?

  • Slot 5 and slot 6 share 16 PCI-E lanes. As soon as a card is plugged into slot 5, bifurcation kicks in and the 16 lanes are split x8/x8 between the two slots. Apparently you cannot further bifurcate those slots to x4x4 in slot 5 and x4x4 in slot 6. (Inserting x16 PCIe card disables second slot - #5 by technotim) Though if slot 5 is not populated, perhaps you can run a 4-Port NVMe card in slot 6? Would a 2-Port NVMe card also work in slot 6, even though it is a waste (a 4-port Supermicro NVMe carrier card costs around or above 300 USD)? Would appreciate some feedback here.

  • Apparently slot 2 is connected to the chipset via PCI-E x4 lanes in a physical x8 slot. So I assume that you can only use a PCI-E carrier card with a single NVMe drive in that slot, correct? I see that the chipset itself is connected with 2x 8 lanes to ports 2A and DMI3, so I am wondering if putting an NVMe drive in slot 2 would take too much bandwidth from the other devices that are connected through the chipset. In theory the 12 free PCIe lanes should provide 24 GB/s of bandwidth (12 GB/s in each direction), which should be plenty for 15 SATA III drives and 2 10GbE NICs, but perhaps in practice this is different? My rough back-of-envelope math is sketched right below this list.
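For reference, here is that back-of-envelope math written out as a small script. It is only a sketch: the 12-lane figure is just my reading of the block diagram, and I am assuming ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane per direction plus the theoretical maxima of SATA III (~600 MB/s) and 10GbE (~1.25 GB/s).

```python
# Back-of-envelope: can the chipset's remaining 12 PCIe 3.0 lanes feed
# 15 SATA III drives plus two 10GbE NICs in the absolute worst case?
PCIE3_LANE = 0.985   # GB/s per lane, per direction (8 GT/s, 128b/130b encoding)
SATA3      = 0.600   # GB/s theoretical max per SATA III drive
TEN_GBE    = 1.25    # GB/s theoretical max per 10GbE port

uplink = 12 * PCIE3_LANE            # ~11.8 GB/s per direction
demand = 15 * SATA3 + 2 * TEN_GBE   # ~11.5 GB/s if everything is saturated at once

print(f"Chipset uplink (per direction): {uplink:.1f} GB/s")
print(f"Worst-case device demand:       {demand:.1f} GB/s")
```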

Would it be possible to use slot 2 for a GPU for 4K transcoding, or do I need more bandwidth for that than PCI-E 3.0 x4 provides (though GPUs with a physical x8 connector are apparently pretty rare)?

I am just trying to figure out what restrictions this motherboard imposes on future PCI-E usage. Any insights here are much appreciated.

I think you’ve got it right.

Yes, a 2-Port NVMe carrier card should work in slot 3 with x4x4 bifurcation.

Yes, you should be able to bifurcate slot 6 as x4x4x4x4 when slot 5 is not populated, and if you set it to x4x4x4x4 and only put in a 2-port NVMe card, it will use the first two x4 channels and ignore the other two.
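One quick sanity check after changing the bifurcation setting is to count how many NVMe controllers the OS actually enumerates. A minimal sketch, assuming a Linux host with pciutils (`lspci`) installed:

```python
# Count the NVMe controllers visible on the PCIe bus after changing the
# bifurcation setting; each drive on the carrier card should show up once.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
nvme = [line for line in out.splitlines() if "Non-Volatile memory controller" in line]

print(f"{len(nvme)} NVMe controller(s) found:")
for line in nvme:
    print("  " + line)
```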

Mostly, yes. Technically, there are a few manufacturers that make NVMe carrier cards that have their own switch on the card and do not rely on PCIe bifurcation. Bandwidth would be limited, but it depends on your use case.

It depends on your use case: is your workload going to hammer the SSD, the SATA drives, and the dual NICs all at the same time? Note that not all 15 drives are connected through the chipset; 7 of the backplane drives (or 8 if you shift cables around) are connected to the CPU through the SAS3008 controller. The SFF-8087 ports and the two orange SATA-DOM ports on the motherboard go through the chipset; the SFF-8643 ports on the motherboard don’t.
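To put rough numbers on that, here is the earlier back-of-envelope redone for just the chipset-attached devices. It is only a sketch, assuming the worse case of 8 backplane drives on the SFF-8087 ports, both SATA-DOMs, and the two 10GbE ports all competing for the same chipset-side lanes:

```python
# Revised worst case: only the chipset-attached devices compete for the
# 12 remaining PCIe 3.0 lanes behind the chipset.
# Assumption (worst case): 8 backplane drives via the SFF-8087 ports,
# 2 SATA-DOMs, and the 2 onboard 10GbE ports.
PCIE3_LANE = 0.985   # GB/s per lane, per direction
SATA3      = 0.600   # GB/s theoretical max per SATA III port
TEN_GBE    = 1.25    # GB/s theoretical max per 10GbE port

uplink = 12 * PCIE3_LANE                # ~11.8 GB/s per direction
demand = (8 + 2) * SATA3 + 2 * TEN_GBE  # ~8.5 GB/s with everything saturated

print(f"Available behind the chipset: {uplink:.1f} GB/s per direction")
print(f"Chipset-side worst case:      {demand:.1f} GB/s")
```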

I don’t have direct experience with this, and I don’t know if transcoding requires different bandwidth than, say, gaming, but ignoring the physical connector issue, my general understanding based on posts like the one below and other things I’ve read is that the impact is minimal, say 10%. For example, those external GPU boxes that are (or were) all the rage just run at PCIe x4. The thread also mentions something about potential power issues, but that may only apply when running a PCIe 4.0 card in a PCIe 3.0 slot.

GPU in PCIe 4.0 x4 slot


Thanks so much for your detailed response to my post.
No, I don’t expect my workload to be that heavy; it was more of a worst-case analysis.

Thanks, I totally missed that. So keeping 12 PCI-E lanes for the rest of the chipset shouldn’t be a problem at all.

Now I can use all the USB ports for USB to SATA adapters. :clown_face: :upside_down_face:

Apparently there were indeed some rumors at one time that the PCIe 4.0 spec would allow up to 300W of power through the PCIe slot (vs. 75W in PCIe 3.0):

Though that was later debunked:

and the slot power limit in PCIe 4.0 is also 75W, just as in 3.0. As before, all extra power has to come through the additional power connectors.

In the meantime I came across this Reddit thread and yet another Reddit thread, and it appears that transcoding over 4 PCIe lanes is doable.
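As a further sanity check on the bandwidth side (my own rough numbers, not from those threads): even if the GPU had to move uncompressed 4K frames across the bus, PCIe 3.0 x4 should have headroom. A back-of-envelope sketch, assuming 3840×2160 4:2:0 8-bit frames at 60 fps:

```python
# Rough check: does PCIe 3.0 x4 have enough bandwidth for 4K transcoding traffic?
# Worst-case assumption: shipping uncompressed 4K 4:2:0 8-bit frames at 60 fps.
lane_gbps    = 0.985               # PCIe 3.0, ~0.985 GB/s per lane per direction
x4_bandwidth = 4 * lane_gbps       # ~3.9 GB/s per direction

frame_bytes = 3840 * 2160 * 1.5        # 4:2:0 8-bit: 1.5 bytes per pixel
stream_gbps = frame_bytes * 60 / 1e9   # ~0.75 GB/s for a 4K60 stream

print(f"PCIe 3.0 x4:              {x4_bandwidth:.2f} GB/s per direction")
print(f"Uncompressed 4K60 stream: {stream_gbps:.2f} GB/s")
# In practice transcoding usually moves compressed bitstreams (tens of MB/s),
# so x4 looks like plenty of headroom for this purpose.
```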

So luckily power doesn’t appear to be an issue and the bandwidth appears to be sufficient. If ever needed, it might be worth trying to run a GPU with a physical x8 connector (or with an adapter down to a physical x8 connector) in slot 2 of that board.