Slot 6 with bifurcation for a 4x NVMe card

Just as documentation, and to share what I’ve done with my new HL15
(full build with the X11SPH-NCTPF motherboard).

I used CPU SLOT6 for a 4x NVMe card. Here is how.

CPU SLOT6 is a PCI-Express 3.0 x16 slot.
With bifurcation, this slot can be used as x16, x8x8, or x4x4x4x4.

For my TrueNAS install, I used a 4x NVMe board with the x4x4x4x4 setting.
The setting can be found in the BIOS under:
Advanced → CPU Configuration → North Bridge → IIO Configuration → CPU Configuration → IOU2 (IIO PCIe Br3): change from Auto to x4x4x4x4
As a side effect, Slot 5 can no longer be used, as all PCIe lanes shared between Slots 5 and 6 are now consumed by Slot 6.
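
After setting this and rebooting, a quick sanity check is to count the NVMe controllers the kernel enumerates. This is a minimal Python sketch, assuming TrueNAS SCALE (Linux) and the standard sysfs layout:

```python
# List the NVMe controllers the kernel sees; with x4x4x4x4 set
# correctly, all four drives on the carrier should show up here.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    with open(os.path.join(ctrl, "model")) as f:
        print(os.path.basename(ctrl), "->", f.read().strip())
```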

  • Card used: Delock 89017, PCI Express x16 Card to 4 x internal NVMe M.2 Key M - Bifurcation
  • Drives used: Intel Optane SSD P1600X Series, 118 GB, M.2 PCIe

I configured a 3-way mirror as a special vdev in TrueNAS for metadata and currently have the fourth position unused.
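
For anyone scripting this instead of using the UI, the shape of the command is roughly the following (a hedged sketch; the pool name "tank" and the /dev/nvme* paths are placeholders, not my actual layout):

```python
# Sketch: add a 3-way Optane mirror as a metadata special vdev.
# Pool name and device paths are placeholders for illustration.
import subprocess

subprocess.run(
    ["zpool", "add", "tank", "special", "mirror",
     "/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"],
    check=True,  # raise CalledProcessError on a non-zero exit status
)
```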

Still contemplating using that fourth position as a ZIL/SLOG, or, with two more Optanes, as a 3-way mirror for the new Fast Dedup.

I guess now I’ve got to start load testing to see if the card holds up without errors.
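
Something along these lines is what I have in mind for that, assuming fio is available; the target device and run parameters are placeholders, and pointing this at a device destroys its contents:

```python
# Sketch: sustained sequential writes against one drive via fio,
# then check dmesg and SMART logs for PCIe or media errors.
# WARNING: writing to the raw device is destructive; placeholder path.
import subprocess

subprocess.run(
    ["fio", "--name=nvme-burnin", "--filename=/dev/nvme3n1",
     "--rw=write", "--bs=1M", "--iodepth=32", "--ioengine=libaio",
     "--direct=1", "--runtime=600", "--time_based"],
    check=True,
)
```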

Regards
Henk



If you are using a Supermicro motherboard (the full-build model) and the card sold by 45HomeLab, there should not be a need to set the bifurcation manually, as it will be auto-detected.

I was able to confirm that in an earlier post.

Hutch noted the same thing.

I have the same motherboard as you, with the stock CPU (Bronze 3204). As mine was an early build, I am assuming the same is true with the other CPU options now offered.

Auto-detection only gave me the first NVMe drive; that is why I manually set it to x4x4x4x4.

It might be different with cards other than my Delock 89017.

Regards
Henk

So you did not get the Supermicro model that was sold by 45HomeLab?

Interesting

Thank you for the clarification.

Yeah, I bought the full build with the X11SPH-NCTPF motherboard.
It could have to do with the IPMI upgrade I performed beforehand; we might not be running the same version. Anyhow, with the manual setting all is fine.

Possibly, but I don’t see that the IPMI update had a pressing fix that was needed. If your server is not exposed to the internet and it is just for your homelab, I am sure it is fine.

The four PCIe slots on this motherboard show the defaults as I documented in this post. These are Gen3 PCIe slots.

This is a good post to let others know about this specific NVMe PCIe card.

My response assumed that this was the card offered by 45HomeLab, the Supermicro model.

I am just getting started learning here and may be way off base, but I feel like @pcHome is pointing at devices that let 2 NVMe drives use a PCIe x8 slot with split bifurcation, while @hjboven is trying to use a 4x NVMe card on a PCIe x16 slot split into x4x4x4x4. The “setting up bifurcation.pdf” document in the post @pcHome referenced seems to show how to connect an AOC-SLG3-2M2, and it shows picking Auto, but that seems to be not the same thing @hjboven is trying to do.

I found this because I am also trying to figure out whether the standard-build motherboard (X11SPH-NCTPF) supports this x4x4x4x4 split bifurcation, in which case I can buy and use this
$36 card (KONYEAD 3001K, which clearly states it will only work on motherboards supporting this split bifurcation)
https://www.amazon.com/gp/product/B0CMPZQS81/ref=ox_sc_act_image_2?smid=A37RLX27ZTP84G&th=1
instead of this $180 version (KONYEAD 3003K, which states it will work anywhere)
https://www.amazon.com/gp/product/B0CBRCFY69/ref=ox_sc_act_image_2?smid=A37RLX27ZTP84G&th=1

I think I understand, as @Spacekop pointed out, that all the extra processing, fans, and heatsinks on that version are what it takes to do the splitting on the card itself, and that it ends up about 2x slower in some of the posted benchmarks.

Does anyone have first-hand experience confirming that the standard-build motherboard (X11SPH-NCTPF) can handle this x16 -> x4x4x4x4 split as needed for a 4x NVMe “direct” bifurcation?

Does anyone have experience with this particular 4x NVMe carrier, which is a lot less expensive than the Delock 89017 mentioned by @hjboven (which seems to have a pretty long delivery time in the places I have found it so far)?

If not, are there any alternative 4x NVMe carriers that have worked in split bifurcation or another mode?

I don’t have any personal experience with KONYEAD and have not found them sold elsewhere, which makes me a little nervous.

Yes, the bifurcated card will work. It will effectively disable Slot 5, though.

I posted on a couple of threads where people had problems with bifurcation. I am using the Supermicro 4-slot NVMe card in my build.

Some PCIe cards work with the motherboard to have this set automatically. There is also a way to manually set the slot to the bifurcation you need.

The motherboard manual is available online (on Supermicro’s site).


[Originally posted as a reply to @DigitalGarden by mistake] Thanks for the confirmation. Disabling Slot 5 “effectively” is due to the heatsink width mechanically interfering, right?
As this box is only for storage in my application, this is not a big deal, but I suppose in other applications it might make more sense to use two 2-slot carriers (like the 45Drives configurator offers).

I see @pcHome pointed out that they use the Supermicro 4-slot NVMe carrier in their build. Does it also have a width that mechanically blocks Slot 5?
Thanks also for your feedback, @pcHome. I did see the x4x4x4x4 bifurcation in the manual; I missed it on the first pass because I searched for “bifurcation” instead of “x4x4x4x4” (they spell it “bifuraction”).


Within my HL15 build, I use Slot 3. To answer your question about Slot 5: the card could be used in that slot as well; nothing is blocked.

Hope the info helps you.

See the linked discussion.

Thanks for the link to that discussion, @DigitalGarden. I think it is all clear to me now that this is a shared resource. If I understand correctly, you can get two x8s to work in Slots 5 and 6, but if you put in an x4x4x4x4 card with an x16 connector, like the one I propose using, which gives an x4 link to each of the 4 NVMe drives, there is nothing left for the x8 in Slot 5 to “share”/use.

This also reconciles with @pcHome, who is using a Supermicro 4x NVMe carrier with an x8 connector (instead of an x16 connector), which, as they state, will connect and work fine in Slot 3, Slot 4, or Slot 5.

Thanks both for the help.


Correct. This probably doesn’t affect you, nor is it an issue for most people, since they would be putting a 2- or 3-slot GPU in Slot 6 and physically covering Slot 5. I think TechnoTim was using a 1-slot GPU, though, so he thought he could run that at x8 and then bifurcate Slot 5. You can’t both share and bifurcate the two slots.

I only mentioned it since your NVMe card would leave Slot 5 physically available and I don’t know what else you plan to try to put in your HL15.

I am glad the information in this topic is helping you make progress.

Good luck. I would love to hear about the final results of your configuration.

I am wondering if anyone who has built out 4 NVMe drives (x4x4x4x4) on PCIe 3.0 x16 (or x8) has experience with what CPU/RAM is adequate. My plan is to add dual 25 Gb/s links along with these 4 NVMe drives (Gen3 x4, since I assume paying for Gen4 is money spent on gains I won’t get with this motherboard), which I am planning as fast storage that later dribbles out to disk. I was thinking I should bump up to the 4210, but am wondering if that is necessary (or enough) to avoid having the CPU be the bottleneck (non-expert here in any of this). I am also wondering how RAM affects that. Obviously the 25 Gb/s fabric is expensive. If my math is right, for off-motherboard data streams that gives me about 3 GB/s per link, and from what I read Gen3 PCIe is about 3.5 GB/s for x4 (I have also seen 8 GB/s quoted for x8). I guess I am asking whether there are CPU and memory demands the “base” HL15 would bottleneck on if two links of the switch were throwing 25 Gb/s each at the HL15 motherboard while 4 NVMe drives stored that streamed data (I probably have terminology errors).

OR
Should I really consider building my own machine to get Gen4 PCIe and newer compute to better utilize NVMe? My thinking was that the stream from the switch is capped at 2x 25 Gb/s coming into PCIe with an NVIDIA Mellanox ConnectX-6 plugged into an x8 slot (so Gen3 x8, if it is 8 GB/s, is not the bottleneck for pushing data to the CPU). Therefore, in this setup, if a quad carrier can bifurcate to x4x4x4x4, the network will remain the bottleneck whether I am on Gen3 or Gen4 PCIe, right?

Am I correct that the situation could be different if the data stream were generated on the motherboard? In the case of the CPU generating the stream (rather than passing it through), I guess I can see that a different motherboard with Gen4 PCIe would be warranted. I am not very concerned about the rate of transfer from the NVMe drives to the spinning disks, as my writes to NVMe will be bursty, coming from number crunchers on the switch.

It seems I digressed from my question, but maybe that is relevant to reshaping it:
What CPU/RAM is sufficient for a 2x 25 Gb/s (6.25 GB/s) pass-through write to 4 NVMe drives in a carrier in PCIe 3.0 x16 (bifurcated to x4x4x4x4)?
and
Am I correct that there is no point paying for Gen4 NVMe drives that are going to run at Gen3 speeds in this configuration (and would still saturate the 6.25 GB/s NIC limit)?
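
For what it is worth, here is the back-of-envelope math behind these questions as a small Python sketch (assuming roughly 0.985 GB/s of usable bandwidth per Gen3 lane after 128b/130b encoding; real protocol overhead shaves these numbers further):

```python
# Rough throughput ceilings for the proposed setup.
gen3_lane = 0.985               # ~GB/s per PCIe Gen3 lane, post-encoding

nic = 2 * 25 / 8                # dual 25 Gb/s links -> 6.25 GB/s
per_drive = 4 * gen3_lane       # one NVMe drive at Gen3 x4 -> ~3.9 GB/s
carrier = 16 * gen3_lane        # bifurcated x16 carrier -> ~15.8 GB/s

print(f"NIC ceiling:          {nic:.2f} GB/s")
print(f"Per drive (Gen3 x4):  {per_drive:.2f} GB/s")
print(f"Carrier (Gen3 x16):   {carrier:.2f} GB/s")
```

By these numbers, the dual 25 Gb/s NIC, not the Gen3 slot, is the aggregate ceiling, which is consistent with my guess that Gen4 drives would be wasted here.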
