[SOLVED] Slot type needed for an LSI 9300-16i?

I have a B650 (socket AM5) motherboard with the following PCIe specs from the manual;

PCIE1 (PCIe 4.0 x16 slot) is used for PCIe x16 lane width graphics cards.
PCIE2 (PCIe 4.0 x1 slot) is used for PCIe x1 lane width cards.
PCIE3 (PCIe 4.0 x16 slot) is used for PCIe x2 lane width graphics cards.
PCIE4 (PCIe 4.0 x1 slot) is used for PCIe x1 lane width cards.

and an LSI 9300-16i HBA. When I originally set this up in the HL15, I used PCIE1 for the HBA as I had no plans for a GPU and a lot of ignorance about all things PCIe. The system is running TrueNAS and working fine.

I’d like to move the HBA to PCIE3 so I can install something else in PCIE1. But when I do this, the ZFS pool doesn’t come up in TrueNAS (degraded, with only four disks showing, though I’m not sure since the UI’s disk info is slow to update). I moved the card back to PCIE1 and all is OK again. I repeated that two or three times, so it’s not a loose-cable problem.

When I put the HBA in PCIE3, the motherboard BIOS does seem to show the HBA and that it has 12 drives attached (which is the number of bays I have populated), but I’m not an expert.

This is without putting anything else in PCIE1. First, I’m just trying to move the HBA.

Should PCIE3 support the LSI 9300-16i? Do I need to reset something in the BIOS or in TrueNAS? Will it only support two of the four SFF-8643 connectors in that slot?

One thing that confuses me a bit is that these HBA cards (from whatever PCIe generation) seem to be x8, but motherboards seem to have moved away from PCIe x8 slots, instead dedicating all that PCIe bandwidth to M.2, USB, SATA, and a single x16 slot.

Is there no motherboard that will give me two x8 slots? It was never something I did, but I remember people putting three or four GPUs in a system.

X670 motherboards seem to provide more total PCIe lanes, but the slot configurations look about the same, so I can’t really tell whether getting one of those would provide better support for the LSI 9300-16i in a non-x16 slot.

Hi @DigitalGarden,
What motherboard are you using?
What OS are you running? TrueNAS Scale? Ubuntu? Something else?

Are you running an M.2 drive (or more)? I seem to remember some motherboards sharing PCIE3 lanes with the M.2 slots, and when they did, the slot would run at x4.

It’s an Asrock B650 PG Lightning. OS is TrueNAS Scale.

I have two of the m.2 slots populated. If I recall, M2_1 has TrueNAS and M2_2 has a drive I’m passing through to a VM as a boot drive. I suppose I could take that one out for testing. I can rejigger that VM to not use the SSD if I need to.

I didn’t really see anything about “if you put something in slot X it will disable or reduce the performance of slot Y” in the manual.

So I did a quick lookup on that motherboard. It looks like PCIE3 is a physical x16 slot but only x2 (two lanes) electrically (page 31 in the manual).


I don’t think the LSI 9300-16i will work anywhere except PCIE1, since it’s an x8 card. If I get a chance, I’ll look into the X670 boards tomorrow.

Thanks. I do, luckily, understand the difference between physical and electrical slot “size”, so I wasn’t expecting PCIE3 to be electrically x16. But I thought PCIe cards were supposed to negotiate down if fewer lanes were available electrically.

With the TrueNAS boot SSD in M2_2, no SSD in M2_3 and the HBA in PCIE3 I get this on boot;

The HBA and SATA drives on the backplane do seem visible in the BIOS though;

@DigitalGarden ,
does your zpool contain all 12 drives? It looks like the BIOS is showing 8 (maybe 4 drives per lane?)

Can you show the result of “zpool status -v”

Any chance the BIOS shows an additional SAS3008 controller?

…might be a rabbit hole, but I found this post in an UnRaid forum:

basically it claimed the 9300-16i shows up as two 8i’s (one for each SAS3008 controller chip)
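In case it helps once you can get to a shell, here’s a rough sketch of tallying drive states from `zpool status` output instead of eyeballing it. The pool and device names below are invented stand-ins; on the real box you’d pipe the actual `zpool status -v` output instead of the sample string:

```shell
# Invented excerpt of `zpool status -v` output for a degraded pool
# (names and layout are examples, not your system):
status='  pool: tank
 state: DEGRADED
config:
        tank        DEGRADED
          raidz2-0  DEGRADED
            sda     ONLINE
            sdb     ONLINE
            sdc     UNAVAIL
'
# Count how many device lines report each state:
online=$(printf '%s\n' "$status" | grep -c 'ONLINE')
unavail=$(printf '%s\n' "$status" | grep -c 'UNAVAIL')
echo "online=$online unavail=$unavail"
```

On a 12-drive RAIDZ2 pool, anything less than 12 in the ONLINE count tells you drives are missing before you start digging through the UI.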

Yes, the BIOS shows the other SAS3008 controller with the other four drives. I was just trying to give an idea of what I was seeing in the BIOS, not all the screens.

The pool is 12 drives in one vdev, RAID Z2.

Are you saying do a zpool status at that ‘(initramfs)’ prompt? I’ll have to put the system back in that configuration to do that.

I’ve ordered an Asus ProArt B650 Creator board from B&H that is supposed to support an x8/x8 setup, so hopefully that will work. There were a handful of others, X670E boards, but they were all over $500. Architecturally I don’t need a GPU and 15 drives in the same box, but it is the cheapest route given the other hardware I have. From scratch, I could have put these two workloads on separate machines.
Perhaps I should have gone with the “full build” motherboard, but I don’t really like that 3204 CPU and I think development on that socket has stopped.

I know the number of PCIe lanes can be an issue, but I didn’t realize “one GPU and an HBA” would be a hard PCIe configuration for a consumer motherboard.

Can you clarify what your boot device is? Is it an SSD? Originally it looked like you had it on M2_1, but now it sounds like it’s on M2_2.

The HBA card is a different pool with the 12 drives, correct?

You can’t run “zpool status” yet. At the initramfs prompt, you are already in recovery mode (possibly a bad block or sector). If the SSD is your boot device, run fsck against your partition (e.g. “fsck /dev/sda”). Once it’s done, type reboot.
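To make that concrete, here’s a minimal sketch of the sequence you’d type at the (initramfs) prompt. The device path is a placeholder from my example above; an NVMe boot drive usually shows up as /dev/nvme0n1 with partitions like /dev/nvme0n1p2, so substitute whatever your boot partition really is:

```shell
# Hedged sketch of the recovery sequence at the (initramfs) prompt.
# /dev/sda is a placeholder; use your real boot partition.
boot_part=/dev/sda
fsck_cmd="fsck -y $boot_part"        # -y answers yes to all repair prompts
printf '%s\n' "$fsck_cmd" "reboot"   # the two commands you'd type, in order
```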

I misspoke from memory originally. The boot drive is a 250GB Samsung 960 Evo NVMe SSD in M2_2, but M2_1 and M2_2 both use CPU PCIe lanes.

I’m not sure I know what this is asking. All four SFF-8643 connectors from the HBA go to the backplane, and the only things on the backplane are 12 spinning 3.5-inch 6 Gb/s SATA 10TB drives, with three bays empty.

I found this;


which is essentially my question from the Intel side. My guess is the 9300-16i will negotiate down to x4, but maybe not to x2; there doesn’t seem to be documentation on that, though. Unfortunately, PCIE3 would ask it to run at x2, and even if it did, the performance would not be good.
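For anyone checking this on their own system, here’s a sketch of comparing what width a card is capable of (LnkCap) against what it actually negotiated (LnkSta). The bus address and the two sample output lines below are made up; on a live system you’d run something like `sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'` against the HBA’s real address:

```shell
# Stand-in lines for what `lspci -vv` prints for a PCIe device
# (invented examples, not captured from my system):
lnkcap='LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported'
lnksta='LnkSta: Speed 8GT/s, Width x2 (downgraded)'
# Extract the capable and negotiated widths:
cap_width=$(printf '%s\n' "$lnkcap" | sed -n 's/.*Width \(x[0-9]*\).*/\1/p')
sta_width=$(printf '%s\n' "$lnksta" | sed -n 's/.*Width \(x[0-9]*\).*/\1/p')
echo "capable=$cap_width negotiated=$sta_width"
```

If LnkSta shows a smaller width than LnkCap, the card did negotiate down; if the device doesn’t appear in lspci at all, training failed entirely, which would match what I’m seeing in PCIE3.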

Let’s table this and I’ll see if the other motherboard I ordered gives me more flexibility.

Thanks for taking the time to help. I’ll report back in a week or two when I get the new board and have a chance to install it.


Just to close the loop on this, the other board I ordered (the Asus ProArt B650 Creator) is running both the LSI 9300-16i and a GPU in an x8/x8 configuration. I guess the 9300-16i just won’t run in a slot that’s only x2 electrically. I’ll need to pay more attention to PCIe lanes in the future; I didn’t think I was doing anything challenging by trying to use both an HBA and a GPU.