I’m building an HL15 system with an LSI 9305 supporting 15 16TB SATA HDDs on the backplane. I want to add 4 more identical 16TB SATA HDDs internally, attached through the motherboard SATA ports. All 19 drives would then be managed as one large ZFS RAIDZ-2 pool.
Question: will HDD performance be significantly impacted with this setup, or am I much better off running only the 15 HDDs off the HBA controller? I’d appreciate any quick thoughts. Thanks much / ngā mihi from New Zealand.
Probably not, assuming the HBA is in the x16 slot. Check the block diagram in your motherboard manual. The 15 drives will connect to the CPU through the PCIe lanes of the slot; the four additional HDDs will connect to the CPU via the Platform Controller Hub (PCH), which consolidates communication with the peripherals that aren’t PCIe/NVMe over a separate set of internal PCIe lanes between the CPU and PCH. So the PCH handles the SATA ports, USB ports, onboard NIC, etc., and may also have some PCIe slots connected to it.

Any contention wouldn’t be between the HBA and the SATA ports, but between the SATA ports and heavy network or USB activity. If the HBA were in a PCIe slot connected to the PCH, then yes, contention would be possible. The CPU/PCH link is typically equivalent to an x4 or x8 PCIe 3 or PCIe 4 connection in bandwidth; the exact figure depends on your CPU, generation, etc., but it will all be shown in the block diagram.
You can think of the throughput of a mechanical HDD as roughly 1/4 of one PCIe 3 lane, or 1/8 of one PCIe 4 lane.
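To put rough numbers on that rule of thumb, here is a back-of-the-envelope sketch. The figures below (a ~985 MB/s PCIe 3 lane, ~250 MB/s sustained per HDD, a DMI-style x4 PCIe 3 CPU/PCH link) are illustrative assumptions; your board's block diagram is the authority.

```python
# Back-of-the-envelope: can 4 HDDs on the PCH saturate the CPU-PCH link?
# All figures are assumptions for illustration, not measurements.
PCIE3_LANE_MBPS = 985   # roughly 1 GB/s per PCIe 3.0 lane
HDD_SEQ_MBPS = 250      # ~1/4 of a PCIe 3 lane, per the rule of thumb above
DMI_LANES = 4           # assume a PCIe 3.0 x4-equivalent CPU/PCH link

pch_link_mbps = PCIE3_LANE_MBPS * DMI_LANES
four_hdds_mbps = 4 * HDD_SEQ_MBPS

print(f"PCH uplink : ~{pch_link_mbps} MB/s")
print(f"4 HDDs     : ~{four_hdds_mbps} MB/s "
      f"({100 * four_hdds_mbps / pch_link_mbps:.0f}% of the link)")
```

Even with all four PCH-attached drives streaming sequentially, they'd only occupy about a quarter of an x4 PCIe 3 link, leaving headroom for the NIC and USB.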
Please change your username to avoid confusion with official 45HL staff.
Thank you for a knowledgeable and well-supported answer. As you anticipated, the block diagram shows the x16 PCIe slot (holding the LSI 9305) connects through the CPU and the 4x SATA ports through the PCH, so they shouldn’t contend. I’m now confident to proceed. Cheers.
I will add that having that many drives in a single RAIDZ2 vdev will decrease performance. In our testing, performance drops off at around 12-13 drives per vdev. I believe you would get better performance from something like two 9-wide vdevs in the same pool with one active hot spare.
You will get three fewer drives’ worth of space, however!
Just some more food for thought!
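For anyone weighing that trade-off, here is a quick sketch of the raw capacity difference between the two layouts (ignoring ZFS padding/metadata overhead, which reduces both):

```python
# Raw usable capacity: one 19-wide RAIDZ2 vs two 9-wide RAIDZ2 + 1 hot spare.
DRIVE_TB = 16

def raidz2_usable(width, vdevs=1):
    """RAIDZ2 loses 2 drives per vdev to parity (padding overhead ignored)."""
    return (width - 2) * vdevs * DRIVE_TB

one_big_vdev = raidz2_usable(19)            # all 19 drives in one vdev
two_vdevs = raidz2_usable(9, vdevs=2)       # 18 drives in vdevs + 1 spare

print(f"1 x 19-wide RAIDZ2        : {one_big_vdev} TB usable")
print(f"2 x 9-wide RAIDZ2 + spare : {two_vdevs} TB usable")
print(f"Difference                : {(one_big_vdev - two_vdevs) // DRIVE_TB} drives")
```

That's where the "three fewer drives" comes from: two extra parity drives plus the hot spare, versus a single vdev's two parity drives.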
OP did say “one large ZFS RAIDZ-2 pool”, so there’s a fair chance the plan is already for multiple vdevs. If not, and a single vdev is the plan, then besides the online performance you mention, OP should also consider the stress the system goes through to rebuild a degraded pool: it will need to read and process the entire used contents of the other 18 drives to reconstruct the failed drive. Given that RAIDZ2 indicates an aversion to data loss, that seems like a risky and time-consuming operation. Breaking the pool into two or three vdevs also limits the impact of a drive failure.
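To give a feel for the scale, here is a hypothetical resilver estimate for a single 19-wide vdev. The fill level and sustained rate are made-up assumptions; real resilver times vary with fragmentation and pool load.

```python
# Rough resilver estimate for one 19-wide RAIDZ2 vdev (assumed figures only).
DRIVE_TB = 16
WIDTH = 19
USED_FRACTION = 0.7     # assume the pool is 70% full
RESILVER_MBPS = 150     # assumed sustained write rate to the replacement drive

# Every surviving drive's used space must be read to rebuild the failed one.
data_read_tb = (WIDTH - 1) * DRIVE_TB * USED_FRACTION

# Best case, the rebuild is bounded by writing the replacement's used space.
rebuild_tb = DRIVE_TB * USED_FRACTION
hours = rebuild_tb * 1e6 / RESILVER_MBPS / 3600

print(f"Data read from surviving drives: ~{data_read_tb:.0f} TB")
print(f"Best-case resilver time:         ~{hours:.0f} hours")
```

Roughly 200 TB of reads and the better part of a day of rebuild, during which the vdev can only tolerate one more failure. Smaller vdevs shrink both numbers.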