Hardware Plans for Proxmox NAS

I just got my HL15 & I’m happy with it!
I’m just looking to expand my hardware & could use some assistance.
(Yes, my system is overkill & that’s how I like it)

Currently, I’m running bare metal TrueNAS Scale but looking to transition to Proxmox as the base OS & virtualize TrueNAS.
My current config is as follows:

My boot drive is just a 500GB NVMe drive.
For Proxmox, I want to store the VMs on the boot drive so they get NVMe speeds.
I was planning on using two SABRENT 4TB drives in a mirror.
I planned on using these for their performance & reliability.

I was also debating on putting in an Intel Arc card for transcoding to take the load off of the CPU.
These were my considerations.

My concern, with all of this in mind, would be the PCIe lanes.
Is there a motherboard out there for the 13900K that has any SAS connections?
Should I just ditch the power-hungry CPU & go with a different CPU & use the Arc card?

If there are any questions or suggestions, I’d appreciate the help.

Why? I don’t see anything in the specs as listed that exceeds the capabilities of your motherboard. You have only specified two PCIe cards, and those won’t conflict with the two NVMes you want to use.

If there is, it doesn’t change anything about the available PCIe lanes. The PCIe bandwidth is a function of the CPU architecture/socket. Putting a SAS controller on the motherboard would just set in stone the function of (probably) eight of those lanes. I find the connectivity diagrams on TechPowerUp useful:


If you needed more lanes (which I’m not sure you do based on the information provided so far), you would need a different CPU architecture; Intel Xeon, AMD Epyc, etc.

This depends on your workload. Not really sure what problem you are trying to solve. Power draw and performance are typically two opposite ends of the same slider. I would expect an i9-13900K, which I assume has QuickSync, to transcode very well and not really need an external GPU unless you are routinely pegging the CPU with some other tasks while simultaneously trying to transcode multiple streams.

The thing that stands out to me, that you did not mention, is the PCIe 2.0 x4 HBA. This seems like it would be painfully slow for a large pool, and I assume you have, or plan to have, a large pool if you bought an HL15. How many disks and what types of disks do you have for the data pool? I’d be considering an upgrade to an LSI 93xx generation HBA. You would put it in the second physical x16 slot with the Arc in the first, and bifurcate them x8/x8.

I thought that the NVMe drives took up the PCIe lanes, since they use the PCIe protocol. So that’s just my misunderstanding, I guess.

I assumed having the SAS connectors on the board could lead to higher read & write speeds, no?

The main “issue” is I just want to make sure my build / idea is reasonable for the transition to Proxmox.
If my 13900K is good enough, then there’s no real reason to add the Arc GPU.
My main goal I guess would be power efficiency.
I would like to lower the cost of my power bill but keep the power I have.
Currently, I am using QuickSync to transcode, but not really using its full potential.

Yeah, I agree, I just could never find a better card that has 4 SAS connections.
I looked up the LSI 93xx generation HBA & found one with two SAS connections.

I have ten 20TB drives in one pool & four 10TB drives in my second.

Some motherboard manufacturers include a block diagram in the user manual. I didn’t see one in the manual for your motherboard, but here is one for a Gigabyte Z790 board; all Z790s are going to look somewhat similar. This is an alternate way to view the connectivity info I attached above:

(One difference with your board is that it would show the PCIe x16 slot connected to the CPU as optionally being bifurcated x8/x8 with your second physical x16 slot.)
The point is, yes, the CPU directly supports 16 PCIe 5.0 lanes and 4 PCIe 4.0 lanes, but the latter are wired to one of the NVMe slots. That NVMe slot always has those four lanes allocated to it whether there is an SSD in the slot or not, and it doesn’t share that bandwidth with anything. ASUS could have done something like provide another x4 PCIe slot on the board and say “If you populate slot PCIE4_X4_CPU, M2A_CPU will be disabled,” but they didn’t. Most everyone wants their boot drive to have a fast, direct CPU connection.

The only mutually exclusive configuration I see in glancing at the ASUS manual is that “M.2_4 slot shares bandwidth with SATA6G_5-8. SATA6G_5-8 will be suspended once either a SATA or NVMe device is detected at M.2_4.” I.e., if you populate M.2_4, you can’t use four of the SATA ports on the motherboard.

The PCIe lanes aren’t some sort of global resource that gets allocated and freed as you plug and unplug things; the usage of the lanes is largely determined at design time, with a few exceptions like the bifurcation options or the SATA/fourth M.2 slot sharing mentioned above. Hopefully that makes sense.
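If it helps to see it as a fixed budget, here is a rough sketch of the lane counts on a 13900K/Z790 platform (platform-level numbers only, not a map of any specific board; the real wiring is in the motherboard’s block diagram):

```python
# Illustrative PCIe lane budget for an i9-13900K on a Z790 board.
# These are platform-level counts, not the routing of any specific motherboard;
# check the board's manual/block diagram for how the lanes are actually wired.
cpu_lanes = {
    "PCIe 5.0 x16 to slot 1 (or x8/x8 across slots 1 and 2 if bifurcated)": 16,
    "PCIe 4.0 x4 to the CPU-attached M.2 slot (fixed, shared with nothing)": 4,
}
chipset_lanes = {
    # The Z790 chipset hangs off the CPU over a DMI 4.0 x8 link and fans its
    # lanes out to the extra M.2 slots, SATA, USB, NICs, and small PCIe slots.
    "PCIe 4.0 from chipset": 20,
    "PCIe 3.0 from chipset": 8,
}

print("CPU lanes:", sum(cpu_lanes.values()))          # 20, carved up at design time
print("Chipset lanes:", sum(chipset_lanes.values()))  # 28, all funneled through DMI
```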

Sorry, you would look for “-16i” cards, e.g. a 9305-16i, which would have a PCIe 3.0 edge connector. There is a 9201-16i, but that is also PCIe 2.0 (though x8), so it may only be marginally better than the IO Crest card. I suspect the IO Crest card is also an expander type and multiplexes two of the SFF-8087 ports together (but I could be wrong). I mean, if it works, that’s fine. In theory, the PCIe 2.0 x4 edge connector should have about 2 GB/s of bandwidth, which should support about 15 spinning-rust drives. A 9305-16i will probably run hotter and use more power. It also uses the newer SFF-8643 connectors like on the backplane, so you would need a different cable set.
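To put rough numbers on that (the per-drive figure is an assumption; adjust for your disks):

```python
# Back-of-the-envelope HBA link check: how many spinning drives can stream
# sequentially before the HBA's PCIe edge connector becomes the bottleneck.
# All figures are approximate.
PCIE2_LANE_MBPS = 500   # PCIe 2.0: 5 GT/s/lane, 8b/10b encoding -> ~500 MB/s usable
PCIE3_LANE_MBPS = 985   # PCIe 3.0: 8 GT/s/lane, 128b/130b encoding -> ~985 MB/s usable
HDD_SEQ_MBPS = 130      # assumed sustained sequential rate for a large 7200 rpm HDD

def drives_before_saturation(lanes: int, lane_mbps: int) -> float:
    return lanes * lane_mbps / HDD_SEQ_MBPS

print(f"PCIe 2.0 x4 (IO Crest): ~{drives_before_saturation(4, PCIE2_LANE_MBPS):.0f} drives")  # ~15
print(f"PCIe 3.0 x8 (9305-16i): ~{drives_before_saturation(8, PCIE3_LANE_MBPS):.0f} drives")  # ~61
```

With 14 drives across your two pools, the PCIe 2.0 x4 card is right at that limit if everything is streaming at once.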

Well it’s hard to know without knowing anything about the use case. Maybe others can chime in. Why do you want to switch to Proxmox with TN virtualized? How many/what sort of containers and/or VMs do you want to run? Have you monitored the main dashboard in TN and looked at the Netdata reporting to get a feel for how the system is currently performing?

I have a general impression that Ryzen CPUs are more power efficient than Intel, but I’ve also seen videos disputing that. I would question whether the i9-13900, not overclocked, is really terribly more power hungry than other CPUs if configured correctly. Of course, if you are overclocking, then it will draw a lot more power; that’s the way it works. If your current setup basically works, you can wait and see what the 15th gen Intels have to offer. If performance/watt is really a “main goal,” then you can work down Passmark’s CPU Power Performance ranking:

https://www.cpubenchmark.net/power_performance.html

Unfortunately, I don’t think you can limit it by socket or other criteria besides Intel/AMD, so you kind of have to know a bit about the naming schemes to get something in the same ballpark, but for example you could keep your motherboard and get an i9-14900T, which has about twice the performance/watt of an i9-13900. I don’t think K SKUs are on the list because they are overclockers. This wouldn’t be “keeping the power you have,” though. The Passmark score would drop to about 40K vs about 60K for your 13900K. It’s not like you have a Xeon or AMD chip from 12 years ago. You’d have to analyze what the cost to buy a new CPU and sell the old one is, and what the cost savings in electricity really are. If you live in California or Europe, maybe it’s worth it. If you live in Canada or many other parts of the US, probably not.
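As a very rough way to run that cost analysis (every number below is an assumption; plug in your measured wall draw, your electricity rate, and real buy/sell prices):

```python
# Back-of-the-envelope break-even for swapping to a lower-power CPU.
# All inputs are assumptions for illustration only.
watts_saved_avg = 40    # assumed average reduction in wall draw, running 24/7 (W)
price_per_kwh = 0.30    # assumed rate for a high-cost region (USD/kWh)
net_swap_cost = 250     # assumed new CPU price minus resale of the old one (USD)

kwh_saved_per_year = watts_saved_avg * 24 * 365 / 1000
savings_per_year = kwh_saved_per_year * price_per_kwh

print(f"~{kwh_saved_per_year:.0f} kWh/yr saved -> ~${savings_per_year:.0f}/yr")
print(f"Break-even after ~{net_swap_cost / savings_per_year:.1f} years")
```

At a cheaper rate like $0.10/kWh, the same 40 W savings is only ~$35/yr, which is why location matters so much.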