Pre-Purchase HL15: Hardware Options

I’m looking to buy the HL15, but I want to check whether I should consider other hardware options instead.
I currently have an i9-13900K & an ASUS ProArt Z790-Creator with 96GB of RAM.

Compared with the current specs sold for the HL15, my i9 is more powerful, but I’m just not sure if having ECC RAM is worth it?
Is there another chipset I should use for sustainability & efficiency?

I was mainly using the CPU for transcoding

I’m not exactly sure what sustainability and efficiency mean here, but the best thing you can probably do for sustainability is make sure you have applied the latest BIOS update to address the early Raptor Lake CPU degradation bug.

The differences between your ASUS ProArt Z790-Creator and the full-build Supermicro X11SPH are more than just ECC RAM. The Supermicro board has other server-motherboard features like IPMI, support for up to 2TB of RAM, dual 10G NICs, more PCIe lanes, and so on. The main advantage for 45HL is that it has an onboard HBA for 8 SAS/SATA drives in addition to the regular 8 SATA ports, so they don’t have to include a separate HBA in the build, freeing up a PCIe slot and reducing cost versus adding a discrete HBA.

The main purpose of ECC is to mitigate data corruption in memory. How big an issue this is for ZFS is debated across the internet. ECC RAM certainly can’t hurt, and it’s what’s most often recommended, but memory corruption is definitely a bit of an edge case; many people run ZFS with non-ECC RAM with no problems. The trouble is that if RAM corruption isn’t caught, the system can start writing corrupt data to disk that is then not caught by a scrub and gradually corrupts your entire pool. It really depends on how critical your data is to you.
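Whether or not you use ECC, regular scrubs are what catch on-disk corruption early. A minimal sketch of a health check that parses `zpool status` output — the pool name `tank` and the exact message text are assumptions based on common OpenZFS output, so adjust for your system:

```python
import subprocess

def pool_is_healthy(status_text: str) -> bool:
    """Return True if `zpool status` output reports no known data errors."""
    # OpenZFS prints this exact line for a clean pool (assumption; verify on
    # your TrueNAS version before relying on it).
    return "errors: No known data errors" in status_text

def check_pool(pool: str = "tank") -> bool:
    """Run `zpool status <pool>` and report whether it looks clean."""
    out = subprocess.run(
        ["zpool", "status", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    return pool_is_healthy(out)

# Typical usage (requires ZFS installed, so not run here):
#   subprocess.run(["zpool", "scrub", "tank"])  # kick off a scrub
#   print(check_pool("tank"))                   # check once it finishes
```

TrueNAS SCALE schedules scrubs for you by default, so something like this is mainly useful for alerting from your own scripts.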

Do you really have no other specs for what the HL15 system needs to do? That system seems like overkill for just transcoding.

Chipset or socket? LGA1700 (your CPU’s socket) is the most recent Intel desktop socket. If you need CPU transcoding you need to stay with Intel, although in many other ways AMD produces what I would consider more sustainable and efficient CPUs; my definition (how long they support the socket, plus price/performance) may be different than yours, though. AMD chips don’t have Quick Sync.

You could step back to an “H” or “B” chipset, but you’d lose overclocking on your K chip. And since almost any other motherboard you buy won’t have an onboard HBA, you probably want a board that supports x8/x8 bifurcation so that, along with the HBA (which will take one x8 slot), you still have another x8 slot free.

There are more recent Intel server sockets than the LGA3647 on the X11SPH, such as LGA2066 and LGA4189, but I don’t think Supermicro has made an X12, X13, or X14 board similar to the X11SPH for those sockets.

I guess if someone said “transcoding NAS,” I’d probably think something like 12th gen Intel and an “H” chipset. If someone said “sustainable and efficient transcoding NAS” I might think more along the lines of an AM5 system with a discrete GPU. Neither of those give you the features of a server motherboard though. I think many people are getting the full build with a CPU upgrade to the Xeon 4210 or 4216 and adding in a GPU if they need transcoding. It doesn’t have to be a high end GPU.

Not sure if any of that helps.

Sorry, I guess I could’ve added more info.
I’m new to forums & primarily was using Discord. Trying to transition myself.

For sustainability and efficiency, I’m referring to the issues you listed. I applied those BIOS updates.
I was thinking of switching to an EPYC chip for the headroom & efficiency, but wasn’t sure if it was really worth the price of admission.

I guess I didn’t think about all of those other features. I think I’ll be sticking with what I have for now, just because I already have it & my server is in the same room as me, so I don’t really need IPMI.
I was planning on setting up a second machine just for backing up the pool in case of any corruption or issues like that.

I don’t just transcode; apologies again, that was just me simplifying.
I am also using it for game servers (Minecraft & Palworld).
I have multiple applications & bots I’m running.
Honestly, it is overkill for me. It was my first NAS build.
It’s a bare metal TrueNAS Scale system with

  • 10 × 20TB HDDs
  • 3 × 10TB HDDs

I’ve been using AMD for my main system for a while, but I just didn’t find it as useful as I would’ve liked for transcoding.
But I guess I didn’t think about getting a GPU. I heard the Intel ones have been great, especially for the price.

I went ahead & just bought the HL15 bare-bones & am going to transplant my system into it.

That’s what I was going to suggest as I was reading your reply. It might seem a bit pricey depending on what you compare it to, but it is mostly a great case that will be “sustainable and efficient” unless your 13-disk pool is really full already.

If you are running out of threads or PCIe lanes then EPYC is a change you could consider, but my impression is that most EPYC boards are larger than ATX, so I think people have had trouble getting EPYC builds into the HL15.

Have you seen the posts here about the fans? If you didn’t order the Noctua fan upgrade you will at least want to unhook the stock Coolerguys fans from the PDU and connect them to motherboard headers that you can control. Otherwise they run at full RPM all the time. Your board probably supports both DC and PWM fans. If it only supports PWM then you will probably want to replace the stock fans with 4 pin PWM fans.
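If you go the motherboard-header route, it’s worth verifying that the OS actually sees controllable fans. A hedged Linux sketch that lists the PWM outputs the kernel exposes under `/sys/class/hwmon` — exact paths and chip names vary by board and sensor driver, so treat this as illustrative:

```python
from pathlib import Path

def list_pwm_outputs(hwmon_root: str = "/sys/class/hwmon") -> list[str]:
    """Return 'chip: pwmN = value' strings for each PWM fan output found.

    pwmN files hold a 0-255 duty-cycle value; which chip they belong to
    depends on your motherboard's sensor driver (assumption: standard
    Linux hwmon sysfs layout).
    """
    results = []
    root = Path(hwmon_root)
    if not root.is_dir():
        return results  # not a Linux hwmon system, or nothing loaded
    for hw in sorted(root.glob("hwmon*")):
        name_file = hw / "name"
        chip = name_file.read_text().strip() if name_file.exists() else "unknown"
        for pwm in sorted(hw.glob("pwm[0-9]")):
            try:
                value = pwm.read_text().strip()
            except OSError:
                continue  # unreadable attribute; skip it
            results.append(f"{chip}: {pwm.name} = {value}")
    return results

for line in list_pwm_outputs():
    print(line)
```

If this prints nothing for your fan headers, the board’s sensor chip driver may not be loaded, and you’d fall back to controlling fans from the BIOS fan curves instead.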

I was actually thinking about any issues I might run into.
I tried finding a video on it, but how do I hook up the backplane?
Is it a specific connection I’d need on my board?

I actually did see issues with the cooling.
I went ahead & opted for the Noctua package. Been using them for years.
I actually have an NH-D15 on my current NAS build that I’m worried won’t fit.

How do you have your drives connected currently? It looked like the ASUS ProArt Z790-Creator board only had 8 SATA connectors, so I assumed for 13 drives you were using either an HBA or a SATA expansion card.

Which “Cable Set” did you choose when you ordered? There are 7 or so choices, A through G I think.

The backplane has four SFF-8643 (“mini SAS HD”) connectors. The case should come with the cables you chose already plugged into the backplane. They plug in on the underside of the backplane, so if you did need to change them you’d have to take off the front and the metal drive cage to get to the backplane, then unscrew the backplane so you can get at the underside. I’m not sure if there’s a specific video; it’s not too hard, everything’s just screwed together. If you chose the correct cables, though, you shouldn’t have to mess with that: the backplane ends should come plugged in, with (IIRC) the motherboard ends zip-tied in the case for shipping.

I don’t think the NH-D15 fits. I think that’s 168mm high and I think the max for the HL15 is 160mm, depending on where the measurements are from. You probably will need to slim down to an NH-D14 or NH-D15S.


I was stupid & just rushed the purchase.
I got Set A (Mini-SAS-HD).
I went ahead & emailed the team to see about editing the order, if possible.

I was using a 16-port SATA III PCIe card to connect the drives.
I guess my question is: is there a difference in speed/performance between going SATA or SAS?
If so, I’d rather go SAS & just get a new board, tbh.

Well, yes and no. Think of SAS as a superset of SATA. SAS drives and controllers can do everything SATA ones can, and more, but SATA drives and controllers can’t do everything SAS ones can. To take full advantage of SAS speed improvements, you need both SAS-compatible drives and an HBA. The protocols won’t let you connect a SAS drive to a SATA port, and while connecting a SATA drive to a SAS HBA is absolutely possible and done all the time, it doesn’t give you any speed improvement.
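As a back-of-envelope check on why link speed rarely matters with spinning disks — all the figures below are rough illustrative assumptions, not specs for any particular drive:

```python
def usable_mb_per_s(link_gbps: float, overhead: float = 0.2) -> float:
    """Approximate usable link bandwidth in MB/s.

    Assumes ~20% line-rate overhead for encoding/protocol (a rough figure;
    actual encoding differs between SATA III and SAS-3).
    """
    return link_gbps * 1000 / 8 * (1 - overhead)

sata3 = usable_mb_per_s(6)     # SATA III: 6 Gb/s line rate
sas3 = usable_mb_per_s(12)     # SAS-3: 12 Gb/s line rate
hdd_peak = 270                 # assumed peak sequential MB/s for a big modern HDD

print(f"SATA III usable ~ {sata3:.0f} MB/s; SAS-3 usable ~ {sas3:.0f} MB/s")
print(f"HDD peak ~ {hdd_peak} MB/s, so either link has plenty of headroom")
```

A spinning disk tops out well below even the SATA III link, which is why the choice between SATA and SAS cabling for HDDs is about stability and management, not throughput.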

The reason people would use a SAS HBA to connect to the backplane rather than a SATA port expander even if they are just using SATA drives is that they are typically more stable. SAS HBAs are enterprise grade parts, SATA port cards are typically consumer grade. Drives might randomly drop out of the pool more on a SATA card. Less critically, it is easier to manage four cables than an octopus of 16 ends of fanout cables. OTOH, if you look at SAS HBAs you’ll see they have large heat sinks that your SATA card does not. They do draw more power and output more heat than a SATA card if that is a factor in someone’s build. It doesn’t seem to be for you.

I haven’t used a 16 port SATA card, but I did use a 4-port one in a SFF NAS build without any problems. For more drives than that I’ve always used a SAS HBA. Here’s one article that talks about it a bit;

My suggestion would be to either keep things simple and start with a basic transplant and handle any other changes later, in which case you would want;

Set G
(4x) SFF 8643 → 4x 7-Pin SATA [Mini-SAS-HD (Backplane) to 4x 7-Pin SATA]

Or, to plan to swap out the SATA expander card with an LSI 9300-16i, in which case you would want;

Set A
(4x) SFF 8643 → SFF 8643 [Mini-SAS-HD (Backplane) to Mini-SAS-HD]

Again, though, with SATA disk drives this is more of a system-stability and cable-management decision than a speed/performance decision.

For EPYC, you should be able to fit the Supermicro H12 and H11 boards, as they are ATX. Threadripper Pro, however, is unlikely to fit. Which is sad, since some of my tasks need decent single-core performance that isn’t available on the modestly priced used EPYCs.