HL15 hardware advice

It’s after midnight here and I’m pretty strung out after days of researching rack gear, rack UPSs, and general server hardware, so thanks in advance for any help and please bear with my likely incoherent rambling :slight_smile:

At this point I’ve pretty much got my heart set on the HL15, but I’m dithering on what configuration to go for hardware-wise. Whatever it is, I’m going to run TrueNAS Scale on it (physically or virtually with PCIe passthrough), and its main function will be to serve as my primary NAS (with the Synology DS918+ being repurposed as a local backup box).

It will be a while before I can order, as at the moment I don’t have a rack (or even room for a rack, until I throw some crap out of the basement), so my first order of business will be putting in a rack with what I consider the basic necessities for a rack environment (patch panel, UPS, PoE switch, router) so I’ll have a place to put it and a network to connect it to.

Software-wise, I’m not sure what all I will do with it. I have a Minisforum MS-01 running Proxmox that has VMs I can use to host my Docker applications and services, so it isn’t strictly necessary that I run any of them on the NAS. Both the MS-01 and the NAS will have 10 Gbps networking (ideally SFP+ on the NAS end so I don’t have to fiddle with transceivers, as the MS-01 has 2x SFP+).

So software-wise I basically have three ideas for it:

  1. Bare metal TrueNAS Scale, no applications or VMs
  2. Bare metal TrueNAS Scale, maybe some applications and VMs for flexibility (e.g., running things on it as a backup, via load balancing or HA, in case the main host was down)
  3. Bare metal Proxmox, PCIe passthrough of the LSI HBA to a TrueNAS Scale VM, then whatever applications or VMs I feel like running. Potentially make this part of a cluster with the MS-01 if asymmetric clusters made up of different hardware are supported and I got another mini-PC or something.
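For option 3 specifically, the first thing I’d want to verify on the Proxmox host is that the LSI HBA lands in its own IOMMU group before handing it to the TrueNAS VM. A minimal sketch of how I’d check that (my own, assuming a Linux/Proxmox host with IOMMU enabled on the kernel command line, e.g. intel_iommu=on or amd_iommu=on):

```python
#!/usr/bin/env python3
# Hypothetical helper: list IOMMU groups so you can confirm the LSI HBA
# is isolated in its own group before PCIe passthrough to a TrueNAS VM.
# Assumes a Linux/Proxmox host booted with IOMMU enabled.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.exists():
    raise SystemExit("No IOMMU groups found; is IOMMU enabled in BIOS/kernel?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        # vendor/device IDs help spot the HBA (LSI/Broadcom vendor ID is 0x1000)
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"group {group.name}: {dev.name} [{vendor}:{device}]")
```

If the HBA shares a group with anything else, passthrough gets messier, so that would factor into which motherboard I’d pick.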

That brings me to the hardware dithering. I’m pretty lazy these days, and I’m aware that in software scenario 1 (and most likely scenario 2) the Full Build configuration should be able to handle things perfectly fine, with headroom to spare, while saturating two 10 Gbps streams. I’m also not sure I could beat the price difference it adds if I sourced a CPU, motherboard (especially one with IPMI), ECC RAM, and an LSI HBA myself. Plus the additional PCIe slots and lanes would be nice, even if paying that much for a build with a 5-year-old, relatively weak CPU makes me cringe pretty hard.

On the other hand, the best numbers I’ve been able to find online say the Full Build configuration should idle at around 120 W with no drives and 280 W fully populated, run around 350 W under full load when fully populated, and draw about 510 W during startup when fully populated. I’ve seen a build in the HL15 with an ASRock Rack W680D4U, an i5-14500, 128 GB of ECC RAM, the Noctua cooling package (which I’d be getting either way), and six 18 TB Seagate Exos drives in two 3-drive RAIDZ1 vdevs idling at around 86 W, and I’m guessing that would run circles around the Full Build configuration CPU-wise.

If I went with an AM5 build with something like an ASRock Rack B650D4U and a Ryzen 9 9900X, I’m sure it would run circles around the Full Build configuration CPU performance-wise, and probably sip power at idle. I would need to buy an Arc A310 for hardware transcoding if I decided to run Jellyfin / Emby / Plex on the box my media actually lives on, but that’s the same with the Full Build configuration, so I consider that a wash.

With either a W680 or an AM5 build, the limited number of PCIe lanes also means taking care about which slots get how many lanes, whether bifurcation is supported, whether I could have a GPU for QuickSync and an NVMe bifurcation card at the same time, etc., which is annoying.

Basically, my thought process keeps going around in circles asking whether more power efficiency and performance (which I might not even end up making use of, as part of me thinks it would be cleaner if this did NAS and nothing else) is worth paying a little to significantly more (mainly because unregistered DDR5 ECC is expensive), or whether I should just make my life easier (and probably save a little money, since I do want IPMI, so the motherboard is going to be expensive regardless, and I’d probably end up on the hook for an SFP+ NIC with either of the builds I’ve considered) by going with the Full Build configuration.

As for the power efficiency, my reasons for considering it are:

  1. Less heat means less noise, though this may be less of a consideration if I can’t hear the Noctua fans running at full speed in the basement from upstairs.
  2. Power bill. While we aren’t exactly dealing with European electricity prices here (our bill says the fuel price to generate the electricity is $0.04139 per kWh), our electric bill does run pretty high (as of late, it’s been in the $350-450 range). This isn’t really tech related, but we have a heat pump, a 30-year-old inefficient water heater, and the basement has three chest freezers of varying sizes and an old refrigerator for overflow. So while I doubt the Full Build configuration would have a significant impact on that, it’s at least worth considering whether less added electricity cost is worth paying an up-front premium for a more power-efficient build (rough math in the sketch below).
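Here’s the back-of-the-envelope math I keep doing in my head, assuming the idle numbers above and treating the $0.04139/kWh fuel factor as the marginal rate (the all-in rate with transmission and distribution added will be higher, so scale accordingly):

```python
# Rough annual running-cost comparison using the idle figures quoted above.
# Rate is only the "fuel factor" from our bill; the true all-in $/kWh is higher.
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float, dollars_per_kwh: float = 0.04139) -> float:
    """Cost of running a constant load for a year."""
    return watts / 1000 * HOURS_PER_YEAR * dollars_per_kwh

full_build_idle = 120  # W, Full Build idling with no drives (numbers found online)
w680_build_idle = 86   # W, the W680 / i5-14500 build idling with six drives

savings = annual_cost(full_build_idle) - annual_cost(w680_build_idle)
print(f"Full Build idle: ${annual_cost(full_build_idle):.2f}/yr")
print(f"W680 build idle: ${annual_cost(w680_build_idle):.2f}/yr")
print(f"Difference:      ${savings:.2f}/yr at the fuel-factor rate alone")
```

At that rate alone the difference works out to something on the order of $10-15 a year, so the up-front premium would take a long time to pay back unless the true all-in rate is several times higher.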

I knew that would be very long-winded, but I think I surprised even myself. Thanks to anyone who took the time to read my little novella, and I would appreciate any recommendations and advice from those of you who have actual hands-on experience with the HL15, whether it’s the Full Build configuration or your own build in it.

So it sounds like you want to relocate some of the computing you are doing to the basement. Do you already have networking down there to the rest of the house? Will this be near where the internet comes into the house, or will that need to be relocated? How are things for power down there, in terms of the number of breakers the outlets are connected to? What else will be on the same breaker as the IT equipment?

You don’t necessarily need a rack. The HL15 will work fine in a tower orientation with the included feet.

Well, that’s the thing. Without knowing use cases, it’s hard to provide advice. As you indicated, you can treat it just as storage, a replacement for the DS918+, or you can include migrating whatever workload you are doing on the MS-01. But it sounds like your use case is mainly entertainment consumption? What other applications are running in your house, or do you want to run in the future? You have PoE cameras, so you have some surveillance software? What else is running in containers or VMs on the Synology or Minisforum? It sounds like you’re not a “homelabber” with a tech career trying to keep up on tech or doing software development, etc.?

LGA3647 is more like a 7-year-old platform. The Bronze 3204 is fine for storage. I’d upgrade to at least the Silver 4210 if you were going to do anything serious with containers or VMs.

Correct. You’re not going to get 10G networking and support for 15 drives on a consumer motherboard. The best you’re going to be able to do is 2.5G LAN and two PCIe slots that will run at x8 when both are populated, plus an additional x1 or x4 slot. So that will be your HBA and one other card: GPU or 2x NVMe, but not both. For more than that you need a server motherboard and CPU: Xeon or Epyc. Do you really need an NVMe card? What’s your use case? Do you really do a lot of transcoding? I seem to be able to play media files just fine without needing to transcode them.
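As a rough illustration of that lane juggling, here’s a toy sketch; the slot layout (x8/x8 off the CPU plus an x4 off the chipset) and the per-card lane counts are generic assumptions, not any particular board’s spec:

```python
# Toy PCIe lane-budget check for a typical consumer board:
# one CPU x16 that drops to x8/x8 when both slots are populated,
# plus a single x4 slot off the chipset. Card widths are typical, not specs.
slots = {"cpu_slot_1": 8, "cpu_slot_2": 8, "chipset_slot": 4}

cards = {
    "LSI HBA": 8,        # feeding 15 drives, wants x8
    "GPU": 8,            # transcoding card, happiest at x8
    "2x NVMe riser": 8,  # bifurcation card needs x4 per drive
    "10G NIC": 4,        # fine at x4
}

free = dict(slots)
for card, want in sorted(cards.items(), key=lambda kv: -kv[1]):
    # Greedy fit: widest cards claim the widest remaining slots first.
    slot = next((s for s, width in free.items() if width >= want), None)
    if slot:
        free.pop(slot)
        print(f"{card:14s} -> {slot}")
    else:
        print(f"{card:14s} -> no slot wide enough")
```

Run it and the NVMe riser (or the GPU, depending on ordering) ends up without a wide-enough slot, which is the point: the HBA plus one of the others, not all of them.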

You could look at the new JetKVM if you went with a consumer motherboard, depending on which IPMI features you are most interested in.

Something doesn’t add up there. The lowest electricity price in the US is slightly under 9 cents/kWh. Your home is all electric (no natural gas?) and no solar?

In my case, I don’t need IPMI, nor more NVMe than the motherboard has. And although the cost of electricity is low where I live, I didn’t want the system generating more heat than necessary given where it’s located. I did need better performance than the PassMark scores of 4886 MT / 1114 ST of the Xeon 3204, and I wasn’t happy with the price/performance as you moved up to other CPUs for the LGA 3647 socket. So I did an AM5 build with an Asus ProArt Creator B650 motherboard and a Ryzen 9 7900, which has about 10x the PassMark MT score. I have a GPU in the electrical x16 slot (for AI, not transcoding), an LSI 9300-16i in the electrical x8 slot, and a 10 Gb NIC in the x4 slot. I think I paid about $800 total, including any shipping and tax (B&H Payboo FTW), for the CPU, motherboard, HBA, and NIC, so a lot more CPU performance than the $1000 for the X11SPH + Xeon 3204. If I needed more PCIe lanes, and had no budget constraints, I would probably go with Epyc. There’ve been a few interesting posts here of Epyc builds.
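For a rough sense of the price/performance gap, a quick sketch using the PassMark numbers above (the 7900’s multi-thread score is approximated as roughly 10x the 3204’s, and my $800 figure includes the HBA and NIC, so it understates the CPU+board advantage a bit):

```python
# Crude price-per-PassMark-point comparison using the figures in this thread.
# The Ryzen 9 7900 MT score is approximated as ~10x the Xeon Bronze 3204's 4886.
builds = {
    "X11SPH + Xeon Bronze 3204 (~$1000)": (1000, 4886),
    "ProArt B650 + Ryzen 9 7900 + HBA + NIC (~$800)": (800, 4886 * 10),
}

for name, (cost_usd, passmark_mt) in builds.items():
    print(f"{name}: {passmark_mt / cost_usd:.1f} MT points per dollar")
```

So somewhere around an order of magnitude more multi-threaded performance per dollar, with the caveat that for a pure HDD NAS most of that headroom goes unused.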

Yes. At the moment, literally everything is on my desk (or close to it, in the case of the MS-01), which is pretty inconvenient. Moving networking / server infrastructure to a rack would make my life a lot easier. I do not have networking to the rest of the house down there, but at the moment the only wired networking is on my desk, so until I could do the cable runs, I could get by with two runs from the patch panel to a switch mounted under my desk and an access point where my desk is (as part of this, I will probably replace my aging router with a UDM Pro or something).

I honestly don’t know power wise, but at the moment all of it besides the MS-01 is running off a single outlet under my desk, all connected to an APC BRM1500 or whatever the old model number of their tower 1500VA unit is, so I’m not too worried about it, though I was already considering having a higher current circuit put in there if I needed one for a refurb rack UPS from Eaton et al.

I know, but that’s a non-starter for me, as with my luck I’m 100% sure it would get knocked over while running and ruin the drives.

At the moment, my DS918+ runs all my Docker containers, as I haven’t gotten around to setting up a Docker VM on the MS-01. I run LSIO’s swag container as my reverse proxy, but switching to traefik is on my to-do list. I run Nextcloud to store my documents and photos (I’m thinking about running Immich too but haven’t gotten around to it yet), calibre to manage my ebooks, emby, jellyfin, and plex all pointed at the same media library (so I can compare them accurately, though jellyfin is the one I use the most), mealie as a recipe database, netbootxyz for PXE boot / installations, phpmyadmin as a mariadb front-end, vaultwarden as my password vault, and watchtower to update containers. So, I do depend on some of these. There are other things I’d like to set up in the future, like tailscale for external access (at the moment I’ve got my domain’s DNS record pointed at my static IP with 443 open, so I may want to move most or all of the services behind tailscale for more security), and if I had a friend with a NAS, I could use something like duplicati over tailscale to do encrypted off-site backups to their NAS while they did the same for me, etc.

I got the MS-01 with the intention of migrating my services over to a VM on it, as the Docker version on the Synology is really old, and the MS-01 is so much more powerful than it that it just made sense. But I haven’t gotten around to it yet.

I’m a software developer working from home for a living, but I take the “playing around with stuff” homelab wise by spells.

Well, the two motherboards I mentioned will do that (they each come in two variants: one with built-in 10 Gbps networking and one fewer NVMe slot, and one with only 1 Gbps networking and an extra NVMe or PCIe slot, I can’t remember which), because the 10 Gbps config uses 4 of the PCIe lanes for the NIC. But in general you are correct.

I don’t really “need” an NVMe card unless I was going with a motherboard that didn’t have a couple of PCIe slots and I decided to run VMs on it, as I would want to put VMs on a mirrored NVMe pool. I don’t do much transcoding, as I do full-fat Blu-ray rips onto my NAS (the vast majority of my used storage goes to that), so I’m not re-encoding to AV1 (admittedly in large part because I’m so anal and OCD I can’t decide what Handbrake settings to use, so until storage becomes a problem, which with the HL15 it wouldn’t for a very long time, I’m not worrying about it). Most of my home media consumption is Direct Play; the only time it’s really forced to transcode is with certain clients, like the built-in smart TV apps, when PGS subtitles get enabled. But I’ve shared my media with a couple of people, and I prefer to be future-proof. I’d rather have it and not need it than need it and not have it.

Funny you should mention that, as I have two JetKVMs on the way. I heard about the project from Wendell about 2 hours before the Kickstarter ended and backed it lol. But remote management is a must for me after my experience with the MS-01. The MS-01 has Intel vPro on one of its NICs, and the main reason I haven’t gotten my services migrated over to it is that I wanted to get vPro and remote KVM over AMT working through MeshCentral, and that’s been a massive pain. I’m thinking the JetKVM may just solve that problem for me.

But I can’t be arsed to power down a server, tote it to a monitor / keyboard / mouse (or I guess in a rack I wouldn’t have to if I had a console / KVM put in), etc. when I need to troubleshoot or update the BIOS or install an OS or change a BIOS setting or something. I haven’t had hands-on experience with either IPMI or the JetKVM, so I am not sure how full a replacement it is. I think it would let me do most of what I’d like to do insofar as I think you can put an ISO on it (or have it download an ISO at runtime if memory serves) to install an OS remotely, and it would let me get into the BIOS and change settings remotely, etc.

Yeah, our bill doesn’t really give us a breakdown of cents per kWh as far as I can see. It has generation services, which has a “fuel factor” with the figure I gave, but they tack on transmission services and distribution services, and it all added up to $443.36 for last month. I don’t like them, but they’re all we can get without moving. It’s all electric with no solar. I’d love to put in solar panels and a Tesla Powerwall or something, but I don’t have $30K lying around to do it with.

That’s a nice build. If the JetKVM is a sufficient IPMI replacement for me, that would fit my needs just fine I’m guessing. Do you know about how much power it idles at? I have a 5900X in my desktop and I’ve had odd issues with it since I got it, in that it randomly spikes up to 1.4V or so at idle unless I cap it to keep it from boosting, so out of the box it wanted to idle at 50-60C, and after capping it idles around 37-45C. I know AM5 is far more efficient, especially the 9000 series, and I heard Wendell say the 7900 would “sip power” in a NAS build, but he gave no actual numbers.

This is currently how I’m using my HL15. Its primary role is NAS, running the latest version of TrueNAS Scale, but I also run any storage-intensive applications on it via Docker apps (Syncthing, Emby, etc.).

It looks like you’re in the ballpark but a hair high with your numbers here. My Full Build HL15 right now is using ~217 watts. That’s with a CPU upgrade to a Xeon Silver 4214, 8 Seagate spinning drives, 6 SSDs, 5 NVMe drives, 2 SATADOMs, and an Arc A750 GPU.


Thanks for the clarification. That’s a lot of storage :slight_smile:
Do you mind explaining how all that is connected (and how the SSDs are mounted)? I might want to put in a few SATA SSDs for a pool, but I didn’t want to take up any of the 3.5" bays for them if I could avoid it. I would think they could make a bracket or something that would let you mount a few to the side of the case.

For the SATADOMs, what’s the setup there? I thought in the full build config the motherboard’s SATA ports were connected to the backplane, so I’m curious how you’ve got things setup.

I’m happy to elaborate further. I’ll start with the SATADOMs: I have them as a boot drive mirror for TrueNAS. The full build does have two available SATA ports that can also provide power to a SATADOM; they are the orange ports if you look at a picture of the X11SPH-nCTPF motherboard. This freed up the NVMe slot on the motherboard to be used as L2ARC if I thought I needed that.

X11SPH-nCTPF | Motherboards | Products | Supermicro

In my case, I did use the drive slots for my SSDs and 3D printed the 3.5" to 2.5" bracket. Two of the SSDs are used as a SLOG vdev, and the other four are part of my mirrored SSD zpool for workloads where I want some more performance.

45Drives - SSD Caddies by 45Drives | Download free STL model | Printables.com

They also have some printable brackets to mount SSDs on the rear of the case, but you correctly stated that the remainder of the SATA ports on the full build are connected to the backplane.

45Drives - SSD Mounting Bracket by 45Drives | Download free STL model | Printables.com

EDIT: I should also call out that these mounts can be bought through 45HomeLab if you don’t have a 3D Printer.


With very non-scientific and not at all extensive testing, per a Kill A Watt clone:

  • Build as described with NIC and HBA but not the GPU or HDDs: 120 W
  • Adding in the RTX 4070 Ti Super: 150 W
  • Adding 13x 12 TB Seagate SATA HDDs to all of the above: 225 W

I don’t have the drives set to spin down or anything, and it’s possible that after long enough of truly being idle the CPU may go into some sleep state. A 65 W CPU isn’t going to “sip power” vs a 15 W or 35 W one, but it certainly will beat the performance/watt socks off the LGA2011 used enterprise stuff a lot of Wendell’s homelab audience is/was still using.
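If you want to turn those readings into running cost, the per-drive increment is easy to pull out; a quick sketch, using the $0.04139/kWh fuel-factor rate quoted earlier in the thread (the all-in rate will be higher):

```python
# Per-drive idle draw and yearly cost from the Kill A Watt readings above.
# Rate is the fuel-factor figure quoted earlier; real all-in $/kWh is higher.
with_hdds_w, without_hdds_w, n_hdds = 225, 150, 13
rate_usd_per_kwh = 0.04139

per_drive_w = (with_hdds_w - without_hdds_w) / n_hdds
yearly_usd = per_drive_w / 1000 * 24 * 365 * rate_usd_per_kwh
print(f"~{per_drive_w:.1f} W per spinning drive, roughly ${yearly_usd:.2f}/yr each")
```

That comes out to about 5-6 W per idle-but-spinning drive, which is in line with typical 7200 rpm SATA spec sheets.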

To be honest, if I was as lazy about this project as you claim to be, I would do what Ryan did: get the full build with a CPU upgrade. That sounds like it will meet your needs now and for years to come, even starting with a slightly older hardware generation. The X11SPH is actually an interesting board that is hard to beat in some ways. It’s a shame Supermicro doesn’t seem to have made an X12, X13, or X14 variant of it.

Finally figured out how to quote lol.

Thanks for the info. Investing in a platform that old does make me wince, but if I relegated it to storage only (and maybe hosting “storage adjacent” services like Jellyfin / Emby / Plex, and maybe the *arr services if I ever decide they’re worth using alongside my Blu-ray ripping rather than downloading my media, etc.), I’m sure it would handle all that just fine, and all the expansion options on it do make it more attractive. I wish they offered an Epyc build in a similar price range.

Yeah. It’s definitely worth considering.

If I go that route though, it’s worth considering new vs used for the upgrades. It looks like buying used I could save around $500 on the Xeon Silver 4214, and around $164 on open box 64 GB Micron RDIMMs. I’m not sure how that would impact support though.

I’m all for supporting 45Homelab, but I’m about to try and put in a whole rack so it’s going to be a rough year as it is lol.

I watched the Tom Lawrence livestream yesterday with Brett Kelly, and he talked about their new “Pro” line of HomeLab hardware. They do offer an Epyc build via their configurator, but it doesn’t list a price. That indicates to me it’s on the costly side. Still, you could inquire and see if the value is right for you if you do indeed have your heart set on an “easy” Epyc build.


Neat. Just looking at the prices of that CPU / motherboard on eBay I’m going to say that’s gonna be out of budget lol.

I could maybe find used Epyc parts to put together a good build, but it’s trickier as I haven’t followed that space at all.

I don’t really NEED crazy power if I use this just as a storage server, maybe running a few containers for storage-related things. It just stings paying around $1000 for a 7-year-old platform. That Supermicro motherboard is nice though, old or not.

Yep - New old stock enterprise gear isn’t cheap. :money_mouth_face:

So you’d buy the full build with the 3204 and resell the 3204? I don’t think 45HL publicly offers a no CPU full build option, but you may be able to do a special request via info@45homelab.com. I don’t think it should impact support a lot beyond the obvious that they wouldn’t support hardware they didn’t sell you. The main thing I am aware they have value-add support for is hard drives, where they will broker the RMA process and front replacements for drives you buy through them.

It will only sting once, not like that electric bill :slight_smile:

Seriously, if your use case is HDD-based storage, it really doesn’t matter that much. You’re mainly paying for the server grade motherboard and CPU. You aren’t going to get anything much more from a current gen Intel or AMD platform than the older one in this area, and that likely won’t change for years. There are still use cases that run just fine on 8th gen Intel, perhaps back to 6th gen depending on OS and USB requirements. As long as you’re not running Windows on the bare metal, you’ll be fine.


Well, I’m not going to pay $1000+ for that CPU upgrade when ordering the build, so it’s more like I would probably stick with the 3204 and hope it works for what I would use the machine for. I expect it would, as in a couple of the reviews I saw, two simultaneous 10 Gbps transfers were pegging the 3204 at around 40-50%, so I would hope it could handle running some Docker containers on top of that, since my shitty Synology Celeron can handle them without breaking much of a sweat. Then if I found the performance of the 3204 wasn’t good enough, I’d probably buy a used 4216 and try to sell the 3204.

EDIT: Well it looks like I had the prices totally wrong, as the 4216 is $750 not $1000. That does admittedly make it more tempting, but that’s a lot still. Ah well. Who knows what I’ll do now.
