Pre-sale question about HL15 parts

I just love the look of this case … I have always had trouble with my hard drives running hot – this setup with fans in front and back – beautiful!

I want to upgrade my home TrueNAS server and am looking at this … and want to make it work.

I have a cpu/motherboard/memory etc selected already and want to know how best to integrate it with this box.

So first question: if I get the [HL15 - Chassis, Backplane & PSU] option, does it come with an HBA? If not, I presume you would recommend that I get an LSI 9400-16i … can I get that from you? And then I would get the Set A cables?

Is there anything that I am missing here? I should be able to plug the board of my choice into this bad boy and get great hot-swap, clean backplane, and badass cooling for my drives. :slight_smile:

The [HL15 - Chassis, Backplane & PSU] doesn’t come with an HBA and unfortunately the only HBA that I think 45HL offers in the store is a 9600-16i for $1225. So you have to come up with the appropriate connectivity based on the system you are building. If you get a 9400-16i, yes you would get cable set A. You’d probably have to source the HBA from Amazon or eBay.

  • What size is your motherboard? The max for the HL15 is ATX.
  • Do you have a particularly tall CPU cooler? The max for the HL15 is 160mm I believe.
  • What other PCIe cards do you have, if any? E.g., if you currently have a GPU and a lower-end CPU with few PCIe lanes, your motherboard may not support both a GPU and an HBA.

This is going to be a headless media server – I ‘may’ add a video card later for Plex transcode (I don’t know what I am talking about, but I read this somewhere.)

Mobo: Pro WS W680-ACE Intel W680 LGA 1700 ATX Workstation Motherboard
CPU: i7-14700

Yes, I will make sure my CPU cooler has clearance. This mobo ‘seems’ to have both 4 discrete SATA ports and one 4x SlimSAS connector, which I am hoping is SFF-8654 – so I might not need an HBA, but from all I have read, getting one seems to be a smart move.

Then, no I don’t think you are missing anything.

Per the specs page (Pro WS W680-ACE - Tech Specs|Motherboards|ASUS USA)
The motherboard only supports 8x SATA total: 4x discrete 7-pin and 4x via an SFF-8654-4i. To use all 15 bays on the HL15 you would need at least an additional -8i HBA (preferred) or a SATA expander. For simplicity of cabling I would just get a -16i HBA, but if you aren’t using SAS drives, you could mix an -8i HBA and the mobo connectivity. I doubt the cost savings would be that much for -8i vs -16i. To use the SlimSAS on the motherboard you would need to purchase a separate SFF-8654-4i to SFF-8643 cable. Those SlimSAS and MCIO mobo connections can be a bit finicky when used with SATA (I think their main intent is NVMe). I think there are parts of the “standard” that aren’t specified, and different mobo and cable vendors do their own thing.
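As a quick sanity check on the port math, here is a back-of-the-envelope sketch. The bay and port counts come from this thread; everything else is just arithmetic:

```python
# Back-of-the-envelope port count for the HL15 + Pro WS W680-ACE combo.
# Numbers are from the discussion above; adjust if your board differs.
hl15_bays = 15
onboard_sata = 4      # discrete 7-pin ports on the motherboard
onboard_slimsas = 4   # 4 more SATA lanes via the SFF-8654-4i connector

shortfall = hl15_bays - (onboard_sata + onboard_slimsas)
print(f"Drives not covered by the motherboard: {shortfall}")

# An -8i HBA (8 ports) covers the gap alongside the mobo ports,
# while a -16i HBA (16 ports) can drive all 15 bays on its own.
print("-8i enough alongside mobo ports:", 8 >= shortfall)
print("-16i enough by itself:", 16 >= hl15_bays)
```

With 8 onboard ports you are 7 drives short of 15, so either combination works; the -16i just keeps all the backplane cabling on one card.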


I was thinking to avoid the onboard sata options entirely and go for a 16i HBA … that’s why I asked if you sold one.

That’s what I’d do. I don’t work for 45Drives.

Thanks. I am a bit out of my depth. I am building a TrueNAS server and want to futureproof it. I got a list of parts from ChatGPT and am trying to see if I can make it work. But I had trouble with drive heating in the past, so I wanted to ‘fix’ that with this build.

You shouldn’t have trouble with drive temperatures in the HL15. Of course, that can be affected some if you fill it with hot-running enterprise drives or expect to run the fans at really low RPM for minimal noise. In most situations though, the drive temps are fine with the fans only ramping up perhaps during a scrub. The new dust filter in the v2 will restrict airflow some, so if you don’t have pets you could consider removing that.

Your specs may not be a rabbit hole you want to go down, but remember that ChatGPT (depending on what version you are using) is only trained on data current as of perhaps 2023. Using it to suggest a “futureproof” system in mid 2025 may not yield the best results. Also, “futureproof” may mean different things to different people, I guess; no tech is “futureproof”, so I’m not sure exactly what you are trying to mitigate. To me, if someone used that word, I would recommend looking at either a system based on Intel’s current socket, LGA 1851, and their “Core Ultra” series, such as the 5 235 or 7 265, or AMD’s current AM5 socket, like the Ryzen 9 7900. Intel changes their socket almost every CPU generation, forcing a motherboard upgrade to use new CPUs, whereas AMD keeps the same socket across more CPU generations, so in that respect I consider AMD more “futureproof”, but the Core Ultra series seems solid. Just know that the Intel socket will probably be incompatible with the next round of Intel CPUs. I’m partial to AMD right now, but one reason to go with the Intel Core Ultra is the transcoding you mentioned; if that is a functional requirement for you, Intel’s QuickSync would mean you might not even need a GPU for Plex.

I know ChatGPT is dumb. Really. But I have been trying to work around the known limitations – and perhaps the one you mentioned is one that I can’t. :unamused_face:

I have been considering that in all likelihood, whatever I do will be fine :relieved_face: but I also think if I build something badass, I might step into more VM stuff … :exclamation_question_mark:

So I have thought that perhaps my MB choice is limited in terms of PCIe lanes – which I am coming to think might be important some time. And I can overcome the QuickSync issue with an installed video card that might be useful when I do create a vm anyway.

I don’t know if this makes sense – but with this line of thinking, I was then considering what 45HL offers as a pre-built … perhaps like this:

[HL15 - Fully Built & Burned In]

Motherboard: ASRock ROMED8-2T
Processor: EPYC 7282
Memory: 128GB
AC Power Cord: Type B
Fan Kit: Noctua Fans

To that I could install a video card … and should be good … at least in my limited thinking …

No offense, but you’re a bit all over the place. I didn’t see anything “dumb” about ChatGPT’s suggestion. The i7-14700 seems like a decent CPU (well, ignoring the Raptor Lake bug and that the gains over 12th and 13th gen were minimal), and the board it suggested seems at a glance like a capable workstation board. I was only pointing out that ChatGPT probably has limited knowledge of the Intel 15th gen release, and you can probably build an equivalent system on that platform for about the same price, or you should be looking for discounts on the 14th gen components.

It really all depends on your requirements, what you need and want the system to do currently or in the future. Is it just for you or a family? Is it mainly for media storage? Do you want to do video editing directly off of it? Do you want to do AI with it? Are you streaming security cameras to it 24x7? …

There is no direct relationship between VMs and PCIe lanes. PCIe lanes allow you to plug in more, or faster, hardware. For example, if you wanted to add two (or three or four) GPUs and/or an NVMe carrier card for 4 NVMe sticks and/or a 100G NIC along with the HBA, you need additional PCIe lanes to support that. But that’s overkill if all you are doing is storing Linux ISOs.
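To make the lane arithmetic concrete, here is a hypothetical budget. The lane widths per card are typical values I am assuming for illustration, not anything specified in this thread:

```python
# Hypothetical PCIe lane budget for a maxed-out expansion scenario.
# Per-card lane widths are typical/assumed values, not measured ones.
cards = {
    "HBA (e.g. 9400-16i)": 8,
    "GPU": 16,
    "4x NVMe carrier card": 16,  # x4 per stick, bifurcated
    "100G NIC": 16,
}

needed = sum(cards.values())
print(f"Lanes needed: {needed}")

consumer_lanes = 24   # roughly what a consumer CPU + chipset expose
epyc_lanes = 128      # EPYC Rome (SP3) platform
print("Fits consumer platform:", needed <= consumer_lanes)
print("Fits EPYC platform:", needed <= epyc_lanes)
```

A build like that blows well past a consumer platform’s lane budget but barely dents what the EPYC board offers, which is the whole “jack of all trades” point made below.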

v2 of the HL15 is good (“futureproof”?) in that regard with 128 or so PCIe lanes instead of the 20-28 that are going to be on a consumer motherboard build (where quite a lot get pre-dedicated to the on-board NVMe). It also allows for more overall RAM and ECC RAM, remote management, and some other server features. But the HL15 v2 technology, like ChatGPT’s build, is also not current generation: the Epyc Rome CPUs are circa 2019. Epyc has moved on to socket SP5 and Intel desktop to LGA 1851. That may or may not be significant to you. If multicore processing power is a factor, you can get a bit more out of the top end of the CPUs on the SP3 socket, but those highest-end ones will cost more in both money and AC power. If someone doesn’t need all the PCIe, there are motherboard, CPU, and HBA builds one can do for less than the $1100 incremental cost from the bare chassis & PSU to the min-spec full build that give you better price/performance and performance/watt, but the $1100 also doesn’t seem unreasonable for what you get in server-grade parts for an EPYC system a generation behind. Most homelabs don’t need the most current server hardware. It all depends on relative priorities in a particular use case.

I’m not sure if any of that helps or makes things more confusing. The HL15 v2 Full Build is certainly a good offering if you’re a bit directionless in requirements, a tinkerer, but don’t want to build your own system.

With v2 having PWM fans directly wired to the motherboard, I don’t think you really need the Noctua kit. The Noctuas might be a tiny bit quieter, not sure, but the fan kit upgrade was really a workaround for some questionable fan-wiring decisions in the HL15 v1, and shouldn’t really be needed for v2.

Thank you! Most of this makes sense – and yes, I am a bit directionless at the moment. I have always built my own PCs, and built my last FreeNAS/TrueNAS server, which has lasted 12 years. As I consider replacing it, I have learned about Proxmox and am excited about moving to that kind of implementation – moving Plex to its own VM rather than under TrueNAS. And then I started considering what I could do with Proxmox that I hadn’t considered before. So yeah, directionless is fair.
It seemed that there was a trade-off between new consumer motherboard speed for a targeted app and the server motherboards, which can do more things and likely have more bulletproof reliability.
I am a tinkerer – when I have time (well I used to be, and I consider that I might be again in the future.)
I don’t mind building my own system. When I built my media server 12 years ago, I used a Xeon and a server motherboard – and it has been rock solid all these years.
So whatever I do, it will represent a monumental improvement over my 12-year-old system. And I don’t mind rolling my own. I have decided that a Proxmox NAS, Plex (with transcode etc.), and various VMs are my direction. Money isn’t a huge factor, though I have always enjoyed saving it by building things myself.
I have read a few discussions about my original ASUS motherboard pick that have me afraid to go that direction: people who shelved it, or repurposed it as a workstation, after failing to get it to do what they wanted as a server.
As I understand your comments, the HL15 has loads of expansion possibilities – so it’s a jack of all trades. Were I clearer in my direction, I could find something that was more a master of what I need. Further, it seems that if I was rolling my own and moving toward the HL15, I could consider a more current implementation rather than the 2019-vintage builds. What would a more ‘current’ build that doesn’t break the bank look like? What mobo/CPU combo would I be considering?
I think this is the direction – I can’t go wrong with a more current, stable/steady server – it’s already going to run circles around what I have been using, and it’s likely to be sufficient for many, many years to come.

I’m not sure why you started with a custom build and discounted the fully built offering. When I purchased my HL15 v1, I built my own AM5 system; I wasn’t very impressed with the X11SPH motherboard and the relative CPU price/performance of the fully built v1 for my use case, and I could live without many of the server motherboard features. But the ROMED8-2T and Epyc CPU in the v2 is a much stronger homelab offering. Current-gen server parts are typically expensive, so it is pretty typical to go back a generation on the server side. From what you’ve said, I’d just get a fully built system, certainly if you don’t want to be potentially constrained by PCIe. For my particular situation, though, power draw per unit of compute is high on my list of factors, and often more recent consumer CPUs can do more compute with less power draw than server CPUs.

Two resources I use are:
PassMark Software - CPU Benchmarks - CPU Performance by Socket Type, to see what CPUs are available for which socket, and https://pcpartpicker.com/ to get an idea of what consumer motherboards are available for a particular socket; it’s less useful for server motherboards, though.