I just received my HL15 (so hyped!) and figured I’d post about my experience. If nothing else, I spent the past couple of weeks reading these kinds of posts while waiting for mine to arrive.
Key dates: I ordered a full-build HL15 on December 1st, it shipped on December 12th, and it arrived today, December 16th.
The packaging was very solid; the cardboard box was slightly beat up, but everything inside was untouched.
If you buy an HL15, definitely make sure to scan the QR code and read the manual. I’ve already hit 5-6 things I couldn’t “figure out” on my own that turned out to be clearly documented. Thank you to the 45HomeLab team for this!!
The rails were a bit tough to space out in my nearly full rack. I had 4U reserved for the unit (spaces 9-12). I checked the install manual and didn’t see any recommendation for where to put the rails to center the unit in a 4U space (I may have missed it). For anyone who buys an HL15 down the road: after some trial and error, I landed the rails in the middle holes of spaces 10 and 11 (i.e., dead center of the 4U space). In hindsight this makes sense, but I really had no idea what to expect here.
The default IPMI username is case-sensitive, and the password is on the sticker on the side of your unit. This is well documented in the setup guide, but I figured I’d post it here for the search engines down the road.
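If you ever want to check or reset those credentials from the OS side, ipmitool can do it. A rough sketch; the channel and user IDs are assumptions (Supermicro boards usually put ADMIN at user ID 2), so verify with the list command first:

    # list IPMI users on channel 1 (requires ipmitool and the IPMI kernel modules)
    sudo ipmitool user list 1
    # set a new password for user ID 2 -- replace 2 with your actual user ID
    sudo ipmitool user set password 2 'NewStrongPassword'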
The Noctua fans are insanely quiet; I’m so glad I got this upgrade and can highly recommend it. I cannot hear my HL15 over my UniFi switch.
Cockpit did need to be restarted on my machine to bring it up. My first disappointment: the Motherboard Viewer does not support the X11SPH, which surprised me a bit since this board came directly from 45HomeLab. The System and Disks tabs do work, though. I should mention this setup is temporary for me; I’m installing Proxmox on this HL15 shortly, so I’m mostly playing around, and overall I’m very impressed with Houston!
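For anyone searching later, restarting Cockpit is just a systemd restart (this is stock Cockpit behavior, nothing HL15-specific):

    # Cockpit is socket-activated, so restart the socket unit
    sudo systemctl restart cockpit.socket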
I don’t have the full build HL15, but I think the motherboard viewer is supposed to work for the X11SPH.
There are some posts here about re-installing Houston after you install Proxmox, so you’d have Proxmox to manage the VMs and Houston to manage ZFS, if a setup like that is of interest.
Small update, and first off thank you @DigitalGarden: I successfully set up Houston on top of Proxmox! I had to follow this guide to make it work, plus some random Google searches to correct issues. I like this method much better than my previous idea, which involved disk pass-through to a TrueNAS VM.
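For anyone curious, the rough shape of it was something like the below. I’m writing this from memory, so the repo-setup command and package names are assumptions; the linked guide is the authoritative version:

    # Proxmox is Debian-based, so Cockpit itself comes from apt
    apt install cockpit
    # add the 45Drives repo, then pull in the Houston modules
    # (repo URL and package names from memory -- double-check against the guide)
    curl -sSL https://repo.45drives.com/setup | bash
    apt install cockpit-zfs-manager cockpit-navigator cockpit-file-sharing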
Thank you @rymandle05! I ended up forcing an update of all the 45Drives packages through yum before I overwrote the Rocky install. A plain yum update did not work, since held-back packages were blocking the update for everything; targeted updates did in fact fix my mono display issue.
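For reference, the targeted update looked roughly like this (the package names here are examples, not necessarily the exact set on the shipped image; list what you actually have first):

    # see which 45Drives packages are installed
    yum list installed | grep -i 45drives
    # then update them explicitly instead of a blanket 'yum update'
    sudo yum update cockpit-zfs-manager cockpit-navigator cockpit-file-sharing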
The largest issue I’ve run into so far with installing Houston on top of Proxmox is that the drive aliases were not present, and generating them with dalias was a pain. I ended up using the workaround found here. With the modified version of dalias from that post, plus a template updated to cover only the first 8 bays, dalias worked great. I then manually updated /etc/vdev_id.conf after finding the device path for each disk in bays 9-15. After a quick udevadm control --reload-rules and udevadm trigger, all works well now!
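To give a flavor of the manual part, the entries I added to /etc/vdev_id.conf look like this. The PCI paths below are placeholders; pull your real ones from ls -l /dev/disk/by-path/:

    # /etc/vdev_id.conf -- one alias per bay; these paths are made up
    alias 1-9  /dev/disk/by-path/pci-0000:18:00.0-sas-phy8-lun-0
    alias 1-10 /dev/disk/by-path/pci-0000:18:00.0-sas-phy9-lun-0
    # ...and so on through bay 15, then reload udev:
    sudo udevadm control --reload-rules && sudo udevadm trigger
    ls -l /dev/disk/by-vdev/   # the aliases should show up here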
EDIT: Another small fix. I had to follow these instructions to fix missing fonts and icons in Houston’s ZFS manager.
I now have all 15 drives installed and aliased, as well as my two NVMe drives. Time to start building my ZFS pools! This process was not 100% foolproof either, and I expect I’m backing myself into a corner long term. In the end, this is for fun!
Great to hear you got everything going pretty quickly. I’m a little surprised older Houston packages are shipping with the HL15. I suppose the disk image 45HomeLab is using must not have been updated yet.
What drives and sizes are you putting in there? Seems like you’re gonna have terabytes for days by loading up all 15 bays.
I ended up getting some refurbished 14TB Western Digital Ultrastars on Black Friday. I’m trying to plan for the drives to fail in batches. I’m not 100% certain what my layout will be yet, but I was considering everything from mirrors at one extreme to three 5-wide RAIDZ2 vdevs. At minimum I’ll be doing RAIDZ3 if I pool them all together, so the parity overhead is going to be pretty significant either way.
I’m struggling a bit with Houston and Proxmox for setting up the ZFS pool; mostly I wanted to use some of the more advanced features, such as a special vdev for metadata (following the Level1Techs suggestion). That doesn’t seem to be exposed via the UI like it is in TrueNAS. I can definitely do it via the command line, but I’m starting to rethink this and may just run TrueNAS in a VM again. Worst case, I could always export the pool and import it into Houston later (I assume). Ultimately, I’m tired and it’s late; getting the hardware installed, IPMI licensed, BIOS updated, OS installed, and some basic playing around done is a pretty good stopping point for the night.
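For my own notes, the command-line version of the special vdev looks something like this. The pool name and device names are placeholders, and the special vdev should always be mirrored, since losing it loses the whole pool:

    # add a mirrored special vdev for metadata to an existing pool
    sudo zpool add tank special mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
    # optionally route small file blocks to it as well
    sudo zfs set special_small_blocks=64K tank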
Knowing you’re using this with Proxmox, you may want to consider multiple vdevs as a good compromise between streaming reads and IOPS. You could do a total of three vdevs, each in a 5-wide RAIDZ2 configuration. You’ll lose the capacity of three more drives compared to one large RAIDZ3, but it’s less than mirrored pairs and gives better IOPS. Have fun for a few days trying several configurations to see which gives you the performance for the workloads you’re looking at. It may even benefit you to have two zpools: one pool for Proxmox and the other for file sharing via Houston. That would allow mirrors for one and then RAIDZ2 or Z3 for the other.
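To put numbers on it: with 14TB drives, three 5-wide RAIDZ2 vdevs give roughly 3 x (5 - 2) x 14 = 126TB usable, versus (15 - 3) x 14 = 168TB for one big RAIDZ3. The create command for the three-vdev layout would look roughly like this (pool name is a placeholder, and the bay aliases assume the vdev_id.conf setup from your earlier post):

    # three 5-wide RAIDZ2 vdevs in one pool, using the bay aliases
    sudo zpool create -o ashift=12 tank \
      raidz2 1-1 1-2 1-3 1-4 1-5 \
      raidz2 1-6 1-7 1-8 1-9 1-10 \
      raidz2 1-11 1-12 1-13 1-14 1-15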
I shared this same sentiment when I tried to use Rocky and Houston. I often found myself dropping to the command line. I’m hopeful the upcoming updates will help with this.
Oh yeah, I forgot to mention that I’m moving a pair of NVMe drives from my old NAS to the riser card once things settle down. All VM disks will live on that pair, with maybe some on the main pool when performance isn’t a concern. The spinning disks are really meant to be general-purpose storage of all kinds. Thanks for the tip though!
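If it helps anyone, the rough plan for that pair looks like this (pool name, storage ID, and device names are all placeholders):

    # mirrored NVMe pool for VM disks
    sudo zpool create -o ashift=12 flash mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
    # register it with Proxmox as ZFS-backed VM storage
    pvesm add zfspool flash-vms --pool flash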