I have been interested in 45Drives for a while and, more recently, 45HomeLab.
I have had various RAID systems over the years and am currently using an aging SansDigital 8-bay enclosure with ArcSAP RAID on a PCIe card.
The system boots from 4 SSDs in RAID 0 from the motherboard RAID.
This is running on an old Dell Precision T3500 that cannot be upgraded to Windows 11, and Windows 10 support ends a year from now.
While I was going to wait, this system is becoming increasingly unstable, and even RAID 6 can fail too easily.
My interest is in a 45HomeLab with 15 drives under ZFS.
My question is, will ZFS survive changes in motherboards and operating systems?
For example, if I want to play around with different processors, ARM vs AMD/Intel, and/or other versions of Linux, will the ZFS on the drives survive the transition?
Also, I recall seeing an AMD EPYC option a while ago, but it’s no longer listed as an option. It would be even nicer if 45HomeLab offered a pre-built ARM option.
I think the answer depends on a few considerations: a) are you using ZFS for the boot partition, or just for a pool mounted after boot, and b) is the pool encrypted?
OpenZFS is fairly platform-independent with regard to OS and CPU, since it is mainly defined by on-disk structures. But OpenZFS, like all software, has versions, and it is backward compatible but not necessarily forward compatible. It does have “feature flags” embedded in the file system metadata, but of course earlier versions of the software won’t be able to deal with features (say, pool expansion) that were added by a later version of the software.
If your intent is to mess about with different motherboards, OSs, and CPUs, it would be best to install the oldest stable Linux version you plan to run, with its ZFS version, and then not upgrade the pool features when a system tells you “new ZFS features are available”.
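One concrete way to enforce that, sketched here with made-up pool and device names: OpenZFS 2.1 and later have a `compatibility` pool property that pins a pool to an older feature set, so newer software won’t silently enable flags an older OS can’t import.

```shell
# Create a pool restricted to features an OpenZFS 2.1 system understands
# (pool name "tank" and device names are just examples).
zpool create -o compatibility=openzfs-2.1-linux tank raidz2 sda sdb sdc sdd

# See which feature flags an existing pool has enabled or active.
zpool get all tank | grep feature@

# "zpool upgrade tank" would enable the newer features -- a one-way door,
# so hold off while you are still swapping systems around.
```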
To move an OpenZFS pool between systems, you “export” it from the current one and then “import” it on the new one. Not the best names to me, since nothing is actually moving or copying data. Export mainly makes sure any open files and mount points get shut down cleanly and marks the disks for easy import on another system. You can still import pools on other systems if you forget to export, but it’s cleaner and safer if you export first.
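A minimal sketch of that sequence, assuming a pool named `tank`:

```shell
# On the old system: unmount datasets, flush state, and mark the
# disks as ready to be imported elsewhere.
zpool export tank

# On the new system: list pools found on the attached disks...
zpool import

# ...then import by name (or by numeric GUID if names clash).
zpool import tank

# If the old system died before you could export, force the import:
# zpool import -f tank
```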
If the question is referring to ZFS as the OS boot device, not all Linuxes will boot from ZFS, or the configuration is more complex, so that scenario for swapping stuff about would have more hurdles.
Finally, if you encrypt the pool, there are a few different encryption options, but you would want to be sure that you choose one that isn’t tied to the hardware and that you have an externally saved copy of the key file, passphrase, or whatever.
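For instance, a sketch using OpenZFS native encryption keyed to a passphrase rather than to any hardware (the dataset name is made up):

```shell
# Create an encrypted dataset; you are prompted for the passphrase.
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

# After importing the pool on another machine, unlock and mount it:
zfs load-key tank/secure
zfs mount tank/secure

# Confirm which datasets are locked or unlocked:
zfs get keystatus tank/secure
```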
Of course, RAID is not a backup, so you should always have some other backup of the pool, or of anything important on it, if you are tinkering this way, and you might want to try working with a test pool before swapping the main pool. A ZFS pool should be fairly resilient to these types of changes, but I’m sure they do more forward/upgrade testing than backward/downgrade testing, and you will come across the latter if you are swapping out different distros on different CPU architectures. A lot of distros have LiveCD/LiveKey boots, so you could test a new OS without overwriting the main OS already installed, as long as you weren’t changing CPU architectures.
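A cheap way to get such a test pool is to back it with sparse files instead of spare disks; a sketch (paths and sizes are arbitrary):

```shell
# Four 1 GB sparse files stand in for drives.
truncate -s 1G /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
zpool create testpool raidz1 /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4

# Practice the export/import dance (file-backed pools need -d on import).
zpool export testpool
zpool import -d /tmp testpool

# Tear it all down when finished.
zpool destroy testpool
rm /tmp/zd1 /tmp/zd2 /tmp/zd3 /tmp/zd4
```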
I have not used ZFS yet, but I have been aware of it since Sun invented it.
I am not considering ZFS for boot; it is just for archival and backing up other systems. For important files, I use Google Drive.
For example, I do my Time Machine backups on my current RAID via Samba, but still keep important files on Google Drive. I also keep a lot of media, such as music, photos, and videos, in my archive.
My thought was to invest in a low-cost 45HomeLab initially but be able to upgrade the system over time while maintaining the integrity of my ZFS archive.
I currently have 13 TB used on my 20 TB RAID, so I would initially build something I could copy that to. Over time, I could replace the disks with larger ones and then expand the archive.
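From what I’ve read, that replace-and-grow path looks roughly like this (pool and device names are made up):

```shell
# Let the pool grow automatically once all members are bigger.
zpool set autoexpand=on tank

# Swap one drive at a time for a larger one and wait for the resilver.
zpool replace tank sda sde
zpool status tank   # repeat for each drive once resilvering completes

# The extra capacity appears after every drive in the VDEV is replaced.
```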
If I want to host other services, I might want to upgrade the system board while maintaining the ZFS archive with minimal effort. For example, a build server for software development.
As a software developer, I have gotten my hands dirty with both software and hardware, and I will if I have to, but I prefer to buy things as preassembled and pre-tested as possible, with minimal maintenance.
I was thinking of starting with TrueNAS SCALE, but I’m open to other suggestions.
There should be minimal effort in most upgrade or sidegrade paths. There is no separate RAID card like your current system, all the “RAID” is done in software. The only requirements for a migration would be that a) the new OS, if there is one, supports the feature flags being used by the pool being imported, and b) (specifically talking about the HL15) the new hardware, if there is any, supports the physical cables connecting the main system to the HDD backplane. That probably means the new hardware has a PCIe slot for the same HBA, but you could have other setups. You don’t have to use the same HBA, just whatever cabling so that the new system sees the drives.
The process of adding drives/space to a ZFS pool, even with the recent addition of RAIDZ expansion, needs a bit of planning. You should probably read a bit about VDEVs and VDEV layouts. Basically, the number of drives in a VDEV, and the number of VDEVs in the pool, will determine some characteristics of pool performance and of how much data needs to be read from other drives during a rebuild to recover from a drive failure. The typical recommendation for VDEV size is 8-10 disks. It is typically better, if possible, to start with, say, an 8-drive VDEV and then add a 7-drive VDEV, rather than slowly grow a single VDEV to 9, 10, 11, 12, … drives.
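A sketch of that growth pattern, with made-up pool and device names:

```shell
# Start with a single 8-drive RAIDZ2 VDEV.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Later, grow the pool by adding a second VDEV instead of widening the first.
zpool add tank raidz2 sdi sdj sdk sdl sdm sdn sdo

# RAIDZ expansion (OpenZFS 2.3+) can widen an existing VDEV one drive at
# a time, but the reshuffle is slow and old data keeps its old parity ratio:
# zpool attach tank raidz2-0 sdp
```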
TrueNAS, specifically the current version of TrueNAS SCALE is definitely a solid option for a storage appliance. It depends a bit on your use case, though. Other options you might see discussed are Proxmox, Unraid, or Houston/Cockpit. They all have similar features but emphasize different aspects.
TrueNAS focuses on easy management of the storage. You can run containers and VMs, but management for those is less robust in comparison.
Proxmox focuses on being a virtualization server, with many features for the easy management and migration of VMs. It allows for the creation of ZFS pools on nodes, but monitoring and management of those pools isn’t as robust.
Unraid doesn’t use ZFS (or in newer versions can, but not in the way you might think). It also has a robust container and VM environment. Its main feature is that it allows for growing the pool slowly, a drive at a time, and with drives of different sizes. The downside is that throughput won’t be as good as with ZFS, and it has a license cost tied to a USB key.
Houston/Cockpit is the default GUI management environment for the 45HL full build systems that allows for some amount of ZFS, container, and VM management and monitoring, although one could install and run Cockpit on a custom build. It sounds like you aren’t considering the fully built HL15 though.
Some people combine Proxmox and TrueNAS; they load Proxmox as the bare metal OS and then run TrueNAS in a VM, trying to combine the best virtualization features of Proxmox with the best ZFS features of TrueNAS. This can be tricky though.
Do you expect to need a Windows VM to be running on the new setup?
I have TrueNAS Scale for “home production” data, containers and a Windows VM (HL15 custom build), and a separate Proxmox instance for “home lab” virtualization experimentation.
I spent some time on YouTube looking at various things.
I think I will go with TrueNAS Scale, which will serve SMB and AFP and host Plex and Home Assistant via Docker.
I don’t need full virtualization such as Proxmox, and I don’t need to run Windows, as I can run that elsewhere. I mostly use Windows to run Word and other Office apps, as well as Steam. I have a Microsoft Volterra and can also run Windows on my MacBook via Parallels. I want to do transcoding with Plex, so I need the proper hardware.
I am pretty invested in Western Digital RED disks running at 5640 RPM, so I can repurpose some of those.
If I need to improve write performance, I can run some fast ZFS log devices, such as NVME.
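From what I’ve read, that would be a one-liner, with the caveat that a SLOG only speeds up synchronous writes (NFS, databases, some VM workloads), not bulk async copies. Pool and device names here are made up:

```shell
# Add a mirrored pair of NVMe devices as the ZFS intent log (SLOG).
zpool add tank log mirror nvme0n1 nvme1n1

# Watch per-device activity to see where sync writes land.
zpool iostat -v tank
```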
The latest version of TrueNAS supports RAIDZ (VDEV) expansion, which can make evolving the HomeLab system easier.
Now the question is to go with the stock Xeon from 45HomeLab or opt for an AMD system because I prefer AMD.
There is no point in considering ARM yet… but that geerlingguy seems to have it running…
Does that sound reasonable, or am I missing anything?
By “AMD” do you mean Ryzen or Epyc? The good points about the 45HL full build are that it has a server motherboard (X11SPH) that provides IPMI remote management, a built-in HBA, a 2x10G NIC, and support for up to 2TB of ECC RAM. So you do get quite a lot built into the board that would otherwise need additional PCIe cards, or that just might not be obtainable on consumer Intel Core/AMD Ryzen platforms. But you may not need some of those features. A downside to me is that the socket (LGA3647) is some three generations/7 years old now, and the Xeon 3204 has pretty much the poorest performance for that socket. Paying US$1000 for that (US$2100 minus $1100 for the case+PSU), even if it might meet certain storage-centric workloads, just seems overpriced. If I were to go that path, I’d at least upgrade to the Xeon 4210 or 4216 CPU options. But then you have to start asking about price/performance and performance/watt compared to more recent CPUs.
The trick with custom builds seems to be PCIe lanes. If you run Plex, then you probably want either an Intel Core CPU with QuickSync or a GPU; you will almost certainly need an HBA, and you may need a NIC card if you need faster than 2.5G networking. With the GPU in the PCIe x16 slot, not all consumer motherboards give you many other slots. HBAs typically have x8 edge connectors and run most comfortably at that bandwidth. Some will negotiate down to an electrical x4 or x2 slot, but at reduced overall throughput (which might matter less if you only have 5400 RPM drives). You typically need a higher-end consumer board with a second electrical x8 PCIe slot and the ability to “bifurcate” the x16 slot as x8/x8 in order to run both a GPU and an HBA. This won’t be an issue for Epyc, and Xeons also have something like 40 PCIe lanes, but for AMD Ryzen or Intel Core you’ll need a “Pro” or “Creator” board or such.
Brand new HBAs are typically pretty expensive, so for a custom build you probably need to be OK with buying an HBA such as an LSI 9305-16i secondhand off eBay, Amazon, or such.
My main HL15 has an ASUS ProArt B650 Creator motherboard and Ryzen 9 7900 CPU. It has a 4070 TI Super, an LSI 9300-16i and a 2x 10G NIC in the PCIe slots. Just for reference; others here have better and more complex builds.
For the mobo, CPU, HBA, and NIC, I paid about $800 total. I don’t have IPMI or ECC RAM, but, based on Passmark, I have a system with roughly 10x the performance of the $1000 X11SPH + Xeon 3204 combination.
One other thing to consider is the fans and fan noise, if you have not already seen posts about that here or in other reviews. By default the case fans are connected, effectively, directly to the PSU as 2-pin fans and not controlled via PWM or voltage, so they spin at 100% RPM all the time. The CoolerGuys fans that ship with the HL15 chassis do support voltage (DC) regulation, though. So as long as your motherboard has 3-pin DC fan headers, you should be able to tame the included chassis fans to reasonable noise levels, which you will need to do if the system is going to be in a room with you.
45HL does offer an upgrade kit in the configurator that replaces the CoolerGuys fans with Noctua fans. This is almost required for anyone selecting the full build, since the X11SPH motherboard only supports PWM fans. If you like Noctua, it might be worth purchasing the option even for a custom build; it didn’t seem expensive versus buying the fans and other parts separately, but as I said, it isn’t needed as badly if you have a motherboard with DC fan support.
This is what I have; I just rewired the CoolerGuys fans to my motherboard headers with some 1-to-3 splitter cables and didn’t splurge on the full Noctua fan replacement. There are threads on all of that I can link to if needed.
Yes, that looks like a good product. I would lean more toward the ProArt X870E-CREATOR WIFI, but it does not have the IPMI. However, you seem to do without it.
Jeff Geerling recommends the Ampere Altra Bundle ALTRAD8UD-1L2T, but I worry about trying to get TrueNAS and friends running on ARM. Jeff is more of a nerd than I am, and TrueNAS seems to have no interest in ARM.
I would go with the Noctua fans, as I would keep the server beside my desktop. And yes, I must be mindful of the other fan/cooling considerations.
As much as I prefer ARM, there are arguments to be made for having an X86 system around for legacy reasons.
Regarding fantasy builds, I would love to go with something like the Gigabyte ME03-CE0, as TrueNAS seems to like RAM.
Years ago, I built a workstation with dual Xeon processors and 192 GB of RAM based on a bleeding edge Intel reference mobo. I learned a lot, but keeping it running was a constant hassle.
His physical build didn’t seem too bad, but yes, I think you are going to have to build many of your additional packages from source rather than just grabbing an executable. Probably less of an issue for a software dev who likes chasing cryptic build messages. There seem to be a number of ARM Linux distros around now, but he seemed to get stuck with his original choice of Rocky Linux.
I don’t know what the current status of that effort is, but you would probably find yourself managing ZFS and the containers without a GUI. Which is perfectly fine for some people.
The HL15 officially supports only ATX boards (9.6 inches deep), so be careful if choosing a “deep ATX” board. There is an additional inch or so in the HL15 before you hit the fans, but depending on the board layout, any headers at the front of the board (power, SATA, USB, etc.) may be blocked by the fans on deeper-than-ATX boards. That Gigabyte board’s layout looks compatible; just a caution.
If that is your fantasy board, though, I guess I’m confused about your use case, which I thought was “[Storage server for 20TB available], which will serve SMB and AFP and host Plex and Home Assistant via Docker. […] I don’t need full virtualization.”
Seems to be overkill for that use case.
Not TrueNAS, ZFS. But that’s a complicated topic about how much RAM you really need for ZFS; it really depends on your workload.
That is why I used the word fantasy. Wants and needs are two different things. Think Tim the Toolman Taylor… I want overkill; I need less…
While I am retired, I still do recreational programming, and it would be nice to play around and experiment on my NAS server.
While I am dreaming, the Tomcat HX S8050 (S8050GM4NE-2T) seems like a nice ride… According to other forum posts, the HL15 can seat a CEB form factor board of 12" x 10.5".