Help this rookie build his 1st server

I’m looking to get an HL15 with the following specs:

• X11SPH-nCTF
• Xeon Silver 4210
• 128GB
• Noctua fans

My primary goal is to get my family data out of the cloud.

I’ve never owned, built, or maintained a server before but have some decent technical chops. I’ve attached a diagram of what I’m wanting to do/run with my HL15. I’m looking to learn if I’ve got the infrastructure considerations properly assessed before moving forward.

Is my HL15 config overkill for this?
Does my layout make sense?

Can anyone advise on how I make the storage pools available to the virtual machines? I’m ultimately looking for any advice or documentation you would recommend to someone who is a server novice.

Thanks,

Ben

What is your backup plan? If it is not replication to another server, and your data is important, then I’d strongly suggest RAIDZ2 instead of Z1. There are numerous articles on this, but it mainly centers on the vulnerability to a second drive failure once one drive has failed, plus the stress that rebuilding the array puts on the remaining drives. Purchase a fifth drive if you need to in order to maintain 60TB of usable space…
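The layout difference is just the vdev type at pool creation time. A minimal command-line sketch (device names are placeholders; in TrueNAS you’d make the same choice through the UI):

```shell
# RAIDZ1 tolerates one drive failure; RAIDZ2 tolerates two.
# With five 20TB drives in RAIDZ2 you still get roughly 60TB usable.
# Device names below are placeholders -- substitute your own
# (/dev/disk/by-id/ paths are more robust than /dev/sdX names).
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DRIVE1 \
  /dev/disk/by-id/ata-DRIVE2 \
  /dev/disk/by-id/ata-DRIVE3 \
  /dev/disk/by-id/ata-DRIVE4 \
  /dev/disk/by-id/ata-DRIVE5
```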

I’m not a Proxmox expert, but you’ll create some sort of bridge connection for the VMs to see the TrueNAS instance without going over the physical network. They’ll still connect to the pool via SMB, iSCSI, or NFS depending on what share protocols you set up. If you want more direct access from the VMs to the ZFS file system, you’ll need to use Proxmox ZFS rather than TrueNAS ZFS for the data; there are pros and cons to each. Also note that to run TrueNAS in a VM, you’ll be passing either individual disks or the motherboard SAS and/or SATA controllers to that VM, so you will need some other storage (e.g., M.2 drive(s) on the motherboard) to store the VMs. I wouldn’t necessarily expect to see that called out on the diagram, but I’m pointing it out in case it’s not obvious. Proxmox OS and the VM image storage won’t be on the pool of 20TB Seagate drives as you have it architected, at least not efficiently.
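As a rough sketch of what that bridge looks like on the Proxmox side (interface names and addresses here are assumptions, yours will differ):

```shell
# /etc/network/interfaces on the Proxmox host (illustrative values).
# vmbr0 is the default Linux bridge Proxmox creates at install time;
# VMs attached to it (including a TrueNAS VM) share this segment, so
# guest-to-guest traffic never leaves the host.
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Inside another VM, mounting an NFS export from the TrueNAS VM
# (assuming TrueNAS answers at 192.168.1.20 and exports /mnt/tank/data):
# mount -t nfs 192.168.1.20:/mnt/tank/data /mnt/data
```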

Something seems off on your networking. You seem to show all the network traffic for the HL15 going through the Mac. My understanding from the diagram is that you want the Mac directly wired to one of the 10G ports on the HL15, and that you don’t have a 10G switch. But presumably to make use of some of your other VMs like PiHole and the IP cameras, the HL15 will be connected to the “Ethernet Switch”.

I wouldn’t use single-headed arrows unless you really intend to show uni-directional activity. Either use double heads or unheaded lines. For example, you show arrows pointing out from Frigate to the POE cameras, which is confusing.

Can’t really comment on “overkill” without knowing the planned workload. All that is really shown is one Mac doing video editing and some IP cameras. How many family members are using the DVR concurrently? Does it require transcoding? Are the IP cameras recording their streams 24x7 or only on event detection? Etc. I don’t see any Plex or A/V media library beyond the DVR, just still images, so I assume that’s not a requirement.

You might consider a separate mini PC running pfSense and pfBlockerNG instead of a PiHole VM. This would keep all your firewall/routing activity separate from the HL15. You can use OpenVPN on that to access your network from outside; I’m not sure how that compares to your intended use for Tailscale. The main potential issue is that when you start messing with the HL15 (rebooting, labbing, etc.), it’s potentially going to affect the family’s access to the internet if you are running firewall/routing services on it. Just something to consider, not a strong ‘do this’ or ‘don’t do this’.

2 Likes

It doesn’t seem like overkill to me! :grinning:

I agree with @DigitalGarden on looking at RAIDZ2 over RAIDZ1 for your pool. Z2 is pretty much the “default” way to go unless you know you need something different. You can add special VDEVs with NVMe storage later on if you find you need more performance out of the pool. If you haven’t already, check out some of 45Drives’ videos in this playlist for a deeper dive: https://youtube.com/playlist?list=PL0q6mglL88APuKz198JYWXgMCqAbxrkUT&si=PYWIJnk47Fg1igOR

Probably my biggest question: is there a reason you are looking to virtualize TrueNAS on Proxmox? Both are great solutions and I use both in my homelab. That said, virtualizing TrueNAS is definitely a more advanced task that involves dealing with IOMMU. It’s doable but can be challenging. A user @Goose had a post back in December that sounds like he had some success, but details are limited: Proxmox and TrueNas Scale passthough LSI Card

Looking at your diagram, I think everything you want to run on Proxmox except Channels DVR is available as an app in TrueNAS Scale via the official repo. It looks like the TrueCharts repo has Channels according to this post - Channels DVR server installation instructions for TrueNas Scale - Playground - Channels Community. Knowing this, I think you may want to consider running TrueNAS Scale bare metal on the HL15 with apps.

But assuming you want to move forward with both Proxmox and TrueNAS hyperconverged, I think two pools is the way to go. The full-build HL15 has two controllers handling the 15 drives: a SAS3008 and a SATA controller. I would pass the SAS3008 to TrueNAS so it has access to drives 9-15 and leave the SATA controller with Proxmox so it can access drives 1-8. This way you can build a pool for TrueNAS to use and manage, and then have a second ZFS pool (probably smaller or in a different layout) for Proxmox to use and manage for the VMs’/LXCs’ operating systems. You can still use NFS/SMB for mounts back to TrueNAS as needed.

Give that some thought and let us know! I’ll keep chewing on this to see if any other thoughts/ideas come to mind.

1 Like

Thank you both for your input. I’m a total rookie in this space, so I appreciate the guidance and thought you put into this. I have cleaned up the diagram a little based on @DigitalGarden’s feedback.

A few takeaways:

  • I have updated my diagram to reflect 5 drives in a RAIDZ2 configuration.

  • I only have a critical need to back up the photos/videos from Immich and would likely look for a solution to back that up locally.

  • I’ll use the m.2 drive on the motherboard to store Proxmox and the VM images.

@DigitalGarden you said you couldn’t comment without knowing the planned workload. Here’s an overview of what I’m wanting to achieve with my HL15:

Primary Users: me, wife, daughter

  1. Immich: a local photo/video server, so our phones can automatically upload newly taken photos/videos to the Immich storage allocated in the diagram. I would like to be able to view this library when not at home.

  2. Channels DVR: This aggregates/displays our movie and home video libraries as well as TV antenna channels and YouTube TV channels. Movies, home videos, and select client videos from my 25-year career as a video editor would be stored on the DVR storage allocated in the diagram. Currently I run this server on an old iMac, which does h264 hardware transcoding. I would like the HL15 to take over the hardware transcoding. Would the configuration I’ve listed in my original post be able to handle that, or would I need to purchase some type of GPU to toss in one of the PCIe slots? I would like to be able to view Channels DVR content when not at home.

  3. Frigate: We currently have our 9 Reolink POE security cameras recording/streaming 4K30 video + sound 24x7 to a Reolink NVR. We access the streams through the Reolink app. I’d like to remove the Reolink NVR/app and use the Frigate app to view the streams and record them to the allocated storage in the diagram. The NVR storage pool would record until it is full and then remove the oldest footage to accommodate new footage. Of course, we would like to be able to view the security cameras when not at home too.

  4. Video editing on a 10-gig connection: I have two Cat 6 cables run from a patch panel (which connects to my Ethernet switch) to my office, where I edit on a Mac using Final Cut Pro. I would like to connect the 10-gig Ethernet from the HL15 to my Mac using one of those Cat 6 cables. The connection on the Mac would be made using a Thunderbolt 4 based 10-gig Ethernet bridge, such as the OWC Thunderbolt 10G Ethernet Adapter. I would like to use the Data storage allocated in the diagram as storage for video footage, and I would like the editing experience not to be laggy.

  5. Backup Mac Computers: Our desktops/laptops in the house are Macs. I’d like to have each Mac (when home on the network) run nightly Time Machine backups to the Time Machine storage allocated in the diagram.

  6. Pi-Hole: I’m a rookie, but I’d like to see fewer ads and get Google out of my life. The thought is to have my router use the Pi-Hole to kill ads and avoid Google’s DNS. I have a Beelink mini PC that I currently use to run a Home Assistant server. I could just use that PC to run this if you think that is a better idea.

  7. Tailscale: So you’ll notice above that I want to access things like our photo library, live tv/video library, and security cameras with ease when not connected to the local network/away from home. I’m open to ideas here but I know Tailscale has an integration with Channels DVR so that is why I was thinking I would use it for my off network access.

  8. Honorable mentions/things to try in the future: In the future I’d like to either use some of the storage I currently have (or add storage) to give the family 4TB of cloud file storage. Think of this as a replacement for Microsoft OneDrive/Google Drive/etc. Obviously this would need to be able to be accessed when not connected to the network.

So that’s by and large what I’m looking to do. @rymandle05 you asked if there was a reason I was looking to virtualize TrueNAS on Proxmox. My answer is that I’m very new to all this, so what I put in the diagram is simply how it works in my head. If there is a better way to accomplish the things I’ve mentioned above, I’m totally open to not using Proxmox.

Based on what I’ve listed here, is my HL15 configuration overkill? I’m ultimately looking to make a purchase soon but getting clarity on how to set it all up would be a big confidence booster. I’ve got some decent technical chops but this is just uncharted territory for me.

Appreciate any guidance on this. Thank You!

1 Like
  1. The Intel Xeon Scalable processors do not have Intel Quick Sync, which is the technology that allows Intel CPUs to transcode efficiently. I tend to play all my media in native resolution/formats, not transcoded, but if multiple family members will be streaming concurrently along with the other things the HL15 is doing, then my understanding is you will probably want a GPU. It just has to be a cheap one like the Zotac GeForce GTX 1660 (Best GPUs for Plex Video Transcoding [2024] - GPU Republic).

  2. What network bandwidth do 9 IP cameras use at 4K30? From what I can see online it’s something like 4 Mbps to 12 Mbps per camera depending on the codec. So average that to 8 Mbps, and it’s something like 72 Mbps, 24x7. That’s not an insignificant amount of network and disk bandwidth. @rymandle05 might disagree, but if it were me, I would consider not just making that a separate ZFS dataset, but actually dedicating physical disks specifically to this task. They could be smaller (cheaper) disks depending on the number of days to be stored (for example, a pair of 10TB disks in a mirror). This would also minimize the chance that the security recording would impact scrubbing through your video files or cause DVR playback to stutter.
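The back-of-the-envelope math there is worth writing out, since it drives the disk sizing (the 8 Mbps average is an assumption; Reolink’s actual bitrate depends on codec and settings):

```python
# Rough sizing for 24x7 camera recording, using an assumed average bitrate.
cameras = 9
avg_mbps = 8                                # assumed per-camera average, Mbps

total_mbps = cameras * avg_mbps             # aggregate network/disk bandwidth
mb_per_sec = total_mbps / 8                 # megabytes written per second
gb_per_day = mb_per_sec * 86_400 / 1_000    # gigabytes written per day

print(total_mbps)                           # 72 Mbps sustained
print(round(gb_per_day))                    # ~778 GB/day

# Days of retention on a hypothetical pair of 10TB disks in a mirror
# (treating usable space as a round 10,000 GB):
print(round(10_000 / gb_per_day, 1))        # ~12.9 days
```

So at these assumed bitrates a 10TB mirror buys roughly two weeks of continuous footage before the oldest recordings roll off.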

  3. So you do have a 10Gb switch? It may not affect the answers to your questions about the HL15, so I apologize if I’m focusing on something tangential or out of scope, but it might help to know a little more about the networking gear you have. It doesn’t have to be a fancy picture, just a bit more about how the cameras are connected, etc. Do the cameras go to a POE switch that has an uplink to another switch for the rest of the house? What I’m trying to get at is back to bandwidth and “overkill”/underpowered: if you have one 10Gb port on the HL15 dedicated to the video editing Mac, is the other 10Gb port sufficient to support the rest of the household activity, and will it be connected to a switch that supports 10Gb? Because you explicitly called out the TB adapter for a 10Gb connection from the video editing machine to the HL15, I was assuming that the “Ethernet Switch” in the graphic was only a 1Gb switch.

  4. There are any number of tutorials on setting up TrueNAS to work with Macs and Time Machine. I think it’s pretty simple. One thing you need to do when setting up the SMB share is enable the “Apple Protocol Extensions”.
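Under the hood, those Apple extensions map to Samba’s vfs_fruit module. A plain smb.conf equivalent looks roughly like this (share name and dataset path are placeholders; in TrueNAS you would just pick the Time Machine options in the SMB share settings rather than editing this by hand):

```ini
; smb.conf fragment (illustrative only)
[timemachine]
    path = /mnt/tank/timemachine        ; placeholder dataset path
    vfs objects = fruit streams_xattr
    fruit:time machine = yes
    fruit:time machine max size = 2T    ; optional cap on backup growth
```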

  5. Your router doesn’t really “use the Pi-Hole”. When you set up Pi-Hole, it becomes the DNS server (and optionally the DHCP server) for your network. So if it goes down (e.g., the HL15 is turned off), devices on your network will have a harder time communicating with each other and won’t be able to use the internet (easily). Maybe with the HL15 as the hub of everything that’s less of an issue for you, but I like being able to reboot my HL15 without worrying that I’m going to kick someone in my house off of the internet. Other people certainly run Pi-Hole this way, but in my mind a firewall/router device should be separate from a compute and/or storage device.

  6. Nextcloud should do this.

Given your descriptions so far, I agree with @rymandle05’s comments about Proxmox and TrueNAS. Both do similar things, but Proxmox’s focus is on VMs and containers, and its use as a storage appliance comes second. TrueNAS’s focus is on being a storage appliance, and its VM support isn’t as robust. It does have good preconfigured apps (containers), though, that include what you have listed (Immich, Frigate, Channels DVR, Tailscale, Pi-Hole). If you had said you need to set up some Windows virtual machines, or some Linux VMs for LLMs, or needed a multi-node high-availability setup that automatically moved VMs between nodes, then that would be different, but I’m not hearing any of that. People certainly do virtualize TrueNAS, but it is because they have more advanced requirements for virtualizing custom workloads. It’s not always easy, and the more layers you add, the more opportunity for data loss or corruption. Virtualizing TrueNAS under Proxmox adds nothing unless you start with a need for some feature(s) of Proxmox, and so far none seem to have been expressed in your requirements.

I don’t think your specs are overkill. Upgrading to the Xeon 4210 is good. Most of your 128GB of RAM probably won’t be actively in use by users/applications all the time, but ZFS loves RAM for caching, so it sure doesn’t hurt. I would argue that someone could build something more powerful than an X11SPH/Xeon 4210 system in an HL15 chassis for cheaper than $1750 (that’s the route I went), but I think you want hardware that works out of the box and that’s fine.

Good to know! If you had expressed a desire to learn Proxmox, or some other reason to virtualize, I wasn’t going to spend any more effort suggesting alternative paths. As it stands, I think TrueNAS Scale on bare metal is a better starting point. The good news is that if it doesn’t work out, you can change course and import the ZFS pool into a virtualized TrueNAS VM later.

Here’s a few more tips on top of what @DigitalGarden provided:

  • Keep the TrueNAS disks either on the SATA controller (drives 1-8) or the SAS controller (drives 9-15). This is so you can (hopefully) split the drives between the TrueNAS VM and Proxmox down the road. You won’t be able to use IOMMU to pass individual drives to TrueNAS in Proxmox; it’ll need to be the entire controller.
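For illustration, whole-controller passthrough in Proxmox looks roughly like this (the PCI address and VM ID are placeholders you’d look up on your own system):

```shell
# Find the SAS controller's PCI address (look for the SAS3008/LSI entry):
lspci -nn | grep -i "SAS3008\|LSI"

# Pass the whole controller through to the TrueNAS VM (VM ID 100 and
# address 0000:01:00.0 are placeholders). IOMMU must already be enabled
# in the BIOS and on the kernel command line (intel_iommu=on for this
# Intel board) before this will work.
qm set 100 --hostpci0 0000:01:00.0
```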

  • With 9 cameras at 4K30, consider putting them on a separate network. The HL15 has two NICs, so you can split networks between the two either physically or via VLANs.

  • If you want to use the 1TB NVMe for something else, I am using a pair of SATADOMs for my boot drives. You can find them in various sizes on Amazon or eBay. They take power directly from the motherboard, which is nice.

  • A GPU for transcoding runs into some of the same challenges/limitations as virtualizing TrueNAS if you want to share that GPU across multiple apps. There are ways to do it; Craft Computing has covered some of these methods - https://www.youtube.com/channel/UCp3yVOm6A55nx65STpm3tXQ. Otherwise, containers can share GPU resources more easily. TrueNAS Scale is already set up for this for both NVIDIA and Intel Arc GPUs.

1 Like

Given the replies that @rymandle05 and @DigitalGarden provided, I was thinking that RAM would be an issue for you. Did you consider this when designing your lab? ZFS and virtualization on the same server compete for RAM.

While I don’t have any specific views on Proxmox versus TrueNAS Scale, RAM will be a shared resource.

Proxmox, or any other virtualization host, wants to provide as much virtualization capacity as it can, using as much RAM as is available on the server.

ZFS wants to use as much RAM as possible to cache the most frequently used files.

Given what you are wanting to achieve, I believe you are going to hit a ceiling with 128 GB of RAM.

I believe an article, either on this forum or from Lawrence Systems or Level1Techs, cited how ZFS likes to use at least 50% of the RAM for its cache by default.

For my HL-15, I decided it would take over the NAS duties from TrueNAS. (I have a post sharing my lab details, including my previous TrueNAS hardware.) As my virtualization servers are separate, the default build from 45HomeLab (with the default Bronze CPU) met my needs, running Rocky Linux as the OS with Cockpit, ZFS, and other software modules. I increased the RAM to its max configuration (8 x 64 GB DDR4, or 512 GB). With my 3 Proxmox nodes connecting to the HL-15 over the 10GbE connection for backups and archives, the HL-15’s RAM usage averages near 270 GB, 24x7. It peaks higher and lower at times, but the average over the last 16 days is 270 GB.

My concern for you is that ZFS could take up 64 GB of your RAM, but the virtualization guests will want that RAM as well for their services, or you will be unable to provision more guests.

As I reviewed your second image of your homelab (which is a great image), I am estimating that Channels DVR and Frigate are going to consume a good portion of the HL-15 resources.

The other concern is that your Tailscale/Pi-Hole could impact your home network during any HL-15 maintenance window that involves a reboot, etc.

I use Tailscale and Pi-Hole, but not within virtual guests:

  • Tailscale is configured on my Netgate/pfSense hardware.
  • Pi-Hole is running on a Raspberry Pi (Pi 4), but many of the features the application offers can be duplicated with a pfSense software package (pfBlockerNG).
    These are minor improvements (to make them independent of your virtualization solution).
1 Like

Kind of the opposite. ZFS would use up to 50% of RAM for ARC on older versions of TrueNAS Scale and Ubuntu. This impacted people who weren’t running VMs, since much of the other 50% would go unused. That has been changed in more recent versions to allow ZFS to use all the available RAM. Of course, this is just cache, so it will be given back as other processes need it.

The amount of RAM needed for ZFS really depends on the workload, and some workloads can get by on 16GB just fine. Another rule of thumb is 1GB of RAM per TB of disk space. Containers shouldn’t eat up RAM the way full VMs do.

Someone can always add RAM pretty easily.

Thank you for clarifying the base percentages.

I have seen posts overlook this topic as part of the planning.

When I transitioned my previous TrueNAS server (a very inexpensive Chenbro server with 64 GB RAM) to the HL-15, I did see a big improvement. Upgrading the memory to its max improved performance further. In my case it came at the cost of additional purchases.

My point goes back to ZFS and virtualization configuration. If you tune TrueNAS (or any other server using ZFS) to use less of the available RAM, or limit the amount of RAM available to VMs, there is a sacrifice either way.

1 Like

Agree, I don’t think more RAM can ever hurt if you can afford it. Part of RAM usage efficiency will depend on whether the applications listed are implemented as VMs (fixed RAM allocation) or containers (dynamic RAM usage with cap).

1 Like

This was exactly my thought process too. ZFS can work just fine with less RAM but will always benefit from having more. ZFS on Linux has traditionally defaulted to using up to 50% of RAM for ARC, with the ability to change that kernel parameter if needed. Interestingly, Proxmox as of 8.1 now sets the ARC usage limit to 10% of the installed physical memory, clamped to a maximum of 16 GiB. Even better, TrueNAS Scale Dragonfish updated the ZFS ARC memory allocation to behave identically to TrueNAS CORE, dynamically using and giving up RAM for ARC based on need. I believe this is in the latest version of ZFS, so I think this behavior will also be coming to Proxmox and other Linux distros as they adopt newer ZFS versions.
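If you do end up wanting to cap ARC manually on Proxmox or another ZFS-on-Linux host, it’s a single kernel module parameter; the 16 GiB value here (16 x 1024^3 bytes) is just an example:

```shell
# Cap ZFS ARC at 16 GiB (the value is in bytes; adjust to taste).
# Takes effect immediately on a running system:
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots:
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # Debian/Proxmox: rebuild initramfs so it applies at boot
```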

Anyway, the short of my opinion is that 128GB should be fine, knowing some tuning might be required; or go big with more RAM so tuning is less necessary. That’s assuming your wallet is OK with the more-RAM option. :money_mouth_face:

2 Likes