Hi! So I’m SOOOOO happy that I found out about this. I have a short rack, and the 17" depth form factor is perfect. I’m in the process of building out one server for virtualization and one for NAS. I’d prefer to have a dedicated NAS on bare metal and, say, Proxmox on the other machine. With that being said, if I plan to use TrueNAS just for ZFS RAID, I have a few questions…
- Is the full build a bit overkill? That is, could I get away with something cheaper that I install myself and still get the support I need, since I won’t be running VMs/containers on this CPU?
- Would the full build (or my own build) also support JBOD? And additional rack servers, if I can find one that fits my enclosure (only 500 mm deep)?
- If I were to choose the chassis option, I’m confused about the cable selection. I’m used to building desktop machines rather than servers, so I’m not sure what cables I would need. I plan to use IronWolf SATA drives.
- If I ran Proxmox on bare metal, would the full build handle 4K transcodes, or would it be better to get the chassis?
If I can, I was thinking of finding a lower-TDP Intel chip. If I did decide to run a VM on it, the only thing might be Plex, so I’d get a CPU with Quick Sync in case I need transcode support.
Otherwise, I’m looking to build out a full Proxmox server in “potentially” another HL15 (just trying to determine whether I want all the drive support, since it would be kind of wasted), unless I could do a JBOD with a bunch of the disks… the VMs themselves I’d store on the NAS.
I could be approaching this in a “REALLY” bad way, as I’m new to building out the server side of things and am used to just having a desktop Synology. My needs are growing beyond that, though.
Any info to help me figure things out as I work out what I need would be appreciated.
Let me share some detail, as I am using the fully built HL-15 as the storage server for my three-node Proxmox cluster.
- I do not think it was overkill, as I got a system that was fully tested with the software already installed. Once I received it, I only had to make the necessary configuration changes for my ZFS pool, file shares (via SMB and NFS), and other minor things.
- JBOD support? It sounds like you would not want the fully built route. With the fully built unit, the HL-15 has a backplane connected to the motherboard that came with it. If you build your own, then making JBOD work really depends on the parts you pick.
- The difference between the 1st option (the base unit) and the 2nd option (data cables and backplane) is the power supply unit (PSU) and the cables needed to connect components to it. The backplane has the SATA/power connectors for the drives.
- From what I understand, 4K transcodes need a GPU to do the work, and the fully built unit has no GPU. Its motherboard only has basic graphics via the ASPEED AST2500 BMC port(s). I don’t know whether you are looking at the SFP+ or RJ45 board, but both use the same graphics chip, and it does not seem to give good transcoding results.
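For what it’s worth, the post-delivery configuration I mentioned (ZFS pool plus SMB/NFS shares) looks roughly like this from the command line. This is a hedged sketch only: the pool name, device names, and share path are made-up placeholders, and the Samba service name varies by distro.

```shell
# Create a RAIDZ2 pool from the installed drives (device names are examples;
# use /dev/disk/by-id/ paths on a real system so names survive reboots)
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# One dataset per share keeps snapshots and quotas independent
zfs create tank/media
zfs set sharenfs=on tank/media   # export the dataset over NFS

# SMB via Samba: append a share stanza, then reload the service
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /tank/media
   read only = no
EOF
systemctl reload smb   # service may be named smbd on Debian/Ubuntu
```

The Houston web UI on the fully built unit wraps most of these steps, so the CLI is only needed if you prefer doing it by hand.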
To your 4th bullet point: the CPU has only 6 cores (12 threads). If you load the unit up with many drives to build a large storage pool, the RAM in the unit is going to be fought over by the virtual guests and the storage: a ZFS pool likes to use RAM as its read cache.
I think you would get better performance by using a separate Proxmox server, so the virtual guests get the best CPU, GPU, and RAM, while the HL-15 focuses on being a storage server.
If you run Proxmox on the unit, you are going to be very limited, as Proxmox prefers to use the physical disks for its own storage.
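As a side note on that RAM battle: on Linux, OpenZFS lets you cap the size of that cache (the ARC) so the guests keep the rest. A minimal sketch; the 8 GiB figure is just an example, not a recommendation:

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes) so VMs keep the remaining RAM.
# The modprobe setting takes effect at module load, so reboot after changing it.
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Or apply immediately on a running system:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```

Without a cap, the ARC will happily grow to roughly half of RAM by default and shrink under memory pressure, which can still cause stutter for memory-hungry VMs.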
Hope this helps you.
I did not need the HL15 pre-built, as I was putting some of my current NAS hardware in the chassis. I am using some older X99 hardware in my HL15 with TrueNAS, as that is perfectly fine for my NAS build.
If you want to look into running Plex on the HL15, I would look at this post, which has had quite a bit of discussion on exactly that.
IMO, the HL15 CAN be used as a Plex server if you are only planning on yourself and maybe a family member or two watching at a time. If you are slotting a GPU into the server and passing it through to Plex, then maybe, but the CPU that comes in the pre-built is not ideal for Plex.
The chassis is 20 inches deep. The minimum depth for the adjustable rails they sell is something like 26 inches.
Oh man, I totally misread that. It was getting late and I just wasn’t seeing it straight. Thanks for pointing it out.
Now, with the chassis being 20" and the rack I’m looking at having a max equipment depth of 21", I’m wondering if that is too close for this to work or not… it’s a Sysracks 22U 24"-deep cabinet.
Thanks for all the info here. Basically, I was debating two units. The first: chassis/PSU only, to build out a Proxmox server with a Quick Sync CPU, ECC, etc. Something like the Intel® Core™ i7-1375PRE processor, maybe. For this unit I would dedicate some of the drives to Proxmox, so all the VMs would run there. The rest of the drives I would just expose as a JBOD so I have “work space” for stuff that doesn’t need RAID, like the raw rips of my DVDs while transcoding is done from a VM on Proxmox. This unit would host the Plex instance, with the library data pointing to the 2nd unit, which would be the TrueNAS box.
For the 2nd unit, I was thinking of the full build option with the OS replaced by TrueNAS SCALE. I’d set it up and use it as a dedicated NAS with no additional VMs or anything, though I “could” potentially run some smaller containers. Since I’ll have a Raspberry Pi k3s cluster as well, this would also serve as the persistent volume store for that.
Now my only concern is the back of the box: my rack is only 24" deep, the label says max 21" equipment depth, and I’m not sure how realistic that is to manage.
I would SFP this into my switch (Ubiquiti 48 Pro). Same for the Proxmox box.
Does that make sense? For a JBOD, if my motherboard has plenty of SATA ports, would I still need an HBA?
Lots of fun new things to learn!
Thank you for sharing more detail.
I would check the motherboard to see whether the chipset already provides HBA functionality. In the fully built system, the HL-15 uses two motherboard chips to manage the drives (from a previous post):
The backplane has 4 sets of 4 drives, each set connected with 1 Molex for power and 1 HBA cable for data. The reason we switched to this design was performance: our older design had a multiplexer on the backplane, which allowed 5 drives to connect to 1 HBA cable.
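To answer the “do I need an HBA” question for a specific board, you can simply ask the kernel what storage controllers it already sees. A quick sketch (output will of course vary by machine):

```shell
# List SATA/SAS/RAID controllers the kernel detected on the PCI bus
lspci | grep -iE 'sata|sas|raid'

# Show each disk, its size, model, and which transport it hangs off (sata, sas, nvme)
lsblk -o NAME,SIZE,MODEL,TRAN
```

If the chipset’s onboard SATA ports cover the number of drives you plan to run, a JBOD needs no extra HBA; the card only becomes necessary when you run out of ports or want to feed a backplane like the HL-15’s.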
When it comes to JBOD for your needs, it really depends on what file system you will use.
For myself, I ruled out JBOD because I wanted a storage format that provides redundancy. The redundancy lets me remove or replace a drive, etc.
Yeah, my principal “server” would be a RAID array. I was just thinking that if I ran a 2nd HL-15 for Proxmox, I wouldn’t necessarily need 15 drives, since I could use some drives just for storing temporary stuff that is not critical and can be lost; thus JBOD. If I could pick up, say, 10 cheap 2 TB drives, that would give me 20 TB of staging or scratch-pad space.
So it will really come down to how well this fits in my server rack, since the “usable” space maxes out at 21". My rack, for reference: Sysracks 22U.
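The capacity trade-off between JBOD and a redundant layout is easy to pencil out. A quick sketch for the 10 × 2 TB example (raw capacity only, ignoring filesystem overhead; the RAIDZ2 line is just for comparison):

```shell
drives=10
size_tb=2

# JBOD / plain stripe: every terabyte is usable, but nothing survives a drive failure
jbod_tb=$((drives * size_tb))

# RAIDZ2 for comparison: two drives' worth of capacity goes to parity
raidz2_tb=$(( (drives - 2) * size_tb ))

echo "JBOD: ${jbod_tb} TB, RAIDZ2: ${raidz2_tb} TB"
```

So the “scratch pad” use case keeps all 20 TB, at the cost of losing whatever was on a drive when it dies, which is exactly the trade being described.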
Before I had the HL-15, I was in a similar situation: I wanted to create a true Proxmox cluster.
As I searched eBay for used servers, I looked for ones with multiple sockets and many cores. I found a Supermicro chassis supporting 8 drives (model SYS-6027R-3RF4+, for about $300). The chassis included a motherboard supporting 768 GB of RAM (or 1.5 TB) and two 12-core CPUs.
With a few more upgrades, I decided to use this as a Proxmox node, configuring the drives as a ZFS pool. I currently have about 20 VMs running on this node using the pool.
The HL-15 product has made me think more about the storage server side.
Currently, I have moved my 12 drives from a slower server running TrueNAS to the HL-15 (using Rocky Linux and the Houston web UI) as the ZFS pool. I decided to stick with the preinstalled software because I did not use many of the other TrueNAS CORE features. I never tried TrueNAS SCALE.
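For anyone moving drives between boxes the way I did, ZFS makes the pool migration itself fairly painless. A sketch, with `tank` as a placeholder pool name:

```shell
# On the old server: cleanly export the pool before pulling the drives
zpool export tank

# On the HL-15, after installing the drives:
zpool import          # with no argument, lists pools found on attached disks
zpool import tank     # import the pool by name
zpool status tank     # verify all 12 drives came along and the pool is ONLINE
```

ZFS identifies the member disks by their on-disk labels, not by controller or slot, so the drives do not need to land in the same order they were in before.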
For future HL-15 upgrades, I wanted to create a couple more pools:
- 1 pool using the NVMe carrier card (I ordered 4 NVMe drives)
- 1 pool of SSDs using the 3D-printed bracket
The SSD pool will require another HBA pci card and a different power supply.
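Creating those flash pools is the same `zpool create` command with different vdev layouts. A sketch with placeholder device names; striped mirrors are just one reasonable choice for four NVMe drives, not the only one:

```shell
# NVMe pool: two striped mirrors across the 4 carrier-card drives
zpool create fast \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1

# SSD pool on the extra HBA, once the card and the new PSU are in
zpool create ssdpool mirror /dev/sda /dev/sdb
```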
I would love to read more as you build out the HL-15 within your homelab.
Do you have that rack, or are you still looking for one? If you have it, you can get a tape measure and do some actual measuring.
I’m not a rack expert, but I think 600mm racks are mainly intended for A/V equipment, not computers. When they say a max of 21 inches I think that is giving you 3 inches of space behind the equipment for wires and airflow. Are you intending to mount a switch or patch panel in the rack? That may be awkward if the posts for the rack nuts are set for 600mm. You’d need to source your own sliding rails or use L brackets or something. I’m not sure how standardized rail kits are.
I’m not sure you’d find another rack-mount 3.5-inch NAS enclosure much less than 20 inches deep, though. Non-rack-mount, sure, or compute nodes, sure, but there are only so many ways you can tetris a standard mobo, HDDs, and a PSU.
I don’t have it yet. The rack’s spec says 530 mm of usable depth, which is right about 21 inches, leaving only about an inch behind the 20-inch unit. I’m not sure what issues I might run into on the back side with the SFP port, power, etc., if needed. Thus I’m trying to get an idea of whether it will “really” fit or not.
1 inch isn’t really a lot of space for cables with strain relief. You’ll definitely need a right-angle power cord. You might be able to get an RJ45 to bend within an inch, but it’s not ideal; they do make right-angle RJ45 connectors. HDMI, VGA, and USB would all likely be a tight fit as well. I suspect you’ll find you need to leave the back panel off for an extra inch or two of cable management.