My HL-15 within my HomeLab

This is a quick post to show my HL-15 installed within my rack.

Just as Jeff Geerling has a shirt stating that he cosplays as a sysadmin, I am one who does not take the time to cable manage.

After the ZFS data was moved to the HL-15, I updated the photo on 2023-12-01T00:15:00Z. I replaced many of the Ethernet cables with shorter (6") cables.

This rack was something I started during COVID (March 2020). Today the rack is almost full.

This coming Thanksgiving holiday, I am hoping to “tame” the cabling.

Currently the rack hosts:

  • 24-port keystone patch panel with Ethernet, HDMI, and USB jacks
  • Netgear unmanaged 24-port switch
  • Netgate SG-4860 - the 1U rack-mount model. The memory on this model is 8 GB.
  • Netgear unmanaged 24-port switch
  • Keystone patch panel with 24 Ethernet ports
  • Netgear unmanaged 24-port switch
  • Dell R420 - 384 GB RAM with 4 drives behind a Dell RAID card. 2 sockets yielding 16 total cores (32 threads for virtualization)
  • Supermicro X9DRW-7/iTPF - 1 TB RAM with 4 total drives running a ZFS RAIDZ2. The chassis is a Supermicro (I don’t have the model number). 2 sockets yielding 16 cores (32 threads for virtualization)
  • Supermicro X9DRi-LN4+/X9DR3-LN4+ - 768 GB RAM with 8 drives running a ZFS RAIDZ3. The chassis is a Supermicro (I don’t have the model number). 2 sockets yielding 16 cores (32 threads for virtualization)
    * Chenbro unit running TrueNAS with 10 ZFS drives (with 2 additional drives for a vdev) and a separate SSD to boot. This unit does not have a lot of memory or a powerful CPU, as it is just a storage unit.
  • HL-15 - I picked the stock model (16 GB RAM with SFP+ ports). I have since installed the 4-slot NVMe carrier card.

On the other side of the rack I have two MikroTik switches creating my 10 GbE backbone for the servers. Those models are the CRS309-1G-8S+IN and the CRS305-1G-4S+IN. As you read this post, you might notice a number of switches/routers; I use the unmanaged switches to separate network traffic.

Functionally, there is a 3-node Proxmox cluster comprised of the Dell R420 and the 2 Supermicro units. The cluster is connected to a TrueNAS box as a storage target (for ISOs, backups, and running VMs). The cluster provides 88 CPU threads, 2 TB of RAM, and 68.58 TiB of storage.

Not seen in the photo are the couple dozen Raspberry Pi 3B, 4, and 400 units, the two PiKVMs connected to them, and a Home Assistant Yellow.

There are items I have not decided whether to add to the rack or recycle. Compared to the existing rack units, these units draw more power without yielding much compute:

  • Apple Xserve - a 1U unit which supports 96 GB RAM and 3 SAS (or SATA) drives.
  • Dell SC24-SC - 1U units that were mostly deployed as Facebook servers in the early 2000s. I have two units, each supporting 2 CPUs (4 cores) and a maximum of 48 GB RAM.
  • Netgate XG-7100 - a 1U unit that needs its power supply replaced. It supports 24 GB RAM and an early version of NVMe. This was the model that required extra configuration to create discrete network devices.
  • Dell mid-tower unit circa 2008 - the unit looks like a desktop and supports 32 GB RAM. The Intel CPU has either 2 or 4 cores.

2023-11-16T16:31:00Z - I received a generic set of Intel-coded SFP+ cables. The stock cables are not that expensive (I paid $11 USD), and I do not mind the shipping costs.

This stock cable (with Intel-coded transceiver ends) does work with MikroTik switches too. I generally pay extra for cables whose transceivers are coded for the specific network card or switch/router chipset (and for a longer length).

2023-11-30T16:56:00Z - Updated my HL-15 with the 12 drives from my TrueNAS server.

Updated the photo as I purchased/updated the cabling with 6" Ethernet cables.


I was able to install the AOC-SHG3-4M2P 4-position M.2 NVMe carrier card. Within Houston, you will see the carrier under the system hardware information as:

Class: Mass storage controller
Model: SAS3008 PCI-Express Fusion-MPT SAS-3 (AOC-S3008L-L8e)
Vendor: Broadcom / LSI
Slot: 0000:19:00.0

While I did initially configure the BIOS to use 4x4x4x4 bifurcation per the Supermicro manual (for the card), I switched the setting back to AUTO, as the card was still recognized.

Within the Houston Storage Devices section, I can see the 16 TB and 20 TB drives and the 2 of the 4 carrier-card NVMe slots that are in use.
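For anyone wanting to confirm the carrier card's NVMe drives from the command line (rather than the Houston UI), this is the sort of check I'd run on a Rocky-based host. This is a sketch, not the official 45Drives procedure; nvme-cli may need to be installed first.

```shell
# Confirm the NVMe drives behind the carrier card enumerate on the PCIe bus
lspci | grep -i 'non-volatile'

# Show NVMe block devices alongside the SAS/SATA drives, with size and model
lsblk -d -o NAME,SIZE,MODEL,TRAN

# nvme-cli (dnf install nvme-cli) gives per-namespace details
nvme list
```

If only some of the four M.2 slots show up here, the bifurcation setting is the first thing I would revisit.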


My HL-15 is running the 12 drives from my ol’ TrueNAS Core server.

  • Chenbro NR12000 1U stats
    • Supports 12 drives via the motherboard HBA
    • Intel Xeon E3-1220 V2 @ 3.10 GHz (Ivy Bridge), circa Q2 2012
    • PCIe 3.0 (1x16 & 1x4, 2x8 & 1x4, or 1x8 & 3x4)

The TrueNAS Core setup was 10 4-TB Hitachi/HGST Ultrastar 7K4000 drives with 2 IronWolf Pro 128 SSDs running as cache disks (due to the RAM limitation).

I was able to successfully export the ZFS pool and import it on the HL-15. After the import, I noticed 1 of the SSDs was flagged as FAULTED. I am in the process of replacing that drive.
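For anyone following along, the export/import/replace cycle is roughly the following. The pool and device names below are placeholders, not my actual ones.

```shell
# On the old TrueNAS box: cleanly export the pool (pool name is an example)
zpool export tank

# On the HL-15: scan attached disks for importable pools, then import by name
zpool import          # with no arguments, lists pools found on the new controller
zpool import tank

# Check health; a FAULTED device shows up in this output
zpool status tank

# Swap the faulted SSD for the replacement (both device names are placeholders)
zpool replace tank old-faulted-ssd new-ssd
```

After the replace, `zpool status` shows the resilver progress until the new device is fully online.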

Houston on Rocky seems to be as straightforward as TrueNAS CORE.

NOTE: I only use the NFS and Samba features from TrueNAS Core.
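For anyone replicating just those two services on a plain Rocky host, the setup is short. This is a hedged sketch; the share path and subnet below are made up, and Houston's own share management may be preferable to editing files by hand.

```shell
# Install NFS and Samba (Rocky / Houston host)
sudo dnf install -y nfs-utils samba

# NFS: export a dataset to the local subnet (path and subnet are examples)
echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

# Samba: minimal share stanza appended to smb.conf (example share name/path)
sudo tee -a /etc/samba/smb.conf <<'EOF'
[media]
   path = /tank/media
   read only = no
EOF
sudo systemctl enable --now smb
```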

Here is the original pic of my rack before the Thanksgiving holiday:

For the remaining gaps in the rack, I have a few more accessories to install:

  • I have two rails ordered to properly mount the two Supermicro chassis.
  • I ordered a new 48-port keystone patch panel to help minimize the cable runs and give better access to USB. For example, I can connect my Netgate’s console port to a keystone and hide the cabling from the keystone to the server.
  • I have a couple NavePoint 4U drawers that will hold spare drives, rack studs, etc.

The inside of my rack looks like the outside of your rack. But still, cool build!


I do not doubt that.

I have had this mess of cables there for 3 to 3.5 years. I decided to clean it up during my Thanksgiving break. I ordered colored 6” cables to replace the 1-, 2-, and 3-ft cables I was using.

I have a 24-port patch panel above the rack. I connected the wired Ethernet within my home to the topmost 24-port keystone.

I ordered a 2U 48-port keystone panel to further improve the cabling within the rack to the unmanaged switches.


Updated my HL-15 to use 8 64-GB LRDIMM RAM sticks.
I tend to use a vendor that gives a lifetime warranty (and it is a vendor I have used for many years now). The item number is MEM-DR464L-CL01-LR26 - Supermicro 1x 64GB DDR4-2666 LRDIMM PC4-21300V-L Quad Rank x4 Replacement.

The memory is running at a slower speed, as the stock CPU only goes up to 2133 MT/s.
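To confirm the speed the board actually negotiated (versus the DIMM's rated speed), dmidecode reports both per stick; this is a generic Linux check, not anything HL-15 specific:

```shell
# "Speed" is the DIMM's rated speed; "Configured Memory Speed" is what it runs at
sudo dmidecode -t memory | grep -E 'Speed'
```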

When I purchased the HL-15, I choose the default prebuilt model with 16 GB RAM.

Nice improvement over the original, which was already impressive. Just curious: I was looking at your patch panels and see HDMI ports. I am just wondering the reason for them?

My brain FEELS like the outside of their rack and the inside of yours then! Cable management always sucks. I’ve wanted a couple of those Patchbox brand cassettes, but boy are they insanely priced.

I see USB as well, maybe a nice way to pass-through the I/O from the back of their servers for easier KVM? Honestly, seems like a pretty solid idea!


Great question!

My server rack is next to a small desk. Using the monitor (and/or a 32" TV Display), I can use the HDMI ports to monitor/interact with specific units. On the backside of the rack, I have 1U shelves that have individual Raspberry Pis.

I will be making a few more updates to the rack.

  • I am planning to merge two keystone panels into one 48-port unit
  • I have an HDMI KVM that I will connect to the PiKVMs. Each PiKVM offers a secondary HDMI output jack
  • I have a couple 4U NavePoint drawers that hold spare parts

An item not in the rack picture is a 24-port hardwired Ethernet patch panel. The patch panel is a cheap generic unit from one of the big-box hardware stores. I want to update the wiring to use keystone Ethernet jacks.

After watching Jeff Geerling’s newest video today, I might put some RGB in that area too.


Ports 4 & 5 on the top patch panel are connected to a Pi that I use for some simple monitoring.

Port 6 allows me to connect the Netgate’s USB console directly to one of my virtualization servers. When I need to do software upgrades, I can connect via a virtual desktop to the Netgate’s USB console for any interactive commands. For example, pfSense offered ZFS as a feature upgrade but required formatting the disk for a fresh install; I performed that upgrade via this USB cable.
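For anyone setting up the same thing, the passthrough boils down to attaching the Netgate's USB console device to a terminal program on the server. The device path below is the common default, but verify it with dmesg; 115200 is the usual Netgate console baud rate.

```shell
# Find which ttyUSB device the console enumerated as
dmesg | grep -i ttyUSB

# Attach with screen at 115200 baud (detach/kill with Ctrl-A then K)
sudo screen /dev/ttyUSB0 115200
```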

Looking forward to the “after” pics of your cable management :smiley:

I am hoping to have an updated picture by the end of the week/weekend.

Trying to organize all the Ethernet cables, power cables, and the shelf for the Raspberry Pi devices is a challenge.


I keep a Pi in my rack as well, and boy is it just unwieldy. I use mine as a remote desktop with a console cable to my network switch. Like a poor man’s Opengear.


Originally I was reluctant to get Ethernet cables in shorter lengths because I did not have many keystone patch panels. Last fall I decided to get 6" and 12" Ethernet cables.

While I would love to get the Patchbox, I cannot justify the space it would require. I opted to move the equipment around to remove the need for 3’ Ethernet cables.

All my home’s wired Ethernet connections have their own patch panel mounted on my basement wall. My server rack is right below this patch panel.

My first Raspberry Pi was a small mail server for my homelab (and a development project). Then I expanded to use Homebridge, Zabbix via the NEMS Linux project, etc. I also use a Home Assistant Yellow.

As I built a cluster of hypervisors, I moved some applications from the Pis to the hypervisors.

I am in the process of creating a Raspberry Pi setup to read/record the water meter. Its data will be added to my Home Assistant. I also purchased a Raspberry Pi 5 with NVMe from Pineberry (and I also got an NVMe unit from Pimoroni).


Here is the latest revision to my home server rack. All the pictures of your server rack motivated me to clean up the organization of my own rack.

The server rack is a Raising Electronics open-frame 4-post 19-inch adjustable server/audio rack in cold-rolled steel (27U, 31" depth). The rack has 4 casters that lock in place.

Here are some of the changes in the rack:

  • Moved a NavePoint rack shelf up 1U for a Supermicro micro tower and a UPS
  • Moved a 24-port keystone patch panel to row 2 so the Ethernet cables will not be as taut
  • Replaced and moved a 24-port keystone patch panel; rows 4 & 5 now have a 48-port panel
  • Moved various individual keystones to new positions
    * All HDMI ports to row 2
    * 3 additional USB ports at row 4
    * Many open Cat 6A ports at rows 2 and 4 for future IoT devices
  • Moved the unmanaged switch and the Netgate appliance to rows 6 and 7 (respectively)
  • Row 8 is the Dell R420
  • Rows 9 & 10 are the Supermicro X9DRW-7/iTPF
  • Row 11 is the Supermicro X9DRi-LN4+/X9DR3-LN4+
  • Rows 12-15 are the HL-15
  • Rows 16-19 are a 4U NavePoint deep drawer
  • Rows 20-23 are an additional NavePoint deep drawer
  • The bottom has a NavePoint rack shelf for two small UPSes

You can see all the cable runs at the top are fed into the rack’s keystone patch panels.
The NavePoint drawers hold various spare parts, server accessories, cables, and unused Raspberry Pis (and Pi accessories).

The back of the rack needs to have all the loose power cables organized. I am planning to get a rack-mount UPS to replace the existing two UPSes.