My HL-15 within my HomeLab

This is a quick post to show my HL-15 installed within my rack.

Just as Jeff Geerling has a shirt stating that he cosplays as a sysadmin, I am one who does not take the time to cable manage.


After the ZFS data was moved to the HL-15, I updated the photo on 2023-12-01T00:15:00Z. I replaced many of the Ethernet cables with shorter (6") cables.

This rack was something I started during COVID (March 2020). Today the rack is almost full.

This coming Thanksgiving holiday, I am hoping to “tame” the cabling.

Currently the rack hosts:

  • Keystone panel with 24 ports supporting Ethernet, HDMI, and USB
  • Netgear unmanaged 24-port switch
  • Netgate SG-4860 - the 1U rack-mount model, maxed out with 8 GB of memory
  • Netgear unmanaged 24-port switch
  • Keystone jack with 24 Ethernet ports
  • Netgear unmanaged 24-port switch
  • Dell R420 with 384 GB RAM and 4 drives using a Dell RAID card. 2 sockets yielding 16 total cores (32 threads for virtualization)
  • Supermicro X9DRW-7/iTPF with 1 TB RAM and 4 drives running a ZFS RAIDz2. The chassis is a Supermicro (I don’t have the model number). 2 sockets yielding 16 cores (32 threads for virtualization)
  • Supermicro X9DRi-LN4+/X9DR3-LN4+ with 768 GB RAM and 8 drives running a ZFS RAIDz3. The chassis is a Supermicro (I don’t have the model number). 2 sockets yielding 16 cores (32 threads for virtualization)
  • Chenbro unit running TrueNAS with a 10-drive ZFS pool (with 2 additional SSDs as a cache vdev) and a separate SSD to boot. This unit does not have a lot of memory or a powerful CPU, as it is just a storage unit.
  • HL-15 - I picked the stock model (16 GB RAM with SFP+ ports). I have to install the 4-slot NVMe carrier card

On the other side of the rack I have two MikroTik switches creating my 10 GbE backbone for the servers. Those models are CRS309-1G-8S+IN and CRS305-1G-4S+IN. As you read this post, you might notice a number of switches/routers. I use the unmanaged switches to separate the network traffic.

Functionally, there is a Proxmox 3-node cluster composed of the Dell R420 and the 2 Supermicro units. The cluster is connected to the TrueNAS box as a storage target (for ISOs, backups, and running VMs). The cluster provides 88 CPU threads, 2 TB of RAM, and 68.58 TiB of storage.
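
For anyone wondering what that hookup looks like, below is a minimal sketch of registering an NFS export from the TrueNAS box as a cluster-wide Proxmox storage target. The storage ID, server address, and export path are placeholders, not my actual values, and the same thing can be done through the Proxmox web UI under Datacenter > Storage.

```python
# Minimal sketch: register a TrueNAS NFS export as a Proxmox storage target
# using the stock pvesm CLI. The storage ID, server IP, and export path are
# placeholders. Run on any cluster node; storage definitions are cluster-wide.
import subprocess

def add_nfs_storage(storage_id: str, server: str, export: str) -> None:
    """Add an NFS share as Proxmox storage for ISOs, backups, and VM disks."""
    subprocess.run(
        [
            "pvesm", "add", "nfs", storage_id,
            "--server", server,
            "--export", export,
            "--content", "iso,backup,images",
        ],
        check=True,
    )

if __name__ == "__main__":
    add_nfs_storage("truenas-nfs", "192.0.2.10", "/mnt/tank/proxmox")
```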

Not seen in the photo are the couple dozen Raspberry Pi 3B, 4, and 400 units, the two PiKVMs connected to them, as well as a Home Assistant Yellow.

There are items I have not decided whether to add to the rack or to recycle. Compared to the existing rack units, these units draw more power without yielding much computational power:

  • Apple Xserve - a 1U unit which supports 96 GB RAM and 3 SAS (or SATA) drives.
  • Dell SC24-SC - these 1U units were mostly deployed as Facebook servers in the early 2000s. I have two units, each supporting 2 CPUs (4 cores) and 48 GB RAM (at the max).
  • Netgate XG-7100 - this 1U unit needs to have a power supply replaced. It supports 24 GB RAM and an early version of NVMe. It was the model that required extra configuration to create discrete network devices.
  • Dell mid-tower unit circa 2008 - the unit looks like a desktop and supports 32 GB RAM. The Intel CPU has either 2 or 4 cores.

2023-11-16T16:31:00Z - I received a generic set of Intel-coded SFP+ cables. The stock cables from FS.com are not that expensive (I paid $11 USD). I do not mind the shipping costs.

This stock cable (with Intel-coded transceiver ends) does work with MikroTik switches too. I generally pay extra for cables with transceivers coded for the specific network card/switch/router chipset (and for a longer length).

2023-11-30T16:56:00Z - Updated my HL-15 with the 12 drives from my TrueNAS server.

Updated the photo as I purchased/updated the cabling with 6" Ethernet cables.


I was able to install the AOC-SHG3-4M2P 4-position M.2 NVMe carrier card. Within Houston you will see the carrier under the system hardware information as:
Class: Mass storage controller
Model: SAS3008 PCI-Express Fusion-MPT SAS-3 (AOC-S3008L-L8e)
Vendor: Broadcom / LSI
Slot: 0000:19:00.0

While I did initially configure the BIOS to use 4x4x4x4 bifurcation as per the Supermicro manual (for the card), I switched the setting back to AUTO since the card was still recognized.

Within the Houston Storage Devices section, I can see the 16 TB and 20 TB drives as well as the 2 of the 4 carrier card NVMe slots that are in use.
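
If you want to sanity-check what Houston reports from the shell, something like the sketch below works. It just wraps lspci and lsblk, so nothing in it is Houston-specific; it assumes pciutils and util-linux are installed, which a stock Rocky install normally has.

```python
# Quick cross-check of what Houston shows: list PCI storage controllers and
# NVMe block devices. Wraps the standard lspci and lsblk tools.
import subprocess

def pci_storage_controllers() -> str:
    """Return lspci output filtered to storage-related devices (SAS, NVMe, etc.)."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return "\n".join(
        line for line in out.splitlines()
        if "storage" in line.lower() or "SAS" in line or "Non-Volatile" in line
    )

def nvme_block_devices() -> str:
    """Return lsblk output limited to NVMe devices (major number 259)."""
    return subprocess.run(
        ["lsblk", "-d", "-o", "NAME,SIZE,MODEL", "--include", "259"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(pci_storage_controllers())
    print(nvme_block_devices())
```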


My HL-15 is running the 12 drives from my ol’ TrueNAS Core server.

  • Chenbro NR12000 1U Stats
    • Supports 12 drives via the motherboard HBA
    • Intel Xeon E3-1220 V2 @ 3.10 GHz (Ivy Bridge), circa Q2 2012
    • PCIe 3.0 (1x16 & 1x4, 2x8 & 1x4 or 1x8 & 3x4)

The TrueNAS Core setup was 10 4 TB Hitachi/HGST Ultrastar 7K4000 drives with 2 IronWolf Pro 128 SSDs running as cache disks (due to the RAM limitation).

I was able to successfully export the ZFS pool and import the pool on the HL-15. After the import, I noticed 1 of the SSDs was flagged as FAULTED. I am in the process of replacing the drive.
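
For anyone planning the same move, the migration itself is just an export on the old box and an import on the HL-15. The sketch below wraps the zpool commands I am referring to; the pool name and device names are placeholders, and you may need the -f flag on import if the pool was not cleanly exported.

```python
# Sketch of the pool move, driving the stock zpool CLI from Python.
# "tank" and the device names are placeholders; my real names differ.
import subprocess

def zpool(*args: str) -> str:
    """Run a zpool subcommand and return its stdout."""
    return subprocess.run(
        ["zpool", *args], capture_output=True, text=True, check=True
    ).stdout

# On the old TrueNAS Core box, before pulling the drives:
#   zpool("export", "tank")
#
# On the HL-15, once the drives are cabled in:
#   zpool("import", "tank")           # add "-f" if the pool was not cleanly exported
#   print(zpool("status", "tank"))    # this is where the FAULTED SSD shows up
#
# Swapping a faulted cache SSD is a remove-then-add:
#   zpool("remove", "tank", "faulted-ssd-device")
#   zpool("add", "tank", "cache", "new-ssd-device")

if __name__ == "__main__":
    # Harmless read-only overview of every imported pool.
    print(zpool("status"))
```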

Houston on Rocky seems to be as straightforward as TrueNAS Core.

NOTE: I only use the NFS and Samba features from TrueNAS Core.

Here is the original pic of my rack before the Thanksgiving holiday:

For the remaining gaps in the rack, I have a few more accessories to install:

  • I have two rails ordered to properly mount the two Supermicro chassis.
  • I ordered a new keystone panel (48 ports) to help minimize the cable runs and give better access to USB. For example, I can have my Netgate console port connect to a keystone and hide the cabling from the keystone to the server.
  • I have a couple of NavePoint 4U drawers that will hold other spare drives, rack studs, etc.

The inside of my rack looks like the outside of your rack. But still a cool build!


I do not doubt that.

I have had this mess of cables there for 3 to 3.5 years. I decided to clean it up during my Thanksgiving break. I ordered colored 6” cables to replace the 1, 2, and 3 ft cables I was using.

I have a 24-port patch panel above the rack. I connected the wired Ethernet within my home to the topmost 24-port keystone panel.

I ordered a 2U 48-port keystone to further improve the cabling within the rack to the unmanaged switches.
