Homelab featuring HL15

With the inclusion of the HL15, I’ve maxed out my 15U rack.
(But I see a blank 2U space, what gives?!)
Yeah, I know, but that’s sorta reserved, and on the backside there’s a UPS in the way.

I feel like this picture makes it look messier than it does in person, and that 2U dust filter should be cleaned lol.

Top to Bottom:
1U shelf. Home Assistant running on its own mini-PC, hence the Zigbee and Thread dongles.
1U 24-port patch panel. All the servers have cables running through this to the switch below. Some keystones are missing so I can fit M.2-to-USB3 drives for the Raspberry Pis.
1U TP-Link Omada switch (24-port, no PoE).
1U brush panel, just hides some clutter.
1U 4x Raspberry Pi 4 with PoE HATs. These are 4 nodes in my Kubernetes cluster.
1U TP-Link Omada 8-port PoE switch.
1U Synology RS820+ with 4x 4TB drives. This will now be synced with my HL15.
2U Rackchoice 2x Mini-ITX case. This holds 2 Turing Pis with 6 CM4s, the other half of my Kubernetes cluster.
4U HL15, fans swapped for Noctuas, intake filters on the front, over 100TB of raw storage, 96GB of RAM.
2U placeholder

But wait there’s more!

So I mentioned the 2U spot was supposed to be filled. Well, between moving the UPS to the back and adding the HL15, there isn’t enough room for everything.
I have another 2U shelf with things that don’t exactly have a nice home.
This holds another mini-PC for my Jellyfin server. This is a Minisforum HN2673 with an Intel Arc A730M.
This PC was going to be attached to the Kubes cluster, but there’s some quirkiness with Jellyfin and multiple Intel devices, so that is on hold.
Also pictured is a Framework mainboard that’s attached via Thunderbolt to an RTX 2070 and housing a Google Coral M.2. This is attached to my cluster, and running Frigate, Harbor, and OpenAI Whisper.
The idea is to get a case for the 2070 and Framework that could fit in the 2U spot on the front (or I might wait until I get a new rack).

Also slightly pictured in red is an ESP32 running OpenDTU for monitoring solar panels, and a Lutron hub for light switches.

Currently this is all on 1Gb connections, but my main uplink to my router is two LAGged connections.
My Synology has a dedicated 10Gb link to the HL15.

Also, a slight annoyance with the HL-15: my LTT screwdriver is slightly too wide to get at the PCIe slot screws.


Nice rack you have! What unit are you using for the Raspberry Pis?

I did not have any issue with my LTT screwdriver. I was using Slot 3 for the PCIe carrier card that I installed.

It looks like I have a similar shelf to yours for the top of the rack.

I would also like to know what you’re using for the Raspberry Pis. I have a couple in my server rack, but they’re just hanging around. It would be nice to tidy them up a bit. :sweat_smile:

For me, my Pis are used for various monitoring. I have one that takes digital images of my water meter during the day, processes the photo using OCR, and adds the water usage as a stat to Home Assistant.

Before I had my larger homelab servers (and rack), I had pis hosting home assistant, homebridge, a simple local mail server, etc.

Woah! :exploding_head: that’s an amazing use of a Raspberry Pi. Now I want to set that up. I’ve never even thought of something like that.

I purchased a 3D printer just to make the attachment for the Raspberry Pi (and camera).

I found the project a few months ago, but had no time to complete it until this Thanksgiving/Christmas holiday.

Of course the HL-15 has paused the project as I am working on this unit now.

I should say the screwdriver fits, but only the knurled shaft part, so I can’t get the screw started.

Here is the link to the mount I have https://www.amazon.com/dp/B09D7RR6NY/

Jeff has a video about a slightly different rack here, where I got the idea from

For me though, I don’t want to focus on what Pi runs what thing. Anything I do is just containerized and deployed to kubes. I have various websites running, and I even proxy to the other non-kubes hosts for Home Assistant etc. So I have a floating-IP load balancer on the cluster that my router directs everything to. I just like the idea that I can lose almost half the Pis and still have kubes balance things mostly OK. Certain things like GPU resources and x86-64 vs arm64 force certain workloads to specific nodes. I also have over 2TB of storage clustered in kubes just for persistent storage.
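That kind of placement is usually done with node labels. A minimal sketch of what pinning a GPU/x86-64-only workload might look like, relying on the well-known `kubernetes.io/arch` label and (an assumption here) the NVIDIA device plugin; the names, image, and floating-IP service are illustrative, not the poster's actual manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frigate            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frigate
  template:
    metadata:
      labels:
        app: frigate
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # keep this off the arm64 Pis
      containers:
      - name: frigate
        image: ghcr.io/blakeblackshear/frigate:stable
        resources:
          limits:
            nvidia.com/gpu: 1       # assumes the NVIDIA device plugin
---
apiVersion: v1
kind: Service
metadata:
  name: frigate
spec:
  type: LoadBalancer       # floating IP handed out by e.g. MetalLB/ServiceLB
  selector:
    app: frigate
  ports:
  - port: 5000             # Frigate's web UI port
    targetPort: 5000
```

The router then only needs to know the LoadBalancer IP; the cluster decides which node actually serves it.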

For most people, I don’t think Kubernetes is a realistic solution, but I had worked with it at my previous job, and I’m hoping to convince my current employer to use it, so I’m keeping my skills fresh.


How did you achieve this?

I’m using K3s and Longhorn.
I believe it uses iSCSI. Basically, a portion of each node’s drives can be allocated to Longhorn for persistent storage. Many of the nodes have 120GB to 500GB SSDs, while two of them currently only have their eMMC. Any time a pod needs storage, Longhorn creates that volume on 3 nodes for redundancy. If a node goes offline, or a volume replica is corrupted, it automatically recreates it on a new node. So it’s not 2TB contiguous, but I’m not allocating more than 15GB for a single pod.
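In Longhorn terms, the three-copies-per-volume behavior is just the StorageClass's replica count. A rough sketch, with hypothetical names, of such a class plus a claim sized to the ~15GB-per-pod ceiling mentioned above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-3r          # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"      # each volume lives on 3 nodes
  staleReplicaTimeout: "30"  # minutes before a dead replica is rebuilt
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-3r
  resources:
    requests:
      storage: 15Gi
```

Any pod that mounts `app-data` gets a volume Longhorn will keep replicated and self-heal if a node drops out.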

I haven’t set it up yet, but I can also create snapshots and back them up via NFS to the HL15.
My Frigate instance just uses NFS-backed storage on the HL15 directly for clips, because I have it allocating 1TB alone.
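For NFS-backed storage like that, one common shape is a static NFS PersistentVolume plus a claim that binds to it. Purely illustrative; the hostname and export path here are made up:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: frigate-clips
spec:
  capacity:
    storage: 1Ti
  accessModes: ["ReadWriteMany"]
  nfs:
    server: hl15.lan         # hypothetical hostname for the HL15
    path: /exports/frigate   # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: frigate-clips
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""       # bind to the static PV above, not Longhorn
  resources:
    requests:
      storage: 1Ti
```

Setting `storageClassName: ""` keeps Longhorn's provisioner from intercepting the claim, so the clips land on the HL15 instead of the cluster SSDs.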


HAHA! I am glad I am not the only one that noticed my LTT screwdriver didn’t fit in the screw slot well!

Cool homelab! I like the build and pictures!

Is that backed by the HL15? Regardless, how did you go about that?

Here is a link to the project.

From what I can see on GitHub, the project is active. I was planning to 3D print something similar to the “AI-on-the-edge-device on a Water Meter” setup. I have a Raspberry Pi 3B with a camera; the project was using an ESP32. If the Pi does not work out, then I’ll use the ESP32.