Between a rack and a hard place

First off, my apologies for the long posts, but I want to provide some information along with my request for feedback and suggestions.

I am constantly thinking about and/or planning the next changes or upgrades to my pseudo-production/homelab. I have an idea of what I want to move to in terms of the number of systems and what task(s) they will perform, but I am not sure what direction/chassis to use.

I plan to run 5 systems as part of a Proxmox VE cluster, each with at least 2 SSDs for the OS, the ability to run at least 4 Ceph SSD OSDs, and enough PCIe slots to cover my networking needs (1 x 1Gb management, 2 x 10Gb for Ceph and probably 4 x 1Gb for VM use). On the storage side of things, I plan to run 7 systems as part of a Ceph cluster, each with at least 2 SSDs for the OS, the ability to run at least 8 Ceph SSD OSDs, and enough PCIe slots to cover my networking needs (1 x 1Gb management, 2 x 10Gb for Ceph).
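
To sanity-check that a 15-bay chassis covers this, here is a rough per-node tally (a minimal sketch; the card split is an assumption, since onboard ports could end up covering the 1Gb management and some of the VM NICs):

```python
# Rough tally of per-node drive bays and add-in cards for the planned layout.
# The card split is an assumption; onboard NICs/HBA ports could cover some of it.
nodes = {
    "hypervisor (x5)": {"os_ssds": 2, "osd_ssds": 4,
                        "cards": ["dual 10Gb NIC (Ceph)", "quad 1Gb NIC (VMs)"]},
    "storage (x7)":    {"os_ssds": 2, "osd_hdds": 8,
                        "cards": ["dual 10Gb NIC (Ceph)", "HBA (if not onboard)"]},
}

for role, spec in nodes.items():
    bays = sum(v for k, v in spec.items() if k != "cards")
    print(f"{role}: {bays} drive bays used, {15 - bays} spare in a 15-bay chassis, "
          f"{len(spec['cards'])} add-in cards")
```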

I keep going back and forth between the Rosewill 4U chassis (12 hot-swap, or 15 internal bays with added hot-swap cages), the 45HomeLab HL15, and building my systems out using off-the-shelf components; or continuing with older enterprise equipment but upgrading my HPE DL380 G5 servers to newer hardware like a Dell PE R730XD. The options boil down to these main considerations: upgradability/adaptability, power and heat generation/consumption, and initial and long-term cost.

My concerns with the old enterprise equipment are mostly around upgradability, heat management and power draw. Right now my systems are pulling about 200 to 300 watts each, and I have 7 of them running 24/7; I do not see the R730XD setups pulling any less power, and they will still run quite hot as they are a 2U chassis. This setup is also restricted by its nature: it is not a chassis that can easily be reused or upgraded when the hardware inside has reached its end of use, though these servers are readily available and should remain so for the time it would take me to purchase the quantity I am planning for.
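
For context, a back-of-the-envelope look at what that current draw works out to over a year (the electricity rate below is just a placeholder, not my actual rate):

```python
# Rough annual energy use/cost for the current fleet of 7 servers.
# The rate is a placeholder assumption; plug in your own $/kWh.
servers = 7
avg_watts = 250        # midpoint of the observed 200-300 W per system
rate_per_kwh = 0.13    # placeholder electricity rate

kwh_per_year = servers * avg_watts / 1000 * 24 * 365
print(f"~{kwh_per_year:.0f} kWh/yr, roughly {kwh_per_year * rate_per_kwh:,.0f} at {rate_per_kwh}/kWh")
```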

When it comes to the Rosewill option, my concern is the availability of rails. I already own 2 of these chassis (the 15-bay models without hot-swap), but I have yet to find rails, or a recommendation for rails, that are still available and will not have fitment issues with other equipment in my racks. However, they allow for better heat management given their larger size and the ability to add larger fans. They are also upgradable and should last through more than a few hardware refreshes; they would only be replaced if there was a failure in the front panel, metal fatigue, or a need for more drives in a single chassis.

The HL15 is one of my newer considerations, as it was, as we all know, only recently released to the market. While it seems like a chassis that will last for years, has available rails and holds the most drives per chassis, it is a larger initial cost, and since I would not be buying all the systems I am planning for at once, future availability is not guaranteed.

I would love to get feedback from the community, input on which way I should lean, and anything else I might not have thought about that I should consider. Also, I realize this is a forum for 45HomeLab, but I am hoping this will help steer me in the best direction for my future pseudo-production/homelab changes and upgrades.

To properly analyze this I think there would need to be information about what the actual functional requirements are (not the physical equipment you have already selected) and whether this “pseudo-production homelab” generates any income, or is solely an expense.

Are you looking at the fully built HL15, or to bring your own internals?

I don’t see any mention of spinning rust in your description of your plans. If I’m reading it right, you plan for 12 physical systems or nodes populated with NVMe and enterprise SAS SSDs. I’d really question whether 12 systems is necessary in today’s world of Epyc CPUs and 24-bay SSD chassis.

If you are trying to build out representative clusters, e.g. to mimic something at work, maybe you could do that with Minisforum MS-01s? If you have a specific set of VMs, Docker containers and other applications you are trying to run, I think you could probably accomplish that with one or two higher-end systems rather than needing 12.

Also, for the 12 systems you describe, you don’t mention needing more than 8 SAS/SATA drive bays each, so one option would be to wait and see what the HL8 looks like for your use case.


@DigitalGarden, you are totally right, I forgot to add more details around this plan, so I have added them below:

This pseudo-production/homelab setup currently does not produce any income, but I do most of my side-hustle IT work for small businesses, and more than a few have asked if I would be able to host their site and/or email for them. It is mostly a hobby of mine that I am using to learn and tinker with various things that pique my interest. Part of why I want to move to more modern hardware, and a more robust solution while staying within a “reasonable” cost, is that power prices are going up and it is also getting harder to keep the systems I have cool in the summer.

I have a number of VMs and container-based applications currently running on my setup, which include a pfSense firewall, an email server, PMG, PBS, a Docker Swarm cluster, documentation services, a media server, the UniFi Network Application, virtual workstations, a reverse proxy, and a few others I am probably forgetting.

For the HL15 I would be looking at the chassis and backplane only and, like the Rosewill option, would be supplying the rest of the hardware myself.

For the 7 storage systems I am planning to run spinning rust for the OSDs, and these systems would replace my current Ceph and NAS devices, which are currently made up of 6 systems with anywhere from 4 to 24 HDDs inside. My calculation was that if I were to go to 7 systems and spread my current drives across them, I would need about 8 drives per system, and this would leave me room in a 15-bay system to add drives as my need for more storage grows. It would also mean that instead of having 3 isolated systems running TrueNAS and 3 running Ceph, I would have a 7-node cluster and be able to manage all of my storage on one platform, which I have grown quite fond of. The 5 hypervisor systems would use all SSDs, starting with 3 per system, and I would be able to expand those as my storage needs grow as well.
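
As a rough check on what that layout would give me (the drive size and replication factor below are illustrative assumptions, since I have a mix of sizes today):

```python
# Rough usable-capacity estimate for the planned 7-node, 8-OSD-per-node HDD cluster.
# Drive size and replication factor are illustrative assumptions.
nodes = 7
osds_per_node = 8
drive_tb = 8           # assumed average HDD size
replication = 3        # typical replicated pool size
fill_target = 0.85     # leave headroom before nearfull warnings

raw_tb = nodes * osds_per_node * drive_tb
usable_tb = raw_tb / replication * fill_target
print(f"raw: {raw_tb} TB, usable at {replication}x replication: ~{usable_tb:.0f} TB")
```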

I am typically using Seagate IronWolf NAS drives as my HDD of choice and Samsung consumer SSDs as my SSD of choice; currently my 14 x 250GB drives are only at 20% wear after about 2 years of service.

I have thought about reducing the number of systems to, say, 3 hypervisor and 5 storage, and while that is an option, I keep coming back to 7 and 5: by having more, smaller-capacity, less beastly systems, more machines can fail before there is an issue, versus only a small number of super beastly machines running all the things. I guess it will eventually come down to the cost of a few beastly machines versus many less-so when I get to the point of populating the hardware into a chassis; my focus right now is selecting a chassis that will be around for multiple years and hardware changes, updates and upgrades. I have looked at Mini-PCs and the MS-01, but the limited I/O and drive space for a machine in the 600-900 dollar range once ready to be deployed seems too restrictive in my mind right now.

My day job is not in IT anymore but I do try and use this environment to help me keep learning and experimenting with what is closer to a true production environment both for my own interest and knowledge but also to hopefully prove valuable in terms of skills and knowledge should I go back to IT full time.

Here are some other thoughts then. Note that my experience is mainly with the HL15 and Q30, and used Supermicro servers. I have built systems in other ATX cases but not the Rosewill, and I don’t have direct current experience with proprietary enterprise servers from other vendors–HP, Dell, etc.

From what I’ve seen, I wouldn’t want to build in the Rosewill, especially for something that would be “pseudo-production”. It seemed from your second post that you are talking about 56 HDDs across the systems (although I apologize if I’ve inferred something wrong again). There is definitely something to be said for having backplanes and caddy-less bays in the HL15. I wouldn’t want the nest of SATA power and data cables in the Rosewill, and I have no interest in putting 224 screws into HDD caddies on the enterprise systems. In my experience the caddy systems also restrict airflow, requiring higher RPMs from the fans.

Is system weight an issue? I don’t have back problems or anything, but it’s harder to rack the larger servers oneself. The HL15 is pretty easy to get in the rack weight-wise (the rails are a bit of a PITA, but that’s a different issue), although it is the least dense option compared to some of the other 4U enterprise chassis available, which can get 24 or even 36 3.5" HDDs into 4U standard depth rack space…

Is noise an issue? It’s a lot easier for the 120mm fans on the HL15 to keep the drives cool than the 80mm fans in the enterprise chassis. And, bringing your own mobo, you should probably have DC fan control available and not have to swap out the included fans to reduce noise, just reconnect them to the mobo from the PDU.

Venturing into hosting web sites and paid work will mean you will need to consider high availability features. You are doing some of that already with clustering, but you would also need to consider if some of the redundant PSU, hot swap fans, etc type features on the enterprise chassis are also good to have. The HL15 just has the one ATX PSU and no hot swap fans. You can get quotes from 45Drives on empty AV15 or Q30 chassis with redundant PSUs.

You can put an HBA in the MS-01 and use it as the head for a SAS disk shelf. It doesn’t sound now like this necessarily fits the architecture you are trying to build out, just saying you aren’t limited to internal storage.

At one time 45Drives offered a version of the AV15 that had a mixture of 2.5 and 3.5 inch drive bays. I’m not sure if they still do (you might contact them if you are interested) but something like that might fit your use case (not sure).
https://www.45drives.com/blog/storage/the-storinator-storage-workstation-ft-unraid/


My HL15 replaced a second-hand dual Xeon Supermicro server. I put a Ryzen 9 7900 in it. This gave me 3x the compute at 1/3 the TDP. I understand you want redundancy, but each machine you are going to leave on 24/7 has a baseline power draw, heat generation, and noise floor just sitting idle, not to mention all the additional hardware costs, even if they are at second-hand prices, and need for UPS if you are doing pseudo-production work. Each additional machine also comes with additional time to do software updates and administration.

The calculus was different 3+ years ago, but for me, I’d definitely lean to fewer more powerful systems within reason. I’m not saying go out and buy a quad-socket 256 thread/CPU Epyc Tyan server. A lot of the applications you listed don’t seem particularly demanding.

Remember that the HL15 will come with cables and shipping included in the price.

As for the relative prices of the chassis, well, that you will need to evaluate for yourself. Like many things, the cheapest example is usually not the best value, and the most expensive example either just has a needless markup or has legitimate features that are rarely used. Maybe you start with one HL15 and see how it goes.

For me personally, I like the design of the HL15 and 45Drives pods: the direct-wired backplane, the cage that allows good airflow, top loading with no caddies needed, 120mm fans. I’m a bit disappointed at the way the fans are wired by default and that the PCIe covers are punch-out. Is it worth $800 to me? Well, as a one-off purchase compared to a Rosewill, yes. Compared to a used Supermicro it’s a bit fuzzier, but I guess I’d also say yes, as drive density in the rack and eATX support aren’t usually critical for my builds, and you do need to get a header cable and do a fan swap in most Supermicro cases. It just feels weird spending more for the less “engineered” case, by which I’m not talking about quality; I’m talking about all the proprietary bits and pieces that are custom designed exclusively for each enterprise server. If you are thinking of eventually buying 12 of them, though, the markup does add up.


Certainly there is power draw just from having any server connected to power, even before you look at things like IPMI and other standby items that consume power. My goal would be to have my servers operate in the 125 to 150 watt range without drives factored in, as the drives are going to draw the same amount of power no matter the system.

I also did some quick price calculations using the US Newegg site and found that purchasing a Rosewill chassis with rails and 3 Icy Dock 5-bay drive cages, to match the capacity of the HL15, came to around the 1200 dollar mark once converted to my local currency (CAD). I would have used the Canadian Newegg, but I could not find any of the parts even listed there, let alone in stock.

To your point @DigitalGarden around HA and redundancy: even if I went with redundant PSUs, there is only 1 motherboard in each system and many other failure points within each system. Other than the drives, where I plan to use Ceph (and ZFS for the OS installs) to create redundancy, I treat each node as a single point of failure, and as long as there are enough other nodes to handle that failure I am okay with it. That is why I am looking at 7 nodes for storage, to allow for 3 failures before there is a major issue, and on the hypervisor side 5 nodes allows for 2 to be down while still maintaining operations.
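
The math I am working from is simple majority quorum for the Proxmox cluster and the Ceph monitors (data availability in Ceph also depends on pool size/min_size, which I am leaving out here):

```python
# Node failures tolerated while still holding a majority quorum.
def failures_tolerated(nodes: int) -> int:
    majority = nodes // 2 + 1       # floor(n/2) + 1 members must stay up
    return nodes - majority

for n in (5, 7):
    print(f"{n} nodes: majority is {n // 2 + 1}, tolerates {failures_tolerated(n)} down")
```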

I am planning to have dual switches in each rack as well as multiple UPSs, so that if one of them goes down it does not take down all the nodes; it would take out at most just under half. Though I also realize I only have 1 source of power in the townhouse I am renting, and while I have redundant internet, the backup runs off of an LTE modem.

I think the more I look at this, I can eliminate the Rosewill plan and am split between the HL15 and a Dell R730XD. The more I keep coming back to them, and knowing the shortcomings and issues with my current older enterprise gear, I lean towards going the HL15 route.

Does anyone know if there is a way to mount hot-swap (or at least easy-swap) rear drives for the OS in the HL15, either via a 3D-printed bracket or an add-on part/card? Also, can you run all 15 bays in the HL15 with SSDs, using the 3D-printed adapter or some other adapter to fit them into the slots?

I would be running all SSDs in the hypervisor nodes and HDDs in the storage nodes, and would distribute the SSDs and HDDs I currently have between all the nodes to start, then look at either adding more drives across the nodes evenly or replacing current drives with larger ones as needed/required.

A direct link doesn’t seem to work, but go to the store page and navigate on the left to “SSD Caddies”.

https://store.45homelab.com/#3d-printed-parts-sdd-caddies

The first one is for six internal extra SSDs and mounts to the inside back of the chassis above the I/O shield area, using four screw holes provided by design. Note, though, that the drives face perpendicular to the airflow direction, so they may impede airflow. They might also not play well if you have an active CPU cooler in your build. Forum members have posted other designs to improve airflow, as well as ideas for tucking 1-2 SSDs around the case without additional brackets, for example mounting one SSD to the back plate and another to the side wall. Search the forums.

For adapting 2.5 inch drives to the slots, $30 per drive seems pretty steep to me. Maybe those are a lot cheaper if you can print them yourself. I used this:
https://www.amazon.com/gp/product/B01ELRRKW8
but it is just for SATA 6 Gbps drives. If you have SAS drives, that one apparently wouldn’t work, although you might be able to find a competitor that would. There shouldn’t be any logic on the PCB; it’s just being used to physically adjust the connector location, and the wiring should be straight through for each pin, so why they wouldn’t just make it SAS compatible I don’t know.

The HL15 backplane only supports up to SAS 12 Gbps, so if you have some sort of newer U.3 drives or such, those won’t work with the current backplane.

For the HL15, I have just slotted the SSDs into the 3.5in slots without a caddy. It’s a bit harder to get them in, but they friction-fit in there just fine, and I like the fact that airflow isn’t as obstructed.

I have the Rosewill 4U; the airflow isn’t as good as the HL15’s if it’s loaded to the gills with drives. It’s also just not as well built. And like you said, the rails are an issue: I haven’t been able to find good rails for it.

As for rear hot-swap, I recently found this: Swappable NVMe U.2 M2 PCIe Slot Adapter Dual SSD


You might want to get proper specs (all I see is marketing language) or see if you can get a copy of the owner’s guide from the retailer before purchase. This probably requires PCIe port bifurcation to access both NVMe drives. And it looks like an x16 connector. I think to fully use NVMe speeds you need PCIe 4 x4 per SSD (?). So, does that mean this is using PCIe 3 x8 per SSD? Or are some of those fingers not electrically connected?
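
If it helps, this is the rough lane math I had in mind (throughput figures are approximate, after encoding overhead; how the card actually wires its lanes is the unknown):

```python
# Approximate usable PCIe throughput per lane, in GB/s, after encoding overhead.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}

def slot_bandwidth(gen: int, lanes: int) -> float:
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen4 x4 per SSD: ~{slot_bandwidth(4, 4):.1f} GB/s")   # what a Gen4 NVMe drive wants
print(f"Gen3 x8 per SSD: ~{slot_bandwidth(3, 8):.1f} GB/s")   # possible split of a Gen3 x16 slot
print(f"Gen3 x4 per SSD: ~{slot_bandwidth(3, 4):.1f} GB/s")   # bifurcated x4/x4 on Gen3
```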

This might work with the HL15 full build X11SPH with the HBA on the motherboard, but many people bringing their own mobo will need the x16 slot for the HBA and/or might not have a mobo supporting bifurcation.

There is a version of that unit which would fit over a PCIe slot but not be electrically connected to it, using cables instead. Depending on the build, putting that in the slot space over an x1 slot, or over other components occupying slot space, might be a better option:

https://www.amazon.com/Micro-SATA-Cables-Super-Compact/dp/B0CZGS8YK2/

Another suggestion is this Icy Dock one that takes up an x8 slot:
https://www.amazon.com/dp/B0BJ14JV7B/


I found this one: Icydock PCI-E to 2.5". I am thinking of something like two of those in 2 of the 7 PCI-E slots for rear drives; then I would use 2 for 2x10G NIC cards and 1 for an HBA for the 15 drive bays, and 2 would still be left for the future and/or dealing with spacing issues.


Not sure if you know this, but it appears Newegg might be shutting down the Rosewill brand. They took all the cases off their site, and some vendors are showing them as discontinued.

I see now that the server products have been removed from both Newegg and Rosewill’s own site.

I did find a Reddit thread with a post from an account saying they are (or were) with Newegg, stating they are updating some of their products and should apparently be getting new stock in May.

Though I didn’t do a deep dive into the account on Reddit to see how credible they are.

Maybe it’s a total lineup refresh? But it would be weird to completely remove all the products that were there.