HL15 - Workstation & Shared Data Storage

Hello,

I am looking to upgrade my current setup (home) and was wondering if anyone has used the HL 15 in the manner I am planning.

My current build (this machine operates offline) includes:

  • CPU: AMD EPYC 9174F
  • Motherboard: Supermicro H13SSL-NT
  • RAM: 192 GB DDR5-4800 ECC
  • 2x M.2 SSDs: Samsung 990 Pro 2 TB and WD SN850 2 TB (used to run two operating systems)
  • GPU: NVIDIA RTX A4000

I am looking to add the following components:

  • HBA: Broadcom 9600-16i (I need to purchase this)
  • Flash Storage: 15x Micron 5400 Pro (already have these on hand)

Currently, I am running Ubuntu and use this machine exclusively for work. I would like to set up a shared folder using the Micron SSDs in RAID 5. (I have a Synology NAS with HDDs that I intend to use for backups.)

I plan to upgrade to Ubuntu 24.04 LTS and am considering using BTRFS with RAID 5 for the shared folder.
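For context, the rough shape of what I have in mind is below. This is only a sketch with placeholder device names; nothing is set up yet:

    # Placeholder devices sdb..sdp = the 15 Micron SSDs behind the HBA
    sudo mkfs.btrfs -L share -d raid5 -m raid1 /dev/sd[b-p]
    sudo mkdir -p /mnt/share
    sudo mount /dev/sdb /mnt/share            # any member device mounts the whole filesystem
    sudo btrfs filesystem usage /mnt/share    # confirm the data/metadata profiles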

Questions:

  1. Has anyone tested a similar setup? What was the performance of the shared folder?
  2. Is RAID 5 a good idea for 15 SATA SSDs?
  3. If the OS crashes, will I be able to rebuild the RAID on a new OS without losing data?
  4. If I install SMB, will I need to do anything special to ensure compatibility with Mac & Windows? Is it difficult to enable SMB multichannel? (I have 2x 10GbE ports and would like to use both for transfers.)
  5. Is there an option to add even more SATA SSDs in an HL 15? I would love to add many more.
  6. What potential problems might occur?

Thank you all!

  1. Has anyone tested a similar setup? What was the performance of the shared folder?
    The shared folder performance will likely be limited by the 10GBase-T LAN connections on your motherboard; i.e., a proper RAID setup should be able to saturate that. (10GbE tops out around 1.25 GB/s, so three or four SATA SSDs at roughly 500 MB/s each are already enough to fill the pipe.)

  2. Is RAID 5 a good idea for 15 SATA SSDs?
    This depends entirely on your use case. What workload is being done on the SSDs? What is your tolerance for write performance vs redundancy? RAID 0, 10, or 6 might be better options depending on the workload.

  3. If the OS crashes, will I be able to rebuild the RAID on a new OS without losing data?
    Why BTRFS and not ZFS? Either way, if the OS is what crashes, you should be able to mount the disks on a system running equal or newer versions of your OS and filesystem. You won’t necessarily need to rebuild the RAID; there’s a rough sketch of the reassembly commands after this list.

  4. If I install SMB, will I need to do anything special to ensure compatibility with Mac & Windows? Is it difficult to enable SMB multichannel? (I have 2x 10GbE ports and would like to use both for transfers.)
    I don’t know the exact settings off the top of my head, but you will likely need some Samba configuration specific to Mac clients; SMB’s origin is as a Windows protocol.

  5. Is there an option to add even more SATA SSDs in an HL 15? I would love to add many more.
    See here for a 6-bay caddy that mounts to the inside back of the chassis. Of course, you need to cable up your own SATA data and power connectors. There have been a few other versions of that posted on the forum. If you are really serious, 45Drives has SSD-specific backplanes and servers. Remember you will need some way to keep the drives from falling over if the unit is bumped; the default cage is intended to hold 3.5-inch drives. You might want to follow this thread.

  6. What potential problems might occur?
    A 9600-16i is overkill if all you are dealing with is SATA drives. All you really need is a 930x-16i. I mean, it’ll work, but don’t think you’re going to get some speed boost over earlier HBAs when SATA is your baseline. HBAs typically want at least an electrically x8 slot. It seems like you’d probably have that, but you should confirm.
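If you want to confirm the slot once the HBA is installed, it is a quick check (the PCI address below is a placeholder; find yours with the first command):

    lspci | grep -i -E 'broadcom|lsi|sas'
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
    # LnkSta shows the negotiated link, e.g. "Speed 16GT/s, Width x8"

And on point 3, reassembly on a fresh install is roughly the following, depending on whether you end up with mdadm or BTRFS native RAID (device and mount names are placeholders):

    # mdadm: scan for existing arrays and assemble them
    sudo mdadm --assemble --scan
    cat /proc/mdstat

    # BTRFS native RAID: register the member devices, then mount any one of them
    sudo btrfs device scan
    sudo mount /dev/sdb /mnt/share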


Thank you!

  1. Understood. What do you think: will I be able to sustain 10GbE performance with a heavily random workload?
  2. I considered RAID 5 since these are SSDs that I will replace after 2000 TBW. However, I think it will take a long time to reach this number. I mainly perform small random writes on this machine and share the data with clients from a computer connected to this machine and another server.
  3. This is just my personal experience. While I agree ZFS is a great file system and would recommend it, BTRFS has always seemed more oriented toward home users than ZFS, and it has looked very stable to me.
  4. Okay.
  5. Thank you. I am not sure I will go this route, because I would like to keep that space free for exhaust fans; I will be replacing the standard fans with T30s.
  6. I considered this option because it appeared to be the latest version and I expected it to be supported for a longer period, but I might be wrong.

I am also considering whether hardware RAID might be better in this context. A Broadcom 9670W-16i seems easier to manage, presenting the SSDs as a single unified volume rather than relying on software RAID. However, I’m uncertain. Do you have any opinions about hardware RAID in a context where the machine is used for both compute and storage?

What is the current load on your EPYC CPU and 192 GB of RAM? Modern CPU instruction sets handle the parity quite efficiently, and it isn’t something that typically needs to be offloaded. There is more bit-rot potential with hardware RAID, and your access to your data from another machine will be more restricted if/when the RAID controller fails. You may also be forced to boot into the card’s BIOS to do things like rebuild the array.
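If you’re curious what the parity math costs, the kernel benchmarks its RAID-6 routines when the md modules load and prints which SIMD implementation it picked; on a modern EPYC those figures are typically well beyond what 15 SATA SSDs can feed it. A quick check (assuming the md/RAID modules have been loaded at some point, otherwise there is nothing to grep):

    sudo dmesg | grep -i -E 'raid6|xor:'
    # look for lines like "raid6: using algorithm avx2x4 gen()" along with per-algorithm MB/s figures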

I haven’t used BTRFS RAID 5, but this is typical of what I have seen; https://www.phoronix.com/news/Btrfs-Warning-RAID5-RAID6

Here is some info on SMB multichannel. The hardware and OS on the client machine(s) would obviously need to support the setup; it’s not just a server-side thing.
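To make that concrete, a minimal smb.conf sketch covering both the Mac bits and multichannel might look something like this. It is untested here; the interface names and share path are placeholders, and the extended interfaces syntax is only needed if Samba can’t work out the NIC speed/RSS capability on its own:

    [global]
        server multi channel support = yes
        # two 10GbE NICs; names and the speed/capability hints are placeholders
        interfaces = "enp1s0;capability=RSS,speed=10000000000" "enp2s0;capability=RSS,speed=10000000000"
        # macOS friendliness via the fruit VFS module
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:model = MacSamba

    [share]
        path = /mnt/share
        read only = no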


Thanks!

CPU Usage:
It depends on what I do. I can’t say it is constantly running at 100%; there are times when it runs at 100% for several hours, but most of the time it stays under 60%. (The tasks are mainly code compilation and simulation.)

RAM Usage:
It varies. Most of the time, I am okay with using 120-140 GB.

RAID Controller:
Understood. I am considering buying an LSI 9500-16i, which I found for $200 new. However, I am unsure if it is an original or a Chinese copy. Do you have any suggestions on how to verify its authenticity?

BTRFS RAID 5/6:
It sounds like a ticking time bomb; I wonder how Synology deals with it. After reading a few forum threads from your link, I am hesitant about BTRFS RAID 5/6. Regarding ZFS, I am uncertain: doing a Z2 over 15 or 21 drives sounds like a bad idea (performance loss), while doing 3x Z1 over 15 drives sounds safer but would consume 3 SSDs for parity. Do you have any suggestions here? I am looking for the most performant configuration with the least storage loss.

SMB Multichannel:
Looks great.

Thank you!

This isn’t your card, but it might give you some general things to check;

My understanding is Synology uses mdadm to build the RAID, and then puts BTRFS on top of that; What is the RAID implementation for Btrfs File System on Synology NAS? - Synology Knowledge Center
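So if you wanted to reproduce that layering yourself on Ubuntu, it is roughly the following (placeholder device names; DSM also adds its own layers such as LVM in between, I believe, so this is only the general idea):

    # classic md RAID 5 underneath, single-device BTRFS on top
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=15 /dev/sd[b-p]
    sudo mkfs.btrfs -L share /dev/md0
    sudo mount /dev/md0 /mnt/share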

OK, but there’s another potential factor: redundancy and the impact of losing a disk. The three criteria (performance, usable space, and redundancy) conflict with each other. If the two items you list are truly your only criteria, then you just do a RAID 0. “Performance” also needs to be qualified as read or write. Based on a statement above, you are mostly concerned with read performance, which isn’t really impacted by RAID level.

ZFS can be a big topic, but when creating a pool you break it into groups of disks called VDEVs, and a VDEV is the set of disks a given block gets striped across. For example, if you have a pool of 45 disks, you wouldn’t put all 45 drives in one big VDEV, with 1/45 of every file on each disk; that would be risky and inefficient. Typically you would have 8 to 15 disks in a VDEV. This is also the level at which the parity is created, so for RAID Z1 you lose one disk per VDEV to parity, and for RAID Z2 you lose two disks per VDEV. With 15 disks, typical layouts might be:

RAID Z1: one 15-disk VDEV (1 disk lost to parity)
RAID Z1: one 7-disk VDEV and one 8-disk VDEV (2 disks lost to parity)
RAID Z2: one 15-disk VDEV (2 disks lost to parity)
RAID Z2: one 7-disk VDEV and one 8-disk VDEV (4 disks lost to parity)

Going down that list, each layout gives up more capacity to parity but is progressively less risky in terms of downtime and of needing to go to a full backup after a multiple-disk failure. If that’s less of a factor for you, that’s entirely fine; the risk of a dual SSD failure is probably a lot less than that of a dual HDD failure.
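If it helps to see those as actual commands, two of the layouts would look roughly like this (placeholder device names; in practice you’d use the /dev/disk/by-id/ paths):

    # RAID Z2, one 15-disk VDEV
    sudo zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp

    # RAID Z1, one 7-disk VDEV and one 8-disk VDEV
    sudo zpool create tank \
        raidz1 sdb sdc sdd sde sdf sdg sdh \
        raidz1 sdi sdj sdk sdl sdm sdn sdo sdp
    sudo zpool status tank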


Counterfeit Check:
It looks good.

Synology RAID:
Interesting. Doesn’t mdadm have problems with bit rot? Did they fix that?

ZFS:
As far as I understand, in ZFS the maximum number of IOPS for a VDEV is limited to that of a single drive. If I build the pool out of multiple VDEVs, then the number of IOPS would be (number of VDEVs * IOPS of a drive), which would be fairly good. Is this information accurate? By the way, is RAID Z1 capable of fixing data errors on the fly when reading?

2.5" Drive Cage:
The price of the 3.5"-to-2.5" drive caddy on 45HomeLab is a bit too much in my opinion: for 15 of them, I would need to pay $300, which is 37% of the case price. So I wonder if anybody has tested the “SPACER SPR-25352.” These can be purchased for approximately $1.20 each. Would they work just fine?

Thank you!

I’m not an expert on Synology or mdadm (or anything, really), but the way bit rot is dealt with in any RAID system is to “scrub” the pool: occasionally (e.g., once a month) read through every byte and its parity bit(s), look for discrepancies, and, if an error is found, try to decide what the correct data originally was. mdadm doesn’t run scrubs on its own schedule, but a check can be triggered and scripted, and I think scrub tasks can be configured automatically or manually in Synology DSM.
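For reference, the manual triggers look roughly like this on a plain Linux box (the md device, filesystem, and mount point names are placeholders; Synology wraps the same ideas in DSM’s scheduled tasks):

    # md RAID: kick off a consistency check via sysfs and watch it
    echo check | sudo tee /sys/block/md0/md/sync_action
    cat /proc/mdstat
    cat /sys/block/md0/md/mismatch_cnt   # discrepancies found by the check

    # BTRFS on top: scrub the filesystem itself
    sudo btrfs scrub start /mnt/share
    sudo btrfs scrub status /mnt/share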

I use ZFS, as it was architected from the ground up to address these issues of traditional RAID, like bit rot and the write hole. Yes, ZFS will identify and fix parity-related data errors as part of reading any file.
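The places to see that happening are the per-device error counters and a periodic scrub (the pool name is a placeholder):

    sudo zpool status -v tank    # READ/WRITE/CKSUM error counters per device
    sudo zpool scrub tank        # walk the whole pool and repair from parity/mirrors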

For ZFS performance, I will say that my experience doesn’t always match the theoretical stuff I read or all the claims thrown around on the interwebs, but yes, that is my understanding for random writes: IOPS scale with the number of VDEVs. It is a bit hard to measure ZFS performance, as read and write caches (ARC, SLOG) can be involved, and random R/W can be substantially different from sequential R/W. The size of the files being written also matters. But this is true of any parity RAID system: you can’t parallelize writes the way you can reads. This is why the most performant pools are RAID 0 or RAID 10 type pools (which you can do in ZFS): plain stripes with no parity, or striped mirrors. Again, it’s the trade-off between performance, redundancy, and usable space. You can’t have all three, and the correct mix is application specific.
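If you ever do want to put numbers on a particular layout, fio is the usual tool; a small-random-write run against a directory on the pool might look like this (the path and sizes are placeholders, and ARC/SLOG caching will still color the results):

    sudo fio --name=randwrite --directory=/tank/fio --rw=randwrite \
        --bs=4k --size=4G --numjobs=4 --iodepth=32 \
        --ioengine=libaio --runtime=60 --time_based --group_reporting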

I agree about the price of the drive caddies. It’s a small consolation, but remember the price includes shipping. If you have a 3D printer you can print your own. If you don’t, you might still be able to get 15 of them printed through a 3D printing service for a more reasonable price.

If this is the SPR-25352 you are talking about;

I don’t think that will work because it places the drive in the middle of the slot. You need a caddy that places the SATA connector in the same location as it would be on a 3.5 inch HDD.

This is what I bought instead;

But, I only needed two and I liked the minimal design. This one might be cheaper for you;


Thank you!

I really appreciate your insights and honesty.

Currently, I am using 7 drives in a RAID 5 configuration in my existing case (a Sliger CX170a). It is a nice case, but it has some real issues, especially with the clearance between the SATA SSDs and the power cables.

I used mdadm RAID 5 with BTRFS on top. I read a little about ZFS, and I think it is an amazing piece of software, especially for HDD pools with NVMe and flash caches attached. However, for my current setup, I thought this choice would be good enough.

I don’t really care much about benchmarks, since I need this to work for my actual workflow. I am aware that benchmarks would be useful for comparing against other setups, and I apologize for not including them, but I need to focus on my practical needs rather than on numbers I am not going to benefit from.

I managed to move data at ~2.8 GB/s read/write from my NVMe SSD to the array under Linux, using two 75 GB text files, and it went quite well.

I also moved a folder with many small files, ranging from approximately 20 KB up to 30-100 MB, with a total folder size of 12 GB. That went well, at ~900 to ~1500 MB/s.

I am quite happy with the results, and I think these will get better when I add more SSDs.

I do not know how well it would perform with ZFS, but for now, it does what I need it to do. I can saturate the 10GbE link in almost all cases for what I do.

If you have any suggestions regarding this setup, I would really appreciate your thoughts.

Regarding the 3.5-to-2.5-inch solutions, I really like your proposals. I am just a little worried about the extra circuitry, and I read a few unfavorable comments about some of these adapters on Reddit. I am not claiming any of that is true; I do not know. It just made me a little wary.

Would the following solution work: Corsair Bracket CSSD-BRKT?

Please let me know your opinion. I would really appreciate it.

I think that has the same sort of issue. Most of the 2.5 to 3.5 inch bay adapters are intended for flexible cabling in a desktop build. Since for the HL15 the drives are plugging into a backplane, the 2.5 inch drive’s SATA connector has to be positioned very specifically to match where it would be on a 3.5 inch drive.

Note how in the picture the left sides of both drives line up. So it is hard to make something mechanical that positions the 2.5-inch drive securely and professionally within the footprint of a 3.5-inch drive, given the default screw locations on the sides and bottom. Having a metal strip on the left would make the solution wider than a 3.5-inch drive, and there are no standard screw holes on the top of a 2.5-inch SSD.

For the two product examples I linked, there are no electronics; the PCB is just straight pass-through traces so that the SATA connector can be physically positioned correctly. This is no different from running a SATA cable to your SSD. They could have used a really short flexible cable, but the PCB provides rigidity. The only possible future downside I know of is that most of them are specced only for SATA, so if you had plans to use 12 Gbps SAS drives in the future, they may not work. You hadn’t stated that as a requirement, though.

Honestly, though, if you are rack-mounting the case, I personally wouldn’t invest a lot of money in caddy solutions unless you are slamming the case in and out of the rack. SSDs stand pretty much vertically on their own and shouldn’t need much to keep them that way if you aren’t being rough. One option might be to just use a 1/2-inch binder clip.

https://forum.45homelab.com/uploads/default/original/2X/9/9f06625056adaa5d671641f95cb2076783c4e8ad.jpeg

You could probably do something similar with hot glue and hook-and-loop tape, or some other bits and bobs from the hardware store.

Also, there are normally little fingers on the sides of the drive bays that extend down about an inch or so. I removed mine because they were pushing so hard against my HDDs that the drives were difficult to remove, but you could probably put some double-sided tape on them to keep the SSDs vertical.

If I didn’t like any of these other solutions, I’d probably just invest in a starter printer, join the 3D printing revolution, and print my own; you can get a 3D printer these days for about the price of buying 15 caddies ($450).


Here is another idea for mounting 2.5-inch drives without the expensive 3D prints. OCZ makes an adapter bracket (OCZACSSDBRKT2) available for about $5-$6 each;

https://www.amazon.com/dp/B002I8MUU0

Used as intended, this places the SATA connector almost where it should be relative to a 3.5-inch drive, but not within tolerances for a backplane.

Instead, you can turn it over and mount the SSD to the back with double-sided tape. I used a 3.5 inch drive to get the depth right and line up the edge of the inverted bracket with the edge of the SSD;

I put one strip of tape on the left edge of the SSD and one on the right edge of the bracket so they could easily slide around to align, then pressed them together once aligned. There are probably other ways you could attach the SSD to the bracket; the aluminum is fairly easy to drill through.

When you slot it into the HL15, just be sure it is aligned with the SATA side of the bay and goes straight down, clearing the bottom bar and not going diagonally into the neighboring bay. Mine went straight down, aligned perfectly with the connector. You might have to wiggle the adapter a bit to get it past the fingers, but after that it should drop in like an HDD.

This should also work with the HL15 in tower orientation, and with 15mm drives.


Hello,

Thank you for providing these solutions. I find your final proposal to be the most appealing.

However, I am still quite disappointed that it is not possible to purchase a solution for this directly from 45Drives. Nevertheless, the alternatives you have shared are good options.

Has anybody tried to 3D print a completely new 2.5-inch cage?