What SSDs, if any, will you be using?

I did not see a topic created to discuss SSDs for both OS and storage usage. Personally I plan on using a pair of NVMe drives for the OS and then a mix of spinners and flash for storage for VMs and files.

What brand / model / sizes / class (enterprise / consumer) do you plan on using?

What are you doing and why?

I’m building up three HL15’s this Christmas. Small possibility of extending to four, if I can find an ATX form-factor board for an AmpereOne CPU, but I expect that to be a 2024 hack!

For the AMD-based builds I’m using Supermicro H13SSL-N mainboards and EPYC 9354P processors. So, I’ve got plenty of PCIe 5.0 lanes, and have been thinking about how I want to use them. :slight_smile:

Where I landed -

  1. I will be putting a couple of PCIe 4.0 M.2 drives onto the mainboard for the operating system, likely Samsung’s 980 Pro or 990 Pro, but I need to finish analyzing $/TB again. I have historically liked the Samsung 980 Pro, and I have a 4x4x4x4 bifurcation card running 2TB drives today which has been great. The operating system will live on a mirror of 500GB or 1TB drives; my thought process is that this optimizes for “a single drive failure doesn’t make me reinstall”. Losing the data is not problematic, but the time cost to recover is unfortunate (it’s far cheaper in time terms to screwdriver in a replacement when one fails and then let the mirror resilver).

  2. I’m going to be splitting out ZFS metadata (a special vdev) for this build. I have done this before, but only on a NAS unit with 4x 20TB; my larger NAS units do not do this. I will be doing a 3-way mirror using a bifurcation card in one of the PCIe 5.0 x16 slots, and I intend to use the fourth M.2 slot for a “scratch drive” (think /tmp - extracting stuff from ZIP or RAR archives). In selecting drives for metadata, I need to determine the capacity requirement by looking at my current ZFS pools and their metadata needs (both the NAS doing this today and the others which do not), then use that to answer what 15x 20TB needs in the HL15 (see the sketch after this list). It’s a three-way mirror because if you lose the metadata, you also lose the ~250TB of data on the spinning drives. I haven’t finalized “What drives?” here, but I need PLP (power loss protection), and I care about drive durability more here than for the OS.

  3. Finally, I need an NVMe-based storage solution for data. I still don’t see any PCIe Gen5 drives that make sense to me in price-performance terms today. I have three PCIe 5.0 x8 MCIO ports, and I’ll likely be going with the Solidigm D7-P5600 (formerly Intel, but they sold this part of their business). I have quite a few P4610 6.4TB drives and they’ve been rock solid for years. I’ve seen the P5600 1.92TB for around $175 USD and the 3.2TB or 3.84TB around $300-400 USD per drive. They are PCIe 4.0 x4 U.2 drives (so they will require MCIO-to-U.2 adapter cables). I’ll be starting with 2x 3.84TB in a mirror, but I’m designing my HL15 builds with the assumption I’ll add more later, meaning I have to budget ~18W per drive for 6 drives (~108W total), even though I’m only buying and installing two today.
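To make the sizing question in (2) concrete, here’s a rough back-of-the-envelope sketch in Python. The “observed” figures are placeholders rather than measurements from my pools; the real numbers would come from `zpool list -v` on the NAS that already has a special vdev, or from the block statistics that `zdb -bb <pool>` reports for the ones that don’t.

```python
# Rough special-vdev sizing: scale an observed metadata:data ratio from an
# existing pool up to the target pool size. The observed numbers below are
# placeholders -- read the real ones off `zpool list -v` (for a pool that
# already has a special vdev) or the block-type summary from `zdb -bb <pool>`.

TB = 10**12
GB = 10**9

def special_vdev_estimate(observed_metadata_bytes: float,
                          observed_data_bytes: float,
                          target_data_bytes: float,
                          headroom: float = 2.0) -> float:
    """Projected special-vdev capacity, with headroom for growth and for
    small-block allocations if special_small_blocks gets enabled later."""
    ratio = observed_metadata_bytes / observed_data_bytes
    return target_data_bytes * ratio * headroom

# Hypothetical readings from the 4x20TB NAS that already splits metadata out:
observed_meta = 40 * GB     # metadata allocated on its special vdev
observed_data = 50 * TB     # data allocated on its spinning vdevs

# Target: 15x 20TB in the HL15 (raw; adjust for the RAIDz/mirror layout used)
target = 15 * 20 * TB

needed = special_vdev_estimate(observed_meta, observed_data, target)
print(f"Special vdev capacity to plan for: ~{needed / TB:.2f} TB per mirror member")
```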

Bulk storage in each system will be 15x Seagate Exos X20 SATA drives on an LSI 9500-16i HBA. Networking to connect all the systems together is ConnectX-5 100GbE. I don’t plan to use Ceph or any “native” solution to manage this all as one storage pool. Irreplaceable data is typically stored on two ZFS pools locally, in two separate systems, kept in sync with zrepl, and is also backed up offsite.
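For anyone who hasn’t used zrepl: the pattern it automates is essentially periodic snapshots plus incremental `zfs send`/`zfs recv`. A minimal sketch of that loop follows (dataset and host names are hypothetical, and real zrepl adds pruning, retention, resumable sends and a YAML-driven config):

```python
# Not zrepl itself -- just the snapshot + incremental send/recv pattern it
# automates, so "kept in sync with zrepl" is concrete. Dataset and host
# names are hypothetical.
import datetime
import subprocess
from typing import Optional

SRC = "tank/irreplaceable"      # source dataset on this HL15
DST_HOST = "hl15-b"             # second HL15 reachable over the 100GbE fabric
DST = "backup/irreplaceable"    # destination dataset on the other pool

def replicate(prev_snap: Optional[str]) -> str:
    """Take a snapshot and send it (incrementally, after the first run)."""
    snap = datetime.datetime.now().strftime("sync-%Y%m%d-%H%M%S")
    subprocess.run(["zfs", "snapshot", f"{SRC}@{snap}"], check=True)

    send_cmd = ["zfs", "send"]
    if prev_snap:
        send_cmd += ["-i", f"{SRC}@{prev_snap}"]   # incremental from last sync
    send_cmd.append(f"{SRC}@{snap}")

    sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["ssh", DST_HOST, "zfs", "recv", "-F", DST],
                   stdin=sender.stdout, check=True)
    sender.wait()
    return snap   # remember this snapshot name for the next incremental
```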

I’m very excited to get these HL15 chassis and start the builds. Hearing what the rest of the community is doing with theirs would be awesome, so thanks for starting this thread! :heart:


Am I reading this correctly that you intend to mesh-network your 3 HL15s, possibly with no switch/router?

Would love it if you could expand on this approach, why think about it this way?

If you were going to use Ceph (which is what I intend longer-term: a 3xHL15 HA setup), what would you pay special attention to?

Maybe I’m just totally oblivious, and can’t fathom your actual use-case(?) :slight_smile:

Nope. They’ll be wired to a 100GbE switch. I have a few of these because they’re pretty cheap, use very little power and make very little noise — the MikroTik CRS504-4XQ-IN.

I think Ceph is better these days, but it certainly used to take a reasonable amount of “looking after”, whereas I find ZFS is pretty set-and-forget. Homelab isn’t a place where I spend lots of time outside of specific projects.


Just some Samsung consumer NVMe drives for the OS, maybe one or two SATA SSDs for unpacking files, and Seagate Exos drives for storage.

Anyone pulling the trigger on the carrier cards now available in the store? With what setup in mind?

Not sure why these are 10x more expensive than your regular “consumer” carrier card. Any hints?

The Intel P5620s (3 DWPD) are absolute tanks.
I’ve bought one of the carrier cards to mount some Optane drives and play more with special vdevs, and I will boot M.2 off the motherboard (TYAN S8030GM4NE-2T).
For storage I’ll add some SATA SSDs (Micron 5300 or Intel 4520) and Seagate spinners.

I already picked up the 4-slot. FYI, these are PCIe 3.0 (fine if you got the full build). Also, these cards are a bit shorter at 190mm. The ASUS cards are 200mm, 270mm or 290mm depending on generation, and I wasn’t sure of the clearance available.

Nothing shows up for me in the store yet other than the original 3 case options.

I picked up some Intel 118GB P1600X drives (on sale today for $60) for special vdev testing as well.
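For anyone planning the same experiment, attaching a special metadata mirror is roughly the sketch below; the pool name and device paths are placeholders, and remember a special vdev can’t be removed again once the pool contains RAIDz vdevs.

```python
# Sketch of adding a special metadata mirror to an existing pool and routing
# small blocks to it. Pool name and by-id paths are placeholders.
import subprocess

POOL = "tank"
OPTANE_A = "/dev/disk/by-id/nvme-INTEL_P1600X_serialA"
OPTANE_B = "/dev/disk/by-id/nvme-INTEL_P1600X_serialB"

# Add the two Optane drives as a mirrored special (metadata) vdev.
subprocess.run(["zpool", "add", POOL, "special", "mirror", OPTANE_A, OPTANE_B],
               check=True)

# Optionally send blocks <= 16K to the special vdev too (per-dataset property).
subprocess.run(["zfs", "set", "special_small_blocks=16K", POOL], check=True)
```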

I also have a pair of IronWolf 125 1TB drives that I’ll run as a mirrored pair in its own pool for either VMs or an iSCSI target through Houston. Everything else is spinning rust.

The only reason I am NOT using SSDs is that I need a high volume of storage. I don’t really need it to be THAT fast. I would love to see the performance stats from someone who does run an all-SSD build.

I’m shooting for 300-330TB of HDD and 40TB of NVMe in each system — no point IMO using the 3.5” bays for NVMe, so I’ll just hide those in the rear somewhere. It should be possible to do something with 3D-printed brackets above the mainboard.


Enterprise grade, therefore higher quality to begin with. Typical “cheap consumer” PCIe cards are usually a bit dodgy. I have a few that work fine, but they’re not documented anywhere and are more of a YMMV situation. They also support PCIe bifurcation, allowing you to turn that single PCIe slot into effectively a 4x4x4x4 split, handing an x4 link to each NVMe drive.

These are made BY Supermicro (here’s the 4-slot user manual) and are therefore guaranteed to work with your board if you have the X11 included in the full build, or one of the optional motherboards in the user guide.

For my personal use case, I’ll likely grab a 4-slot in the future. Create a RAIDz2 or two mirrored vdevs and serve them up via iSCSI/NFS, or even something like MinIO for S3-like storage. Certainly useful for fast indexing use cases like Elasticsearch and similar.

Wanted to chime in. Your best bet with something like ZFS for SSDs is going to be enterprise or datacenter grade. Intel, Micron (sold through the 45Drives main store), Samsung, and a couple of others all have great enterprise SSDs.

The key difference between a consumer SSD and an enterprise one is going to be the DWPD, or Drive Writes Per Day. Something like a Samsung Evo is rated at a fraction of a drive write per day, whereas write-intensive enterprise drives are rated at 3 DWPD or more. Endurance aside, the I/O performance, especially sustained writes, is typically much improved compared to a consumer SSD.
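For reference, DWPD is just the drive’s rated total writes (TBW) spread over its warranty period. A quick sketch of the math, with illustrative TBW figures rather than datasheet values:

```python
# DWPD = rated TBW / (capacity * warranty days). TBW values below are
# illustrative examples, not quoted from any specific datasheet.
def dwpd(rated_tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    return rated_tbw / (capacity_tb * warranty_years * 365)

print(f"consumer 2TB, ~1200 TBW rating:      {dwpd(1200, 2.0):.2f} DWPD")   # ~0.33
print(f"enterprise 1.92TB, ~10500 TBW class: {dwpd(10500, 1.92):.2f} DWPD") # ~3.0
```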

Enterprise SSDs are typically classed in three groups: read-intensive, write-intensive, and mixed-use. All very self-explanatory. I picked up some 1.92TB Intel SSDs for around $199 each recently, and boy howdy are they great.


What SSDs did you get, and where?

Mat

It isn’t really “enterprise grade”; this is :apple: versus :tangerine:, unfortunately.

On the Supermicro one you linked (the 4-slot manual): this is a PCIe 3.0 x8 card supporting 4x NVMe drives, and it looks like 2280 or 22110 sizes are supported. Because it presents as x8 electrically but supports four drives, it is oversubscribed, and that means it must be rocking a PCIe switch under that heatsink which “reduces” four PCIe 3.0 x4 devices (the NVMe drives) down to a PCIe 3.0 x8 presentation to your mainboard.

You also sometimes see switched cards as x16 cards, which is a workaround for boards without bifurcation support; those may be PCIe 3.0 x16 ↔ 4x PCIe 3.0 x4, or they might even be PCIe 4.0 x16 ↔ 4x PCIe 4.0 x4. Those also tend to be $150-300 USD each, and a lot of that cost is again the added components on the board necessary for PCIe switching, versus just simple bifurcation.

Another challenge with this card (outside of perhaps the specific board 45Drives is selling with the ‘Full Build’): it is PCIe 3.0, and in 2023 you can buy PCIe 4.0 x4 drives (there are some great options and great prices per terabyte!). They’re backwards compatible with this card, but they won’t run at the advertised 7 GB/s because the GT/s rate of PCIe 3.0 is half that of PCIe 4.0.
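To put rough numbers on that, here’s a quick link-bandwidth sanity check (approximate; it only accounts for 128b/130b line encoding, not protocol overhead):

```python
# Approximate usable bandwidth: GT/s per lane * lanes * 128/130 encoding, in GB/s.
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes * (128 / 130) / 8

card_uplink = pcie_gb_per_s(8, 8)     # the card's PCIe 3.0 x8 link to the board
gen4_drive = pcie_gb_per_s(16, 4)     # one PCIe 4.0 x4 NVMe drive on its own

print(f"PCIe 3.0 x8 uplink shared by four drives: ~{card_uplink:.1f} GB/s")
print(f"Single PCIe 4.0 x4 drive:                 ~{gen4_drive:.1f} GB/s")
# i.e. the whole 4-drive card has roughly the bandwidth of a single Gen4 drive.
```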

On the $50 ones you see on Amazon, eBay or elsewhere, you are almost 100% of the time just getting a 4x4x4x4 bifurcation card. If your board doesn’t support bifurcation, you’re going to find it just doesn’t work. You can get these cheaply for PCIe 3.0 and PCIe 4.0, and ASUS just released their PCIe 5.0 “Hyper M.2”, which is an x16 bifurcation card (but please note that it’s too deep to fit in an HL15).


If you check out their web page and search the model numbers they’ve listed, you’ll see that they do indeed reference a Supermicro card.

My HL15 just showed up an hour or so ago, and I’ll confirm once I have some downtime from work.

Edited to add:
NVMe via M.2 is not as common as you’d think in enterprise servers (which the Supermicro board is). The U.2 form factor is largely preferred, partly because drives can be hot-swapped from a front-accessible enclosure.

I can confirm: I received the Supermicro AOC-SLG3-2M2 when ordering the 2x NVMe card add-on.

I went the full-build route, and the board does indeed support bifurcation. While I know there are plenty of options out there, I personally prefer to stick as close to the baseline as possible when it comes to storage. I’ve had $1k PCIe cards die at work about as often as I have had $20 cards die on me in my personal machines. However, I can’t say the same about Linux kernel support, nor do I wish to spend hours upon hours troubleshooting.

If the cards offered are their baseline, and I’ve gone the baseline build, I know the system “should work”. This is to say, Rocky Linux, ZFS, and Houston UI have no issues with the components they’ve offered up, and that’s my intended running config.

Been in the industry long enough to understand this fallacy. Unless they are providing ongoing support (i.e. assuming the risk of hardware failure via ongoing hardware replacement), a “validated design” is going to be the cheapest product that doesn’t drastically increase their support calls. They want to move as many units as possible while keeping call-center costs down; cheaper parts mean a cheaper product for the customer, which means more units moved. If they don’t provide any support whatsoever, they are going to put in whatever they can get at an extreme volume discount. That’s why you are seeing them ship X11 systems (release date Aug. 23, 2017… over 6 years old). If you are just doing absolutely basic homelab storage, sure, you don’t need a ton of compute, and x86 compatibility is likely to carry old hardware forward way past its prime. Probably fine for most, but if your criterion is “I don’t want to troubleshoot”, I’ve got bad news for you… This isn’t a managed storage appliance from Pure Storage. No one is going to fix it for you, and the full build isn’t validated to the degree you are likely banking on.

Not a dig against 45HomeLab; I think they are being pretty transparent. I’m just trying to level-set that you are probably going to have to babysit the full build just as much as if you did the research and put it together yourself. I’m in the chassis-and-backplane “so I can build it myself” camp (but I’m not throwing together a simple storage chassis).


Oh no, I’m sorry, troubleshooting is my favorite thing! But basic hardware troubleshooting from the very beginning is not.

My use case for the HL15 is a hodgepodge of everything. I currently have a 4-host ESXi cluster primarily running off NFS shares from a TrueNAS box. In that cluster, one of the ESXi hosts and the TrueNAS box are considered “golden and sacred” compared to the rest. They are both Dell R340s; the ESXi host uses a Dell PERC H330 with Intel S6410s in hardware RAID, and the TrueNAS box serves as my day-to-day SMB share. Anything CRITICAL is backed up using Veeam, and off to Backblaze it goes.

Now, the remaining 3 hosts are HPE DL20s. They get blown up and rebuilt almost weekly, sometimes through Ansible, sometimes by hand, sometimes not at all! Until I have faith in my skillset to roll ZFS manually (yes, Houston UI is nice, but the CLI will always reign supreme), I’ll keep the HL15 in that “ephemeral realm”. During that time, I want to test what 45HomeLab is offering as a baseline, and for me the Supermicro NVMe card will more than likely be for a redundant OS boot mirror. Knowing that they recommend it gives me peace of mind and a bit more of a controlled environment to dig into the bits I’m more interested in at the OS level.
