Pool Layout for 15 Drives

Okay, I don’t want this to be a controversial post, as I know there are MULTIPLE ways to do things. I’m quite new to ZFS and looking for an optimal way to set up the pool on the HL15.

Using a ZFS capacity calculator (OpenZFS Capacity Calculator) and watching a few ZFS videos on YouTube, I think I’ve settled on 3 vdevs of 5 disks each in RAIDZ2. That tolerates up to 6 disk failures (2 per vdev), which seems a bit overkill. The other option I was considering was 2 vdevs of 7 disks each in RAIDZ2, leaving 1 spare drive. But more vdevs means better performance, right? The workload is going to be mainly media storage with maybe a few Docker apps. VM storage could be a possibility in the future as the lab expands.
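
For reference, a minimal sketch of what that 3x 5-wide RAIDZ2 layout looks like at creation time (the pool name “tank” and the sdX device names are placeholders; on real hardware you’d want /dev/disk/by-id paths):

```sh
# Three 5-disk RAIDZ2 vdevs in one pool; each vdev survives 2 failures.
zpool create tank \
  raidz2 sda sdb sdc sdd sde \
  raidz2 sdf sdg sdh sdi sdj \
  raidz2 sdk sdl sdm sdn sdo
```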

How is everyone else thinking of dividing up the 15 drive bays?


draid2:6d:15c:1s (double parity, 6 data disks per redundancy group, 15 children, 1 distributed spare) is likely where I will land, although like you I haven’t entirely decided.
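
For anyone new to the dRAID notation, creating that layout would look roughly like this (pool and device names are placeholders):

```sh
# draid2:6d:15c:1s = double parity, 6 data disks per redundancy group,
# 15 children (drives) total, 1 distributed spare spread across all drives.
zpool create tank draid2:6d:15c:1s \
  sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo
```

The distributed spare is the main draw over a conventional hot spare: a rebuild writes to every drive in parallel instead of hammering one replacement disk.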

I will probably do a few test configurations:

  • see how various ZFS RAID layouts perform
  • add my own networking card for 1GbE client traffic
  • check how the system performs with my existing 10GbE equipment

This unit is planned to replace my Chenbro NR12000 1U 12-bay storage server; I have a unit similar to the one in the YouTube video Jeff from Craft Computing posted back in November 2020.

I do not plan to use this HL-15 for virtualization as I have an existing cluster of 3 nodes within my home lab.

It would be great to hear what others are planning to do or implement.

I believe I have decided on this layout (I already have 6x 18TB Exos drives):

3 vdevs, 5 disks each, in RAIDZ1.

I bought the NVMe card, and I plan on using it for a SLOG (ZIL) and L2ARC, OR as a mirrored ZFS pool.
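
If I go the SLOG + L2ARC route, attaching them would look roughly like this (a sketch; “tank” and the nvme device names are assumptions, not my actual layout):

```sh
# Mirrored SLOG: it holds in-flight synchronous writes, so mirror it.
zpool add tank log mirror nvme0n1 nvme1n1
# L2ARC read cache: a single device is fine; losing it is harmless.
zpool add tank cache nvme2n1
```

Worth noting that a SLOG only accelerates synchronous writes (NFS, VMs, databases); for a mostly-async media workload, the L2ARC or the mirrored-pool option may be the more useful way to spend the NVMe.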

Here’s why:

The media I have: while it would suck to lose it, I’m not super concerned about it.

The data I DO care about losing is backed up to two separate clouds.

Losing 1 disk per vdev is fairly safe (I can survive up to 3 drive failures across all the drives, admittedly only 1 per 5-disk vdev).

If a particular vdev fails, I’d only lose the data in that vdev (roughly 63TB in my case) and would still have the 126TB in the other two.
Edit: I was wrong, a lost vdev takes down the whole kaboodle.

For future expansion:

I was thinking about a disk shelf (like an MD1200, or others) and adding an external-port HBA, which could give me another 12 disks (if I need them).

Anything you absolutely need to keep should be backed up and stored elsewhere.

ZFS is awesome, but it’s better to be safe than sorry.

I don’t believe that’s the case. If a vdev dies in a pool, the whole pool is gone (unless there has been a change to ZFS I’m unaware of). Here’s a Lawrence Systems YouTube video talking about it in a layout plan for 60 drives, linked directly to where he says it: https://youtu.be/h4ocFY-BJAQ?si=SSwOx-eUnrTMXWoI&t=365

You are 100% right. BOOOOOOO, there go my plans.

I wouldn’t say your plan is wrong; this is just the tricky part of balancing what risk you’re comfortable with. With Z1 you would have ~196TB and with Z2 ~145TB, so a 51TB difference between them according to that ZFS calculator I posted.
I feel like if it were an 18-disk chassis I would have an easier time picking 3x 6-disk Z2 vdevs and wouldn’t really be questioning my choice.
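
As a rough sanity check on those numbers (assuming 15x 18TB drives, ignoring the calculator’s overhead allowances, and noting it reports binary TiB):

```
Z1: 3 vdevs × (5 − 1) data disks × 18 TB = 216 TB ≈ 196 TiB
Z2: 3 vdevs × (5 − 2) data disks × 18 TB = 162 TB ≈ 147 TiB
```

The small gap from the ~145TB Z2 figure is presumably the calculator’s allowance for slop/metadata space.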

I still may go with the RAIDZ1, simply because the data I really care about is going to be backed up in multiple places. While losing the pool would annoy me (greatly), it wouldn’t be the end of the world.

I’m extremely cautious about Z1 setups, for two reasons:

  1. If you experience another failure during a rebuild, you lose the vdev and the pool. Rebuilds move lots of data around, so if you have another drive hanging on by a thread: boom.

  2. It is worth considering how long you’re carrying the risk of vdev/pool loss. What happens if you cannot swap a drive immediately, either because you don’t have a spare on hand, or because you’re on holiday or a business trip for a couple of weeks? (A hot-spare sketch follows this list.)
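
One partial mitigation for point 2 is a hot spare with automatic handling, so a resilver can start while you’re away (a sketch; pool and device names are placeholders, and it does cost you a drive bay):

```sh
# Attach a standby disk; when a member drive faults, the ZFS event
# daemon (zed) pulls the spare in and starts resilvering on its own.
zpool add tank spare sdp
# Optional: a fresh disk inserted into the failed drive's slot gets
# formatted and swapped in automatically.
zpool set autoreplace=on tank
```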

For some, the argument might be that they don’t have the extra $250+ to keep a 20TB spare on hand (or whatever the cost and drive size in your array is). That’s a really difficult one: if a drive fails, you either have to stretch your budget badly, or it can take weeks to warranty-return a drive to the manufacturer if you don’t pay for advance replacement (or if the type of drive you bought doesn’t support that service).

Backups are important, but recovering from the loss of a 300-330TB (raw) pool is extremely time-consuming.

All of this said, it’s a personal choice; once you’ve analyzed your objectives and options, it just comes down to probabilities. You could still lose three drives in quick succession, after all.

I’m doing a 14-drive RAID1 and haven’t yet decided whether the last slot will be a hot spare or a cache drive of some sort. The plan is to build a second chassis with an identical setup for TrueNAS Scale HA.