I think Wendell from Level One Techs recommended 4 RAIDZ1 vdevs in a video.
I’m going to be using Seagate Enterprise 24 terabyte drives. If the math adds up, I would like to purchase three or four drives at a time and make one vdev at a time, for whatever the use case is at that time, or whenever I fill up, which probably won’t happen anytime soon with capacity that high.
So with that recommendation of four RAIDZ1 vdevs, how many drives per vdev is that? He also said that leaves room for a hot spare.
In that video, he said four drives: three drives of capacity and a hot spare.
But then he said it’s one hot spare for the whole system, so if a drive in any vdev fails, it would use that hot spare. So I guess that means each vdev is three drives?
3, 6, 9, 12 out of 15 leaves three bays empty, and if you populate one with a hot spare, there are still two empty slots. I’m confused as to how those would be used.
It depends on your use case and requirements. What read performance do you need? What write performance do you need? How much redundancy do you need? What percentage of the pool are you willing to lose to parity? Do you have a backup? Questions like that. If you are doing smaller groups of drives, you could do 3x 5-disk VDEVs, or add a PCIe carrier for a 16th drive and do 4x 4-disk VDEVs.
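For example, the 3x 5-disk RAIDZ1 option would look something like this. Pool name and device names here are just placeholders (in practice you’d want /dev/disk/by-id paths), so treat it as a sketch:

```
# Three 5-disk RAIDZ1 vdevs striped into one pool, filling all 15 bays:
zpool create tank \
  raidz1 sda sdb sdc sdd sde \
  raidz1 sdf sdg sdh sdi sdj \
  raidz1 sdk sdl sdm sdn sdo
```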
Some people do a 7x mirrored stripe with a hot spare.
I’d be careful with RAIDZ1: once one disk has failed, if you lose another disk in the same VDEV during the intensive resilver operation, you have lost the whole pool.
Not knowing your use case, I’d recommend two RAIDZ2 VDEVs. That could be two 7-disk VDEVs and a hot spare, one 7-disk and one 8-disk, or add a carrier like above for a 16th drive and two 8-disk RAIDZ2 VDEVs.
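As a rough sketch (again with placeholder device names), the two-7-disk-plus-spare version would be created along these lines:

```
# Two 7-wide RAIDZ2 vdevs plus one hot spare, 15 drives total:
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg \
  raidz2 sdh sdi sdj sdk sdl sdm sdn \
  spare sdo
```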
This is kind of a tough question without more insight into the expected workload for these drives. Do you want to maximize performance, maximize storage capacity, or land somewhere in between?
Tom Lawrence has a good forum post that links to a lot of other resources. I suggest taking a look there and letting us know more about what you had in mind for the pool.
As for your question on Z1 vdev width: you want at least 3 drives in a RAIDZ1 vdev for it to make sense. You can do more than that, but once you get to 7 or 8 drives you’re better off with Z2. I usually recommend Z2 as a default anyway, so you can lose two drives and still be OK. That would mean a vdev size of 5 disks or more.
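To put rough numbers on the parity trade-off with your 24 TB drives (back-of-the-envelope only; this ignores ZFS metadata and padding overhead):

```
# Minimum widths: RAIDZ1 needs 3 drives (1 parity), RAIDZ2 needs 4 (2 parity).
echo "5-wide RAIDZ1: $(( (5 - 1) * 24 )) TB usable"   # 96 TB, 20% lost to parity
echo "5-wide RAIDZ2: $(( (5 - 2) * 24 )) TB usable"   # 72 TB, 40% lost to parity
echo "8-wide RAIDZ2: $(( (8 - 2) * 24 )) TB usable"   # 144 TB, 25% lost to parity
```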
Yeah, I’m looking for performance mostly, with just the bare minimum redundancy; one-drive redundancy is fine. And since I’m using 24 terabyte drives, I can opt for performance over storage, because I’m going to be getting a lot of storage anyway.
I was just trying to make sense of the RAIDZ1 setup of four drives, one redundant, to fill out the HL15. Because if you do it that way, with one redundant drive in each vdev plus a hot spare, you fill up the case and still have two slots left, and I don’t know what the best use of those last two slots would be. I usually like the math to work out so that when I fill up my array with identical vdevs, it fills all the bays exactly.
Or maybe I’m overthinking it and you just put another two hot spares in those two slots.
Or you just make the last vdev a 5-drive vdev, I don’t know.
I’m also looking for whatever makes sense to not buy all at once: start with one vdev and then maybe add another a few months later, and so on.
So what was Wendell’s recommendation of 4 RAIDZ1 vdevs? Would that be three drives in each vdev with a redundancy of one?
So if we did two redundant drives, what’s the minimum vdev size? Is that 4 or 5?
I could possibly swing 7. I ordered four just now, but I was trying to keep the vdevs as small as possible so I can add as needed and keep the spinning-drive cost down.
If you went the Z2 route, you could do up to three Z2 VDEVs at 5 wide. You could always start with that one VDEV and see how it performs. If you need more IOPS, you can add a second VDEV or even a third.
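In sketch form (placeholder pool and device names), that staged approach might look like:

```
# Start with a single 5-wide RAIDZ2 vdev...
zpool create tank raidz2 sda sdb sdc sdd sde
# ...then, when you buy the next five drives, stripe in a second vdev:
zpool add tank raidz2 sdf sdg sdh sdi sdj
```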
For maximum IOPS, ZFS mirrors are the way to go. They also allow for easy expansion, with an additional VDEV only requiring two drives.
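A minimal sketch of that layout, again assuming placeholder names:

```
# Two mirrored pairs striped together for IOPS...
zpool create tank mirror sda sdb mirror sdc sdd
# ...and expansion only ever takes two more drives:
zpool add tank mirror sde sdf
```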
Interesting. Yeah, I’m probably going to be adding an NVMe cache drive or something like that, so hopefully my write and read traffic will hit that first and move to the pool later, but I still want maximum performance on the pool for those times I need to pull something off of it that I haven’t used in a while.
I think we still need better insight into your definition of “maximum performance”. Unfortunately, there really isn’t a setup that maximizes every element of performance. You kind of have to choose between the following:
Sequential Reads or Writes: reading or writing large amounts of data to a single file. Movies and other media fall into this category.
IOPS: reading or writing small amounts of data to a large number of files at the same time. This is what operating systems and databases often do.
This article does a nice job illustrating the difference in these numbers across different layouts of the same 12 drives.
Quick note here on cache drives: ZFS doesn’t offer the ability to use a single NVMe as both a write cache and a level-2 read cache. You actually need three drives to do this: two NVMe drives in a mirrored pair for writes (a SLOG) and a third for L2ARC. ZFS uses RAM for the first-level ARC, so you’ll often see the advice to maximize RAM before adding an L2ARC drive. That said, if you have an NVMe lying around, there’s really no harm in adding it.
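For what it’s worth, assuming a pool named tank and those three NVMe drives at these device names, the commands would be along these lines:

```
# Mirrored pair as a SLOG (note: this accelerates sync writes, not all writes):
zpool add tank log mirror nvme0n1 nvme1n1
# Single device as L2ARC; cache contents are disposable, so no mirror needed:
zpool add tank cache nvme2n1
```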
Yes, I remember exactly this from when 45Drives was helping me set up my L1 and L2 ARC. Another question I was going to ask the community: what is the best PCIe NVMe carrier board to put those three or four drives on for the L2ARC?
Luckily, I have a lot of RAM left over from one of those servers. I just hope it’s compatible with whatever motherboard I choose.