DAS to NAS Upgrade

I’m looking at getting an HL15. Currently I have eight 6 TB drives in a RAID 10 that I need to transfer. My question is: if I buy a couple of large-capacity drives to hold my current data, can I then wipe and integrate those eight drives into a drive pool without losing my data, or would I be better off populating the entire 15 drives off the bat? I’m new to ZFS and to setting up NAS systems in general.

My use cases for the server are self-hosted cloud, Plex, some game servers, Home Assistant, some AI setups, and general data storage for projects.

We can probably frame an answer better if we know what setup you have now running the RAID 10. Is it Linux software RAID? UnRAID? Hardware RAID?

To try to answer your question: you can certainly wipe and re-use the 6 TB drives, but if you pair them with some larger drives in one system, you will end up with two groupings, one vdev of 6 TB drives and one vdev of the larger drive size. Basically, when grouping drives together for RAID, like your RAID 10 setup, ZFS prefers all the drives in a grouping (a “vdev”) to be the same size. That doesn’t mean all the drives in the enclosure have to be the same size, just all the drives you put together to span RAID 10 (or RAID 5 or RAID 6 or whatever) across.
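If it helps to see it, here is a rough sketch of how ZFS pushes back on that (pool and device names are hypothetical); zpool warns when you mix sizes inside one vdev, and even if you force it, every member is treated as the smallest size:

    # Seven matched 6 TB drives in one raidz2 vdev: no complaints
    sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Swap a 16 TB drive in for one of them and zpool warns that the raidz
    # contains devices of different sizes; forcing it with -f works, but
    # only 6 TB of the larger drive would actually be used
    sudo zpool create -f tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdh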

I get wanting to reuse hardware, but I’d start from the top and set out your requirements, and then see whether keeping the 6 TB drives, or perhaps selling them on eBay, for example, is the better idea. 6 TB drives are relatively small, and they are probably rather old with a lot of hours on them. On the other hand, spreading data across more, smaller drives can be more performant than having fewer, larger drives.

You have 6 TB * 4 = 24 TB of usable space currently in your RAID 10. How much space do you need now, and how much do you expect it to grow? Is there any background to the choice of RAID 10? Do you have a workload (maybe your “AI setups”) that requires fast HDD write performance?

If I were going to migrate this to an HL15 while keeping the 6 TB drives, here is what I would probably do:

  1. put 7 new 6 TB drives in the HL15 as a single RAID Z2 vdev (the equivalent of RAID 6). That would have 5 * 6 TB = 30 TB of usable space and would survive any 2 drives failing. (A rough command sketch follows after this list.)
  2. copy the data over the network or via other means from the old system to the new system. Because you have a set of mirrors, you may be able to pull one drive from each mirror pair, install those four drives in the HL15, and mount them to do the copy locally. Whether that works would depend on details TBD.
  3. confirm you have 100% of everything you intended to copy
  4. re-confirm you have 100% of everything you intended to copy
  5. wipe the old drives and add them to the HL15. This fills all 15 bays of the HL15 with 6 TB drives.
  6. Create a second vdev from 7 of the old drives. This would also be RAID Z2, like the first vdev, with 30 TB usable. The eighth old drive sits ready as a spare.
  7. Add this second vdev to the pool alongside the first. This now gives you a RAID Z2 pool with 60 TB usable, in which any two drives within either 7-drive vdev group can fail before you would lose the pool.
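In ZFS command terms, and purely as a sketch (pool name, paths, and device names are all invented for illustration), steps 1, 2, and 5-7 look roughly like this:

    # Step 1: one raidz2 vdev from the seven new 6 TB drives
    sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Step 2: copy the data over the network, e.g. with rsync from the old machine
    rsync -avhP /Volumes/old-das/ user@hl15:/tank/data/

    # Steps 5-7: after double-checking the copy, wipe the old drives, then
    # add seven of them as a second raidz2 vdev in the same pool
    sudo zpool add tank raidz2 /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn

    # The pool should now show two raidz2 vdevs striped together
    zpool status tank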

If you got new larger drives, the process would be similar, except at step 7 I would just keep two separate pools rather than merging them. You can have a ZFS pool whose vdevs have different numbers of drives and different drive sizes, but it is discouraged because of the way data gets striped across the vdevs.

Not sure if that helps.

For ZFS, you don’t need to populate all 15 bays, but you traditionally expand the pool a group of drives at a time; those groups are called vdevs. So what many people will do is start with 7 drives, then add another 7 drives together when they need to expand, leaving one bay for a hot or cold spare. Another option is to start with one 5-drive vdev, then add a second, then a third as needed. There is a recent ZFS addition (raidz expansion) that does let you add a single disk at a time to a vdev, but it comes with some performance quirks. It’s useful, but it’s by no means “I’m going to start with two drives, then add one drive at a time as I need to until I fill all 15 bays”. And as with the other RAID considerations, the disk you add has to be the same size as the others in the vdev.
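For what it’s worth, that single-disk route (raidz expansion, which landed in OpenZFS 2.3) runs through zpool attach; a minimal sketch, with hypothetical names:

    # Grow an existing raidz2 vdev from 7 disks to 8 by attaching one more;
    # the new disk must be at least as large as the current members
    sudo zpool attach tank raidz2-0 /dev/sdo

    # Expansion runs in the background; existing data keeps its old
    # data-to-parity ratio until rewritten, which is the quirk mentioned above
    zpool status tank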


Thanks for the thorough reply. My current RAID 10 is a direct-attach Thunderbolt 2 enclosure that I use for video editing and Time Machine. It’s completely full (so all 24 TB of usable space is utilized currently). They are WD Gold drives, and I don’t think they have that many hours on them because I didn’t keep the DAS on 24/7, just when I needed to transfer archives or edit big video projects. I’m looking to migrate all of this to a NAS, plus the aforementioned space requests for VMs, cloud hosting, Plex, etc. None of those things are currently running on the DAS. Plex is currently on my desktop, but I don’t want to run it 24/7, and I currently have Apple and Google cloud storage that I want to get away from using.

Does RAID 6 give me the same performance as a RAID 10? I was looking at the new Ubiquiti UNAS Pro 8, but have heard that its 10 gig NICs don’t actually push as much bandwidth as people would like and that its processor is slow; this was the main reason I’m looking at the HL15 now. I also plan to do a RAID 1 SSD cache.

So that’s two more things not on the original list. Video editing off a NAS (or even a DAS) is its own can of worms. Anyway, I think I get the picture with your clarification. Did you set up the RAID through macOS, or does the DAS have its own RAID controller? Maybe a model number or link?

I just picked RAID Z2 because it is a common middle-of-the-road choice for many people’s workloads, though certainly not all. You can do a RAID 10-style setup in ZFS if that is what you prefer. The tradeoff of any RAID “level” is between read speed, write speed, the number of drives you can lose before you lose the array (with caveats), and the amount of space given up to redundancy/parity.
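For example, here is a sketch of what the ZFS version of a RAID 10 looks like; it’s a pool of striped mirrors, grown two disks at a time (device names hypothetical):

    # Four 2-way mirrors striped together: the ZFS analogue of an 8-disk RAID 10
    sudo zpool create tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd \
      mirror /dev/sde /dev/sdf \
      mirror /dev/sdg /dev/sdh

    # Expanding later happens one mirror pair at a time
    sudo zpool add tank mirror /dev/sdi /dev/sdj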

Your 8-disk RAID 10 has:

  • read speed of 4 drives
  • write speed of 4 drives
  • you can lose as few as 1 or as many as 4 drives and still have the array; it is very dependent on which drives fail. If a drive and its mirror partner both fail, you lose the array.
  • 50% of the array is dedicated to redundancy

A 7-disk RAID Z2 vdev would have:

  • read speed of 5 drives
  • write speed of 1 drive
  • it can lose any two drives and still have your data (though degraded); a third failure kills the array.
  • 2/7 (29%) of the vdev is dedicated to redundancy

Note that the slower write speed isn’t usually noticeable in RAID Z2 because of ZFS’s use of RAM caching. Note also that with four mirrored pairs of HDDs in your DAS, the best-case throughput is something like (4 drives * 2 Gbps =) 8 Gbps, so you aren’t accessing the DAS at anywhere near the 20 Gbps of a Thunderbolt 2 connection.

Again, though, I was just picking an example for conversation purposes, not trying to sell you on a particular layout, as I don’t know how you rank the capacity/performance/integrity factors. Mostly I was trying to answer your original questions: whether you can move your data to a set of new, possibly larger, drives and then wipe and re-use your 6 TB disks (yes, though with larger drives you would probably end up with two separate pools rather than one big pool), and whether you would lose the data in the new array when you did that (no).

Does your Mac currently have 10 Gbps networking?

Here are some references:


Thank you for all your help! I’ll visit those links you provided as well to study up some more.

My current Mac does not have 10 gig, but my PC does.

It’s been a while since I set up the RAID. It was done using the manufacturer-provided terminal software, with a RAID controller built into the DAS. It’s an Areca 8050T2.

That helps. It doesn’t change anything, but it’s good context for understanding your setup.

I think you just need to project your data needs and price out your options based on an end state on the HL15:

  1. end state using 6 TB drives in RAID 10
    You would need to get at least 8 new 6 TB drives to hold your current data.
    You could then grow the pool two disks (one mirror) at a time.
    But you have the drives to salvage from the DAS, so you would just add 6 of them to the HL15 for a total of 7 mirrored pairs and a usable space of 42 TB.

  2. end state using 6 TB drives but a different RAID level
    If reusing the DAS drives is important, but you foresee needing more than 42 TB usable, you could change the RAID type, either as I suggested above or in other ways.
    In this case you need a new set of seven 6 TB drives to copy your current data to.
    You would then add 7 drives from the DAS as a second vdev, keeping one spare drive ready for when you have a drive failure.
    Depending on whether you used RAID Z1 or Z2, this would give you a total usable space of 60-72 TB.

It would be nice if the HL15 had 16 drive bays instead of 15, since many of the RAID setups are based on multiples of 2. There are relatively easy ways to add a 16th drive but I’ll skip that here.

My concern with getting 7-8 new (or new-to-you) 6 TB drives is that you want to be sure they are 7200 RPM and aren’t SMR drives. I haven’t looked, but my guess is that 6 TB drives probably aren’t particularly cheap on a $/TB basis.
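One quick check you can run on any candidate drive, assuming smartmontools is installed: smartctl reports the spindle speed directly, though for CMR-vs-SMR you generally have to look up the exact model number in the manufacturer’s spec sheet:

    # Look for “Rotation Rate: 7200 rpm” in the output, and note the
    # model number so you can verify the drive is CMR rather than SMR
    sudo smartctl -i /dev/sda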

  3. end state using higher-capacity drives
    If you foresee needing more than 60-72 TB, then you need to go this route.
    The number of drives you need to start with depends on the RAID level and layout you choose, as discussed above. Although you could do a 5-drive vdev, I think the two main choices are to get either 7 new drives if you aren’t doing RAID 10, or enough drive pairs to hold your current 24 TB if you are doing RAID 10. A sweet spot for $/TB may be around 16 TB drives, so for a RAID 10 you could get four 16 TB drives to start and then grow by pairs of 16 TB drives.
    You could put some or all of the DAS drives in as well, but this would be a separate pool. I don’t know the Mac analogy, but on a PC it would be like having files spread across C: and D: drives instead of all in subdirectories of C:. You could run some of your smaller stuff, perhaps Home Assistant and the game servers, off the 6 TB-drive pool, and your larger stuff (Plex and video editing) off the new-drive pool. (See the sketch after this list.)
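To make the two-pool idea concrete, it would be something like this (pool, dataset, and device names are all invented for illustration):

    # Pool 1: the new high-capacity drives, for the heavy data
    sudo zpool create bigpool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
    sudo zfs create bigpool/media            # Plex library
    sudo zfs create bigpool/video-projects   # editing archive

    # Pool 2: the salvaged 6 TB drives, for the lighter services
    sudo zpool create oldpool raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
    sudo zfs create oldpool/homeassistant
    sudo zfs create oldpool/gameservers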

Larger drives are great if you need more space, but they are also less performant in some ways. You’ll get more performance from four 6 TB drives in a RAID than from a single 24 TB drive. It also takes more time to recover the array after a drive failure when the same amount of data sits on fewer drives: in that example, recovering a failed drive could take up to 4x as long in a RAID of 24 TB drives as in one of 6 TB drives, simply because each drive holds 4x the data that has to be rewritten. Just something to consider about going “too large”.


Thank you for the help!! I greatly appreciate it.