My HL-4 just arrived. After some speedy help to provide the changed password, the box is up and ticking.
But I’m a ZFS admin newbie with a problem. The plan was to salvage four drives from a home-brew TrueNAS system whose motherboard let the magic smoke escape. Stone cold dead. Not a peep. Not even a snow crash.
The disks were part of a six-disk RAIDZ2 pool. I had planned to salvage four of them for a new RAIDZ2 pool in the HL-4.
The only working system I have with SATA drive bays is the new HL-4. The disks still have their original RAIDZ2 headers and trailers and are marked as active, as shown below.
[45drives@homelab ~]$ sudo lsblk -f /dev/sda
[sudo] password for 45drives:
NAME FSTYPE LABEL UUID MOUNTPOINT
sda zfs_member
├─sda1 linux_raid_member sherman:swap0 370031e6-ea20-b39a-7a28-bb3e45bcc4c4
│ └─md127
└─sda2 zfs_member FreeNAS 3002080908045744899
[45drives@homelab ~]$
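For what it’s worth, the ZFS labels themselves can be read directly from the data partition; a minimal sketch, assuming the ZFS utilities are installed on the HL-4:

# Dump the ZFS label from the data partition (read-only, safe to run)
sudo zdb -l /dev/sda2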
I tried to use the macOS Disk Utility to repartition the disks, but the USB disk dongle couldn’t source enough power to spin up the drives.
ZFS commands on the new host (I kept the hostname homelab) won’t clean up the media because Sherman (deceased) still owns them.
Can I import the pool with missing media?
Will it sort itself out?
If it imports, can it be exported and then cleared?
Will dd from /dev/zero cover all the blocks, including the ZFS headers? If zero-filled, will ZFS know what to do with the media? What size spec catches all the sectors?
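For reference, a sketch of what I expect to try first, assuming the pool name really is FreeNAS as the lsblk LABEL column above suggests:

# Scan for importable pools without actually importing anything
sudo zpool import

# RAIDZ2 tolerates two missing devices, so four of the six disks should
# import in a DEGRADED state; readonly keeps the import from writing anything
sudo zpool import -f -o readonly=on FreeNAS

# Exact device size, if a dd size spec is ever needed
sudo blockdev --getsize64 /dev/sda   # size in bytes
sudo blockdev --getsz /dev/sda       # size in 512-byte sectors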
I’ll check when I return to this after some Minister of Coin tedious monotony.
What is clear is that OpenZFS is bound and determined to keep you reasonably safe from lapses of mind or clumsy fingers.
Overnight reading brought the FreeBSD Handbook to mind; it has a good ZFS section but nothing on this specific situation. Reading indicates GUID partition table (GPT) tools may save the day.
I’d recommend using “wipefs -af /dev/sdX” to wipe the filesystem and partition-table signatures off the drives.
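For example, assuming the four salvaged disks show up as /dev/sda through /dev/sdd (double-check with lsblk first, since wipefs is destructive):

# Wipe all filesystem, RAID, and partition-table signatures from each disk
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    sudo wipefs -af "$disk"
done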
If that still does not remove the partitions, you could also use “fdisk /dev/sdX” to select your drive, use option “d” to delete the partitions, and then “w” to write the changes.
This should remove the partitions and allow you to create a ZFS pool. There is also a force option (-f) when creating a new ZFS pool, which will overwrite the existing data on the disks and should allow you to build the pool.
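Something like this, with the pool name and device names as placeholders:

# -f forces creation even though the disks still carry old ZFS/RAID signatures
sudo zpool create -f tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Using /dev/disk/by-id/... paths instead of /dev/sdX is generally safer,
# since the sdX names can change between boots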
Then just use the dd method. Refer to the article link.
It’ll take longer because it’s writing to the whole disk. I don’t think bs has to be an exact multiple of anything, so just use the bs from the example. I think you can also do this zero-fill from the Houston GUI.
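A sketch of the whole-disk version, with /dev/sdX as a placeholder for one of the salvaged disks; writing the entire device also covers the labels ZFS keeps at both the beginning and the end of the disk:

# Zero the whole device; bs=1M is just a convenient write size
# and status=progress shows how far along it is
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress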
There are ways to get all four going in parallel, e.g. nohup dd… &. nohup will keep the command running even if you terminate the shell session, and & returns you to the command prompt after starting the command, allowing you to enter another one.
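As a sketch, using the same placeholder device names and per-disk log files picked just for illustration; run sudo -v first so the password is already cached:

# Start all four wipes in the background; nohup keeps them running
# even if the SSH session drops
nohup sudo dd if=/dev/zero of=/dev/sda bs=1M > /tmp/wipe-sda.log 2>&1 &
nohup sudo dd if=/dev/zero of=/dev/sdb bs=1M > /tmp/wipe-sdb.log 2>&1 &
nohup sudo dd if=/dev/zero of=/dev/sdc bs=1M > /tmp/wipe-sdc.log 2>&1 &
nohup sudo dd if=/dev/zero of=/dev/sdd bs=1M > /tmp/wipe-sdd.log 2>&1 &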
In my experience, you only need to wipe the partition table to allow the disks to be used with a new ZFS pool. I’ve done this a few different ways, but I find the Houston / Cockpit Storage module to be the easiest. @Hutch-45Drives’ suggestion should also work. If you go the dd route, you only need to wipe the first 1-2 megabytes of the drive; no need to wipe the entire disk unless you really want the data gone.
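If you only want the quick version, something along these lines (the device name is a placeholder):

# Zero just the first 2 MiB -- enough for the partition table and the
# signatures at the front of the disk
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=2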
I deleted the partitions, then recreated them using the original sizes: 2G and the rest.
I used type ZFS swap for the first, and ZFS filesystem for the second.
With some fiddling, cockpit-zfs would do the creation but still can’t see them. The terminal says they are happy.
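For the record, roughly what that repartitioning looks like from the terminal with sgdisk; a sketch only, with /dev/sdX as a placeholder and gdisk’s a502/a504 type codes standing in for the swap and ZFS types (the tool you use may label them differently):

# Clear the old GPT, then recreate the original layout:
# a 2 GiB swap partition followed by ZFS on the rest of the disk
sudo sgdisk --zap-all /dev/sdX
sudo sgdisk -n 1:0:+2G -t 1:a502 /dev/sdX
sudo sgdisk -n 2:0:0   -t 2:a504 /dev/sdX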