Houston ZFS Pool - Which pool type did I create?

Yes it’s an incredibly dumb question.

It was late when I got Houston up and running - grabbing the wrong OS install media can waste a lot of time. I could swear that I set my pool to use the 3x 18TB drives in RAIDZ1, but looking at the numbers next to a ZFS pool calculator, I can’t tell whether the pool was actually created as RAIDZ1.

“zpool list -v tank” and “zpool history -i tank” did not give me any answers.

The ZFS capacity calculator (ZFS Capacity Calculator - WintelGuy.com) says that I should have…

  • 49TB of storage capacity,
  • 54TB raw,
  • 31.6TB usable, and
  • 28.4TB usable after the 10% reservation and slop space allocation.
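Those calculator figures can be roughly reproduced with some back-of-the-envelope math, assuming 3 × 18 TB (decimal TB) drives and that zpool reports binary TiB:

```shell
# Rough RAIDZ1 capacity math for 3 x 18 TB (decimal) drives.
# Approximations only; the WintelGuy calculator models more overhead.
awk 'BEGIN {
  drives = 3; size_tb = 18
  raw_tib  = drives * size_tb * 1e12 / 2^40   # 54 TB raw -> TiB, as zpool reports it
  data_tib = raw_tib * (drives - 1) / drives  # RAIDZ1: one drive of parity
  printf "raw: %.1f TiB, usable before ZFS overhead: %.1f TiB\n", raw_tib, data_tib
}'
```

That ~49.1 TiB raw lines up with the 49TB figure above, and ~32.7 TiB shrinks toward the 31.6TB figure once metadata overhead is accounted for.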

Houston shows that pool “tank” has the following values

Is this RAIDZ1 or did I misclick?

Thanks in advance for helping a silly person who really does not want to re-transfer 10TB of data

Hi @Specter,
An easy way to check via Houston is to use the 45Drives Disks module, then click on one of the drives in the pool.

It appears you might not have a full build, so I’m not sure whether the 45Drives Disks module is available to you.

That is correct - hacking that plugin to work with my HBA (a 9300-16i) is at the top of the stack… but I don’t know where to begin yet.

Specter continues to regret not getting the full build.


Here’s the zpool metadata…
Blocks LSIZE PSIZE ASIZE avg comp %Total Type
- - - - - - - unallocated
2 32768 8192 24576 12288 4.00 0.00 object directory
3 393216 12288 36864 12288 32.00 0.00 L1 object array
12 6144 6144 147456 12288 1.00 0.00 L0 object array
15 399360 18432 184320 12288 21.67 0.00 object array
1 16384 4096 12288 12288 4.00 0.00 packed nvlist
- - - - - - - packed nvlist size
1 16384 4096 12288 12288 4.00 0.00 bpobj
- - - - - - - bpobj header
- - - - - - - SPA space map header
681 11157504 2789376 8368128 12288 4.00 0.00 L1 SPA space map
1950 7987200 7745536 23236608 11916 1.03 0.00 L0 SPA space map
2631 19144704 10534912 31604736 12012 1.82 0.00 SPA space map
- - - - - - - ZIL intent log
3 393216 12288 24576 8192 32.00 0.00 L5 DMU dnode
3 393216 12288 24576 8192 32.00 0.00 L4 DMU dnode
3 393216 12288 24576 8192 32.00 0.00 L3 DMU dnode
3 393216 12288 24576 8192 32.00 0.00 L2 DMU dnode
4 524288 86016 176128 44032 6.10 0.00 L1 DMU dnode
1286 21069824 6844416 13824000 10749 3.08 0.00 L0 DMU dnode
1302 23166976 6979584 14098432 10828 3.32 0.00 DMU dnode
4 16384 16384 36864 9216 1.00 0.00 DMU objset
- - - - - - - DSL directory
- - - - - - - DSL directory child map
- - - - - - - DSL dataset snap map
9 99840 24576 73728 8192 4.06 0.00 DSL props
- - - - - - - DSL dataset
- - - - - - - ZFS znode
- - - - - - - ZFS V0 ACL
8657 1134690304 35471360 70942720 8194 31.99 0.00 L2 ZFS plain file
60448 7923040256 2003857408 4007714816 66300 3.95 0.07 L1 ZFS plain file
44248459 5798485350912 5787062816256 5787070341120 130785 1.00 99.93 L0 ZFS plain file
44317564 5807543081472 5789102145024 5791148998656 130673 1.00 100.00 ZFS plain file
660 86507520 2703360 5406720 8192 32.00 0.00 L1 ZFS directory
3976 24655872 7818752 20873216 5249 3.15 0.00 L0 ZFS directory
4636 111163392 10522112 26279936 5668 10.56 0.00 ZFS directory
3 1536 1536 24576 8192 1.00 0.00 ZFS master node
- - - - - - - ZFS delete queue
- - - - - - - zvol object
- - - - - - - zvol prop
- - - - - - - other uint8
- - - - - - - other uint64
- - - - - - - other ZAP
- - - - - - - persistent error log
1 131072 4096 12288 12288 32.00 0.00 SPA history
- - - - - - - SPA history offsets
- - - - - - - Pool properties
- - - - - - - DSL permissions
- - - - - - - ZFS ACL
- - - - - - - ZFS SYSACL
- - - - - - - FUID table
- - - - - - - FUID table size
- - - - - - - DSL dataset next clones
- - - - - - - scan work queue
- - - - - - - ZFS user/group/project used
- - - - - - - ZFS user/group/project quota
- - - - - - - snapshot refcount tags
- - - - - - - DDT ZAP algorithm
- - - - - - - DDT statistics
31 31744 31744 253952 8192 1.00 0.00 System attributes
- - - - - - - SA master node
3 4608 4608 24576 8192 1.00 0.00 SA attr registration
6 98304 24576 49152 8192 4.00 0.00 SA attr layouts
- - - - - - - scan translations
- - - - - - - deduplicated block
- - - - - - - DSL deadlist map
- - - - - - - DSL deadlist map hdr
- - - - - - - DSL dir clones
- - - - - - - bpobj subobj
10 163840 40960 122880 12288 4.00 0.00 L1 deferred free
18 172032 73728 221184 12288 2.33 0.00 L0 deferred free
28 335872 114688 344064 12288 2.93 0.00 deferred free
- - - - - - - dedup ditto
15 41472 15872 73728 4915 2.61 0.00 other
3 393216 12288 24576 8192 32.00 0.00 L5 Total
3 393216 12288 24576 8192 32.00 0.00 L4 Total
3 393216 12288 24576 8192 32.00 0.00 L3 Total
8660 1135083520 35483648 70967296 8194 31.99 0.00 L2 Total
61806 8021786624 2009489408 4021825536 65071 3.99 0.07 L1 Total
44255810 5798539749376 5787085444608 5787129241600 130765 1.00 99.93 L0 Total
44326285 5807697799168 5789130454528 5791222108160 130649 1.00 100.00 Total

                        capacity   operations   bandwidth  ---- errors ----

description used avail read write read write read write cksum
tank 5.27T 43.8T 520 0 16.9M 0 0 0 0
/dev/disk/by-id/wwn-0x5000cca2bac1663b-part1 1.76T 14.6T 173 0 5.64M 0 0 0 0
/dev/disk/by-id/wwn-0x5000cca2a6ce892e-part1 1.76T 14.6T 172 0 5.53M 0 0 0 0
/dev/disk/by-id/wwn-0x5000cca2a6c4afcc-part1 1.75T 14.6T 174 0 5.70M 0 0 0
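One hint already visible in the listing above (just my reading of it): each disk sits directly under tank as its own top-level vdev, with no raidz group in between, and the pool’s avail column is simply the sum of the three disks’ avail:

```shell
# Each disk's avail (~14.6T) summed gives the pool's 43.8T avail -- consistent
# with three independent top-level vdevs (a stripe); a RAIDZ1 pool would
# instead group the disks under a raidz1-0 vdev line.
awk 'BEGIN { printf "3 x 14.6T = %.1fT\n", 3 * 14.6 }'
```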

Not a problem. Just drop into a command line and run “zpool status”; it will tell you the type of zpool.
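To make the distinction concrete: in “zpool status” output, a RAIDZ1 pool groups its disks under a raidz1-0 vdev, while a striped pool lists each disk directly under the pool name. A sketch of the check, using a made-up RAIDZ1 layout (the sample text is illustrative, not from this system):

```shell
# Hypothetical `zpool status` vdev tree for a RAIDZ1 pool; on a live
# system, `zpool status tank | grep raidz` performs the same check.
sample='  NAME        STATE
  tank        ONLINE
    raidz1-0  ONLINE
      sda     ONLINE
      sdb     ONLINE
      sdc     ONLINE'
printf '%s\n' "$sample" | grep -q raidz1 \
  && echo "pool uses RAIDZ1" \
  || echo "no raidz vdev: disks are striped"
```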


image
well rats.

You can also run “zpool history”; that will show you what options were used when you created the pool.

@Specter, you can see your pool status by going to the ZFS tab and pressing the Status tab.

Looking at the image you uploaded below, it looks like you simply have 3 drives in a pool with no RAID, so these are simply striped together. I highly recommend deleting and recreating the pool with RAIDZ1 for at least 1 drive of parity.
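A sketch of that recreate step, using the wwn device IDs from the iostat listing earlier in the thread. This is destructive (the pool and everything on it is wiped, so copy the data off first); the sketch only writes the commands to a script and syntax-checks it, so you can review before actually running anything:

```shell
# DESTRUCTIVE sketch: destroys "tank" and recreates it as RAIDZ1.
# Device IDs are taken from the iostat listing above; verify them with
# `ls /dev/disk/by-id/` before use.
cat <<'EOF' > recreate-tank.sh
zpool destroy tank
zpool create tank raidz1 \
  /dev/disk/by-id/wwn-0x5000cca2bac1663b \
  /dev/disk/by-id/wwn-0x5000cca2a6ce892e \
  /dev/disk/by-id/wwn-0x5000cca2a6c4afcc
EOF
sh -n recreate-tank.sh && echo "syntax OK; review before running"
```

(zpool create is given whole-disk IDs here; ZFS partitions them itself, which is why the iostat listing shows -part1 suffixes.)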

Was able to confirm that it was wrong.

Changed a few settings, but I rebuilt it and
image
