HL8 cannot create Storage Pool

I just received a new HL8. After booting it and patching it for the first time, I installed the hard drives and started the process of creating a storage pool. The unit was powered off when I installed the drives; there were no drives installed during the first and second boots of the unit.

I went to “45Drives ZFS”, clicked Create Storage Pool, and started the wizard. On step 02, my 8 disks are not displayed, so I cannot select my hard drives to set up the RAIDZ2 pool.

From the command line, the OS is detecting the disks: /dev/sda through /dev/sdh.
From the “Storage Devices” menu, Houston is displaying my 8 hard drives with the OS labels.
Under Storage Devices, each individual disk is flagged “10.9 TiB Unrecognized data” since they’re new and have no partitions or formatting.
All 8 drives are the same model: Seagate ST12000VN0008-2Y

Can someone tell me how to get Houston to recognize my disks properly so I can create the ZFS pool?

You may want to be aware of this thread:

Although it’s not clear to me whether the bugs in cockpit-zfs and/or cockpit-zfs-manager were ever resolved.

Perhaps start by confirming here which package is installed and at which version. Then someone may be able to advise on whether that might be the cause of your issue and how to proceed.
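
If it helps, you can list the installed Cockpit/ZFS packages and their versions from the shell; a minimal check (the exact package names on your install may differ):

# list any installed Cockpit/ZFS packages with their versions
rpm -qa | grep -i -e cockpit -e zfs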

I have:

cockpit-zfs.noarch 1.1.18-5.el8

I uninstalled cockpit-zfs, then installed cockpit-zfs-manager. It was no help.

I had to uninstall cockpit-zfs-manager and reinstall cockpit-zfs. After another reboot, I’m back where I started: it’s still not detecting my disks.

Originally, I started with a fully patched system.

Is it possible to get some screenshots of what you’re seeing? I don’t doubt what you’re reporting, but it’s often good to see the screen in its entirety. Sometimes there’s a detail on screen that doesn’t seem relevant but actually is.

Also, do you happen to know what the 2Y means on these models? I’m not finding any documentation on it when googling around.

And, finally, welcome to the forums!!

Screenshot 1/4

Screenshot 2/4

Screenshot 3/4

Screenshot 4/4

I just powered on the HL8 before I took the screenshots.

A sample of where I source the hard drives:

Amazon.ca

Thanks for this @vertaxis! In the wizard, does anything change if you switch the Disk Identifier? I think there should be an option to use the device path. Aliasing is the way to go, but if the drives show up with a different method, that tells us the alias mapping is messed up somehow.

All 3 disk identifiers produce the same result. None of the 8 hard drives are listed.


Hi @vertaxis, could you run “lsblk” and make sure the server is seeing the 8 HDDs you have populated?

If that does not return the 8 HDDs, I would recommend reaching out to info@45homelab.com to have someone look into this issue with you.


[root@homelab 45drives]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0  10.9T  0 disk
sdb               8:16   0  10.9T  0 disk
sdc               8:32   0  10.9T  0 disk
sdd               8:48   0  10.9T  0 disk
sde               8:64   0  10.9T  0 disk
sdf               8:80   0  10.9T  0 disk
sdg               8:96   0  10.9T  0 disk
sdh               8:112  0  10.9T  0 disk
nvme0n1         259:0    0 931.5G  0 disk
├─nvme0n1p1     259:1    0   600M  0 part /boot/efi
├─nvme0n1p2     259:2    0   5.8G  0 part /boot
└─nvme0n1p3     259:3    0 925.2G  0 part
  ├─rl-root     253:0    0 303.3G  0 lvm  /
  ├─rl-swap     253:1    0  23.3G  0 lvm  [SWAP]
  └─rl-home     253:2    0 597.9G  0 lvm  /home

The OS sees the drives fine; from my perspective, the problem appears to be in Houston. Are the ZFS modules not seeing the devices?

ls -l /dev

brw-rw----. 1 root disk 8, 0 May 5 08:34 sda
brw-rw----. 1 root disk 8, 16 May 5 08:34 sdb
brw-rw----. 1 root disk 8, 32 May 5 08:34 sdc
brw-rw----. 1 root disk 8, 48 May 5 08:34 sdd
brw-rw----. 1 root disk 8, 64 May 5 08:34 sde
brw-rw----. 1 root disk 8, 80 May 5 08:34 sdf
brw-rw----. 1 root disk 8, 96 May 5 08:34 sdg
brw-rw----. 1 root disk 8, 112 May 5 08:34 sdh
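
As a further sanity check that ZFS itself (outside of Houston) can use these devices, I understand a dry-run pool creation should print the would-be layout without touching the disks; something like this, where “tank” is just a placeholder name:

# dry run: -n prints the raidz2 layout without actually creating a pool
zpool create -n tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh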


For the most part, Cockpit, and by extension the 45Drives Houston modules, run the same commands from the command line that you would run yourself. I’m sure lsblk showing the drives has a few people at 45HomeLab scratching their heads.

This shouldn’t matter, but maybe try wipefs or formatting one of the drives to see if it shows up after that.
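
Something along these lines, assuming the drive holds nothing you need (wipefs -a is destructive):

# list existing signatures first (read-only)
wipefs /dev/sda
# only if you’re sure, erase all partition/filesystem signatures
wipefs -a /dev/sda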

Page 32 of the User Guide:

“If you are unable to create the pool, ensure the drives you are using are free of any partitions. Ensure the disks you are using to create the pool are of the same size.”

These are brand new drives with no partitions. To format a drive, I would need to create at least 1 partition.

Hey @vertaxis,

Can you try running the command “dmap” in the CLI to see if this helps at all? Once you run dmap, try using the command lsdev to see if it populates the drives.

There is a change: I think it sees 1-1, 1-2, 1-3, and 1-4.

It’s not seeing 2-1, 2-2, 2-3, and 2-4.

[root@homelab 45drives]# dmap
/opt/45drives/tools/server_identifier: ipmitool fru command failed. IPMI is not present on this system. Attempting to pull info from DMI tables

Scan Results:

{
  "Motherboard": {
    "Manufacturer": "Gigabyte Technology Co., Ltd.",
    "Product Name": "B550I AORUS PRO AX",
    "Serial Number": "Default string"
  },
  "HBA": [],
  "Hybrid": false,
  "Serial": "216237-1-01",
  "Model": "HomeLab-HL8",
  "Alias Style": "HOMELAB",
  "Chassis Size": "HL8",
  "VM": false,
  "Edit Mode": false,
  "OS NAME": "Rocky Linux",
  "OS VERSION_ID": "8.10",
  "Auto Alias": false,
  "HWRAID": false
}

Reloading udev rules
Triggering udev rules

/etc/vdev_id.conf:

# This file was generated using dmap 4.0.7-1 (/opt/45drives/tools/dmap).

alias 1-1 /dev/disk/by-path/pci-0000:02:00.1-ata-1
alias 1-2 /dev/disk/by-path/pci-0000:02:00.1-ata-2
alias 1-3 /dev/disk/by-path/pci-0000:02:00.1-ata-3
alias 1-4 /dev/disk/by-path/pci-0000:02:00.1-ata-4
alias 2-1 /dev/disk/by-path/pci-0000:04:00.0-ata-1
alias 2-2 /dev/disk/by-path/pci-0000:04:00.0-ata-2
alias 2-3 /dev/disk/by-path/pci-0000:04:00.0-ata-3
alias 2-4 /dev/disk/by-path/pci-0000:04:00.0-ata-4

[root@homelab 45drives]# lsdev
╔═══════════════════════════════════╗
║         Storage Disk Info         ║
╠═════════════════╦═════════════════╣
║ Row 2 ID (Dev)  ║ Row 1 ID (Dev)  ║
╠═════════════════╬═════════════════╣
║ 2-1 (-)         ║ 1-1 (/dev/sda)  ║
║ 2-2 (-)         ║ 1-2 (/dev/sdb)  ║
║ 2-3 (-)         ║ 1-3 (/dev/sdc)  ║
║ 2-4 (-)         ║ 1-4 (/dev/sdd)  ║
╚═════════════════╩═════════════════╝
← motherboard | front plate →
Legend:
Empty   Occupied (no partitions)   Occupied (1 or more partitions)

After a reboot, the Wizard is showing the 4 drives that were found.

The Forum is blocking me from posting. It thinks I’m posting too many messages in a day.

I found something.

From running dmap:

# This file was generated using dmap 4.0.7-1 (/opt/45drives/tools/dmap).

alias 1-1 /dev/disk/by-path/pci-0000:02:00.1-ata-1
alias 1-2 /dev/disk/by-path/pci-0000:02:00.1-ata-2
alias 1-3 /dev/disk/by-path/pci-0000:02:00.1-ata-3
alias 1-4 /dev/disk/by-path/pci-0000:02:00.1-ata-4
alias 2-1 /dev/disk/by-path/pci-0000:04:00.0-ata-1
alias 2-2 /dev/disk/by-path/pci-0000:04:00.0-ata-2
alias 2-3 /dev/disk/by-path/pci-0000:04:00.0-ata-3
alias 2-4 /dev/disk/by-path/pci-0000:04:00.0-ata-4

From looking at /dev/disk/by-path from the command line:

[root@homelab tools]# ls -l /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:02:00.1-ata-1 -> ../../sda
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:02:00.1-ata-2 -> ../../sdb
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:02:00.1-ata-3 -> ../../sdc
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:02:00.1-ata-4 -> ../../sdd
lrwxrwxrwx. 1 root root 13 May 5 16:11 pci-0000:04:00.0-nvme-1 -> ../../nvme0n1
lrwxrwxrwx. 1 root root 15 May 5 16:11 pci-0000:04:00.0-nvme-1-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root 15 May 5 16:11 pci-0000:04:00.0-nvme-1-part2 -> ../../nvme0n1p2
lrwxrwxrwx. 1 root root 15 May 5 16:11 pci-0000:04:00.0-nvme-1-part3 -> ../../nvme0n1p3
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:07:00.0-ata-1 -> ../../sde
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:07:00.0-ata-2 -> ../../sdf
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:07:00.0-ata-3 -> ../../sdg
lrwxrwxrwx. 1 root root  9 May 5 16:11 pci-0000:07:00.0-ata-4 -> ../../sdh

They don’t match for the last 4 devices. Any advice on how to fix this?

I’m assuming that dmap is writing these mappings somewhere for Cockpit/Houston to use. Is this something I can manually edit? If so, what files do I need to change?

Hey @vertaxis

Let me check with our development team just to be sure and see what we can do. Are the 4 drives showing up for zpool creation now?

You should be able to go into the file /etc/vdev_id.conf and edit the lines for 2-1 through 2-4 to point at the correct PCI paths to fix this!
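
Based on your by-path listing, a sketch of what the corrected row-2 entries might look like, assuming the pci-0000:07:00.0 controller really is the one feeding those four bays:

alias 2-1 /dev/disk/by-path/pci-0000:07:00.0-ata-1
alias 2-2 /dev/disk/by-path/pci-0000:07:00.0-ata-2
alias 2-3 /dev/disk/by-path/pci-0000:07:00.0-ata-3
alias 2-4 /dev/disk/by-path/pci-0000:07:00.0-ata-4

Then reload and retrigger udev so the aliases get recreated:

udevadm control --reload-rules
udevadm trigger

Note that rerunning dmap may regenerate /etc/vdev_id.conf and overwrite manual edits.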