New Cockpit ZFS Module

Is anyone successfully using the new cockpit-zfs module? I enabled the testing repo and installed the new module. It’s unable to see my disks / pools etc. I have zero issues with the old module. I’m running Rocky 8.10 on ZFS 2.2.4 and also tried 2.2.6. Help is appreciated, thanks!

Hi @sinewaverx,

We are currently looking at how to upgrade from ZFS 2.1 to ZFS 2.2. Because of that, we have not yet published the Python package to our repository to support ZFS 2.2.

Until we get the upgrade path completed, here is a link to the RPM: https://45homelab.com/python3-libzfs-2.2.1-5.el8.x86_64.rpm

We are currently in the process of building these packages for el9, and we expect to have the repos up by the end of next week.

In the meantime, you can install the package linked above to restore proper functionality to cockpit-zfs.

Feel free to reach out to me at info@45homelab.com if you have further questions, and I will get back to you as soon as possible.

el9 build - great!

It felt weird installing el8 for this considering that I’ve spent the past year showing people how to upgrade el7/8 to 9…

How does one get access to the el9 based instructions? Totally willing to be a guinea pig to do either a fresh install or in-place upgrade.

Awesome, thanks! Are the Ubuntu repos getting updated as well? Are there plans to support both 22 and 24?

Whatever happened with the EL9 repositories? Are they up?

Hey Vikram! Any updates regarding new OS support?

Hi @sinewaverx,

Our team is currently working on this and will push the update out as soon as possible; they estimate it will be ready this week.

Is there any chance we could get a base image for installs that would be available somewhere?

Hi @daemon1001,

I have passed this request on to our team, and I will let you know as soon as it is available for download.

Hey homelabbers,

I am happy to let you know that we have completed the EL9 update. However, we are continuing to add packages.

Here is the URL to the repo.

https://repo.45drives.com/community/rocky/el9/stable/

Packages are being built and added as we work through them.

Here are the contents of the .repo file:

    [45drives_community_stable]
    enabled = 1
    priority = 1
    gpgcheck = 1
    repo_gpgcheck = 1
    baseurl = https://repo.45drives.com/community/rocky/el9/stable/
    gpgkey = https://repo.45drives.com/key/gpg.asc
    name = 45Drives EL9 Stable

Hi @Vikram-45HomeLab,

Any idea when completed repos and configuration scripts will be available for the new OSes? I’ve been watching the new packages.

Thanks!

Hi @sinewaverx, I am happy to let you know that the repo for Rocky 9 is available for download.

Once you have installed Rocky 9, visit https://repo.45drives.com/community/rocky/el9/stable/

Or you can use a simple one-line command to enable the el9 package repositories:

    curl -o /etc/yum.repos.d/45drives-community.repo https://repo.45drives.com/repofiles/rocky/45drives-community.repo

Apologies for the necropost, but since I’ve just gone through the 8 to 9 upgrade, thought I’d leave some notes for “future me” and hope it’s useful for others… (usual disclaimers, no warranty implied, etc, etc, etc…)

First - What doesn’t work

  • leapp, ELevate, or the other “usual suspects” for upgrading a RHEL-flavoured Linux system (I could not find a viable profile for a Rocky 8 to 9 upgrade; not to say one does not exist, but I could not find it)
  • the script https://scripts.45drives.com/rocky9-preconfig.sh fails when trying to install ZFS on Rocky 9
    • It looks like the zfsonlinux repos moved
    • I manually added the repo for http://download.zfsonlinux.org/epel/9.7/x86_64/, installed the required packages, then continued with the preconfig script
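
For what it’s worth, that manual repo addition can be sketched like this. Just a sketch: the stanza is written to the current directory rather than straight into /etc/yum.repos.d/, the repo ID `zfs-manual` is my own made-up name, and gpgcheck is left off for brevity (don’t do that on a real box without importing the OpenZFS signing key):

```shell
# Sketch of the manually-added repo stanza (baseurl is the one from the
# bullet above). Written to the current directory; copy it to
# /etc/yum.repos.d/ yourself as root.
cat > zfs-manual.repo <<'EOF'
[zfs-manual]
name=OpenZFS for EL9 (manually added)
baseurl=http://download.zfsonlinux.org/epel/9.7/x86_64/
enabled=1
# gpgcheck left off in this sketch -- import and verify the OpenZFS
# signing key before trusting packages on a real system
gpgcheck=0
EOF
echo "wrote $(pwd)/zfs-manual.repo"
```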

So, how did I upgrade, “the short version” - I didn’t.

Since I had the OS installed on separate devices from the zpool, the quickest way to rocky 9 was to nuke and pave the rocky 8 install and install rocky 9. So, here’s the slightly longer version with some added steps for those people that have not “done this a bunch of times”:

  1. quiesce the system
    1. shut down devices or other systems that are using resources from the storage server (you might have NFS or samba clients, possibly ESXi, openshift, KVM or proxmox hosts, iSCSI initiators, etc)
    2. stop services that might be trying to use or lock data, files or devices on the system (samba? nfs? scst? etc…)
  2. back up / export existing configuration and key files to a location off the box
    1. for stuff like this, I usually grab at least my homedir, /root/ and /etc/
    2. the 45drives tools stash some important stuff in /opt/45drives
    3. since I was running VMs on here, even though the disk backing files were living on the zpool, the configuration within KVM needed to be exported
      1. export “VM” configs
        virsh list --all --name | grep -v "^$" | while read domname ; do virsh dumpxml --domain $domname > dom-${domname}.virsh.xml ; done
      2. export storage pool configs
        virsh pool-list --all --name | grep -v "^$" | while read poolname ; do virsh pool-dumpxml --pool $poolname > pool-${poolname}.virsh.xml ; done
      3. export network configs
        virsh net-list --all --name | grep -v "^$" | while read netname ; do virsh net-dumpxml --network $netname > net-${netname}.virsh.xml ; done
      4. review the contents of /var/lib/libvirt to see if there are any backing files, ISOs or other things that your VMs care about and back those up with the rest of the configs
        1. ie: if your machines use a virtual TPM, the backing files for those live under /var/lib/libvirt/swtpm/ by default
      5. this was enough for my needs, but you should review your config to see if there’s something else you need to export/import/recreate
    4. I’m lazy, so instead of rebuilding my bonded, vlanned network configuration, I used nmstate to export my network config (you may need to install nmstate if it’s not already present)
      nmstatectl show > hl15-nmstate.yml
    5. while the services I care about all keep their configs under /etc, your system might be different, so review what you’re running, locate and back up those configurations, data and other stuff you care about
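
Tangent: the export steps in 2 above can be rolled into a single script. This is only a sketch using the paths and loops from this post (the `cfg-backup-` directory name is my own invention); trim or extend the directory list for whatever your own box runs:

```shell
#!/bin/bash
# Sketch: gather configs before the reinstall. Directory names are the ones
# mentioned in step 2; adjust the list for your own system.
set -u
backupdir="cfg-backup-$(date +%Y%m%d)"
mkdir -p "$backupdir"

# 2.1/2.2: plain config directories -- skip any that don't exist here
for dir in /etc /root /opt/45drives ; do
    [ -d "$dir" ] && tar -czf "$backupdir/$(basename "$dir").tar.gz" "$dir" 2>/dev/null || true
done

# 2.3: libvirt XML exports, only if virsh is installed
if command -v virsh >/dev/null 2>&1 ; then
    virsh list --all --name      | grep -v '^$' | while read -r d ; do virsh dumpxml      --domain  "$d" > "$backupdir/dom-$d.virsh.xml"  ; done
    virsh pool-list --all --name | grep -v '^$' | while read -r p ; do virsh pool-dumpxml --pool    "$p" > "$backupdir/pool-$p.virsh.xml" ; done
    virsh net-list --all --name  | grep -v '^$' | while read -r n ; do virsh net-dumpxml  --network "$n" > "$backupdir/net-$n.virsh.xml"  ; done
fi

# 2.4: network layout via nmstate, if present
command -v nmstatectl >/dev/null 2>&1 && nmstatectl show > "$backupdir/nmstate.yml"

echo "backup written to $backupdir -- now copy it OFF this machine"
```
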
  3. you did copy all that backed up information to a usb drive or your laptop, right?
    1. make a second copy of that information, just in case
  4. shut the machine down and prepare it for the upgrade
    1. remove any USB or other storage devices not needed for the OS install
    2. attach a DVD drive or mount the Rocky9 ISO through the BMC
    3. if you want to make ABSOLUTELY sure that you don’t nuke your zpool, disconnect the SATA cables that go to the backplane
    4. if you have any other hardware changes to make (NICs, memory, GPU…) this would be a good time to take care of it
  5. boot from the Rocky9 ISO (I prefer a minimal install, so I used the rocky9 minimal ISO, but use your preference) and begin the install
    1. at the storage selection screen, make sure you ONLY have your OS device selected, then choose “custom partitioning”
    2. ===POINT OF NO RETURN===
    3. delete the partitions and volumes that make up the rocky8 install
    4. click the option to have the installer create an LVM filesystem layout for you and customize it to your requirements (I like to have separate volumes for /var/, /var/log/, /var/log/libvirt/ at least)
    5. continue with configuring a minimal network configuration, hostname, initial users, software selection, etc as you require and click to start the install
  6. when the install is complete, shut the machine down and revert the steps you made to prepare the machine
    1. detach the Rocky9 ISO
    2. reconnect the SATA cables for your zpool devices if you disconnected them
  7. boot your new rocky9 machine, log in, sudo to root, and start restoring the configuration
    1. DON’T PANIC! - your zpool will not show up initially, this is expected since there are no ZFS tools or modules installed yet
    2. recreate or restore the network configuration
      nmstatectl apply hl15-nmstate.yml
    3. if the machine was a member of an IPA realm, this would be a good time to rejoin the realm
    4. grab the preconfig script and run it
      curl -O https://scripts.45drives.com/rocky9-preconfig.sh
      bash ./rocky9-preconfig.sh
    5. if the preconfig script fails at the ZFS module step
      1. install the correct repo and add a couple required packages
        dnf install http://download.zfsonlinux.org/epel/9/zfs-release-3-0.el9.noarch.rpm
        dnf config-manager --set-enabled zfs
        dnf install zfs-dkms zfs
      2. rerun the preconfig script and allow it to continue where it left off
    6. restore, merge or recreate key config files from the previously-made config backup
      1. /etc/vdev_id.conf
      2. /etc/45drives/server_info/server_info.json
      3. /opt/45drives/dalias/dalias.conf
    7. reload vdev aliases
      1. udevadm control --reload
    8. enable and start cockpit (the web service is socket-activated, so enable the socket unit)
      1. systemctl enable --now cockpit.socket
      2. firewall-cmd --add-service cockpit
      3. firewall-cmd --add-service cockpit --permanent
    9. reattach your ZFS pool (or just log in to cockpit and import from there, assuming the vdev aliases were configured correctly)
      1. zpool list
      2. zpool import POOLNAME
    10. log in to cockpit (port 9090 of the machine’s IP) to review the ZFS pool and device status
      1. if any problems are seen with your pool at this point, resolve them here before continuing
    11. restore the NFS configuration
      1. dnf install nfs-server
      2. firewall-cmd --add-service nfs
      3. firewall-cmd --add-service nfs --permanent
      4. systemctl enable nfs-server
      5. systemctl start nfs-server
      6. copy the exports files back into place from the backup
        1. /etc/exports
        2. /etc/exports.d/*
      7. export the filesystems
        exportfs -rav
    12. restore the VM configuration
      1. dnf install cockpit-machines
      2. restore the soft-tpm, ISOs and other backing files under /var/lib/libvirt
      3. recreate the storage domains from the XML files
        for poolfile in pool-*.virsh.xml ; do virsh pool-define --file $poolfile ; done
      4. recreate the networks from the XML files
        for netfile in net-*.virsh.xml ; do virsh net-define --file $netfile ; done
      5. review the status of the storage pools and networks (ie: from cockpit) to make sure they are all in state active
      6. recreate the VMs from the XML files
        for vmfile in dom-*.virsh.xml ; do virsh define --file $vmfile ; done
      7. restore the “user session” polkit fix if required from the backup files
        /etc/polkit-1/localauthority/50-local.d/50-org.example-libvirt-remote-access.pkla
      8. verify that your VMs can start and run correctly
    13. copy anything else that you care about that was in your backup (iSCSI configs, contents of root’s homedir, etc)
    14. reboot the machine at least once to make sure that all services come back up automatically
    15. restart client systems and ensure they are happy
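
The sanity check in step 14 lends itself to a tiny script too. Again just a sketch: the unit names are the ones restored in this write-up, and the report filename is my own invention, so add your own units (smb, scst, etc.) as needed:

```shell
# Sketch: after the post-reboot login, record whether the services restored
# above came back on their own. Unit names are the ones from this write-up.
report=post-reboot-check.txt
: > "$report"
for unit in cockpit.socket nfs-server libvirtd ; do
    if command -v systemctl >/dev/null 2>&1 ; then
        state=$(systemctl is-active "$unit" 2>/dev/null || true)
    else
        state="systemctl not found"   # e.g. running this sketch off-box
    fi
    printf '%-16s %s\n' "$unit" "${state:-unknown}" >> "$report"
done
cat "$report"
# On the real machine, also confirm the pool is healthy:  zpool status -x
```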

This got me to the point where everything I cared about was back up and running. Proxmox and ESXi systems were happy using the NFS service with the restored configuration, and since the machine was rejoined to my freeIPA realm early on, the file ownership was all reflected correctly when doing a directory listing of an NFS share.

Hope this helps someone. It may look like a lot of work, but if you are careful, methodical, a little bit paranoid, and BACK UP YOUR CONFIGURATION AND DATA, it should be possible for anyone with a working knowledge of Linux installation and command line management.