Proxmox on HL15

I updated the GitHub Issue to give more detail:

  • Installed package version numbers and the source repo
  • Shared that the 45Drives Motherboard is the only item not fully working
  • Gave the use case to reproduce, as the issue happens during the “Motherboard & CPU” portion.
  • Shared a snapshot of the error.
  • Noted what is working within the 45Drives Motherboard feature.

I checked the Python script within the helper_script. I noticed that the prebuilt “HL15 - Fully Built & Burned In” models do not include the two motherboard models or the CPU model.

I did not debug the problem further, as I assume more updates are needed than just the Python script.

2 Likes

I will need to create all of the graphics for this motherboard as well as map out the various regions within it to add this functionality.

I have been pretty busy trying to get the new Stornado up and running and just haven’t had the time to implement this yet. I’ll update here once I do. Luckily, the two motherboard offerings are pretty similar, so I’ll get support for both the X11SPH-nCTPF and X11SPH-nCTF added in the same release.

2 Likes

@mhooper, if you need any help let me know. I started reading the repo a couple days ago.

I appreciate all your work on the project - thank you in advance.

Same here, @mhooper: if you need me to get any info or whatever from my machine, let me know! Happy to help.

The changes I made from the stock full build are:

  • Added more RAM and filled up all the RAM slots.
  • Swapped the processor to an Intel Xeon Silver 4210.
  • Added an NVIDIA 1080 into the PCIe x16 slot.
1 Like

@Hutch-45Drives How do Proxmox and Houston manage RAM? With ZFS using a lot of RAM for caching, would it reserve a set amount for the ZFS cache, or use all that’s available?

Also, to migrate from TrueNAS, is it as simple as exporting the ZFS pool and then importing it into Houston?

Thanks,
Josh

Hey @CapitalJD! I know I’m not @Hutch-45Drives but I think I can help. :grinning:

RAM is managed by the kernel in Proxmox (and TrueNAS, for that matter). The default value of /sys/module/zfs/parameters/zfs_arc_max on Linux should be 0, which means ZFS will use up to 50% of available RAM for the ARC, leaving the other 50% for other resources like VMs. Proxmox has a wiki page on adjusting this if you want more or less RAM available to the ZFS ARC.

ZFS on Linux - Proxmox VE
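In case it helps, here is a minimal sketch of checking and adjusting the limit from the shell. The 8 GiB value (8589934592 bytes) is just an example, not a recommendation, so size it for your own RAM:

# Show the current ARC maximum in bytes (0 means the default of ~50% of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max

# Temporarily cap the ARC at 8 GiB; this resets on reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# To make it persistent, put a line like this in /etc/modprobe.d/zfs.conf,
# then rebuild the initramfs (update-initramfs -u) and reboot:
# options zfs zfs_arc_max=8589934592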

As far as migrating your pool from TrueNAS goes, the big gotcha is the ZFS version (and feature flags) enabled on the zpool versus what’s installed on the OS. If there’s a mismatch, you can generally only move to a newer ZFS version, since a pool with feature flags the target OS doesn’t support won’t import. On Linux, you can use zpool upgrade to see information about the pool and zfs --version to check what’s on your system. Both TrueNAS and Proxmox list ZFS versions in their release notes:

  • TrueNAS Core 13 U6.1 uses ZFS 2.1.14
  • TrueNAS Scale 23.10.1 uses ZFS 2.2.2
  • Proxmox 8.1 uses ZFS 2.2.0
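To check for yourself, something along these lines should work (“tank” is just a placeholder pool name):

# Show the ZFS userland and kernel module versions installed on the OS
zfs --version

# List any pools that are not yet using all features supported by this OS
zpool upgrade

# Show the feature flags currently enabled on a specific pool
zpool get all tank | grep feature@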

If you’re good on versioning, it should be as easy as you describe: export and then import!

6 Likes

Thanks, @rymandle05, I couldn’t have said it better myself!

We have a guide on our public knowledge base for setting the ZFS ARC size, for anyone wanting to do this: KB450086 - Setting ZFS ARC Limit on Linux - 45Drives Knowledge Base

It is as easy as exporting and importing your pool again, as long as the ZFS versions match or the target is newer, as you stated.

2 Likes

This is great! Thank you both. I’m going from TrueNAS Scale to Proxmox 8.1. Do I need to worry that the versioning is slightly newer?

Thanks!

I’m not entirely sure how “picky” ZFS is about versioning, and whether it considers the patch level or just the minor release. That said, my opinion is you’ll probably be fine as long as you update Proxmox before trying anything. I logged into my Proxmox cluster, which is on 8.1.4, and ZFS is at version 2.2.2 now. This is the same as Cobia. Although, I can’t guarantee anything. :no_mouth:

root@pve1:~# uname -r
6.5.11-7-pve
root@pve1:~# zfs --version
zfs-2.2.2-pve1
zfs-kmod-2.2.2-pve1
1 Like

I have enough space on my Synology and enough time that I’m going to back up and can start fresh if I need to. I’ll see how it goes!

No luck with the import. It’s entirely possible I moved some hardware around and that messed it up, though. Not a big deal; I had planned for this contingency.

Oh no! Could you see the pool to import? If you can drop down to the command line, I bet you can get more information than the generic error Cockpit/Houston will provide.
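If it helps, a couple of commands worth running from the shell before the import usually tell the real story:

# List pools that are available to import, along with their state and any errors
zpool import

# Check the kernel log for ZFS or device errors around the failed import
dmesg | grep -i zfs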

I could see the pool but there were errors. Not a big deal. I took the opportunity to do a clean setup and am in the process of transferring everything back over. Adding the HL15 was to help me do some reconfiguring and combining of my storage anyway.

EDIT: I had originally created these pools on TrueNAS Core, so I think that could have been the issue.

Hi all,

If you are importing a pool from a different system/OS, you may need to force the import in the CLI the first time the pool is imported into the new OS.

You can do this by running “zpool import -f” with the pool name added to match your pool, and the pool should then import.
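For example, assuming your pool is named “tank” (substitute whatever a plain “zpool import” shows for your system):

# Force the import the first time the pool comes over from another OS
zpool import -f tank

# If devices are not found, scan by stable device paths instead
zpool import -d /dev/disk/by-id -f tank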

2 Likes

There is a great update on this thread this morning.

I got a message via GitHub from someone commenting on the issue @orix originally reported, saying that there was a package update.

My HL-15 did a software update check, and a cockpit-hardware package was updated.

The result is the following image…

The Motherboard is fully supported graphically in Cockpit.

Thank you @mhooper for the recent updates. I can see the changes that were added 2 weeks ago.

This resolution makes the system look as good as the enterprise solutions.

FYI: both motherboard models of the HL-15 are in the package. I can personally verify the X11SPH-nCTPF (the SFP+ motherboard).

It would be nice to have someone with the RJ45 model confirm as well (but I am assuming it works).

7 Likes

Very happy to see this! Hoping to see some love for Ubuntu 22.04 or 24.04 LTS (releasing in April) or Rocky 9. If I had the coding skills I’d be happy to help, but alas, outside of some Ansible I’m a hardware and network guy at the end of the day!

I would love to see an update with Rocky 9.

I followed the above instructions and root is not listed in the “disallowed-users” file, but Cockpit is still not allowing root or any other users to log in. I keep getting a “Wrong user name or password” error. Any ideas on what is going on here?

Can you SSH into the server using root, or log into Proxmox using the root password? I’m thinking this might not be a Cockpit issue.
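If it does turn out to be Cockpit, a couple of things are worth checking from the shell, assuming a stock Cockpit install on Proxmox:

# root must not be listed (or must be commented out) for root web logins to work
cat /etc/cockpit/disallowed-users

# Recent Cockpit log entries often show the real reason a login was rejected
journalctl -u cockpit --since "15 minutes ago"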

Yes to both. That’s why I am perplexed. No issues with SSH or root login in the Proxmox UI. I think I am going to reinstall everything fresh and see what happens from there.