User management guidance wanted

I have four machines. Two are TrueNAS boxes, Peabody and Sherman, that exist primarily to provide backup storage. A small System76 Meerkat runs Rocky 8 and is a Roon Server; that is its sole job, and Roon runs as root. Rocky is a Rocky 8 homelab machine with two roles: the first is to back up Meerkat's system and music volumes and replicate them to Peabody and Sherman; the second is to run whatever crazy-ass thing I want to play with.

And there is an Apple Silicon iMac that is my daily driver. He's an iMac because, for most uses of that machine, macOS apps afford the best UX. The machine is used for photo editing, tax preparation, various Minister of Coin tasks, and email (Proton Mail).

Anyway, with a constellation of five machines, user management is getting to be a mess. Has anybody found a way out of this quagmire? I miss Yellow Pages (yep, I'm that old plus 20 years). Is this enough to be worth the bother of a Domain Controller (shudders)? Maybe something like Ansible can keep a record of user accounts and set up users and quotas.

Motivation is that none of the ZFS boxes have quotas set. They seem happy when I do something interactively or via a CIFS share. But when I run an rsync script or a ZFS replication script, there is suddenly no room on the volume. That's the report in the log: not "no quota for you" or "quota exceeded", just no room. Another of those lazy-developer issues.

Looking at the TrueNAS Scale boxes, datasets have quota and permission views, but they are blank. That seems fine for CIFS service: Ark Backup and Time Machine write to CIFS shares and are happy.

But when a shell script tries something, the well is dry.

So the users are root, the first user (1000), and dave (my interactive login), plus whatever the applications need. Roon runs as root. Ark and Time Machine run as user 1000 on the Mac.

Meerkat / is 512 GB. Music volume is 2 TB.

Rocky is fat with 4x8 TB as RAIDZ2.
Sherman is fat with 4x16 TB as RAIDZ2.
Peabody is cramped with 4x6 TB + 1x8 TB as RAIDZ2 (runt).

Goal: In a homelab environment, one place to record and set disk quotas for various zfs filesystems.
Any ideas? Any skinned knuckles?

Are you asking about SSO and AD, or disk quotas? I’m not really sure. I don’t know what Yellow Pages was (assuming it isn’t the phone company’s business directory).

Have you set quotas on datasets and they aren’t working?

You’ve listed out a handful (or less) of users and three ZFS boxes. What are the data flows between the boxes (that’s somewhat rhetorical)? Is it scheduled or ad-hoc tasks that are filling your volumes? If your issue is with the error message you get when a volume is full, I’m not sure creating other users or setting quotas is going to help with that.

iXsystems does make TrueCommand, to manage multiple TrueNAS systems from a "single pane of glass". Maybe that is what you are after? I haven't used it, but I think it would be free in your current environment.

I guess I’m not seeing the quagmire, but perhaps I compartmentalize how my servers are used more than you do.

I'm not sure I was really sure what to ask about. I'll try to clarify things somewhat. First, some context: I'm a former modeling and simulation developer whose Unix experience began with SunOS 4 in 1988. Along the way, M&S moved from Sun/SGI to Linux and Intel. I always did my own workstation admin but not workgroup admin. I retired in 2013, so I'm somewhat fossilized back there experience-wise.

One problem from the beginning at home has been to keep track of who owns what in the ZFS shares.

User 1000 owned the userland data on MacOS.

On my Roon server, root owned the userland music library.

Then I complicated things by adding a second macOS user, 1001, in a misguided effort to start a clean environment without 15 years of cruft. It didn't work, but 1001 is the current primary Mac user.

User 1000 is the primary Rocky Linux user.

The iXians have changed the TrueNAS admin model in the newest SCALE factory installs. Peabody has the old model; Sherman has the new one. truenas_admin is now the primary user and root is nologin. So all three environments now have sudo configured, and there was no central list of usernames and user IDs.
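In the meantime I can at least generate that list instead of carrying it in my head. Something like this (the host list is mine, the UID cutoff of 500 is arbitrary, and it assumes ssh keys already work to each box) pulls the non-system accounts from every Linux machine:

for h in meerkat rocky peabody sherman; do
  echo "== $h =="
  ssh "$h" getent passwd | awk -F: '$3 >= 500 {print $3, $1}' | sort -n
done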

When I set up the ZFS pool architecture, I set up datasets by function (Backups, Containers, Replication) and then by host, realizing I had to know where something came from to know where it should be restored. This also established the user-number-to-user-name mapping to use.
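On each pool the layout is roughly like this (the pool name is a placeholder; the real names differ per box):

zfs create tank/Backups
zfs create tank/Backups/meerkat
zfs create tank/Backups/imac
zfs create tank/Replication
zfs create tank/Replication/rocky
zfs create tank/Containers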

The first thought was to use a directory service (LDAP?) to keep users straight. Samba includes a domain controller, but my understanding is that you should use genuine Microsoft domain controllers because AD has quirks the Samba folk haven't captured. I keep a Microsoft-free household, so a Microsoft primary domain controller is not an option. (They want money. You need a high priest too.)

TrueNAS and Houston UI have different views of quotas, and neither seems to have a ZFS allow interface. Since I've not set quotas or issued any zfs allow directives, I'm not quite sure where to start or how to keep the three ZFS pools and their datasets compatible. I need to do some study and planning before slinging commands.
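From the man pages, the commands I'd eventually be slinging look roughly like the following (dataset names, sizes, and users are placeholders); the planning part is deciding which datasets get which numbers and which users get which delegations:

# cap a dataset and confirm what is set
zfs set quota=1T tank/Backups/meerkat
zfs get quota,refquota,used,available tank/Backups/meerkat

# delegate enough rights that replication does not have to run as root
zfs allow -u dave create,mount,receive tank/Replication
zfs allow tank/Replication    # show current delegations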

The second thought has been to use Ansible playbooks or some such to record the configuration and make it executable. Has anybody taken on learning Ansible? I see some brave souls blogging about their encounters, but their work is specific to their use cases, and I suspect I have the same problem: there is no cookbook.
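Even without Ansible, the record-that-is-executable idea could start as a plain text file plus a short script; an Ansible playbook using its user module would be the declarative version of the same thing. A sketch, with a made-up file format:

# users.txt holds one "name uid" pair per line, e.g. "dave 1000"
while read -r name uid; do
  id "$name" >/dev/null 2>&1 || useradd -m -u "$uid" "$name"
done < users.txt

A third column for a quota and a matching zfs set line per entry would cover the datasets the same way.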

Are my replications failing because the effective user is not allowed to use replication, or because nobody gave the user any quota? The message is effectively that there is no room, which usually means the device is full. It most definitely is not.

I think Ansible is more for the consistent, repeatable creation of virtual and physical machines. I don't think that is what you want.

If you are getting the error right away, then you likely have some sort of permissions issue. If it does some copying and then throws the error, there is some other issue with temp space or something.

The root cause may be different for rsync vs ZFS replication.
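One way to tell those apart without guessing (the dataset and user names below are placeholders for whatever your scripts actually use): check whether the target really has room, then re-run the transfer by hand as the script's user.

zfs list -o name,used,avail,quota tank/Backups          # is the dataset actually out of space?
sudo -u backupuser rsync -avn /src/ remotehost:/dest/   # -n is a dry run; a permissions problem fails immediately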

So presumably your scripts are running as a different user than what you are doing interactively. If you are OK with the interactive user account, I'd investigate how to update the scripts to use that user. For example:

rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine

Whether you want to create a user specific to rsync and/or ZFS replication, and whether you would do that logically for each client or as a shared login for any client wanting to replicate to the server is up to you. It doesn’t seem like you are trying to separately audit each client.
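If you do create a dedicated account for ZFS replication, the receiving box needs delegated permissions so the job doesn't have to run as root. Roughly, with placeholder user, host, and dataset names:

zfs allow -u replicator create,mount,receive tank/Replication    # run on the receiving box
zfs send tank/data@snap | ssh replicator@server zfs receive -u tank/Replication/data

The -u on receive skips mounting, since non-root accounts generally can't mount filesystems on Linux.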


Thanks, Matthew. A lot of this is newbie or rusty-admin problems. Over the weekend I interrupted an update I thought had finished, which left the homelab machine unable to boot. So I reinstalled, following KB articles for Rocky Linux and ZFS, and Rocky is back in service.

I have the 2 rsync evolutions working after the reinstall. I have a 3rd to set up.

After thinking about the user problem, I decided to have Rocky pull the rsync backups (from the hosts that aren't ZFS-capable) and push replications to the larger TrueNAS box. That leaves only one place to worry about SSH keys, and remote user numbers don't matter. This is working. The new TrueNAS SCALE starts user IDs at 3000! So 500, 1000, and 3000 in the constellation. Argh!
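Concretely, the pattern on Rocky looks about like this (host, dataset, and path names are stand-ins for my real ones):

# pull: Rocky fetches from the non-ZFS hosts over ssh
rsync -a --delete imac:/Users/dave/ /tank/Backups/imac/dave/
rsync -a --delete meerkat:/var/roon/ /tank/Backups/meerkat/roon/

# push: Rocky snapshots its backup tree and sends it to the big TrueNAS box
snap=$(date +%F)
zfs snapshot -r tank/Backups@"$snap"
# @previous stands for the last snapshot both sides already have
zfs send -R -i @previous tank/Backups@"$snap" | ssh sherman zfs receive pool1/Replication/rocky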

I have the replication to TrueNAS sorted, I think. The issue was that a snapshot task on the TrueNAS box had snapshotted the target filesystem, not that root has no quota on ZFS.

I had been snapshotting the pool root filesystem. A 45Drives video mentioned in passing that that was not such a wise choice; now I know why. The recursive snapshot of the pool root had taken a daily snapshot of the replication's target filesystem. I fixed that on Monday.
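For anyone else who trips over this: listing snapshots on the replication target shows what the local task created, and cleaning them up gets incrementals flowing again. Names here are placeholders:

zfs list -t snapshot -r pool1/Replication/rocky              # snapshots the local task took on the target
zfs destroy pool1/Replication/rocky@auto-daily-2024-01-01    # a hypothetical stray snapshot

Excluding the replication target from the recursive snapshot task is the real fix; on the CLI, zfs receive -F would also roll the target back into sync with the sender.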

Now waiting for cron to come around.