rootless backup of rootless podman volumes?
from Railcar8095@lemmy.world to selfhosted@lemmy.world on 02 Oct 11:23
https://lemmy.world/post/36790604

I am moving from Docker to rootless podman, and one thing that surprises me is that podman can create files my user is seemingly not allowed to even read, so I need root to back them up.

For example, this one created by the postgres service of immich:

-rw-------. 1 525286 525286 1.6K Oct 2 20:16 /var/home/railcar/immich/postgres/pg_stat_tmp/global.stat

Is this expected in general (not for immich in particular)? Is there a single solution to this, or does it have to be built into the images? It really feels wrong that I can start a container that creates files I am not allowed to even read.

#selfhosted

ryokimball@infosec.pub on 02 Oct 11:33

Sounds legit to me. Podman can be seen as a separate Unix system for the programs to live in, and therefore it has its own set of user and group IDs. As long as the created files end up owned by IDs that differ from your host user, they will be inaccessible without some permission manipulation.
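
You can actually see that mapping from the host; inside your user namespace your own uid shows up as 0 and everything else comes from your subuid range (exact numbers vary per system):

podman unshare cat /proc/self/uid_map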

dm9pZCAq@lemmy.0x0c.link on 02 Oct 12:08

docs.podman.io/en/latest/…/podman-unshare.1.html
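
e.g. prefixing a command with it runs that command inside your user namespace, where those subuid-owned files become readable:

podman unshare ls -l /var/home/railcar/immich/postgres/pg_stat_tmp/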

excess0680@lemmy.world on 02 Oct 14:43

In addition to podman unshare (which you just prefix in front of commands like chmod), you can temporarily do podman unshare chown -R root: <path> if you back up while the container is down. Don’t try that command on live containers.
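
For example, a stopped-container backup of the directory from your post could look roughly like this (the container name is a guess, adjust paths to taste):

podman stop immich_postgres
podman unshare tar -czf ~/immich-postgres-backup.tar.gz -C /var/home/railcar/immich postgres
podman start immich_postgres

Running tar inside podman unshare writes the archive as your own user, so for a one-off backup you don’t even need the chown.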

For a more permanent solution, you can investigate which user (ID) is the default in the container and add the option --userns=keep-id:uid=$the_user_id. This does not work with all images, especially those that use multiple users per container, but if it works, the files in the bind mount will have the same owner as your host user.

To find the user ID, you can run podman exec <container> id. In most of the images I use, it’s usually 1000.
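
For instance, if id reports uid 999 (what the official postgres images use), a run line could look roughly like this (names, image and password are placeholders, and the uid=/gid= options need a reasonably recent podman):

podman run -d --name immich_postgres \
  -e POSTGRES_PASSWORD=changeme \
  --userns=keep-id:uid=999,gid=999 \
  -v /var/home/railcar/immich/postgres:/var/lib/postgresql/data \
  docker.io/library/postgres:16

After that, files created under the bind mount show up owned by your own user on the host. If the directory already contains files from a previous run, a podman unshare chown -R root: <path> first makes the existing ones map back to your user as well.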

Botzo@lemmy.world on 02 Oct 13:06

It seems like the only time I encounter this oddness is when some upstream docker image maintainer has done something weird with users (I once went 3 image levels up to figure out what happened).

Or if I borrow a dockerfile and don’t strip out the “nonroot” user hacks that got popularized years ago.

sainth@lemmy.world on 02 Oct 13:11

As your user account, just run something like:

podman volume export VOLUME >backup.tar

Or from another machine, say you want to do a remote backup from your server:

ssh user@host podman volume export VOLUME | zstd -o backup.tar.zstd
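
And if you ever need it the other way around, the counterpart is roughly this (the volume has to exist already and the container should be stopped):

podman volume import VOLUME backup.tar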

Railcar8095@lemmy.world on 02 Oct 13:51

Thanks! It was a mounted volume in this case (right next to the compose file), but it’s still good to know!

sainth@lemmy.world on 02 Oct 14:24

Ah, in that case you will probably need to go into the container to do the backup. I avoid mounted volumes.
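
For a postgres bind mount like that one, something along these lines should also work from the outside while it’s running (container name and db user are guesses):

podman exec immich_postgres pg_dumpall -U postgres | zstd -o immich-db.sql.zst

A dump also sidesteps the usual caveat about copying a live database directory.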

InnerScientist@lemmy.world on 02 Oct 14:28

It is expected: the users inside the container are “real” users on the host. They just get offset inside the container, and the following mapping is applied:

Root inside the container is mapped to the user running the container outside, so everything owned by “root” inside the container can be read from outside the container as your user.

Everything that is saved as non-root inside the container gets mapped into the subuid range assigned to your user in /etc/subuid: container uid 1 maps to your first subuid, uid 2 to the next one, and so on.

You can change this mapping such that, for example, user 1000 inside the container gets mapped to your user outside the container.

An example:

You have a postgres database inside a container with a volume for the database files. The postgres process inside the container doesn’t run as root but as uid 100, so it also saves its files as that user.
If you look at the volume outside the container you will get a permission-denied error, because the files are owned by user 100099 (your subuid range starts at 100000 and container uid 100 is the 100th uid mapped into it, i.e. 100000 + 99).

To fix it, either run the processes inside the container as root (this can often be done using environment variables and has almost no security impact in a rootless setup), or add --userns keep-id:uid=100,gid=100 to the command line so that uid 100 inside the container maps to your user (instead of your user mapping to container root). Podman creates a remapped copy of the image for this automatically, which takes a while on the first run.
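
You can check what mapping a running container actually got with podman top, e.g.:

podman top <container> user huser

which prints each process’s user inside the container next to the host uid it maps to.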