Docker Desktop on Linux?
from Hezaethos@piefed.zip to selfhosted@lemmy.world on 25 Apr 02:47
https://piefed.zip/c/selfhosted/p/1429485/docker-desktop-on-linux

After trying out Cosmos Cloud (and it not working for the clients), I’m back at square one again. I was going to install Docker Desktop, but I see it warns that it runs on a VM. Will this be a problem when trying to remote connect to certain services, like Mealie or Jellyfin?

#selfhosted

threaded - newest

reluctant_squidd@lemmy.ca on 25 Apr 02:54 next collapse

Podman is the way.

anonfopyapper@lemmy.world on 25 Apr 03:20 collapse

No, it's not.

While Podman is fully OCI image compliant, its network stack is different. And Podman runs as a user, not as root.

Not to mention that Podman is a CLI, but OP asked for a GUI.

JustJack23@slrpnk.net on 25 Apr 03:25 next collapse

podman-desktop.io

Hezaethos@piefed.zip on 25 Apr 03:47 next collapse

Ok, this is interesting 🙂 they also have a learning center thing it says, so maybe I can take the classes/lessons/tutorials they mention.

I just really hope I then figure out the remote connection stuff. That’s the one I’m most paranoid about and wanting to figure out

irmadlad@lemmy.world on 25 Apr 05:54 collapse

That’s interesting. I didn’t know Podman had a Windows environment desktop app.

JustJack23@slrpnk.net on 25 Apr 06:01 next collapse

Tbf Idk how well it works on windows, but on Linux and Mac I have had no problems with it.

devfuuu@lemmy.world on 25 Apr 06:33 collapse

Also works well on macOS. Been using it instead of the lima/colima stack for a few weeks now.

qaz@lemmy.world on 25 Apr 11:23 next collapse

And Podman runs as a user, not as root.

Both Podman and Docker have rootful and rootless options.

neclimdul@lemmy.world on 25 Apr 13:50 collapse

I've recently replaced my Docker setup on Linux with Podman, and as was said, this isn't entirely true. Running as a user is actually a good thing, but Podman machine allows root like you're probably used to, and the Docker compatibility seems pretty good.

The networking seems a bit less stable with things like Wi-Fi network changes, but it's definitely something to keep an eye on and worth giving a shot as a more open alternative.

anonfopyapper@lemmy.world on 25 Apr 14:17 collapse

I don't have much experience with Podman in production. I use Podman to spin up some web tools locally on Linux computer startup (so I have my own SearXNG just for me). It is a great tool if you need to spin up something in user space, and because of that it actually consumes slightly less RAM in general.

About Docker: I never used it rootless. I'm not sure, but the documentation probably says it is partially supported or lacks some features, something like that. Anyway, I just run it as root, because everyone is using it that way (by everyone, I mean stinky enterprise nerds who write their silly .sh scripts only for Docker).

I had some trouble with networking and volume mounts in Podman, BTW, mostly permission issues. I guess it makes sense when a container tries to create a root-owned file on a rootless Podman host and fails.

Though Docker is not really fully open source AFAIK. Maybe it's source-available, but I remember some bullshit about it.

slazer2au@lemmy.world on 25 Apr 03:10 next collapse

You can run a Portainer container to manage your containers

vk6flab@lemmy.radio on 25 Apr 03:19 next collapse

Why run Docker Desktop when it’s installable as a cli service?

What are you actually trying to achieve?

Hezaethos@piefed.zip on 25 Apr 03:42 next collapse

Ease of use.

I’m a noob at networking.

osanna@lemmy.vg on 25 Apr 03:50 next collapse

There's only one way to get better at it: by doing it.

twinnie@feddit.uk on 25 Apr 04:17 collapse

Or if it’s not something that’s valuable to you just do it the easy way.

BrianTheeBiscuiteer@lemmy.world on 25 Apr 10:21 next collapse

Don’t think Docker Desktop would simplify networking, unless it added a new feature since I last used it ~2 years ago.

atzanteol@sh.itjust.works on 25 Apr 06:14 next collapse

What does networking have to do with docker? You haven’t explained what you’re trying to achieve.

Hezaethos@piefed.zip on 25 Apr 11:11 collapse

Access containers remotely

ser@lemmy.zip on 26 Apr 07:32 next collapse

Check out CasaOS. Really easy to set up.

squinky@sh.itjust.works on 25 Apr 09:25 next collapse

Docker Desktop isn’t needed on Linux. If you want a UI, try Portainer

ohshit604@sh.itjust.works on 26 Apr 08:51 collapse

Just throw your services in a docker-compose.yml file, create a Docker bridge network, and assign

networks:
    - YourDockerNetwork

to the services in the YAML file. Specify the ports each service wants to open with

ports:
    - 8080:8080

and let it start up. If you want to get more complicated, I suggest reading the man page, which really isn't that long a read.

Networking really can't be simplified; you have to view it logistically: how is point A communicating with point B? That's where Docker bridge networks come into play: they make the communication easy. If all your containers are on the same Docker network, all you have to do is specify ContainerName:Port for them to communicate back and forth internally.
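A minimal sketch of that pattern end to end (the images, names, and ports here are illustrative examples, not a tested setup):

```yaml
services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie   # example image; verify for your setup
    networks:
      - YourDockerNetwork
    ports:
      - 9925:9000   # host port 9925 -> container port 9000
  db:
    image: postgres:16
    networks:
      - YourDockerNetwork   # mealie reaches this internally as db:5432

networks:
  YourDockerNetwork:
    driver: bridge
```

Only mealie is exposed to the host here; db has no ports mapping, so it's reachable solely from other containers on YourDockerNetwork.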

djdarren@piefed.social on 25 Apr 03:36 collapse

As a Mac user who’s migrated over to Linux over the past year or so, I’ve got an idea of where OP is coming from.

Docker on macOS is accessed via a Desktop GUI, so you can easily see what you have installed, how it’s running, etc… So when I shifted over to Linux, I was thrown off by there being no such tool. I wasn’t used to using a terminal to do everything, and grumbled quite a lot about there being no Docker Desktop GUI, given how many self-hostable services run through Docker.

I’ve since gotten used to it, but it really is quite jarring.

Mordikan@kbin.earth on 25 Apr 09:04 collapse

There are a lot of Docker GUI tools out there. There just isn't Docker Desktop. Here are a few:

  1. Portainer
  2. Podman Desktop
  3. Yacht (pretty sure this is unmaintained currently but still should work)
djdarren@piefed.social on 25 Apr 10:37 collapse

Oh aye, I get it. But when you’re new to the platform and trying to work with tools that are familiar, you don’t know about any of that.

foggy@lemmy.world on 25 Apr 03:24 next collapse

Docker containers are isolated by default… nothing on the outside can reach them unless you say so. You open the door with a port mapping.

In your compose file:

ports:
    - '8096:8096'

Read this as HOST:CONTAINER. It says: “when something hits my server on port 8096, forward it to port 8096 inside the Jellyfin container.”

So once it's running, you go to your-server-ip:8096 in a browser and you're talking to Jellyfin. The container is still isolated; you've just opened one specific door to access it.
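Putting that mapping in context, a minimal compose service for Jellyfin might look like this (a sketch; verify the image name for your setup):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - '8096:8096'   # HOST:CONTAINER
```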

Hezaethos@piefed.zip on 25 Apr 03:42 collapse

Wouldn’t this be insecure? Is that what the reverse proxy thing is for - to keep it safe?

Also, is it possible to make it so the name is simpler? I bought a domain name just in case.

Is there a place I can learn about ports and networking more? Something like Khan Academy but for networking?

foggy@lemmy.world on 25 Apr 04:04 next collapse

To your first question, we'll need to untangle a few thoughts wrapped up in it.

Right now, your-server-ip:8096 is plain HTTP. On your home network that’s usually fine. Over the internet, you’d want HTTPS so passwords and stream data aren’t sent in the clear.

Just opening a port on Docker only exposes it to your local network. It’s not on the public internet unless you also forward the port on your router. So by default, only devices on your network can reach it.

From there, is it secure? That's on Jellyfin. Using strong passwords and trusting that Jellyfin itself is secure is about as good as you can do here.

The reverse proxy is where you handle the HTTPS, and where you go from domain.com:8096 to jellyfin.yourdomain.com.

If you're using Caddy, for example, that looks like this somewhere in your Caddyfile:

jellyfin.yourdomain.com {
    reverse_proxy localhost:8096
}

Your reverse proxy sits in front of your containers and routes traffic by hostname. You tell it: “when someone visits jellyfin.yourdomain.com, send them to the Jellyfin container on port 8096.”

The other pieces you’ll need:

DNS pointing jellyfin.yourdomain.com at your server's IP (public IP if accessing from outside, local IP if just at home), and a TLS certificate so HTTPS works. Let's Encrypt is free, and Caddy gets certificates automatically with zero config (Nginx and Traefik can too, just with more setup).

Also, router port forwarding on 80 and 443 to make it accessible outside the house.

The last part there is the actual risky part. When you put a service on the open internet, bots from everywhere will find it instantly and begin running scripts to try to find a way in. With only this setup, again, the insecurity is Jellyfin. When an exploit drops, you need to be updated ASAP to stay on top of your security.

There are tons of ways to make this more secure. The easiest would be Tailscale (or WireGuard). You basically install Tailscale on every device that will connect to your server instead of opening your router's ports. It keeps your server off the open internet but lets all the devices on your Tailscale VPN connect to it with the domain you set up.

You can achieve something similar using Cloudflare Tunnels. Your server runs a daemon that reaches out to Cloudflare, and it's served to the internet that way; friends access via a normal URL, no extra download required.

Lastly, the best but cost-prohibitive option is to do it through a VPS. A virtual private server does what Cloudflare does, basically, but you control it. If the money is not prohibitive, I strongly recommend this. When Cloudflare goes down again (and it will go down again), you won't be beholden to their infrastructure being online to access your server.

Happy to clarify anything here. I wrote this response in 3 parts and rereading it it feels a little disjointed lol

JustJack23@slrpnk.net on 25 Apr 04:05 next collapse

training.linuxfoundation.org/networking/

It seems they don’t have anything on networking exactly, but maybe some of their stuff on container orchestration can be helpful training.linuxfoundation.org/full-catalog/?_sft_p…

About domains and reverse proxies: if you are testing or on a local network, a reverse proxy or domain isn't needed. If you want to access services from outside your network, they make more sense.

But also, for my services I use tailscale.com, and that way I avoid dealing with domains and reverse proxies by instead just connecting to my local network remotely.

foggy@lemmy.world on 25 Apr 05:14 collapse

Ah sneaky. You added a question.

The answer to “is there somewhere you can learn about this?” is yes and no. You will ultimately learn this stuff by doing.

CompTIA Network+ study guides will have all this knowledge and more.

If you’re all in, Hack The Box is a freemium platform (think codecademy but less hand-holdy) that isn’t designed to teach you this, but will absolutely teach you this in the process. It is a platform for offensive and defensive cybersecurity. These things are covered as afterthoughts in bigger pictures, but it will (at least for folks who learn by doing) force you to familiarize yourself with it implicitly.

Otherwise, as far as IPs, ports, and containers go, I can tell you all you need to know, because it ain't much. It feels confusing/overwhelming at first, but every individual slice of this stuff is pretty simple. It's just an absurd amount of knowledge. Just take baby steps and learn what you need to know to get done what you seek.

foggy@lemmy.world on 25 Apr 05:16 collapse

I didn’t have too much coffee, you had too much coffee.

IP address: a machine’s address on a network. Like a street address.

Port: a numbered door on that machine. The IP gets you to the building; the port gets you to the right room. Different programs listen on different ports.

DNS: the phonebook. Maps friendly names like example.com to IPs so you don’t have to memorize numbers.

Router: the doorman between your home and the internet. Stuff inside can reach out; nothing gets in unless you tell it to.

Container: a sandboxed mini-computer running on your machine. Isolated by default. You map a host port to a container port to let traffic in.

Reverse proxy: a switchboard. One program that takes all incoming traffic and routes it to the right service based on the hostname.

foggy@lemmy.world on 25 Apr 05:31 collapse

Welcome to foggy’s IP, ports, and containers lesson, take a shot of espresso, we’re going in!

special IP addresses:

127.0.0.1 - “this same machine.” Talking to yourself. Also written as localhost.

192.168.x.x - private home network range. What your router hands out to your devices. Not routable on the internet.

10.x.x.x - another private range. Bigger, used by businesses and some routers. Same idea as 192.168.

172.16.x.x to 172.31.x.x - the third private range. Docker likes this one for its internal container networks.

0.0.0.0 - “all interfaces” or “any address.” When a service binds to this, it means “listen on every network this machine is connected to.” Also sometimes means “no specific address,” depending on context.

255.255.255.255 - broadcast. “Everyone on this network.” Rarely something you'll type, but you'll see it.

169.254.x.x - link-local. What your machine assigns itself when it wants a DHCP address from the router but doesn't get one. If you see this, something's wrong with your network.
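The 127.0.0.1-vs-0.0.0.0 difference is easy to see with a few lines of Python (a sketch; port 0 asks the OS for any free port, so nothing collides with a real service):

```python
import socket

# Bind one socket to localhost only and one to all interfaces.
lo = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lo.bind(("127.0.0.1", 0))         # reachable only from this machine
everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))   # reachable on every interface

lo_addr = lo.getsockname()[0]
any_addr = everywhere.getsockname()[0]
print(lo_addr, any_addr)

lo.close()
everywhere.close()
```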


Port talk:

Ports 0-1023: well-known ports. Reserved for standard services. On Linux you need root to bind to these. The ones you’ll actually see:

  • 22: SSH (remote terminal access)
  • 53: DNS
  • 80: HTTP (unencrypted web)
  • 443: HTTPS (encrypted web)
  • 25, 465, 587: email sending (SMTP and variants)
  • 143, 993: email reading (IMAP)

Ports 1024-49151: registered ports. Assigned to specific apps by convention. A sampling:

  • 3306: MySQL/MariaDB
  • 5432: PostgreSQL
  • 6379: Redis
  • 8080: common “alternate HTTP” port, used when 80 is taken
  • 8096: Jellyfin
  • 32400: Plex
  • 27017: MongoDB

Nothing enforces these: they’re just conventions. You could run Jellyfin on port 7777 if you wanted.

Ports 49152–65535: ephemeral ports. A neato part:

When you connect to a server's port 443, for example, your machine also needs a port on your end for the server to send replies back to. Your OS grabs a random unused port from this high range, uses it for that one connection, and releases it when done. Thus, “ephemeral.”
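You can watch the OS hand one out yourself (a sketch): binding to port 0 asks the kernel for any free high port, drawn from the same pool it uses for the client side of outgoing connections.

```python
import socket

# Port 0 = "kernel, pick any unused port for me."
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
ephemeral = s.getsockname()[1]
print(ephemeral)   # a high, previously unused port number
s.close()
```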


Containers? Sure:

A container is a program packaged in a bubble. It's basically a VM without the machine part. Let's say you wanna run Jellyfin AND Plex. Let's say tomorrow there's a brand new video file format that Jellyfin supports and Plex doesn't: Jellyfin needs some new version of ffmpeg that Plex cannot use. The solution? Containers.

Each program is containerized with what it needs to run happily. Nothing more. Your machine does the rest.

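That scenario sketched as a compose file: each server ships its own ffmpeg and libraries inside its image, so they can never conflict (image names are the commonly used ones; treat the whole thing as illustrative):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin    # brings its own ffmpeg
    ports:
      - '8096:8096'
  plex:
    image: plexinc/pms-docker   # brings its own, separate ffmpeg
    ports:
      - '32400:32400'
```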
Hezaethos@piefed.zip on 25 Apr 08:40 next collapse

You should be a teacher. You made me go from despising Networking to interested in learning it more

foggy@lemmy.world on 25 Apr 15:45 collapse

Thanks! I hope it helped.

I’m actually literally in the process of reaching out to my old Computability and Complexity professor who is now cs chair and cyber security lead for my alma mater. Wanna pitch him some ideas for me doing an adjunct in a cyber warfare lab 🤓

jupiter@mastodon.gamedev.place on 25 Apr 09:39 collapse

@foggy I never thought ephemeral ports were still a thing. How do I increase this range, e.g. on a machine expecting to make a lot of connections?

foggy@lemmy.world on 25 Apr 15:43 collapse

If it's a Linux box, only ports under 1024 need root to bind; everything over 1023 is fair game.

For Debian flavors,

/proc/sys/net/ipv4/ip_local_port_range

At least for those I use. Idk for rhel etc.

I can check my boxes with sysctl:

sysctl net.ipv4.ip_local_port_range

And, tested on a VM, this widens your ephemeral range:

sysctl -w net.ipv4.ip_local_port_range="1024 65535"

Manage persistence in /etc/sysctl.conf

I’ll be honest here, I asked Claude for the windows equiv of that. I haven’t tested. Proceed with caution:

To check:

netsh int ipv4 show dynamicport tcp

To expand ephemeral range:

netsh int ipv4 set dynamicport tcp start=10000 num=55535

Syntax makes enough sense to me, but I repeat I have not vetted this.

HOWEVER,

all moot. You have 65k ports PER DESTINATION, holmes. Sorry, I'm drunk now, my tone changes and typos = more :)

So if you at 10.0.0.1 connect to Google at 8.8.8.8 and Cloudflare at 1.1.1.1, you can use 130k connections across the two. So this isn't as useful as you may think you need it to be (idk what you're doing lol, load balancer?)

If you're churning through tons of short connections, you can “run out” of ports even though you have plenty… they're all just cooling down in TIME_WAIT.

net.ipv4.tcp_tw_reuse=1

lets the kernel grab them sooner.

Claude says Windows would be

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay

That’s a registry change. Proceed with extreme caution. Use a VM or throw away machine. I have absolutely not vetted the windows version here and registry edits are inherently dangerous. I usually yell at an AI that tells me to use regedit. Probably don’t do this unless the system is backed up and those backups are tested.

Hope this helps your crazy load balancer or whatever :)

jupiter@mastodon.gamedev.place on 25 Apr 23:48 collapse

@foggy Appreciate the writeup, no worries about coming across as rude, it's more important to be comprehensive and correct.

My thinking was quite off indeed.

Decronym@lemmy.decronym.xyz on 25 Apr 04:10 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
DHCP Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS Domain Name Service/System
Git Popular version control system, primarily for code
HTTP Hypertext Transfer Protocol, the Web
HTTPS HTTP over SSL
IMAP Internet Message Access Protocol for email
IP Internet Protocol
Plex Brand of media server package
SMTP Simple Mail Transfer Protocol
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
TLS Transport Layer Security, supersedes SSL
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)

[Thread #254 for this comm, first seen 25th Apr 2026, 11:10] [FAQ] [Full list] [Contact] [Source code]

folekaule@lemmy.world on 25 Apr 06:54 next collapse

I don’t see anyone addressing the question from the post: whether it is a problem that Docker Desktop on Linux runs in a separate VM.

The page says:

Docker Desktop on Linux runs a Virtual Machine (VM) which creates and uses a custom docker context, desktop-linux, on startup. This means images and containers deployed on the Linux Docker Engine (before installation) are not available in Docker Desktop for Linux.

To expand on what that means: If you install Docker as usual (the CLI) on Linux, it runs as a process (running as root). The process will isolate the container processes from the rest of the system using Linux kernel features, but you’re really just running processes on your host kernel that have limited access to the file system, network, etc.

When you run it in a separate VM, which is how Docker Desktop also runs on Windows and macOS, you are running it in a separate Linux instance (VM) that cannot communicate with the outside by default. So, if you're running Docker on the host computer and inside a VM, those are separate Docker installs and can't talk to each other. That is what the warning is about.

You can absolutely expose the VM to the outside, the same as if you ran it on Windows. Docker will let you expose those ports and it handles the messy bits of the networking for you. You just have to tell Docker when you run the container (on the command line or in a docker compose file) which ports to expose. By default, nothing is exposed. To do that you can use the -p option. For example:

docker run --rm -it -p 8080:80 httpd

This will run an instance of Apache HTTPd and expose it on port 8080. The container itself listens on port 80, but on the outside it's 8080. If you then hit localhost:8080 you should see “It works!”.

A note on Docker networking: from within the container, localhost refers to the container itself, not the host. So if you try to do e.g. curl http://localhost:8080/ inside the container, your connection would be refused.

Docker Desktop is often frowned upon because you have to pay to use it in a commercial setting (there was some backlash because it used to be free), it’s quite expensive, and they require a minimum license count for enterprise licenses (I know because we bought one at work). So, I suggest exploring free alternatives like Podman Desktop. However, note that they do not always have feature parity with Docker Desktop.

I like Docker Desktop because it gives me a nice dashboard to see all my containers, resource usage, etc. I would not have requested it for work, though, if it weren’t for my IDE (Visual Studio) requiring it at the time (they have added Podman support since).

Final note: I recommend just diving into using Docker from the command line and learn that. Docker complicates networking a little bit because it adds more layers, but understanding Docker is very useful if you’re into self hosting or software development.

GottaHaveFaith@fedia.io on 25 Apr 07:25 next collapse

Docker Desktop is just a GUI for managing Docker; is there a specific reason you need it? Anyway, you can try Portainer.

GreenKnight23@lemmy.world on 25 Apr 08:55 next collapse

docker runs natively on Linux. you run docker desktop on windows and Mac because they don't have Linux runtimes that can run docker.

docker desktop would be useless for Linux.

learn the command line, scrub. better now than never.

captcha_incorrect@lemmy.world on 25 Apr 10:26 next collapse

Maybe OP does not want to spend time learning something that only has one use case for them. Maybe OP has external factors that force the use of Docker Desktop.

Be nicer.

GreenKnight23@lemmy.world on 25 Apr 13:28 collapse

container management is just the tip of selfhosting with docker.

learning the CLI will go a long way to help themselves later when they need to learn it to fix something the UI doesn’t handle well.

Be nicer.

would you have preferred noob?

captcha_incorrect@lemmy.world on 27 Apr 07:19 collapse

learning the CLI will go a long way to help themselves later when they need to learn it to fix something the UI doesn’t handle well.

I agree with you on this, but we don't know the constraints of OP. Perhaps the mentioned clients need access. Ask instead of assume.

would you have preferred noob?

I have no problem with other people’s level of knowledge, be it greater or lesser than mine. I have a problem with how you conveyed your message. Here is how you could have phrased it instead, as an example of what I meant with “be nicer”.

Learn the command line; it will more than likely be of great help in the future. Better now than never.

DacoTaco@lemmy.world on 25 Apr 12:06 collapse

Ye, no. I'm a CLI boy and even still I use Podman Desktop for my Podman containers on my PC.
A good GUI goes a long way for something that has a gazillion parameters. But also, you will take my git CLI from my cold dead hands, god damnit.

SpikesOtherDog@ani.social on 25 Apr 10:55 next collapse

I started with docker desktop on Linux thinking it was the easiest way to get started. It initially ran well, but I started having weird stability issues. Moving to the cli resolved this. Relying on the documentation and web searches will help you quickly gain familiarity.

qaz@lemmy.world on 25 Apr 11:35 next collapse

There is Cockpit which allows you to manage the server and has simple management for containers. However, I recommend using something like Dockge with compose because it makes it easier to change the configuration of containers without recreating them manually.

nutbutter@discuss.tchncs.de on 25 Apr 17:41 next collapse

A lot of people don’t know that Docker Desktop is actually proprietary.

lka1988@lemmy.dbzer0.com on 25 Apr 19:43 next collapse

I just run Docker CLI and Dockge on top of it. Works great. Dockge gives me the general “most-used” controls, and if I need to do anything more advanced I can just drop into the terminal.

ikidd@lemmy.dbzer0.com on 26 Apr 09:11 collapse

I’d bite the bullet and learn how to use Compose from the command line so you can work on it over SSH. If you want a UI, try LazyDocker