Those who are hosting on bare metal: What is stopping you from using containers or VMs? What are you self-hosting?
from kiol@lemmy.world to selfhosted@lemmy.world on 24 Sep 12:58
https://lemmy.world/post/36414259

Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

#selfhosted

kiol@lemmy.world on 24 Sep 13:00 next collapse

Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?

30p87@feddit.org on 24 Sep 13:16 collapse

Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything

kutsyk_alexander@lemmy.world on 24 Sep 13:06 next collapse

I use a Raspberry Pi 4 with a 16GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers, one for every service I want to use.

kiol@lemmy.world on 24 Sep 13:08 next collapse

So, are you running 15 services on the Pi 4 without containers?

kutsyk_alexander@lemmy.world on 24 Sep 13:41 collapse

The list of what I run on my RPi:

Some of them run in containers, some of them run on the bare metal.

kiol@lemmy.world on 24 Sep 16:47 collapse

I see. Are you the only user?

kutsyk_alexander@lemmy.world on 24 Sep 20:04 collapse

No.

RvTV95XBeo@sh.itjust.works on 24 Sep 22:22 collapse

Is your favorite color purple?

comrade_twisty@feddit.org on 24 Sep 13:09 collapse

Databases on SD cards are a nightmare for SD card lifetimes. I would really recommend getting at least a USB SSD stick instead if you want to keep it compact.

Your SD card will die suddenly someday in the near future otherwise.

kutsyk_alexander@lemmy.world on 24 Sep 13:28 collapse

Thank you for your advice. I do use an external hard drive for my data.

51dusty@lemmy.world on 24 Sep 13:14 next collapse

my two bare metal servers are the file server and music server. I have other services in a pi cluster.

file server because I can’t think of why I would need to use a container.

the music software is proprietary and requires additional complications to get it to work properly…or at all, in a container. it also does not like sharing resources and is CPU heavy when playing to multiple sources.

if either of these machines die, a temporary replacement can be sourced very easily (e.g. the back of my server closet) and recreated from backups while I purchase new or fix/rebuild the broken one.

IMO the only reliable method for containers is a cluster, because if you’re running several containers on a device and it fails, you’ve lost several services.

kiol@lemmy.world on 24 Sep 14:10 collapse

Cool, care to share more specifics on your Pi cluster?

51dusty@lemmy.world on 25 Sep 11:18 collapse

I followed one of the many guides for installing Proxmox on RPis. 3-node, 4GB RPi 4s.

I use the cluster for lighter services like Trilium, FreshRss, secondary DNS, a jumpbox… and something else I forget. I’m going to try immich and see how it performs.

my recent go-to for cheap ($200-300) servers is Debian + old Intel MacBook Pros. I have two Minecraft Bedrock servers on MBPs… one an i5, the other an i7.

I also use a Lenovo laptop to host some industrial control software for work.

tofu@lemmy.nocturnal.garden on 24 Sep 13:25 next collapse

TrueNAS is on bare metal, as I have a dedicated NAS machine that’s not doing anything else, and it’s also not recommended to virtualize. Not sure if that counts.

Same for the firewall (OPNsense), since it is its own machine.

kiol@lemmy.world on 24 Sep 14:10 collapse

Have you tried running containers on TrueNAS?

tofu@lemmy.nocturnal.garden on 24 Sep 22:29 collapse

No, because I run my containers elsewhere, not on the NAS.

LifeInMultipleChoice@lemmy.world on 24 Sep 13:25 next collapse

For me it’s lack of understanding, usually. I haven’t sat down and really learned what Docker is/does. And when I tried to use it once I ended up with errors (thankfully they all seemed contained by the Docker container), but I just haven’t gotten around to looking into it more than seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.

I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.

I guess I just haven’t been forced to see the upsides yet. But am always wanting to learn

slazer2au@lemmy.world on 24 Sep 13:47 collapse

containerisation is to applications as virtual machines are to hardware.

VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.

LifeInMultipleChoice@lemmy.world on 24 Sep 14:46 collapse

When you say binaries do you mean locally stored directories kind of like what Lutris or Steam would do for a Windows game. (Create a fake c:\ )

slazer2au@lemmy.world on 24 Sep 15:00 collapse

Not so much a fake one, but an overlay of the actual directory with the specific files needed for that container.

Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regularly symlinked /lib/python3.12.
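(Under the hood it’s an overlay mount. A generic sketch of the mechanism, not Docker’s exact on-disk layout:)

    # lowerdir = read-only base layer, upperdir = this container's own changes;
    # the kernel presents the merged view at the mountpoint
    mount -t overlay overlay \
        -o lowerdir=/base/lib,upperdir=/layer/lib,workdir=/layer/work \
        /merged/lib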

LifeInMultipleChoice@lemmy.world on 24 Sep 15:11 collapse

So let’s say I theoretically wanted to move a Docker container to another device, or maybe I was re-installing an OS or moving to another distro: could I, in theory, drag my local Docker container to an external drive, throw my device in a lake, and pull that container off onto the new device? If so… what then? Do I link the startups, or is there a “docker config” where they are all able to be linked and I can tell it which ones to launch on OS launch, user launch, with a delay or whatnot?

slazer2au@lemmy.world on 24 Sep 15:54 collapse

For ease of moving containers between hosts I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
github.com/docker/awesome-compose/…/compose.yaml

All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line

volumes:
      - db_data:/var/lib/mysql  

As the compose file will also be in /home/user/Wordpress/, you can drop the common path.
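Putting it together, a stripped-down sketch of such a compose file might look like this (a bind-mount variant of the idea; image tags, passwords and ports are placeholders, not the exact awesome-compose file):

    services:
      db:
        image: mariadb:10.11
        volumes:
          - ./db_data:/var/lib/mysql        # persistent data lives next to the compose file
        environment:
          MYSQL_ROOT_PASSWORD: changeme
          MYSQL_DATABASE: wordpress
      wordpress:
        image: wordpress:latest
        depends_on:
          - db
        ports:
          - "8080:80"                       # host port 8080 -> container port 80
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_USER: root
          WORDPRESS_DB_PASSWORD: changeme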

That way, if you want to change hosts, just copy the /home/user/Wordpress folder to the new server and run docker compose up -d and boom, your server is up. No need to faff about.

Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.

LifeInMultipleChoice@lemmy.world on 24 Sep 16:50 collapse

“Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.”

So that’s really why they should be good for Jellyfin/file servers, as the data doesn’t need to be stored in the container, just the run files. I suppose the config files as well.

When I reverse proxy into my network using WireGuard (set up on the Jellyfin server; I also think I have a RustDesk server on there), on the other hand, is it worth using a container, or is that just the same either way?

I have shoved way too many things onto an old laptop, but I never have to touch it really, and the latest update Mint put out actually cured any issues I had. I used to have to reboot once a week or so to get everything back online when it came to my Pihole and shit. Since the latest update I ran on September 4th, I haven’t touched it for anything. The screen just stays closed in a corner of my desk with other shit stacked on top.

tychosmoose@lemmy.world on 24 Sep 13:28 next collapse

I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (pi and older tiny pc). Not using containers due to lack of experience with it and a little discomfort with the central daemon model of Docker, running containers built by people I don’t know.

The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
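For anyone curious what that migration target looks like, a quadlet is just a small unit file that systemd turns into a container service. A rough sketch (image, port and paths are examples):

    # ~/.config/containers/systemd/freshrss.container
    [Unit]
    Description=FreshRSS

    [Container]
    Image=docker.io/freshrss/freshrss:latest
    PublishPort=8080:80
    Volume=%h/freshrss/data:/var/www/FreshRSS/data

    [Install]
    WantedBy=default.target

After a systemctl --user daemon-reload it behaves like any other systemd service (start/stop/status, logs in journalctl).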

neidu3@sh.itjust.works on 24 Sep 13:30 next collapse

I started hosting stuff before containers were common, so I got used to doing it the old fashioned way and making sure everything played nice with each other.

Beyond that, it’s mostly that I’m not very used to containers.

bizarroland@lemmy.world on 24 Sep 13:31 next collapse

I’m running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh, I’m also using TrueNAS’s internal features to host a jellyfin server and a couple of other easy to deploy containers.

kiol@lemmy.world on 24 Sep 14:08 collapse

So Truenas itself is running your containers?

bizarroland@lemmy.world on 24 Sep 16:07 collapse

Yeah, the more recent versions basically have a form of Docker as part of its setup.

I believe it’s now running on Debian instead of FreeBSD, which probably simplified the container setup.

30p87@feddit.org on 24 Sep 13:30 next collapse

That I’ve yet to see a containerization engine that actually makes things easier, especially once a service does fail or needs any amount of customization.

I have two main services in Docker, Piped and WebODM, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD, with random crashes (undebuggable, as the docker command just hangs when I try to start one specific container) and containers that don’t start properly after being updated/restarted by Watchtower.

Debugging any problem with Piped is a chore, as logging in Docker is the most random thing imaginable. With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With Docker, it could be a logfile on the host, on the guest, or stdout. Or nothing, because why log after all, when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape when using Docker. Or rather, in the time you have, you could more easily properly(!) install it on bare metal.)

Also, if you want to use Unix sockets to more closely manage permissions and stop roleplaying as a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.

Also, I need to host a python2.7 django 2.x or so webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as it most closely resembles the original environment, and is the largest security risk in my setups, while being a public website. So into qemu it goes.

And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.

renegadespork@lemmy.jelliefrontier.net on 24 Sep 13:52 next collapse

Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale / migrate across multiple servers. If you’re only running one server, I definitely see how bare metal is more straightforward.

30p87@feddit.org on 24 Sep 14:20 next collapse

One main server, with backup servers being very easy to get up and running, either by full-restoring the backup, or installing and restoring specific services. As everything’s backed up to a Hetzner Storage Box, I can always restore it (if I have my USB sticks with the keyfiles).

I don’t really see the need for multiple running hosts, apart from:

  • Router
  • Workstation, which has a 1070 in it, if I need a GPU for something. My 1U server only has space for a low-profile, single-slot GPU/HPC processor, and one of those would cost way more than its added value over my old 1070 would be.
zod000@lemmy.dbzer0.com on 25 Sep 07:57 collapse

This is a big part of why I don’t use VMs or containers at home. All of those abstractions only start showing their worth once you scale them out.

renegadespork@lemmy.jelliefrontier.net on 25 Sep 09:03 collapse

Hm, I don’t know about that either. While scale is their primary purpose, another core tenet of containerization is reproducibility. For example:

  1. If you are developing any sort of software, containers are a great way to ensure that the environment of your builds remains consistent.
  2. If you are frequently rebuilding a server/application for any reason, containers provide a good way to ensure everything is configured exactly as it was before, and when used with Git, changes are easy to track. There are also other tools that excel at this (like Ansible).
zod000@lemmy.dbzer0.com on 25 Sep 11:38 collapse

That to me still feels like a variety of “scale”. All of these tools (Ansible is a great example) are of dubious benefit when your scale of systems is small. If you only have a single dev machine or server, having an infrastructure-as-code system or containerized abstraction layer just feels to me like unnecessary added mental overhead. If this post had been in a community about FOSS development or general programming, I’d feel differently, as all of these things can be of great use there. Maybe my idea of selfhosting just isn’t as grandiose as some of the people in here. If you have a room full of server racks in your house, that’s a whole other ballgame.

deadcade@lemmy.deadca.de on 24 Sep 14:23 next collapse

Personally I have seen the opposite from many services. Take Jitsi Meet for example. Without containers, it’s like 4 different services, with logs and configurations all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with docker compose logs, and all config is contained in one directory.
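For example, from the compose project directory:

    docker compose logs -f --tail=100    # follow recent output from every Jitsi service at once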

It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.

Jakeroxs@sh.itjust.works on 25 Sep 08:06 collapse

For logs, Dozzle is also fantastic, and you can do “agents” if you have multiple Docker nodes and connect them together.

Semi_Hemi_Demigod@lemmy.world on 24 Sep 14:32 next collapse

You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and exec will get you into the container.
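To make that concrete (names here are just examples, not any particular image):

    # hop into a running container to poke around
    docker exec -it myservice sh

    # or build your own image on top of the one you already use
    docker build -t myservice-custom - <<'EOF'
    FROM nginx:alpine
    RUN apk add --no-cache curl    # whatever customization you need baked in
    EOF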

towerful@programming.dev on 24 Sep 15:17 collapse

especially once a service does fail or needs any amount of customization.

A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - doesn’t matter how many you run or restart.

As for customisation, if it isn’t exposed via env vars then it can’t be altered.
If you need something beyond the env vars, then you use that container as a starting point and make your customisation a part of your container build processes via a dockerfile (or equivalent)
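A minimal sketch of that pattern (the image and file names here are made up):

    # Dockerfile: start from the upstream image and bake your config into your own build
    FROM ghcr.io/example/someservice:latest
    COPY my-custom.conf /etc/someservice/someservice.conf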

It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
It’s using a chisel incorrectly.

30p87@feddit.org on 24 Sep 15:23 collapse

Exactly. Therefore, docker is not useful for those purposes to me, as using arch packages (or similar) is easier to fulfill my needs.

Jerry@feddit.online on 24 Sep 13:35 next collapse

Depends on the application for me. For Mastodon, I want to allow 12K character posts, more than 4 poll question choices, and custom themes. Can’t do it with Docker containers. For Peertube and Mobilizon, I use Docker containers.

kiol@lemmy.world on 24 Sep 14:07 collapse

Why could you not have that Mastodon setup in containers? Sounds normal afaik

farcaller@fstab.sh on 24 Sep 14:12 collapse

I’ll chime in: simplicity. It’s much easier to keep a few patches that apply to local OS builds: I use Nix, so my Mastodon microVM config just has an extra patch line. If there’s a new Mastodon update, the patch most probably will work for it too.

Yes, I could build my own Docker container, but you can’t easily build it with a patch (for Mastodon specifically, you need to patch js pre-minification). It’s doable, but it’s quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.

savvywolf@pawb.social on 24 Sep 13:48 next collapse

I’ve always done things bare metal since starting the selfhosting stuff before containers were common. I’ve recently switched to NixOS on my server, which also solves the dependency hell issue that containers are supposed to solve.

9tr6gyp3@lemmy.world on 24 Sep 13:52 next collapse

I thought about running something like proxmox, but everything is too pooled, too specialized, or proxmox doesn’t provide the packages I want to use.

Just went with Arch as the host OS, and firejail or LXC for any processes I want contained.

towerful@programming.dev on 24 Sep 14:10 collapse

I’ve never installed a package on proxmox.
I’ve BARELY interacted with CLI on proxmox (I have a script that creates a nice Debian VM template, and occasionally having to really kill a VM).

What would you install on proxmox?!

9tr6gyp3@lemmy.world on 24 Sep 14:18 collapse

Firmware update utilities, host OS file system encryption packages, HBA management tools, temperature monitoring, and then a lot of the packages had bugs that were resolved with newer versions, but proxmox only provided old versions.

towerful@programming.dev on 25 Sep 02:20 collapse

Ah, fair.

enumerator4829@sh.itjust.works on 24 Sep 13:54 next collapse

My NAS will stay on bare metal forever. Any complication there is something I really don’t want. Passthrough of drives/PCIe-devices works fine for most things, but I won’t use it for ZFS.

As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, and I want them fully automated, and I want that inside any containers. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. Will probably migrate to small VMs per-service once I get new hardware up and running.

Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.

towerful@programming.dev on 24 Sep 15:21 next collapse

A NAS as bare metal makes sense.
It can then correctly interact with the raw disks.

You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.

zod000@lemmy.dbzer0.com on 25 Sep 07:56 collapse

I feel like this too. I do not feel comfortable using docker containers that I didn’t make myself. And for many people, that defeats the purpose.

atzanteol@sh.itjust.works on 24 Sep 13:58 next collapse

Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.

Yes, I’ll die on this hill.

arcayne@lemmy.today on 24 Sep 14:25 next collapse

Move over, bud. That’s my hill to die on, too.

Semi_Hemi_Demigod@lemmy.world on 24 Sep 14:28 next collapse

Learning this fact is what got me to finally dockerize my setup

sylver_dragon@lemmy.world on 24 Sep 14:48 next collapse

But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

sugar_in_your_tea@sh.itjust.works on 24 Sep 16:17 next collapse

kubernetes

Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.

Those terms do mean something, but they’re a lot simpler than execs claim they are.

mesamunefire@piefed.social on 25 Sep 13:49 collapse

I love using it at work. It’s a great tool to get everything up and running, kinda like Ansible. Paired with containerization it can make applications more “standard” and easy to spin back up.

That being said, for a home server it feels like overkill. I don’t need my resources spread out so far. I don’t want to keep updating my kube and container setup with each new iteration. It’s just not fun (to me).

atzanteol@sh.itjust.works on 24 Sep 17:41 next collapse

Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.

AtariDump@lemmy.world on 25 Sep 16:41 collapse

…oh shit, the RAM is on fire.

The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

Burn mothercucker, burn.

(Thanks phone for the spelling mistakes that I’m leaving).

FailBetter@crust.piefed.social on 24 Sep 15:32 collapse

Speak English, doctor! But really, is this a fancy way of saying it’s OK to docker all the things?

StrawberryPigtails@lemmy.sdf.org on 24 Sep 14:08 next collapse

Depends on the application. My NAS is bare metal. That box does exactly one thing and one thing only, and it’s something that is trivial to setup and maintain.

Nextcloud is running in docker (AIO image) on bare metal (Proxmox OS) to balance performance with ease of maintenance. Backups go to the NAS.

Everything else is running in a VM, which makes backups and restores simpler for me.

otacon239@lemmy.world on 24 Sep 14:11 next collapse

After many failures, I eventually landed on OMV + Docker. It has a plugin that puts the Docker management into a web UI and for the few simple services I need, it’s very straightforward to maintain. I don’t cloud host because I want complete control of my data and I keep an automatic incremental backup alongside a physically disconnected one that I manually update.

kiol@lemmy.world on 24 Sep 14:13 collapse

Cool, how are you managing your disks? Are you overall happy with OMV?

otacon239@lemmy.world on 24 Sep 14:17 collapse

Very happy with OMV. It’s not crazy customizable, so if you have something specialized, you might run into quirks trying to stick to the Web UI, but it’s just Debian under the hood, so it’s pretty manageable. 4x1TB drives RAID 5 for media/critical data, OS drive, and a Service data drive (databases, etc). Then an external 4TB for the incremental and another external 4TB for the disconnected backup.

kiol@lemmy.world on 24 Sep 14:20 collapse

Awesome, thanks. Upgrade process has been seamless?

otacon239@lemmy.world on 24 Sep 14:36 collapse

Haven’t had to do a full OS upgrade yet, but standard packages can be updated and installed right in the web UI as well.

towerful@programming.dev on 24 Sep 14:20 next collapse

I would always run proxmox to set up docker VMs.

I found Talos Linux, which is a dedicated distro for kubernetes. Which aligned with my desire to learn k8s.
It was great. I ran it as bare-metal on a 3 node cluster. I learned a lot, I got my project complete, everything went fine.
I will use Talos Linux again.
However next time, I’m running proxmox with 2 VMs per node - 3 talos control VMs and 3 talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating control plane and data/worker plane (or whatever it is) makes sense - it’s the way k8s is designed.
It wasn’t the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would’ve made things so much easier.

Also, why wouldn’t I run proxmox?
Overhead is minimal, I get a nice overview and a nice UI, and I get snapshots and backups.

possiblylinux127@lemmy.zip on 24 Sep 15:00 collapse

What hardware are you running it on?

towerful@programming.dev on 25 Sep 02:18 collapse

3x minisforums MS-01

Strider@lemmy.world on 24 Sep 14:30 next collapse

Erm. I’d just say there’s no benefit in adding layers just for the sake of it.

It’s just different needs. Say I have a machine where I run a dedicated database on, I’d install it just like that because as said there’s no advantage in making it more complicated.

akincisor@sh.itjust.works on 24 Sep 14:45 next collapse

I have a single micro itx htpc/media server/nas in my bedroom. Why use containers?

kiol@lemmy.world on 24 Sep 16:46 collapse

Do you back it up?

akincisor@sh.itjust.works on 24 Sep 18:44 collapse

My raid is rsync :)

TheMightyCat@ani.social on 24 Sep 14:58 next collapse

I’m selfhosting Forgejo and I don’t really see the benefit of migrating to a container. I can easily install and update it via the package manager, so what benefit does containerization give?

turmoil@feddit.org on 25 Sep 04:02 collapse

If you don’t exceed a single deployment, it’s fine.

If, however, you want to add additional services to your host in the future, let’s say an alerting or status system, it’s a lot easier to declare everything in a single place and then attach a reverse proxy to manage networking for multiple services on one host.

sylver_dragon@lemmy.world on 24 Sep 15:17 next collapse

I started self hosting in the days well before containers (early 2000’s). Having been though that hell, I’m very happy to have containers.
I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

These days, I run everything in containers. My wife and I play games like Valheim together and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That paired with a docker-compose.yaml (also built off a template) means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
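The compose side of that pattern tends to look something like this (a generic sketch, not the actual template; paths and the save location are illustrative, and 2456-2457/udp happen to be Valheim’s dedicated-server ports):

    services:
      valheim:
        build: .                      # the per-game Dockerfile with its AppId baked in
        ports:
          - "2456-2457:2456-2457/udp"
        volumes:
          - ./saves:/home/steam/.config/unity3d/IronGate/Valheim   # persistent world/save data
        restart: unless-stopped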

Auli@lemmy.ca on 24 Sep 16:49 collapse

Yes containers have made everything so easy.

melfie@lemy.lol on 24 Sep 15:19 next collapse

I use k3s and enjoy benefits like the following over bare metal:

  • Configuration as code where my whole setup is version controlled in git
  • Containers and avoiding dependency hell
  • Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
  • Declarative network policies with Calico, mainly to make sure nothing phones home
  • Managing secrets securely in git with Bitnami Sealed Secrets
  • Liveness probes that automatically “turn it off and on again” when something goes wrong
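On that last point, a liveness probe is only a few lines of YAML on the container spec, roughly (path, port and timings are examples):

    livenessProbe:
      httpGet:
        path: /health
        port: 8096
      initialDelaySeconds: 30
      periodSeconds: 15
      failureThreshold: 3       # restart the container after 3 failed checks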

These are just some of the benefits just for one server. Add more and the benefits increase.

Edit:

Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬

Andres4NY@social.ridetrans.it on 24 Sep 15:40 next collapse

@kiol I mean, I use both. If something has a Debian package and is well-maintained, I'll happily use that. For example, prosody is packaged nicely, there's no need for a container there. I also don't want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people's mailboxes. Since I'm still on Debian 12 on my mail server, I remain unaffected and I can let the bugs be shaken out before I upgrade.

Andres4NY@social.ridetrans.it on 24 Sep 15:43 collapse

@kiol On the other hand, for doing builds (debian packages and random other stuff), I'll use podman containers. I've got a self-built build environment that I trust (debootstrap'd), and it's pretty simple to create a new build env container for some package, and wipe it when it gets too messy over time and create a new one. And for building larger packages I've got ccache, which doesn't get wiped by each different build; I've got multiple chromium build containers w/ ccache, llvm build env, etc

Andres4NY@social.ridetrans.it on 24 Sep 15:45 collapse

@kiol And then there's the stuff that's not packaged in Debian, like navidrome. I use a container for that for simplicity, and because if it breaks it's not a big deal - temporary downtime of email is bad, temporary downtime of my streaming flac server means I just re-listen to the stuff that my subsonic clients have cached locally.

Andres4NY@social.ridetrans.it on 24 Sep 15:47 collapse

@kiol Syncthing? Restic? All packaged nicely in Debian, no need for containers. I do use Ansible (rather than backups) for ensuring if a drive dies, I can reproduce the configuration. That's still very much a work-in-progress though, as there's stuff I set up before I started using Ansible...

mesamunefire@piefed.social on 24 Sep 16:03 next collapse

All my services run on bare metal because it’s easy. And the backups work. It heavily simplifies the work and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container… contained and running. Plus a VERY tiny system can run:

  1. Peertube
  2. GoToSocial + client
  3. RSS
  4. search engine
  5. A number of custom sites
  6. backups
  7. Matrix server/client
  8. and a whole lot more

Without a single docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and has been working great. I used to over-complicate everything with docker + docker compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.

I use docker, kub, etc…etc… all at work. And it’s great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.

kiol@lemmy.world on 24 Sep 16:45 next collapse

Do you use any tools for management, such as Ansible or similar?

mesamunefire@piefed.social on 24 Sep 17:03 collapse

Couple of custom bash scripts for the backups. I’ve used Ansible at work. It’s awesome, but my own stuff doesn’t require any robustness.

Auli@lemmy.ca on 24 Sep 16:48 next collapse

Oh so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so, as it’s caching stuff like it’s supposed to.

mesamunefire@piefed.social on 24 Sep 17:01 collapse

Hahaha, that’s funny. I hope you’re not serious.

funkajunk@lemmy.world on 24 Sep 19:53 collapse

Unused RAM is wasted RAM

mesamunefire@piefed.social on 24 Sep 20:09 collapse

Welp, OP did ask how we set it up. And for a family instance it’s good enough. The RAM was extra that came with the comp. I have other things to do than optimize my family home server. There’s no latency at all already.

It spikes when peertube videos are uploaded and transcoded + matrix sometimes. Have a good night!

WhyJiffie@sh.itjust.works on 24 Sep 18:57 next collapse

what do you run for RSS?

also, I hope you are not doing backups by dding an in-use filesystem

mesamunefire@piefed.social on 24 Sep 20:06 collapse

Freshrss. Sips resources.

The dd happens when I want. I have a script I tested a while back. The machine won’t be on, yeah. It’s just a small image with the software.

Miaou@jlai.lu on 25 Sep 13:32 collapse

Assuming you run Synapse, which uses more than 1.5GB of RAM just idling, your system has at the very least 16GB of RAM… Hardly what I’d call “very tiny”.

mesamunefire@piefed.social on 25 Sep 13:44 collapse

…ok, so I’m lying about my system for… some reason?

Synapse looks like it’s using 200M right now. It jumps to 1 GB when being heavily used, but I only use it for piefed and a couple of other local rooms. Honestly it’s not doing so much for us, so we were thinking of getting rid of it. It’s irritating to keep having to set up new devices, and no one is really using it.

Peertube is much bigger running around 500MB just doing its thing.

Its a single family instance.

# ps -eo user,pid,ppid,cmd,pmem,rss --no-headers --sort=-rss | awk '{if ($2 ~ /^[0-9]+$/ && $6/1024 >= 1) {printf "PID: %s, PPID: %s, Memory consumed (RSS): %.2f MB, Command: ", $2, $3, $6/1024; for (i=4; i<=NF; i++) printf "%s ", $i; printf "\n"}}'  
PID: 2231, PPID: 1, Memory consumed (RSS): 576.67 MB, Command: peertube 3.6 590508 
PID: 2228, PPID: 1, Memory consumed (RSS): 378.87 MB, Command: /var/www/gotosocial/gotosoc 2.3 387964 
PID: 2394, PPID: 1, Memory consumed (RSS): 189.16 MB, Command: /var/www/synapse/venv/bin/p 1.1 193704 
PID: 678, PPID: 1, Memory consumed (RSS): 52.15 MB, Command: /var/www/synapse/livekit/li 0.3 53404 
PID: 1917, PPID: 645, Memory consumed (RSS): 45.59 MB, Command: /var/www/fastapi/venv/bin/p 0.2 46680 
nucleative@lemmy.world on 24 Sep 16:48 next collapse

I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the slackware kernel from source to get peripherals to work.

But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

I love it because I can deploy instantly… Oftentimes in a single command line. Docker compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.

And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.

roofuskit@lemmy.world on 24 Sep 17:22 collapse

Hey, you made my post for me though I’ve been using docker for a few years now. Never, looking, back.

laserjet@lemmy.dbzer0.com on 24 Sep 17:02 next collapse

Every time I have tried it, it just introduces a layer of complexity I can’t tolerate. I have struggled to learn everything required to run a simple Debian server. I don’t care what anyone says, Docker is not simpler or easier. Maybe it is when everything runs perfectly, but they never do, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.

However I do keep a list of packages I want to use that are docker-only. So if one day I feel up to it I’ll be ready to go.

kiol@lemmy.world on 24 Sep 17:18 collapse

Did you try compose scripts as opposed to docker run?

laserjet@lemmy.dbzer0.com on 24 Sep 19:12 collapse

I don’t know. both? probably? I tried a couple of things here and there. it was plain that bringing in docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.

If you think it’s likely that I followed some “how to get started with docker” tutorial that had completely wrong information in it, that just demonstrates the point I am making.

[deleted] on 24 Sep 17:04 next collapse

.

kiol@lemmy.world on 24 Sep 17:17 collapse

I mean, I did

raman_klogius@ani.social on 26 Sep 16:57 collapse

did what? apart from downvoting?

corsicanguppy@lemmy.ca on 24 Sep 17:20 next collapse

I don’t host on containers because I used to do OS security for a while.

kiol@lemmy.world on 24 Sep 17:27 collapse

Say more, what did that experience teach you? And, what would you do instead?

brucethemoose@lemmy.world on 24 Sep 17:21 next collapse

In my case it’s performance and sheer RAM need.

GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.

I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

kiol@lemmy.world on 24 Sep 17:26 collapse

Can anyone confirm if containers would actually impact CPU to GPU transfers?

brucethemoose@lemmy.world on 24 Sep 17:37 collapse

To be clear, VMs absolutely have overhead but Docker/Podman is the question. It might be negligible.

And this is a particularly weird scenario (since prompt processing literally has to shuffle ~112GB over the PCIe bus for each batch). Most GPGPU apps aren’t so sensitive to transfer speed/latency.

SpookyMulder@twun.io on 24 Sep 17:30 next collapse

No, you’re not looking to understand. You’re looking to persuade.

kiol@lemmy.world on 24 Sep 17:57 collapse

What do you mean?

WhyJiffie@sh.itjust.works on 24 Sep 18:53 collapse

I think this is like sepi’s response but less polite

kiol@lemmy.world on 24 Sep 19:01 collapse

I see. There is no disrespect intended, because it is a discussion thread starter. My question about this is: what would be the better phrasing for the subject matter of this post? Either way, discussion seems to be going great. Cheers all, because it isn’t a discussion of what is better: it is a general curiosity for people running bare metal, because it seems to receive zero discussion. I am glad to see such people responding, positive or negative.

WhyJiffie@sh.itjust.works on 24 Sep 19:23 collapse

I think people are assuming you want to convert people to The Church of Docker, in their minds, if you know what I mean. I do not see it that way, but “what is stopping you from using virtualization” has such a tone as if everyone is supposed to virtualize, but something prevents them and they can’t.

I think a better way to ask it would be “what are your reasons for sticking with bare metal?” or something like that.

Sidenote: to me it seems some people here have had quite bad experiences with Docker. I mean, it has parts I don’t like either, but I never had so many problems with it, and I’m hosting a dozen web services locally. Maybe their experience was from the early days?

jaemo@sh.itjust.works on 24 Sep 17:31 next collapse

I generally abstract to docker anything I don’t want to bother with and just have it work.

If I’m working on something that requires lots of back and forth syncing between host and container, I’ll run that on bare metal and have it talk to things in docker.

I.e.: working on an app or a website or something in a language of choice on a framework of choice, but Postgres and Redis are living in Docker. Just the app I’m messing with and its direct dependencies run outside.

sepi@piefed.social on 24 Sep 17:36 next collapse

“What is stopping you from” <- this is a loaded question.

We’ve been hosting stuff long before docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

tl;dr docker is not an absolute necessity and your phrasing makes it seem like it’s the only way of self‐hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.

kiol@lemmy.world on 24 Sep 17:58 collapse

Question is totally on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

lka1988@lemmy.dbzer0.com on 24 Sep 22:22 next collapse

Honest response - respect.

sepi@piefed.social on 30 Sep 17:35 collapse

What is stopping you from running HP-UX for all your workloads? The question is totally on purpose, so that you’ll fill in what it means to you.

7rokhym@lemmy.ca on 24 Sep 19:56 next collapse

Not knowing about Incus (LXD). It’s a life changer. Would never run any service on bare metal again.
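For the unfamiliar, the day-to-day is just a handful of commands (names are examples):

    incus launch images:debian/12 web01            # create and start a system container
    incus exec web01 -- apt-get install -y nginx   # run commands inside it
    incus snapshot create web01 pre-upgrade        # snapshot before risky changes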

Using GenAI to develop my Terraform and Ansible playbooks is magical. Also, use it to document everything in beautiful HTML docs from the outputs. Amazing.

misterbngo@awful.systems on 24 Sep 20:08 next collapse

Your phrasing of the question implies a poor understanding. There’s nothing preventing you from running containers on bare metal.

My colo setup is a mix of classical and podman systemd units running on bare metal, combined with a little nginx for the domain and tls termination.

I think you’re actually asking why folks would use bare metal instead of cloud, and here’s the truth. You’re paying for that resiliency even if you don’t need it, which means that renting the cloud stuff is incredibly expensive. Most people can probably get away with a $10 VPS, but the AWS meme of needing 5 app servers, an RDS and a load balancer to run WordPress has rotted people. My server that I paid a few grand for on eBay would cost me about as much monthly to rent from AWS. I’ve stuffed it full of flash with enough redundancy to lose half of it before going into colo for replacement. I paid a bit upfront, but I am set on capacity for another half decade plus; my costs are otherwise fixed.

lazynooblet@lazysoci.al on 24 Sep 23:38 collapse

Your phrasing of the question implies poor understanding.

Your phrasing of the answer implies poor understanding. The question was why bare metal vs containers/VMs.

sepi@piefed.social on 30 Sep 17:37 collapse

The phrasing by the person you are responding to is perfectly fine and shows ample understanding. Maybe you do not understand what they were positing.

Surp@lemmy.world on 24 Sep 20:32 next collapse

What are you doing running your vms on bare metal? Time is a flat circle.

missfrizzle@discuss.tchncs.de on 24 Sep 21:16 collapse

for work I have a cloud dev VM, in which I run WSL2. so there’s at least two levels of VMs happening, maybe three honestly.

lka1988@lemmy.dbzer0.com on 24 Sep 20:37 next collapse

I run my NAS and Home Assistant on bare metal.

  • NAS: OMV on a Mac mini with a separate drive case
  • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible

Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it’s Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

a1studmuffin@aussie.zone on 25 Sep 00:27 collapse

I’m curious why you feel these are easier to run on bare metal? I only ask as I’ve just built my first proxmox PC with the intent to run TrueNAS and Home Assistant OS as VMs, with 8x SAS enterprise drives on an HBA passed through to the TrueNAS VM.

Is it mostly about separation of concerns, or is there some other dragon awaiting me (aside from the power bills after I switch over)?

lka1988@lemmy.dbzer0.com on 25 Sep 06:57 collapse

Anything I run on Proxmox, per my own requirements, needs to be hardware-agnostic. I have a 3-node cluster set up to be a “playground” of sorts, and I like being able to migrate VMs/LXCs between different nodes as I see fit (maintenance reasons or whatever).

Some services I want to run on their own hardware, like Home Assistant, because it offers more granular control. The Lenovo M710q Tiny that my HA system runs on, even with its i7-7700T, pulls a whopping 10W on average. I’ll probably change it to the Pentium G4560T that’s currently sitting on my desk, and repurpose the i7-7700T for another machine that could use the horsepower.

My NAS is where I’m more concerned about separation of duties. I want my NAS to only be a NAS. OMV is pretty simple to manage, has a great dashboard, spits out SMART data, and also runs a weekly rsync backup command on my RAID to a separate 8TB backup drive. I’m currently in the process of building a “new” NAS inside a gutted HP server case from 2003 to replace the Mac mini/USB 4-bay drive enclosure. New NAS will have a proper HBA to handle drives.

or is there some other dragon awaiting me (aside from the power bills after I switch over)?

My entire homelab runs about 90-130W. It’s pulled a total of ~482kWh since February (when I started monitoring it). That’s 3x tiny/mini/micro PCs (HP 800 G3 i7, HP 800 G4 i7, Lenovo M710q i7), an SFF (Optiplex 7050 i7), 2014 Mac mini (i5)/loaded 4-bay HDD enclosure/8TB USB HDD, Raspberry Pi 0W, and an 8-port switch.

a1studmuffin@aussie.zone on 25 Sep 16:22 collapse

Wow, thanks so much for the detailed rundown of your setup, I really appreciate it! That’s given me a lot to think about.

One area that took me by surprise a little bit with the HBA/SAS drive approach I’ve taken (and it sounds like you’re considering) is the power draw. I just built my new server PC (i5-8500T, 64GB RAM, Adaptec HBA + 8x 6TB 12Gb SAS drives) and initial tests show on its own it idles at ~150W.

I’m fairly sure most of that is the HBA and drives, though I need to do a little more testing. That’s higher than I was expecting, especially since my entire previous setup (Synology 4-bay NAS + 4x SATA drives, external 8TB drive, Raspberry Pi, switch, Mikrotik router, UPS) idles at around 80W!

I’m wondering if it may have been overkill going for the SAS drives, and a proxmox cluster of lower spec machines might have been more efficient.

Food for thought anyway… I can tell this will be a setup I’m constantly tinkering with.

missfrizzle@discuss.tchncs.de on 24 Sep 21:21 next collapse

pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

/uj not really but that’d be sick as hell.

Tangent5280@lemmy.world on 24 Sep 22:11 collapse

I just imagine what the output of any program would be. Follow me, set yourself free!

Kurious84@lemmings.world on 24 Sep 21:56 next collapse

Anything you want dedicated performance on, or that requires fine tuning for specific performance use cases. They’re out there.

fubarx@lemmy.world on 24 Sep 22:50 next collapse

Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs down to the bootloader.

The only constant is change.

pedro@lemmy.dbzer0.com on 24 Sep 23:27 next collapse

I’ve not cracked the Docker nut yet. I don’t get how I back up my containers and their data. I would also need to transfer my Plex database into its container while switching from Windows to Linux. I love Linux but haven’t figured out these two things yet.

purplemonkeymad@programming.dev on 24 Sep 23:38 next collapse

An easy option is to add the data folders for the container you are using as a volume mapped to a local folder. Then the container will just put the files there and you can backup the folder. Restore is just put the files back there, then make sure you set the same volume mapping so the container already sees them.

You can also use the same method to access the db directory for the migration. Typically for databases you want to make sure the container is stopped before doing anything with those files.
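As a concrete sketch of the pattern (the linuxserver.io image, paths, and host networking are just one common way to run it):

    services:
      plex:
        image: lscr.io/linuxserver/plex:latest
        volumes:
          - ./config:/config        # Plex's database and settings live here on the host
          - /mnt/media:/media:ro    # media library, read-only
        network_mode: host

Backing up is then just stopping the container and copying ./config, and copying your existing Plex data directory into that folder is roughly how you’d carry it over from the Windows install, too.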

hperrin@lemmy.ca on 25 Sep 00:12 next collapse

For anything you want to back up (data directories, media directories, db data), you would use a bind mount to a directory on the host. Then you can back them up just like everything else on the host.

boiledham@lemmy.world on 25 Sep 04:56 next collapse

You would leave your plex config and db files on the disk and then map them into the docker container via the volume parameter (-v parameter if you are running command line and not docker-compose). Same goes for any other docker container where you want to persist data on the drive.

Passerby6497@lemmy.world on 25 Sep 05:40 collapse

All your docker data can be saved to a mapped local disk, then backup is the same as it ever is. Throw borg or something on it and you’re gold.

Look into docker compose and volumes to get an idea of where to start.

hperrin@lemmy.ca on 25 Sep 00:11 next collapse

There’s one thing I’m hosting on bare metal, a WebDAV server. I’m running it on the host because it uses PAM for authentication, and that doesn’t work in a container.

FreedomAdvocate@lemmy.net.au on 25 Sep 03:06 next collapse

Containerisation is all the rage, but in reality it’s not needed for all but a tiny number of self-hosters. If a native program option exists, it’s generally just easier and more performant to use that.

Docker and the like shine when you’re frequently deploying and destroying. If you’re doing that with your home server you’re doing it very wrong.

I like docker, I use it on my server, but I am more and more switching back to native apps. There’s just zero advantage to running most things in docker.

donalonzo@lemmy.world on 25 Sep 09:01 collapse

Containers are as performant as a native program because they are native programs.

FreedomAdvocate@lemmy.net.au on 25 Sep 17:29 collapse

Nope. If you use Docker containers on Windows or Mac, they’re running using an abstraction layer. Docker is the native app, but what’s running inside them isn’t. At best they are nearly identical, with a negligible performance hit, but as soon as you use things like port forwarding the performance takes a hit.

stackoverflow.com/…/what-is-the-runtime-performan…

eleitl@lemmy.zip on 25 Sep 05:26 next collapse

Obviously, you host your own hypervisor on own or rented bare metal.

frezik@lemmy.blahaj.zone on 25 Sep 05:37 next collapse

My file server is also the container/VM host. It does NAS duties while containers/VMs do the other services.

OPNsense is its own box because I prefer to separate it for security reasons.

Pihole is on its own RPi because that was easier to setup. I might move that functionality to the AdGuard plugin on OPNsense.

HiTekRedNek@lemmy.world on 25 Sep 09:17 collapse

My reasons for keeping OpnSense on bare metal mirror yours. But additionally I don’t want my network to take a crap because my proxmox box goes down.

I constantly am tweaking that machine…

splendoruranium@infosec.pub on 25 Sep 05:40 next collapse

Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

If it aint broke, don’t fix it 🤷

Routhinator@startrek.website on 25 Sep 05:52 next collapse

I’m running Kube on baremetal.

medem@lemmy.wtf on 25 Sep 07:08 next collapse

The fact that I bought all my machines used (and mostly on sale), and that not one of them is general purpose, id est, I bought each piece of hardware with a (more or less) concrete idea of what would be its use case. For example, my machine acting as a file server is way bigger and faster than my desktop, and I have a 20-year-old machine with very modest specs whose only purpose is being a dumb client for all the bigger servers. I develop programs in one machine and surf the internet and watch videos on the other. I have no use case for VMs besides the Logical Domains I setup in one of my SPARC hosts.

zod000@lemmy.dbzer0.com on 25 Sep 07:49 next collapse

Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.

boonhet@sopuli.xyz on 26 Sep 04:57 collapse

Main benefit of Docker for home is Docker compose IMO. Makes it so easy to reuse your configuration

cichy1173@szmer.info on 26 Sep 11:29 collapse

Then check IaC, for example with Terraform or Ansible

boonhet@sopuli.xyz on 26 Sep 13:33 collapse

Why if I already need to know Docker for work, but not the others

I’ve used Kubernetes but not Ansible lol

HiTekRedNek@lemmy.world on 25 Sep 10:53 next collapse

In my own experience, certain things should always be on their own dedicated machines.

My primary router/firewall is on bare metal for this very reason.

I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.

I could quite easily run OpnSense in a VM, and I do that, too. I run proxmox, and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work)

And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.

I didn’t see a point in removing it. So it’s there, just not automatically started.

AA5B@lemmy.world on 25 Sep 11:44 collapse

Same here. In particular I like small cheap hardware to act as appliances, and have several Raspberry Pis.

My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don’t have to deal with that. It also needs to be always available, so I use efficient “right-sized” hardware and it works regardless of whether I’m futzing with my “lab”.

Damage@feddit.it on 25 Sep 12:01 collapse

My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier.

If you’re talking about backups and updates for addons and core, that works on VMs as well.

AA5B@lemmy.world on 25 Sep 15:44 collapse

For my use case, I’m continually fiddling with my VM config. That’s my playground, not just the services hosted there. I want home assistant to always be available so it can’t be there.

I suppose I could have a “production” VM server that I keep stable, separately from my “dev” VM server, but that would be more effort. Maybe it’s simply that I don’t have many services I want to treat as production, so the physical hardware is the cheapest and easiest option.

yessikg@fedia.io on 25 Sep 12:35 next collapse

It's so simple that it takes so much less time. One day I may move to Podman, but I need to find the time to learn it. I host Jellyfin.

SailorFuzz@lemmy.world on 25 Sep 12:46 next collapse

Mainly that I don’t understand how to use containers… or VMs that well… I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on… HomeAssistant, Jellyfin, etc…

I got Proxmox installed on it, I can access it… I don’t know what the fuck I’m doing… There was a website that let you just run shell scripts to install a lot of things… but now none of those work because it says my version of Proxmox is wrong (when it’s not?)…

And at least VMs are easy(ish) to understand. Fake computer with OS… easy. I’ve built PCs before, I get it… Containers just never want to work, or I don’t understand wtf to do to make them work.

I wanted to run a Zulip or Rocket.chat for internal messaging around the house (wife and I both work at home, kid does home/virtualschool)… wanted to use a container because a service that simple doesn’t feel like it needs a whole VM… but it won’t work…

ChapulinColorado@lemmy.world on 25 Sep 14:18 collapse

I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.

Pay attention when people say things can be improved (secrets/passwords, rootless/Podman, backups, etc.), and come back to those later.

Just don’t expose things to the internet until you understand the risks, don’t check secrets into a public git repo, and go from there. It is a lot more manageable and feels like a hobby, vs. feeling like I’m still at work trying to get high availability, concurrency, and all this other stuff that does not matter for a home setup.

lka1988@lemmy.dbzer0.com on 25 Sep 14:20 collapse

I would give docker compose a try instead. I found Proxmox to be too much, when a simple yaml file (that can be checked into a repo) can do the job.

Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.

ChapulinColorado@lemmy.world on 25 Sep 15:16 collapse

I get that, but the services listed by the other comment run just fine in docker with less hassle by throwing in some bind mounts.

The 4 VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.

lka1988@lemmy.dbzer0.com on 25 Sep 17:00 collapse

Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.

When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.

Evotech@lemmy.world on 25 Sep 14:21 next collapse

It’s just another system to maintain, another link in the chain that can fail.

I run all my services on my personal gaming pc.

bhamlin@lemmy.world on 25 Sep 14:34 next collapse

It depends on the service and the desired level of the stack.

I generally will run services directly on things like a Raspberry Pi because VMs and containers add complexity that isn’t really warranted for the task.

At work, I run services in docker in VMs because the benefits far outweigh the complexity.

pineapplelover@lemmy.dbzer0.com on 25 Sep 14:38 next collapse

All I have is Minecraft and a Discord bot, so I don’t think that justifies VMs.

sem@lemmy.blahaj.zone on 25 Sep 14:53 next collapse

For me, the learning curve of containers doesn’t match the value proposition of the benefits they’re supposed to provide.

billwashere@lemmy.world on 25 Sep 15:06 collapse

I really thought the same thing. But it truly is super easy. At least just the containers like docker. Not kubernetes, that shit is hard to wrap your head around.

Plus if you screw up one service and mess everything up, you don’t have to rebuild your whole machine.

dogs0n@sh.itjust.works on 25 Sep 19:10 collapse

100% agree, my server has pretty much nothing except docker installed on it and every service I run is always in containers.

Setting up a new service is mostly 0% risk and apps can’t bog down my main file system with random log files, configs, etc that feel impossible to completely remove.

I also know that if for any reason my server were to explode, all I would have to do is pull my compose files from the cloud and docker compose up everything and I am exactly where I left off at my last backup point.
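
The whole recovery plan fits in three lines (repo URL is a placeholder):

```sh
git clone https://git.example.com/me/compose-files.git
cd compose-files
docker compose up -d   # then restore the data volumes from the last backup
```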

billwashere@lemmy.world on 25 Sep 15:10 next collapse

Ok, I’m arguing for containers/VMs, and granted, I do this for a living… I’m a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much easier.

Just having all these things sandboxed makes it SO much easier.

Smokeydope@lemmy.world on 25 Sep 16:41 next collapse

I’m a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with, which is a fresh install of Linux and installing from the apt package manager.

As I’m getting more serious, I’m starting to take another look at Docker. Unfortunately my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like Ubuntu/Debian LTS server, something with more cutting-edge software in the repos. I don’t care much for building from scratch and navigating dependency roulette.

kiol@lemmy.world on 25 Sep 18:11 collapse

What OS are you using?

Smokeydope@lemmy.world on 25 Sep 18:28 collapse

Linux Mint 22

BrianTheFirst@lemmy.world on 25 Sep 18:39 collapse

I guess it isn’t the most user-friendly process, but you can add the official Docker repo and get an up-to-date version without compiling or anything. You just want to make sure to uninstall any Docker packages you previously installed before you start.

linuxiac.com/how-to-install-docker-on-linux-mint-…
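
If it helps, this is roughly what that guide boils down to for Mint 22 (from memory, so double-check against the link — the one Mint-specific gotcha is pointing the repo at the Ubuntu base codename, noble for Mint 22, rather than Mint’s own):

```sh
# remove any distro-packaged Docker bits first
sudo apt remove docker.io docker-doc docker-compose podman-docker containerd runc

# prerequisites + Docker's signing key
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# add the repo — note the hardcoded Ubuntu codename (noble), not Mint's codename
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```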

TeddE@lemmy.world on 25 Sep 19:46 collapse

They can but - if their current setup meets their needs - why? There ain’t nothing wrong with having a few simple spare laptops, each an isolated environment for a few simple home server tasks each.

Don’t get me wrong - I too advocate for docker, particularly on new builds, or as a relatively turnkey solution to get started for novice friends, but the best setup is the one that works, and they sound like they got theirs where they want it.

BrianTheFirst@lemmy.world on 27 Sep 21:46 collapse

…because that isn’t what they said. They said that they are getting more serious and now looking at Docker, but the outdated version in the Mint repo is preventing them from exploring that any further. So I offered a method that I know works without any of the “dependency roulette” that they were concerned about, while also giving a disclaimer that it isn’t exactly noob-friendly. 🤷‍♂️

TeddE@lemmy.world on 27 Sep 22:01 collapse

Fair point. I think my eyes glossed over the part where they said they were taking a second look at Docker (but caught the rest about rebuilding the OS in general). My sincere apologies 😓😅

sj_zero@lotide.fbxl.net on 26 Sep 02:10 next collapse

I'm using proxmox now with lots of lxc containers. Prior to that, I used bare metal.

VMs were never really an option for me because the overhead is too high for the low power machines I use -- my entire empire of dirt doesn't have any fans, it's all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

Stuff like docker I didn't like because it never really felt like I was in control of my own system. I was downloading a thing someone else made and it really wasn't intended for tinkering or anything. You aren't supposed to build from source in docker as far as I can tell.

The nice thing about proxmox's lxc implementation is I can hop in and change things or fix things as I desire. It's all very intuitive, and I can still separate things out and run them where I want to, and not have to worry about keeping 15 different services running on the same version of whatever common services are required.
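
That “hop in and fix things” workflow is really just a couple of commands (container ID made up):

```sh
pct snapshot 103 before-tinkering   # rollback point, if the underlying storage supports snapshots
pct enter 103                       # root shell inside the container
```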

boonhet@sopuli.xyz on 26 Sep 04:53 collapse

Actually docker is excellent for building from source. Some projects only come with instructions for building in Docker because it’s easier to make sure you have tested versions of tools.

kossa@feddit.org on 26 Sep 02:37 next collapse

Well, that is how I started out. Docker was not around yet (or not mainstream enough, maybe). So it is basically a legacy thing.

My main machine is a Frankenstein monster by now, so I am gradually moving. But since the days when I started out, time has become a scarce resource, so the process is painfully slow.

ZiemekZ@lemmy.world on 26 Sep 02:39 next collapse

I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as in Docker (if not better, given the lack of overhead)!

boonhet@sopuli.xyz on 26 Sep 04:44 collapse

Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build

Mubelotix@jlai.lu on 26 Sep 04:57 next collapse

It’s not just libraries in a Docker container.

boonhet@sopuli.xyz on 26 Sep 05:01 collapse

True, Docker does it better because any executables also have redundant copies. Running two different node applications on bare metal, they can still disagree about the node version, etc.

The actual old-school bloat-free way to do it is shared libraries of course. And that shit sucks.

communism@lemmy.ml on 26 Sep 05:40 collapse

Idk about Immich but Vaultwarden is just a Cargo project no? Cargo statically links crates by default but I think can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general just by convention.

boonhet@sopuli.xyz on 26 Sep 08:54 collapse

Yes, that was my point, you (generally) link statically in Rust because that resolves dependency issues between the different applications you need to run. Cost is a slightly bigger, bloatier binary, but generally it’s a very good tradeoff because a slightly bigger binary isn’t an inconvenience these days.

Docker achieves the same for everything, including dynamically linked projects that default to using shared libraries which can have dependency nightmares, other binaries that are being called, etc. It doesn’t virtualize an entire OS unless you’re using it on MacOS or Windows, so the performance overhead is not as big as people seem to think (disk space overhead, though… can get slightly bigger). It’s also great for dev environments because you can have different devs using whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.

I generally wouldn’t put a Rust/Cargo project in docker by default since it’s pretty rare to run into external dependency issues with those, but might still do it for the tooling (docker compose, mainly).

nuggie_ss@lemmings.world on 26 Sep 05:19 next collapse

Warms me heart to see people in this thread thinking for themselves and not doing something just because other people are.

ieGod@lemmy.zip on 26 Sep 05:43 next collapse

You sure you mean bare metal here? Bare metal means no OS.

DarkMetatron@feddit.org on 26 Sep 06:08 next collapse

My servers and NAS were created long before Docker was a thing, and as I am running them on a rolling release distribution there never was a reason to change anything. It works perfectly fine the way it is, and it will most likely run perfectly fine the next 10+ years too.

Well, I am planning, when I find the time to research a good successor, to replace my aging HPE ProLiant MicroServer Gen8 that I use as home server/NAS. Maybe I will then set everything up cleanly and migrate the services to docker/podman/whatever is fancy then. But most likely I will only transfer all the disks and keep the old system running on newer hardware. Life is short…

OnfireNFS@lemmy.world on 26 Sep 14:27 next collapse

This reminds me of a question I saw a couple years ago. It was basically why would you stick with bare metal over running Proxmox with a single VM.

It kinda stuck with me, and since then I’ve reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It’s also really convenient to have a web interface to manage the computer.
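
Before an update it’s basically just (VM ID and storage name are placeholders):

```sh
qm snapshot 100 pre-upgrade                   # instant rollback point for the whole VM
vzdump 100 --mode snapshot --storage backups  # full backup to whatever backup storage you configured
```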

Probably doesn’t work for everyone but it works for me

erock@lemmy.ml on 27 Sep 16:43 next collapse

Here’s my homelab journey: bower.sh/homelab

Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support splitting the GPU. At the end of the day, it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch running everything with systemd and quadlet.
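
For anyone curious what a quadlet looks like: it’s just a little unit file that Podman turns into a normal systemd service. A minimal sketch (image, port, and path are only placeholders):

```ini
# ~/.config/containers/systemd/myapp.container  (rootless; /etc/containers/systemd/ for system-wide)
[Unit]
Description=Example container managed as a systemd service

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=/srv/myapp-data:/data:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it shows up as `myapp.service` like any other unit.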

CaptainBasculin@lemmy.bascul.in on 24 Sep 16:31 next collapse

Bare metal is cheaper if you already have some old PC components laying around, and the services aren’t bound to my host PC being on. My PC uses a 600W power supply, while the old laptop running my Jellyfin + Pi-hole server uses like 40W.

jet@hackertalks.com on 30 Sep 04:37 collapse

KISS

The more complicated the machine, the more chances for failure.

Remote management plus bare metal just works, it’s very simple, and you get the maximum out of the hardware.

Depending on your use case that could be very important