From Docker with Ansible to k3s: I don't get it...
from sunoc@sh.itjust.works to selfhosted@lemmy.world on 09 Jul 08:04
https://sh.itjust.works/post/41842207
Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it’s working great, but I want to learn MOAR and I need help…
Recently, I’ve been considering migrating to bare metal K3S for a few reasons:
- To learn and actually practice K8S.
- To have redundancy and to try HA.
- My RPi are all already running on MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
- Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!
Here is my problem: I don’t understand how things are supposed to be done. All the examples I find feel wrong. More specifically:
- Am I really supposed to have a collection of small yaml files for everything, that I use with `kubectl apply -f`?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible?
- I see little to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
- Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and fight with ssl certs just to have Rancher and its dashboard?!
I feel that having a K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it’s really not.
It’s very much a noob question, but I really want to understand what I am doing wrong. I’m really looking for advice and especially configuration examples that I could try to copy, use and modify!
Thanks in advance,
Cheers!
K3s (and k8s for that matter) expects you to build a hierarchy of yaml configs, mostly because spinning up container instances is done in groups: certain traits apply to the whole organization, certain ones apply only to most groups but not all, and certain configs are special to particular services (e.g. http nodes added when demand is higher than some threshold).
But I wonder why you want to cluster navidrome or pihole. Navidrome would need significant load before load balancing becomes necessary (and it’s non-trivial to implement), and pihole can simply be put behind a round-robin DNS forwarder; it would also be weird to implement behind a load balancer.
My goal is to have a k3s cluster as a deployment env and try and run the services I’m already using. I don’t need any advanced load balancing, I just want pods to be restarted if one of my machines stops.
The answer to the first two questions is helm charts. They are collections of parametrized yaml files and the most popular way to install things into k8s. You just need one config file per helm release (values.yaml).
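For a rough idea (a sketch, not from any particular chart — the keys depend entirely on the chart you pick, and the navidrome image/version here are just placeholders):

```yaml
# values.yaml for a hypothetical "navidrome" helm release.
# The available keys depend on the chart; check its default values.yaml.
image:
  repository: deluan/navidrome
  tag: "0.52.5"        # pin a version instead of "latest"
service:
  type: ClusterIP
  port: 4533
persistence:
  enabled: true
  size: 2Gi
```

You’d install it with something like `helm install navidrome <repo>/<chart> -f values.yaml`, and later changes are just editing the file and running `helm upgrade`.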
If you want to go declarative with gitops rather than imperative, check ArgoCD or flux.
Or Kustomize, though I prefer Helm.
Then helmfile might be worth checking out
Thanks, I’ll check these more in detail!
Everyone talks about helm charts.
I tried them and hate writing them.
I found garden.io, and it makes a really nice way to consume repos (of helm charts, manifests etc) and apply them in a sensible way to a k8s cluster.
Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
I massively appreciate that helm charts are used for most projects, and they make sense for something you are going to share.
But if it’s a solo project or consuming other people’s projects, I don’t think it really solves a problem.
Which is why I used garden.io. It’s designed for deploying kubernetes manifests, and I found it had just enough tooling to make things easier.
Though, if you are used to ansible, it might make more sense to use ansible.
Pretty sure ansible will be able to do it all in a way you are familiar with.
As for writing the manifests themselves, I find it rare I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart. So I just reference that in a garden file, set any variables I need to, and all good.
If there aren’t helm charts or kustomize files, then it’s adapting a docker compose file into manifests. Which is manual.
Occasionally I have to write some CRDs, config maps or secrets (CMs and secrets are easily made in garden).
I also prefer to install operators, instead of the raw service. For example, I use Cloudnative Postgres to set up postgres databases.
I create a CRD that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
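For a sense of scale, such a CRD can be as small as this (a minimal sketch; the name, namespace and storage size are placeholders):

```yaml
# Minimal CloudNativePG cluster -- once the operator is installed, CNPG
# provisions the pods, services, secrets and PVCs for this by itself.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db
  namespace: databases
spec:
  instances: 2            # primary + one replica
  storage:
    size: 5Gi             # placeholder size
```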
The way I use kubernetes for the projects I do is:
Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from helm files (or similar).
Then apply all my pods, services, certificates etc from hand written manifests.
Using garden, I can make sure things are deployed in the correct order: operators are installed before trying to apply a CRD, secrets/cms created before being referenced etc.
If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.
Any on-the-fly changes I make, I ensure I back port to the project configs so when I wipe, reset, reinstall I still get what I expect.
However, I have recently found cdk8s.io and I’m meaning to investigate that for creating the manifests themselves.
Write code using a typed language, and have cdk8s create the raw yaml manifests. Seems like a dream!
I hate writing yaml. Auto complete is useless (the editor has no idea what format the yaml doc should take), auto formatting is useless (mostly because yaml is whitespace sensitive, and the editor has no idea what things are a child or a new parent). It just feels ugly and clunky.
Helm charts are awful, I didn’t really like cdk8s either tbh. I think the future “package format” might be operators or Crossplane Composite Resources.
Oh, operators are absolutely the way for “released” things.
But on bigger projects with lots of different pods etc, it’s a lot of work to make all the CRD definitions, hook all the events, and write all the code to deploy the pods etc.
Similar to helm charts, I don’t see the point for personal projects. I’m not sharing it with anyone, I don’t need helm/operator abstraction for it.
And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are “doing the right thing” before slinging it into k8s.
garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.
Here is an example of authentik deployed using helm and fluxcd.
Interesting, I might check them out.
I liked garden because it was “for kubernetes”. It was a horse and it had its course.
I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.
I’m willing to re-evaluate my deployment stack, tbh.
I’ll definitely dig more into flux and ansible.
Thanks!
That’s CI 🙃
Confusing terms, but yeah. With ArgoCD and FluxCD, they just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases” but argo has something similar.
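For a rough idea of what a Flux HelmRelease looks like (a hand-written sketch, not taken from the linked repo; the chart name, version range and HelmRepository reference are placeholders):

```yaml
# Flux watches this object in git and keeps the chart installed/upgraded
# to match. Assumes a HelmRepository named "authentik" already exists
# in the flux-system namespace.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: authentik
  namespace: authentik
spec:
  interval: 10m
  chart:
    spec:
      chart: authentik
      version: "2024.x"          # placeholder version range
      sourceRef:
        kind: HelmRepository
        name: authentik
        namespace: flux-system
  values:
    ingress:
      enabled: true              # chart-specific values go here
```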
I’ll post more later (reply here to remind me), but I have your exact setup. It’s a great way to learn k8s and yes, it’s going to be an uphill battle for learning - but the payoff is worth it. Both for your professional career and your homelab. It’s the big leagues.
For your questions, no to all of them. Once you learn some of it the rest kinda falls together.
I’m going into a meeting, but I’ll post here with how I do it later. In the meantime, pick one and only one container you want to get started with. Stateless is easier to start with compared to something that needs volumes. Piece by piece, brick by brick, you will add more to your knowledge and understanding. Don’t try to take it all on day one. First just get a container running. Then access via a port and http. Then proxy. Then certs. Piece by piece, brick by brick. Take small victories; if you try to say “tomorrow everything will be on k8s” you’re setting yourself up for anger and frustration.
@sunoc@sh.itjust.works Edit: To help out I would do these things in these steps, note that steps are not equal in length, and they are not complete - but rather to help you get started without burning out on your journey. I recommend just taking each one, and when you get it working rather than jumping to the next one, instead taking a break, having a drink, and celebrating that you got it up and running.
- Start documenting everything you do. The great thing about kubernetes is that you can restart from scratch if you have written everything down. I would start a new git repository with a README that contains every command you ran, what it did, and why you did it. Assume that you will be tearing down your cluster and rebuilding it - in fact I would even recommend that. Treat this first cluster as your testing grounds, and then you won’t feel crappy spinning up temporary resources. Then, you can rebuild it and know that you did a great job - and you’ll feel confident in rebuilding in case of hardware failure.
- Get the sample nginx pod up and running with a service and deployment, simply so you can `curl` the IP of your main node and port, and see the response. This I assume you have played with already.
- Point DNS to your main node and reach the nginx pod at `http://your.dns.tld:PORT`. This should be the same as anything you’ve done with docker before.
- Convert the yaml to a helm chart as others have said, but don’t worry about “templating” yet; get comfortable with `helm install`, `helm upgrade -i`, and `helm uninstall`. Understand what each one does and how they operate. Then go back and template, upgrade-ing after each change to understand how it works. It’s pretty standard to template the image and tag, for example, so it’s easy to upgrade them. There’s a million examples online, but don’t go overboard, just do the basics. My template values.yaml usually looks like the sketch below; just keep it simple for now.
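A minimal sketch in that spirit, with only the image and tag parameterized (names and versions are placeholders):

```yaml
# values.yaml -- keep the templated surface tiny at first.
# The deployment template reads .Values.image.repository / .Values.image.tag.
image:
  repository: nginx
  tag: "1.27"          # bump this and `helm upgrade -i` to roll out a new version
replicaCount: 1
service:
  port: 80
```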
- `istio`. I can go into more details why later, but I like that I can create a “VirtualService” for `$appname.my.custom.tld` and it will point to it. Set up `nginx.your.tld` and be able to curl `http://nginx.your.tld` and see that it routes properly to your sample nginx service. Congrats, this is a huge one.
- `Certificate` types in k8s. You’ll…

Very true. Each brick you lay upgrades your setup and your skillset. There are very few mistakes in Kubernetes as long as you make sure your state is backed up.
Great writeup.
Wow, that’s a lot of detail and information! Thank you so much for taking the time to write all of this!
For the note taking part, it should be okay, I’m putting everything in my org-roam notes, including my current Ansible setup and my microOS combustion script!
For the rest, I’ll need to try it step by step; at the moment I think my problem is actually how to access the services with Traefik. I guess it will be an important step once I figure it out.
Thanks again for the help!
Glad to be of help. It is the right decision, I have no regrets about migrating, but it is a long process. Just getting my first few services running took months, just so you are aware of that commitment, but it’s worth it.
Yeah - k8s has a bit of a steep learning curve. I recently-ish made the conversion from “a bunch of docker-compose files” to microk8s myself. So here are some thoughts for you (in no particular order).
I would avoid helm like the plague. Everybody is going to recommend it to you but it just puts a wrapper on a wrapper and is MUCH more complicated than what you’re going to need because you’re not spinning up hundreds of similar-but-different services. Making things into templates adds a ton of complexity and overhead. It’s something for a vendor to do, not a home-gamer. And you’re going to need to understand the basics before you can create helm charts anyway.
The actual yml files you need are relatively simple compared to a helm chart that needs to be parameterized and support a bazillion features.
So yes - you’re going to create a handful of yml files and `kubectl apply -f` them. But you can do that with Ansible if you want, or you can combine them into a single yml (separate sections with `---`).

What I do is: for each service I create a directory. In it I have `name_deployment.yml`, `name_service.yml`, `name_ingress.yml` and `name_pvc.yml`. I just apply them when I change them, which isn’t frequent. Each application I deploy generally has its own namespace for all its resources. I’ll combine deployments into a NS if they’re closely related (e.g. prometheus and grafana are in the same NS).

Do yourself a favor and install `kubens`, which lets you easily see and change your namespace globally. Gawd I hate having to type out my namespace for everything. 99% of the time when you can’t find a thing with `kubectl get`, you’re not looking in the right namespace.

You’re going to need to sort out your storage situation. I use NFS for long-term storage for my pods and have microk8s configured to automatically create space on my NFS server when pods request a PV (persistent volume). You can also use local directories, but that won’t cluster.
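To make that concrete, a name_deployment.yml / name_service.yml pair usually boils down to something like this (a sketch; the navidrome name, image and port are placeholders):

```yaml
# navidrome_deployment.yml -- one pod running the app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navidrome
  namespace: navidrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: navidrome
  template:
    metadata:
      labels:
        app: navidrome
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:0.52.5   # placeholder tag
          ports:
            - containerPort: 4533
---
# navidrome_service.yml -- stable in-cluster address for the pod
apiVersion: v1
kind: Service
metadata:
  name: navidrome
  namespace: navidrome
spec:
  selector:
    app: navidrome
  ports:
    - port: 4533
      targetPort: 4533
```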
There are two basic types of “ingress” load balancing. A ClusterIP service behind the ingress controller means the cluster acts like a hostname-based router for HTTP: you point your DNS entries at that server and it routes to your pods on their internal IP addresses based on the DNS name of the request. It’s easy to use and works very well - but it only works for HTTP traffic. The other is to use a LoadBalancer service, which gives your pods an IP address on the network that you can connect to directly. The former only works for HTTP, the latter will let you use any ports (e.g. ssh for a forgejo instance).
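For the non-HTTP case, that’s roughly this (a sketch of the forgejo ssh example above; names are placeholders, and on bare metal something like MetalLB has to hand out the external IP):

```yaml
# Exposes the pod's ssh port on its own network-reachable IP.
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
spec:
  type: LoadBalancer
  selector:
    app: forgejo
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```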
I agree. k8s and helm have a steep learning curve. I have an engineering background and understand k8s in and out. Therefore, for me helm is the cleanest solution. I would recommend getting to know k8s and its resources before using (or creating) helm charts.
Yeah - I did come down a bit harder on helm charts than perhaps I intended - but starting out with them was a confusing mess for me. Especially since they all create a new ‘custom-to-this-thing’ config file for you to work with rather than ‘standard yml you can google’. The layer of indirection was very confusing when I was learning. Once I abandoned them and realized how simple a basic deployment in k8s really is then I was able to actually make progress.
I’ve deployed half a dozen or so services now and I still don’t think I’d bother with helm for any of it.
For question 1: You can have multiple resource objects in a single file; each resource object just needs to be separated by `---`. The small resource definitions help keep things organized when you’re working with dozens of precisely configured services. It’s a lot more readable than the other solutions out there.

For question 2: unfortunately, Docker Compose is much more common than Kubernetes. There are definitely some apps that provide kubernetes documentation, especially Kubernetes operators and enterprise stuff, but Docker Compose definitely has a bigger market share for self-hosted apps. You’ll have to get experienced with turning a docker-compose example into deployment + service + pvc.
Kubernetes does take a lot of the headaches out of managing self-hosted clusters though. The self-healing, smart networking, and batteries-included operators for reverse-proxy/database/ACME all save so much hassle and maintenance. Definitely install ingress-nginx, cert-manager, ArgoCD, and CNPG (in order of difficulty).
Try to write yaml resources yourself instead of fiddling with Helm values.yaml. Usually the developer experience is MUCH nicer.
Feel free to take inspiration/copy from my 500+ container cluster: codeberg.org/jlh/h5b/src/branch/main/argo
In my repo, `custom_applications` are directories with hand-written/copy-pasted yaml files auto-synced via the ArgoCD Operator, while `external_applications` are helm installations, managed via ArgoCD Operator `Applications`.

Firstly, I want to say that I started with podman (an alternative to docker) and ansible, but I quickly ran into issues. The last issue I encountered, and the last straw, was that after creating a container, Ansible would not actually change the container unless I used ansible to destroy and recreate it.
So I switched to Kubernetes.
To answer some of your questions:
So what I (and the industry) use is called “GitOps”: essentially, you have a git repo, and the software automatically pulls the git repo and applies the configs.
Here is my gitops repo: github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options like Rancher’s Fleet or the most popular ArgoCD.
As a tip, you can search github for pieces of code to reuse. I usually do `path:*.y*ml keywords keywords` to search for appropriate pieces of yaml.

So the first issue is that Kubernetes doesn’t really have “containers”. Instead, the smallest controllable unit in Kubernetes is a “pod”, which is a collection of containers that share a network device. Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.
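In yaml terms the smallest unit looks like this (a bare-pod sketch just to show the shape; in practice you’d wrap it in a Deployment, and the pihole image/tag is a placeholder):

```yaml
# A pod with a single container -- the smallest thing k8s will schedule.
# Self-hosted services almost always have exactly one container per pod.
apiVersion: v1
kind: Pod
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  containers:
    - name: pihole
      image: pihole/pihole:latest   # placeholder image/tag
      ports:
        - containerPort: 80
```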
There are ways to convert a docker-compose to a kubernetes pod.
But in general, Kubernetes doesn’t use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
So what you’re supposed to do is deploy an “ingress” (k3s comes with traefik by default), and then use cert-manager to automatically get letsencrypt certs for ingress “objects”.
Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it was also compatible with other ingress software.
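Roughly what that looks like once cert-manager and a ClusterIssuer are set up (a sketch; the hostname, issuer name and backend service are placeholders):

```yaml
# cert-manager sees the annotation, obtains a Let's Encrypt cert and stores
# it in the named secret; traefik (the default k3s ingress) then serves it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # your ClusterIssuer name
spec:
  rules:
    - host: music.example.tld
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navidrome
                port:
                  number: 4533
  tls:
    - hosts:
        - music.example.tld
      secretName: navidrome-tls
```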
Although it seems complex, I’ve come to really, really love Kubernetes because of features mentioned here. Especially the declarative part, where all my services can be code in a git repo.
Thanks for the detailed reply! You’re not the first to mention gitops for k8s, it seems interesting indeed, I’ll be sure to check it!
I’m the creator of the ansible k3s playbooks, even if I’m no longer an active maintainer. But there’s a big community: give it a try and contribute! github.com/k3s-io/k3s-ansible
I’ll check it! Thanks!
Don’t use kubernetes.
How about I’ll do anyway? <3
If you’re genuinely interested then fair enough. Just saying it’s not the only option, as a lot of people seem to think these days, and for personal projects I think it’s bonkers.
I’ve actually been personally moving away from kubernetes for this kind of deployment, and I am a big fan of using ansible to deploy containers as podman systemd units. You have a series of systemd .container files like the one below.
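A minimal sketch of such a .container file (the image, port and volume path here are placeholders; the section keys are the standard podman quadlet ones):

```ini
# /etc/containers/systemd/loki.container -- podman "quadlet" unit.
# systemd generates a loki.service from this on daemon-reload.
[Unit]
Description=Loki log aggregation

[Container]
Image=docker.io/grafana/loki:latest
PublishPort=3100:3100
Volume=/srv/loki:/loki:Z

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```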
You use ansible to write these into your /etc/containers/systemd/ folder. For example, the file above gets written as /etc/containers/systemd/loki.container.
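In ansible terms that’s basically one copy task per quadlet (a sketch using the stock ansible.builtin.copy module; file names follow the loki example above):

```yaml
# Drop the quadlet into the directory podman's systemd generator watches.
- name: Install loki quadlet
  ansible.builtin.copy:
    src: loki.container
    dest: /etc/containers/systemd/loki.container
    mode: "0644"
```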
Your ansible script will then call `systemctl daemon-reload`, and then you can `systemctl start loki` to finish the example.

Never heard about this way to use podman before! Thanks for letting me know!
Hey there,
I made a similar journey a few years ago. But I only have one home server and do not run my services in high availability (HA). As @non_burglar@lemmy.world mentioned, to run a service in HA, you need more than “just scaling up”. You need to know exactly what talks to whom, and when. For example, database entries or file writes will be difficult when scaling up a service that isn’t ready for HA.
Here are my solutions for your challenges:
- `kubectl apply -f` for each file: I would strongly recommend helm instead. Then you just have to run `helm install` per service. If you want to write each service yourself, you will end up with multiple `.yaml` files. I do it this way. Normally, you create one repository per service, which holds all YAML files. Alternatively, you could use a predefined Helm Chart and just customize the settings. This is comparable to DockerHub.
- If multiple replicas are defined in the `.yaml` configuration, k8s will automatically balance these replicas across multiple servers and split the entire load over all servers in the same cluster. If you are just looking for configuration examples, look into Helm Charts. Often services provide examples only for Docker (and Docker Compose) and not for K8s.
- To install k3s itself, the official website tells you to run its install script: `curl -sfL https://get.k3s.io | sh -`
Never, ever install anything this way. The trend of “just run this shell script off the internet” is a menace. You don’t know what that script does, what repositories it may add, what it may install, whether somebody is typo-squatting the URL and you’re running something else, etc.
It’s just a bad idea. If you disagree then I have one question - how would you uninstall k3s after you ran that blackbox?
Yes, just running a random script from the internet is a very bad idea. You should also not copy and paste the command from above, since I’m only a random lemmy user. Nevertheless, if you trust k3s, and they promote this command on the official website (make sure it’s the official one) you can use it. As you want to install k3s, I’m going to assume you trust k3s.
If you want to review the script, go for it. And you should, I agree. I reviewed it myself (or at least looked it over) when I used the script.
For the uninstall: just follow the instructions on the official website and run `/usr/local/bin/k3s-uninstall.sh` (source).

I really want to push back on the entire idea that it’s okay to distribute software via a `curl | sh` command. It’s a bad practice. I shouldn’t be reading hundreds of lines of shell script to see what sort of malarkey your installer is going to do to my system. This application creates an uninstall script. Neat. Many don’t.

Of the myriad ways to distribute Linux software (deb, rpm, snap, flatpak, AppImage), an unstructured shell script is by far the worst.
I think that distributing general software via `curl | sh` is pretty bad, for all the reasons that curl-to-sh is bad and frustrating.

But I do make an exception for “platforms” and package managers. The question I ask myself is: “Does this software enable me to install more software from a variety of programming languages?”

If the answer to that question is yes, which it is for k3s, then I think it’s an acceptable exception. `curl | sh` is okay for bootstrapping things like Nix on non-Nix systems, because then you get a package manager to install various versions of tools that would normally try to get you to install themselves with `curl | bash`, but then you can use Nix instead.

K3s is pretty similar, because Kubernetes is a whole platform, with its own package manager (helm) and applications you can install. It’s especially difficult to get the latest versions of Kubernetes on stable release distros, as they don’t package it at all, so getting it from the developers is kinda the only way to get it installed.
Relevant discussion on another thread: programming.dev/post/33626778/18025432
One of my frustrations that I express in the linked discussion is that it’s “developers” who are making bash scripts to install. But k3s is not just developers, it’s made by Suse, who has their own distro, OpenSuse, using OpenSuse tooling. It’s “packagers” making k3s and its install script, and that’s another reason why I find it more acceptable.
Microk8s manages to install with a snap. I know that snap is “of the devil” around these parts but it’s still better than a custom bash script.
Custom bash scripts will always be worse than any alternative.
I’ve tried snap, juju, and Canonical’s suite. They were uniquely frustrating and I’m not interested in interacting with them again.
The future of installing system components like k3s on generic distros is probably systemd sysexts, which are extension images that can be overlaid onto a base system. It’s designed for immutable distros, but it can be used on any standard enough distro.
There is a k3s sysext, but it’s still in the “bakery”. Plus sysext isn’t in stable release distros anyways.
Until it’s out and stable, I’ll stick to the one time bash script to install Suse k3s.
You’re welcome to make whatever bad decisions you like. I can manage snaps with standard tooling. I can install, update, remove them with simple ansible scripts in a standard way.
Bash installers are bad. End of.
Canonical’s snap uses a proprietary backend and comes with a risk of vendor lock-in to their ecosystem.
The bash installer is fully open source.
You can make the bad decision of locking yourself into a closed ecosystem, but many sensible people recognize that snap is “of the devil” for a good reason.
You’ve made this about snap. Flatpak, rpm, deb, etc. all work too.
Except k3s does not provide a deb, a flatpak, or a rpm.
Ah - you have discovered my complaint.
That is precisely why I went with microk8s instead. I don’t install software from people who can’t be bothered to package their software using standard deployment tools which has been the correct way to distribute Linux software for decades.
So instead you decided to go with Canonical’s snap and its proprietary backend, a non-standard deployment tool that was forced on the community.
Do you avoid all containers because they weren’t the standard way of deploying software for “decades” as well? (I know people that actually do do that though). And many of my issues about developers and vendoring, which I have mentioned in the other thread I linked earlier, apply to containers as well.
In fact, they also apply to snap as well, or even custom packages distributed by the developer. Arch packages are little more than shell scripts; Deb packages have pre/post hooks which run arbitrary bash or python code; rpm is similar. These “hooks” are almost always used for things like installing. It’s hypocritical to be against `curl | bash` but for solutions like any form of packages distributed by the developers themselves, because all of the issues and problems with `curl | bash` apply to any form of non-distro-distributed packages, including snaps.

You are willing to criticize bash for not immediately knowing what it does to your machine, and I recognize those problems, but guess what snap is doing under the hood to install software: a bash script. Did you read that bash script before installing the microk8s snap? Did you read the tens of others in the repos used for doing tertiary tasks that the snap installer also calls?
The bash script used for installation doesn’t seem to be sandboxed, either, and it runs as root. I struggle to see any difference between this and a generic bash script used to install software.
Although, almost all package managers have commonly used pre/during/post install hooks (except for Nix/Guix), so it’s not really a valid criticism to put, say, Deb on a pedestal while dogging on other package managers for using arbitrary bash (also python gets used) hooks.
But back on topic, in addition to this, you can’t even verify that the bash script in the repo is the one you’re getting. Because the snap backend is proprietary. Snap is literally a bash installer, but worse in every way.
Dude - you gotta get off the snap hate train for a bit.
Do you not understand the difference between “hey, run this rando shell script on the internet” and “hey, use this standardized installer which may run some shell scripts”?
I don’t give a shit about all the canonical hate. For me snap does what I want (no `flatpak run something.something.something` BS).

It’s not bash I’m criticizing. Do you understand that? Because stop reading if you don’t and go back through my list. I’ll wait.
So good - you get that bash isn’t the problem. It’s the bespoke unstructured installer/upgrader/uninstaller part that is bad. You could write your installer in C, Python, etc. and I’ll levy the same complaints. You want me to install your python app? It should be available through pypi and pip. Not some rando bespoke installer.
docs.k3s.io/installation/uninstall
There is also a k3s option for NixOS, which removes the security and side-effect risks of running a random bash script installer.
And this is why I do not like K8s at all. The only reason to use it is to have something on your CV. Besides that, Docker Swarm and Hashicorp Nomad feel a lot better and are a lot easier to manage.
I personally feel like K8s has a purpose, but not in a homelab, since our infrastructure is usually small. I don’t need clever load-balancing or autoscaling for most of my work.
Of course it is overkill for a homelab. The other features you mentioned can be achieved by Nomad or Swarm as well. And with Nomad you don’t even have to use the Docker engine.
Just ask yourself the following question: why is helm so popular? Why do I need a third party scripting language just for K8s?
You clearly will feel that K8s did many things right. 10 years ago. But we learned from that. And operations costs are exploding everywhere I see K8s in use (with or without Helm). Weird side effects, because at this layer you have an almost indefinite number of edge cases.
That’s why I moved away from K8s: to make very large and complex platforms manageable for a small operations team. The DevOps engineers obviously don’t like that, because it is a major skill on the job market. In the end, I have to prioritize, and all I can do is spread awareness that K8s was great at some point, as was Windows 98 SE.
I’ve thought about k8s, but there is so much about Docker that I still don’t fully know.
You’re right to be reluctant to apply everything by hand. K3s has a built-in feature that watches a directory and applies the manifests automatically: docs.k3s.io/installation/packaged-components
This can be used to install Helm charts in a declarative way as well: docs.k3s.io/helm
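As a sketch of how that looks in practice (the HelmChart kind comes from k3s’s bundled helm controller; the chart name, repo URL and values here are placeholders): drop a file like this into the manifests directory and k3s applies it by itself.

```yaml
# /var/lib/rancher/k3s/server/manifests/pihole.yaml
# k3s watches this directory and applies whatever lands in it; the bundled
# helm controller then installs/updates the chart from this HelmChart object.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: pihole
  namespace: kube-system
spec:
  repo: https://charts.example.com       # placeholder chart repo
  chart: pihole
  targetNamespace: pihole
  valuesContent: |-
    # chart-specific values go here
    replicaCount: 1
```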
If you want to keep your solution agnostic to the kubernetes environment, I would recommend that you try ArgoCD (or FluxCD, but I never tried it so YMMV).
I use Kube every day for work, but I would recommend you not use it. It’s complicated in order to answer problems you don’t care about. How about docker swarm, or podman services?
I disagree, it is great to use. Yes, some things are more difficult but as OP mentioned he wants to learn more, and running your own cluster for your services is an amazing way to learn k8s.
The more I think about it the more I think you are right.
If possible: do that on company time. Let the boss pay for it.
You have a lot of responses here, but I’ll tell what k8s actually is, since a lot of people seem to get this wrong.
Just like k8s, docker has many tools, although docker is packaged in a way that makes it look like just one tool: Docker Desktop. Under the hood there is the docker engine, which is really a runtime plus an image management service and API. You can look at this more if you want. There are containerd, runc, and cri-o. These were all created so that different implementations can all talk to this API in a standard way and work.
Moving on to k8s. K8s is a way to scale these containers to run in different ways and scale horizontally. There are even ways to scale nodes vertically and horizontally to allow for more or fewer resources to place these containers on. This means k8s is very event-driven and utilizes a lot of APIs to communicate and take action.
You said that you are doing kubectl apply constantly and that it feels wrong. In reality, this is correct: under the hood you are talking to the k8s control plane, which takes that manifest and stores it. Other services communicate with the control plane to understand what they have to do. In fact, you can apply a whole directory of manifests, so you don’t have to specify each file individually.
Again, there are many tools you can use to manage k8s. It is an orchestration system to manage pods and run them. You get to pick what tool you want to use. If you want something you can do from a git repo, you can use something like argocd or flux. This is considered to be gitops and more declarative. If you need a templating implementation, there are many, like helm, jsonnet, and kustomize (although kustomize is not a full templating language). These can help you define your manifests in a more repeatable and meaningful way, but you can always apply these using the same tools (kubectl, argocd, flux, etc…)
There are many services that can run in k8s that will solve one problem or another and these tools scale themselves, since they mostly all use the same designs that keep scalability in mind. I kept things very simple, but try out vanilla k8s first to understand what is going on. It’s great that you are questioning these things as it shows you understand there is probably something better that you can do. Now you just need to find the tools that are right for you. Ask what you hate or dislike about what you are doing and find a way to solve that and if there are any tools that can help. landscape.cncf.io is a good place to start to see what tools exist.
Anyway, good luck on your adventure. K8s is an enterprise tool after all and it’s not really meant for something like a home lab. It’s an orchestration system and NOT a platform that you can just start running stuff on without some effort. Getting it up and running is day 1 operations. Managing it and keeping it running is day 2 operations.
I would add that you can run kubectl apply on directories and/or have multiple yaml documents in the same yaml file (separated with `---`, which is a yaml standard).
I see, that makes sense actually! Thanks for the message!
I saw the landscape website before, that’s a LOT of projects! =O