When’s the last time you checked if your backup solution works?
JetpackJackson@feddit.org
on 13 Mar 02:39
Yesterday! Switched my media server from freebsd to alpine and got the arr stack all set up using the backup zip files
halcyoncmdr@piefed.social
on 13 Mar 03:35
Backup? Psh… That’s what the lab is for.
Ek-Hou-Van-Braai@piefed.social
on 13 Mar 04:28
But if my backups actually work then I miss out on the joy of rebuilding everything from scratch and explaining to my wife why none of the lights in the house work anymore.
What’s a backup solution…? (I’m only being half sarcastic, I really need to set one up, but it’s not as “fun” as the rest of my homelab, open to suggestions)
I at least have external backups for important family pics and docs! But yea the homelab itself is severely lacking. If it dies, I get to start from scratch. Been gambling for years that “I’ll get around to a backup solution before it dies”. I wouldn’t bet on me :|
You do, of course, have a dedicated rsyslogd server? An isolated system to which logs are sent, so that if someone compromises another one of your systems, they can’t wipe traces of that compromise from those systems?
Oh. You don’t. Well, that’s okay. Not every lab can be complete. That Raspberry Pi over there in the corner isn’t actually doing anything, but it’s probably happy where it is. You know, being off, not doing anything.
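If anyone does want the isolated log box, the client side is tiny. A sketch: “loghost” is a placeholder, and the log server would run its own rsyslog instance listening on that port:

```
# /etc/rsyslog.d/forward.conf on each client: ship everything to the log host
# ("loghost" is a placeholder; @@ means TCP, a single @ would be UDP)
*.* @@loghost:514
```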
probable_possum@leminal.space
on 13 Mar 03:17
Ah. The approach that squirrel@piefed.zip suggested. ;)
All of your systems are set up, but are they capable of being redeployed using a configuration management software package? Ansible or something like that?
Oh. They’re not. Well, that’s probably okay. I mean, you could probably go manually reproduce configurations, more or less.
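It doesn’t have to start grand, either. A minimal Ansible playbook sketch; the inventory group, packages, and template here are made up for illustration:

```yaml
# site.yml: rough sketch of a redeployable baseline
- hosts: homelab              # an assumed inventory group
  become: true
  tasks:
    - name: Ensure baseline packages are present
      ansible.builtin.package:
        name: [vim, rsync, tmux]
        state: present
    - name: Deploy sshd config from the repo
      ansible.builtin.template:
        src: sshd_config.j2   # hypothetical template
        dest: /etc/ssh/sshd_config
      notify: restart sshd
  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Run it with ansible-playbook -i inventory site.yml; rerunning is idempotent, which is the whole point.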
You have an intrusion detection system set up, right? A server watching your network’s traffic, looking for signs that systems on your network have been compromised, and to warn you? Snort or something like that?
Oh. You don’t. Well, that’s probably okay. I mean, probably nothing on your network has been compromised. And probably nothing in the future will be.
neidu3@sh.itjust.works
on 13 Mar 02:53
Barring any hardware issues or external factors, will it run for 10,000 years? Any logs not properly rotated, or other outputs accumulating and eventually filling up a filesystem?
Egonallanon@feddit.uk
on 13 Mar 03:00
Buy a UPS and set up a NUT server on the spare Raspberry Pi you have lying around.
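If you take that advice, the NUT side is mostly two config stanzas. A sketch assuming a USB-connected UPS; all the names are placeholders (older NUT versions spell “primary” as “master”):

```
# /etc/nut/ups.conf on the Pi: one section per UPS
[myups]
    driver = usbhid-ups   # covers most consumer USB UPSes
    port = auto

# /etc/nut/upsmon.conf: watch it and shut down on low battery
MONITOR myups@localhost 1 monuser secret primary
```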
All of those systems in your homelab…they aren’t all pulling down their updates multiple times over your network link, right? You’re making use of a network-wide cache? For Debian-family systems, something like Apt-Cacher NG?
Oh. You’re not. Well, that’s probably okay. I mean, not everyone can have their environment optimized to minimize network traffic.
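For the Debian-family case the client side is a one-liner; “cacher.lan” stands in for whatever box runs Apt-Cacher NG, and 3142 is its default port:

```
# /etc/apt/apt.conf.d/00aptproxy on each client
Acquire::http::Proxy "http://cacher.lan:3142";
```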
the_tab_key@lemmy.world
on 13 Mar 04:33
I set this up years ago, but then decided it was better to just install different distros on each of my computers. Problem solved?
You have squid or some other forward http proxy set up to share a cache among all the devices on your network set up to access the Web, to minimize duplicate traffic?
And you have a shared caching DNS server set up locally, something like BIND?
Oh. You don’t. Well, that’s probably okay. I mean, it probably doesn’t matter that your devices are pulling duplicate copies of data down. Not everyone can have a network that minimizes latency and avoids inefficiency across devices.
InnerScientist@lemmy.world
on 13 Mar 06:14
That won’t work in most cases: HTTPS traffic isn’t cached unless you MITM HTTPS, which is a bad idea and not worth it.
Only cache updates; those are worth it, and most distros have a caching server option.
Couple it to your smart watch, backup every 10 seconds, and make it vibrate when successful
WhyJiffie@sh.itjust.works
on 13 Mar 10:18
You are just teaching yourself to ignore your smartwatch’s vibrations. It’s a bit like breathing and blinking: you are so used to it that you can completely forget it’s happening. If your smartwatch, or phone, or whatever, vibrates all the time, you will get used to it and not notice when it stops, and it will also drown out any actually meaningful notification.
Oh, but I have them!
Every day an email is sent out with the backup status.
Every day I got my email in the morning with the backup logs.
For years.
I associated receiving the email with a successful backup, until a month or so ago when my VPN broke and the emails were just “could not connect”, but it took me a while to bother actually opening the message body, as it had always been the same for years.
So I’ll manage it differently and have the email subject be more explicit about success or failure, among other things.
Always learning :^)
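For anyone wiring up the same fix, putting the outcome in the subject is a few lines in whatever script sends the report. A rough Python sketch; the addresses and mail host are placeholders, not a real setup:

```python
import smtplib
from email.message import EmailMessage

def build_report(job_name: str, ok: bool, log: str) -> EmailMessage:
    """Build a status mail whose subject alone tells you pass/fail."""
    msg = EmailMessage()
    status = "OK" if ok else "FAILED"
    msg["Subject"] = f"[backup {status}] {job_name}"
    msg["From"] = "backup@example.lan"   # placeholder addresses
    msg["To"] = "admin@example.lan"
    msg.set_content(log)                 # full log still goes in the body
    return msg

# Sending is the usual smtplib dance (mail host is a placeholder too):
# with smtplib.SMTP("mail.example.lan") as s:
#     s.send_message(build_report("nightly-rsync", True, "..."))
```

A filter on the subject can then route FAILED mails somewhere loud.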
CameronDev@programming.dev
on 13 Mar 03:15
Have you tested your backups recently? Having them complete is one thing, having the data you need for recovery is another. Have you backed up your VM configurations and build scripts?
Go test your latest backup!
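One cheap way to actually exercise a restore: restore into a scratch directory and diff checksums against the live tree. A rough sketch, not tied to any particular backup tool:

```python
import hashlib
from pathlib import Path

def tree_digest(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    root_path = Path(root)
    return {
        str(p.relative_to(root_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root_path.rglob("*")) if p.is_file()
    }

def verify_restore(source: str, restored: str) -> list[str]:
    """Return relative paths that are missing or differ in the restored copy."""
    src, dst = tree_digest(source), tree_digest(restored)
    return sorted(p for p in src if dst.get(p) != src[p])
```

An empty list back means the restore at least contains byte-identical copies of everything currently live.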
CameronDev@programming.dev
on 13 Mar 10:04
Ah, that frisson of excitement when you come to restore! Will it work? Does it contain that very important file? Is it up to date? How much will future you hate past you if it isn’t there?
You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
lemming741@lemmy.world
on 13 Mar 03:20
If you can cycle your Home Assistant with the Shelly plug whilst your Home Assistant is down, yes. From experience, it’s really quite annoying to have a smart plug switch off HA…
lemming741@lemmy.world
on 13 Mar 05:11
HA is on the same proxmox host as the router. So yeah I can end up locked out. Hasn’t happened yet tho!
The relay is on my test machine, it’s always nvidia that crashes there.
The Shelly can be configured to automatically turn back on after a certain amount of time. It has local scripting capabilities.
If they did that… I don’t know.
tychosmoose@lemmy.world
on 13 Mar 03:30
If you do have the smart PDU and power management server, you probably also went down the rabbit hole of scripting the power cycling, right? Maybe hardened that server against disk corruption from power loss so it can run until UPS battery exhaustion.
What if there is a power outage and NUT shuts everything down? Would be nice to have everything brought back up in an orderly way when power returns. Without manual intervention. But keeping you informed via logging and push notifications.
FauxLiving@lemmy.world
on 13 Mar 05:53
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
The old lighting wasn’t that great anyway. If I were to just put lighting on a DMX512-controlled network, then all of it could be synchronized to whole-house audio…
Sure. What that guy is using is actually not the most-interesting diagram style, IMHO, for automatic layout of network maps, if you want large-scale stuff, which is where the automatic layout gets more interesting. I have some scripts floating around somewhere that will generate very large network maps — run a bunch of traceroutes, geolocate IPs, dump the results into an sqlite database, and then generate an automatically laid-out Internet network map. I don’t want to go to the trouble of anonymizing the addresses and locations right now, but if you have a graphviz graph and want to try playing with it, I used:
*goes looking*
Ugh, it’s Python 2, a decade-and-a-half old, and never got ported to Python 3. Lemme gin up an example for the non-hierarchical graphviz stuff:
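Something like this as a rough stand-in: take an edge list, emit DOT, and let one of graphviz’s non-hierarchical engines lay it out (the addresses below are made up):

```python
def edges_to_dot(edges: list[tuple[str, str]]) -> str:
    """Emit an undirected DOT graph for graphviz's non-hierarchical engines."""
    lines = ["graph netmap {", "  node [shape=point];"]
    for a, b in sorted(set(edges)):          # dedupe repeated hops
        lines.append(f'  "{a}" -- "{b}";')
    lines.append("}")
    return "\n".join(lines)

# hop pairs from imaginary traceroutes; real input would be thousands of edges
hops = [("10.0.0.1", "198.51.100.7"), ("198.51.100.7", "203.0.113.2")]
print(edges_to_dot(hops))
```

Render with, say, sfdp -Goverlap=false -Gsplines=true -Tsvg map.dot -o map.svg (neato works for smaller graphs; sfdp is the one that scales).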
That’ll take a ton of graphviz edges and nicely lay them out while trying to avoid crossing edges and stuff, in a non-hierarchical map. For more complicated maps where it can’t use direct lines, it’ll use splines to curve lines around nodes. You can create massive network maps like this. Note that I was last looking at graphviz’s automated layout stuff about 15 years ago, so it’s possible that they have better layout algorithms now, but this can deal with enormous numbers of nodes and will do reasonable things with them.
I just grabbed his example because it was the first graphviz network map example that came up on a Web search.
non_burglar@lemmy.world
on 13 Mar 06:35
This is just as true in my non-computer hobbies that involve physical systems instead of code and configs!
If I had to just barely meet the requirements using as little budget as possible while making it easy for other people to work on, that would be called “work.” My brain needs to indulge in some over-engineering and “I need to see it for myself” kind of design decisions.
You have all your devices attached to a console server, with a serial console set up on each serial port, and, if they support accessing the BIOS via serial console, that enabled so you can reach it remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB-to-multiport-serial adapter or something like that, right? So that if networking fails on one of those other devices, you can fire up minicom or similar on the console server and get into the device and fix whatever’s broken?
Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
varnia@lemmy.blahaj.zone
on 13 Mar 04:34
I had an automatic reboot of all VMs and the hypervisor because of a kernel update at night. Nextcloud decided to start in maintenance mode, and Jellyfin refused to start because the cache folder didn’t have enough space left. Authentik also complained about outdated provider configuration…
Need to investigate the Nextcloud and Authentik issues during the weekend 🤗
I haven’t messed with my raspberry pi in maybe a month… And I think one of my backups got corrupted because I receive an email saying that it failed along with tons of errors every night. Hmm, maybe I should get to that soon…
AkatsukiLevi@lemmy.world
on 13 Mar 04:49
You have a spinning fish display in front of your homelab server, right? We all know the spinning fish improves performance and security; it is an indispensable part of homelabbing.
I’ve moved my homelab twice because it became stable, I really liked the services it was running, and I didn’t want to disturb the last lab*cough*prod server.
My current homelab will be moar containers. I’m sure I’ll push it to prod instead of changing the IP address and swapping name tags this time.
greedytacothief@lemmy.dbzer0.com
on 13 Mar 05:56
Yeah, my home server was being a little too stable and I wasn’t really learning anything. So I switched from fedora to proxmox, now I’ve got a nixos vm I’m going to try to get all my services running in.
FauxLiving@lemmy.world
on 13 Mar 05:57
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
Honestly, that would be living the dream... I have too many other things I want to do!
possiblylinux127@lemmy.zip
on 13 Mar 06:04
You need monitoring
jaschen306@sh.itjust.works
on 13 Mar 06:27
Kubernetes?
wizardbeard@lemmy.dbzer0.com
on 13 Mar 06:39
I’m remembering a very not fun discussion my team had about “the monitoring system not sending any alerts doesn’t inherently mean everything is ok” after an outage that was missed by our monitoring system.
You need to make sure you’re monitoring connectivity as well as specific problem states. No data is a problem state often overlooked, and it’s not always considered for every resource type in these systems out of the box.
And you probably want a heartbeat notification. Yes, it’s noise, but if you don’t see anything from monitoring you need to question if monitoring is the thing that broke. It sending out a notification every so often going “yes I am online” is useful.
EonNShadow@pawb.social
on 13 Mar 06:44
I wish it was stable
I had a drive die yesterday
DownByLaw@sh.itjust.works
on 13 Mar 06:50
Have you already tried implementing an identity provider like Authentik, so you can add OIDC and LDAP for all your services, while you’re the only one using them? 🤔
PumpkinEscobar@lemmy.world
on 13 Mar 06:54
Behind a Traefik reverse proxy with Let’s Encrypt for SSL, even though the services aren’t exposed to the internet?
diablomnky666@lemmy.wtf
on 13 Mar 07:34
To be fair a lot of apps don’t handle custom CAs like they should. Looking at you Home Assistant! 😠
DownByLaw@sh.itjust.works
on 13 Mar 08:23
Don’t forget about Anubis and crowdsec to make it even safer inside your LAN
suicidaleggroll@lemmy.world
on 13 Mar 12:34
Who cares if it’s exposed to the internet?
Encrypting your local traffic is still valuable to protect your systems from any bad actors on your local network (neighbor kid cracks your wifi password, some device on your network decides to start snooping on your local traffic, etc)
Many services require HTTPS with a valid cert to function correctly, e.g. Bitwarden. Having a real cert for a real domain is much simpler and easier to maintain than setting up your own CA.
epicshepich@programming.dev
on 13 Mar 07:35
Probably a good idea to switch over to WPA-Enterprise using Authentik’s RADIUS server support and let all of the users of your wireless access point log in with their own network credentials, while you’re at it.
Coleslaw4145@lemmy.world
on 13 Mar 06:59
Now try migrating all your docker containers to podman.
fossilesque@mander.xyz
on 13 Mar 07:21
Don’t encourage me.
epicshepich@programming.dev
on 13 Mar 07:35
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host, and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, but I usually can find an adequate alternative. I think right now the only thing that runs as root is one of the Talk or Collabora containers in my Nextcloud stack. No SELinux issues either.
epicshepich@programming.dev
on 13 Mar 09:13
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux and it works fine as long as I use :z/:Z with bind mounts.
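For reference, the :Z/:z suffixes go on the volume entries. A minimal compose sketch; the image and paths are arbitrary examples:

```yaml
services:
  web:
    image: docker.io/library/nginx:alpine
    volumes:
      # :Z = private SELinux label for this container only; :z = shared label
      - ./site:/usr/share/nginx/html:Z
```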
A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux so I just turned it off. I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
WhyJiffie@sh.itjust.works
on 13 Mar 09:28
Are you sure it really bricked it? When you turn SELinux back on, on the next boot it needs to go over all the files and retag them, or something like that, and that can take a significant amount of time.
epicshepich@programming.dev
on 13 Mar 10:36
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to tag, since it was an AlmaLinux system that started with SELinux enabled, and all we added was a container app or two.
WhyJiffie@sh.itjust.works
on 14 Mar 02:00
that started with SELinux enabled
That does not matter; it needs to go over all of them. I don’t know how long it takes with an SSD, but with an HDD it can take half an hour or more with a mostly base system. And the kernel starts doing this very early, when not even systemd or other processes are running, so no SSH, but the web console should have been working to see what it’s doing.
epicshepich@programming.dev
on 14 Mar 02:03
Good to know! I do hope to eventually re-enable SELinux on my flagship server, so I’ll keep this in mind. As for my friend’s server, I think he migrated to Alpine a while back.
Wouldn’t an immutable OS be overall a pretty good idea for a stable server?
epicshepich@programming.dev
on 13 Mar 12:52
I honestly don’t know a ton about immutable distros other than that they let you front-load some difficulty in getting things set up in exchange for making it harder to break. I was just surprised that the distro of choice was Bazzite, since its target audience seems to be gamers.
At the start I just wanted a desktop machine that runs Steam through Sunshine/Moonlight, so hardware support and gaming stuff and such were very important.
My homelab used to run on my laptop when it could all fit within a couple hundred GB and I was the only user, but moving it was tricky. Since I’m a programmer I’m not afraid of this stuff, so I just spent the hours to figure out one problem at a time.
I ended up figuring out adding an HDD whitelist in SELinux, making it accessible in podman, manually editing fstab because the tools didn’t work, a systemd service for startup, and automatic login, by which point I’d already forgotten everything; I would not have had to do any of this on a bog-standard Ubuntu server.
epicshepich@programming.dev
on 14 Mar 01:39
Respect! I too often take it for granted that it’s a privilege for my gaming rig and my homelab server to be separate boxes.
My server is AlmaLinux, my laptop is Mint, and my gaming rig is Nobara. But if I had to consolidate everything into one machine, I’d pick Nobara.
SexualPolytope@lemmy.sdf.org
on 13 Mar 07:53
Yes of course. Had to spend a couple of hours fixing permission related issues.
poolhelmetinstrument@lemmy.world
on 13 Mar 09:03
But did you run them as rootful, or the intended rootless way?
SexualPolytope@lemmy.sdf.org
on 13 Mar 10:59
Rootless. The docker containers were rootful, hence the permission struggles.
immobile7801@piefed.social
on 13 Mar 09:58
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but the time spent learning it didn’t feel beneficial; I’ll probably revisit it while learning Kubernetes), so I went back to podman with docker compose.
SexualPolytope@lemmy.sdf.org
on 13 Mar 11:01
I think it’s kinda better using quadlets, because I wrote some custom scripts, and quadlets made the process better. But podman compose is probably fine too.
emerald@lemmy.blahaj.zone
on 13 Mar 15:23
And then migrate all your podman containers to proxmox
nucleative@lemmy.world
on 13 Mar 07:10
Never run:
docker compose pull
docker compose down
docker compose up -d
Right before the end of your day. Ask me how I know 😂
shym3q@programming.dev
on 13 Mar 08:09
compose up will automatically recreate containers with newer images if new ones were pulled, so there is no need for compose down, btw.
Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!
AnUnusualRelic@lemmy.world
on 13 Mar 07:13
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
Vile_port_aloo@lemmy.world
on 13 Mar 12:45
Do you put what you write down on the internet?
As in a blog or wiki? I do not because I am not authoritative. What I know came from reading, doing, screwing it up, ad nauseam. When something finally clicks for me, I write it down because 9 times out of 10, I will need that info later. But my writing would be so full of inaccuracies that it would be embarrassing and possibly lead someone astray.
Vile_port_aloo@lemmy.world
on 15 Mar 12:06
It’s how cults start!
I’ve started to take a lot more notes at work. I guess there will be a time when I take notes of what month it is!
You may jest, but there are times when I can’t remember what I had for breakfast. They say that you never truly forget anything, but that our recall mechanism fades over time. For a myriad of reasons, including age, my recall mechanism is shit.
Vile_port_aloo@lemmy.world
on 17 Mar 00:47
Oof, depends what you had and your version of health. I am hopeful that technology helps when I am that age (only a few years away); AI agents seem to be a start. Just need to let go of those big-data fears.
Started running unmanic on my Plex library to save hard drive space, since apparently the powers that be don’t want us to even own hard drives anymore. So far it’s going great; it’ll probably take weeks since I don’t have a GPU hooked up to it.
fleem@piefed.zeromedia.vip
on 13 Mar 10:16
Heck, I really wish we could all throw a party together. Parts swap, stories swap. Show off cool shit for everyone to copy.
Help each other fill in the missing pieces.
Y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
Ensign_Crab@lemmy.world
on 13 Mar 10:29
I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
shadowtofu@discuss.tchncs.de
on 13 Mar 11:10
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. But the pain is tolerable so far (I have maybe 30 subdomains?), so I haven’t done anything yet.
In Nginx you can do rewrites so services think they are at the root.
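A sketch of that rewrite pattern; the /app/ path and upstream port are made-up examples:

```
location /app/ {
    # strip the /app/ prefix so the backend sees requests at /
    rewrite ^/app/(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:8080;
}
```

Apps that emit absolute links may still need their base-URL setting adjusted, which is the front-end confusion mentioned below.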
CorvidCawder@sh.itjust.works
on 13 Mar 11:17
Wildcard CNAME pointing to your reverse proxy, which then figures out where to route the request? That’s what I’ve been doing; this way there’s no need to ever update DNS at all :)
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
magic_smoke@lemmy.blahaj.zone
on 13 Mar 11:57
Alternatively if you’re tired of manual DNS configuration:
FreeIPA, like AD but fer ur *Nix boxes
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got Proxmox, Jellyfin, ZoneMinder, MediaWiki, and Forgejo authing against FreeIPA, on top of my Samba shares.
Ansible works too, just because it uses SSH, but I’ve yet to figure out how to build Ansible inventories dynamically off of FreeIPA host groups. Seen a coupla old scripts but that’s about it.
Current freeipa plugin for it seems more about automagic deployment of new domains.
Having a very similar infrastructure, I would love to know if you ever find anything that works for this. I’ve been maintaining a SnipeIT instance manually, but that’s a real PITA. Tried the same with ITSM-NG, but haven’t even looked into it for months.
suicidaleggroll@lemmy.world
on 13 Mar 12:25
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
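If you host the zone yourself, that’s a single record. A BIND-style sketch, with a documentation address standing in for the proxy:

```
; in the myserver.com zone file
*.myserver.com.   IN  A   203.0.113.10   ; reverse proxy's address (placeholder)
```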
Scrath@lemmy.dbzer0.com
on 13 Mar 15:02
Not OP but a lot of people probably use pi-hole which doesn’t support wildcards for some inane reason
Croquette@sh.itjust.works
on 13 Mar 16:04
That’s my case. I send every new subdomain to my nginx IP on pi-hole and then use nginx as a reverse proxy
That was my exact setup as well until I switched to a different router which supported both custom DNS entries and blocklists, thereby making the pi-hole redundant
Croquette@sh.itjust.works
on 13 Mar 20:52
I run opnsense, so I need to dump pi-hole. But I don’t have the energy right now to do that.
Pi-Hole was pretty straightforward at the time and I did not look back since then. Annoying, but easy.
I use a MikroTik Router and while I do love the amount of power it gives me, I very quickly realized that I jumped in at the deep end. Deeper than I can deal with unfortunately.
I did get everything running after a week or so but I absolutely had to fight the router to do so.
Sometimes less is more I guess
qjkxbmwvz@startrek.website
on 13 Mar 16:43
I switched to Technitium and I’ve been pretty happy. Seems very robust, and as a bonus was easy to use it to stop DNS leaks (each upstream has a static route through a different Mullvad VPN, and since they’re queried in parallel, a VPN connection can go down without losing any DNS…maybe this is how pihole would have handled it too though).
OP, totally understand, but this is a level of success with your homelab. Nothing needs fiddling with. Now, there is a whole Awesome Self Hosted list you could deploy on a non-production server and put through its paces.
Abbysimons@lemmy.world
on 13 Mar 12:38
“Yes, while connected to my wireguard server through port 123 here from my Chinese office, I should probably try to upgrade the wireguard server. That’s a great idea!”
I used to make nginx changes while vpn’d into my network and utilizing guacamole (served via said nginx). I’m not a smart man.
damnthefilibuster@lemmy.world
on 13 Mar 18:51
Backups. You’re forgetting them.
InnerScientist@lemmy.world
on 15 Mar 00:55
Pro tip: if you’re using OpenWrt or other managed network components, don’t forget to automatically back those up too. I almost had to reset my OpenWrt router, and having to reconfigure that from scratch sucks.
MonkeMischief@lemmy.today
on 13 Mar 19:02
Don’t worry, you’re one Docker pull away from having to look up how to manually migrate Postgres databases within running containers!
(Looks at my PaperlessNGX container still down. Still irritated.)
It makes me start looking for the next thing. Got my Jellyfin, got my Pi-hole, my retro console, and just recently Home Assistant set up. (Just a few more bits to add to that.) Next I think I am going to look into self-hosting a cloud storage solution, like Google Drive/Photos etc. Would be nice to make my own backups and have them offline.
Let’s tinker around and accidentally break something.
and debug it until you have to reinstall your entire stack from scarch
GET OUT OF MY HOUSE!
Are you implying it’s possible to debug without having to reinstall from scratch? Preposterous! 😂
Guess this is a good time to test my infrastructure automation.
Scarched arth
“Damn, I’ve got this Debian server shit down. I wonder how an opensuse server would work out”
*installs tumbleweed*
True story
My man
person!
Carry around a candle in one of those old-timey holders like Scrooge McDuck.
No mercy for you, then. ;)
wiki.archlinux.org/title/Timeshift
Thanks for the tutorial though.
Hmmm. My pi{VPN,hole,dhcp,HA} has a little bit of overhead left…
You can run Forgejo with its container registry enabled; I don’t know if there’s a way to use that as a proxy for downloading containers though.
Then it turns out your monitoring system failed and FUCK IT’S BEEN A MONTH SINCE THE LAST PROPER BACKUP
Heartbeat notifications, man. A “yes I am online” email once a day or so. Yeah, it’s more emails to delete, but it can be a lifesaver.
But you probably won’t notice when some of the regular emails stop arriving.
Couple it to your smart watch, backup every 10 seconds, and make it vibrate when successful
you are just making yourself learn to ignore that your smartwatch vibrates. It’s a bit like breathing and blinking, you are so used to it you can completely forget that its happening. if your smartwatch, or phone, or whatever, starts vibrating all the time, you will get used to it and not notice when it stops happening anymore, but also it will hide any actually meaningful notification.
Oh, but I have them!
Every day an email is sent out with the backup status.
Every day I got my email in the morning with the back up logs.
For years.
I associated email received with backup successful, until a month or so ago when my VPN broke and the emails were just “could not connect”. It took me a while to bother actually opening the message body, as it had always been the same for years.
So I’ll manage it differently: have the email subject be explicit about success or failure, among other things.
Always learning :^)
Do your backups work?
Have you tested your backups recently? Having them complete is one thing, having the data you need for recovery is another. Have you backed up your vm configurations and build scripts?
Go test your latest backup!
Restore is future me’s problem. Fuck that guy :D
Ah, that frisson of excitement when you come to restore! Will it work? Does it contain that very important file? Is it up to date? How much will future you hate past you if it isn’t there?
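One way to take the white-knuckle out of it: a restore drill can be tiny. This sketch fakes a backup of one file, restores it into a scratch directory, and verifies the copy byte-for-byte. The paths and the `tar` “backup tool” are stand-ins; point the same pattern at your real backup job.

```shell
#!/bin/sh
# Restore drill sketch: prove a backup can actually be read back.
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo "important config" > "$work/data/app.conf"

# "Back up" the data (stand-in for your real backup tool).
tar -czf "$work/backup.tar.gz" -C "$work" data

# Restore into a scratch dir, never over the live copy.
mkdir "$work/restore"
tar -xzf "$work/backup.tar.gz" -C "$work/restore"

# Byte-for-byte comparison; a backup you haven't restored is a guess.
cmp "$work/data/app.conf" "$work/restore/data/app.conf" && echo "restore OK"
```

Run something like this on a schedule against your newest real backup and have it mail you only on failure.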
You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
Does a $12 Shelly plug count?
if you can cycle your home assistant with the shelly plug whilst your home assistant is down, yes. from experience it’s really quite annoying to have a smart plug switch off HA…
HA is on the same proxmox host as the router. So yeah I can end up locked out. Hasn’t happened yet tho! The relay is on my test machine, it’s always nvidia that crashes there.
An 8 switch relay, old Pi, and 8 hardware store outlets can be had for not much more. I did that and let PiKVM control my outlets directly.
This is the back of my 10" rack before it was cleaned up. Lots of custom work on this that I’ll be posting a page on my site about when complete.
<img alt="" src="https://lemmy.world/pictrs/image/7333a37a-4d43-4cec-ad47-3cbb3777c97f.jpeg">
@tal@lemmy.today in case you are interested
The Shelly can be configured to automatically turn back on after a certain amount of time. It has local scripting capabilities.
If they did that… I don’t know.
If you do have the smart PDU and power management server, you probably also went down the rabbit hole of scripting the power cycling, right? Maybe made that server hardened against power-loss disk corruption so it can run until UPS battery exhaustion.
What if there is a power outage and NUT shuts everything down? Would be nice to have everything brought back up in an orderly way when power returns. Without manual intervention. But keeping you informed via logging and push notifications.
*furiously adds a new item to the TODO list*
Tal just got the chaotic evil tag today.
I built an 8 outlet version of those with relays and wall outlets for… a lot less.
You should use Arch, then you can update every 15 minutes 🤭
I haven’t messed much with my servers in 2 years. I think that means I’ll hit my ROI in another 5 :)
Have you tried introducing unnecessary complexity?
If you know how your setup works, then that’s a great time for another project that breaks everything.
Saturday morning: “Incus and podman seem interesting. I bet I could swap everything over while the family is out this afternoon”
Sunday evening: “Dad, when will the lights work again?”
As soon as selinux decides I have permission.
The old lighting wasn’t that great anyway. If I were to just put lighting on a DMX512-controlled network, then all of it could be synchronized to whole-house audio…
Don’t forget to integrate it into Home Assistant so you can alert the ISS when the mail man is on the porch.
Infrastructure diagram? No! In this homelab we refer to the infrastructure hyperdodecahedron.
It seems like a good time to learn graphviz’s dot format for the network layout diagrams, with automated layout.
mamchenkov.net/…/graphviz-dot-erds-network-diagra…
TIL. Thank you!
Sure. What that guy is using is actually not the most-interesting diagram style, IMHO, for automatic layout of network maps, if you want large-scale stuff, which is where the automatic layout gets more interesting. I have some scripts floating around somewhere that will generate very large network maps — run a bunch of traceroutes, geolocate IPs, dump the results into an sqlite database, and then generate an automatically laid-out Internet network map. I don’t want to go to the trouble of anonymizing the addresses and locations right now, but if you have a graphviz graph and want to try playing with it, I used:
*goes looking*
Ugh, it’s Python 2, a decade-and-a-half old, and never got ported to Python 3. Lemme gin up an example for the non-hierarchical graphviz stuff:
graph.dot:

```
graph foo {
  a--b
  a--d
  b--c
  d--e
  c--e
  e--f
  b--d
}
```

Processed with:

Generates something like this:
<img alt="" src="https://lemmy.today/pictrs/image/c7fb0167-fbda-47f5-914f-a0daa3066c67.png">
That’ll take a ton of graphviz edges and nicely lay them out while trying to avoid crossing edges and stuff, in a non-hierarchical map. Get more complicated maps that it can’t use direct lines on, it’ll use splines to curve lines around nodes. You can create massive network maps like this. Note that I was last looking at graphviz’s automated layout stuff about 15 years ago, so it’s possible that they have better layout algorithms now, but this can deal with enormous numbers of nodes and will do reasonable things with them.
I just grabbed his example because it was the first graphviz network map example that came up on a Web search.
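The “Processed with” command got lost above; my guess (not a quote) is one of Graphviz’s non-hierarchical layout engines. A sketch:

```
neato -Tpng graph.dot -o graph.png   # force-directed layout
sfdp  -Tpng graph.dot -o graph.png   # variant that scales to very large graphs
```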
Haha too right mate
This is just as true in my non-computer hobbies that involve physical systems instead of code and configs!
If I had to just barely meet the requirements using as little budget as possible while making it easy for other people to work on, that would be called “work.” My brain needs to indulge in some over-engineering and “I need to see it for myself” kind of design decisions.
I can help with that. It’s a skill I have. LOL
You have all your devices attached to a console server, with a serial console set up on the serial port, and, if they support accessing the BIOS via serial console, that enabled so you can reach it remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB-to-multiport-serial adapter or something like that? So that if networking fails on one of those other devices, you can fire up `minicom` or similar on the console server and get into the device and fix whatever’s broken?

Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
I just installed Debian on a decommissioned Chromebox for exactly this purpose + 4x usb-to-serial adapters.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
14 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.
[Thread #161 for this comm, first seen 13th Mar 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]
I had an automatic reboot of all VMs and the hypervisor because of a kernel update at night. Nextcloud decided to start in maintenance mode, and Jellyfin refused to start because the cache folder didn’t have enough space left. Authentik also complained about an outdated provider configuration…
Need to investigate the Nextcloud and Authentik issues during the weekend 🤗
I haven’t messed with my raspberry pi in maybe a month… And I think one of my backups got corrupted because I receive an email saying that it failed along with tons of errors every night. Hmm, maybe I should get to that soon…
You have a spinning fish display in front of your homelab server, right? We all know the spinning fish improves performance and security; it is an indispensable part of homelabbing.
J O E L
www.youtube.com/watch?v=5Jls8KcGxTA
Going into spring/summer that’s ideal, I wanna go places do things. Mid winter, I’m feature creeping till something breaks.
Gotta be honest, my home lab chugs along quite happily.
Atomic Fedora makes it hard to break, and then all the services are containerized and managed by configuration and `just` files only.
When there’s an update to a service:
`just pull service`. Firewall needs configuring: `just firewall-reset && just firewall-enable`. The only flaky thing is a VPN that I run through gluetun, and I’m thinking of dumping that provider.
Man I always get sad when I see this meme format because the story behind it is so fucking tragic… :(
What story?
If it’s stable, it’s not a lab.
That’s infrastructure.
I’ve moved my homelab twice because it became stable, I really liked the services it was running, and I didn’t want to disturb the last ~~lab~~ *cough* prod server.
My current homelab will be moar containers. I’m sure I’ll push it to prod instead of changing the IP address and swapping name tags this time.
Yeah, my home server was being a little too stable and I wasn’t really learning anything. So I switched from fedora to proxmox, now I’ve got a nixos vm I’m going to try to get all my services running in.
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
Honestly, that would be living the dream... I have too many other things I want to do!
You need monitoring
Kubernetes?
I’m remembering a very not fun discussion my team had about “the monitoring system not sending any alerts doesn’t inherently mean everything is ok” after an outage that was missed by our monitoring system.
You need to make sure you’re monitoring connectivity as well as specific problem states. No data is a problem state often overlooked, and it’s not always considered for every resource type in these systems out of the box.
And you probably want a heartbeat notification. Yes, it’s noise, but if you don’t see anything from monitoring you need to question if monitoring is the thing that broke. It sending out a notification every so often going “yes I am online” is useful.
One alert daily reporting that there are no alerts is probably good for a home lab…
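A cheap way to get that without training yourself to ignore it is a dead-man’s-switch: the alert fires when the ping *stops* arriving, not when one arrives. A crontab sketch (the endpoint URL is a placeholder for a Healthchecks-style service):

```
# m h dom mon dow  command
0 8 * * *  curl -fsS --max-time 10 https://hc.example.net/ping/<uuid> >/dev/null
```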
If logging is down and there’s no one around to log it, is it really down?
Who will log the loggers?
Me to my lab.
<img alt="" src="https://lemmy.zip/pictrs/image/adf13581-185b-4900-8ac9-04333a1f8e32.avif">
I wish it was stable
I had a drive die yesterday
Have you already tried implementing an identity provider like Authentik, so you can add OIDC and ldap for all your services, while you are the only one that’s using them? 🤔
Behind a traefik reverse proxy with lets encrypt for ssl even though the services aren’t exposed to the internet?
To be fair a lot of apps don’t handle custom CAs like they should. Looking at you Home Assistant! 😠
Don’t forget about Anubis and crowdsec to make it even safer inside your LAN
Who cares if it’s exposed to the internet?
Encrypting your local traffic is still valuable to protect your systems from any bad actors on your local network (neighbor kid cracks your wifi password, some device on your network decides to start snooping on your local traffic, etc)
Many services require HTTPS with a valid cert to function correctly, eg: Bitwarden. Having a real cert for a real domain is much simpler and easier to maintain than setting up your own CA
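Getting that real wildcard cert is a single command once a DNS plugin is configured. A sketch using certbot’s Cloudflare plugin (the domain and credentials path are assumptions):

```
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.home.example.com'
```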
Hey my wife uses some of them too!
Probably a good idea to switch over to WPA-Enterprise using Authentik’s RADIUS server support and let all of the users of your wireless access point log in with their own network credentials, while you’re at it.
Now try migrating all your docker containers to podman.
Don’t encourage me.
And then try turning on SELinux!
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host, and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, and I can usually find an adequate alternative. I think right now the only thing that runs as root is one of the Talk or Collabora containers in my Nextcloud stack. No SELinux issues either.
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get Bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux, and it works fine as long as I use `:z`/`:Z` with bind mounts.

A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux so I just turned it off. I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
Are you sure it really bricked it? When turning it on, on the next boot it needs to go over all the files and retag them (or something like that), and it can take a significant amount of time.
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to tag since it was an Almalinux system that started with SELinux enabled, and all we added was a container app or two.
that doesn’t matter, it needs to go over all of them. I don’t know how long it takes with an SSD, but with an HDD it can take half an hour or more, with a mostly base system. And the kernel starts doing this very early, when not even systemd or other processes are running, so no SSH, but the web console should have been working to see what it’s doing.
Good to know! I do hope to eventually re-enable SELinux on my flagship server, so I’ll keep this in mind. As for my friend’s server, I think he migrated to Alpine a while back.
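For anyone else nervous about flipping it back on, the less scary path is permissive mode first plus a forced relabel, so you can watch denials before anything actually gets blocked. A sketch (run as root; the relabel reboot is the slow part described above):

```
# Log would-be denials without enforcing anything yet:
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
touch /.autorelabel     # full filesystem relabel on next boot; can take a while
reboot
# After a clean boot, review denials in the audit log, then:
#   setenforce 1        # try enforcing live
# and set SELINUX=enforcing in /etc/selinux/config once you're happy.
```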
I set my homelab up on Bazzite immutable with podman and SELinux. It took a while to work everything out and have it boot up into a valid state hahaha
Any reason you chose Bazzite for your homelab distro? First I’ve heard of someone doing that!
Wouldn’t an immutable OS be overall a pretty good idea for a stable server?
I honestly don’t know a ton about immutable distros other than that they let you front-load some difficulty in getting things set up in exchange for making it harder to break. I was just surprised that the distro of choice was Bazzite, since its target audience seems to be gamers.
Good for stability, bad for flexibility for when the homelab grows more complex.
At the start I just wanted a desktop machine that runs Steam through sunshine/moonlight so hardware support and gaming stuff such was very important.
My homelab used to run on my laptop when it could all fit within a couple 100s of GB and I was the only user but moving it was tricky. Since I’m a programmer I’m not afraid of this stuff so I just spent the hours to figure out one problem at a time.
I ended up figuring out how to whitelist the HDD in SELinux, make it accessible in podman, manually edit fstab because the tools didn’t work, write a systemd service for startup, and set up automatic login. I’ve already forgotten all of it, and I would not have had to do any of it on a bog-standard Ubuntu server.
Respect! I too often take it for granted that it’s a privilege for my gaming rig and my homelab server to be separate boxes.
My server is Almalinux, my laptop is Mint, and my gaming rig is Nobara. But if I had to consolidate everything in to one machine, I’d pick Nobara.
I came to the same conclusion, Nobara for would have been best.
Just did that last weekend. Nothing to do anymore. 😢
Did you do Quadlets?
Yes of course. Had to spend a couple of hours fixing permission related issues.
But did you run them as rootful or the intended rootless way.
Rootless. The docker containers were rootful, hence the permission struggles.
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but I didn’t feel the time spent learning it was beneficial; I’ll probably revisit it while learning Kubernetes), so I went back to podman with docker compose.
I think it’s kinda better using quadlets, because I wrote some custom scripts and quadlets made the process better. But podman compose is probably fine too.
And then migrate all your podman containers to proxmox
Never run:

Right before the end of your day. Ask me how I know 😂

`compose up` will automatically recreate with newer images if new ones were pulled, so there is no need for `compose down`, btw.

You’re right. I got in the habit of doing that because I’m endlessly tweaking my .env files and I don’t think those reload unless you shut down first.
Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!
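The `compose up` behavior mentioned a few comments up, as a sketch (assumes the Docker Compose v2 CLI):

```
docker compose pull        # fetch newer images, if any
docker compose up -d       # recreates only containers whose image or config changed
```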
That’s not a homelab, that’s a home server.
I test in my ~~home~~ production
Time to distro-hop!
You can always configure your vim further
or learn emacs
Then configure vim using emacs
<img alt="" src="https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExZjRrbWhyMm5heXQ1dDY2eDF2a2ZqcXN1d2NtbmVxOG5pb2FqNm5nbyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Xn7mOX7VQDDOw/giphy.gif">
Time to start documenting it!
NEVER1!!!11!!
Don’t look too closely you can jinx it.
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
Do you write down what you write down on the internet?
As in a blog or wiki? I do not because I am not authoritative. What I know came from reading, doing, screwing it up, ad nauseam. When something finally clicks for me, I write it down because 9 times out of 10, I will need that info later. But my writing would be so full of inaccuracies that it would be embarrassing and possibly lead someone astray.
It’s how cults start!
I’ve started to take a lot more notes at work. I guess there will be a time when I take notes of what month it is!
You may jest, but there are times when I can’t remember what I had for breakfast. They say that you never truly forget anything, but that our recall mechanism fades over time. For a myriad of reasons, including age, my recall mechanism is shit.
Oof, depends what you had and your version of health. I am hopeful that technology helps when I am that age, only a few years off, but AI agents seem to be a start. Just need to let go of those big-data fears.
Nothing to install? Not with that attitude!
Start a 10" rack.
Can’t believe nobody here has mentioned NixOS so far? How about moving all of your configs into a flake and managing all of your systems with it?
I made a git repo and started putting all of my dot files in a Stow and then I forgot why I was doing it in the first place.
So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.
I already have Ansible to manage my systems, and I like to have the same base between my PC and my server to build muscle memory.
If I was managing a pc fleet I would consider NixOS, but I don’t see the appeal right now.
Okay, but why not create more work for yourself by rebuilding everything from scratch?
Started running unmanic on my plex library to save hard drive space since apparently the powers that be don’t want us to even own hard drives anymore. So far it’s going great, it’ll probably take weeks since I don’t have a gpu hooked up to it
heck i really wish we could all throw a party together. part swap, stories swap. show off cool shit for everyone to copy.
help each other fill in the missing pieces
y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
You just described a convention.
Always a white knuckle event for me
infosecmap.com
wiki.hackerspaces.org/List_of_Hacker_Spaces
Also check out meetup.com for linux user groups and other events.
Time to expand.
Actually, one thing I want to do is switch from services being on a subdomain to services being on a path.
I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. But the pain is tolerable so far (I have maybe 30 subdomains?), I haven’t done anything yet
In Nginx you can do rewrites so services think they are at the root.
Wildcard CNAME pointing to your reverse proxy who then figures out where to route the request to? That’s what I’ve been doing - this way there’s no need to ever update DNS at all :)
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
Alternatively if you’re tired of manual DNS configuration:
FreeIPA, like AD but fer ur *Nix boxes
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got Proxmox, Jellyfin, ZoneMinder, MediaWiki, and Forgejo authing against FreeIPA on top of my samba shares.
Ansible works too, just because it uses SSH, but I’ve yet to figure out how to build Ansible inventories dynamically off of FreeIPA host groups. Seen a coupla old scripts, but that’s about it.
Current freeipa plugin for it seems more about automagic deployment of new domains.
Having a very similar infrastructure, I would love to know if you ever find anything that works for this. I’ve been maintaining a SnipeIT instance manually, but that’s a real PITA. Tried the same with ITSM-NG, but haven’t even looked at it for months.
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
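With the wildcard record in place, each new service is then just another server block on the reverse proxy. An nginx sketch (names, ports, and cert paths are assumptions):

```
server {
    listen 443 ssl;
    server_name jellyfin.home.example.com;

    ssl_certificate     /etc/ssl/wildcard.home.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/wildcard.home.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;   # Jellyfin's default port
        proxy_set_header Host $host;
    }
}
```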
Not OP but a lot of people probably use pi-hole which doesn’t support wildcards for some inane reason
That’s my case. I send every new subdomain to my nginx IP on pi-hole and then use nginx as a reverse proxy
That was my exact setup as well until I switched to a different router which supported both custom DNS entries and blocklists, thereby making the pi-hole redundant
I run opnsense, so I need to dump pi-hole. But I don’t have the energy right now to do that.
Pi-Hole was pretty straightforward at the time and I did not look back since then. Annoying, but easy.
I use a MikroTik Router and while I do love the amount of power it gives me, I very quickly realized that I jumped in at the deep end. Deeper than I can deal with unfortunately.
I did get everything running after a week or so but I absolutely had to fight the router to do so.
Sometimes less is more I guess
I switched to Technitium and I’ve been pretty happy. Seems very robust, and as a bonus was easy to use it to stop DNS leaks (each upstream has a static route through a different Mullvad VPN, and since they’re queried in parallel, a VPN connection can go down without losing any DNS…maybe this is how pihole would have handled it too though).
And of course, wildcards supported no problem.
It does support it, you just have to add it to dnsmasq. I have it set up under `misc.dnsmasq_lines` like so:

Then I have my proxied service reachable under `service.proxy.example.com`.

Because I’m an idiot. 🤦 Thanks!
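For reference, the dnsmasq line that got swallowed above typically looks like this (values are hypothetical); `address=/domain/ip` routes the name and every subdomain under it to one IP:

```
# Resolve *.proxy.example.com (and the bare name) to the reverse proxy:
address=/proxy.example.com/192.168.1.10
```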
OP, totally understand, but this is a level of success with your homelab. Nothing needs fiddling with. Now, there is a whole Awesome Self Hosted list you could deploy on a non-production server and run that through the paces.
The rare moment when everything actually works. 😄
Quick! Break something!
Maybe try this…
Wreck it Ralph!!
Living the good life
How is the kubernetes (k3s/rke2) migration coming along?
One word: chaos engineering!
I should do some breaking network changes… While tunneled in.
“Yes, while connected to my wireguard server through port 123 here from my Chinese office, I should probably try to upgrade the wireguard server. That’s a great idea!”
Ask me how I know.
I stopped the tailscale service…
… while ssh’d through the tailscale interface.
Luckily, it was my home server and I had to drive there anyway.
I used to make nginx changes while vpn’d into my network and utilizing guacamole (served via said nginx). I’m not a smart man.
Backups. You’re forgetting them.
Pro tip: if you’re using OpenWrt or other managed network components, don’t forget to automatically back those up too. I almost had to reset my OpenWrt router, and having to reconfigure that from scratch sucks.
Don’t worry, you’re one Docker pull away from having to look up how to manually migrate Postgres databases within running containers!
(Looks at my PaperlessNGX container still down. Still irritated.)
I feel your pain. Had to fix my Immich, NC and Joplin Postgres DBs. Turned out, DB over NFS is a risky life. ;D
github.com/pgautoupgrade/docker-pgautoupgrade
Or if you are on k8s, you can use cloudnativepg.
I’m just using Docker on Proxmox, buuuut… I’m gonna look into this project. It looks like a LIFESAVER. Thank you for sharing this. You’re awesome! :D
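When the auto-upgrade images don’t fit, the manual migration mentioned above usually boils down to dumping from the old-version container and loading into a fresh one. A sketch with hypothetical container names, password, and versions:

```
# Dump everything from the old server...
docker exec old_pg pg_dumpall -U postgres > dump.sql
# ...start a container on the target major version with a fresh data volume...
docker run -d --name new_pg -e POSTGRES_PASSWORD=secret \
  -v new_pgdata:/var/lib/postgresql/data postgres:17
# ...and load the dump into it.
docker exec -i new_pg psql -U postgres < dump.sql
```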
No upstream bugs to fix?
Off topic, warning: this comment section is making me want to learn things
It’s been 2 days off reddit and my brain has opinions other than “aaaargh” or “meh”.
Proceed with caution
Yes that does seem to describe modern computing, indeed, consumer electronics in general.
It’s no longer about solving actual problems, it IS the problem.
It makes me start looking for the next thing. Got my Jellyfin, got my Pi-hole, my retro console, and just recently Home Assistant set up. (Just a few more bits to add to that.) Next I think I am going to look into self-hosting a cloud storage solution, like Google Drive/Photos etc. Would be nice to make my own backups and have them offline.