Need some pointers for hardware
from graynk@discuss.tchncs.de to selfhosted@lemmy.world on 02 May 02:48
https://discuss.tchncs.de/post/59509309

Hey!

I’ve decided that it’s time to finally get something resembling an actual server for my home setup, and I was hoping you folks could give me some pointers (given the current prices).

My current setup is just my old laptop with 2 external hard drives plugged in - one is a regular portable USB HDD, the other is a 3.5″ HDD connected via a powered enclosure (ZFS and LUKS on both). I want to switch that for something relatively small, but extendable, as I want to add more disk space in the future. I’m selfhosting Plex, Immich and Navidrome, and occasionally some multiplayer games like Valheim. I’m not planning to use Proxmox or TrueNAS/whatever, I mostly just plan to throw Debian on it and spin everything up in Docker.

I looked through some guides on selfhosting.sh and on Reddit, but that just got me more confused, as everyone keeps suggesting Optiplexes and NUCs, and I don’t get how to combine those with 20TB+ of disk space while keeping the disks secure and well powered. Plus, my understanding is that most of those mini-PCs/refurbished workstations use regular DDR3/4, whereas I was hoping to get ECC.

Should I go DIY route, or is there something I could get as a solid enough base to expand in the future? If DIY is the answer - what mobo/cpu/case should I get? My ideal budget (for everything excluding hard drives and maybe PSU since I have one lying around) is ~500 euros, but if paying a bit more would mean a substantially better deal - then I’d be OK with that. I’m in Berlin, so if you know any good local markets - that’d be great too.

Thanks!

#selfhosted


myrmidex@belgae.social on 02 May 03:21 next collapse

I can see you getting confused, seems to me you want 2 separate servers: a storage box and a services box. The services box would be doable for 500 euros - although ECC might throw a wrench in the works there. For a storage box, 500 euros won’t even buy the HDDs needed.

graynk@discuss.tchncs.de on 02 May 04:16 collapse

storage box and a services box

That’s exactly what I want to avoid though. I see no reason to power and network 2 different small boxes when just one slightly bigger one will do. And as mentioned - 500 is without HDDs, I plan to use the ones I have for now and extend it later.

myrmidex@belgae.social on 02 May 05:02 collapse

Oh, if you’ll keep using the external drives, then you have options. ECC will cost you though - some 300 euros extra, as far as I can tell after a quick search. And then all the other internals would land you well above the 500 mark.

If using the external USBs, I’d just drop the ECC requirement and get a NUC.

poVoq@slrpnk.net on 02 May 08:59 collapse

Old DDR3 ECC is actually cheaper than regular DDR3 RAM, and it generally works with AMD CPUs (which, unlike Intel, don’t artificially restrict ECC support to their enterprise offerings).

But tbh, ECC is generally not needed and I wouldn’t bother designing a system around it. Use a file system with checksums and scrub the drives regularly, and you shouldn’t have any major issues with the random bit flips that ECC protects against.
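For ZFS, "scrub regularly" boils down to one scheduled command. A minimal sketch as a cron entry (the pool name `tank` is a placeholder - substitute your own):

```shell
# /etc/cron.d/zfs-scrub - scrub the pool on the 1st of every month at 03:00
# "tank" is a placeholder pool name; check progress afterwards with `zpool status`
0 3 1 * * root /usr/sbin/zpool scrub tank
```

Note that on Debian the zfsutils-linux package already ships a monthly scrub job in /etc/cron.d/, so check there before adding your own.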

Onomatopoeia@lemmy.cafe on 02 May 04:34 next collapse

Why do you want ECC? (Hint: unless you’re running a business database dealing with financials, you don’t need it.) I’ve run Windows Server on desktop hardware since the ’90s with no issues, and today’s hardware is far better than what we had then.

The reason people settle on NUCs and SFF desktops is power. They virtually sip watts.

I don’t usually recommend specifics for someone but rather ideas and ways to look at your requirements, but given your requirements (20 TB), it would be worth considering a commercial NAS, or at least a NAS enclosure running a NAS OS like UnRAID or TrueNAS.

Expansion is generally not something I’d think about for a NAS (though it can be done today). I expand my NAS once a year (swap out one drive) but I keep 3 local copies - so if it failed I can restore locally rather than from a cloud backup.

So your data lives on a NAS, and you can either run your services there (most of them support containers etc. these days) or - what I’d do - get a NUC or SFF to host that stuff. It makes for nice separation and gives you some flexibility.

Back to SFF and NUC - my last desktop hardware idled at 100 watts. It was visible on my power bill and used more power than my lights or just about any other single device apart from heating or the stove.

My SFF server idles at just under 20 watts and peaks at 80 when I’m converting videos. It currently has 8 TB of storage, but I could easily get 20 TB in there - it would just be expensive.

Oh, and a good NAS can spin down drives to save power when idle, which for most of us is like 90% of the time (I have an ancient NAS as redundancy that does this - it idles at around 5 W).

kata1yst@sh.itjust.works on 02 May 05:01 next collapse

I’ll strongly disagree. Anyone who cares about the data they store in their server should care about ECC. There’s a specific reason it’s used so widely by servers, not just financial databases or whatever.

There’s also a ton of misinformation on the Internet about it, so don’t buy into the “ZFS write hole” stuff or whatever. But ECC is very important in my experience. It’s saved my bacon in a material way twice now, and in ways where normal RAM would have just silently kept breaking things. And it’s really not much of a price premium if you’re willing to buy used, so it’s more a question of why not.

There are many computers on eBay or wherever (especially business-line machines, including low-power SFF) that will take ECC or even ship with it. Or you can build a rig with ECC - I’ve gone that route twice and had good results.

Onomatopoeia@lemmy.cafe on 02 May 05:08 collapse

Disagree all you want - ECC has no bearing outside of high-resiliency databases.

I say this having nearly 4 decades in enterprise - ECC only matters there.

OP is definitely not doing anything requiring ECC, recommending it is just wasting money.

kata1yst@sh.itjust.works on 02 May 05:27 next collapse

I respect your perspective and personal experience, and I’m not trying to convince you (I’m not even downvoting you, as it’s not a disagreement button). I’m trying to convince whoever might come along and read this that the small extra price is worth it if their computer is going to hold data dear to them and be running 24x7.

ECC is extremely good at catching cosmic-ray bitflips, which happen with extreme regularity in software that runs and modifies data on the fly - server software. Yes, even home-run stuff. Going without it is just playing Russian roulette: it probably won’t break anything, but why take the risk at least 10 times every day?

It’s also great at catching failing RAM sticks and preventing them from doing horrible things to every bit of data running through them. This is the failure ECC caught for me at home twice.

I have only 2.5 decades in the enterprise server and software space, I won’t claim to your 4. But I know I wouldn’t take that risk at work, and I value my home data more rather than less.

I’m not a researcher or even a particularly well practiced rhetorician, so here’s probably a much more convincing argument.

Or this perhaps.

amorpheus@lemmy.world on 02 May 10:37 collapse

Another link: www.phoronix.com/news/Linus-Torvalds-ECC

He has very strong opinions on certain topics, ECC being important is one of them.

amorpheus@lemmy.world on 02 May 10:36 collapse

Adding another data point pro-ECC: www.phoronix.com/news/Linus-Torvalds-ECC

Torvalds went on in his lengthy post to say: “The ‘modern DRAM is so reliable that it doesn’t need ECC’ was always a bedtime story for children that had been dropped on their heads a bit too many times. Yes, I’m pissed off about it. You can find me complaining about this literally for decades now.”

graynk@discuss.tchncs.de on 02 May 09:03 next collapse

ECC is not a hard requirement for me, but if I can get it - I’ll try to, as to me it makes sense for something that runs 24/7 and handles my personal data.

I have a very strong aversion to separating storage from the server. I just don’t see why I should route power and network to 2 small boxes (neither of which would do what I need on its own - plus consider the very crappy room layouts in rented apartments) and then fiddle with network access, when 1 slightly bigger box would do what I need. Some 7-8 years ago I bought a dirt-cheap second-hand Huananzhi X79 with a Xeon E5 and DDR3 ECC, plus some low-profile NVIDIA GPU, and it all still works now - something like that would mostly be OK for me even today (except I left it in another country).

That said, it’s possible a reasonably powerful NAS will be enough for me on its own?

Rossphorus@lemmy.world on 02 May 09:44 collapse

Strong disagree. I ran non-ECC memory on my server and services would unexpectedly crash maybe once per week. Over the span of a year I had two databases get corrupted that cost me a lot of time to fix. I tried swapping sticks but it happened with all of them. I switched to ECC memory and the problems disappeared. I needed more memory anyway and the price delta for ECC was about $100. I didn’t have to swap CPUs or anything, AMD desktop CPUs and chipsets support it out of the box. ECC memory is absolutely worth it.

Decronym@lemmy.decronym.xyz on 02 May 05:10 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
NAS Network-Attached Storage
NUC Next Unit of Computing brand of Intel small computers
Plex Brand of media server package
SATA Serial AT Attachment interface for mass storage
SBC Single-Board Computer
ZFS Solaris/Linux filesystem focusing on data integrity

6 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

[Thread #271 for this comm, first seen 2nd May 2026, 12:10]

owenfromcanada@lemmy.ca on 02 May 07:22 next collapse

I don’t know if it would give you as much hard drive space as you want, but I got an Odroid HC4 a while ago and it’s been great. One of the very few SBCs with two SATA slots ready to go.

poVoq@slrpnk.net on 02 May 09:12 next collapse

Previously had some good experience with this store selling refurbished hardware: www.computerstoreberlin.de

hexagonwin@lemmy.today on 02 May 09:22 next collapse

if power bills are not an issue, maybe an older dell/hp workstation? noise and heat could be problematic though.

[deleted] on 02 May 12:21 next collapse

.

Danitos@reddthat.com on 02 May 12:51 collapse

I can’t say this is good advice by itself; this is simply my setup, as I was in your position just 1 month ago - maybe it gives you some more ideas.

I recently bought a 16 GB non-ECC DDR5 (which is unnecessary, DDR4 works just fine), open-box, 4-bay TerraMaster as my main server for the equivalent of around €400, thinking that in the distant future I can buy a separate enclosure just for disks. This model has an Intel N150, which has hardware H.265 support, so it should handle the transcoding for your Plex instance (which, IMO, you should consider replacing with Jellyfin on whatever new machine you end up getting). The 4-disk bay (there are models with more bays) lets you get some disks now and fill up the remaining bays later if you want to. Note: I just realized this almost reads like a TerraMaster ad - that’s not my goal; you can search for similar options from other manufacturers.

As for the OS, I insist on recommending TrueNAS, since it’s Debian-based, and it’s not like Proxmox where everything has to be a VM - it’s simply Debian with a nice UI for spinning up Docker instances plus disk/snapshot/backup management, all of which is optional. You can easily mount your disks as a pool and set them up as RAIDZ (data redundancy in case of disk failure), stripe (no redundancy), etc., organize everything in easy-to-use folders, and schedule different snapshots for different folders. You can also easily run Docker containers, either through the “app store” (a selection of Docker containers wrapped in a nice configuration UI) or manually with docker-compose.yml files. IMO you lose little but gain a lot with an OS that comes pre-configured for most of the stuff you want to do, plus an easy-to-use web UI.
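To illustrate the docker-compose route (it works the same on plain Debian or via TrueNAS), here’s a minimal sketch for one of OP’s services, Navidrome. `deluan/navidrome` is the official image; the port and the paths are assumptions to adapt:

```shell
# write a minimal compose file for Navidrome (paths below are placeholders)
cat > docker-compose.yml <<'EOF'
services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"               # web UI / Subsonic API
    volumes:
      - ./navidrome-data:/data    # app database and cache
      - /mnt/music:/music:ro      # your music library, mounted read-only
    restart: unless-stopped
EOF
```

Then `docker compose up -d` brings it up, and the same pattern extends to Immich or a Valheim server as additional service entries in the same file.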