Hello everyone,
I am about to renovate my selfhosting setup (software-wise), and I thought about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far from being a sysadmin, I don’t (just) want to tell you my point of view, but instead thought about a series of posts:
Your favourite piece of selfhosting
I thought about asking everyone of you for your favourite piece of software for a specific use case. But we have to start at the bottom:
Operating systems and/or type 1 hypervisors
You don’t have to be an expert or a professional. You don’t even have to be using it. Tell us about your thoughts about one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn’t? Where are you stuck right now? What are your next steps? Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?
I am eager to hear about your thoughts and stories in the comments!
And please also give me feedback to this idea in general.
I’ve been using Ubuntu server on my server for close to a decade now and it has been just rock solid.
I know Ubuntu gets (deserved) hate for things like snap shenanigans, but the LTS releases are pretty great. Not having to worry about a full OS upgrade for up to 10 years (5 years standard, 10 years if you go Ubuntu Pro, which is free for personal use) is great.
A couple times I’ve considered switching my server to another distro, but honestly, I love how little I worry about the state of my server os.
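For what it’s worth, getting that 10-year coverage is just a matter of attaching the machine to Ubuntu Pro; a rough sketch (the token comes from your Ubuntu account, and `<your-token>` is a placeholder):

```
# attach this machine to an Ubuntu Pro subscription (token is a placeholder):
sudo pro attach <your-token>
# check which services are enabled, e.g. esm-infra for the extended updates:
pro status
```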
Kinda dumb but I run DietPi on a mini PC. Just nice and simple
+1. Very easy, very stable.
I also started with DietPi on every device; it works like a charm. But I personally want to try something else to learn a bit more.
Edit:
I’m thinking about trying NixOS in the near future.
Proxmox Virtual Environment (PVE, Hypervisor), my beloved. Especially in combination with Proxmox Backup Server (PBS).
My homelab would not exist without Proxmox VE, as I’m definitely not going to use Nutanix or VMware. I love working with Linux, and Proxmox VE is essentially Debian with a modified kernel and a management web interface on top.
I first learned about Proxmox VE at my company, back when we still used VMware for ourselves and all of our customers. We gradually switched everyone over to Proxmox VE, and now I’m using it at home too. Proxmox is an Austrian company (my country), so I was doubly hyped about this software.
A few things I like most about Proxmox VE
- Ease of access to the correct part of the documentation you currently need (*)
- Open Source
- Company resides in my country (no US big tech walled garden)
- Linux/Debian based, so no learning new OSes and toolchains
- Free version available
- Forum available and actually used
(*) What I mean by ease of access to the correct part of the documentation is: whenever you’re in the WebUI and need to decide on some settings, there’s a button somewhere on the same page that leads you directly to the portion of the documentation you need right now. I don’t know why this feels like such a luxury; every piece of software should have something like this.
Next steps
My “server” (some mini PC with spare parts I already had) is getting too weak for the workload I put it through, so I’m going to migrate to a better “server”. I already have a PC and most of the necessary parts, I just need some SSDs and an AMD CPU.
Even migrating from PVE (old) -> PVE (new) couldn’t be easier:
- PVE (old): create last backup to PBS, shut down PVE (old)
- PVE (new): add PBS, restore Backups
- ???
- profit
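Joking aside, here’s roughly what that looks like from the CLI (storage name, VM ID, hostname, and fingerprint below are placeholders; the web UI can do all of it too):

```
# on PVE (old): push one last backup of VM 100 to the PBS storage:
vzdump 100 --storage pbs-datastore --mode stop

# on PVE (new): register the same PBS instance as a storage backend:
pvesm add pbs pbs-datastore --server pbs.example.lan --datastore backups \
    --username root@pam --fingerprint <pbs-fingerprint>

# list the available backups, then restore one as VM 100 on the new host:
pvesm list pbs-datastore
qmrestore pbs-datastore:backup/vm/100/<timestamp> 100
```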
I think it’s great to have a series of posts about personal achievements and troubles with selfhosting. There’s so much software out there; you always get to see someone doing something you didn’t even know could be done, or using software you didn’t realize existed. Sharing is caring.
I’m pretty happy with Debian as my server’s OS. I recently gave in to temptation and switched from stable to testing. On my home systems I run Arch because I like to have the most up-to-date stuff, but on my servers that’s a bit less important. Even so, Debian testing is usually pretty stable itself, so I’m not worried much about things breaking because of it.
No love for OpenMediaVault? I run it virtualized under Proxmox and I’m quite happy with it; not very fancy, but super stable.
I run about twenty containers on OMV, with four 8 TB drives in a ZFS RAIDZ1 setup (the ZFS equivalent of RAID 5). I love how users can be shared across services; for example, the same user may access SMB shares or connect via OpenVPN.
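For anyone curious, a pool like that is quick to create; a minimal sketch, with pool name and device paths assumed (in practice you’d use stable /dev/disk/by-id/ paths):

```
# create a RAIDZ1 pool named "tank" across four disks (device names assumed):
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# add a dataset for container data and check the layout:
zfs create tank/containers
zpool status tank
```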
+1 for OMV. I use it at work all the time to serve Clonezilla images through an SMB share. It’s extremely reliable. The Clonezilla PXE server is a separate VM, but the toolkit is available in the `clonezilla` package, and I could even integrate the two services if I felt particularly masochistic one day. My first choice for that role was TrueNAS, but at the time I had to use an old-ass Dell server that only had hardware RAID, and TrueNAS couldn’t use ZFS with it.
I’ve been using NixOS on my server. Having all the server’s config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.
I don’t use it on my personal machine because the lack of an FHS feels like it’d be a problem, but when selfhosting, most things are popular enough to have a module already.
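To give a flavour of the “all config in one place” idea, a minimal sketch; the Jellyfin module exists in nixpkgs, but treat the exact option names as assumptions for your NixOS version:

```
# hypothetical excerpt of /etc/nixos/configuration.nix:
#   services.jellyfin.enable = true;
#   networking.firewall.allowedTCPPorts = [ 8096 ];
# rebuild the whole system from that one declarative file:
sudo nixos-rebuild switch
```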
I used to really like ESXi, but Broadcom screwed us on that.
Hyper-V sucks to run and manage. It’s also pretty bloated.
Proxmox is pretty awesome if you want full VMs. I’m gonna move everything I have onto it eventually.
For ease of use, if you have Synology that can run containers, it’s okay.
I also like and tend to use Unraid at my house, but that’s more because of my insane storage requirements and how frequently I upgrade with dissimilar disks. (I’m just shy of 500 TB and my server holds 38 disks.)
Damn, 38 disks! How do you connect them all? Some kind of server hardware?
Curious because I’m currently using all 6 SATA ports on an old consumer motherboard and not sure how I’ll be able to expand my storage capacity. The best option I’ve seen so far would probably be adding PCIe SATA controller(s), but I can’t imagine having enough PCIe slots to reach 38 disks that way! Wondering if there’s another option I haven’t seen yet.
Yep. It’s a 4U Supermicro chassis with the associated backplanes.
I had some servers left over from work. It’s set up to also take JBOD cards with mini-SAS, to expand into additional shelves if I need that.
My setup really isn’t much of an entry setup. It’s similar to this: https://store.supermicro.com/us_en/4u-superstorage-ssg-641e-e1cr36h.html
(I’m just shy of 500tb and my server holds 38 disks.)
That means your disks average over 13 TB each? That’s expensive!
It’s been a long-term build. With Unraid it’s been pretty easy to slowly add disks one at a time.
I’m moving everything towards 22 TB disks right now. It’s still got a handful of 4 and 5 TB disks in it. I’ve ended up with a pile of smaller disks that I’ve pulled and that just… sit around.
I also picked up a Synology recently that houses 12 x 12 TB disks, which goes into that total count. I’ve got another couple of Synologys just lying around unused.
I’ve got 30x4TB disks, just because second hand enterprise gear is so cheap. I’ll slowly replace the 4TB SAS with larger capacity SATA to make use of the spin down functionality of unraid. I don’t need the extra speed of SAS and I wouldn’t mind saving a few watt-hours.
I’m interested in learning more about NixOS, but until I get there, Proxmox all day.
I use TrueNAS SCALE at home on my NAS, and since they ditched Kubernetes (and TrueCharts, which was a happy little accident) it’s been great.
- It’s free.
- New hardware is incorporated into the kernel reasonably regularly, IMO.
- ZFS file system.
- Pretty easy to control with the GUI exclusively.
- Docker is now very easy to use; images are mostly community supported, but I’ve not had issues with Jellyfin, the *arr stack, Pi-hole, a reverse proxy, etc.
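Since SCALE now runs plain Docker under the hood, something like Jellyfin can also be started from the shell; a rough sketch with assumed dataset paths:

```
# run Jellyfin as a container (host paths are assumptions, adjust to your pool):
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /mnt/tank/apps/jellyfin:/config \
  -v /mnt/tank/media:/media:ro \
  --restart unless-stopped \
  jellyfin/jellyfin
```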
OS: Unraid
It’s primarily NAS software, with a form of software raid functionality built in.
I like it mainly because it works well and the GUI makes it very easy to use and work with. On top of that you can run VMs and Docker containers, so it is very versatile as well.
I use it to host the following services on my network:
- Nextcloud
- Jellyfin
- CUPS
It costs a bit of money up-front, but for me it was well-worth the investment.
+1 for Unraid. Nice OS that lets me easily do what I want.
Love Unraid. Been using it for a few years now on an old Dell server. I’m about to turn my current gaming PC into the main server so I can use GPU passthrough and CPU pinning for things like a VM just for LLM/AI work and an EndeavourOS VM for gaming. I just need to figure out how to keep my old server working somehow, because of all the drive storage I have already set up, which my PC doesn’t have space for without a new case.
For anyone looking to setup Unraid, I highly recommend the SpaceInvaderOne YouTube channel. It helped tremendously when I got started.
Rocky Linux. I’d been using Debian, but I like firewalld a bit more than ufw, and I don’t trust myself enough to touch iptables directly.
You can run Firewalld anywhere
I know. But having it out of the box is nicer.
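For anyone comparing the two, day-to-day firewalld usage looks roughly like this (the https service is just an example):

```
# permanently allow HTTPS in the default zone, then apply the change:
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
# inspect what the active zone currently allows:
sudo firewall-cmd --list-all
```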
Stage 1: Ubuntu server
Stage 2: Ubuntu server + docker
Stage 3: Ansible/OpenTofu/Kubernetes
Stage 4: Proxmox
Don’t get me wrong, I use libvirt where it makes sense, but why would anyone go to Proxmox from a full IaC setup?
I do 2 at home, and 3 at work, coming from 4 at both and haven’t looked back.
Because it is much simpler to provision a VM
Maybe for the initial setup, but nothing is more repeatable than automation. The more manual steps you have to build your infra, the harder it is to recover/rebuild/update later
You automate the VM deployments.
If you’re automating the creation and deployment of VMs, and the downstream operating systems, and not doing some sort of HA/failover meme setup… Proxmox makes things way more complicated than raw libvirt/qemu/KVM.
Can you please elaborate on this? I am currently using MicroOS and am thinking about NixOS because of the quick setup, but also about Proxmox with NixOS on top. Where would libvirt fit into this scenario?
If you run plain Ubuntu/Fedora/whatever, you can use qemu/libvirt to run small virtual machines as required. You start and stop them with virsh, define them with simple XML files, and can easily automate their creation/destruction if desired.
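A minimal sketch of that virsh workflow (the domain name and XML file are assumptions for the example):

```
virsh define webvm.xml     # register a persistent VM from its XML definition
virsh start webvm          # boot it
virsh shutdown webvm       # ask the guest to power off cleanly
virsh undefine webvm       # drop the definition once it's no longer needed
```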
oops straight to stage 4.
but wait stage 3 looks daunting
Kubernetes is overkill for most things not just self hosting. If you need to learn it great otherwise don’t waste your time on it. Extremely complicated given what it provides.
fr, unless you’re horizontally scaling something or managing hundreds of services what’s the point
I agree with this thread, but to answer your question: I think the point is to tinker with it “just because”. We’re all in this for fun, not profit.
I’ve been using Alpine Linux. I’ve always leaned towards minimalism in my personal life so Alpine seems like an appropriate fit for me.
Since what is installed is intentional, I am able to keep track of changes more accurately. I keep a document for complete setup by hand, then reduce that to an install script so I can get back to the same state in a minimal amount of time if needed.
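A condensed sketch of what such an install script can look like (the package and service choices below are just examples, not my actual setup):

```
#!/bin/sh
# minimal Alpine re-setup: refresh the package index, install the intended
# packages, and enable their services via OpenRC (package list assumed):
apk update
apk add openssh nginx
rc-update add sshd default
rc-update add nginx default
rc-service sshd start
rc-service nginx start
```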
Since I only have a laptop and two Raspberry Pis, with no intention of expanding or upgrading, this works for me as a personal hobby.
I’ve even gone as far as to use Alpine Sway as a desktop to keep everything similar as well.
I wouldn’t recommend it for anyone who doesn’t have the time to learn. It doesn’t use systemd, and packages are often split, meaning you will have to figure out what additional packages you may need beyond the core package.
I appreciate the approach Alpine takes because, from a security point of view, fewer moving parts mean less surface area to exploit. In today’s social climate, who knows how or when I’ll become a target.
Debian, very simple and classic, but I started using BSDs recently.
I use Debian as well for all my servers, whether they are a VM or a container. It is lightweight, well supported, and dead stable.
Hypervisor
Gotta say, I personally like a rather niche product. I love Apache CloudStack.
Apache CloudStack is actually meant for companies providing VMs and K8s clusters to other companies. However, I’ve set it up for myself in my lab, accessible only over VPN.
What I like best about it is that it is meant to be deployed via Terraform and cloud-init. Since I’m actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.
Standing up a K8s cluster on it is incredibly easy. Basically it is all done with cloud-init, and that process is quite automated. In fact, it took me 15 minutes to stand up a 25-node cluster with 5 control nodes and 20 worker nodes.
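To give a flavour of the Terraform side, a hypothetical sketch; the resource type comes from the CloudStack Terraform provider, but the names, offering, and template below are assumptions:

```
# hypothetical main.tf for the CloudStack Terraform provider:
#   resource "cloudstack_instance" "k8s_worker" {
#     name             = "worker-01"
#     service_offering = "medium"
#     template         = "ubuntu-22.04-cloud-init"
#     zone             = "zone-1"
#   }
# then provision it:
terraform init && terraform apply
```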
Let’s compare it to other hypervisors, though. CloudStack is meant to handle global operations. Typically, CloudStack is split into regions, then into zones, then into pods, then into clusters, and finally into hosts. Let’s just say it gets very, very large if you need it to. Only it’s free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware. And none of that even costs any licensing.
Technically speaking, CloudStack Management is capable of handling a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. I think that’s interesting because it will still work even if you choose to use another hypervisor you prefer. This is mostly meant as a transition path to KVM, but it should still work, though I haven’t tested it.
I have, however, tested it with Ceph for storage, and it does work. Perhaps doing that is slightly more annoying than with Proxmox. But you can actually create a number of different types of storage (HDD vs. SSD) if you wanted to take the cloud-provider route.
Overall, I like it because it works well for IaaS. I have 2,000 VLANs primed for use with its virtual networking. I have one host currently joined, and a second host in line for setup.
Here is the article I used to get it initially set up, though I will admit that I personally used a different VLAN for the management IP than for the public IP VLAN: http://rohityadav.cloud/blog/cloudstack-kvm/