I do the same. Fedora on my laptop because I want a balance of stability and having the newest features. Servers run Debian, because I don’t have time to fix and update things.
Logcheck. It took ages to make sure innocent logs are ignored, but now I get an email as soon as anything non-routine happens on my servers. I get emails with logs from every update, every time I log in, etc. This has given me the most confidence that nothing unexpected is happening on my servers. Of course, one needs to make sure that the firewall is configured well, and that you use ssh keys etc., but logcheck is how I know I’m doing enough.
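For anyone curious, the tuning is just dropping extended-regex ignore rules into /etc/logcheck/ignore.d.server/. Something like this for routine cron noise (the file name and pattern are only illustrative, and assume traditional syslog-format timestamps):

```
# /etc/logcheck/ignore.d.server/local-cron (hypothetical file name)
# Suppress routine cron session open/close messages so they don't trigger an email
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ CRON\[[0-9]+\]: pam_unix\(cron:session\): session (opened|closed) for user root.*$
```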
How do you upload a snapshot?
Basically, as you said. Mount the data somewhere and back up its contents.
I back up snapshots rather than the current data because I don’t want to stop the running containers that read and write that data. I’d rather avoid the situation where a container is writing data while it’s being backed up. The backup happens shortly after the daily snapshot is made, so the difference between current and snapshot data is small.
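With ZFS, snapshots show up read-only under the dataset’s hidden .zfs/snapshot directory, so the backup tool can read from there without touching the live data. Roughly like this (dataset, snapshot, and repository names are placeholders):

```
# Hypothetical dataset and snapshot names
zfs snapshot tank/appdata@daily-2024-06-01
# Snapshot contents are visible read-only here, no need to stop the containers
ls /tank/appdata/.zfs/snapshot/daily-2024-06-01
# Point the backup tool at that path instead of the live dataset
restic -r s3:s3.example.com/bucket backup /tank/appdata/.zfs/snapshot/daily-2024-06-01
```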
As others have said, with an incremental filesystem level mechanism, the backup process won’t be too taxing for the CPU. I have ZFS set up which makes this easy and I make hourly snapshots using sanoid which also get sent to another mirrored pair of connected drives using syncoid. Then, once a day, I upload encrypted daily snapshots to a bucket in the cloud using restic. Sounds complicated, but actually sanoid/syncoid and restic do all the heavy lifting. All I did is automate their schedules using systemd timers and some scripts to backup the right directories.
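To give a sense of how little glue is involved: the sanoid side is a small declarative config and the syncoid step is a one-liner. Roughly like this (pool and dataset names are made up, retention numbers are just an example):

```
# /etc/sanoid/sanoid.conf (excerpt, names are hypothetical)
[tank/appdata]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

```
# Replicate the snapshots to the mirrored backup pool (run from a systemd timer)
syncoid -r tank/appdata backuppool/appdata
```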
Looks perfect! Exactly what I was looking for. Thanks!
My configuration and deployment is managed entirely via an Ansible playbook repository. In case of absolute disaster, I just have to redeploy the playbook. I do run all my stuff on top of mirrored drives so a single failure isn’t disastrous if I replace the drive quickly enough.
For when that’s not enough, the data itself is backed up hourly (via ZFS snapshots) to a spare pair of drives and nightly to S3 buckets in the cloud (via restic). Everything automated with systemd timers and some scripts. The configuration for these backups is part of the playbooks of course. I test the backups every 6 months by trying to reproduce all the services in a test VM. This has identified issues with my restoration procedure (mostly due to potential UID mismatches).
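The test itself is mostly restic doing the work; roughly something like this inside the test VM (repository and paths are placeholders):

```
# Verify repository integrity, then restore into a scratch directory in the test VM
restic -r s3:s3.example.com/my-backups check
restic -r s3:s3.example.com/my-backups restore latest --target /restore-test
# Compare ownership against the service users created by the playbook;
# UID mismatches are exactly what these tests have caught for me
ls -ln /restore-test/srv
```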
And yes, I have once been forced to reinstall from scratch and I managed to do that rather quickly through a combination of playbooks and well tested backups.
What benefit do you get from running a Cloudflare proxy if you’re directing it to a VPS? I used to run with a Cloudflare proxy when my reverse proxy was hosted at home. Since then, I’ve moved it to a VPS and I no longer use the Cloudflare proxy, because I only expose the IP address of the VPS which is fine. Arguably Cloudflare provides you with DDoS protection, but that’s so far never been a problem for me.
Wireguard easily supports dual stack configuration on a single interface, but the VPN server must also have IPv6 enabled. I use AirVPN and I get both IPv6 and IPv4 with a single wireguard tunnel. In addition to the ::/0 route you also need a static IPv6 address for the wireguard interface. This address must be provided to you by ProtonVPN.
If that’s not possible, the only solution is to entirely disable IPv6.
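For illustration, a dual-stack WireGuard client config looks roughly like this (keys, endpoint, and addresses are placeholders; the IPv6 address is the static one your provider assigns):

```
# /etc/wireguard/wg0.conf (all values are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.2.0.2/32, fd00:dead:beef::2/128

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route both IPv4 and IPv6 through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```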
Correct. And getting the right configuration is pretty easy. Debian has good defaults. The only changes I make are configuring it to send me emails when updates are installed. These emails will also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you’ll be notified that they’re being held back.
So yea, I strongly recommend unattended-upgrades with email configured.
Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don’t run anything very important and that can handle downtime.
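For reference, the relevant settings live in /etc/apt/apt.conf.d/50unattended-upgrades; something like this (the address and the blacklist entry are just examples):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt, values are examples)
Unattended-Upgrade::Mail "admin@example.com";
// Blacklist entries are regexes matched against package names;
// hold back kernel packages on the ZFS box so kernel and modules stay in sync
Unattended-Upgrade::Package-Blacklist {
        "linux-image-";
};
// Optional: reboot automatically on machines where downtime is fine
Unattended-Upgrade::Automatic-Reboot "false";
```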
A few simple rules keep it manageable for me:
This has been working great for me for the past several months.
For containers, I rely on Podman auto-update and systemd. Actually, I use my own script that imitates its behaviour, because I had issues with Podman pulling images that were not new but which nevertheless triggered restarts of the containers. However, I pin the major version number and check and update major versions manually. Major version updates stung me too much in the past, when I’d update them after a long break.
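For context, the upstream mechanism is just a container label plus a packaged systemd timer, and the major-version pinning happens in the image tag. A rough sketch (the image and container name are only examples):

```
# Create the container with the auto-update label; the image tag pins the major version
podman create --name db \
  --label io.containers.autoupdate=registry \
  docker.io/library/postgres:16

# Wrap it in a systemd unit and enable the packaged auto-update timer
podman generate systemd --new --name db > ~/.config/systemd/user/container-db.service
systemctl --user daemon-reload
systemctl --user enable --now container-db.service podman-auto-update.timer
```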
I expose my services to the web via my own VPS proxy :) I simply run only a very few of them, use 2FA where supported, keep them up to date, run each service as a rootless Podman container, and have a very verbose logcheck setup in case the container environment gets compromised. I allow only ports 80 and 443, and, very importantly, truly sensitive data (documents and such) is encrypted at rest, so that even if my services are compromised that data remains secure.
For ssh, I have set up a separate raspberry pi as a wireguard server into my home network. Therefore, for any ssh management I first connect via this wireguard connection.
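The firewall part boils down to a handful of rules. With ufw it looks roughly like this (assuming wg0 is the WireGuard interface; this is a sketch, not my exact rule set):

```
# Default deny, then allow only the web ports publicly
ufw default deny incoming
ufw allow 80/tcp
ufw allow 443/tcp
# SSH is reachable only over the WireGuard interface
ufw allow in on wg0 to any port 22 proto tcp
ufw enable
```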
Thanks for this useful reply! I think I’ll just need to closely examine my setup and figure out if I really need the ability to up/down interfaces like I described or whether the more persistent approach of networkd is actually more suitable for me. Sometimes I just want to reproduce behaviour that I’ve used before, but may not actually need.
Thanks for your reply! One thing I’m struggling with in networkd is hysteresis: toggling the interface down and then back up does not do what I expect. Setting the interface down does not clear the configuration, and setting the interface up does not reconfigure the interface; I have to run reconfigure for that. I was hoping that the declarative approach of networkd would make it easy to predict interface state and configuration.
This does make sense because configuration is not the same as operational state. However, what would the equivalent of ifdown (set interface down and remove configuration) and ifup (set interface up and reconfigure) be using networkd and networkctl? This kind of feature would be useful for me to test config changes, debug networking issues, disconnect part of the network while I’m making some changes, etc.
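Concretely, the closest I’ve come up with so far is something like this (interface name is just an example):

```
# Roughly ifdown: take the link down (the loaded config stays as-is)
networkctl down eth0
# Roughly ifup: bring the link up, then explicitly re-apply the .network file
networkctl up eth0
networkctl reconfigure eth0
```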
Thanks a lot for these tips! Especially about using the upstream deb.
I subscribed. I use navidrome since it has a slick UI and supports the subsonic API. Having both in one is great.
Thanks for your reply! Out of curiosity, what made you go with Prometheus over zabbix and check_mk in the end? Those two seem to be heavily recommended.
Legacy options always mean maintenance overhead, or things you need to work around when implementing new features. I suspect they’ve concluded that not enough people use it anymore to justify the overhead.
Why not have the reverse proxy also do renewal for the SMTP relay certificate and just rsync it to the relay? For a while I had one of my proxies do all the renewals and the other would rsync the certs from it.
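If the proxy uses certbot, for example, a deploy hook can handle the copy and reload automatically. A rough sketch (hostnames, paths, and the script name are made up):

```
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/push-to-relay.sh (hypothetical)
# Runs after each successful renewal; copies the cert to the SMTP relay and reloads postfix
rsync -aL /etc/letsencrypt/live/mail.example.com/ relay.example.com:/etc/postfix/tls/
ssh relay.example.com systemctl reload postfix
```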
Many open source projects are not developed by unpaid volunteers. The Linux kernel, for example, is primarily developed by professionals on paid time. I’m not convinced Linux kernel development would continue without business contributions, and more generally I’m not convinced all open source projects could just carry on without any payment.