The best solution to that situation is just a more vigorous application of XGH.
If you don’t mind me asking, how do you know the kernel they use is bloated compared to any other kernel? The vast majority of device support is built as kernel modules that are loaded only when the corresponding device is detected. You aren’t actually running everything in the kernel; it just has support for the devices if it detects them, which is basically the functionality you are asking for: ad-hoc device modules.
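You can see this for yourself on most distros (rough sketch, exact module paths can vary a bit):
find /lib/modules/$(uname -r) -name '*.ko*' | wc -l
lsmod | wc -l
The first counts the modules shipped on disk, the second counts what’s actually loaded right now; the gap between the two is all the “bloat” that never touches memory.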
Monolithic kernels aren’t “bad”. That’s subjective. Monolithic kernels have measurable and significant performance benefits over microkernels, and you also gain a massive reduction in complexity. Microkernels have historically not been very successful (e.g. Hurd) because managing that complexity is extremely difficult. Not impossible, but so far kernel development has favored monolithic kernels, and not without reason.
If what you describe is actually that easy, why wouldn’t every distro just do it during the install, and during updates with their package managers? I believe you can do this in Gentoo, but I don’t know whether it has measurable benefits beyond what performance tuning for your specific CPU arch would give you, since the devices you aren’t running don’t consume any resources beyond the storage space of the kernel.
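For what it’s worth, if you do want to try it, the mainline kernel has a build target that generates a config containing only the modules currently loaded (a sketch; run it from a kernel source tree, ideally seeded with your distro’s config):
make localmodconfig
Even then it’s mostly a storage-space win, for the reason above.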
NixOS with YaST support would indeed be an incredibly powerful setup. It would make the whole Nix ecosystem significantly more beginner friendly, and nicer even for someone who wants to be a power user. It would be really nice to have config options laid out for you in a UI. Most of the time I have to keep the options-search and package-search websites open, because there’s no easy way to get those lists from the console.
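There is at least some of this in the console, though it’s clunkier than the websites (assuming new-style nix with flakes enabled; commands may differ by version):
nix search nixpkgs <name>
man configuration.nix
nixos-option services.openssh.enable
The first searches packages, the man page lists every NixOS option, and nixos-option inspects a single option on a running system.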
You mention that their kernel is bloated; would you mind sharing how you measure that compared to other kernels, e.g. their kernel vs something more trimmed down? Is it a storage-space saving or a memory one? I’ve never really considered the weight of a kernel when comparing distros, so if you have some method I’d love to compare it against what I’m running.
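For a rough storage comparison, something like this works on most distros (the image name varies, e.g. vmlinuz-linux on Arch):
ls -lh /boot/vmlinuz-$(uname -r)
du -sh /lib/modules/$(uname -r)
That’s the compressed kernel image plus all the modules shipped for it, so it only captures disk footprint, not runtime memory.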
Vegans for OpenTofu brought a smile to my face immediately, I shall hopefully remember to use this when it comes up.
Whitespace is not visible; it is the absence of something that is visible. Whitespace should be used for the comfort of the reader, not to determine scope. Are you proposing that a " " character is more visible than “{}”? The fact that I have to quote it just to make apparent what I’m discussing speaks for itself. I’m not arguing that indentation is bad, far from it. In fact, the flexibility to use indentation purely for readability is what makes code more readable.
If you run it in podman, podman can export it to a Kubernetes YAML file, though it’s been a long time since I’ve tried it: podman kube generate $CONTAINERNAME
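And going the other way, recent podman can run that YAML back directly (older versions call it podman play kube instead):
podman kube play $FILENAME.yaml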
Is podman-compose really dead? Their GitHub page looks active at a glance. The tooling is so similar that I use podman for local testing and deploy to docker, and I’ve also done the reverse. As long as you’re not using really exotic parameters, it’s really just a drop-in replacement; I’ve even used GPU passthrough for an AI project with no problems in both docker and podman. At the end of the day, they’re just slightly different frontends for the same backend.
As far as docker support goes, it’s often as simple as just providing a Dockerfile, which is basically the same thing as your build scripts. These days I’ve often used the Dockerfile INSTEAD of the readme to figure out how to compile some projects.
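Even a trivial one documents the whole toolchain. A minimal sketch for a made-up CMake project would look something like:
FROM debian:bookworm
RUN apt-get update && apt-get install -y build-essential cmake
COPY . /src
WORKDIR /src
RUN cmake -B build && cmake --build build
Every dependency and build step is spelled out right there, which is exactly what readmes tend to leave out.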
Surely tar --help
is a valid tar command, right?
I agree. Setting aside whether it’s good or bad, or the readability concerns around nested braces: I fundamentally hate invisible delimiters. If it matters, make it visible. We have so many ASCII characters, why not just borrow a few?
If the only way to use the open source client is with a closed source server, is it really open source at all? The platform is the server.
Totally reasonable; something like LVM can at least get you to a RAID1 setup pretty easily.
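A rough sketch, assuming a volume group (called vg0 here) that already spans both drives:
lvcreate --type raid1 -m 1 -L 100G -n mirrored vg0
Any filesystem you put on /dev/vg0/mirrored is then kept in two copies, one per drive.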
RAID0 (combining both drives’ capacities) is not really tiered storage. You would want RAID1 (each drive is a copy of the other), but even that isn’t a backup. How will you be monitoring the drives so that you know if one of them actually fails?
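At minimum I’d put smartmontools on it; something like this (device names are just examples) gives you a quick health check, and the smartd daemon can be configured to alert you when a drive starts failing:
smartctl -H /dev/sda
smartctl -H /dev/sdb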
I don’t think the RPi has a new enough kernel, but with bcachefs you can do tiered storage: combine the SSD and hard drives into a single block device, make the SSD the read/write cache, and give the whole pool replicas=2, so that if one drive dies you still have the failover of the other. Do be aware that this setup is still not a backup, however.
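Roughly like this, from memory, so double-check against the bcachefs docs (device names are placeholders):
bcachefs format --label=ssd.ssd1 /dev/nvme0n1 --label=hdd.hdd1 /dev/sda --label=hdd.hdd2 /dev/sdb --replicas=2 --foreground_target=ssd --promote_target=ssd --background_target=hdd
Writes land on the SSD first, hot data gets promoted to it, and everything is flushed to the spinning drives in the background with two copies kept.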
I’ve used it in the past with rclone, just mounting it with a systemd service on boot and treating it like another folder on the system. Does it give you any logs as to why it’s not connecting right?
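Running the same mount command by hand with -vv usually tells you what’s going wrong. The unit I used was roughly this (remote name and mount point are placeholders):
[Unit]
Description=rclone mount
After=network-online.target

[Service]
ExecStart=/usr/bin/rclone mount myremote: /mnt/cloud --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /mnt/cloud
Restart=on-failure

[Install]
WantedBy=default.target
With that, journalctl -u on the unit is where any connection errors end up.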
You still use keys?
I use PgUp/PgDn every day, especially with terminal multiplexers, since I don’t know a faster way to view the scrollback buffer after long output than a quick couple of PgUps.
It does make sense. Thank you. I appreciate the link!
However, my cloud usage is purely as a proxy/load balancer; none of my cloud providers hold any actual data. They’re just routing traffic, and all data/processing is on premises. What I’m interested in is how to set up something like what you describe, but also on premises. From a design standpoint, if I wanted to protect myself from a ransomware attack, my cloud backups would obviously be lost too, because eventually they’re a mounted filesystem during a backup. So I don’t know how to wrap my head around handling this, storage-design wise; the specific tools I can figure out. How does one create a recovery point and keep it safe from something like this? Just image the entire filesystem from a live-booted offline environment? Feels like a chicken-and-egg problem to me.
I’ve thought about how I could handle disaster recovery for my homelab environment, but I haven’t come to any good solutions. For example, say my main concern was being hit by crypto ransomware. I can’t just recover from a regular backup, because I’m not sure how to make a backup that wouldn’t just get encrypted alongside everything else, since I mainly back everything up to my file server, which is then synced to the cloud. In that setup, my cloud backups would be lost as well.
Do you have any starting points on how others handle disaster recovery? I’d like to avoid manually making offline backups, because inevitably I’d forget to do it, which would make them useless anyway.
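One direction I’ve seen suggested for exactly this is making the backup repository append-only on the receiving side, so a compromised client can add snapshots but can’t delete or overwrite old ones. A sketch with restic and rest-server (hostnames and paths are made up):
rest-server --path /srv/restic --append-only
restic -r rest:http://backuphost:8000/ backup /home
The machine that gets crypto’d never holds credentials that can destroy history, so the pre-infection snapshots survive; pruning old snapshots happens only on the backup host itself, where the attacker has no access.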
I wouldn’t consider Arch minimalist. It just defaults to a net install with no desktop, and Debian’s default net install also doesn’t have a desktop. Arch is more “vanilla” than Debian, but not noticeably more minimal on first install.
Aren’t a lot of games aarch64-only? Do they even support x86? I’ve attempted in the past to use Waydroid for a game, but there was no way to install it on an x86 machine. Does Waydroid support some kind of box64-style translation layer?