Why then not just use ZFS or BTRFS? Way less overhead.
Ceph’s main advantage is the distribution of storage over multiple nodes, which you’re not planning on doing?
Oof, imagine seeing this when standing on the moon, knowing there is no home anymore, and having a limited oxygen supply…
I think your idea is pretty much correct. One step that might be missing is updating your boot loader to boot into the correct partition, depending on your configuration.
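For a GRUB-based setup, switching the boot target could look something like this. This is just a sketch under the assumption that the two root partitions appear as separate menu entries (the entry name `SlotB` is made up for illustration):

```shell
# One-time boot into the other root partition's entry; if it fails,
# the next reboot falls back to the old default.
sudo grub-reboot 'SlotB'
sudo reboot

# Once the new system has come up healthy, make it the permanent default:
sudo grub-set-default 'SlotB'
```

`grub-reboot` is handy here precisely because it only affects the next boot, so a broken update doesn't leave you stuck in the wrong partition.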
That’s probably the issue: cron runs jobs from a different working directory (typically your home directory), so calling the script with a relative path won’t work.
Just use the full path to the script, something like /home/username/folder/directory/backup.sh, and it’ll probably just work.
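You can see the effect with a small demonstration (not the poster's actual script; all paths here are temporary stand-ins):

```shell
#!/bin/sh
# cron runs jobs from its own working directory, so a relative path
# to a script elsewhere breaks. An absolute path works from anywhere.

workdir=$(mktemp -d)    # stand-in for cron's working directory
scriptdir=$(mktemp -d)  # stand-in for where backup.sh actually lives

# Create a dummy "backup" script in its own directory:
printf '%s\n' '#!/bin/sh' 'echo backup ran' > "$scriptdir/backup.sh"
chmod +x "$scriptdir/backup.sh"

cd "$workdir"
# Relative path fails here, because backup.sh is not in this directory:
./backup.sh 2>/dev/null || echo "relative path: not found"
# Absolute path works regardless of the current directory:
"$scriptdir/backup.sh"
```

Another common trick is to put `cd "$(dirname "$0")"` at the top of the script itself, so any relative paths inside it keep working no matter where it's called from.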
Today I migrated my data from my old ZFS pool to a new, bigger one; the rsync of 13.5 TiB took roughly 18 hours. It’s slow spinning-disk storage, so that’s fine.
The second and third runs of the same rsync took like 5 seconds, blazing fast.