I’m trying to find a good method of making periodic, incremental backups. I assume the most minimal approach would be to have a cron job run rsync periodically, but I’m curious what other solutions may exist.

I’m interested in both command-line and GUI solutions.
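
For reference, the minimal cron+rsync approach I have in mind looks something like this (paths and schedule are only examples):

    # crontab entry: run the backup script nightly at 02:00
    0 2 * * * /usr/local/bin/backup.sh

    # /usr/local/bin/backup.sh: incremental snapshots via rsync hard links
    #!/bin/bash
    DEST="/mnt/backup/home-$(date +%Y%m%d)"
    rsync -a --delete --link-dest=/mnt/backup/latest /home/ "$DEST"
    ln -sfn "$DEST" /mnt/backup/latest   # unchanged files are hard-linked, so each snapshot is cheap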

  • inex@feddit.de · 1 year ago

    Timeshift is a great tool for creating incremental backups. Basically, it’s a frontend for rsync, and it works great. If needed, you can also use it from the CLI.
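
    For example, from the CLI (run as root):

        timeshift --create --comments "pre-upgrade" --tags D   # take a snapshot, tagged as a daily
        timeshift --list                                       # list existing snapshots
        timeshift --restore                                    # interactive rollback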

  • HarriPotero@lemmy.world · 1 year ago (edited)

    I rotate between a few computers. Everything is synced between them with Syncthing, and they all have automatic btrfs snapshots, so I have several physical points to roll back from.

    For a worst-case scenario, everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs and encfs and then rsyncs any updates.
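
    In outline, the script is just this (the pcloudfs mount line is a stand-in for whatever your pCloud client provides; paths are examples):

        #!/bin/bash
        pcloudfs ~/pcloud                          # stand-in: mount the pCloud share
        encfs ~/pcloud/encrypted ~/backup-plain    # unlock the encfs layer on top of it
        rsync -a --delete ~/data/ ~/backup-plain/  # push any updates
        fusermount -u ~/backup-plain               # unmount both layers again
        fusermount -u ~/pcloud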

    • garam@lemmy.my.id · 1 year ago

      Back In Time

      Doesn’t Timeshift serve the same purpose, or is it just a matter of preference?

      • NoXPhasma@lemmy.world · 1 year ago

        Yes, it’s kind of the same purpose. But Timeshift runs as a cron job and allows for an easy rollback, while I use BIT for manual backups.

  • akash_rawal@lemmy.world · 1 year ago

    I use an rsync+btrfs snapshot solution.

    1. Use rsync to incrementally collect all data into a btrfs subvolume
    2. Deduplicate using duperemove
    3. Create a read-only snapshot of the subvolume

    I don’t have a backup server, just an external drive that I only connect during backup.

    Deduplication is mediocre; I’m still looking for a snapshot-aware replacement for duperemove.
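
    In outline (device and paths are examples):

        # 1. Incrementally collect all data into a subvolume on the external drive
        mount /dev/sdX1 /mnt/backup
        rsync -a --delete /home/ /mnt/backup/data/

        # 2. Deduplicate (-d actually submits the dedupe requests)
        duperemove -rdh /mnt/backup/data

        # 3. Read-only snapshot of the subvolume
        btrfs subvolume snapshot -r /mnt/backup/data \
            "/mnt/backup/snapshots/data-$(date +%Y%m%d)"
        umount /mnt/backup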

    • Jo Miran@lemmy.ml · 1 year ago

      I’m not trying to start a flame war, but I’m genuinely curious: why do people like btrfs over zfs? Btrfs seems very much “not ready for prime time.”

      • EddyBot@feddit.de · 1 year ago

        btrfs is included in the Linux kernel; zfs is not on most distros.
        The small chance of an external kernel module breaking with a kernel upgrade does happen sometimes, and that’s probably scary enough for a lot of people.

      • akash_rawal@lemmy.world · 1 year ago

        The features necessary for most btrfs use cases are all stable. Plus, btrfs is readily available in the Linux kernel, whereas zfs needs an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.

        (All the above only applies if your primary OS is Linux, if you use Solaris then zfs might be better.)

      • Rockslide0482@discuss.tchncs.de · 1 year ago

        I’ve only ever run ZFS on a Proxmox/server system, but doesn’t it require a not-insignificant amount of resources? BTRFS is not flawless, but it does have a pretty good feature set.

  • Jo Miran@lemmy.ml · 1 year ago

    At the core it has always been rsync and cron. Sure, I add a NAS and things like rclone+Cryptomator to keep extra copies of synchronized data (mostly documents and media files) spread around, but it’s always rsync+cron at the core.
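
    The rclone leg is a one-liner; the remote name below is whatever you set up with rclone config:

        rclone sync ~/Documents nas-crypt:documents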

  • knfrmity@lemmygrad.ml · 1 year ago

    I have scripts scheduled to run rsync on local machines, which save incremental backups to my NAS. The NAS in turn is incrementally backed up to a remote server with Borg.

    Not all of my machines are on all the time, so I also built in a routine that checks how old the last backup is and only makes a new one if the previous backup is older than a set interval (a sketch follows below).

    I also save a lot of my config files to a local git repo, the database of which is regularly dumped and backed up in the same way as above.
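
    The age check is a few lines of bash (paths and interval are examples):

        STAMP="/mnt/nas/backups/$(hostname)/.last_backup"
        MAX_MIN=$((24 * 60))   # minimum interval between backups: 24h

        # find prints the stamp file only if it was touched within the interval;
        # an empty result means the last backup is too old, so take a new one.
        if [ -z "$(find "$STAMP" -mmin -"$MAX_MIN" 2>/dev/null)" ]; then
            rsync -a --delete /home/ "/mnt/nas/backups/$(hostname)/home/"
            touch "$STAMP"
        fi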

  • rodbiren@midwest.social · 1 year ago

    I use Syncthing on several devices to replicate the data I want to keep backups of: family photos, journals, important docs, etc. It works perfectly, and I run a relay node to give back to the community, since I’m on an unlimited data connection.

    • stewsters@lemmy.world · 1 year ago

      I use Syncthing for my documents as well. My source code is on GitHub if it’s important, and I can reinstall everything else if I need to.

  • BCsven@lemmy.ca · 1 year ago

    DejaDup on one computer. Another uses Syncthing, and on another I do a manual Grsync. I really should have a better plan. lol

  • mariom@lemmy.world · 1 year ago

    Is it just me, or does the backup topic come up every few days on [email protected] and [email protected]?

    To be on topic as well: I use the restic+autorestic combo. Pretty simple; I made a repo with a small script to generate the config for different machines, and that’s it. Storing between machines and B2.
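
    A minimal .autorestic.yml looks roughly like this (backend names and paths are examples; check the autorestic docs for your version):

        version: 2
        backends:
          b2:
            type: b2
            path: my-bucket:machine1
        locations:
          home:
            from: /home/me
            to:
              - b2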

    • grue@lemmy.ml · 1 year ago

      It hasn’t succeeded in nagging me to properly back up my data yet, so I think it needs to be discussed even more.

      • TheAnonymouseJoker@lemmy.ml · 1 year ago

        I would argue you need to lose your data once before you consider it more important than all the useless things in your life. Most people are like this.

  • useless@lemmy.ml · 1 year ago (edited)

    I use btrbk to send btrfs snapshots to a local NAS: consistent backups with no downtime. The only annoyance (for me at least) is that both the send and receive ends must use the same SELinux policy, or labels won’t match.
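
    A minimal btrbk.conf for this kind of setup looks something like this (pool path, target URL, and retention values are examples; check the btrbk docs for the exact syntax of your version):

        snapshot_preserve_min   2d
        snapshot_preserve       14d
        target_preserve         20d 10w

        volume /mnt/btr_pool
          snapshot_dir btrbk_snapshots
          target send-receive ssh://nas/mnt/backup/btrbk
          subvolume home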

  • kool_newt@lemm.ee · 1 year ago (edited)

    I made my own bash script that uses rsync. I stopped using GitHub, so here’s a paste, lol.

    I define the backups like this; the first item on each line is the source, and the other items on that line are its exclusions:

    /home/shared
    /home/jamie     tmp/ dj_music/ Car_Music_USB
    /home/jamie_work
    
    
    #!/usr/bin/ssh-agent /bin/bash
    
    # chronicle.sh
    
    
    
    # Get absolute directory chronicle.sh is in
    REAL_PATH=$(cd "$(dirname "$0")" && pwd)
    
    # Defaults
    BACKUP_DEF_FILE="${REAL_PATH}/backup.conf"
    CONF_FILE="${REAL_PATH}/chronicle.conf"
    FAIL_IF_PRE_FAILS='0'
    FIXPERMS='true'
    FORCE='false'
    LOG_DIR='/var/log/chronicle'
    LOG_PREFIX='chronicle'
    NAME='backup'
    PID_FILE="${HOME}/chronicle/chronicle.pid"  # a literal '~' would not expand inside quotes
    RSYNC_OPTS="-qRrltH --perms --delete --delete-excluded"
    SSH_KEYFILE="${HOME}/.ssh/id_rsa"
    TIMESTAMP='date +%Y%m%d-%T'
    
    # Set PID file for root user
    [ $EUID = 0 ] && PID_FILE='/var/run/chronicle.pid'
    
    
    # Print an error message and exit
    ERROUT () {
        TS="$(TS)"
        echo "$TS $LOG_PREFIX (error): $1"
        echo "$TS $LOG_PREFIX (error): Backup failed"
        rm -f "$PID_FILE"
        exit 1
    }
    
    
    # Usage output
    USAGE () {
    cat << EOF
    USAGE chronicle.sh [ OPTIONS ]
    
    OPTIONS
        -d path   backup definitions file (default: backup.conf)
        -f path   configuration file (default: chronicle.conf)
        -F        force overwrite incomplete backup
        -m        skip normalizing ownership/permissions on backup files
        -h        display this help
    EOF
    exit 0
    }
    
    
    # Timestamp
    TS ()
    {
        if
            echo $TIMESTAMP | grep tai64n &>/dev/null
        then
            echo "" | eval $TIMESTAMP
        else
            eval $TIMESTAMP
        fi
    }
    
    
    # Logger function
    # First positional parameter is message severity (notice|warn|error)
    # The log message can be the second positional parameter, stdin, or a HERE string
    LOG () {
        local TS="$(TS)"
        # local input=""
    
        msg_type="$1"
    
        # if [[ -p /dev/stdin ]]; then
        #     msg="$(cat -)"
        # else
            shift
            msg="${@}"
        # fi
        echo "$TS chronicle ("$msg_type"): $msg"
    }
    
    # Logger function
    # First positional parameter is message severity (notice|warn|error)
    # The log message canbe stdin or a HERE string
    LOGPIPE () {
        local TS="$(TS)"
        msg_type="$1"
        msg="$(cat -)"
        echo "$TS chronicle ("$msg_type"): $msg"
    }
    
    # Process Options
    while
        getopts ":d:f:Fmh" options; do
            case $options in
                d ) BACKUP_DEF_FILE="$OPTARG" ;;
                f ) CONF_FILE="$OPTARG" ;;
                F ) FORCE='true' ;;
                m ) FIXPERMS='false' ;;
                h ) USAGE; exit 0 ;;
                * ) USAGE; exit 1 ;;
        esac
    done
    
    
    # Ensure the configuration file exists (an emptiness test would never fire, since a default is always set)
    if
        [ ! -f "$CONF_FILE" ]
    then
        ERROUT "Cannot find configuration file $CONF_FILE"
    fi
    
    # Read the config file
    . "$CONF_FILE"
    
    
    # Set the owner and mode for backup files
    if [ $FIXPERMS = 'true' ]; then
        #FIXVAR="--chown=${SSH_USER}:${SSH_USER} --chmod=D770,F660"
        FIXVAR="--usermap=*:${SSH_USER} --groupmap=*:${SSH_USER} --chmod=D770,F660"
    fi
    
    
    # Set up logging
    
    if [ "${LOG_DIR}x" = 'x' ]; then
        ERROUT "(error): ${LOG_DIR} not specified"
    fi
    
    mkdir -p "$LOG_DIR"
    LOGFILE="${LOG_DIR}/chronicle.log"
    
    # Make all output go to the log file
    exec >> "$LOGFILE" 2>&1
    
    
    # Ensure the backup definitions file exists
    if
        [ ! -f "$BACKUP_DEF_FILE" ]
    then
        ERROUT "Cannot find backup definitions file $BACKUP_DEF_FILE"
    fi
    
    
    # Check for essential variables
    VARS='BACKUP_SERVER SSH_USER BACKUP_DIR BACKUP_QTY NAME TIMESTAMP'
    for var in $VARS; do
        if [ -z "${!var}" ]; then   # indirect expansion: test the variable named by $var
            ERROUT "${var} not specified"
        fi
    done
    
    
    LOG notice "Backup started, keeping $BACKUP_QTY snapshots with name \"$NAME\""
    
    
    # Export variables for use with external scripts
    export SSH_USER RSYNC_USER BACKUP_SERVER BACKUP_DIR LOG_DIR NAME REAL_PATH
    
    
    # Check for PID
    if
        [ -e "$PID_FILE" ]
    then
        LOG error "$PID_FILE exists"
        LOG error 'Backup failed'
        exit 1
    fi
    
    # Write PID
    touch "$PID_FILE"
    
    # Add key to SSH agent
    ssh-add "$SSH_KEYFILE" 2>&1 | LOGPIPE notice -
    
    # enhance script readability
    CONN="${SSH_USER}@${BACKUP_SERVER}"
    
    
    # Make sure the SSH server is available
    if
        ! ssh $CONN echo -n ''
    then
        ERROUT "$BACKUP_SERVER is unreachable"
    fi
    
    
    # Fail if ${NAME}.0.tmp is found on the backup server.
    if
        ssh ${CONN} [ -e "${BACKUP_DIR}/${NAME}.0.tmp" ] && [ "$FORCE" = 'false' ]
    then
        ERROUT "${NAME}.0.tmp exists, ensure backup data is in order on the server"
    fi
    
    
    # Try to create the destination directory if it does not already exist
    if
        ssh $CONN [ ! -d $BACKUP_DIR ]
    then
        if
            ssh $CONN mkdir -p "$BACKUP_DIR" && \
            ssh $CONN chown ${SSH_USER}:${SSH_USER} "$BACKUP_DIR"
        then :
        else
            ERROUT "Cannot create $BACKUP_DIR"
        fi
    fi
    
    # Create metadata directory
    ssh $CONN mkdir -p "$BACKUP_DIR/chronicle_metadata"
    
    
    #-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    # PRE_COMMAND
    
    if
        [ -n "$PRE_COMMAND" ]
    then
        LOG notice "Running ${PRE_COMMAND}"
        if
            $PRE_COMMAND
        then
            LOG notice "${PRE_COMMAND} complete"
        else
            LOG error "Execution of ${PRE_COMMAND} was not successful"
            if [ "$FAIL_IF_PRE_FAILS" -eq 1 ]; then
                ERROUT 'Command specified by PRE_COMMAND failed and FAIL_IF_PRE_FAILS enabled'
            fi
        fi
    fi
    
    
    #-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    # Backup
    
    # Make a hard link copy of backup.0 to rsync with
    if [ $FORCE = 'false' ]; then
        ssh $CONN "[ -d ${BACKUP_DIR}/${NAME}.0 ] && cp -al ${BACKUP_DIR}/${NAME}.0 ${BACKUP_DIR}/${NAME}.0.tmp"
    fi
    
    
    while read -u 9 l; do
    
        # Skip commented lines
        if [[ "$l" =~ ^#.* ]]; then
            continue
        fi

        # Skip sources that are not absolute paths
        if [[ $l != /* ]]; then
            LOG warn "$l is not an absolute path"
            continue
        fi

        # Reduce whitespace to one tab
        line=$(echo "$l" | tr -s '[:space:]' '\t')
    
        # get the source
        SOURCE=$(echo "$line" | cut -f1)
    
        # get the exclusions
        EXCLUSIONS=$(echo "$line" | cut -f2-)
    
        # Format exclusions for the rsync command
        unset exclude_line
        if [ ! "$EXCLUSIONS" = '' ]; then
            for each in $EXCLUSIONS; do
                exclude_line="$exclude_line--exclude $each "
            done
        fi
    
    
        LOG notice "Using SSH transport for $SOURCE"
    
    
        # get directory metadata
        PERMS="$(getfacl -pR "$SOURCE")"
    
    
        # Copy metadata
        ssh $CONN mkdir -p ${BACKUP_DIR}/chronicle_metadata/${SOURCE}
        echo "$PERMS" | ssh $CONN -T "cat > ${BACKUP_DIR}/chronicle_metadata/${SOURCE}/metadata"
    
    
        LOG debug "rsync $RSYNC_OPTS $exclude_line "$FIXVAR" "$SOURCE" \
        "${SSH_USER}"@"$BACKUP_SERVER":"${BACKUP_DIR}/${NAME}.0.tmp""
    
        rsync $RSYNC_OPTS $exclude_line $FIXVAR "$SOURCE" \
        "${SSH_USER}"@"$BACKUP_SERVER":"${BACKUP_DIR}/${NAME}.0.tmp"
    
    done 9< "${BACKUP_DEF_FILE}"
    
    
    #-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    # Try to see if the backup succeeded
    
    if
        ssh $CONN [ ! -d "${BACKUP_DIR}/${NAME}.0.tmp" ]
    then
        ERROUT "${BACKUP_DIR}/${NAME}.0.tmp not found, no new backup created"
    fi
    
    
    # Fail if the temp backup directory is empty (the listing must run on the server, not locally)
    if
        [ -z "$(ssh $CONN ls -A ${BACKUP_DIR}/${NAME}.0.tmp 2>/dev/null)" ]
    then
        ERROUT "No new backup created"
    fi
    
    #-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    # Rotate
    
    # Number of oldest backup
    X=$((BACKUP_QTY - 1))
    
    
    LOG notice 'Rotating previous backups'
    
    # keep oldest directory temporarily in case rotation fails
    ssh $CONN [ -d "${BACKUP_DIR}/${NAME}.${X}" ] && \
    ssh $CONN mv "${BACKUP_DIR}/${NAME}.${X}" "${BACKUP_DIR}/${NAME}.${X}.tmp"
    
    
    # Rotate previous backups
    until [ $X -eq -1 ]; do
        Y=$X
        X=$((X - 1))
    
        ssh $CONN [ -d "${BACKUP_DIR}/${NAME}.${X}" ] && \
        ssh $CONN mv "${BACKUP_DIR}/${NAME}.${X}" "${BACKUP_DIR}/${NAME}.${Y}"
        [ $X -eq 0 ] && break
    done
    
    # Create "backup.0" directory
    ssh $CONN mkdir -p "${BACKUP_DIR}/${NAME}.0"
    
    
    # Get individual items in "backup.0.tmp" directory into "backup.0"
    # so that items removed from backup definitions rotate out
    while read -u 9 l; do
    
        # Skip commented lines
        if [[ "$l" =~ ^#.* ]]; then
            continue
        fi

        # Skip invalid sources that are not absolute paths
        if [[ $l != /* ]]; then
            continue
        fi

        # Reduce whitespace to one tab
        line=$(echo "$l" | tr -s '[:space:]' '\t')
    
        source=$(echo "$line" | cut -f1)
    
        source_basedir="$(dirname $source)"
    
        ssh $CONN mkdir -p "${BACKUP_DIR}/${NAME}.0/${source_basedir}"
    
        LOG debug "ssh $CONN cp -al "${BACKUP_DIR}/${NAME}.0.tmp${source}" "${BACKUP_DIR}/${NAME}.0${source_basedir}""
    
        ssh $CONN cp -al "${BACKUP_DIR}/${NAME}.0.tmp${source}" "${BACKUP_DIR}/${NAME}.0${source_basedir}"
    
    done 9< "${BACKUP_DEF_FILE}"
    
    
    # Remove oldest backup
    X=$((BACKUP_QTY - 1)) # Number of oldest backup
    ssh $CONN rm -Rf "${BACKUP_DIR}/${NAME}.${X}.tmp"
    
    # Set time stamp on backup directory
    ssh $CONN touch -m "${BACKUP_DIR}/${NAME}.0"
    
    # Delete temp copy of backup
    ssh $CONN rm -Rf "${BACKUP_DIR}/${NAME}.0.tmp"
    
    #-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    # Post Command
    
    if
        [ ! "${POST_COMMAND}x" = 'x' ]
    then
        LOG notice "Running ${POST_COMMAND}"
        if
            $POST_COMMAND
        then
            LOG notice "${POST_COMMAND} complete"
        else
        LOG warn "${POST_COMMAND} completed with errors"
        fi
    fi
    
    # Delete PID file
    rm -f "$PID_FILE"
    
    # Log success message
    LOG notice 'Backup completed successfully'