Posted by ads' corner on Sunday, 2020-12-27. Posted in [Ansible][Backup][Software]
I was asked quite a few times how I do my backups with Restic.
For more than 10 years I was using “Duplicity” for backups, but in 2019 I changed to Restic. The main reason for the change was that Duplicity still can’t handle “Big Data”, as in: larger directories. In 2009 someone opened an issue on the Duplicity bugtracker, and this problem still exists today. For about two years I kept working around the problem, excluding files and trying to make the sigfile smaller. But at some point I decided that enough is enough and I needed to change the tool.
Duplicity knows two backup modes: full backup and incremental backup. Once in a while you take a full backup, and then you add incremental backups on top of that full backup. In order to restore a certain backup you need the full backup and the incremental backups. Therefore my go-to mode was to always have two full backups and a couple of incremental backups in-between. Even if something goes wrong with the latest full backup, I can still go back to the previous full backup (of course with some changes lost, but that’s still better than nothing). When taking a new full backup, the oldest one is only deleted once the new one is completed. Accordingly, when a new incremental backup is created, it’s a new set of files, and removing the backup removes all the files from this incremental backup. That worked well, but needed scheduling. Over time I wrote a wrapper script around Duplicity, which scheduled new full and incremental backups.
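Stripped of the wrapper logic, this scheduling boils down to a handful of Duplicity invocations; the source and target paths here are hypothetical examples:

```shell
# take a full backup once in a while
duplicity full /home file:///backup/home

# add incremental backups on top of the last full backup
duplicity incremental /home file:///backup/home

# keep the two most recent full backups (plus their incrementals)
duplicity remove-all-but-n-full 2 --force file:///backup/home
```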
Restic works in a different way. There is no concept of “full backup” and “incremental backup”. Basically every backup is a full backup, and Restic figures out which files changed, got deleted, or added. Also it does deduplication: if files are moved around, or appear multiple times, they are not added multiple times into the backup. Deduplication is something which Duplicity can’t do. But because Restic can do deduplication, there is no common set of files which belong to a single snapshot. Data blobs from one backup can stay in the repository forever, removing snapshots might not remove any files at all.
Restic on the other hand needs “prune” to remove old data. A snapshot can be removed according to the policy specified, but this does not remove the data from the backup directory. A prune run will go over the data and remove any block which is no longer needed.
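Snapshot removal and data removal are therefore two separate steps in Restic; a minimal sketch, with a hypothetical repository path:

```shell
# remove snapshots according to a policy - this only removes snapshot metadata
restic --repo /backup/backup/home forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12

# actually remove data blobs which are no longer referenced by any snapshot
restic --repo /backup/backup/home prune
```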
My first question, after figuring out which backup tool to use instead: shall I replicate the wrapper script, or try something else? Given that the backup doesn’t need complex scheduling, I decided against writing a complex wrapper. And since I am now deploying all devices with Ansible, I decided to integrate this into my Playbooks and deploy a set of shell scripts. The goal was to have a small number of dedicated scripts doing the daily backup work, and another set of “helper” scripts which I can use to inspect the backup, modify it, or restore something.
My main goals for this: “small number of programs/scripts” (Unix style: each tool does one job), “rapid development” (don’t spend weeks writing another scheduler), “rapid deployment” (re-run Playbooks and let Ansible deploy this to all devices).
My environment
I have three different sets of devices:
Laptops: they have an external disk which is mounted for backups, and the backup lands there first. From there another script syncs it to the NAS, where the procedure is replicated (another backup), and then this other backup is synced to an external host.
Small devices, like Raspberry Pi: they don’t have an external disk attached. The backup goes directly to the NAS. Even the cache and logfiles are on the NAS, to avoid writing to the SD card too often. In theory they don’t need a backup at all, because they are all installed using Ansible, and I can just replicate the installation. However, a backup still gives me the ability to look into changes over time, or restore something which - for whatever reason - is not in the Ansible deployment.
NAS: A Raspberry Pi takes a backup of all NAS drives, afterwards this backup is synced to an external host. No one wants all backups in one place.
Now I also have three different sets of shell scripts. One set which can handle the external disk, one set without external disk, one set just for the NAS backup. It’s not ideal, and if I find time I will clean this up and move everything into one single set of scripts.
Directories
Although it’s not exactly the way Restic is usually used to take backups, historically I have always split my backups by top-level directory. I have backups for (as an example):
/etc
/home
/var
/usr
…
Sometimes /home is split into the home directory of the primary laptop user (the largest backup directory), and all other home directories as separate backups.
I decided to keep this structure with Restic, as it allows me to define different retention times for the backups. Directories like /usr/ or /var/ have 6 months retention time, whereas /home has 2 years.
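Per-directory retention can be expressed directly with Restic's forget policies; a sketch with hypothetical repository paths (--keep-within takes durations like 6m or 2y):

```shell
# shorter retention for system directories
restic --repo /backup/backup/usr forget --keep-within 6m

# longer retention for home directories
restic --repo /backup/backup/home forget --keep-within 2y
```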
Encryption
Restic can encrypt the backups. This is handled by having a “credentials” vault in my Playbook directory, and including the password in the backup scripts. The password can be set as an environment variable ($RESTIC_PASSWORD), therefore all scripts which need to access the repository have the following lines near the top:
# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"
I don’t do unencrypted backups.
Integration into monitoring
The backups are integrated into my monitoring. For this I need to know the “age” of every backup, and if one or more of them are outdated.
After every backup is taken, the status of each backup (usually a directory in /) is written to a file in /root (/root/restic-status.log). The status of each backup is appended to this file, and the file is never deleted. The status script finds all backup entries in it, and then for each backup finds the latest entry. Example from the status file:
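For illustration, the status file for the scenario described below might look like this (hypothetical timestamps, in the format written by restic-generate-status.sh):

```
root 2020-12-24T22:10:01.000000000+01:00
srv 2020-12-24T22:12:44.000000000+01:00
tmp 2020-12-24T22:13:02.000000000+01:00
usr 2020-12-24T22:20:19.000000000+01:00
var 2020-12-24T22:31:55.000000000+01:00
opt 2020-12-25T22:09:12.000000000+01:00
root 2020-12-25T22:10:33.000000000+01:00
```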
Backups exist for root, srv, tmp, usr and var on the 24th, and for opt and root on the 25th. The status will report that the following backup entries exist: root, srv, tmp, usr, var, opt - and will report that srv, tmp, usr and var are outdated (whether a backup counts as “outdated” depends on the configured age).
The reason this status file exists: scanning the backup is quite expensive, and needs to mount the backup disk. Having the status in a file avoids a backup scan every time the monitoring requests this data. The status file is only updated when a new backup is taken, therefore there is no need to scan the backup itself every single time.
This will also catch any missing backups: if one backup only appears earlier in the status file, and is missing later, it will be reported as outdated.
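The detection logic can be sketched as follows, assuming the status file format "&lt;name&gt; &lt;ISO timestamp&gt;" that restic-generate-status.sh writes; the file contents and the threshold here are hypothetical examples:

```shell
#!/bin/bash

# sketch: find outdated backups in a status file of the format
# "<name> <ISO timestamp>" (entries and threshold are hypothetical)
status_file=`mktemp`
cat > "$status_file" <<'EOF'
root 2020-12-24T22:10:01+01:00
var 2020-12-24T22:31:55+01:00
root 2020-12-25T22:10:33+01:00
EOF

backup_ok_time=90000   # 25 hours, in seconds
time_now=`date '+%s'`

outdated=""
# one entry per backup target, duplicates collapsed
targets=`cut -f 1 -d ' ' "$status_file" | sort -u`
for target in $targets;
do
    # only the latest entry for this target counts
    last=`grep "^$target " "$status_file" | cut -f 2 -d ' ' | sort | tail -n 1`
    time_then=`date --date="$last" '+%s'`
    time_diff=`expr "$time_now" - "$time_then"`
    if [ "$time_diff" -ge "$backup_ok_time" ];
    then
        outdated="$outdated$target "
    fi
done

echo "outdated backups: $outdated"
rm -f "$status_file"
```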
Ansible
As mentioned before, I’m using Ansible to roll out the backup installation. One nice feature of this Playbook is that I can just include the role for every laptop or Raspberry Pi, and just have to change the configuration file. The relevant parts of the Playbook:
The above list does not include the scripts I’m using to sync files to the NAS, or to external hosts. I will explain the above scripts later in this post. The scripts live in /root, because they contain sensitive data, like the password for the backup. Make sure that your laptop/desktop disk is encrypted, otherwise anyone with access to the disk can extract the backup password from there.
Copy the cron job file:
- name: Copy restic-backup cron job to server
  copy:
    src: '{{ playbook_dir }}/files/restic-backup.cron'
    dest: '/etc/cron.d/restic-backup'
    owner: "root"
    group: "root"
    mode: "0600"
The file credentials/backup-ts-restic.txt specifies a name for the backup directory, which is usually the date when this backup was first used, like 2020-01-01. But it can be any string, and it’s not a secret. This is mainly a relic from when I was using Duplicity, where once in a while I started with a fresh backup (not just a new full backup). In this case I changed the timestamp to a new directory, and wiped the old directory.
backup_name_ts: move the backup name into a variable for Ansible
backup_path: full path where the backup is stored
backup_status_file: the status file for the backup, at the end of the backup the current status for each backup is added here
backup_ok_time: time in seconds how long a backup is “ok”, any backup older than that is considered outdated and will report an error in the monitoring
backups: a list defining, among other things, how many snapshots Restic will keep, how long they are kept, and which files are to be excluded
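A hypothetical configuration for one host might look like this; the keys name, backup and destination are the ones the script templates below reference, everything else is an illustrative assumption:

```yaml
backup_name_ts: "2020-01-01"
backup_path: "/backup/{{ backup_name_ts }}"
backup_status_file: "/root/restic-status.log"
backup_ok_time: 90000

backups:
  - name: "etc"
    backup: "/etc"
  - name: "home"
    backup: "/home"
    destination: "/backup2/home"
```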
The Backup Scripts
The scripts do quite extensive logging: the backup is meant to work without attention. If the monitoring or the cron job reports a problem it’s useful to have logfiles. Older logfiles are removed after a while.
This “version” of the scripts is for our laptops, and mounts an external disk for the backups. The scripts are simplified - in the original version they also create data for Telegraf to be included in Grafana.
restic-backup.sh
This is the main backup script:
Mount the external disk
Set and verify a couple variables (details filled in by Ansible during deployment)
Run a backup for each set of backups (restic backup)
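The core of step 3 is a plain restic backup invocation, roughly like this (a sketch using the same variables as the scripts; the exclude file name is a hypothetical example):

```shell
$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" \
    backup "/etc" --exclude-file "/root/restic-excludes-etc.txt"
```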
#!/bin/bash

#
# list all locks for the backups
#
# usually there should be no locks, but if a backup repository
# is held locked, this script will show them

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup || exit 0
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

holding_locks=0

{% for backup in backups %}
#######################################################################
# Backup: {{ backup.name }}
# Directory: {{ backup.backup }}

if [ -z "{{ backup.destination|default("") }}" ];
then
    # use default backup destination
    backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
    backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
    backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
else
    # use backup destination specified for this backup
    backup_path_logs="{{ backup.destination|default("") }}/logs"
    backup_path_backup="{{ backup.destination|default("") }}/backup"
    backup_path_cache="{{ backup.destination|default("") }}/cache"
fi

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Internal error!"
    exit 1
fi

if [ ! -d "$backup_path_backup" ];
then
    echo "Backup not present! (list-locks): $backup_path_backup"
    exit 1
fi

echo "log directory (list-locks): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (list-locks): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (list-locks): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

# only run if the directory is initialized
if [ -f "$backup_path_backup/config" ];
then
    locks=`$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --no-lock list locks | grep -v "opened successfully" | grep -v "created new cache in" ; /bin/true`
    if [ -n "$locks" ];
    then
        echo "{{ backup.name }} is holding locks:"
        echo "$locks"
        holding_locks=1
    fi
fi

# end backup: {{ backup.name }}
#######################################################################
{% endfor %}

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e

if [ "$holding_locks" -eq 0 ];
then
    exit 0
else
    exit 1
fi
restic-release-locks.sh
When a repository lock is left behind, it must be cleaned up before another operation can take place on the repository.
This script checks whether any restic process is running, and if not, it removes all existing locks.
#!/bin/bash

#
# release all locks for the backups
#
# this script will check if no backup tool is running,
# and then release old locks

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

running=`ps auxwf | grep restic | grep -v grep`
if [ -n "$running" ];
then
    echo "There is a 'restic' instance running!"
    echo "Refuse to unlock."
    exit 1
fi

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup || exit 0
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

holding_locks=0

{% for backup in backups %}
#######################################################################
# Backup: {{ backup.name }}
# Directory: {{ backup.backup }}

if [ -z "{{ backup.destination|default("") }}" ];
then
    # use default backup destination
    backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
    backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
    backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
else
    # use backup destination specified for this backup
    backup_path_logs="{{ backup.destination|default("") }}/logs"
    backup_path_backup="{{ backup.destination|default("") }}/backup"
    backup_path_cache="{{ backup.destination|default("") }}/cache"
fi

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Internal error!"
    exit 1
fi

if [ ! -d "$backup_path_backup" ];
then
    echo "Backup not present! (release-locks): $backup_path_backup"
    exit 1
fi

echo "log directory (release-locks): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (release-locks): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (release-locks): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

# only run if the directory is initialized
if [ -f "$backup_path_backup/config" ];
then
    locks=`$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --no-lock list locks | grep -v "opened successfully" | grep -v "created new cache in" ; /bin/true`
    if [ -n "$locks" ];
    then
        echo "{{ backup.name }} is holding locks ..."
        $restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --no-lock unlock
    fi
fi

# end backup: {{ backup.name }}
#######################################################################
{% endfor %}

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e
restic-stats.sh
This script shows very extensive stats about each backup and snapshot. Example:
Snapshot: 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9
Stats for 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9 in restore-size mode:
Total File Count: 3850
Total Size: 6.849 MiB
Stats for 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9 in files-by-contents mode:
Total File Count: 1799
Total Size: 6.728 MiB
Stats for 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9 in blobs-per-file mode:
Total Blob Count: 1798
Total File Count: 1798
Total Size: 6.728 MiB
Stats for 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9 in raw-data mode:
Total Blob Count: 2175
Total Size: 8.052 MiB
This script takes a long time to run, as it has to scan all backups for details.
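The numbers above come from restic's stats command, run once per mode; a sketch using the same variables as the scripts, with the snapshot ID from the example:

```shell
$restic --repo "$backup_path_backup" stats --mode restore-size 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9
```

The other modes shown above (files-by-contents, blobs-per-file, raw-data) are selected the same way via --mode.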
A separate status script scans the status file ($backup_status_file: /root/restic-status.log), extracts all backup entries, and finds the latest timestamp for each entry. This is used to a) show the backup status and b) integrate directly into a monitoring system. The output follows the Icinga2 Plugin API.
#!/bin/bash

#
# list status for all backups
#
# this file operates solely on the backup status file
# it is not necessary to actually parse the backup status on the backup disk
# this avoids mounting any backup
# the script 'restic-generate-status.sh' will generate an updated status

time_now=`date '+%s'`

if [ ! -f "{{ backup_status_file }}" ];
then
    echo "Backup status file not found!"
    exit 3
fi

# get list of all backup targets in status file
targets=`cat "{{ backup_status_file }}" | cut -f 1 -d ' ' | sort -u`
#echo "$targets"

backup_ok=0
backup_fail=0
status_text=""

for target in $targets;
do
    time_backup1=`cat "{{ backup_status_file }}" | grep "^$target" | cut -f 2 -d ' ' | sort -u | sort | tail -n 1`
    time_backup2=`date --date="$time_backup1" '+%s'`
    time_diff=`expr "$time_now" - "$time_backup2"`
    time_left=`expr "{{ backup_ok_time }}" - "$time_diff"`
    if [ "$time_diff" -ge "{{ backup_ok_time }}" ];
    then
        status_add="Backup fail: $target ($time_diff seconds old)"
        backup_fail=$((backup_fail+1))
    else
        status_add="Backup OK: $target ($time_diff seconds old, $time_left seconds left)"
        backup_ok=$((backup_ok+1))
    fi
    if [ -z "$status_text" ];
    then
        status_text="$status_add"
    else
        status_text="$status_text"$'\n'"$status_add"
    fi
done

if [ "$backup_fail" -gt "0" ];
then
    echo "Backup status: $backup_ok OK, $backup_fail FAIL"
    echo "${status_text}"
    exit 2
else
    echo "Backup status: $backup_ok OK"
    echo "${status_text}"
    exit 0
fi
restic-generate-status.sh
This helper script updates the backup status entry for each backup.
#!/bin/bash

#
# generate an updated backup status
#
# update the status file, so the monitoring can pick up the
# backup status without mounting the backup every single time

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup || exit 0
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

{% for backup in backups %}
#######################################################################
# Backup: {{ backup.name }}
# Directory: {{ backup.backup }}

if [ -z "{{ backup.destination|default("") }}" ];
then
    # use default backup destination
    backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
    backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
    backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
else
    # use backup destination specified for this backup
    backup_path_logs="{{ backup.destination|default("") }}/logs"
    backup_path_backup="{{ backup.destination|default("") }}/backup"
    backup_path_cache="{{ backup.destination|default("") }}/cache"
fi

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Internal error!"
    exit 1
fi

if [ ! -d "$backup_path_backup" ];
then
    echo "Backup not present! (generate-status): $backup_path_backup"
    exit 1
fi

echo "log directory (generate-status): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (generate-status): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (generate-status): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

echo "Generate status for: {{ backup.name }}"

# store backup status
backup_status=`$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --verbose snapshots --json --last | jq -M -r -c -j '"{{ backup.name }} ",.[0].time'`
echo "$backup_status"
echo "$backup_status" >> "{{ backup_status_file }}"
echo ""

# end backup: {{ backup.name }}
#######################################################################
{% endfor %}

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e
restic-restore.sh
Restic offers a “live” restore mode, where the backup can be mounted (using fuse) on a specified mount point, and then all snapshots and file versions are available as subdirectories. This makes it - as example - easy to compare different versions of one file by running “diff” against the files in the snapshot subdirectories.
This script is rarely used; before running it, the mount point and the desired backup need to be added near the top of the file. Then start the script, and check the mount point directory for the backups. Once you are done with the restore, hit “Ctrl+C” to end the restore process.
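Once mounted, restic exposes the snapshots as a directory tree; a sketch of browsing it (the mount point and snapshot timestamp are hypothetical):

```shell
# each snapshot appears as a subdirectory under snapshots/
ls /mnt/restore/snapshots/

# compare a backed-up file version against the live one
diff /mnt/restore/snapshots/2020-12-24T22:10:01+01:00/etc/fstab /etc/fstab
```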
#!/bin/bash

#
# restore from a backup

# configuration:
#
# set the following variables, then remove the "exit 0" line below

# which repository shall be used for restore (specify the name)
# available repositories: {% for backup in backups %}{{ backup.name }} {% endfor %}
repository=""

# where to mount the backup
mount=""

exit 0

#######################################################################
# don't change anything below this line!
#######################################################################

set -e

if [ -z "$repository" ];
then
    echo "Specify a repository!"
    exit 1
fi

if [ -z "$mount" ];
then
    echo "Specify a mount point!"
    exit 1
fi

if [ ! -d "$mount" ];
then
    echo "mount point ($mount) must be an existing directory!"
    exit 1
fi

set +e
mountpoint -q "$mount"
mp=$?
if [ "$mp" = "0" ];
then
    echo "mount point ($mount) is already mounted!"
    exit 1
fi
set -e

# get the real path for the mount point
mount=`realpath $mount`
if [ "$mount" = "/backup" ];
then
    echo "mount point can't be the /backup path!"
    exit 1
fi

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup || exit 0
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

found=0
for repo in {% for backup in backups %}{{ backup.name }} {% endfor %};
do
    if [ "$repo" == "$repository" ];
    then
        found=1
        break
    fi
done;

if [ "$found" != "1" ];
then
    echo "Repository not found!"
    exit 1
fi

backup_path_logs=""
backup_path_backup=""
backup_path_cache=""

{% for backup in backups %}
if [ "$repository" == "{{ backup.name }}" ];
then
    if [ -z "{{ backup.destination|default("") }}" ];
    then
        backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
        backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
        backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
    else
        backup_path_logs="{{ backup.destination|default("") }}/logs"
        backup_path_backup="{{ backup.destination|default("") }}/backup"
        backup_path_cache="{{ backup.destination|default("") }}/cache"
    fi
fi
{% endfor %}

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Repository not found!"
    exit 1
fi

if [ ! -d "$backup_path_backup" ];
then
    echo "Backup not present! (restore): $backup_path_backup"
    exit 1
fi

echo "log directory (restore): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (restore): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (restore): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

echo "Mounting $repository on $mount:"

set +e
/usr/bin/nice -n 19 /usr/bin/ionice -c 3 $restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --verbose mount "$mount" 2>&1 | tee -a "$backup_path_logs/$ts-mount.log"

echo "Restore finished, unmounting backup drive"
sleep 2
umount "$mount" > /dev/null 2>&1

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e
restic-check-read-data.sh
The check command, which is run as part of some of the scripts, verifies the repository structure; however, it does not read all the data. In order to verify that each and every data block is available and can be read, the check --read-data command must be used.
This script runs this step on all backups. This is a lengthy and expensive process!
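The full verification, and a cheaper variant that reads only a subset of the data per run, look like this (a sketch using the same variables as the scripts):

```shell
# read and verify every data blob in the repository
$restic --repo "$backup_path_backup" check --read-data

# alternatively: verify one fifth of the data per run, spread over five runs
$restic --repo "$backup_path_backup" check --read-data-subset=1/5
```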
#!/bin/bash

#
# check the integrity of the archive

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup || exit 0
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

{% for backup in backups %}
{% if backup.name == "home-ads" %}
#######################################################################
# Backup: {{ backup.name }}
# Directory: {{ backup.backup }}

if [ -z "{{ backup.destination|default("") }}" ];
then
    # use default backup destination
    backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
    backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
    backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
else
    # use backup destination specified for this backup
    backup_path_logs="{{ backup.destination|default("") }}/logs"
    backup_path_backup="{{ backup.destination|default("") }}/backup"
    backup_path_cache="{{ backup.destination|default("") }}/cache"
fi

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Internal error!"
    exit 1
fi

echo "log directory (check-read-data): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (check-read-data): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (check-read-data): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

locks=`$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --no-lock list locks | grep -v "opened successfully" | grep -v "created new cache in" ; /bin/true`
if [ -n "$locks" ];
then
    echo "locks:"
    echo "$locks"
    echo "Backup '{{ backup.name }}' is locked!" 2>&1 | tee -a "$backup_path_logs/$ts-error.log"
    exit 1
fi

echo "Repository: {{ backup.name }}"
echo " Directory: {{ backup.backup }}"

$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --verbose check --read-data

echo ""
echo ""

# end backup: {{ backup.name }}
#######################################################################
{% endif %}
{% endfor %}

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e
restic-rebuild-index.sh
Restic keeps an index for each backup, which is used to speed up access to the repository. If this index is out of sync with the actual backup, it needs to be rebuilt.
This is usually the case for a single backup, not for all backups. The script to re-generate the index therefore needs the backup name to be set near the top of the file.
#!/bin/bash

#
# rebuild the index for a backup

# configuration:
#
# set the following variables, then remove the "exit 0" line below

# which repository shall be used for rebuilding the index (specify the name)
# available repositories: {% for backup in backups %}{{ backup.name }} {% endfor %}
repository=""

exit 0

#######################################################################
# don't change anything below this line!
#######################################################################

set -e

if [ -z "$repository" ];
then
    echo "Specify a repository!"
    exit 1
fi

ts=`date +'%Y-%m-%d_%H%M%S'`
restic=/usr/bin/restic

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" != "0" ];
then
    mount /backup
    mr=$?
    if [ "$mr" != "0" ];
    then
        echo "Can't mount backup!"
        exit 1
    fi
fi
set -e

# password used to encrypt all backups
export RESTIC_PASSWORD="{{ restic_password }}"

found=0
for repo in {% for backup in backups %}{{ backup.name }} {% endfor %};
do
    if [ "$repo" == "$repository" ];
    then
        found=1
        break
    fi
done;

if [ "$found" != "1" ];
then
    echo "Repository not found!"
    exit 1
fi

backup_path_logs=""
backup_path_backup=""
backup_path_cache=""

{% for backup in backups %}
if [ "$repository" == "{{ backup.name }}" ];
then
    if [ -z "{{ backup.destination|default("") }}" ];
    then
        backup_path_logs="{{ backup_path }}/logs/{{ backup.name }}"
        backup_path_backup="{{ backup_path }}/backup/{{ backup.name }}"
        backup_path_cache="{{ backup_path }}/cache/{{ backup.name }}"
    else
        backup_path_logs="{{ backup.destination|default("") }}/logs"
        backup_path_backup="{{ backup.destination|default("") }}/backup"
        backup_path_cache="{{ backup.destination|default("") }}/cache"
    fi
fi
{% endfor %}

# sanity checks
if [ -z "$backup_path_logs" -o -z "$backup_path_backup" -o -z "$backup_path_cache" ];
then
    echo "Repository not found!"
    exit 1
fi

if [ ! -d "$backup_path_backup" ];
then
    echo "Backup not present! (rebuild-index): $backup_path_backup"
    exit 1
fi

echo "log directory (rebuild-index): $backup_path_logs" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "backup directory (rebuild-index): $backup_path_backup" >> "$backup_path_logs/$ts-variables.log" 2>&1
echo "cache directory (rebuild-index): $backup_path_cache" >> "$backup_path_logs/$ts-variables.log" 2>&1

export TMPDIR="$backup_path_cache"

set +e
$restic --repo "$backup_path_backup" --cache-dir "$backup_path_cache" --verbose rebuild-index --cleanup-cache=true

set +e
mountpoint -q /backup
mp=$?
if [ "$mp" = "0" ];
then
    umount /backup
fi
set -e
restic-forget-snapshot.sh
Occasionally you want to remove a single snapshot from a backup. The following script does this; you need to add the backup name and the snapshot ID (from list-snapshots.sh) near the top of the file. Multiple snapshots can be specified, separated by spaces.
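At its core this is restic's forget command with explicit snapshot IDs, followed by a prune to actually free the data (a sketch using the same variables as the other scripts; the snapshot ID is the one from the stats example above):

```shell
# drop the listed snapshot (metadata only)
$restic --repo "$backup_path_backup" forget 1011e065e3727c284fd3bf8b10a82f719a49136ed822ff31c0e9c4a8bdcd3fe9

# remove data which is no longer referenced by any snapshot
$restic --repo "$backup_path_backup" prune
```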