A while ago I switched backups from "Duplicity" to "Restic". About time: I had been using Duplicity for many years (I think I started around 2010, long before "Restic" became available) and it served me well. But recently I ran into more and more issues, especially as the archives grew larger and larger. There is an eleven-year-old open bug in the Duplicity bug tracker which describes a showstopper for backing up larger archives, and it doesn't look like it will be solved anytime soon. So it was time for something new.
Since I roll out my backups with Ansible, it was relatively easy to create a set of scripts for Restic which use almost the same infrastructure as the old Duplicity backups. That works as expected on all our laptops. But the Raspberry Pi, which does the fileserver backups, seemed to have a problem: backups took way longer than before, jumping from 30-60 minutes (depending on the amount of changes) to a constant 10 hours or so.
After some investigation (meaning: --verbose --verbose --verbose debugging), it turned out that Restic identifies most of the files as new, even though they did not change at all. Some background information: the Raspberry Pi mounts the QNAP fileserver using the SMB3 protocol. The "mount -t cifs" command uses the "serverino" option, but apparently that is not enough to provide stable inode numbers. And if the inode of a file changes, Restic assumes it is a new file.
On the bright side, because the content of the files does not change, deduplication still works and no additional content is added to the backup; the size of the backup does not increase. Still, Restic fetches all the data from the server, and that takes a long time.
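If the inode instability cannot be fixed on the mount side, Restic's backup command accepts an --ignore-inode flag which excludes the inode number from the change-detection check. A minimal sketch (the repository and source paths here are made up, adjust them to your setup; this is a possible workaround, not necessarily the one the full post ends up with):

```shell
# --ignore-inode: do not treat a changed inode number as a changed
# file, so unchanged files on the CIFS mount are not re-read.
restic -r /mnt/backup/restic-repo backup --ignore-inode /mnt/qnap
```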
Continue reading "Restic upgrade on Debian Buster"
Sometimes old computers are not updated quickly enough, or are just kept running ...
root@system ~ # lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.10
And so it happens that the support for Ubuntu 16.10 (codename: yakkety) came to an end, and the packages were removed from the regular Ubuntu servers. Trying to run an upgrade (do-release-upgrade) ended with the following message:
Checking package manager
Can not upgrade
An upgrade from 'yakkety' to 'artful' is not supported with this tool.
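The usual way out (an assumption on my part; the full post may go into more detail) is to point apt at old-releases.ubuntu.com, which still hosts the packages of end-of-life releases, and then retry the upgrade:

```shell
# Rewrite all archive/security mirror hostnames to old-releases,
# where packages for EOL Ubuntu releases are kept.
sed -i -r 's/([a-z]{2}\.)?(archive|security)\.ubuntu\.com/old-releases.ubuntu.com/g' \
    /etc/apt/sources.list
apt-get update
do-release-upgrade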
Continue reading "Upgrade from Ubuntu 16.10 (yakkety) to 17.10 (artful)"
Last year I posted about how to configure "locales" in Debian or Ubuntu, using Ansible. Back then I did not know that there is an Ansible "debconf" module available, and I have no idea how I could have missed it. Anyway, this makes things a bit easier, but not much.
First of all, the module lets you both set and query values. However, because the "locales" package does not use debconf for the list of locales, but stores this list in /etc/locale.gen, things are still unnecessarily complicated. But I managed to get it working without having to use an additional file as a flag to mark that this step was completed before.
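Under the hood, the Ansible "debconf" module drives the same database as debconf-set-selections does on the command line. The default locale, for example, really is a debconf value and can be set like this (only the list of generated locales lives outside debconf, in /etc/locale.gen):

```shell
# Set the default locale in the debconf database; the "locales"
# package picks this up on reconfiguration.
echo "locales locales/default_environment_locale select en_US.UTF-8" \
    | debconf-set-selections
dpkg-reconfigure -f noninteractive locales
```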
Continue reading "Configuring "locales" in Debian and Ubuntu, using Ansible - Reloaded"
After updating Linux packages, a reboot of the host is sometimes required. Debian and Ubuntu provide this information through the presence of a special file: /var/run/reboot-required. Ansible makes it easy to reboot a host, but there are a few aspects which need attention.
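The check itself is just a file test; an Ansible task typically wraps something like this (the helper name is mine, a sketch rather than the playbook from the post):

```shell
#!/bin/sh
# Debian/Ubuntu create this flag file when an installed update
# (e.g. a new kernel) requires a reboot.
needs_reboot() {
    # Optional first argument overrides the flag-file path.
    [ -f "${1:-/var/run/reboot-required}" ]
}

if needs_reboot; then
    echo "Reboot required"
fi
```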
Continue reading "Execute a required reboot, with Ansible (Debian/Ubuntu)"
Missing locale settings will result in error messages like:
ads@ansible-ubuntu-03:~$ perl --version
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en",
LC_ALL = (unset),
LC_PAPER = "de_DE.UTF-8",
LC_ADDRESS = "de_DE.UTF-8",
LC_MONETARY = "de_DE.UTF-8",
LC_NUMERIC = "de_DE.UTF-8",
LC_TELEPHONE = "de_DE.UTF-8",
LC_IDENTIFICATION = "de_DE.UTF-8",
LC_MEASUREMENT = "de_DE.UTF-8",
LC_TIME = "de_DE.UTF-8",
LC_NAME = "de_DE.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
This can be configured, but unfortunately the "locales" package does not store the list of locales in the debconf database. This makes it more complicated to configure the locale settings using Ansible.
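For reference, the manual steps that an Ansible play has to reproduce look roughly like this (the locale names are examples matching the error output above):

```shell
# Uncomment the wanted locales in /etc/locale.gen ...
sed -i 's/^# *\(en_US.UTF-8 UTF-8\)/\1/' /etc/locale.gen
sed -i 's/^# *\(de_DE.UTF-8 UTF-8\)/\1/' /etc/locale.gen
# ... regenerate the locale archive ...
locale-gen
# ... and set the system-wide default.
update-locale LANG=en_US.UTF-8
```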
Continue reading "Configuring "locales" in Debian and Ubuntu, using Ansible"
When using Xen you can start virtual machines using "xm create". However, after rebooting the host machine, the virtual machines are not started automatically. This minor problem is easy to solve. Let's say the configuration for the virtual machine is in /etc/xen/pluto.cfg:
mkdir -p /etc/xen/auto
cd /etc/xen/auto
ln -s /etc/xen/pluto.cfg .
This creates a link which autostarts the virtual machine. The target of the symlink must be an absolute path.
In addition to the symlink, the "pluto.cfg" file must contain the following entries:
on_xend_stop = 'shutdown'
on_xend_start = 'start'
If you are using xen-tools, you can add these two lines to the template in /etc/xen-tools/xm.tmpl. After these changes, "pluto" starts when the host system starts.
New server, S-ATA disks, and the hardware RAID controller is not really supported. So we decided to use a software RAID 1 across the two disks. Here are the steps to create the RAID:
- Boot the Debian Etch installer.
- If the installation comes to "Partition method", use "Manual".
- In the following menu, scroll to your first disk and hit Enter: the partitioner asks whether you want to create an empty partition table. Say "Yes". (Hint: this will erase your existing data, if any.)
- The partitioner returns to the disk overview; scroll down to the line showing "FREE SPACE" and hit Enter.
- Create a partition with the size you need, and note the size and the partition type (primary or logical).
- In the "Partition settings" menu, go to "Use as" and hit enter.
- Change the type to "physical volume for RAID".
- Finish this partition with "Done setting up the partition".
- Create other partitions on the same disk, if you like.
- Now repeat all the steps from the first disk for the second disk.
- After this, you should have at least two disks with the same partition schema, and all partitions (besides swap) should be marked for RAID use.
- Now look at the first menu entry in the partitioner menu, there is a new line: "Configure software RAID". Go into this menu.
- Answer the question whether you want to write the changes with "Yes".
- Now pick "Create MD device".
- Use RAID1 and give the number of active and spare devices (2 and 0 in our case).
- In the following menu, select the same device number on the first and second disk and Continue.
- Repeat this step for each pair of devices until you are done. Then use "Finish" in the Multidisk configuration options.
- You are back in the partitioner menu and now see one or more new partitions named "Software RAID Device". You can use these partitions like any normal partition and continue installing your system.
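Once the installed system boots, the state of the new arrays can be checked from the shell (the md device name is an example, yours may differ):

```shell
# Show all md devices and their (re)sync progress.
cat /proc/mdstat
# Detailed status of the first mirror.
mdadm --detail /dev/md0
```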
Thanks to mastermind from #postgresql-de for all the help.