
Local caching of Ansible Facts

Every time Ansible runs a Playbook, the first step (by default) is gathering facts about the target system:

PLAY [all-systems]

TASK [Gathering Facts]
ok: [host1]
ok: [host2]

This step is implicit: it is not necessary (although possible) to add a fact-gathering task to every Playbook. The module which retrieves all the information is "setup", and by default it tries to gather as much information about the target system as possible. When the "setup" task is added as an extra step in the Playbook, the information about the target system is refreshed and updated:

  tasks:
    - name: Refresh destination information
      setup:

That might be necessary when a Playbook has changed vital system settings.

Gathering the facts is a time-consuming process, and for a short Playbook it is quite possible that this is the longest-running task. And it's repeated every time the Playbook runs.

Ansible provides Cache plugins which can store the gathered facts. If the system facts don't change between Playbook runs, this greatly speeds up the runtime of Playbooks. The facts cache can be stored in JSON files, in a Redis database, in Memcached, and a few other backends. The simplest option, which requires no additional tools, is the "jsonfile" cache. Central backends like Redis or Memcached allow multiple Ansible controller hosts to use the same facts cache, whereas a local cache like "jsonfile" is only available on a single host, and every Ansible controller must build and maintain its own cache.
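
A minimal sketch of such a setup in ansible.cfg could look like this (the cache directory and timeout are just example values); combined with "smart" gathering, facts are only collected when no valid cache entry exists:

  [defaults]
  # gather facts only when no valid cache entry exists
  gathering = smart
  # use the local JSON file cache plugin
  fact_caching = jsonfile
  # directory for the per-host cache files (example path)
  fact_caching_connection = /tmp/ansible_facts_cache
  # consider cached facts stale after 24 hours
  fact_caching_timeout = 86400

The Redis and Memcached backends are configured the same way, just with a different "fact_caching" plugin name and connection string.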

 

Continue reading "Local caching of Ansible Facts"

Restic upgrade on Debian Buster

A while ago I switched backups from "Duplicity" to "Restic". About time: I had been using Duplicity for many years (I think I started around 2010, long before "Restic" became available) and it served me well. But recently I ran into more and more issues, especially with archives getting larger and larger. There is an 11-year-old open bug in the Duplicity bugtracker which describes a showstopper for backing up larger archives, and it doesn't look like it will be solved anytime soon. Therefore it was time for something new.

Since I'm rolling out my backups with Ansible, it was relatively easy to create a set of scripts for Restic which use almost the same infrastructure as the old Duplicity backups. That works as expected on all our laptops. But the Raspberry Pi, which does the fileserver backups, seemed to have a problem: backups took way longer than before, jumping from 30-60 minutes (depending on the amount of changes) to a constant 10 hours or so.

After some investigation (meaning: --verbose --verbose --verbose debugging), it turned out that Restic identifies most of the files as new, even though they did not change at all. Some background information: the Raspberry Pi mounts the QNAP fileserver using the SMB3 protocol. The "mount -t cifs" command uses the "serverino" option, but apparently that is not enough to provide stable inode numbers, and if the inode of a file changes, Restic assumes it is a new file.
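
For context, a CIFS mount along these lines is what is meant (share name, mount point and credentials file are made-up examples):

  # example invocation; share, mount point and credentials file are placeholders
  mount -t cifs //qnap/share /mnt/qnap \
      -o vers=3.0,serverino,credentials=/root/.smb-credentials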

On the bright side, because the content of the files does not change, the deduplication still works, and no additional content is added to the backup: the size of the backup does not increase. Still, Restic fetches all the data from the server, and that takes a long time.
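
If the unstable inode numbers are indeed the cause, Restic's backup command offers an --ignore-inode flag which drops the inode from the change-detection heuristic. A sketch of such an invocation, with repository, password file and backup path as placeholders:

  # placeholders only: repository, password file and backup path
  restic -r /srv/restic-repo --password-file /root/.restic-pass \
      backup --ignore-inode /mnt/qnap

This only changes how modified files are detected; the deduplication of unchanged content is not affected.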

 

Continue reading "Restic upgrade on Debian Buster"