Ansible and LXC Containers

Posted by ads' corner on Saturday, 2016-08-13
Posted in [Ansible][Linux]

LXC is one of many available containerization solutions for Linux. Ansible has basic LXC support integrated, which is fine if you do not intend to do much inside the container (aka: fire & forget). My goal, however, is to start a full-flavored container and then manage this container with Ansible as well. That’s where things get a bit tricky, and looking around I couldn’t find much documentation on how to do this.

This posting describes my approach.

I had several problems to solve:

  • A container usually has a private IP address on the hypervisor host
  • Ansible needs to know on which hypervisor the container must be started
  • Ansible can’t connect to the container before it is started

Define hypervisors and containers

To solve the first problem, I grouped my hypervisor hosts and my container hosts into two groups in my inventory (hosts) file:

[hv1]
192.168.0.187 hostname=ansible-ubuntu-03

[hv2]
192.168.0.188 hostname=ansible-ubuntu-05

# hypervisor group
[hypervisors:children]
hv1
hv2

[vm1]
10.0.3.10

# VM group
[vms:children]
vm1
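To double-check which hosts end up in which group, the standard ansible CLI can list the members of a group (assuming the inventory file is named hosts):

```shell
# list all members of the vms and hypervisors groups
ansible vms -i hosts --list-hosts
ansible hypervisors -i hosts --list-hosts
```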

Add hypervisor information

Every container needs additional information about the hypervisor it runs on:

[vm1:vars]
vm_physical_host=192.168.0.187
vm_physical_user=...
vm_ip=10.0.3.10
vm_name=vm1
vm_user=ubuntu
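Alternatively, the same variables could live in a group_vars file instead of an inline vars section — a sketch, assuming the common group_vars/ directory layout next to the inventory file (the file name group_vars/vm1.yml is an assumption):

```yaml
# group_vars/vm1.yml — equivalent to the [vm1:vars] section above
vm_physical_host: 192.168.0.187
vm_physical_user: "..."   # your actual login on the hypervisor, as above
vm_ip: 10.0.3.10
vm_name: vm1
vm_user: ubuntu
```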

Start container

With this information in place, Ansible can connect to each hypervisor, loop over its containers, and start them:

- hosts: hypervisors
  become: yes
  tasks:

    - name: Check OS (Hypervisor)
      fail: msg="Not an Ubuntu OS!"
      when: ansible_distribution != 'Ubuntu'

    - name: Loop over VMs
      include: deploy-basic-vm.yml vm_ip={{ hostvars[item]['vm_ip'] }} vm_name={{ hostvars[item]['vm_name'] }} vm_physical_user={{ hostvars[item]['vm_physical_user'] }}
      with_items: "{{ groups['vms'] }}"
      when: hostvars[item]['vm_physical_host'] == inventory_hostname

This play will loop over every defined container in the vms group and run the included file, which in turn will set up the container:

---

- name: IP-address for container
  lineinfile: dest=/etc/lxc/dnsmasq-hosts.conf regexp='^{{ vm_name }},{{ vm_ip }}$' line='{{ vm_name }},{{ vm_ip }}' state=present
  notify:
    - lxc-net restart
    - dnsmasq reload

- meta: flush_handlers

- name: Create VM container
  lxc_container:
    name: '{{ vm_name }}'
    container_log: true
    container_log_level: INFO
    template: ubuntu
    template_options: --release xenial
    backing_store: dir
    container_config:
      - "lxc.start.auto = 1"
      - "lxc.start.delay = 5"
      - "lxc.network.ipv4 = {{ vm_ip }}"
    state: started
  register: create_vm

- name: create /root/.ssh/known_hosts
  file: path=/root/.ssh/known_hosts state=touch owner=root group=root mode=0600
  changed_when: False

- name: create /home/<user>/.ssh/known_hosts
  file: path=/home/{{ vm_physical_user }}/.ssh/known_hosts state=touch owner={{ vm_physical_user }} group={{ vm_physical_user }} mode=0600
  changed_when: False

- name: check if ssh keys exist for user
  stat: path=/home/{{ vm_physical_user }}/.ssh/id_rsa.pub
  register: ssh_key_rsa_exists

- name: generate ssh keys for <user>
  shell: /usr/bin/ssh-keygen -t rsa -N '' -f /home/{{ vm_physical_user }}/.ssh/id_rsa
  when: not ssh_key_rsa_exists.stat.exists
  become: no

- name: wait for VM
  wait_for: host={{ vm_ip }} port=22 state=started delay=5 timeout=60
  when: create_vm.changed

- name: Add VM ssh key to host - root
  shell: /usr/bin/ssh-keyscan -H {{ vm_ip }} >> /root/.ssh/known_hosts
  when: create_vm.changed

- name: Add VM ssh key to host - user
  shell: /usr/bin/ssh-keyscan -H {{ vm_ip }} >> /home/{{ vm_physical_user }}/.ssh/known_hosts
  when: create_vm.changed

- name: Create .ssh directory for root in VM
  file: path=/var/lib/lxc/{{ vm_name }}/rootfs/root/.ssh state=directory owner=root group=root mode=0700

- name: Create .ssh directory for ubuntu in VM
  file: path=/var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh state=directory owner=1000 group=1000 mode=0700

- name: create authorized_keys files in VM for root
  file: path=/var/lib/lxc/{{ vm_name }}/rootfs/root/.ssh/authorized_keys state=touch owner=root group=root mode=0600
  changed_when: False

- name: create authorized_keys files in VM for ubuntu
  file: path=/var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh/authorized_keys state=touch owner=1000 group=1000 mode=0600
  changed_when: False

- name: Add host ssh key to VM - root -> root
  shell: cat /root/.ssh/id_rsa.pub >> /var/lib/lxc/{{ vm_name }}/rootfs/root/.ssh/authorized_keys
  when: create_vm.changed

- name: Add host ssh key to VM - root -> ubuntu
  shell: cat /root/.ssh/id_rsa.pub >> /var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh/authorized_keys
  when: create_vm.changed

- name: Add host ssh key to VM - <user> -> ubuntu
  shell: cat /home/{{ vm_physical_user }}/.ssh/id_rsa.pub >> /var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh/authorized_keys
  when: create_vm.changed

- name: check if authorized_keys file exist for root
  stat: path=/root/.ssh/authorized_keys
  register: ssh_authorized_keys_exists_root

- name: check if authorized_keys file exist for user
  stat: path=/home/{{ vm_physical_user }}/.ssh/authorized_keys
  register: ssh_authorized_keys_exists_user

- name: Add authorized_keys key to VM - root -> ubuntu
  shell: cat /root/.ssh/authorized_keys >> /var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh/authorized_keys
  when: ssh_authorized_keys_exists_root.stat.exists and create_vm.changed

- name: Add authorized_keys key to VM - user -> ubuntu
  shell: cat /home/{{ vm_physical_user }}/.ssh/authorized_keys >> /var/lib/lxc/{{ vm_name }}/rootfs/home/ubuntu/.ssh/authorized_keys
  when: ssh_authorized_keys_exists_user.stat.exists and create_vm.changed

- name: Add nopasswd to sudoers file
  lineinfile: "dest=/var/lib/lxc/{{ vm_name }}/rootfs/etc/sudoers state=present regexp='^ubuntu' line='ubuntu ALL=(ALL) NOPASSWD: ALL'"
  when: create_vm.changed

- name: Install Python in VM
  shell: chroot /var/lib/lxc/{{ vm_name }}/rootfs/ apt-get --yes install python
  when: create_vm.changed

And the handler file:

- name: lxc-net restart
  service: name=lxc-net state=restarted
  delegate_to: '{{ vm_physical_host }}'

- name: dnsmasq reload
  command: killall -s SIGHUP dnsmasq
  delegate_to: '{{ vm_physical_host }}'

After this play, the container is created and started, Python is installed in it, and the default ubuntu user is set up to run sudo without a password. SSH keys for the host (hypervisor) are also created and exchanged with the container. Creating the ssh keys could probably be moved into the hypervisor setup, but it’s included here for completeness.

Setup connection information for the container

In order to connect Ansible to the container, it needs to use the hypervisor as a proxy:

[vm1:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o User={{ vm_user }} -o ProxyCommand="ssh -o StrictHostKeyChecking=yes -o UserKnownHostsFile=known_hosts -W {{ vm_ip }}:22 {{ vm_physical_user }}@{{ vm_physical_host }}"'

The ssh arguments use key authentication to connect to the hypervisor host (the inner ProxyCommand), and ignore unknown host keys when connecting to the container (otherwise you would need to exchange keys between your container and the Ansible host).
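To see what this amounts to, the ProxyCommand setting above expands roughly to the following manual ssh invocation (values filled in from the vm1 example; the hypervisor user is left as a placeholder, since it is elided in the inventory):

```shell
ssh -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    -o ProxyCommand="ssh -o StrictHostKeyChecking=yes -o UserKnownHostsFile=known_hosts -W 10.0.3.10:22 <vm_physical_user>@192.168.0.187" \
    ubuntu@10.0.3.10
```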

Now Ansible can connect to the container, and you can deploy your regular plays there as well.
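From here on, containers are just regular inventory hosts. A minimal follow-up play could look like this (the htop package is only an arbitrary example):

```yaml
- hosts: vms
  become: yes
  tasks:

    - name: Check OS (VM)
      fail: msg="Not an Ubuntu OS!"
      when: ansible_distribution != 'Ubuntu'

    - name: Install an example package
      apt: name=htop state=present
```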


Categories: [Ansible] [Linux]
Tags: [Ansible] [Automation] [Lxc] [Ssh] [Sudo] [Ubuntu]