Create VMs on KVM with Ansible

So the Ansible virt module doesn’t have a clone option, and its guest-creation abilities are a little limited. Because of this we have to use the shell or command modules and make them idempotent ourselves. This is a simple example, and the dictionary can be expanded for a lot more customization. There is a way to use libvirt as a dynamic inventory and set group and host vars on guests, but I’ll cover that in a different post.

- name: create VMs
  hosts: kvm
  become: true
  vars_files:
    - vms.yml

  tasks:
    - name: get VM disks
      command: "ls {{ vm_location }}"
      register: disks
      changed_when: "disks.rc != 0"

    - name: create disk
      command: >
               virt-builder --format qcow2 centos-7.4
               -o {{ vm_location }}/{{ item.key }}.{{ item.value.file_type }}
               --root-password password:{{ root_pass }}
      when: item.key not in disks.stdout
      with_dict: "{{ guests }}"

    - name: get list of VMs
      virt:
        command: "list_vms"
      register: vms

    - name: create vm
      command: >
                virt-install --import --name {{ item.key }}
                --memory {{ item.value.mem }} --vcpus {{ item.value.cpus }}
                --disk {{ vm_location }}/{{ item.key }}.{{ item.value.file_type }}
                --noautoconsole --os-variant {{ item.value.os_type }}
      when: item.key not in vms.list_vms
      with_dict: "{{ guests }}"

    - name: start vm
      virt:
        name: "{{ item.key }}"
        state: running
      with_dict: "{{ guests }}"

So we do a couple of checks: one to make sure the disk isn’t already in the directory, and another to make sure the VM isn’t already defined. The dictionary is in the referenced vars_file:

vm_location: "/data/VMs"
root_pass: "password"

guests:
  # guest names are examples; add one entry per VM
  test:
    mem: 512
    cpus: 1
    os_type: rhel7
    file_type: qcow2
  test2:
    mem: 512
    cpus: 1
    os_type: rhel7
    file_type: qcow2

Obviously you wouldn’t put the password in plain text; you’d use ansible-vault, a vars prompt, or a survey in Tower.

And here’s our output. I had already created a system called test, so the play skips over it, giving us idempotence.
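One caveat on the `not in` checks above: matching item.key against raw stdout is a substring test, so a VM named test would also match a disk named test2.qcow2. Comparing against stdout_lines (or full filenames) is safer. In plain Python terms:

```python
# `when: item.key not in disks.stdout` is string containment, not an exact match.
stdout = "test.qcow2\ntest2.qcow2"   # what `ls` might return
stdout_lines = stdout.split("\n")    # what Ansible exposes as stdout_lines

print("test" in stdout)              # True  -- substring hit, matches test2.qcow2 too
print("test.qcow2" in stdout_lines)  # True  -- exact element match
print("test3.qcow2" in stdout_lines) # False
```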


Automated Ansible testing with Molecule

Infrastructure testing presents some challenges simply because you are building machines, not just compiling code. To test Ansible, I used to run playbooks with --syntax-check and --list-tasks. For roles I would run local tests with Vagrant using the tests/ directory in the role, having Ansible test itself with the uri module or other checks. This is OK for simple checks but can be cumbersome and time consuming, and it doesn’t catch everything.

Molecule has made this very simple. Install Molecule with pip install molecule, then create your role with molecule init --role-name role --driver-name driver. The current drivers are Docker, OpenStack, Vagrant, EC2, GCE, LXC, LXD, and a few others, with Docker being the default. It’s not absolutely required to build the role with molecule init, but it adds the molecule directory with default testing and the .yamllint file.

Rather than rewrite their documentation, I’ll just go through what I have set up to test my roles. Currently I’m using the Vagrant driver with a CentOS 7 box (I don’t do much with non-RHEL distros) and the libvirt provider for Vagrant. So here’s what my molecule/molecule.yml file looks like:

dependency:
  name: galaxy
driver:
  name: vagrant
  provider:
    name: libvirt
lint:
  name: yamllint
platforms:
  - name: bind
    box: centos/7
provisioner:
  name: ansible
  lint:
    name: ansible-lint
scenario:
  name: default
verifier:
  name: testinfra
  options:
    sudo: True
  lint:
    name: flake8

Molecule also uses testinfra to run automated tests against the Vagrant box. Here’s a simple example to check a firewalld role:

import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')

def test_firewall_installed(host):
    package = host.package("firewalld")
    assert package.is_installed

def test_firewall_service(host):
    service = host.service("firewalld")
    assert service.is_running
    assert service.is_enabled

def test_firewall_config(host):
    config = host.file("/etc/firewalld/zones/public.xml")
    assert config.exists
    assert config.contains('<service name="http"/>')
    assert config.contains('<service name="https"/>')
    assert config.contains('<service name="ssh"/>')

Testinfra will check that the package firewalld is installed, the service is running and enabled, and that the /etc/firewalld/zones/public.xml file contains the services defined.

To kick off the full test sequence, run molecule test.

Here’s a video running this setup against a firewalld role:

AWX is the official open source Tower

After what I think is one of the most anticipated open source releases ever, Red Hat finally released the source for Ansible Tower. Tower is an awesome tool, but the cost is prohibitive for a lot of places (though still cheaper than the competitors). I had a trial license at home, but that limits you to 10 hosts in your inventory. Because of this, I had been using Jenkins with the Ansible plugin. However, Tower’s big advantages are inventory management, secrets management, and job templates for playbooks. Workflow templates are awesome and give you a nice GUI to draw out the workflow for your playbooks.

Red Hat released the source for Tower as AWX (the original name for the web interface). It’s like the Fedora of the Ansible world. It’s the bleeding edge upstream for Tower. Here’s a quote from their FAQ:


AWX is designed to be a frequently released, fast-moving project where all new development happens.

Ansible Tower is produced by taking selected releases of AWX, hardening them for long-term supportability, and making them available to customers as the Ansible Tower offering.

This is a tested and trusted method of software development for Red Hat, which follows a similar model to Fedora and Red Hat Enterprise Linux.

AWX is installed on Docker or OpenShift, and there are instructions for both. Currently I’m running it on a Fedora 26 server in my OpenStack setup, and it took a little time to install (~20 minutes). Once it’s done, though, the web UI is pretty fast. And the best part is the potato.


IP Address From QEMU Guest Agent

On a KVM host, it’s fairly easy to get a guest’s IP when the guest sits on a host-managed internal network:

[jhooks@kvm2 ~]$ sudo virsh domifaddr Tower
 Name       MAC address          Protocol     Address
 vnet0      52:54:00:e7:01:55    ipv4

However, if you have the guest on a full bridge or macvtap interface, you won’t see anything. To get that information you need to add the QEMU guest agent.

On CentOS the package is simply called qemu-guest-agent; install it inside the guest.
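For the agent to work, the guest’s libvirt domain XML also needs a virtio-serial channel for it to talk over; virt-install and recent libvirt versions usually add one that looks like this:

```xml
<channel type='unix'>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
```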

With this installed you can now query the guest’s address by passing --source agent:

[jhooks@kvm2 ~]$ sudo virsh domifaddr Tower --source agent
Name       MAC address          Protocol     Address
lo         00:00:00:00:00:00    ipv4
-          -                    ipv6         ::1/128
eth0       52:54:00:fc:77:18    ipv4
-          -                    ipv6         fe80::5054:ff:fefc:7718/64
vnet0      52:54:00:e7:01:55    ipv4

Ansible Random String Generator

To minimize configuration drift we can’t directly log into a system with admin privileges. Root is disabled through SSH and console on both servers and workstations. We can’t even directly log into a prod server at all. Every admin-level task is done through Tower. Because of this, having a root password that’s usable and memorable is essentially pointless (and insecure anyway).

I created an Ansible module to generate a random string based on a size parameter. This way, any time the provisioning playbook is run against a system, it will generate a new random password for the root user. If for some reason we need to triage a system, we can grant local admin access through Tower for that window and then revoke it later, so it’s audited.

Here’s the important bits of the module:

import string
import random

from ansible.module_utils.basic import AnsibleModule

def random_generator(size, chars=string.ascii_letters + string.digits + '$#&@'):
    return ''.join(random.choice(chars) for _ in range(size))

def main():
    module = AnsibleModule(
        argument_spec=dict(
            size=dict(required=True, type='int')
        )
    )
    size = module.params.get('size')

    try:
        success = True
        ret_msg = str(random_generator(size))
    except KeyError:
        success = False
        ret_msg = "Error"

    if success:
        module.exit_json(msg=ret_msg, changed=True)
    else:
        module.fail_json(msg=ret_msg)

if __name__ == "__main__":
    main()
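One note on the generator: random.choice is not a cryptographically secure source. Since this produces root passwords, Python’s secrets module (3.6+) is a drop-in replacement worth considering; a minimal sketch:

```python
import secrets
import string

def random_generator(size, chars=string.ascii_letters + string.digits + '$#&@'):
    # secrets.choice draws from the OS CSPRNG instead of the default PRNG.
    return ''.join(secrets.choice(chars) for _ in range(size))

print(len(random_generator(40)))  # 40
```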

Here’s an example playbook. Obviously in prod you wouldn’t display the string; this is just an example.

- name: generate password
  random_string:  # assuming the custom module above is saved as random_string
    size: 40
  register: pass

- name: print password [not in production]
  debug:
    var: pass

- name: set root password
  user:
    name: root
    password: "{{ pass.msg | password_hash('sha512') }}"

And here’s the output from a playbook run:


Another option is to use Jinja filters so you don’t need to create a module at all (I wrote one for practice). This way you just pass something into the password_hash filter. Here’s an example:

- name: generate random password
  user:
    name: root
    password: "{{ ansible_fqdn | password_hash('sha512') | password_hash('sha512') }}"

This takes the value of ansible_fqdn and passes it into the sha512 password_hash filter, which gives you a random 107-character string. That string is then passed through the sha512 filter again to produce the hash that actually gets set.
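To make the chaining concrete, here’s the same idea sketched with hashlib’s sha512 as a stand-in; note the real password_hash filter produces a salted crypt-style hash (hence the longer length and a different result each run), while plain sha512 here is unsalted and deterministic:

```python
import hashlib

fqdn = "host.example.com"  # stand-in for the ansible_fqdn fact

# First pass: derive a long, opaque string from the FQDN to use as the password.
derived_password = hashlib.sha512(fqdn.encode()).hexdigest()

# Second pass: hash the derived password again; this is what would be stored.
stored_hash = hashlib.sha512(derived_password.encode()).hexdigest()

print(len(derived_password), len(stored_hash))  # 128 128
```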

Minimizing Conditionals in Ansible

Ansible gives you conditionals to use when you want to check whether something meets certain criteria. For example:

- name: install apache
  yum:
    name: httpd
    state: installed
  when: ansible_distribution == 'CentOS'

If you have a lot of these, you can separate tasks into distro-specific .yml files and put the conditional on the include:

- include: centos.yml
  when: ansible_distribution == 'CentOS'

However, sometimes a cleaner way (in my opinion) is to use a variable to minimize the use of conditional statements. Using the same Apache example:


- name: install apache
  package:
    name: "{{ apache_package }}"
    state: installed

And in your vars:

dist_dict:
  # keys match the values ansible_distribution reports
  CentOS:
    apache: 'httpd'
  RedHat:
    apache: 'httpd'
  Fedora:
    apache: 'httpd'
  Ubuntu:
    apache: 'apache2'

apache_package: "{{ dist_dict[ansible_distribution]['apache'] }}"

You can take this one step further and utilize that same dictionary for services (and other things) as well.

dist_dict:
  # keys match the values ansible_distribution reports
  CentOS:
    apache_package: 'httpd'
    apache_service: 'httpd'
  RedHat:
    apache_package: 'httpd'
    apache_service: 'httpd'
  Fedora:
    apache_package: 'httpd'
    apache_service: 'httpd'
  Ubuntu:
    apache_package: 'apache2'
    apache_service: 'apache2'

apache_package: "{{ dist_dict[ansible_distribution]['apache_package'] }}"
apache_service: "{{ dist_dict[ansible_distribution]['apache_service'] }}"
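In plain Python terms, those two vars are just nested dictionary lookups keyed on the gathered distribution fact (the distribution names here are examples):

```python
# Nested-dict lookup mirroring what the apache_package/apache_service vars
# resolve to. Keys are whatever ansible_distribution reports for the host.
dist_dict = {
    "CentOS": {"apache_package": "httpd", "apache_service": "httpd"},
    "Ubuntu": {"apache_package": "apache2", "apache_service": "apache2"},
}

ansible_distribution = "Ubuntu"  # stand-in for the gathered fact
apache_package = dist_dict[ansible_distribution]["apache_package"]
apache_service = dist_dict[ansible_distribution]["apache_service"]
print(apache_package, apache_service)  # apache2 apache2
```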

Then your tasks could simply be:

- name: install apache
  package:
    name: "{{ apache_package }}"
    state: installed

- name: start apache
  service:
    name: "{{ apache_service }}"
    state: started
    enabled: true

This gives you the power to utilize these variables in other places without including whole .yml files or using a bunch of conditional statements.

Dynamic DNS with Cloudflare

At home I use Ubiquiti gear for all of my networking and Cloudflare for my external DNS. Rather than use another service like DynDNS or No-IP, I set up a small script on my EdgeRouter Lite that updates records for my stuff at home in a simple cron job. The script just uses Cloudflare’s API to update an existing record. I haven’t found a way to get the record ID from the web interface yet, so you do need to look it up through the API.
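Looking up the record ID is a one-off GET against the zone’s dns_records endpoint in Cloudflare’s v4 API; a sketch, assuming the same $zoneID, $email, and $key values the update script uses (the JSON response contains each record’s id):

```
curl -s "https://api.cloudflare.com/client/v4/zones/$zoneID/dns_records" \
     -H "X-Auth-Email: $email" \
     -H "X-Auth-Key: $key" \
     -H "Content-Type: application/json"
```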


#!/bin/bash
# zoneID, recordID, recordName, email, and key are set for your account/record

ip=$(ifconfig eth0 | grep "inet addr:" | cut -d: -f2 | awk '{ print $1 }')

curl -X PUT "https://api.cloudflare.com/client/v4/zones/$zoneID/dns_records/$recordID" \
     -H "X-Auth-Email: $email" \
     -H "X-Auth-Key: $key" \
     -H "Content-Type: application/json" \
     --data '{"type":"A","name":"'"$recordName"'","content":"'"$ip"'","ttl":120,"proxied":false}' -k
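As an aside, the ifconfig pipeline depends on legacy output formatting; where iproute2 is available, the same extraction is a bit more robust (shown against lo here since that address is predictable; on the router you’d use eth0):

```shell
# Extract the first IPv4 address on an interface with iproute2.
# lo is used for demonstration; substitute eth0 on the EdgeRouter.
ip -4 -o addr show dev lo | awk '{print $4}' | cut -d/ -f1 | head -n1
```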

Another option, rather than getting the IP from the interface, is using icanhazip.com (thanks Major!), which returns just your public IP as a string. This way you don’t need your edge device to talk to Cloudflare; any internal system will work.