@jairojunior
Created November 16, 2018 15:45
Testing Ansible Roles with Molecule

Testing techniques play an important role in software development, and it's no different when we're talking about Infrastructure as Code (IaC).

While developing, you are always testing, and constant feedback is necessary to drive your development. If it's taking too long to get feedback on a change, your steps might be too large, making errors hard to spot. Baby steps and fast feedback are the essence of TDD (Test-Driven Development), but how do you apply this to the development of ad-hoc playbooks or roles?

When you're developing automation, a typical workflow starts with a new virtual machine. We will use Vagrant to illustrate this idea, but the VM could come from libvirt, Docker, VirtualBox, or VMware running directly on your machine, or even be an instance in a private or public cloud, or a virtual machine provisioned in your data center hypervisor (oVirt, Xen, VMware).

NOTE: When deciding which one to use, you'll have to balance feedback speed and similarity with your real target environment.

The minimal starting point with Vagrant would be:

vagrant init centos/7 # or any other box

Then we would add Ansible provisioning to our Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

In the end, our workflow would be:

  1. vagrant up
  2. Edit playbook.
  3. vagrant provision
  4. vagrant ssh to verify VM state.
  5. Repeat steps 2 to 4.

NOTE: Occasionally, the VM would be destroyed and brought up again (vagrant destroy -f; vagrant up) to increase the reliability of our playbook (i.e., to test that our automation works end-to-end).

Although this is a good workflow, we're still doing all the hard work of connecting to the VM and verifying that everything is working as expected.

When tests are not automated, we face the same issues as when we do not automate our infrastructure, for instance:

  • Computers are faster and more reliable at this kind of task.
  • Their time is cheaper than ours.

Luckily, there are tools to help us automate these verifications, like testinfra and goss.

We will focus on testinfra, as it is written in Python and is the default verifier for Molecule. The idea is pretty simple: automate your verifications using Python:

def test_nginx_is_installed(host):
    nginx = host.package("nginx")
    assert nginx.is_installed
    assert nginx.version.startswith("1.2")


def test_nginx_running_and_enabled(host):
    nginx = host.service("nginx")
    assert nginx.is_running
    assert nginx.is_enabled

In a development environment, this script connects to the target host over SSH (just like Ansible) to perform the above verifications (package presence/version and service state):

py.test --connection=ssh --hosts=server

In short, during infrastructure automation development, our challenge is to provision new infrastructure, execute playbooks against it, and verify that our changes reflect the state we declared in our playbooks.

  • What can be verified with testinfra?

    • Infrastructure is up and running from the user's point of view (e.g. httpd or nginx is answering requests, MariaDB or PostgreSQL is handling SQL queries).
    • An OS service is started and enabled.
    • A process is listening on a specific port.
    • A process is answering requests.
    • Configuration files were correctly copied or generated from templates.
    • Virtually anything you do to assert that your server state is correct.
  • What safety do these automated tests provide?

    • You can perform complex changes or introduce new features without breaking existing behavior (e.g. it still works on RHEL-based distributions after adding support for Debian-based ones).
    • You can refactor/improve the codebase when new versions of Ansible are released and new best practices emerge.

What we did with Vagrant, Ansible, and testinfra so far maps easily to the steps described in the Four-Phase Test pattern, a way to structure tests that makes the test objective clear. It is composed of the following phases: setup, exercise, verify, and teardown:

  • setup: Prepare the environment for the test execution (e.g. spin up new virtual machines).

    vagrant up

  • exercise: Effectively execute the code against the system under test (i.e. run ansible-playbook).

    vagrant provision

  • verify: Verify the output of the previous step.

    py.test (with testinfra)

  • teardown: Return to the state prior to setup.

    vagrant destroy
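Wired together, the four phases amount to a loop like the following. This is a minimal sketch using Python's subprocess module; the commands are the same ones listed above:

```python
import subprocess


def run(command):
    """Run a shell command and fail loudly if it returns non-zero."""
    subprocess.run(command, shell=True, check=True)


def four_phase_test():
    run("vagrant up")                               # setup
    run("vagrant provision")                        # exercise
    run("py.test --connection=ssh --hosts=server")  # verify
    run("vagrant destroy -f")                       # teardown
```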

The same idea we used for an ad-hoc playbook could be applied to role development/testing, but do I have to do all these steps every time I start developing something new? What if I would like to use containers instead of Vagrant? Or OpenStack? What if I'd rather use goss instead of testinfra? How do I continuously run this for every change in my code? Is there a simpler and faster way to achieve this?

Molecule

Molecule is a tool that helps you develop roles using tests. It can even initialize a new role with test cases; you just need to run: molecule init role --role-name foo

Molecule is flexible enough to let you use different drivers for infrastructure provisioning, including Docker, Vagrant, OpenStack, GCE, EC2, and Azure. It also allows the use of different server verification tools, such as testinfra and goss.

Its commands ease the execution of tasks commonly used during a development workflow:

  • lint - Executes yamllint, ansible-lint, and flake8, reporting failure if there are issues.
  • syntax - Verifies the role for syntax errors.
  • create - Creates an instance with the configured driver.
  • prepare - Configures instances with preparation playbooks.
  • converge - Executes playbooks targeting the hosts.
  • idempotence - Executes the playbook twice and fails if the second run reports changes (i.e., the playbook is not idempotent).
  • verify - Executes verification code (testinfra or goss).
  • destroy - Destroys instances.
  • test - Executes all the previous steps.

NOTE: Additionally, the login command can be used to connect to provisioned servers for troubleshooting purposes.
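During day-to-day development you rarely run the full test sequence; a common inner loop is to create the instance once, then converge and verify repeatedly while editing the role. Sketched here with Python's subprocess (in practice you'd simply type these commands in a shell):

```python
import subprocess


def molecule(subcommand):
    """Invoke the molecule CLI (assumes molecule is on the PATH)."""
    subprocess.run(["molecule", subcommand], check=True)


def inner_loop():
    molecule("create")    # bring the instance up once
    molecule("converge")  # apply the role after each edit
    molecule("verify")    # run the testinfra tests
```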

Step-by-step

How do you go from no tests at all to a decent test suite being executed for every change/commit?

  1. virtualenv (Optional)

virtualenv is a tool to create isolated Python environments, while virtualenvwrapper is a collection of extensions that facilitate the use of virtualenv.

Using these tools avoids dependency conflicts between Molecule and other Python packages on your machine.

sudo pip install virtualenvwrapper
export WORKON_HOME=~/envs
source /usr/local/bin/virtualenvwrapper.sh
mkvirtualenv molecule
  2. Molecule

Install Molecule with the Docker driver:

pip install molecule ansible docker-py

Generate a new role with test scenarios:

molecule init role -r role_name

Or for existing roles:

molecule init scenario -r my-role

All the necessary configuration is generated with your role, and we only need to worry about writing test cases using testinfra:

import os

import testinfra.utils.ansible_runner

# Molecule sets MOLECULE_INVENTORY_FILE so tests run against its instances.
testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_jboss_running_and_enabled(host):
    jboss = host.service('wildfly')

    assert jboss.is_running
    assert jboss.is_enabled


def test_jboss_listening_http(host):
    socket = host.socket('tcp://0.0.0.0:8080')

    assert socket.is_listening


def test_mgmt_user_authentication(host):
    command = """curl --digest -L -D - http://localhost:9990/management \
                -u ansible:ansible"""

    cmd = host.run(command)

    assert 'HTTP/1.1 200 OK' in cmd.stdout

This example test case for a Wildfly role verifies that the OS service is enabled, a process is listening on port 8080, and management authentication is properly configured.

It's pretty straightforward to code these tests; you basically need to think of an automated way to verify something.

If you think about it, you're already doing this when you log into a machine targeted by your playbook, or when you build checks for your monitoring/alerting systems. That knowledge translates directly into testinfra API calls or system commands.

CI

Continuously executing your Molecule tests is simple. The example below works on Travis CI with the Docker driver, but it could easily be adapted to any CI server and any infrastructure driver supported by Molecule.

---
sudo: required
language: python
services:
  - docker
before_install:
  - sudo apt-get -qq update
  - pip install molecule
  - pip install docker-py
script:
  - molecule test

