
Immutable Infrastructure with Ansible and Packer

by Marko Locher from Codeship

At Codeship we run immutable servers, which we internally call Checkbot. These are the machines responsible for running your tests, deploying your software, and reporting the results back to our web application. Of course, the setup of these images changes constantly: new software needs to be installed, packages upgraded, old software versions removed. Let’s see how we do that!

Vagrant and Packer Workflow

The software stack we currently use for building and testing these images consists of Vagrant for development, Packer for the actual image generation, and a series of shell scripts for provisioning. This worked fine for the last few years, but as our team grows and more people make changes to the scripts, it can easily get out of hand and become confusing. So we were looking for a lightweight tool to replace our shell scripts. As we didn’t want an agent running to watch over the host, most configuration management tools were not an acceptable solution.

Using Ansible

Ansible with it’s YAML based syntax and agentless model fits quite nicely. We are still in the process of getting started, but the experience was so good, I couldn’t wait to share my findings. Maybe this post can convince you to take a look at Ansible and get started with configuration management yourself.

Getting started with Ansible

According to their website, “Ansible is the simplest way to automate IT.” You could compare it to other configuration management systems like Puppet or Chef, which are complicated to set up and require the installation of an agent on every node. Ansible is different: you simply install it on your machine, and every command you issue is run via SSH on your servers. There is nothing you need to install on your servers, and there are no running agents either.

# Ansible installation via pip
$ sudo pip install ansible
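
Once installed, a minimal inventory file and an ad-hoc command are enough to verify that Ansible can reach your servers over SSH. The hostnames below are hypothetical, just for illustration:

# hosts -- a minimal inventory file (hostnames are hypothetical)
[checkbot]
build01.example.com
build02.example.com

# Ad-hoc commands, executed over plain SSH on every host in the group
$ ansible checkbot -i hosts -m ping
$ ansible checkbot -i hosts -a "uptime"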

Something that took me a while to appreciate is the fact that Ansible playbooks (the counterpart to Chef cookbooks or Puppet modules) are plain YAML files. This makes certain aspects a bit harder, but it keeps playbooks simple and easy to understand, even for somebody who doesn’t know a lot about Ansible. (Try writing complicated shell commands with multiple levels of quoting and you will see what I mean.) For a more thorough introduction, please see the Ansible homepage and don’t forget to check the fantastic docs available at http://docs.ansible.com.
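
To give you a quick taste of that syntax, here is a minimal example playbook. It is hypothetical, not one of ours, and uses the same Ansible 1.x idioms as the snippets later in this post:

---
# playbook.yml -- a minimal example playbook (hypothetical)
- hosts: all
  sudo: yes
  tasks:
    - name: Ensure git is installed
      apt: pkg=git state=present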

Building Immutable Infrastructure with Ansible

I started with the default integrations in Packer and Vagrant, which are straightforward to set up and require just a few lines of configuration.

Packer

{
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
            "inline": [
                "sleep 30",
                "apt-add-repository ppa:rquillo/ansible",
                "/usr/bin/apt-get update",
                "/usr/bin/apt-get -y install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "../ansible/checkbot.yml",
            "role_paths": [
                "../ansible/roles/*"
            ]
        }
    ]
}
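
Assuming this template is saved as, say, checkbot.json (the file name is hypothetical), a single command builds the image and runs both provisioners in order:

$ packer build checkbot.json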

Vagrant

# Provisioning with ansible
config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "ansible/inventory"
    ansible.playbook = "ansible/checkbot.yml"
    ansible.sudo = true
end
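
With this block in a Vagrantfile, `vagrant up` provisions the box on first boot, and `vagrant provision` re-runs just the Ansible provisioner against the running box, which makes iterating on playbooks quick:

$ vagrant up
$ vagrant provision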

But I decided to replace those with a couple of shell scripts, to get more flexibility when calling Ansible. This also allows me to compensate for certain differences in the way Ansible is integrated with Packer and Vagrant, since removing any possible differences is key to avoiding subtle bugs in testing vs. production.
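
A minimal sketch of what such a wrapper could look like (the file name, paths, and flags are illustrative assumptions, not our exact script):

#!/bin/sh
# run_ansible.sh -- hypothetical wrapper, invoked both by the Packer "shell"
# provisioner and by Vagrant, so Ansible is always called the same way.
ansible-playbook ansible/checkbot.yml \
  -i ansible/inventory \
  --connection=local \
  --sudo

As an example of the playbooks such a script runs, take our current code for creating an LXC container and configuring some basic settings. I’m sure that, even without any further explanation, you can quite easily figure out what each item is supposed to do.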

config.j2

# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/{{lxc_container}}/rootfs
lxc.mount = /var/lib/lxc/{{lxc_container}}/fstab
lxc.utsname = {{lxc_container}}
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:11:f6:6c

# cgroup configuration
lxc.cgroup.memory.limit_in_bytes = {{lxc_memory_limit}}M

# Hooks
lxc.hook.pre-start = /var/lib/lxc/{{lxc_container}}/pre-start

config.yml

---
# file: host/defaults/main.yml

# LXC
lxc_container: codeship
lxc_memory_limit: 15360

lxc.yml

---
# file: host/tasks/lxc.yml

- name: LXC | Installation
  apt:
    pkg: "{{item}}"
    state: present
  with_items:
    - lxc
    - lxc-templates
    - debootstrap
    - bridge-utils
    - socat

- name: LXC | Check configuration
  command: lxc-checkconfig

- name: LXC | Create new container
  command: "lxc-create -n {{lxc_container}} -t ubuntu creates=/var/lib/lxc/{{lxc_container}}/"

- name: LXC | Write container configuration
  template: src=lxc/config.j2 dest=/var/lib/lxc/{{lxc_container}}/config

- name: LXC | Install pre-start hook
  template: src=lxc/pre-start.j2 dest=/var/lib/lxc/{{lxc_container}}/pre-start mode=0744 owner=root group=root

pre-start.j2

#!/bin/sh

# set up ssh access for the root user
mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/
cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/authorized_keys

# set up ssh access for the rof user
if [ -d "/var/lib/lxc/{{lxc_container}}/rootfs/home/rof/" ]; then
  mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/
  cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/authorized_keys
fi
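
Assuming the defaults from config.yml above, once the playbook has run, the container can be started and inspected with the standard LXC tooling (LXC 1.x commands):

$ sudo lxc-start -n codeship -d
$ sudo lxc-ls --fancy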

This is only the beginning and a small step in configuring a whole build system for use by Codeship, but it shows the beauty of Ansible. It is extremely simple to understand. It provides good abstractions for commonly needed patterns: package installation, templates for configuration files, variables used by playbooks and configuration files, and a lot more. And it doesn’t require any software on the host except an SSH server, which is pretty standard anyway.

And in combination with Packer, we have an environment that lets us build our production system running on EC2 as simply as a development box with Vagrant. That’s great, because it makes our team more productive.


What’s possible with Ansible

Nevertheless, we are far from finished. I am just starting to learn what is possible with Ansible and which modules are available. Some of the items on my checklist for the next few months include:

  • running multiple playbooks in parallel to speed up provisioning
  • getting to know the module system a lot better, and possibly writing some modules myself
  • fine-tuning the output generated by Ansible
  • converting all the remaining shell scripts to playbooks, which is going to be the biggest part

What do YOU think about Ansible? If you have ideas or suggestions to improve our workflow, please let us know in the comments!


