Introduction

In this rather long post I will walk you through the process of using Ansible to create, from scratch, an EC2 box ready for deploying your Rails app. Along the way I will also show how to write a simple custom module that, while not strictly necessary, illustrates a few useful points.

Keep in mind that this example is based on how we work at our company, and that I am not specialised in devops or system administration, so it may have its pitfalls. It is also my first piece of work with Ansible, so there are probably more efficient ways to do some of this.

At the end of the process we will have a box with the following elements:

  • rbenv
  • desired ruby version
  • passenger
  • apache

All this is built on 64-bit Amazon Linux machines (a CentOS-like distribution), as you will see from the AMI used below.

Finally, I’d like to mention that this work has been possible thanks to the amazing Ansible community on freenode, which has been extremely helpful and patient when I needed guidance.

All this code can be found in my public rails_server_playbook GitHub repository, to which I will keep adding improvements and fixes.

Background

At work we have (almost) all of our infrastructure in AWS. We essentially use EC2 instances to host a series of applications and services, plus an RDS MySQL database. We do not require anything fancy; in fact we only use a tiny fraction of the features AWS has to offer. We also don’t need to be constantly creating and terminating new instances for autoscaling.

The main problem, before we decided to give Ansible a try, was that the process for updating our boxes was extremely painful and inefficient. We had an old AMI template which we kept upgrading every time we needed something changed on it, and then reprovisioned our servers with the new template. Applying those changes to tens of machines was tedious and error prone. On top of that, documentation on the installed packages and changed configuration files was scarce, which made changes even harder: there was the fear of breaking something and the annoyance of having to reverse engineer everything.

To put a stop to this, we decided it was time to automate all this, and by that time Ansible was just becoming trendy. It also seemed to fit all our needs:

  • Simple
  • Lightweight
  • Agentless

Also, the community around it seemed great, so we decided to give it a go. This project would let us not only automate the whole server provisioning, but also keep comprehensive documentation (under version control) of what was installed on the boxes.

So basically the scenario I will describe assumes that:

  • We have a known list of machines we want in our infrastructure
  • Every machine can be either for staging or production
  • Every machine has an associated and known elastic ip (which is eventually linked to one or multiple domain names)
  • While some of the machine characteristics are particular, some of them are going to be shared among all of them
  • Most of the machines will be used to run Rails apps. Most of the time, the same app

With that in mind, let me explain how our Ansible playbooks work.

Creating the instances

The instance creation is centralised in a role called ec2_creation. The role is fairly simple.

The instance configuration lives in the file roles/ec2_creation/vars/main.yml. This file follows a simple hierarchical model. The variable default_values contains the values shared among all of the instances. Then the instances variable has two keys, production and staging, each one containing the specific configuration for every machine in that environment; anything defined there overrides the defaults. For example, the staging example.com box below gets instance_type m1.small, while its region falls back to the shared eu-west-1.

Here’s what the file looks like:

---
default_values:
  instance_type: "m1.large"
  region: "eu-west-1"
  zone: "eu-west-1b"
  key_pair: "Apps"
  image_id: "ami-f011e187" #Amazon Linux 64 bits blank slate
  security_groups: ["App Frontends", "App Private"]

instances:
  production:
    example.com:
      elastic_ip: "54.247.104.88"
      name: "example.com"

  staging:
    example.com:
      elastic_ip: "54.247.89.191"
      name: "stag-example.com"
      instance_type: "m1.small"

instance_values:
  name: "{{ instances[rails_env][site]['name'] | default(default_values['name']) }}"
  instance_type: "{{ instances[rails_env][site]['instance_type'] | default(default_values['instance_type']) }}"
  region: "{{ instances[rails_env][site]['region'] | default(default_values['region']) }}"
  zone: "{{ instances[rails_env][site]['zone'] | default(default_values['zone']) }}"
  key_pair: "{{ instances[rails_env][site]['key_pair'] | default(default_values['key_pair']) }}"
  image_id: "{{ instances[rails_env][site]['image_id'] | default(default_values['image_id']) }}"
  elastic_ip: "{{ instances[rails_env][site]['elastic_ip'] | default(default_values['elastic_ip']) }}"
  security_groups: "{{ instances[rails_env][site]['security_groups'] | default(default_values['security_groups']) }}"

The trick behind this is that we pass our ansible-playbook command the extra variables --extra-vars "rails_env=staging site=example.com", and then the instance_values variable will contain everything we need to create the instance.
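
For example, provisioning the staging box for example.com would be kicked off roughly like this (assuming provisioning.yml, described below, is the main playbook and your inventory is already set up):

ansible-playbook provisioning.yml --extra-vars "rails_env=staging site=example.com"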

The main provisioning file, which we call provisioning.yml, has several parts; the first one looks like this:

---
#Create the instance
- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - ec2_creation

This is what will create the actual instance. Let me explain the parameters:

  • hosts: localhost -- we set the hosts to localhost because this task will be run on the local machine.
  • connection: local -- same as above, we do not need any special connection to connect to localhost.
  • gather_facts: false -- no need to gather any facts.
  • roles: ec2_creation -- this will basically tell the playbook to apply the role ec2_creation, which contains the individual tasks to create the instance.

The ec2_creation role has two tasks, which you can see in the main.yml file in its tasks folder:

---
- include: gather_ec2_facts.yml #Get ec2 information
- include: create_instance.yml #Create the instance

The first task will connect to EC2 and query the current instances to see whether what we want to create already exists. To do this we use a custom made module named ec2_instances, which you can read about in the last section of this post if you’re interested in how it works. For now, all you need to know is that we register the output of this module in a variable for later use. The code for this task is as follows:

---
- name: Check if instance exists
  ec2_instances:
    region: "eu-west-1"
  register: ec2_instances

- name: Debug EC2 facts
  debug: msg="{{ ec2_instances.instances }}"

We pass the module a single parameter, region, in this case hardcoded to "eu-west-1"; you can use a variable if you prefer.

The register instruction will save the output of the module to a variable named ec2_instances that we will be using later.

Finally, there’s a second task that just outputs the retrieved information to the console. I use the debug task often when I’m not sure what information each variable holds.

Once we have the information on the existing instances, we invoke the tasks on the create_instance.yml file, which is a bit more complex:

---
- name: Get instance information
  debug:
    msg: "{{ instance_values }}"

- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed

- name: Get ec2_info information
  debug:
    msg: "{{ ec2_info }}"

The first one is yet another debug statement to show the values which will be used to create the instance.

The second one is the one that actually creates the instance. It uses the ec2 module, and most of the parameters are self explanatory, so I will focus on the ones that I think need some more attention:

  • count: 1 -- in this case, as mentioned, we only need one box per app.
  • wait: yes -- will wait until the instance is up and running before returning.
  • instance_tags -- this parameter is very important, as we will use the Name tag of the instances to uniquely identify them.
  • register: ec2_info -- we register the details of the newly created instance in this variable because we will need this information on a later task.
  • when -- this one is also important because it determines whether we actually create the instance or not. If you remember the previous step, we connected to EC2 to get the existing instances and saved that information in the variable ec2_instances. This variable holds a dictionary of the instances on EC2, indexed by the value of their Name tag. So in our case, ec2_instances.instances[instance_values['name']] will hold the information of the EC2 instance whose name matches the one we want to create. If that information is there, the instance already exists, so we do not create it. The way we check whether the dictionary has the key is a bit unorthodox, and I'm open to a more elegant solution in the comments (see also the sketch just after this list): we evaluate the key and, using Jinja2, default it to the empty string in case it's undefined. If the result equals the empty string, the initial lookup failed to find the key and had to use the default value (see Defaulting Undefined Variables), which means the instance does not exist yet and we create it.
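
For what it’s worth, a possibly cleaner way to express the same check would be to test for the key directly with Jinja2’s in operator. I haven’t used this form in the playbook, so treat it as an untested sketch:

when: instance_values['name'] not in ec2_instances.instances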

The next task on the list waits until the instance is up and listening on port 22 before doing anything else. Note that as the host we pass information from the registered variable, ec2_info.instances[0].public_dns_name, and that we only execute the task if the previous step created the instance, with when: ec2_info|changed.

The reason to wait until ssh is up and listening for connections is that we will need to access the new box via ssh to run the rest of the playbook.

Finally, in the last task (ignoring the debug one) we add this newly created instance to a group of hosts named ec2hosts. The actual IP is in the ec2_info.instances[0].public_ip variable, and we also attach some more information to the host that we will use later, like the EC2 instance id.

And that is all the ec2_creation role will do. Next on the list for the main provisioning playbook is the application of the common and passenger roles, which will install and configure everything needed on the box.

Configuring the newly created box

Once we have the machine up and running, what comes next is a standard set of Ansible tasks. I grouped those tasks in a role called common, which has the usual role structure, with its elements in separate folders (the layout is sketched just after this list):

  • tasks -- the different tasks to perform.
  • vars -- useful variables used along the role.
  • files -- static config files for the target machine.
  • templates -- template files with values that need to be interpolated.
  • handlers -- handlers that other tasks can trigger via notify.
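
Laid out on disk, and based only on the paths mentioned throughout this post, the role looks roughly like this:

roles/common/
  tasks/main.yml
  vars/main.yml
  files/
  templates/
  handlers/main.yml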

Now, this common role is not yet 100% complete, and chances are that a fully working setup will need a few more things added (like development yum packages for compiling certain ruby gems), but it’s a good skeleton to start from.

The common and passenger roles are applied in the main provisioning playbook by adding this to the provisioning.yml file:

#Configure and install all we need
- hosts: ec2hosts
  gather_facts: false
  remote_user: rails
  roles:
    - common
    - passenger

The common role has its tasks separated into several files, which are included in the right order in the main.yml file:

---
# Main tasks for all hosts
- include: hostname.yml
- include: sudoers.yml
- include: rails_user.yml
- include: packages.yml
- include: rbenv.yml

Note that we will create a rails user that will run the applications, and that we also have the ec2-user user provided by the Amazon Linux AMI, which has sudo permissions.

In order for this to work, you have to make sure you can connect to the newly created instance as the ec2-user user, by adding your EC2 key to your ssh-agent.
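
For example, if the private key for the Apps key pair lives at ~/.ssh/Apps.pem (that path is just an illustration), this is enough:

ssh-add ~/.ssh/Apps.pem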

To avoid repeating myself on each task: note that most of them use the sudo modifier so that the commands are run with superuser permissions.

The hostname task

The first thing we’ll do is set up the machine hostname. We will use a pattern to define our machine hostnames, and that pattern will be <environment>.<site> (eg: staging.example.com). The task uses the hostname module and is pretty straightforward:

---
- name: Setup hostname
  hostname:
    name: "{{ rails_env }}.{{ site }}"
  remote_user: ec2-user
  sudo: yes

The sudoers task

Here we will add some configuration to the sudoers system. This is the task code:

---
- name: sudoers // copy cloud-init file for ec2-user
  copy:
    src: cloud-init
    dest: /etc/sudoers.d/cloud-init
    owner: root
    group: root
    mode: 0440
  remote_user: ec2-user
  sudo: yes

Here we copy a file onto the remote machine and assign it the correct owner and permissions. The file is located at roles/common/files/cloud-init and has these contents:

ec2-user ALL=(ALL) NOPASSWD: ALL
rails ALL=(ALL) NOPASSWD: ALL

This allows both the ec2-user and rails users to run sudo commands without having to type in a password. This will be handy for running commands with superuser privileges, but keep in mind the security implications.

The rails_user task

The next step is creating the rails user. The file actually contains more than one task:

---
- name: rails_user // create the user
  user:
    name: rails
    state: present
  remote_user: ec2-user
  sudo: yes

- name: rails_user // clean authorized keys
  shell: "(test -f /home/rails/.ssh/authorized_keys && echo -n \"\" > /home/rails/.ssh/authorized_keys) || true"
  remote_user: ec2-user
  sudo: yes
  tags: ssh_keys

- name: rails_user // set up authorized_keys
  authorized_key:
    user: rails
    key: "{{ item }}"
  with_file:
    - ssh_keys/brafales
  remote_user: ec2-user
  sudo: yes
  tags: ssh_keys

- name: copy ssh keys
  local_action: command scp -rp3 user@securehost.com:./ami_files/ssh/* rails@{{ hostvars.localhost.ec2_info.instances[0].public_ip }}:./.ssh/
  sudo: no
  tags: ssh_keys

- name: ensure correct ssh file permissions
  file: path=/home/rails/.ssh/{{ item }} owner=rails group=rails mode=0600
  with_items:
    - id_rsa
    - id_rsa.pub
  tags: ssh_keys

- name: ssh config
  copy: src=ssh_config dest=/home/rails/.ssh/config owner=rails group=rails mode=0644
  tags: ssh_keys

- include: bash.yml

We start by using the user module to create the user.

Once the user is created, we set up the ssh authorized keys so we can log in as this user with any number of ssh keys (in our case one key per developer, plus the ones we need for deployments). We do this in two stages: the first one clears ~/.ssh/authorized_keys with a shell command to get rid of old keys, and then we use the authorized_key module with the loop pattern so we can add as many keys as we want. The public keys live in the roles/common/files/ssh_keys path.

Another ssh key related task is needed too. In our architecture, all frontends share the same ssh key for the rails user, which greatly simplifies connecting between all our EC2 machines. In this case, though, because we need both the private and public keys, we do not store them in a repository but on a special machine (which also holds sensitive information such as passwords in rails yaml files), which in this example would be securehost.com. So, in order to copy the keys securely, we run a local command that uses scp to transfer the keys from the secure host to the new box. It’s important to notice the -3 in the -rp3 flags, which transfers the files using the local machine as a gateway. Otherwise scp would try to connect directly between both hosts, which is not yet possible precisely because the new machine lacks the keys to connect to the secure host (this of course assumes the shell from which you’re running the playbook has ssh access to the secure host).

After that we finish our ssh maintenance with two more things. First we ensure the newly copied keys have the right permissions with the file module, and lastly we copy the ssh config we want the user to have from roles/common/files/ssh_config to ~/.ssh/config. This config has just the line StrictHostKeyChecking no, which makes ssh accept host fingerprints without prompting. This is, again, a security compromise, made based on the fact that we reprovision boxes often.

The bash task

At the end of the rails_user.yml you’ll notice we included the bash.yml file. This will configure some bash options for the new user and create some folders that we will need later. The file has the following contents:

---
- name: bash // create plugins folder
  file:
    path: /home/rails/.bashrc.d
    state: directory
    mode: 0755

- name: bash // copy bashrc file
  copy:
    src: bashrc
    dest: /home/rails/.bashrc
    mode: 0644

- name: bash // copy rails_env file
  template:
    src: rails_env.sh.j2
    dest: /home/rails/.bashrc.d/rails_env.sh
    mode: 0644

What we do here is create the .bashrc.d folder in the user home folder, which will hold additional bash configuration files. We then copy the .bashrc config file from roles/common/files/bashrc:

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# User specific aliases and functions
for f in .bashrc.d/*
do
  if [ -f $f ] ; then
    . $f
  fi
done

This will make sure every file in the newly created plugins folder gets sourced upon login. Finally, we add one file to the plugins folder; in this case it’s not a plain file but a template, located at roles/common/templates/rails_env.sh.j2:

export RAILS_ENV="{{ rails_env }}"

This will simply make sure the machine has the correct RAILS_ENV environment variable set up.

The packages task

This is a really simple task that installs some packages on the system using the yum package manager module:

---
- name: install common packages
  yum:
    name: "{{ item }}"
    state: latest
  with_items: packages
  sudo: yes

The packages to install are gathered from a variable named packages that is defined in roles/common/vars/main.yml (it also holds more variables to be used in other tasks):

---
packages:
  - git
  - curl-devel
  - httpd24
  - httpd24-devel
  - apr-devel
  - apr-util-devel
  - gcc47.x86_64
  - gcc47-c++.x86_64
  - openssl.x86_64
  - openssl-devel.x86_64
rbenv_root: /home/rails/.rbenv
ruby_version: 2.1.1
passenger_version: 4.0.41

By default it installs just the bare software needed to later build ruby through rbenv (plus apache and some libraries), but it is a good place to add other packages that you may need for other purposes.
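
As a purely hypothetical illustration (these additions are not part of the playbook in this post), apps using the mysql2 and nokogiri gems would need the corresponding development headers, which you could add to the same list:

packages:
  - git
  - curl-devel
  # ...the rest of the packages listed above...
  - mysql-devel     # example: headers needed to build the mysql2 gem
  - libxml2-devel   # example: headers needed to build nokogiri
  - libxslt-devel   # example: headers needed to build nokogiri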

The apache task

To host the apps we will use apache. The software has already been installed in a previous task, but we still need to add a configuration file to it. This will be done with the following task:

---
- name: Copy apache config file
  copy:
    src: apache_custom.conf
    dest: /etc/httpd/conf.d/
    mode: 0644
  sudo: yes

The config file, which you can find at roles/common/files/apache_custom.conf, has the following contents:

#Include rails configs
IncludeOptional /home/rails/conf/vhosts/*.conf

This just lets us add customised virtual hosts for each of our apps, as files in the rails user’s home folder.
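
As a purely hypothetical example of one of those virtual host files (the server name, paths and environment are made up; adapt them to your own app and deployment layout):

# /home/rails/conf/vhosts/example.com.conf
<VirtualHost *:80>
  ServerName example.com
  # Passenger serves the app whose public folder is the DocumentRoot
  DocumentRoot /home/rails/apps/example.com/current/public
  RailsEnv production

  <Directory /home/rails/apps/example.com/current/public>
    Options -MultiViews
    Require all granted
  </Directory>
</VirtualHost>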

The rbenv task

And finally, we install the rbenv ruby version manager.

This task is a little more involved, with several steps in it. All the information needed to do this is on the rbenv web page; it’s just adapted here to our structure:

---
- name: rbenv // copy rbenv bash plugin
  template:
    src: rbenv.sh.j2
    dest: /home/rails/.bashrc.d/rbenv.sh
    mode: 0644
  tags: rbenv

- name: rbenv // clone repo
  git:
    repo: git://github.com/sstephenson/rbenv.git
    dest: "{{ rbenv_root }}"
  tags: rbenv

- name: rbenv // clone ruby-build
  git:
    repo: git://github.com/sstephenson/ruby-build.git
    dest: "{{ rbenv_root }}/plugins/ruby-build"
  tags: rbenv

- name: rbenv // check ruby installed
  shell: "rbenv versions | grep {{ ruby_version }}"
  register: ruby_installed
  ignore_errors: yes
  tags: rbenv

- name: rbenv // install ruby
  shell: rbenv install "{{ ruby_version }}"
  when: ruby_installed|failed
  notify: rbenv rehash
  tags: rbenv

- name: rbenv // set global ruby
  shell: rbenv global "{{ ruby_version }}"
  tags: rbenv

- name: rbenv // update rubygems
  shell: gem update --system
  tags: rbenv

- name: rbenv // install bundler
  shell: gem install bundler
  tags: rbenv

We begin by copying a bash plugin that will make sure that rbenv is properly set up upon login. For this we use a template in roles/common/templates/rbenv.sh.j2:

export RBENV_ROOT="{{ rbenv_root }}"
PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"

If you need details on this, check the rbenv documentation, which explains why it’s needed. The template uses the rbenv_root variable, which contains the folder in which rbenv will be installed.

After this we clone the rbenv repository into the installation folder using the git module.

Once we have rbenv, we also clone the ruby-build plugin, which will allow us to build the ruby versions we need to run the applications.

Now we are ready to build the ruby we need. But before that, we check whether it has already been built, to avoid extra work. We do this by running rbenv versions | grep {{ ruby_version }} and registering the result in the ruby_installed variable, which we will use later as a conditional.

The next task builds ruby and has two special things:

- name: rbenv // install ruby
  shell: rbenv install "{{ ruby_version }}"
  when: ruby_installed|failed
  notify: rbenv rehash
  tags: rbenv

The first one is that it has a conditional, so it will only run when the check we registered before failed (when: ruby_installed|failed), i.e. when that ruby version is not installed yet. The second one is that it has a notify instruction, notify: rbenv rehash, that will trigger a handler.

This has to be done because of the rbenv architecture, which requires you to run a special command (rbenv rehash) every time you install a new command line tool into a managed ruby.

We can do this with Ansible handlers. These let us call the rbenv rehash handler whenever a task that notifies it reports a change, without having to repeat the same steps in different places.

In our case, this handler is set up in the file roles/common/handlers/main.yml:

---
- name: rbenv rehash
  shell: rbenv rehash

It is just a very simple shell command.

So now that we have the version of ruby we want installed, we make it the default ruby interpreter for rbenv:

- name: rbenv // set global ruby
  shell: rbenv global "{{ ruby_version }}"
  tags: rbenv

We then update the rubygems software:

- name: rbenv // update rubygems
  shell: gem update --system
  tags: rbenv

And last, but not least, install the bundler gem:

- name: rbenv // install bundler
  shell: gem install bundler
  tags: rbenv

And that is all for the common role.

Installing Phusion Passenger

The playbook also includes the role passenger, which will, as you may expect, get a working passenger installation done.

The role is divided into two tasks, listed in the main.yml tasks file:

---
- include: gem.yml
- include: httpd_conf.yml

The first thing we do is install the passenger gem and compile, if needed, the necessary libraries:

---
- name: install gem
  shell: gem install passenger -v {{ passenger_version }}
  notify: rbenv rehash

- name: check if already compiled
  shell: passenger-install-apache2-module --snippet | cut -d " " -f 3 | head -n 1 | xargs test -f
  register: passenger_compiled
  ignore_errors: true

- name: compile module
  shell: passenger-install-apache2-module --auto
  when: passenger_compiled|failed

The first task is pretty self explanatory. It’s important to note that we need to call the rbenv rehash handler, as the gem will install new binaries that otherwise would not be accessible to rbenv.

After that, we check whether, by any chance, we already installed and compiled passenger before. The way to do it is to run the command passenger-install-apache2-module --snippet, take the part of its output that points to the compiled library, and then run test -f on that file to check whether it exists. We register the result in the passenger_compiled variable for later use.
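
To give an idea of why the cut -d " " -f 3 | head -n 1 part works, the snippet that command prints looks roughly like this (the exact paths depend on your ruby and passenger versions), so the third field of the first line is the path of the compiled mod_passenger.so:

LoadModule passenger_module /home/rails/.rbenv/versions/2.1.1/lib/ruby/gems/2.1.0/gems/passenger-4.0.41/buildout/apache2/mod_passenger.so
<IfModule mod_passenger.c>
  PassengerRoot /home/rails/.rbenv/versions/2.1.1/lib/ruby/gems/2.1.0/gems/passenger-4.0.41
  PassengerDefaultRuby /home/rails/.rbenv/versions/2.1.1/bin/ruby
</IfModule>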

In case the passenger_compiled check fails, we need to compile the module. We can achieve this easily by running the shell command in the config above. Note that we pass the --auto flag so it doesn’t need any interaction from the user.

That will leave us with everything installed and in place. Now apache needs to be told to use this new module, which we do in the httpd_conf.yml file:

---
- name: get passenger snippet
  shell: passenger-install-apache2-module --snippet
  register: passenger_snippet
  tags: passenger

- name: setup snippet
  shell: echo "{{ passenger_snippet.stdout }}" > /etc/httpd/conf.modules.d/02-passenger.conf
  sudo: true
  tags: passenger

- name: setup passenger options
  copy: src=passenger-options.conf dest=/etc/httpd/conf.modules.d/02-passenger-options.conf owner=root group=root mode=0644
  sudo: true
  tags: passenger

The first thing to do is capture the config snippet from the passenger-install-apache2-module --snippet command and save it in passenger_snippet. Then we create a new apache config file with its contents at /etc/httpd/conf.modules.d/02-passenger.conf. All files in the /etc/httpd/conf.modules.d/ folder are loaded automatically by apache, assuming you have not changed the main config file. Finally, we copy another file with some passenger defaults to /etc/httpd/conf.modules.d/02-passenger-options.conf:

PassengerMaxPoolSize 35
PassengerMaxInstancesPerApp 8
PassengerPoolIdleTime 500
PassengerStartTimeout 300
PassengerMaxRequestQueueSize 500

Feel free to use your own values for this.

And that is all. After this you only need to work on your own apache configurations and deployment scripts to get things up and running.

Associating the elastic ip to the new box

In the main provisioning playbook, there is a final play that uses the ec2_eip module to associate the elastic ip with the new box:

#Associate elastic ip
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: roles/ec2_creation/vars/main.yml
    - name: associate elastic ip to instance
      ec2_eip:
        instance_id: "{{ ec2_info['instance_ids'][0] }}"
        ip: "{{ instance_values['elastic_ip'] }}"
        region: "{{ instance_values['region'] }}"

The ec2_instances bespoke module

If you are interested in the module that was built for the purpose of getting a list of your inventory on AWS, you can find it in the library/ec2_instances file. The library folder is the place to put modules not shipped with Ansible. It is heavily based on other ec2 modules already found in core, and it’s basically a wrapper around the python boto library:

#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

DOCUMENTATION = '''
---
module: ec2_instances
short_description: get all instances information from EC2
description:
    - This module gets instances information from EC2
version_added: 1.6
options:
  filters:
    description:
      - Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
    required: false
  instance_ids:
    description:
      - A list of strings of instance IDs
    required: false
  max_results:
    description:
      - The maximum number of paginated instance items per response.
    required: false
  ec2_url:
    description:
      - URL to use to connect to EC2-compatible cloud (by default the module will use EC2 endpoints)
    required: false
    default: null
    aliases: [ EC2_URL ]
  ec2_access_key:
    description:
      - EC2 access key. If not specified then the EC2_ACCESS_KEY environment variable is used.
    required: false
    default: null
    aliases: [ EC2_ACCESS_KEY ]
  ec2_secret_key:
    description:
      - EC2 secret key. If not specified then the EC2_SECRET_KEY environment variable is used.
    required: false
    default: null
    aliases: [ EC2_SECRET_KEY ]
  region:
    description:
      - the EC2 region to use
    required: true
    default: "us-east-1"
    aliases: [ ec2_region ]
  validate_certs:
    description:
      - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
    required: false
    default: "yes"
    choices: ["yes", "no"]
    aliases: []
    version_added: "1.5"
  profile:
    description:
      - uses a boto profile. Only works with boto >= 2.24.0
    required: false
    default: null
    aliases: []
    version_added: "1.6"
  security_token:
    description:
      - security token to authenticate against AWS
    required: false
    default: null
    aliases: []
    version_added: "1.6"

requirements: [ "boto" ]
author: Bernat Rafales <bernat@rafales-mulet.com>
notes:
   - This module will get instance info from EC2
'''

EXAMPLES = '''
  ec2_instances:
    region: "eu-west-1"
'''

try:
    import boto.ec2
except ImportError:
    boto_found = False
else:
    boto_found = True

def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
            filters = dict(required=False, type='dict'),
            instance_ids = dict(required=False, type='list'),
            max_results = dict(required=False, type='int')
        )
    )

    module = AnsibleModule(
        argument_spec = argument_spec
    )

    if not boto_found:
        module.fail_json(msg="boto is required")

    ec2 = ec2_connect(module)

    filters = module.params.get('filters')
    instance_ids = module.params.get('instance_ids')
    max_results = module.params.get('max_results')

    try:
        reservations = ec2.get_all_reservations(instance_ids=instance_ids, filters=filters, max_results=max_results)
        instances = [i for r in reservations for i in r.instances]
        module.exit_json(changed=False, instances=dict([(i.tags['Name'], i.id) for i in instances if 'Name' in i.tags and i.state != 'terminated']))
    except boto.exception.EC2ResponseError, e:
        module.fail_json(msg=str(e))


# import module snippets
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *

if __name__ == '__main__':
    main()

The important bit is the last try/except block, in which we make a request using boto and then craft a response that only includes instances that have a tag with the 'Name' key and whose state is not 'terminated'.

Final comments

In the repository you will find a couple of roles that you may find useful for installing a redis database engine (or just the client).

Please feel free to comment on mistakes, improvements or any other questions you may have on this.

Congratulations on reading this to the end :)