Automating Docker and Terraform Deployment with Ansible

This project automates the deployment of a Docker application on AWS using Terraform and Ansible.

Overview

The project is divided into three main parts:

Part 1: Ansible & Docker
This part focuses on setting up the infrastructure for the project by installing the necessary tools: Docker, Docker Compose, and the Docker/Docker Compose Python modules. It also starts the Docker daemon and adds the ec2-user (or a new user) to the docker group.
Part 2: Ansible Integration in Terraform
This part expands on the previous one by integrating Ansible with Terraform: Terraform executes the Ansible playbook to configure the infrastructure as soon as it is provisioned.
Part 3: Automating Docker and Terraform Deployment with Ansible
This part combines the previous parts to fully automate the deployment of a Docker application using Terraform and Ansible.

Prerequisites

  • AWS account with IAM credentials

  • AWS CLI

  • Terraform

  • Ansible

  • Docker

Configure AWS CLI to connect to AWS account

  • Connect with an AWS user

  • UI access through password

  • CLI Access through Access key ID and Secret Access key

aws configure

AWS Access Key ID: (found in the IAM Users section)
AWS Secret Access Key: (found in the IAM Users section)
Default region name: (set your preferred region, for example: ap-south-1)
Default output format: (set your preferred output format, for example: json)

Configuration is automatically stored in your home directory under ~/.aws (in the credentials and config files).
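
For reference, the stored files look roughly like this (a sketch; your profile name and values will differ):

~/.aws/credentials

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

~/.aws/config

[default]
region = ap-south-1
output = json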

Part 1: Ansible and Docker

Overview

  • Create an AWS EC2 Instance with Terraform

  • Configure Inventory file to connect to AWS EC2 Instance

  • Install Docker and docker-compose

  • Copy docker-compose file to the server

  • Start docker containers

Create an AWS EC2 Instance with Terraform

You can access the Terraform file here.

Create your own "terraform.tfvars" file and include the following:

vpc_cidr_block = "10.0.0.0/16"
subnet_1_cidr_block = "10.0.0.0/24"
avail_zone = "your preferred AZ"                 # example: "ap-south-1a"
env_prefix = "dev"
instance_type = "t2.micro"
ssh_key = "path to your public SSH key"          # example: "/home/sonali-rajput/.ssh/id_rsa.pub"
my_ip = "your IP"
ssh_private_key = "path to your private SSH key" # example: "/home/sonali-rajput/.ssh/id_rsa"
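
These values feed into variable declarations in the Terraform code; a minimal sketch, assuming the linked file declares them under the same names:

variable "vpc_cidr_block" {}
variable "subnet_1_cidr_block" {}
variable "avail_zone" {}
variable "env_prefix" {}
variable "instance_type" {}
variable "ssh_key" {}
variable "my_ip" {}
variable "ssh_private_key" {}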

terraform init (initializes the working directory and downloads the required providers)

terraform apply

Your EC2 instance will be created and you'll get the server-ip in your console output.
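
The server IP shows up in the console because the Terraform code exposes it as an output; a sketch of such an output, assuming the instance resource is named aws_instance.myapp-server as referenced later in Part 2:

output "ec2_public_ip" {
  value = aws_instance.myapp-server.public_ip
}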

Configure Inventory file to connect to AWS EC2 Instance

You can find the code here.

Create a "hosts" file and save the server-ip address as:

[docker_server]
<your_server_ip> ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_user=ec2-user
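
Optionally, a small ansible.cfg next to the playbook can point Ansible at this inventory file by default; this is my own addition, not necessarily how the linked code is set up:

[defaults]
inventory = hosts
host_key_checking = False

Here host_key_checking = False simply skips the interactive host-key prompt for the freshly created server.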

Now, we will write the Ansible playbook for installing the necessary tools.

Install Docker and docker-compose

Create a deploy-docker.yml

---
- name: Install Docker, Docker-compose
  hosts: docker_server
  become: yes
  tasks:
    - name: Install Docker
      yum:
        name: docker
        update_cache: true
        state: present
    - name: Install docker-compose
      get_url:
        url: https://github.com/docker/compose/releases/download/1.27.4/docker-compose-Linux-{{lookup('pipe', 'uname -m')}}
        dest: /usr/local/bin/docker-compose
        mode: +x
    - name: Start docker daemon
      systemd:
        name: docker
        state: started
    - name: Install docker python module
      pip:
        name: 
          - docker
          - docker-compose
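
Once this play has run, a quick optional check on the server confirms the installs:

docker --version
docker-compose --version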

Add ec2-user to docker group

- name: add ec2-user to docker group
  hosts: docker_server
  become: yes
  tasks:
    - name: add ec2-user to docker group
      user: 
        name: ec2-user
        groups: docker
        append: yes
    # group changes only take effect on a new login session, so reset the SSH connection
    - name: reconnect to server session
      meta: reset_connection

Copy the docker-compose file to the server and start the Docker containers

- name: start docker containers
  hosts: docker_server
  tasks:
    - name: copy docker compose
      copy:
        src: /home/sonali-rajput/Projects/ansible/docker-compose.yml
        dest: /home/ec2-user/docker-compose.yml
    - name: start container
      docker_compose:
        project_src: /home/ec2-user
        state: present

Now run ansible-playbook deploy-docker.yml (add -i hosts if you haven't pointed Ansible at the inventory file, for example via an ansible.cfg).

Now SSH to the server by running ssh ec2-user@<your-server-ip> and run docker ps

You will see your containers running successfully.

You've come this far, but we want the playbook to be more generic and reusable, so instead of relying only on ec2-user we can create a new user. You can check the code for creating a new user here.
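
The linked code isn't reproduced here, but a play that creates a new user with Docker access might look roughly like this (the user name and group list are illustrative assumptions):

- name: create new linux user
  hosts: docker_server
  become: yes
  tasks:
    - name: create new user and add it to the docker group
      user:
        name: newuser        # illustrative name, pick your own
        groups: adm,docker
        append: yes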

Manual tasks between provisioning and configuring

  1. We get the IP address manually from the Terraform output

  2. We update the hosts file manually

  3. We execute the Ansible command manually

We want to automate this complete process.

Part 2: Ansible Integration in Terraform

First, destroy the current setup by running terraform destroy

We're going to add a local-exec provisioner to our "main.tf" file. It invokes a local executable after the resource is created, and it runs on the machine executing Terraform, not on the resource itself!

Run pwd, copy the path, and paste it into the "working_dir" attribute.

We can use the --inventory flag to pass the host IP of the newly created server dynamically.

 provisioner "local-exec" {
      working_dir = "/home/sonali-rajput/Projects/ansible"
      command = "ansible-playbook --inventory ${self.public_ip}, --private-key ${var.ssh_private_key} --user ec2-user  deploy-docker-newuser.yml"
     }
  }

However, using provisioners is not the recommended approach in Terraform, as they are not idempotent.

Also, we might have an issue if the playbook gets executed before the server has even started, so Ansible needs to check whether the EC2 instance is ready.

Part 3: Automating Docker and Terraform Deployment with Ansible

For that, we'll add a play to the Ansible playbook that checks whether the server is accessible on its SSH port. The play runs from the control machine (ansible_connection: local), so it can poll the port before any SSH connection to the new host is attempted.

In the file here add:

- name: wait for ssh connection
  hosts: all
  tasks: 
    - name: ensure ssh port open
      wait_for:
        port: 22
        delay: 10
        timeout: 100
        host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
        search_regex: OpenSSH
      vars:
        ansible_connection: local

You can read more about it in the Ansible wait_for module documentation.

Another way to execute a provisioner

If you want to separate it from the aws_instance resource and have it as a separate resource in Terraform, you can do that by using a resource type called "null_resource".

resource "null_resource" "configure_server" {
    triggers = {
      trigger = aws_instance.myapp-server.public_ip
    }
    provisioner "local-exec" {
      working_dir = "/home/sonali-rajput/Projects/ansible"
      command = "ansible-playbook --inventory ${aws_instance.myapp-server.public_ip}, --private-key ${var.ssh_private_key} --user ec2-user  deploy-docker-newuser.yml"
    }
  }

As we have added a new provider (null_resource comes from the null provider), we have to run terraform init again and then terraform apply

SSH into the server with ssh ec2-user@<server-ip> and check that the containers are running with sudo docker ps

If you want to check that your application is running, type <your-server-ip>:<port-number> in your browser, the port being whatever your docker-compose file exposes.

For more, please consider following me here so you don't miss any of my blogs, and follow me on Twitter.

Thank you so much for reading!
