Ansible is a configuration management tool used widely across the IT industry. It has grown so quickly because it's open source, built on the popular Python programming language, and uses YAML for its configuration, making it easy to learn and use.

Why use Ansible with AWS?

Ansible falls along similar lines to Puppet or Chef. AWS supports those configuration tools through its OpsWorks service, but it doesn't support Ansible. So why would you want to use Ansible over Chef or Puppet with OpsWorks?

Reasons to use Ansible


Fully Open-Source

Ansible is fully open source. While it is part of Red Hat's Ansible Automation Platform portfolio, like many Red Hat products it has an open-source upstream version, which Red Hat then builds on and customises for its paying customers.

Large Selection of Modules

Since Ansible is open source and built on popular tools and languages, the community has created, and continues to maintain, a huge number of modules. A core set ships in the ansible.builtin collection (bundled with Ansible itself), and whole community and vendor collections add many more valuable modules on top.
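As a sketch, collections like these can be pinned in a requirements.yml file and installed with the ansible-galaxy CLI; the collection names below are common examples, not a required set:

```yaml
# requirements.yml - install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: amazon.aws       # AWS modules, lookups, and inventory plugins
  - name: community.aws    # additional community-maintained AWS modules
```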


Agentless Push Methodology

Configuration management solutions follow one of a couple of methodologies. Pull (for example, Puppet) relies on a locally installed agent that periodically reaches out for updated configuration code. Ansible instead uses a push methodology and is agentless: the control node connects out to each host and applies your configuration over standardised remote-access protocols such as SSH, WinRM, or SSM.
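As a minimal sketch of the push model, a playbook like the one below is applied from the control node over SSH via `ansible-playbook -i inventory.ini site.yml` - no agent on the targets (the webservers group name and the chrony package are illustrative assumptions):

```yaml
# site.yml - Ansible connects out to each host in the 'webservers'
# group and pushes this desired state; nothing runs on the targets
# except the standard SSH daemon.
- name: Ensure a baseline on all web servers
  hosts: webservers
  become: true
  tasks:
    - name: Make sure chrony is installed
      ansible.builtin.package:
        name: chrony
        state: present
```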

Support for the AWS Services You'll Need

I won't say native support, as most of the AWS modules live in the amazon.aws collection. This collection is hosted publicly on Ansible Galaxy for all to download and use, and it contains many great modules that let you fully integrate Ansible into your AWS automation pipelines. Some good examples of integration are:

  • Pulling data from AWS Secrets Manager or AWS Systems Manager Parameter Store
  • Building dynamic inventories from data gathered from EC2 or RDS
  • Managing S3 buckets, DynamoDB tables, VPCs, and EC2 instances
  • Creating and managing CloudFormation stacks
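To give a flavour of these integrations, here is a hedged sketch using one amazon.aws lookup and one module. The parameter path and bucket name are made up for illustration, it assumes valid AWS credentials in the environment, and note that older releases of the collection named the lookup aws_ssm rather than ssm_parameter:

```yaml
# aws-examples.yml - illustrative only; '/myapp/db_host' and the
# bucket name are placeholder values, not part of this build.
- name: Examples of amazon.aws integration
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Read a value from Systems Manager Parameter Store
      ansible.builtin.debug:
        msg: "{{ lookup('amazon.aws.ssm_parameter', '/myapp/db_host') }}"

    - name: Ensure an S3 bucket exists
      amazon.aws.s3_bucket:
        name: my-example-bucket
        state: present
```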

While I wouldn't recommend Ansible as a solution for automating AWS end-to-end, it is a competent tool for automating the configuration and maintenance of lift-and-shift infrastructure. In addition, pairing it with AWS's services makes building a modern cloud environment easier, even if you have some legacy infrastructure.

Easy to learn and develop

By far one of the best reasons to use Ansible. These days, when technical talent is so hard to find and many teams are strained for time, using a tool that is widely supported, easy to learn, and fast to develop with is crucial. In addition, your teams can run Ansible locally on their machines against containers or VMs (no test environment needed), allowing you to develop quickly.
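For instance, a quick local experiment needs nothing more than a playbook targeting localhost - a sketch, with an arbitrary scratch-file path chosen for illustration:

```yaml
# local.yml - run with: ansible-playbook local.yml
# Targets the control machine itself, so no inventory, remote
# hosts, or test environment are required.
- name: Experiment locally, no remote hosts required
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Write a scratch file to prove the run worked
      ansible.builtin.copy:
        dest: /tmp/ansible-local-test.txt
        content: "Hello from a local Ansible run\n"
```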

The Example Build

Ok, great. Now you know why using Ansible is a great idea - but to learn the tool, we need to deploy something realistic, practical, and helpful.

Architecture Diagram

Keep in mind that there are some things we’ll be deploying as part of this that aren’t included in this diagram, but these are part of CodePipeline and CodeBuild, so I’m not too worried about showing them here. The main thing to note is the core services we’ll be utilising - CodeBuild, CodePipeline, and Secrets Manager.

How it works

So, before deploying this, let’s understand exactly how it will work.

AWS CodeCommit

We’ll be using AWS CodeCommit to store our Ansible Code.

AWS CodePipeline

CodePipeline will be our tool of choice to manage our pipeline. It’s very affordable, and the pipeline will only be triggered when a code change is detected, meaning we only execute the pipeline when we change our configuration.

AWS CodeBuild

CodePipeline calls CodeBuild after its source stage (which downloads the code from our repository and places it in an artifact S3 bucket). CodeBuild then downloads the source code from the artifact bucket and runs it. It needs to be attached to a subnet with internet access through a NAT gateway, which is why the networking is configured the way it is.
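What CodeBuild actually runs is driven by a buildspec file in the repository. A rough sketch of what ours might look like follows - the file names and install commands here are assumptions for illustration, not the final pipeline definition:

```yaml
# buildspec.yml - a sketch of the CodeBuild job: install Ansible
# and the AWS collection, then push the playbook to our targets.
version: 0.2
phases:
  install:
    commands:
      - pip install ansible
      - ansible-galaxy collection install amazon.aws
  build:
    commands:
      - ansible-playbook -i inventory.ini site.yml
```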

Secrets Manager

Secrets Manager allows us to store key/value data in a secure location secured by strict access controls and encrypted by AWS. CodeBuild will pull down the SSH key from Secrets Manager and use it to connect to our Webserver EC2 Instance. Pulling our credentials from Secrets Manager makes our pipeline more secure as we can keep our credentials somewhere safe (and out of the repository or plain-text variables).
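One way to fetch that key from within Ansible itself is the Secrets Manager lookup in the amazon.aws collection - a sketch, where the secret name webserver-ssh-key is an assumption, and older releases of the collection called this lookup aws_secret:

```yaml
# get-key.yml - write the SSH private key from Secrets Manager to
# disk with tight permissions so Ansible can use it to connect.
- name: Fetch the web server SSH key from Secrets Manager
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Write the key to disk with owner-only permissions
      ansible.builtin.copy:
        content: "{{ lookup('amazon.aws.secretsmanager_secret', 'webserver-ssh-key') }}"
        dest: ./webserver_key.pem
        mode: "0600"
```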


Ansible

Inarguably the most crucial component here, Ansible will configure our Webserver EC2 Instance using the credentials we pulled down from Secrets Manager. The web server will only be a simple Nginx server, but it will show us how to deploy configurations with Ansible in AWS and keep the process secure.
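A first cut of that web server playbook might look like this - the webservers group name is an assumption, and the nginx package name can differ slightly across distributions:

```yaml
# webserver.yml - install Nginx and make sure it is running and
# comes back after a reboot.
- name: Configure the Nginx web server
  hosts: webservers
  become: true
  tasks:
    - name: Install Nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure Nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```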

Final Notes

So, in the following article, we’ll deploy this infrastructure and start using it!

For now, though, here are some good articles you can read which will give you a better understanding of the technologies we’ll be using:

  1. Getting Started - Ansible Documentation
  2. AWS CodePipeline - AWS

Keep learning, folks.