Table of contents
- Prerequisites Before Installation
- 1. Installing Terraform and Ansible
- 2. Creating Directories for Terraform and Ansible
- 3. Setting Up Infrastructure Directory in Terraform (With File Content)
- 1. Creating the Infrastructure Directory and Adding Files:
- 2. Go Back to the Terraform Directory and Add Main Infrastructure Files:
- 3. Final Directory Structure:
- Next Steps
- 1. Run Terraform Commands
- 2. Check Provisioned Resources
- 3. Secure the Private Key
- 4. Access EC2 Instances via SSH
- 5. Setting Up Ansible
- 6. Create Playbook for Installing Nginx
- 7. Verify Directory Structure
- 8. Initializing Roles for Nginx using Ansible Galaxy
- 9. Add update_inventories.sh Script
- 10. Final Directory Structure
- 11. Infrastructure Destruction
- Conclusion of the Project
Introduction
This project demonstrates how to set up a reliable and scalable infrastructure for different environments (development, staging, and production) using Terraform for creating resources and Ansible for configuring them. The focus is on automation, scalability, and following best practices throughout the process.
Prerequisites Before Installation
1. Setting Up the Ubuntu OS Environment
You need an Ubuntu 20.04 or 22.04 LTS environment. This can be one of the following:
Local machine running Ubuntu OS.
AWS EC2 instance with Ubuntu.
Virtual machine (e.g., VirtualBox or VMware) running Ubuntu.
Provisioning an Ubuntu EC2 Instance (Optional)
If you're using AWS, follow these steps:
Log in to the AWS Management Console.
Navigate to EC2 > Launch Instances.
Choose an Ubuntu AMI (e.g., Ubuntu 20.04 LTS).
Select an instance type (e.g., t2.micro for free tier eligibility).
Configure security groups to allow:
SSH (port 22)
HTTP (port 80)
HTTPS (port 443)
Launch the instance and connect using:
ssh -i your-key.pem ubuntu@your-ec2-public-ip
2. AWS CLI
Install and configure the AWS CLI to interact with AWS services:
Install the AWS CLI on Ubuntu:
sudo apt-get update
sudo apt-get install awscli -y
aws --version
Configure the AWS CLI:
aws configure
Enter your Access Key ID, Secret Access Key, region, and output format when prompted.
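To confirm the CLI can reach your account with the configured credentials, a quick read-only check is:
aws sts get-caller-identity
This prints the account ID and the ARN of the IAM user or role behind the keys you just entered.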
3. Access Keys and Permissions
Obtain AWS IAM Access Keys (Access Key ID and Secret Access Key).
Ensure the IAM user/role has permissions for EC2, S3, and IAM resources.
4. AMI Information
Have the AMI ID for the OS image you plan to use.
You can find it via AWS Console or using AWS CLI:
aws ec2 describe-images --filters "Name=name,Values=your-ami-name"
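For example, to find the most recent Ubuntu AMI published by Canonical (owner ID 099720109477, the same default used later in variable.tf), a query along these lines should work:
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/*amd64*" "Name=state,Values=available" \
  --query "sort_by(Images, &CreationDate)[-1].ImageId" \
  --output text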
5. Network Configurations
Ensure you have the following network details:
VPC
Subnet IDs
Security Groups
Key Pairs
These are needed for deploying resources within your network.
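If you are unsure of any of these values, the AWS CLI can list what already exists in your account; the commands below are read-only:
aws ec2 describe-vpcs --query "Vpcs[].[VpcId,IsDefault]" --output table
aws ec2 describe-subnets --query "Subnets[].[SubnetId,VpcId,AvailabilityZone]" --output table
aws ec2 describe-security-groups --query "SecurityGroups[].[GroupId,GroupName]" --output table
aws ec2 describe-key-pairs --query "KeyPairs[].KeyName" --output text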
6. User Account with Sudo Privileges
Ensure the user account has sudo privileges to install packages and make system-level changes.
7. Basic Knowledge of YAML and HCL
Ansible uses YAML for playbooks.
Terraform uses HCL (HashiCorp Configuration Language) for infrastructure definitions.
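If either language is new to you, the two snippets below give a feel for each; they are purely illustrative and are not files used later in this project.
A minimal Ansible task in YAML:
- name: Ensure nginx is installed
  apt:
    name: nginx
    state: present
A minimal Terraform resource in HCL:
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}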
1. Installing Terraform and Ansible
a. Installing Terraform on Ubuntu
Update the package list:
sudo apt-get update
Install dependencies:
sudo apt-get install -y gnupg software-properties-common
Add HashiCorp's GPG Key:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
Add the HashiCorp repository:
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
Install Terraform:
sudo apt-get update && sudo apt-get install terraform
Verify installation:
terraform --version
b. Installing Ansible on Ubuntu
Add the Ansible PPA:
sudo apt-add-repository ppa:ansible/ansible
Update the package list:
sudo apt update
Install Ansible:
sudo apt install ansible
Verify installation:
ansible --version
2. Creating Directories for Terraform and Ansible
Organize your project with two separate directories:
Navigate to your project directory:
mkdir <your-project-name> && cd <your-project-name>
Create a directory for Terraform:
mkdir terraform
Create a directory for Ansible:
mkdir ansible
Verify the directory structure:
tree
Your project structure should look like this:
<your-project-name>/
├── terraform/
└── ansible/
This structure keeps your Terraform scripts (for infrastructure provisioning) and Ansible playbooks (for server configuration) separate and organized.
3. Setting Up Infrastructure Directory in Terraform (With File Content)
After creating the infra directory, add basic configurations to each Terraform file to provision the essential AWS resources.
1. Creating the Infrastructure Directory and Adding Files:
Navigate to the Terraform Directory:
cd terraform
Create the infra Directory:
mkdir infra && cd infra
Create and Populate the Terraform Files:
bucket.tf (S3 Bucket Configuration):

resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "${var.env}-rahul-devops-bucket"

  tags = {
    Name        = "${var.env}-rahul-devops-bucket"
    Environment = var.env
  }
}
dynamodb.tf (DynamoDB Table Configuration):

resource "aws_dynamodb_table" "my_table" {
  name         = "${var.env}-devops-db-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "userId"

  attribute {
    name = "userId"
    type = "S"
  }

  tags = {
    Name        = "${var.env}-devops-db"
    Environment = var.env
  }
}
ec2.tf (EC2 Instance Configuration):

resource "aws_key_pair" "my_key_pair" {
  key_name   = "${var.env}-devops-key"
  public_key = file("devops-key.pub")
}

resource "aws_default_vpc" "default" {}

resource "aws_security_group" "my_sg" {
  name        = "${var.env}-devops-sg"
  description = "This is the security group for every instance"
  vpc_id      = aws_default_vpc.default.id

  ingress {
    description = "Allow access to port 22 for SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow access to port 80 for HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow access to port 443 for HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outgoing traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.env}-devops-sg"
    Environment = var.env
  }
}

resource "aws_instance" "my_instance" {
  count           = var.instance_count
  ami             = var.ami
  instance_type   = var.instance_type
  key_name        = aws_key_pair.my_key_pair.key_name
  security_groups = [aws_security_group.my_sg.name]

  root_block_device {
    volume_size = var.instance_volume_size
    volume_type = "gp3"
  }

  tags = {
    Name        = "${var.env}-devops-instance"
    Environment = var.env
  }
}
output.tf (Output Definitions):

output "instance_public_ips" {
  description = "Public IPs of all servers in this environment"
  value       = aws_instance.my_instance[*].public_ip
}
variable.tf (Variable Declarations):

variable "env" {
  description = "Environment name, e.g. dev, stg, prod"
  type        = string
}

variable "instance_type" {
  description = "Instance type, e.g. t2.micro, t2.medium"
  type        = string
}

variable "instance_count" {
  description = "Number of instances for each environment"
  type        = number
}

variable "instance_volume_size" {
  description = "Root volume size (in GiB) for each instance"
  type        = number
}

variable "ami" {
  description = "AMI ID to use for the EC2 instances (passed in from main.tf)"
  type        = string
}

variable "aws_instance_os_distro" {
  description = "Operating system image filter for selecting an appropriate AMI (e.g., Ubuntu 20.04)"
  type        = string
  default     = "ubuntu/images/hvm-ssd/*amd64*"
}

variable "aws_ami_owners" {
  description = "Owner ID of the AMI to use. Default is Canonical (for Ubuntu AMIs)."
  type        = string
  default     = "099720109477"
}
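The aws_instance_os_distro and aws_ami_owners variables are not referenced by the files above. If you prefer to look up the AMI dynamically instead of passing an AMI ID from main.tf, a data source along these lines could make use of them (an optional sketch, not part of the setup that follows):

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = [var.aws_ami_owners]

  filter {
    name   = "name"
    values = [var.aws_instance_os_distro]
  }
}

# ec2.tf could then use: ami = data.aws_ami.ubuntu.id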
Verify the File Structure:
tree
Your structure should look like this:
infra/
├── bucket.tf
├── dynamodb.tf
├── ec2.tf
├── output.tf
└── variable.tf
2. Go Back to the Terraform Directory and Add Main Infrastructure Files:
Go Back to the Terraform Directory:
cd ..
Create the main.tf File (Using Modules for Multi-Environment Setup):

# dev-infrastructure
module "dev-infra" {
  source               = "./infra"
  env                  = "dev"
  instance_count       = 2
  instance_type        = "t2.micro"
  ami                  = "ami-03fd334507439f4d1"
  instance_volume_size = 8
}

# stg-infrastructure
module "stg-infra" {
  source               = "./infra"
  env                  = "stg"
  instance_count       = 2
  instance_type        = "t2.micro"
  ami                  = ""
  instance_volume_size = 8
}

# prod-infrastructure
module "prod-infra" {
  source               = "./infra"
  env                  = "prod"
  instance_count       = 3
  instance_type        = "t2.micro"
  ami                  = ""
  instance_volume_size = 8
}

output "dev_infra_instance_public_ips" {
  value = module.dev-infra.instance_public_ips
}

output "stg_infra_instance_public_ips" {
  value = module.stg-infra.instance_public_ips
}

output "prod_infra_instance_public_ips" {
  value = module.prod-infra.instance_public_ips
}
Create the providers.tf File (AWS Provider Configuration):

provider "aws" {
  region = "eu-west-1"
}
Create the terraform.tf File (Required Provider Versions):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.80.0"
    }
  }
}
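The terraform.tf file above only pins the provider version. If you also want Terraform to store its state remotely in S3 and lock it with DynamoDB, a backend block roughly like this could be added to it (an optional sketch with placeholder names: the bucket and table must exist before terraform init, and a lock table needs a LockID string hash key, unlike the userId table defined in infra/dynamodb.tf):

terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"   # placeholder: an existing S3 bucket
    key            = "multi-env/terraform.tfstate"   # path of the state file inside the bucket
    region         = "eu-west-1"
    dynamodb_table = "your-terraform-lock-table"     # placeholder: table with a LockID (S) hash key
    encrypt        = true
  }
}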
Generate SSH Keys (devops-key and devops-key.pub):

ssh-keygen -t rsa -b 2048 -f devops-key -N ""

This generates:
devops-key (private key)
devops-key.pub (public key)
3. Final Directory Structure:
Your final directory structure should look like this:
├── devops-key # Private SSH key for EC2 access
├── devops-key.pub # Public SSH key for EC2 access
├── infra
│ ├── bucket.tf
│ ├── dynamodb.tf
│ ├── ec2.tf
│ ├── output.tf
│ └── variable.tf
├── main.tf # Defines environment-based modules
├── providers.tf # AWS provider configuration
├── terraform.tf # Backend configuration for state management
Next Steps
1. Run Terraform Commands
terraform init
Initialize Terraform with the required providers and modules.
terraform plan
Review the changes Terraform will make to your infrastructure.
terraform apply
Apply the changes and provision the infrastructure.
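After terraform apply completes, the public IPs declared in the outputs of main.tf can be read back at any time:
terraform output
terraform output -json dev_infra_instance_public_ips
The JSON form is what the update_inventories.sh script in the Ansible section later relies on.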
2. Check Provisioned Resources
Instances
List of EC2 instances running or created.Buckets
List of S3 buckets running or created.DynamoDB tables
List of DynamoDB tables running or created.
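A quick way to confirm these resources from the command line (assuming the AWS CLI is configured for the same account and region) is:
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].[InstanceId,PublicIpAddress,Tags[?Key=='Name']|[0].Value]" --output table
aws s3 ls
aws dynamodb list-tables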
3. Secure the Private Key
Run the following command to set proper permissions and secure the private key:
chmod 400 devops-key # Set read-only permissions for the owner to ensure security
This ensures the private key (devops-key) is only accessible by you.
4. Access EC2 Instances via SSH
Use the following command to SSH into EC2 instances using the generated private key:
ssh -i devops-key ubuntu@<your-ec2-ip>
5. Setting Up Ansible
5.1. Create Dynamic Inventories Directory
Navigate to the Ansible directory you created earlier.
Step 1: Create the inventories directory with one inventory file per environment:
mkdir -p inventories && touch inventories/dev inventories/stg inventories/prod
5.2. Add Inventory Content for Each Environment
For the dev environment (inventories/dev):

[servers]
server1 ansible_host=3.249.218.238
server2 ansible_host=34.241.195.105

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/rahul/devops-key
ansible_python_interpreter=/usr/bin/python3

For the stg environment (inventories/stg):

[servers]
server1 ansible_host=34.244.89.121
server2 ansible_host=34.242.151.189

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/rahul/devops-key
ansible_python_interpreter=/usr/bin/python3

For the prod environment (inventories/prod):

[servers]
server1 ansible_host=3.252.144.3
server2 ansible_host=63.34.12.124
server3 ansible_host=34.244.48.139

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/rahul/devops-key
ansible_python_interpreter=/usr/bin/python3
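Once the inventory files are in place, a quick connectivity check (assuming the key path in [servers:vars] matches where your devops-key actually lives) is:
ansible -i inventories/dev servers -m ping
Each host should respond with "ping": "pong" if SSH access and Python are working.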
5.3. Directory Structure
After setting up the inventories, the resulting directory structure should look like:
ansible
└── inventories
    ├── dev
    ├── prod
    └── stg
6. Create Playbook for Installing Nginx
6.1. Create the Playbooks Directory
Navigate to the Ansible directory and create the playbooks directory:
mkdir playbooks
6.2. Create the install_nginx_playbook.yml File
Navigate to the playbooks directory and create the install_nginx_playbook.yml file:

---
- name: Install Nginx and render a webpage to it
  hosts: servers
  become: yes
  roles:
    - nginx-role
7. Verify Directory Structure
After completing the above steps, your Ansible directory structure should look like this:
ansible
├── inventories
│ ├── dev
│ ├── prod
│ └── stg
└── playbooks
└── install_nginx_playbook.yml
8. Initializing Roles for Nginx using Ansible Galaxy
Steps to initialize the nginx-role using Ansible Galaxy:
Navigate to the playbooks Directory
If you're not already in the playbooks directory, use the following command:
cd ansible/playbooks
Initialize the nginx-role Using Ansible Galaxy
Run this command to initialize the nginx-role:
ansible-galaxy role init nginx-role
This creates the following directory structure for the nginx-role:
nginx-role
├── README.md
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
Add Custom Tasks and Files
Add tasks/main.yml:
Under the nginx-role/tasks/ directory, create a main.yml file with the following tasks:

---
# tasks file for nginx-role
- name: Install nginx
  apt:
    name: nginx
    state: latest

- name: Enable nginx
  service:
    name: nginx
    enabled: yes

- name: Deploy webpage
  copy:
    src: index.html
    dest: /var/www/html
This ensures that:
Nginx is installed and the latest version is used.
Nginx is enabled to start automatically.
The index.html file is copied to /var/www/html so Nginx serves the custom page instead of its default.
Add a Custom index.html File:
Create a file index.html under the nginx-role/files/ directory. Example content:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Amitabh's DevOps Journey</title>
  <style>
    body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; background-color: #121212; color: #e0e0e0; }
    header { background: #1e1e2f; color: #f5f5f5; text-align: center; padding: 50px 20px; }
    header h1 { font-size: 40px; color: #ff6f61; }
    footer { text-align: center; background: #181818; color: #bdbdbd; padding: 20px; }
    footer a { color: #ff6f61; }
  </style>
</head>
<body>
  <header>
    <h1>Rahul Advance DevOps Journey</h1>
  </header>
  <footer>
    <p>Created by Rahul | <a href="https://www.linkedin.com/in/amrahulgupta/">LinkedIn</a> | <a href="https://github.com/irahulgupta/terraform-ansible-multi-env-1">GitHub</a></p>
  </footer>
</body>
</html>
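With the role and inventories in place, the playbook can be run against each environment. A typical invocation from the ansible directory (swap dev for stg or prod as needed) looks like:
cd /path/to/ansible
ansible-playbook -i inventories/dev playbooks/install_nginx_playbook.yml
Once it finishes, opening http://<instance-public-ip> in a browser should show the custom page from index.html.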
9. Add update_inventories.sh Script
Create the Script
Create a new update_inventories.sh script in your Ansible directory with the following content (the script uses jq to parse Terraform's JSON output, so install it with sudo apt install jq if it's not already present):

#!/bin/bash

# Paths and Variables
TERRAFORM_OUTPUT_DIR="/path/to/terraform"
ANSIBLE_INVENTORY_DIR="/path/to/ansible/inventories"

cd "$TERRAFORM_OUTPUT_DIR" || { echo "Terraform directory not found"; exit 1; }

DEV_IPS=$(terraform output -json dev_infra_instance_public_ips | jq -r '.[]')
STG_IPS=$(terraform output -json stg_infra_instance_public_ips | jq -r '.[]')
PROD_IPS=$(terraform output -json prod_infra_instance_public_ips | jq -r '.[]')

update_inventory_file() {
  local ips="$1"
  local inventory_file="$2"
  local env="$3"

  > "$inventory_file"
  echo "[servers]" >> "$inventory_file"

  local count=1
  for ip in $ips; do
    echo "server${count} ansible_host=$ip" >> "$inventory_file"
    count=$((count + 1))
  done

  echo "" >> "$inventory_file"
  echo "[servers:vars]" >> "$inventory_file"
  echo "ansible_user=ubuntu" >> "$inventory_file"
  echo "ansible_ssh_private_key_file=/path/to/key" >> "$inventory_file"
  echo "ansible_python_interpreter=/usr/bin/python3" >> "$inventory_file"

  echo "Updated $env inventory: $inventory_file"
}

update_inventory_file "$DEV_IPS" "$ANSIBLE_INVENTORY_DIR/dev" "dev"
update_inventory_file "$STG_IPS" "$ANSIBLE_INVENTORY_DIR/stg" "stg"
update_inventory_file "$PROD_IPS" "$ANSIBLE_INVENTORY_DIR/prod" "prod"

echo "All inventory files updated successfully!"
Verify the Directory Structure
Your Ansible directory should look like this:

ansible
├── inventories
│   ├── dev
│   ├── prod
│   └── stg
├── playbooks
│   ├── install_nginx_playbook.yml
│   └── nginx-role
│       ├── README.md
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       │   └── index.html
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       ├── templates
│       ├── tests
│       │   ├── inventory
│       │   └── test.yml
│       └── vars
│           └── main.yml
└── update_inventories.sh
Make the Script Executable
Run the following command to make the script executable:
chmod +x update_inventories.sh

Run the Script
Execute the script to update the inventory files with Terraform's IPs:
./update_inventories.sh

Verify the Inventory Files
The dev, stg, and prod inventory files should now contain the updated IPs and the necessary variables.
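You can also double-check how Ansible parses an inventory file with the built-in viewer:
ansible-inventory -i inventories/dev --list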
10. Final Directory Structure
After all configurations, your project structure should look like this:
.
├── README.md
├── ansible
│ ├── inventories
│ │ ├── dev
│ │ ├── prod
│ │ └── stg
│ ├── playbooks
│ │ ├── install_nginx_playbook.yml
│ │ └── nginx-role
│ │ ├── README.md
│ │ ├── defaults
│ │ │ └── main.yml
│ │ ├── files
│ │ │ └── index.html
│ │ ├── handlers
│ │ │ └── main.yml
│ │ ├── meta
│ │ │ └── main.yml
│ │ ├── tasks
│ │ │ └── main.yml
│ │ ├── templates
│ │ ├── tests
│ │ │ ├── inventory
│ │ │ └── test.yml
│ │ └── vars
│ │ └── main.yml
│ └── update_inventories.sh
└── terraform
├── infra
│ ├── bucket.tf
│ ├── dynamodb.tf
│ ├── ec2.tf
│ ├── output.tf
│ └── variable.tf
├── main.tf
├── providers.tf
├── terraform.tf
├── terraform.tfstate
└── terraform.tfstate.backup
11. Infrastructure Destruction
After completing the project, you can clean up the resources using the following steps:
Navigate to the Terraform Directory
cd /path/to/terraform/directory
Run Terraform Destroy
Execute this command to destroy all resources:
terraform destroy --auto-approve
Note: the --auto-approve flag skips the manual approval prompt and destroys the infrastructure immediately.
Resources Destroyed
This will destroy:
EC2 instances
S3 buckets
DynamoDB tables and any other resources provisioned during the setup
Find my GitHub repo here: https://github.com/irahulgupta/terraform-ansible-multi-env-1
Conclusion of the Project
Congratulations! You have:
Set up infrastructure with Terraform (EC2, S3, databases).
Configured Nginx on servers using Ansible.
Managed dynamic inventories and automated server configurations.
Cleaned up resources by destroying infrastructure after completing the project.
You can now apply these skills to real-world scenarios to manage and automate infrastructure across different environments effectively!