Ansible Lab with Vagrant & VirtualBox

Tech Stack:

Vagrant, VirtualBox, Ansible, Ubuntu, CentOS, Bash, YAML


Project Goal

To build a fully automated and reproducible multi-node infrastructure lab using Vagrant and Ansible, capable of provisioning and configuring heterogeneous Linux systems for DevOps experimentation and skill-building.


Project Description

This project sets up a local virtual environment consisting of:

  • One control node: Ubuntu 22.04
  • Four managed nodes: three CentOS Stream 9 (web01, web02, db01) and one Ubuntu (web03)
  • All nodes connected through a private network (192.168.56.0/24)

A custom Vagrantfile defines all five VMs and provisions the control node via provision-control.sh, which:

  • Installs Ansible
  • Configures SSH agent forwarding
  • Generates ansible.cfg, a grouped hosts.ini, and a YAML inventory

Ansible then automates all post-deployment tasks through the post-install role.


Infrastructure Architecture

Vagrant Provisions

  • Control Node: Ubuntu 22.04, 1 GB RAM, 1 vCPU, static IP 192.168.56.10
  • CentOS Clients: web01, web02, db01 with static IPs 192.168.56.11, 192.168.56.12, and 192.168.56.14
  • Ubuntu Client: web03, IP 192.168.56.13
  • Private Network: Static IPs for all VMs within 192.168.56.0/24
  • SSH Agent Forwarding: Configured for seamless and passwordless access between nodes

Provisioning Logic (Shell + Ansible)

provision-control.sh (Shell)

Executed only on the control node, this script:

  • Installs Ansible from the official PPA
  • Configures SSH for agent forwarding and disables host key checking
  • Generates:
    • ansible.cfg
    • hosts.ini grouped into [web] and [db]
    • inventory (YAML format)

inventory (YAML format)

  • Defines host groups: webservers, dbservers, and dc_oregon (parent group)
  • Sets host-specific variables (ansible_host)
  • Sets group-wide configuration:
    • ansible_user: vagrant
    • ansible_ssh_private_key_file
    • ansible_python_interpreter
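
A minimal sketch of what such a YAML inventory could look like, using the host names and addresses from the architecture above; the key path and interpreter value are placeholders rather than the project's exact settings:

  all:
    children:
      dc_oregon:                      # parent group spanning both child groups
        children:
          webservers:
            hosts:
              web01:
                ansible_host: 192.168.56.11
              web02:
                ansible_host: 192.168.56.12
              web03:
                ansible_host: 192.168.56.13
          dbservers:
            hosts:
              db01:
                ansible_host: 192.168.56.14
        vars:
          ansible_user: vagrant
          ansible_ssh_private_key_file: ~/.ssh/id_rsa     # placeholder path
          ansible_python_interpreter: /usr/bin/python3    # placeholder value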

Ansible Role: post-install

Executed via provisioning.yaml, this role automates full post-deployment configuration across all systems.
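
A minimal sketch of what the calling playbook could look like, assuming the role is applied to the dc_oregon parent group and that its tasks require root privileges:

  ---
  - name: Post-deployment configuration for all lab nodes
    hosts: dc_oregon        # parent group from the YAML inventory
    become: true            # assumed: package and service tasks need root
    roles:
      - post-install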

Package Installation (Conditional by OS)
  • CentOS Nodes: Installs chrony, wget, git, zip, unzip using yum
  • Ubuntu Nodes: Installs ntp, wget, git, zip, unzip using apt
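
A sketch of how these conditional tasks might be written, using the standard yum and apt modules and the os_family fact; the task names and list layout are illustrative:

  - name: Install base packages on CentOS nodes
    ansible.builtin.yum:
      name: [chrony, wget, git, zip, unzip]
      state: present
    when: ansible_facts['os_family'] == "RedHat"

  - name: Install base packages on Ubuntu nodes
    ansible.builtin.apt:
      name: [ntp, wget, git, zip, unzip]
      state: present
      update_cache: true
    when: ansible_facts['os_family'] == "Debian"
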
Time Synchronization Setup
  • Enables and starts:
    • chronyd on RedHat-based systems
    • ntp on Debian/Ubuntu systems
  • Deploys templated NTP config files:
    • ntp_redhat.conf.j2 or ntp_debian.conf.j2
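
One way the role could express this, using the template and service modules; the destination paths are assumptions, not confirmed from the project:

  - name: Deploy NTP configuration (RedHat)
    ansible.builtin.template:
      src: ntp_redhat.conf.j2
      dest: /etc/chrony.conf          # assumed destination path
    when: ansible_facts['os_family'] == "RedHat"

  - name: Deploy NTP configuration (Debian)
    ansible.builtin.template:
      src: ntp_debian.conf.j2
      dest: /etc/ntp.conf             # assumed destination path
    when: ansible_facts['os_family'] == "Debian"

  - name: Enable and start the time synchronization service
    ansible.builtin.service:
      name: "{{ 'chronyd' if ansible_facts['os_family'] == 'RedHat' else 'ntp' }}"
      state: started
      enabled: true
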
User and System Configuration
  • Creates system group: devops
  • Adds users defined in the usernames variable to that group
  • Customizes /etc/motd with an Ansible-managed banner
  • Creates working directory: /opt/devdata with 0775 permissions
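
A sketch of the corresponding tasks; the usernames variable is named by the project, while the motd banner text is an illustrative placeholder:

  - name: Create the devops group
    ansible.builtin.group:
      name: devops
      state: present

  - name: Add defined users to the devops group
    ansible.builtin.user:
      name: "{{ item }}"
      groups: devops
      append: true
    loop: "{{ usernames }}"

  - name: Set an Ansible-managed login banner
    ansible.builtin.copy:
      dest: /etc/motd
      content: "This server is managed by Ansible.\n"     # placeholder banner

  - name: Create the working directory
    ansible.builtin.file:
      path: /opt/devdata
      state: directory
      mode: "0775"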

Outcomes

  • Reusable Infrastructure: Built a repeatable multi-OS lab with a single vagrant up
  • Fully Automated Provisioning: From OS install to final service configuration via a single Ansible role
  • Cross-platform Experience: Managed both Debian- and RedHat-based systems under a unified playbook
  • Secure & Dynamic SSH Access: Configured SSH agent forwarding and managed private key usage
  • Role-Based Ansible Design: Modular role post-install controls all logic with OS-aware conditions
  • Clean Project Structure: Organized with clear separation of playbooks, roles, inventory, variables, and templates

GitHub: View Repository