
Ansible-Terraform Integration

This integration allows you to provision AWS infrastructure using Terraform while maintaining a single source of truth in config/splunk_config.yml. Ansible playbooks automatically generate Terraform configuration and manage the infrastructure lifecycle.

Quick Start

1. Configure Your Infrastructure

Edit config/splunk_config.yml and add a terraform section:

terraform:
  aws:
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"  # Redhat 9
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"
    security_group_names: ["Splunk_Basic"]
    instance_type: "t2.micro"  # Default for all hosts
    root_volume_size: 50       # Default root volume size in GB

splunk_hosts:
  - name: idx1
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"

2. Provision Infrastructure

ansible-playbook ansible/provision_terraform_aws.yml

This will:

  1. Generate terraform/aws/terraform.tfvars from your config
  2. Run terraform init
  3. Run terraform plan and show you the changes
  4. Prompt for confirmation
  5. Apply the changes
  6. Wait for AWS status checks to pass (ensures instances are fully ready)
  7. Generate inventory/hosts with all provisioned instances

Note: Terraform now waits for AWS instance status checks (both instance and system checks) to pass before completing. This ensures instances are fully ready for Ansible deployment. To enable optional quick SSH verification during provisioning, use -e verify_ssh=true.

3. Verify Host Readiness (Optional)

For comprehensive readiness checks beyond AWS status checks:

ansible-playbook ansible/wait_for_terraform_aws_hosts.yml

This playbook performs additional checks:

  • Ansible ping connectivity test
  • Python availability verification
  • System uptime validation
  • Cloud-init completion verification

This step is optional - Terraform already ensures instances pass AWS status checks before completing.

4. Deploy Splunk

The playbook creates a standard Ansible inventory file that's automatically picked up:

# Ansible automatically uses inventory/hosts
ansible-playbook ansible/deploy_site.yml

5. Destroy Infrastructure

ansible-playbook ansible/destroy_terraform_aws.yml

Configuration Structure

Global Terraform Settings

Define defaults in the terraform.aws section:

terraform:
  aws:
    # AWS Configuration
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"
    
    # SSH Configuration
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"
    
    # Security
    security_group_names: ["Splunk_Basic"]
    
    # Optional: AWS Credentials Override
    # access_key_id: "YOUR_ACCESS_KEY"
    # secret_access_key: "YOUR_SECRET_KEY"
    
    # Default Instance Settings
    instance_type: "t2.micro"
    root_volume_size: 50
    
    # Optional: Global Additional Volumes
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 100
        volume_type: "gp3"

# OS Configuration (Optional)
os:
  # Optional: Run command on instances after creation
  remote_command: |
    #!/bin/bash
    sudo apt-get install -y acl  # needed on Ubuntu for Ansible to work

Remote Command Execution:

  • Commands run via SSH after instance creation and status checks
  • Executes before Ansible deployment
  • If command fails, Terraform apply fails (prevents bad deployments)
  • Supports multi-line bash scripts
  • Can use sudo for privileged operations

Per-Host Terraform Settings

Override global defaults for specific hosts:

splunk_hosts:
  - name: idx1
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        root_volume_type: "gp3"
        root_volume_encrypted: true
        
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"
            encrypted: true
            delete_on_termination: true
        
        additional_tags:
          Role: "Indexer"
          Tier: "Production"

Host Generation with iter

Generate multiple hosts with sequential numbering:

splunk_hosts:
  # Creates idx01, idx02, idx03
  - iter:
      prefix: idx
      numbers: "01..03"
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
  
  # Creates sh1-prod, sh2-prod
  - iter:
      prefix: sh
      numbers: "1..2"
      postfix: "-prod"
    roles: [search_head]

Features:

  • numbers: Range like "01..10" (numbers are zero-padded to the width of the end number)
  • prefix: Optional text before number
  • postfix: Optional text after number
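The expansion rules above can be sketched in a few lines. The expand_iter helper is hypothetical, written here only to illustrate how prefix, numbers, and postfix combine (it is not the playbooks' actual implementation):

```python
def expand_iter(prefix: str = "", numbers: str = "", postfix: str = "") -> list[str]:
    # Split a range spec like "01..03" and pad each number to the
    # width of the end number, as described in the Features list above.
    start, end = numbers.split("..")
    width = len(end)
    return [f"{prefix}{str(n).zfill(width)}{postfix}"
            for n in range(int(start), int(end) + 1)]

print(expand_iter(prefix="idx", numbers="01..03"))
# ['idx01', 'idx02', 'idx03']
print(expand_iter(prefix="sh", numbers="1..2", postfix="-prod"))
# ['sh1-prod', 'sh2-prod']
```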

Configuration Precedence

Settings are merged in this order (later overrides earlier):

  1. Defaults from defaults/aws.yml
  2. Global terraform.aws settings
  3. Per-host terraform.aws settings

Example:

# defaults/aws.yml
terraform:
  aws:
    instance_type: "t2.micro"        # Lowest priority
    ssh_username: "ec2-user"

# config/splunk_config.yml
terraform:
  aws:
    instance_type: "t3.small"        # Overrides default
    
splunk_hosts:
  - name: idx1
    terraform:
      aws:
        instance_type: "c5.4xlarge"  # Highest priority for this host
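The precedence above amounts to a recursive dictionary merge, later layers winning key by key. A minimal sketch (deep_merge is illustrative, not the playbooks' actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    # Recursively merge override into base; values in override win,
    # but nested dicts are merged rather than replaced wholesale.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# The three layers from the example above, lowest priority first.
defaults   = {"instance_type": "t2.micro", "ssh_username": "ec2-user"}
global_cfg = {"instance_type": "t3.small"}
per_host   = {"instance_type": "c5.4xlarge"}

effective = deep_merge(deep_merge(defaults, global_cfg), per_host)
print(effective["instance_type"])  # c5.4xlarge (per-host wins)
print(effective["ssh_username"])   # ec2-user (inherited from defaults)
```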

Playbook Options

Generate terraform.tfvars Only

ansible-playbook ansible/provision_terraform_aws.yml --tags generate

Run Terraform Init Only

ansible-playbook ansible/provision_terraform_aws.yml --tags init

Run Terraform Plan Only

ansible-playbook ansible/provision_terraform_aws.yml --tags plan

Auto-Approve (Skip Confirmation)

ansible-playbook ansible/provision_terraform_aws.yml -e auto_approve=true

View Outputs Only

ansible-playbook ansible/provision_terraform_aws.yml --tags outputs

Files Created

Generated by Playbook

  • terraform/aws/terraform.tfvars - Terraform variables (auto-generated, DO NOT EDIT)
  • inventory/hosts - Ansible inventory with connection details
  • terraform/aws/tfplan - Terraform plan file (temporary)

Managed by Terraform

  • terraform/aws/terraform.tfstate - Infrastructure state
  • terraform/aws/terraform.tfstate.backup - State backup
  • terraform/aws/.terraform/ - Provider plugins

Inventory Format

The generated inventory/hosts file contains all provisioned instances:

# Ansible Inventory - Generated from Terraform
# DO NOT EDIT - This file is auto-generated by provision_terraform_aws.yml

idx1 ansible_host=54.93.174.46 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/aws_key.pem private_ip=172.31.22.82 public_dns_name=ec2-54-93-174-46.eu-central-1.compute.amazonaws.com private_dns_name=ip-172-31-22-82.eu-central-1.compute.internal
idx2 ansible_host=3.76.29.76 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/aws_key.pem private_ip=172.31.29.206 public_dns_name=ec2-3-76-29-76.eu-central-1.compute.amazonaws.com private_dns_name=ip-172-31-29-206.eu-central-1.compute.internal

This file is automatically picked up by Ansible via ansible.cfg.
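Each generated line is a host name followed by key=value pairs, so downstream tooling can parse it trivially. A parsing sketch, assuming the format shown above (parse_inventory_line is a hypothetical helper, not part of the playbooks):

```python
def parse_inventory_line(line: str) -> tuple[str, dict]:
    # First token is the inventory host name; the rest are
    # space-separated key=value host variables.
    name, *pairs = line.split()
    return name, dict(pair.split("=", 1) for pair in pairs)

host, hostvars = parse_inventory_line(
    "idx1 ansible_host=54.93.174.46 ansible_user=ec2-user "
    "private_ip=172.31.22.82"
)
print(host, hostvars["ansible_host"])  # idx1 54.93.174.46
```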


AWS Credentials

Option 1: Environment Variables (Recommended)

export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
ansible-playbook ansible/provision_terraform_aws.yml

Option 2: Config File Override

terraform:
  aws:
    access_key_id: "your-access-key"
    secret_access_key: "your-secret-key"

⚠️ Security Note: Don't commit credentials to version control. Use environment variables or AWS credential files.


Complete Example

terraform:
  aws:
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"
    security_group_names: ["Splunk_Basic"]
    instance_type: "t2.micro"
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 100
        volume_type: "gp3"

splunk_hosts:
  # Cluster Manager
  - name: cm
    roles: [cluster_manager, license_manager]
    terraform:
      aws:
        instance_type: "t3.large"
        root_volume_size: 50
  
  # Indexers with iter
  - iter:
      prefix: idx
      numbers: "01..03"
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"
  
  # Search Heads
  - iter:
      prefix: sh
      numbers: "1..2"
    roles: [search_head]
    terraform:
      aws:
        instance_type: "t3.large"
  
  # Deployment Server (uses global defaults)
  - name: ds
    roles: [deployment_server]

Requirements

  • Ansible 2.9+
  • community.general collection
  • Terraform 1.3.0+
  • AWS CLI (for instance status checks)
  • AWS credentials configured

Install requirements:

# Ansible collection
ansible-galaxy collection install community.general

# AWS CLI (if not already installed)
# macOS
brew install awscli

# Linux
pip install awscli

Troubleshooting

"terraform section is missing"

Ensure config/splunk_config.yml has a terraform.aws section with required fields (region, ami_id, key_name, etc.).

"Module community.general.terraform not found"

Install the collection:

ansible-galaxy collection install community.general

"aws: command not found" or status check errors

The AWS CLI is required for instance status checks. Install it:

# macOS
brew install awscli

# Linux
pip install awscli

# Verify installation
aws --version

"The security group 'xxx' does not exist"

Ensure security groups exist in AWS before provisioning. The playbook looks up security group IDs from names.

Generated tfvars looks wrong

Run with --tags generate and inspect terraform/aws/terraform.tfvars:

ansible-playbook ansible/provision_terraform_aws.yml --tags generate
cat terraform/aws/terraform.tfvars

Want to see detailed Terraform output

The playbook displays Terraform plan output. For more details, run Terraform directly:

cd terraform/aws
terraform plan

Inventory not found

The inventory file is created at inventory/hosts after running the provision playbook. Ensure you've run:

ansible-playbook ansible/provision_terraform_aws.yml

Migration Notes

From Vagrant AWS Plugin

The old Vagrant AWS plugin used these variables which now map to:

Old (Vagrant)            New (Terraform)
aws.access_key_id        terraform.aws.access_key_id
aws.secret_access_key    terraform.aws.secret_access_key
aws.region               terraform.aws.region
aws.ami                  terraform.aws.ami_id
aws.instance_type        terraform.aws.instance_type
aws.keypair_name         terraform.aws.key_name
aws.ssh_username         terraform.aws.ssh_username
aws.security_groups      terraform.aws.security_group_names
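The mapping above can be applied mechanically when migrating a config. The migrate_aws_block helper below is illustrative only (it is not shipped with the playbooks), and assumes the old aws: block contains only the keys listed:

```python
# Old Vagrant-era key -> new terraform.aws key, per the table above.
KEY_MAP = {
    "access_key_id": "access_key_id",
    "secret_access_key": "secret_access_key",
    "region": "region",
    "ami": "ami_id",
    "instance_type": "instance_type",
    "keypair_name": "key_name",
    "ssh_username": "ssh_username",
    "security_groups": "security_group_names",
}

def migrate_aws_block(old: dict) -> dict:
    # Rename each old key and nest the result under terraform.aws.
    return {"terraform": {"aws": {KEY_MAP[k]: v for k, v in old.items()}}}

new_cfg = migrate_aws_block({"ami": "ami-03cbad7144aeda3eb",
                             "keypair_name": "aws_key"})
print(new_cfg["terraform"]["aws"])
# {'ami_id': 'ami-03cbad7144aeda3eb', 'key_name': 'aws_key'}
```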

From block_device_mapping

Old format:

aws:
  block_device_mapping:
    - DeviceName: "/dev/sdg"
      Ebs.VolumeSize: 500

New format:

terraform:
  aws:
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 500
        volume_type: "gp3"

Best Practices

  1. Use iter for multiple similar hosts instead of repeating configuration
  2. Define global defaults in terraform.aws for common settings
  3. Override per-host only when needed
  4. Use environment variables for AWS credentials, not config files
  5. Run --tags plan before applying to preview changes
  6. Use remote state for terraform.tfstate where possible; state files can contain sensitive data, so avoid committing them to version control
  7. Don't edit generated files (terraform.tfvars, inventory/hosts)

Additional Resources