This integration allows you to provision AWS infrastructure using Terraform while maintaining a single source of truth in `config/splunk_config.yml`. Ansible playbooks automatically generate the Terraform configuration and manage the infrastructure lifecycle.
Edit `config/splunk_config.yml` and add a `terraform` section:

```yaml
terraform:
  aws:
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"  # Red Hat 9
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"
    security_group_names: ["Splunk_Basic"]
    instance_type: "t2.micro"  # Default for all hosts
    root_volume_size: 50       # Default root volume size in GB

splunk_hosts:
  - name: idx1
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"
```

Then run the provisioning playbook:

```shell
ansible-playbook ansible/provision_terraform_aws.yml
```

This will:
- Generate `terraform/aws/terraform.tfvars` from your config
- Run `terraform init`
- Run `terraform plan` and show you the changes
- Prompt for confirmation
- Apply the changes
- Wait for AWS status checks to pass (ensures instances are fully ready)
- Generate `inventory/hosts` with all provisioned instances
Note: Terraform now waits for AWS instance status checks (both instance and system checks) to pass before completing. This ensures instances are fully ready for Ansible deployment. To enable an optional quick SSH verification during provisioning, use `-e verify_ssh=true`.
For comprehensive readiness checks beyond AWS status checks:
```shell
ansible-playbook ansible/wait_for_terraform_aws_hosts.yml
```

This playbook performs additional checks:
- Ansible ping connectivity test
- Python availability verification
- System uptime validation
- Cloud-init completion verification
This step is optional - Terraform already ensures instances pass AWS status checks before completing.
The playbook creates a standard Ansible inventory file that's automatically picked up:

```shell
# Ansible automatically uses inventory/hosts
ansible-playbook ansible/deploy_site.yml
```

To destroy the provisioned infrastructure:

```shell
ansible-playbook ansible/destroy_terraform_aws.yml
```

Define defaults in the `terraform.aws` section:
```yaml
terraform:
  aws:
    # AWS Configuration
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"

    # SSH Configuration
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"

    # Security
    security_group_names: ["Splunk_Basic"]

    # Optional: AWS Credentials Override
    # access_key_id: "YOUR_ACCESS_KEY"
    # secret_access_key: "YOUR_SECRET_KEY"

    # Default Instance Settings
    instance_type: "t2.micro"
    root_volume_size: 50

    # Optional: Global Additional Volumes
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 100
        volume_type: "gp3"

    # OS Configuration (Optional)
    os:
      # Optional: Run command on instances after creation
      remote_command: |
        #!/bin/bash
        sudo apt-get install -y acl  # Needed for Ubuntu to have Ansible working
```

Remote Command Execution:
- Commands run via SSH after instance creation and status checks
- Executes before Ansible deployment
- If command fails, Terraform apply fails (prevents bad deployments)
- Supports multi-line bash scripts
- Can use sudo for privileged operations
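The `remote_command` example above uses `apt-get`, which targets Ubuntu AMIs, while the default AMI in this guide is Red Hat 9. A hedged sketch of the equivalent for a RHEL-family image (the package choice is an assumption, not something the playbooks require):

```yaml
terraform:
  aws:
    os:
      remote_command: |
        #!/bin/bash
        # RHEL family uses dnf instead of apt-get (package name assumed)
        sudo dnf install -y acl
```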
Override global defaults for specific hosts:
```yaml
splunk_hosts:
  - name: idx1
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        root_volume_type: "gp3"
        root_volume_encrypted: true
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"
            encrypted: true
            delete_on_termination: true
        additional_tags:
          Role: "Indexer"
          Tier: "Production"
```

Generate multiple hosts with sequential numbering:
```yaml
splunk_hosts:
  # Creates idx01, idx02, idx03
  - iter:
      prefix: idx
      numbers: "01..03"
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"

  # Creates sh1-prod, sh2-prod
  - iter:
      prefix: sh
      numbers: "1..2"
      postfix: "-prod"
    roles: [search_head]
```

Features:

- `numbers`: Range like `"01..10"` (zero-padded based on the end number)
- `prefix`: Optional text before the number
- `postfix`: Optional text after the number
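As a rough illustration of the zero-padding rule (not the playbook's actual implementation), the `idx` + `"01..03"` expansion behaves like:

```shell
# Sketch only: pad each number to the width of the range's end number
# (here "03" is 2 digits wide), then prepend the prefix
for n in 1 2 3; do
  printf 'idx%02d\n' "$n"
done
# prints idx01, idx02, idx03
```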
Settings are merged in this order (later overrides earlier):
1. Defaults from `defaults/aws.yml`
2. Global `terraform.aws` settings
3. Per-host `terraform.aws` settings
Example:
```yaml
# defaults/aws.yml
terraform:
  aws:
    instance_type: "t2.micro"  # Lowest priority
    ssh_username: "ec2-user"
```

```yaml
# config/splunk_config.yml
terraform:
  aws:
    instance_type: "t3.small"  # Overrides default

splunk_hosts:
  - name: idx1
    terraform:
      aws:
        instance_type: "c5.4xlarge"  # Highest priority for this host
```

Useful tags and options:

```shell
# Generate terraform.tfvars only
ansible-playbook ansible/provision_terraform_aws.yml --tags generate

# Run terraform init only
ansible-playbook ansible/provision_terraform_aws.yml --tags init

# Run terraform plan only
ansible-playbook ansible/provision_terraform_aws.yml --tags plan

# Apply without the confirmation prompt
ansible-playbook ansible/provision_terraform_aws.yml -e auto_approve=true

# Show Terraform outputs
ansible-playbook ansible/provision_terraform_aws.yml --tags outputs
```

Generated files:

- `terraform/aws/terraform.tfvars` - Terraform variables (auto-generated, DO NOT EDIT)
- `inventory/hosts` - Ansible inventory with connection details
- `terraform/aws/tfplan` - Terraform plan file (temporary)
- `terraform/aws/terraform.tfstate` - Infrastructure state
- `terraform/aws/terraform.tfstate.backup` - State backup
- `terraform/aws/.terraform/` - Provider plugins
The generated `inventory/hosts` file contains all provisioned instances:

```ini
# Ansible Inventory - Generated from Terraform
# DO NOT EDIT - This file is auto-generated by provision_terraform_aws.yml

idx1 ansible_host=54.93.174.46 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/aws_key.pem private_ip=172.31.22.82 public_dns_name=ec2-54-93-174-46.eu-central-1.compute.amazonaws.com private_dns_name=ip-172-31-22-82.eu-central-1.compute.internal
idx2 ansible_host=3.76.29.76 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/aws_key.pem private_ip=172.31.29.206 public_dns_name=ec2-3-76-29-76.eu-central-1.compute.amazonaws.com private_dns_name=ip-172-31-29-206.eu-central-1.compute.internal
```

This file is automatically picked up by Ansible via `ansible.cfg`.
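Because each inventory line is plain `key=value` pairs, individual fields can be extracted with standard tools. A sketch (it writes a sample line abridged from the output above; point the command at `inventory/hosts` in practice):

```shell
# Hypothetical helper: extract ansible_host (the public IP) for one host
cat > /tmp/sample_inventory <<'EOF'
idx1 ansible_host=54.93.174.46 ansible_user=ec2-user private_ip=172.31.22.82
EOF
awk -v host=idx1 '$1 == host {
  for (i = 2; i <= NF; i++)
    if (sub(/^ansible_host=/, "", $i)) print $i
}' /tmp/sample_inventory
# prints 54.93.174.46
```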
```shell
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
ansible-playbook ansible/provision_terraform_aws.yml
```

Alternatively, set credentials in `config/splunk_config.yml`:

```yaml
terraform:
  aws:
    access_key_id: "your-access-key"
    secret_access_key: "your-secret-key"
```
⚠️ Security Note: Don't commit credentials to version control. Use environment variables or AWS credential files.
```yaml
terraform:
  aws:
    region: "eu-central-1"
    ami_id: "ami-03cbad7144aeda3eb"
    key_name: "aws_key"
    ssh_private_key_file: "~/.ssh/aws_key.pem"
    ssh_username: "ec2-user"
    security_group_names: ["Splunk_Basic"]
    instance_type: "t2.micro"
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 100
        volume_type: "gp3"

splunk_hosts:
  # Cluster Manager
  - name: cm
    roles: [cluster_manager, license_manager]
    terraform:
      aws:
        instance_type: "t3.large"
        root_volume_size: 50

  # Indexers with iter
  - iter:
      prefix: idx
      numbers: "01..03"
    roles: [indexer]
    terraform:
      aws:
        instance_type: "c5.4xlarge"
        root_volume_size: 100
        additional_volumes:
          - device_name: "/dev/xvdb"
            volume_size: 500
            volume_type: "gp3"

  # Search Heads
  - iter:
      prefix: sh
      numbers: "1..2"
    roles: [search_head]
    terraform:
      aws:
        instance_type: "t3.large"

  # Deployment Server (uses global defaults)
  - name: ds
    roles: [deployment_server]
```

Requirements:

- Ansible 2.9+ with the `community.general` collection
- Terraform 1.3.0+
- AWS CLI (for instance status checks)
- AWS credentials configured
Install requirements:

```shell
# Ansible collection
ansible-galaxy collection install community.general

# AWS CLI (if not already installed)
# macOS
brew install awscli
# Linux
pip install awscli
```

Ensure `config/splunk_config.yml` has a `terraform.aws` section with required fields (`region`, `ami_id`, `key_name`, etc.).
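Before provisioning, you can confirm the required tools are on `PATH`; a convenience sketch, not part of the playbooks:

```shell
# Report which required tools are present on PATH
for tool in ansible-playbook terraform aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```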
Install the collection:

```shell
ansible-galaxy collection install community.general
```

The AWS CLI is required for instance status checks. Install it:

```shell
# macOS
brew install awscli
# Linux
pip install awscli

# Verify installation
aws --version
```

Ensure security groups exist in AWS before provisioning. The playbook looks up security group IDs from names.
Run with `--tags generate` and inspect `terraform/aws/terraform.tfvars`:

```shell
ansible-playbook ansible/provision_terraform_aws.yml --tags generate
cat terraform/aws/terraform.tfvars
```

The playbook displays Terraform plan output. For more details, run Terraform directly:

```shell
cd terraform/aws
terraform plan
```

The inventory file is created at `inventory/hosts` after running the provision playbook. Ensure you've run:

```shell
ansible-playbook ansible/provision_terraform_aws.yml
```

The old Vagrant AWS plugin used these variables, which now map to:
| Old (Vagrant) | New (Terraform) |
|---|---|
| `aws.access_key_id` | `terraform.aws.access_key_id` |
| `aws.secret_access_key` | `terraform.aws.secret_access_key` |
| `aws.region` | `terraform.aws.region` |
| `aws.ami` | `terraform.aws.ami_id` |
| `aws.instance_type` | `terraform.aws.instance_type` |
| `aws.keypair_name` | `terraform.aws.key_name` |
| `aws.ssh_username` | `terraform.aws.ssh_username` |
| `aws.security_groups` | `terraform.aws.security_group_names` |
Old format:

```yaml
aws:
  block_device_mapping:
    - DeviceName: "/dev/sdg"
      Ebs.VolumeSize: 500
```

New format:

```yaml
terraform:
  aws:
    additional_volumes:
      - device_name: "/dev/xvdb"
        volume_size: 500
        volume_type: "gp3"
```

Best practices:

- Use `iter` for multiple similar hosts instead of repeating configuration
- Define global defaults in `terraform.aws` for common settings
- Override per-host only when needed
- Use environment variables for AWS credentials, not config files
- Run `--tags plan` before applying to preview changes
- Commit `terraform.tfstate` to version control or use remote state
- Don't edit generated files (`terraform.tfvars`, `inventory/hosts`)
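For the remote-state option, a hedged sketch of an S3 backend block (the bucket name and key path are placeholders; this is one common approach, not something the playbooks generate):

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"      # placeholder name
    key     = "splunk/aws/terraform.tfstate"   # placeholder path
    region  = "eu-central-1"
    encrypt = true
  }
}
```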