Terraform: Manage Cloud Infrastructure as Code, Stop Clicking Around

Terraform is an Infrastructure as Code tool that manages resources on AWS, GCP, Cloudflare, and 3,000+ other providers using HCL. Covers installation, HCL syntax, state management, module structure, multi-environment strategy, and a working S3+CloudFront deployment example.

Every time you set up a new environment, you click through the cloud console one resource at a time. Three months later a colleague asks how it was built β€” and you’ve forgotten yourself. Staging and prod silently diverge until something breaks. Terraform describes all of it in code. plan previews changes, apply executes them. There’s a git history, it’s reviewable, and it can be rolled back.

Installation

Use tfenv to manage versions β€” same concept as nvm:

# macOS
brew install tfenv

# Install and switch to a specific version
tfenv install 1.10.5
tfenv use 1.10.5

# Pin version per project β€” tfenv reads this file automatically when you cd in
echo "1.10.5" > .terraform-version

If you don’t need version management, install directly via Homebrew:

brew tap hashicorp/tap
brew install hashicorp/tap/terraform
terraform --version

HCL Syntax Basics

Terraform uses HCL (HashiCorp Configuration Language) in .tf files. Declarative syntax β€” you describe what you want, not how to get it.

Resource: Create Cloud Resources

# Format: resource "provider_type" "local_name" { ... }
resource "aws_s3_bucket" "website" {
  bucket = "my-website-2026"

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

# Reference another resource's attributes β€” creates an implicit dependency
resource "aws_s3_bucket_versioning" "website" {
  bucket = aws_s3_bucket.website.id  # format: type.name.attribute

  versioning_configuration {
    status = "Enabled"
  }
}
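Resource blocks also accept meta-arguments like count and for_each to stamp out several copies from one definition. A minimal sketch with made-up bucket names:

```hcl
# Hypothetical: count creates N copies, indexed by count.index
resource "aws_s3_bucket" "logs" {
  count  = 3
  bucket = "my-logs-${count.index}"  # my-logs-0, my-logs-1, my-logs-2
}

# for_each is usually safer: keys are stable, so removing one
# entry doesn't shift the identities of the others
resource "aws_s3_bucket" "env_logs" {
  for_each = toset(["dev", "staging", "prod"])
  bucket   = "my-logs-${each.key}"
}
```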

Variable: Input Parameters

variable "environment" {
  type        = string
  description = "Deployment environment"
  default     = "dev"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Must be dev, staging, or prod."
  }
}

variable "instance_count" {
  type    = number
  default = 1
}

variable "db_password" {
  type      = string
  sensitive = true  # redacted from CLI output and logs
}
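Beyond string and number, HCL has collection and structural types. A sketch with made-up names:

```hcl
# Hypothetical collection-typed variables
variable "allowed_cidrs" {
  type    = list(string)
  default = ["10.0.0.0/16"]
}

variable "instance_config" {
  type = object({
    type  = string
    count = number
  })
  default = {
    type  = "t3.micro"
    count = 1
  }
}
```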

Ways to provide values (highest to lowest precedence):

# 1. CLI flag
terraform apply -var="environment=prod"

# 2. Var file flag
terraform apply -var-file="prod.tfvars"

# 3. terraform.tfvars (auto-loaded)
# 4. Environment variable
export TF_VAR_environment=prod

# 5. Default in the variable block

terraform.tfvars example:

environment    = "prod"
instance_count = 3

Output: Export Values

output "bucket_name" {
  value       = aws_s3_bucket.website.id
  description = "The S3 bucket name"
}

output "bucket_arn" {
  value = aws_s3_bucket.website.arn
}
terraform output bucket_name  # read a single output
terraform output              # read all outputs

Locals: Computed Values

Locals are named expressions reused within the configuration β€” not user inputs:

locals {
  bucket_name   = "${var.environment}-${var.project}-assets"
  instance_type = var.environment == "prod" ? "t3.large" : "t3.micro"

  common_tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_s3_bucket" "assets" {
  bucket = local.bucket_name   # reference with local.<name>
  tags   = local.common_tags
}
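A common companion pattern is the built-in merge() function, which lets shared tags live in one place while individual resources add their own. A sketch:

```hcl
# Hypothetical: extend the shared tags per resource with merge()
resource "aws_s3_bucket" "logs" {
  bucket = "${local.bucket_name}-logs"

  tags = merge(local.common_tags, {
    Purpose = "access-logs"  # resource-specific tag on top of the shared set
  })
}
```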

Data Source: Read Existing Resources

Data sources are read-only queries for infrastructure not managed by this Terraform configuration:

# Look up the latest Amazon Linux 2023 AMI
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# Reference with data.type.name.attribute
resource "aws_instance" "app" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
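Data sources also cover account-level facts, which avoids hard-coding IDs and regions. A minimal sketch:

```hcl
# Read the current AWS account and region at plan time
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```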

Core Workflow

# Initialize: download provider plugins, create .terraform/ directory
terraform init

# Preview: shows what WOULD happen, doesn't execute
# + create  - destroy  ~ update in place  -/+ destroy and recreate
terraform plan

# Apply: execute changes (prompts for confirmation)
terraform apply

# Save plan then apply (common in CI/CD)
terraform plan -out=tfplan
terraform apply tfplan

# Skip confirmation prompt (CI/CD pipelines)
terraform apply -auto-approve

# Other useful commands
terraform fmt           # format all .tf files in place
terraform validate      # check syntax
terraform state list    # list all resources in state
terraform destroy       # tear down all resources

Provider Configuration

Providers are plugins that connect to cloud platforms:

terraform {
  required_version = ">= 1.10.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.80"  # >= 5.80, < 6.0
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "ap-northeast-1"
  # Credentials from env vars: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

CloudFront requires ACM certificates in us-east-1. Use a provider alias:

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "cert" {
  provider          = aws.us_east_1  # specify the alias
  domain_name       = "example.com"
  validation_method = "DNS"
}

State Management

Terraform uses terraform.tfstate to track the resources it has created, along with their IDs and attributes. This file is the core of how Terraform operates.

Important: state can contain plaintext passwords and private keys β€” never commit it to git.

# .gitignore
*.tfstate
*.tfstate.backup
.terraform/
terraform.tfvars

# DO commit this β€” it pins provider versions
# .terraform.lock.hcl

Remote State (Required for Teams)

Local state doesn’t support collaboration. Store state in S3:

# Create the state bucket first (run this once)
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-company-terraform-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"  # keeps state file history for rollback
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Configure the backend in your main project:

terraform {
  backend "s3" {
    bucket       = "my-company-terraform-state"
    key          = "prod/web-app/terraform.tfstate"  # path within the bucket
    region       = "ap-northeast-1"
    use_lockfile = true  # Terraform 1.10+ S3 native locking, no DynamoDB needed
  }
}

use_lockfile = true was added in Terraform 1.10. It uses S3 conditional writes for state locking, preventing two concurrent apply runs from corrupting state. Previously you needed a separate DynamoDB table β€” that approach is now deprecated.
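A side benefit of remote state: other projects can read your outputs through the terraform_remote_state data source. A sketch assuming a separate network project that exposes a vpc_id output:

```hcl
# Hypothetical: read outputs from another project's state file
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-company-terraform-state"
    key    = "prod/network/terraform.tfstate"
    region = "ap-northeast-1"
  }
}

# Use an output the network project exposes (assumes it defines vpc_id)
resource "aws_security_group" "app" {
  name   = "app"
  vpc_id = data.terraform_remote_state.network.outputs.vpc_id
}
```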

Modules: Reusable Configuration

Modules package Terraform resources so different environments can share them.

Standard Directory Layout

modules/
  s3-website/
    main.tf       # resource definitions
    variables.tf  # input variable declarations
    outputs.tf    # output declarations
    versions.tf   # provider versions

environments/
  dev/
    main.tf       # calls modules with dev-specific values
    backend.tf
    terraform.tfvars
  prod/
    main.tf
    backend.tf
    terraform.tfvars

Writing a Module

# modules/s3-website/variables.tf
variable "bucket_name" {
  type = string
}

variable "environment" {
  type = string
}
# modules/s3-website/main.tf
resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id
  versioning_configuration {
    status = "Enabled"
  }
}
# modules/s3-website/outputs.tf
output "bucket_id" {
  value = aws_s3_bucket.this.id
}

output "bucket_arn" {
  value = aws_s3_bucket.this.arn
}

Calling a Module

# environments/prod/main.tf
module "website" {
  source = "../../modules/s3-website"  # relative path for local modules

  bucket_name = "my-company-website-prod"
  environment = "prod"
}

output "website_bucket" {
  value = module.website.bucket_id  # access module output
}
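Modules inherit the caller's default provider. To hand one an aliased provider instead (like the us-east-1 alias from earlier), use the providers meta-argument. A sketch with a hypothetical acm-cert module:

```hcl
# Hypothetical: a module that must create resources in us-east-1,
# e.g. ACM certificates for CloudFront
module "cert" {
  source = "../../modules/acm-cert"

  providers = {
    aws = aws.us_east_1  # the module's "aws" is the aliased provider
  }

  domain_name = "example.com"
}
```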

Public modules from the Terraform Registry:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "main-vpc"
  cidr = "10.0.0.0/16"
  azs  = ["ap-northeast-1a", "ap-northeast-1c"]
}

Multi-Environment Strategy

Two common approaches:

Workspaces: Same configuration, separate state files. Good for small teams where environments are nearly identical.

terraform workspace new dev
terraform workspace new prod
terraform workspace select prod
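Inside the configuration, terraform.workspace exposes the active workspace name, so one set of files can branch per environment. A sketch:

```hcl
# The current workspace name is available as terraform.workspace
locals {
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"
}

resource "aws_s3_bucket" "assets" {
  bucket = "myapp-${terraform.workspace}-assets"  # e.g. myapp-dev-assets
}
```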

Separate directories (recommended): Each environment is a completely independent Terraform root with its own backend and state. Better when prod needs strict access control.

environments/
  dev/   ← run terraform init / plan / apply here
  prod/  ← independent state, independent permissions

Practical Example: S3 + CloudFront Static Website

This example deploys a private S3 bucket with a CloudFront distribution using OAC (Origin Access Control) β€” the modern replacement for OAI.

# versions.tf
terraform {
  required_version = ">= 1.10.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.80"
    }
  }

  backend "s3" {
    bucket       = "my-company-tf-state"
    key          = "s3-website/terraform.tfstate"
    region       = "ap-northeast-1"
    use_lockfile = true
  }
}

provider "aws" {
  region = "ap-northeast-1"
}

# CloudFront + ACM certificates must be in us-east-1
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}
# main.tf
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

# ── S3 Bucket (private β€” CloudFront reads via OAC) ──
resource "aws_s3_bucket" "website" {
  bucket = var.bucket_name
  tags   = local.common_tags
}

resource "aws_s3_bucket_public_access_block" "website" {
  bucket                  = aws_s3_bucket.website.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# ── CloudFront OAC (replaces legacy OAI) ──
resource "aws_cloudfront_origin_access_control" "website" {
  name                              = "${var.bucket_name}-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# ── Bucket Policy: allow CloudFront only ──
data "aws_iam_policy_document" "website" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.website.arn}/*"]

    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.website.arn]
    }
  }
}

resource "aws_s3_bucket_policy" "website" {
  bucket     = aws_s3_bucket.website.id
  policy     = data.aws_iam_policy_document.website.json
  depends_on = [aws_s3_bucket_public_access_block.website]
}

# ── ACM Certificate (must be in us-east-1) ──
resource "aws_acm_certificate" "website" {
  provider          = aws.us_east_1
  domain_name       = var.domain_name
  validation_method = "DNS"
  tags              = local.common_tags

  lifecycle {
    create_before_destroy = true
  }
}

# ── CloudFront Distribution ──
resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  price_class         = "PriceClass_100"  # North America + Europe, cheapest tier
  aliases             = [var.domain_name]
  tags                = local.common_tags

  origin {
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_id                = "S3-${var.bucket_name}"
    origin_access_control_id = aws_cloudfront_origin_access_control.website.id
  }

  default_cache_behavior {
    target_origin_id       = "S3-${var.bucket_name}"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true

    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }

    default_ttl = 86400    # 1 day
    max_ttl     = 31536000 # 1 year
  }

  # SPA: redirect 404/403 to index.html
  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.website.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }
}
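One caveat: forwarded_values works but is the legacy way to configure caching; newer configurations attach a cache policy instead. A sketch of the alternative default_cache_behavior using AWS's managed CachingOptimized policy (verify the policy ID against AWS's managed-policy documentation before relying on it):

```hcl
# Alternative default_cache_behavior: managed cache policy
# instead of forwarded_values
default_cache_behavior {
  target_origin_id       = "S3-${var.bucket_name}"
  viewer_protocol_policy = "redirect-to-https"
  allowed_methods        = ["GET", "HEAD"]
  cached_methods         = ["GET", "HEAD"]
  compress               = true

  # AWS managed "CachingOptimized" policy (ID from AWS docs)
  cache_policy_id = "658327ea-f89d-4fab-a63d-7e88639e58f6"
}
```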
# outputs.tf
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.website.domain_name
}

output "cloudfront_id" {
  value = aws_cloudfront_distribution.website.id
}

output "bucket_name" {
  value = aws_s3_bucket.website.id
}

Deploy and upload:

terraform init
terraform plan
terraform apply

# Upload static files
aws s3 sync ./dist s3://$(terraform output -raw bucket_name) --delete

# Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id $(terraform output -raw cloudfront_id) \
  --paths "/*"

Common Pitfalls

State contains sensitive data: State stores all resource attributes, including plaintext passwords. sensitive = true only hides values in CLI output β€” they’re still written to state. Use a remote backend with encryption. Never use local state for production.

Configuration drift: Someone makes a manual change in the console, Terraform doesn’t know about it. Run terraform plan -refresh-only to detect drift without touching real infrastructure.

Importing existing resources: Use the import block (Terraform 1.5+) to bring existing resources under Terraform management:

import {
  to = aws_s3_bucket.legacy
  id = "my-existing-bucket-name"
}

Then run terraform plan -generate-config-out=generated.tf to auto-generate the resource block. Review it, move it into your config, and terraform apply.
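The inverse also exists: since Terraform 1.7, a removed block takes a resource out of state without destroying the real thing (what terraform state rm did imperatively):

```hcl
# Stop managing a resource without deleting it (Terraform 1.7+)
removed {
  from = aws_s3_bucket.legacy

  lifecycle {
    destroy = false  # drop from state, keep the actual bucket
  }
}
```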

Lifecycle protection: Add prevent_destroy to critical resources to prevent accidental deletion:

resource "aws_s3_bucket" "data" {
  bucket = "critical-data"

  lifecycle {
    prevent_destroy = true  # terraform destroy will error and stop
  }
}
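Related: ignore_changes tells Terraform to leave specific attributes alone when something outside Terraform (an autoscaler, another team) manages them. A sketch with made-up values:

```hcl
# Hypothetical: an autoscaler adjusts desired_count at runtime,
# so tell Terraform not to revert it on the next apply
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = "main"
  task_definition = "app:1"
  desired_count   = 2

  lifecycle {
    ignore_changes = [desired_count]
  }
}
```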

Lock provider versions: Without version constraints, terraform init installs the latest version, which may include breaking changes. Commit .terraform.lock.hcl to git so the whole team uses the same provider versions.

Summary

Terraform turns infrastructure into code that can be reviewed, version-controlled, and reused. plan shows you exactly what will happen before it happens. State is how Terraform tracks what currently exists. Modules let you share configurations across environments without copy-pasting.

Start small β€” import one existing resource, wrap it in code, and build from there. Every resource you move out of ClickOps becomes a change you can track in git.
