Backend for Terraform Backend

 

This post describes my approach to creating a backend for the Terraform backend. The problem is that Terraform cannot use a backend before it exists. Let’s illustrate the problem in practice with the small Terraform project below.

 

provider "aws" {
  shared_credentials_file = "../credentials"
  region                  = "eu-west-1"
  profile                 = "default"
}

terraform {
  backend "s3" {
    bucket         = "terraform-state-repository"
    key            = "terraform.tfstate"
    region         = "eu-west-1"
  }
}

resource "aws_s3_bucket" "backend_s3_bucket" {
  bucket = "terraform-state-repository"
  acl    = "private"

  versioning {
    enabled = true
  }
}

 

When you run terraform init , the command displays the following error:

 

Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes.
Error loading state: NoSuchBucket: The specified bucket does not exist status code: 404, request id: 6E45347C4440A01, host id: Lu792e7MlislUvLpaBHUIv764i3lchmBbX8dTu/dX/qq232m1yxIH+LufqILJNguqw82242sdcsc=

 

The problem here is that Terraform attempts to initialise the backend before creating any resources. This means that other approaches have to be used to store terraform.tfstate, the valuable file holding the state of your resources.

 

 

Alternative 1 – The aws_s3_bucket_object Resource


One of the ways to secure the state file is to use the aws_s3_bucket_object resource right after the bucket is created. The state file is written into the project directory by default, so the following configuration will suffice.

 

provider "aws" {
  shared_credentials_file = "../credentials"
  region                  = "eu-west-1"
  profile                 = "default"
}

resource "aws_s3_bucket" "terraform_backend_s3_bucket" {
  bucket = "terraform-state-repository"
  acl    = "private"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "terraform_backend_upload_state" {
  bucket       = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
  acl          = "private"
  key          = "terraform-backend/terraform.tfstate"
  source       = "terraform.tfstate"
  content_type = "application/json"

  depends_on = [
    "aws_s3_bucket.terraform_backend_s3_bucket",
  ]
}

 

Uploading the local state file after the bucket is created ensures that the state is safe even if the local copy is deleted. Notice the depends_on  directive: it prevents Terraform from executing terraform_backend_upload_state  before the bucket itself is created, which would either error out because the file is missing or upload an empty state file.

 

This configuration is far from ideal, especially when you want to output values such as the s3 bucket ARN so that they can be loaded into other projects. If you want to use Terraform’s output capability, you will have to run this project twice. This is because terraform.tfstate is populated as Terraform progresses through the project, and output variables are the last thing Terraform executes, so the state file gets uploaded to s3 before the output values are written to it. Before the second run, terraform state rm aws_s3_bucket_object.terraform_backend_upload_state  has to be executed so that Terraform is willing to upload the file again.
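The two-run workflow described above can be sketched as a sequence of commands (resource names taken from the example in this post):

terraform init
# First run: creates the bucket and uploads the state file,
# but the upload happens before the outputs are written to the state
terraform apply

# Forget the uploaded object so Terraform will upload it again
terraform state rm aws_s3_bucket_object.terraform_backend_upload_state

# Second run: re-uploads the state file, now containing the outputs
terraform apply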

 

Below is an example of how a similar project might look:

 

resource "aws_s3_bucket" "terraform_backend_s3_bucket" {
  bucket = "${var.terraform_backend_s3_bucket_name}"
  acl    = "private"
  tags   = "${var.terraform_backend_default_tags}"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "terraform_backend_upload_state" {
  bucket       = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
  acl          = "private"
  key          = "terraform-backend/terraform.tfstate"
  source       = "terraform.tfstate"
  content_type = "application/json"
  tags         = "${var.terraform_backend_default_tags}"

  depends_on = [
    "aws_dynamodb_table.terraform_backend_dynamodb_lock_table",
    "aws_s3_bucket.terraform_backend_s3_bucket",
  ]
}

output "terraform_backend_s3_bucket_arn" {
  value = "${aws_s3_bucket.terraform_backend_s3_bucket.arn}"
}

output "terraform_backend_s3_bucket_id" {
  value = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
}
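Note that depends_on above references a DynamoDB table, aws_dynamodb_table.terraform_backend_dynamodb_lock_table, that is not shown in the snippet. It is the table the s3 backend can use for state locking; a minimal definition might look like the following (the table name and read/write capacity here are assumptions, but the s3 backend does require a string hash key named LockID):

resource "aws_dynamodb_table" "terraform_backend_dynamodb_lock_table" {
  name           = "terraform-state-lock"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}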

 

 

Alternative 2 – Two Stage Backend Deployment


Another way of creating a backend for the Terraform backend is to create it in two stages. This option is, in my opinion, much cleaner than the first alternative. Deploy the backend s3 bucket first, then add the newly created s3 bucket as the backend to the project itself and run terraform init . Terraform will detect the local terraform.tfstate file and ask whether you want to copy it into the backend.

 

It would look something like this. First, create the backend bucket:

 

...

resource "aws_s3_bucket" "backend_s3_bucket_main" {
  bucket = "terraform-state-repository"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }
}

...

 

Then add the same bucket as the backend and run terraform init :

 

...

terraform {
  backend "s3" {
    bucket         = "terraform-state-repository"
    key            = "terraform.tfstate"
    region         = "eu-west-1"
  }
}

resource "aws_s3_bucket" "backend_s3_bucket_main" {
  bucket = "terraform-state-repository"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }
}

...
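The two stages then play out on the command line roughly like this:

# Stage 1: create the bucket, state is kept locally
terraform init
terraform apply

# Stage 2: after adding the backend block, re-initialise;
# Terraform detects the local terraform.tfstate and asks whether
# to copy it into the newly configured "s3" backend - answer yes
terraform init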

 

 

 

Terraform s3 Backend Robustness


Enabling versioning on the s3 bucket should go without question. Bucket versioning makes it easy to roll back or recover from erroneous actions.
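With versioning enabled, a previous revision of the state file can be inspected and restored with the AWS CLI. A sketch, assuming the bucket and key used in the first alternative (VERSION_ID is a placeholder for a version id returned by the first command):

# List all stored versions of the state file
aws s3api list-object-versions \
  --bucket terraform-state-repository \
  --prefix terraform-backend/terraform.tfstate

# Roll back by copying an older version over the current object
aws s3api copy-object \
  --bucket terraform-state-repository \
  --key terraform-backend/terraform.tfstate \
  --copy-source "terraform-state-repository/terraform-backend/terraform.tfstate?versionId=VERSION_ID"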

 

What if someone accidentally deletes my Terraform state bucket?

 

This is a good question that not everyone thinks about, but accidents do happen. If you lose your Terraform state repository, it could mean weeks of careful infrastructure mapping and importing back into Terraform, or even worse. One of the simplest ways to prevent even privileged users from deleting the bucket is to create a policy that denies deletion to everyone and attach it to the backend bucket. Below is a complete project that I personally use to accomplish this robustness. The examples below use the first alternative of creating a backend for the Terraform backend.

 

data "aws_iam_policy_document" "terraform_backend_s3_bucket_policy" {
  statement {
    actions = [
      "s3:DeleteBucket",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket.arn}",
      "${aws_s3_bucket.terraform_backend_s3_bucket.arn}/*",
    ]

    principals {
      type = "AWS"

      identifiers = [
        "*",
      ]
    }

    effect = "Deny"
  }
}

resource "aws_s3_bucket" "terraform_backend_s3_bucket" {
  bucket = "${var.terraform_backend_s3_bucket_name}"
  acl    = "private"
  tags   = "${var.terraform_backend_default_tags}"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "terraform_backend_upload_state" {
  bucket       = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
  acl          = "private"
  key          = "terraform-backend/terraform.tfstate"
  source       = "terraform.tfstate"
  content_type = "application/json"
  tags         = "${var.terraform_backend_default_tags}"

  depends_on = [
    "aws_dynamodb_table.terraform_backend_dynamodb_lock_table",
    "aws_s3_bucket.terraform_backend_s3_bucket",
    "aws_s3_bucket_policy.terraform_backend_attach_s3_policy",
  ]
}

output "terraform_backend_s3_bucket_arn" {
  value = "${aws_s3_bucket.terraform_backend_s3_bucket.arn}"
}

output "terraform_backend_s3_bucket_id" {
  value = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
}
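Note that depends_on in the upload resource references aws_s3_bucket_policy.terraform_backend_attach_s3_policy, which is not shown above. Following the same pattern as the replication project later in this post, it would attach the deny policy to the bucket:

resource "aws_s3_bucket_policy" "terraform_backend_attach_s3_policy" {
  bucket = "${aws_s3_bucket.terraform_backend_s3_bucket.id}"
  policy = "${data.aws_iam_policy_document.terraform_backend_s3_bucket_policy.json}"
}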

 

I still don't feel safe about a single policy protecting my hard work!

 

You can utilise AWS s3 cross-region replication, which ensures that the Terraform state files reside in two or more regions. Below is an example of how that can be achieved using Terraform. The project creates two buckets, one in Ireland and one in Frankfurt, then creates a set of policies that enable replication from Ireland to Frankfurt, and attaches the deletion-prevention policy mentioned earlier to both buckets.

 

# import providers
provider "aws" {
  shared_credentials_file = "../credentials"
  region                  = "eu-west-1"
  profile                 = "default"
}

provider "aws" {
  shared_credentials_file = "../credentials"
  region                  = "eu-central-1"
  profile                 = "default"
  alias                   = "frankfurt"
}

# create policies for protection against deletion
data "aws_iam_policy_document" "terraform_backend_s3_bucket_deletion_policy_main" {
  statement {
    actions = [
      "s3:DeleteBucket",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket_main.arn}",
      "${aws_s3_bucket.terraform_backend_s3_bucket_main.arn}/*",
    ]

    principals {
      type = "AWS"

      identifiers = [
        "*",
      ]
    }

    effect = "Deny"
  }
}

data "aws_iam_policy_document" "terraform_backend_s3_bucket_deletion_policy_replication" {
  statement {
    actions = [
      "s3:DeleteBucket",
      "s3:DeleteObject",
      "s3:DeleteObjectVersion",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket_replication.arn}",
      "${aws_s3_bucket.terraform_backend_s3_bucket_replication.arn}/*",
    ]

    principals {
      type = "AWS"

      identifiers = [
        "*",
      ]
    }

    effect = "Deny"
  }
}

# create assume role policy for main s3 bucket
data "aws_iam_policy_document" "terraform_backend_s3_bucket_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"

      identifiers = [
        "s3.amazonaws.com",
      ]
    }
  }
}

# create policy for main s3 bucket
data "aws_iam_policy_document" "terraform_backend_s3_bucket_replication_policy" {
  statement {
    actions = [
      "s3:GetReplicationConfiguration",
      "s3:ListBucket",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket_main.arn}",
    ]

    effect = "Allow"
  }

  statement {
    actions = [
      "s3:GetObjectVersion",
      "s3:GetObjectVersionAcl",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket_main.arn}/*",
    ]

    effect = "Allow"
  }

  statement {
    actions = [
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
    ]

    resources = [
      "${aws_s3_bucket.terraform_backend_s3_bucket_replication.arn}/*",
    ]

    effect = "Allow"
  }
}

# create a role for main s3 bucket
resource "aws_iam_role" "terraform_backend_s3_bucket_replication_role" {
  name               = "s3_bucket_replication_role"
  assume_role_policy = "${data.aws_iam_policy_document.terraform_backend_s3_bucket_assume_role_policy.json}"
}

# attach a policy to the role for main s3 bucket
resource "aws_iam_policy" "terraform_backend_s3_bucket_replication_policy" {
  name   = "s3_bucket_replication_policy"
  policy = "${data.aws_iam_policy_document.terraform_backend_s3_bucket_replication_policy.json}"
}

# attach completed role to the main s3 bucket
resource "aws_iam_policy_attachment" "terraform_backend_attach_replication_policy" {
  name = "attach_replication_policy"

  roles = [
    "${aws_iam_role.terraform_backend_s3_bucket_replication_role.name}",
  ]

  policy_arn = "${aws_iam_policy.terraform_backend_s3_bucket_replication_policy.arn}"
}

# create main s3 bucket
resource "aws_s3_bucket" "terraform_backend_s3_bucket_main" {
  bucket = "terraform-bucket-repository-main"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  replication_configuration {
    role = "${aws_iam_role.terraform_backend_s3_bucket_replication_role.arn}"

    rules {
      prefix = ""
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.terraform_backend_s3_bucket_replication.arn}"
        storage_class = "STANDARD"
      }
    }
  }
}

# create replication s3 bucket
resource "aws_s3_bucket" "terraform_backend_s3_bucket_replication" {
  bucket = "terraform-bucket-repository-replication"
  acl    = "private"
  region = "eu-central-1"

  provider = "aws.frankfurt"

  versioning {
    enabled = true
  }
}

# attach policy against deletion for main s3 bucket
resource "aws_s3_bucket_policy" "terraform_backend_attach_deletion_policy_main" {
  bucket = "${aws_s3_bucket.terraform_backend_s3_bucket_main.id}"
  policy = "${data.aws_iam_policy_document.terraform_backend_s3_bucket_deletion_policy_main.json}"
}

# attach policy against deletion for replication s3 bucket

resource "aws_s3_bucket_policy" "terraform_backend_attach_deletion_policy_replication" {
  bucket   = "${aws_s3_bucket.terraform_backend_s3_bucket_replication.id}"
  policy   = "${data.aws_iam_policy_document.terraform_backend_s3_bucket_deletion_policy_replication.json}"
  provider = "aws.frankfurt"
}

# upload the terraform state to the main s3 bucket
resource "aws_s3_bucket_object" "terraform_backend_upload_state" {
  bucket       = "${aws_s3_bucket.terraform_backend_s3_bucket_main.id}"
  acl          = "private"
  key          = "terraform-backend/terraform.tfstate"
  source       = "terraform.tfstate"
  content_type = "application/json"

  depends_on = [
    "aws_s3_bucket.terraform_backend_s3_bucket_main",
    "aws_s3_bucket.terraform_backend_s3_bucket_replication",
    "aws_s3_bucket_policy.terraform_backend_attach_deletion_policy_main",
    "aws_s3_bucket_policy.terraform_backend_attach_deletion_policy_replication",
  ]
}

 

With this in place, the backend state files are replicated to the Frankfurt s3 bucket.
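To confirm that replication is working, you can list the replica bucket in Frankfurt (bucket name taken from the example above):

# The replicated state object should appear shortly after the upload
aws s3 ls s3://terraform-bucket-repository-replication/terraform-backend/ --region eu-central-1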

Katapult Cloud