In this section we cover approaches to managing state with Digger's automation. Currently Digger does not interfere with Terraform's state management; we leave it up to users to manage state in their own accounts. This is because we realised that most users come to us with state already managed somewhere, usually an S3 bucket in their own account, and we didn't want to add an extra migration step when moving to Digger.

With that said, we do facilitate state management configuration within Digger in order to make it easy for teams to manage state in their own accounts. This guide covers one approach to doing so.

The example repo for this is here: https://github.com/diggerhq/states-test

In this example we have the following directory structure:

dev/
    main.tf
    tf_backend.tfbackend
staging/
    main.tf
    tf_backend.tfbackend
prod/
    main.tf
    tf_backend.tfbackend

Within each main.tf root module we define an empty backend block:

terraform {
  backend "s3" {

  }
}

We omit the backend bucket name, key, and region on purpose, since they are defined in the file tf_backend.tfbackend within the same directory:

bucket="digger-state-test"
key="/dev/terraform.tfstate"
region="us-east-1"
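The .tfbackend format here is a flat set of key="value" pairs. As a quick illustration (this is not part of Digger or Terraform, just a sketch for the flat subset shown above), such a file could be read like this:

```python
def parse_tfbackend(text: str) -> dict:
    """Parse simple key="value" lines from a .tfbackend file.

    Handles only the flat key/value subset used in this example;
    real backend config files are HCL and may contain more syntax.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

backend = parse_tfbackend('''
bucket="digger-state-test"
key="/dev/terraform.tfstate"
region="us-east-1"
''')
print(backend["bucket"])  # digger-state-test
```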

The same is done in staging/ and prod/; we treat this as a convention for all the root modules.
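To keep that convention consistent, the three backend files could be generated with a small shell loop (the bucket name and key paths below simply match the example above):

```shell
#!/bin/sh
# Generate a tf_backend.tfbackend per environment, following the
# convention in this example: same bucket, per-environment state key.
for env in dev staging prod; do
  mkdir -p "$env"
  cat > "$env/tf_backend.tfbackend" <<EOF
bucket="digger-state-test"
key="/$env/terraform.tfstate"
region="us-east-1"
EOF
done
```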

With that in place, we can configure digger to pass this additional configuration while running terraform as follows:

projects:
  - name: "dev"
    dir: "dev"
  - name: "staging"
    dir: "staging"
  - name: "prod"
    dir: "prod"


workflows:
  default:
    workflow_configuration:
      on_pull_request_pushed: ["digger plan"]
      on_pull_request_closed: ["digger unlock"]
      on_commit_to_default: ["digger unlock"]

    plan:
      steps:
        - init:
            extra_args: ["-backend-config=tf_backend.tfbackend"]
        - plan
    apply:
      steps:
        - init:
            extra_args: ["-backend-config=tf_backend.tfbackend"]
        - apply

The key part here is that we override the default workflow and pass the extra argument -backend-config=tf_backend.tfbackend to the init step of both the plan and apply workflows. This makes it easy to add additional states simply by adding a record for each new project in digger.yml. Once a PR is created and applied, we end up with a bucket containing three state files, one per environment:
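To make the mapping concrete: given the key="/&lt;env&gt;/terraform.tfstate" convention used above, the state objects that end up in the bucket follow directly from the project list in digger.yml. A small sketch (not Digger code) of that mapping:

```python
# Sketch: derive the expected S3 state object keys from the
# digger.yml project names, assuming each project's
# tf_backend.tfbackend uses key="/<env>/terraform.tfstate".
projects = ["dev", "staging", "prod"]

state_keys = [f"/{name}/terraform.tfstate" for name in projects]
for key in state_keys:
    print(f"s3://digger-state-test{key}")
```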