Digger works best as a 2-piece solution:
You can, however, still use the most basic features of Digger as a standalone action without a backend. To do that, set the following option in your workflow configuration:
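A minimal sketch of what this could look like in a GitHub Actions workflow step — the `no-backend` input name and the `@vLatest` ref are assumptions here, so verify them against the Digger action's documented inputs for your version:

```yaml
jobs:
  digger:
    runs-on: ubuntu-latest
    steps:
      # Input name assumed to be `no-backend`; check the action's docs.
      - uses: diggerhq/digger@vLatest
        with:
          no-backend: true
```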
You’d also need to add `pull_request` and `issue_comment` workflow triggers:
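The triggers use standard GitHub Actions syntax; the branch name and event types below are illustrative and should be adapted to your repository:

```yaml
on:
  # Run on PR activity so Digger can plan changed projects
  pull_request:
    branches: [ "main" ]
    types: [ opened, synchronize, reopened, closed ]
  # Run on PR comments so commands like "digger apply" are picked up
  issue_comment:
    types: [ created ]
```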
Historically, this was the original way of running Digger. The initial version, called “tfrun”, didn’t have a backend; it was just a GitHub Action. But it quickly became apparent that without some form of orchestration there’s only so much that can be done:
For many small teams this is more than enough, and it is quite easy to set up. If it works for you, please don’t hesitate to use Digger in this manner.
In order to function without a backend, Digger still needs to store information about PR locks so that it does not run `terraform plan` in two different PRs for the same Digger project (since that would cause them to step on each other). To achieve this, Digger creates a small resource in your cloud account to record which PR has locked which project. The type of resource depends on the cloud provider:
| Cloud Provider | Resource Type |
|---|---|
| AWS | DynamoDB table |
| GCP | GCS bucket |
| Azure | Storage Table |
In the case of AWS, Digger will create this resource for you during the first run. For GCP and Azure, however, you need to create it yourself and supply it as an argument.
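For GCP, for example, you would create a bucket yourself and point Digger at it from the workflow. The environment variable name below is an assumption for illustration — confirm the exact input name in Digger's documentation before relying on it:

```yaml
      - uses: diggerhq/digger@vLatest
        with:
          no-backend: true
        env:
          # Assumed variable name -- verify against Digger's docs.
          # The bucket must already exist in your GCP project.
          GOOGLE_STORAGE_LOCK_BUCKET: my-digger-locks-bucket
```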
Once the resource exists, Digger will continue to use it on subsequent runs to store lock information and function correctly.