You can, however, still use the most basic features of Digger as a standalone GitHub Action without a backend. To do that, set the following option in your workflow configuration:
```yaml
# add this to .github/workflows/digger_backendless_workflow.yml as an argument to the digger step
no-backend: true
```
You'll also need to add `pull_request` and `issue_comment` workflow triggers:
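A minimal backendless workflow might look like the sketch below. The action version, permissions, and the `GITHUB_CONTEXT` environment variable are assumptions based on typical Digger setups rather than a definitive configuration; check the Digger action documentation for the inputs your version expects (cloud credentials for Terraform state and locks are omitted here for brevity).

```yaml
# .github/workflows/digger_backendless_workflow.yml
# Minimal sketch of a backendless Digger workflow. The action reference,
# permissions and env vars below are assumptions; adapt them to your setup.
name: Digger (no backend)

on:
  pull_request:
    types: [ opened, synchronize, reopened, closed ]
  issue_comment:
    types: [ created ]

jobs:
  digger:
    runs-on: ubuntu-latest
    permissions:
      contents: read        # check out the repo
      pull-requests: write  # post plan/apply output as PR comments
      issues: read          # read "digger plan" / "digger apply" comments
    steps:
      - uses: actions/checkout@v4
      - uses: diggerhq/digger@vLatest   # pin to a specific release in practice
        with:
          no-backend: true
        env:
          GITHUB_CONTEXT: ${{ toJson(github) }}
```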
Historically, this was the original way of running Digger: the initial version, called "tfrun", didn't have any backend; it was just a GitHub Action.
But it quickly became apparent that without some sort of orchestration there’s only so much that can be done:
- No concurrency; all plans / applies need to run sequentially, which is slow
- The action starts on every push or comment, often just to detect that there are no changes. That's expensive, especially in large repos.
- Clashing applies from other jobs will fail as they cannot be queued
- Buckets / tables for PR-level locks need to be configured manually in your cloud account
- Comments and status checks will be updated with a delay
For many small teams this is more than enough, and it is quite easy to set up; if it works for you, please don't hesitate to use Digger in this manner.
In order to function without a backend, Digger still needs to store information about PR locks so that it does not run `terraform plan` in two different PRs for the same Digger project (since those runs would step on each other). To achieve that, Digger creates a small resource in your cloud account to record which PR has locked which project. The type of resource depends on the cloud provider; here is what gets created:
| Cloud Provider | Resource Type |
| --- | --- |
| AWS | DynamoDB table |
| GCP | GCS bucket |
| Azure | Storage Tables |
In the case of AWS, Digger will create this resource for you during the first run. For GCP and Azure, however, you need to create it yourself and supply it as an argument. Once the resource exists, Digger will keep using it on subsequent runs to store lock information and function correctly.
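For GCP and Azure, that means provisioning the lock store up front. The snippet below is a rough sketch using the cloud CLIs; all resource names are placeholders, and the exact Digger input or environment variable used to pass the bucket or table name to the digger step is not shown here, so refer to the locking section of the Digger docs for your version.

```bash
# GCP: create a Cloud Storage bucket to hold Digger's PR locks
# (bucket name and location are placeholders - choose your own)
gcloud storage buckets create gs://my-digger-locks --location=EU

# Azure: create a storage account and a Storage Table for the locks
# (account, resource group and table names are placeholders)
az storage account create \
  --name mydiggerlocks \
  --resource-group my-rg \
  --location westeurope \
  --sku Standard_LRS
az storage table create \
  --name diggerlocks \
  --account-name mydiggerlocks
```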