Generate projects
Sometimes specifying every project in digger.yml by hand is suboptimal, especially with a large number of small statefiles. In this scenario you can use the generate_projects directive in digger.yml. Digger will then traverse the directory according to the include and exclude patterns provided and dynamically generate the list of projects.
Blocks syntax
You can also specify multiple include / exclude entries like this:
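A sketch of the blocks syntax, assuming illustrative folder and workflow names:

```yaml
generate_projects:
  blocks:
    - include: dev/**
      exclude: dev/sandbox/**
      workflow: dev
    - include: prod/**
      workflow: prod
```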
Note that there is a distinction between include_patterns / exclude_patterns and include / exclude. include and exclude are used to decide which projects get generated in the first place, whereas include_patterns and exclude_patterns are used to decide which additional folders should cause a generated project to trigger.
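For example, a block along these lines (folder names are illustrative) would generate projects from dev/** but also trigger them when files under modules/** change:

```yaml
generate_projects:
  blocks:
    - include: dev/**
      include_patterns: ["modules/**"]
```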
Traversing nested directories
You can optionally set the top-level argument traverse_to_nested_projects to generate a project for all sub-directories:
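A sketch of such a configuration; the environments/core path matches the description that follows, and whether you combine this with the simple or blocks syntax is up to you:

```yaml
traverse_to_nested_projects: true
generate_projects:
  blocks:
    - include: environments/core/**
```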
This will create a project for all sub-directories under environments/core. If set to false, only the first directory with a .tf file will be evaluated.
Blocks syntax with Terragrunt
You can use blocks generation with terragrunt as well. To achieve this, all you need to do is specify terragrunt: true for each block. Normally you would only have one terragrunt structure and therefore a single block entry. However, you may also want to specify different structures and different parameters for different folders. For example, you may have a dev, staging and prod account hierarchy, and would therefore specify one block per account. Alternatively, you may have different providers and would like to specify those per block as well. Note that for very large terragrunt monorepos, segregating by blocks will improve performance, since Digger will not unnecessarily traverse an entire tree if no files have changed within it.
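For instance, a per-account setup could look like this (the account folders and workflow names are illustrative):

```yaml
generate_projects:
  blocks:
    - include: dev/**
      terragrunt: true
      workflow: dev
    - include: staging/**
      terragrunt: true
      workflow: staging
    - include: prod/**
      terragrunt: true
      workflow: prod
```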
Caching digger config
In cases where there is a very large terragrunt monorepo and generation takes a lot of time, you can instruct Digger to cache the config. Digger will store a cache of the config, which you can refresh periodically to avoid regenerating it on every pull request.
To enable loading the config from cache on pull requests, you should set this flag:
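As a sketch, this is an environment variable set on the Digger backend; the name below is a placeholder, so check the backend documentation or source for the exact flag:

```bash
# Placeholder name for illustration -- verify the exact variable
# in the Digger backend documentation before using it.
export DIGGER_CONFIG_REPO_CACHE_ENABLED=1
```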
You also need to set the value of “DIGGER_INTERNAL_SECRET” to a secret that you generate. It will be used for calling the internal endpoint that refreshes the cache.
The internal URL to refresh the cache for a repo and a branch is /_internal/update_repo_cache. It expects a few arguments; here is an example curl request:
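A sketch of such a request, assuming a JSON body and that the internal secret is sent as a bearer token (your deployment's URL and the exact auth header may differ):

```bash
curl -X POST "https://your-digger-backend.example.com/_internal/update_repo_cache" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGGER_INTERNAL_SECRET" \
  -d '{
        "repo_full_name": "my-org/my-repo",
        "branch": "main",
        "installation_id": "12345678"
      }'
```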
Where:
- repo_full_name is the full name of the repo (org/repo)
- branch is the branch you want to cache from (this would always be your default branch)
- installation_id is your GitHub installation ID; it can be fetched from your GitHub app settings (the place where you installed the app from)
You should get back a response saying that caching has been initiated, and in the backend logs you will see messages about cloning. If things are successful, you should see an entry for that repo in the repos_cache table. You can then decide how often you want to refresh the cache, and set up a cron task to curl that endpoint at the desired interval.
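For example, a crontab entry along these lines (URL, repo and schedule are illustrative, and DIGGER_INTERNAL_SECRET must be available in the crontab environment) would refresh the cache nightly:

```bash
# Refresh the Digger repo cache every night at 02:00.
# Crontab commands must be on a single line.
0 2 * * * curl -s -X POST "https://your-digger-backend.example.com/_internal/update_repo_cache" -H "Content-Type: application/json" -H "Authorization: Bearer $DIGGER_INTERNAL_SECRET" -d '{"repo_full_name":"my-org/my-repo","branch":"main","installation_id":"12345678"}'
```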
Once it is cached, future requests will not regenerate projects but will instead load them from the cache. To force Digger to regenerate in a specific pull request, you can add the “digger:no-cache” label to it; all digger plan and digger apply actions on that PR will then regenerate the projects.