This guide describes further steps you can take to ensure that communication between digger and your pipelines is as secure as possible.

A note about backendless mode

This guide assumes that you are running digger with the orchestrator backend (cloud or self-hosted). Digger can also run in backendless mode as a pure CLI. While backendless mode relies on the same spec for the sake of code reusability, this document is mainly aimed at people using digger with the orchestrator.

About the digger spec

As you know, digger has two main components: the CLI and the orchestrator backend. The orchestrator listens for events and triggers jobs in the CI system. The job invokes the CLI, which then executes the terraform commands. The environment in which the CLI runs is privileged and needs to be protected: it has access (through API keys or other keyless methods) to read and write sensitive resources on cloud accounts and can provision infrastructure. The orchestrator communicates with the CLI through a spec. The spec is a self-contained JSON document that defines the intended behaviour of the CLI. You can think of it as an instruction to the CLI to run a particular command, but it also carries further information about the context. Here is an example of a spec JSON:

{
    "spec": {
        "job_id":   "abc123",
        "run_name": "digger plan staging-vpc By: motatoes",
        "job": {
            # ....
            "commands":      ["digger plan"],
            "workflow_file": "digger_workflow.yml"
        },
        "reporter": {
            "reporting_strategy": "comments_per_run",
            "reporter_type":      "lazy"
        },
        "lock": {
            "lock_type": "noop"
        },
        "backend": {
            "backend_hostname":          "cloud.digger.dev",
            "backend_organisation_name": "digger",
            "backend_job_token":         "j:abc123-1234-1234-1234",
            "backend_type":              "backend"
        },
        "vcs": {
            "vcs_type":   "github",
            "actor":      "motatoes",
            "repo_owner": "diggerhq",
            "repo_name":  "demo-opentofu"
        },
        "policy": {
            "policy_type": "http"
        },
        "plan_storage": {
            "storage_type": "github_artefact"
        }
    }
}

As you can see, the spec contains information about who performed the command, which VCS is used, where the CLI should comment, which commands should be run, where the CLI can find the policies to check against, and where to store plan artefacts. The spec is a self-contained struct that instructs the CLI to do its job.

How to secure the spec

By now you are probably wondering how to ensure that no malicious actor can manipulate the spec. For example, what stops a bad actor from capturing a spec from a previous job and modifying it to escalate privileges? They could change the actor and gain access to an environment they are not supposed to reach by invoking the CI job directly, bypassing the orchestrator. This is where spec signing comes into play. Just as JWT tokens are signed to prove they were generated by a server and not tampered with, the spec is signed by the orchestrator. A signed spec payload looks like this:

{
    "spec": {
        ......
        "expiry": 1719595950 # in 1 hour
    },
    "signature": {
        "signature": "xxxxxxxyyyyyyyyzzzzzzz",
        "verify": true
    }
}

When the CLI sees that a signature has been sent, it verifies it using the public key supplied to it as an environment variable. If verification fails, the CLI halts with a signature verification error. It also checks that the payload has not expired. In this way, only the orchestrator can trigger jobs in the CI. It is advised to turn on signature verification if you are using digger for any use case beyond a proof of concept.