This is a set of Terraform configurations which create the required infrastructure for an exposure notification key server on Google Cloud. Please note that Terraform is only used for the initial deployment and provisioning of underlying infrastructure! It is not used for continuous delivery or continuous deployment.
- Terraform 0.12. Installation guide
- gcloud. Installation guide

  Note: Make sure you unset `GOOGLE_APPLICATION_CREDENTIALS` in your
  environment:

  ```text
  unset GOOGLE_APPLICATION_CREDENTIALS
  ```
For full instructions on deploying, view the deployment docs.
- Create a GCP project. Instructions. Enable a billing account for this
  project, and note its project ID (the unique, unchangeable string that you
  will be asked for during creation):

  ```text
  $ export PROJECT_ID="<value-from-above>"
  ```
- Authenticate to gcloud with:

  ```text
  $ gcloud auth login && gcloud auth application-default login
  ```

  This will open two authentication windows in your web browser.
- Change into the `terraform/` directory. All future commands are run from
  the `terraform/` directory:

  ```text
  $ cd terraform/
  ```
- Save the project ID as a Terraform variable:

  ```text
  $ echo "project = \"${PROJECT_ID}\"" >> ./terraform.tfvars
  ```
- (Optional) Enable the data generation job. This is useful for testing
  environments as it provides a consistent flow of exposure data into the
  system.

  ```text
  $ echo 'generate_cron_schedule = "*/15 * * * *"' >> ./terraform.tfvars
  ```
- (Optional, but recommended) Create a Cloud Storage bucket for storing remote
  state. This is important if you plan to have multiple people running
  Terraform or collaborating.

  ```text
  $ gsutil mb -p ${PROJECT_ID} gs://${PROJECT_ID}-tf-state
  ```

  It is also strongly recommended that you enable versioning on this bucket.
  That will let you access old versions of the Terraform state for disaster
  recovery.

  ```text
  $ gsutil versioning set on gs://${PROJECT_ID}-tf-state
  ```

  You can also create a lifecycle policy to keep only recent versions.

  Configure Terraform to store state in the bucket:

  ```text
  $ cat <<EOF > ./state.tf
  terraform {
    backend "gcs" {
      bucket = "${PROJECT_ID}-tf-state"
    }
  }
  EOF
  ```
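The lifecycle policy mentioned above can be sketched as a JSON file passed to `gsutil lifecycle set`. The retention count below is an arbitrary example; tune it to your needs:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 10}
    }
  ]
}
```

```text
$ gsutil lifecycle set lifecycle.json gs://${PROJECT_ID}-tf-state
```

With versioning enabled, this deletes a state object once ten newer versions of it exist, keeping the bucket from growing without bound.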
- Run `terraform init`. Terraform will automatically download the plugins
  required to execute this code. You only need to do this once per machine.

  ```text
  $ terraform init
  ```
- Execute Terraform:

  ```text
  $ terraform apply
  ```

  Terraform will create the required infrastructure, including the database,
  service accounts, storage bucket, keys, and secrets. As a one-time
  operation, Terraform will also migrate the database schema and build and
  deploy the initial set of services on Cloud Run. Terraform does not manage
  the lifecycle of those resources beyond their initial creation.
Using custom hosts (domains) for the services requires a manual step of
updating DNS entries. Run Terraform once and get the `lb_ip` entry. Then,
update your DNS provider to point the A records to that IP address. Give DNS
time to propagate and then re-apply Terraform. DNS must be working for the
certificates to provision.
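As a sketch, assuming `lb_ip` is exposed as a Terraform output as described above (the domain below is a placeholder for your own):

```text
$ terraform output lb_ip
# Point your DNS provider's A record(s) at that address, then verify
# propagation before re-applying (dig ships with bind-utils / dnsutils):
$ dig +short A exposure.example.com
$ terraform apply
```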
The default Terraform deployment is a production-ready, high-traffic
deployment. For local development and testing, you may want to use a less
powerful setup:

```hcl
# terraform/terraform.tfvars
project                  = "..."
cloudsql_tier            = "db-custom-1-3840"
cloudsql_disk_size_gb    = 16
cloudsql_max_connections = 256
```
The target cloud regions for each resource type are exposed as Terraform
variables in `vars.tf`. Each region or location variable may be changed;
however, they are not necessarily independent. The comments for each variable
note any required dependencies and link to the associated docs page listing
the valid values.
Note that not all resources used by this project are currently available in
all regions. Bringing up infrastructure in a different region requires careful
consideration, as the geographic location of resources does impact service
performance.
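For example, overriding a location variable in `terraform.tfvars` might look like the following. The variable name and value here are illustrative; use the names actually defined in `vars.tf` and the valid values linked from its comments:

```hcl
# terraform/terraform.tfvars
# Illustrative only: check vars.tf for the real variable names and their
# documented dependencies before overriding any region.
region = "europe-west1"
```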
WARNING: Deploying both servers in the same project is strongly discouraged;
do it only if you know what you are doing.
When developing in a project where the verification server is already
provisioned, some resource conflicts will prevent the key server from being
provisioned by Terraform. Fixing this is not a priority, since doing this is
not desired in production; instead, potential problems and solutions are
listed here in case they are ever needed:
- State file

  - Cause: state files stored in GCS are written into the same GCS bucket
    `${PROJECT_ID}-tf-state` and will conflict with each other.
  - Solution: use another bucket or GCS location, i.e. replace
    `${PROJECT_ID}-tf-state` with `${PROJECT_ID}-key-tf-state` in the steps
    above.
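Applying that solution, the backend configuration from the remote-state step would become something like this sketch (any bucket name unique to the key server works):

```hcl
terraform {
  backend "gcs" {
    # A bucket distinct from the verification server's state bucket.
    bucket = "my-project-key-tf-state"
  }
}
```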
- Resources with identical names

  - Cause: `terraform apply` fails when a resource to be provisioned already
    exists but is not in the Terraform state, so any resource with an
    identical name across the two Terraform definitions will cause
    `terraform apply` to fail. So far the known resources with duplicate
    names are:
    - google_secret_manager_secret.db-secret
    - google_compute_global_address.private_ip_address
    - google_vpc_access_connector.connector
  - Solution: rename these resources in the Terraform configurations, e.g.
    using a random string such as a database suffix.
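The suggested rename can be sketched with a random suffix appended to one of the conflicting names. The resource body below is illustrative, not this project's actual definition; only the naming pattern is the point:

```hcl
# Illustrative sketch: the real definitions live in this repo's Terraform
# configs. A random_id gives each project's copy a distinct name.
resource "random_id" "db_suffix" {
  byte_length = 2
}

resource "google_secret_manager_secret" "db-secret" {
  secret_id = "db-secret-${random_id.db_suffix.hex}"

  replication {
    automatic = true
  }
}
```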