Starting the exposure notification key server

This is a set of Terraform configurations which create the required infrastructure for an exposure notification key server on Google Cloud. Please note that Terraform is only used for the initial deployment and provisioning of underlying infrastructure! It is not used for continuous delivery or continuous deployment.

Requirements

  • Terraform
  • The Google Cloud SDK (gcloud and gsutil), authenticated to your Google Cloud account

Instructions

For full instructions on deploying, view the deployment docs

  1. Create a GCP project. Instructions. Enable a billing account for this project, and note its project ID (the unique, unchangeable string that you will be asked for during creation):

    $ export PROJECT_ID="<value-from-above>"
    
  2. Authenticate to gcloud with:

    $ gcloud auth login && gcloud auth application-default login
    

    This will open two authentication windows in your web browser.

  3. Change into the terraform/ directory. All subsequent commands are run from this directory:

    $ cd terraform/
    
  4. Save the project ID as a Terraform variable:

    $ echo "project = \"${PROJECT_ID}\"" >> ./terraform.tfvars
    
  5. (Optional) Enable the data generation job. This is useful for testing environments as it provides a consistent flow of exposure data into the system.

    $ echo 'generate_cron_schedule = "*/15 * * * *"' >> ./terraform.tfvars
    
  6. (Optional, but recommended) Create a Cloud Storage bucket for storing remote state. This is important if you plan to have multiple people running Terraform or collaborating.

    $ gsutil mb -p ${PROJECT_ID} gs://${PROJECT_ID}-tf-state
    

    It is also strongly recommended that you enable versioning of this bucket. That will enable you to access old versions of the Terraform state for disaster recovery.

    $ gsutil versioning set on gs://${PROJECT_ID}-tf-state
    

    You can also create a lifecycle policy to only keep recent versions (a sketch is shown just after this list).

    Configure Terraform to store state in the bucket:

    $ cat <<EOF > ./state.tf
    terraform {
      backend "gcs" {
        bucket = "${PROJECT_ID}-tf-state"
      }
    }
    EOF
    
  7. Run terraform init. Terraform will automatically download the plugins required to execute this code. You only need to do this once per machine.

    $ terraform init
    
  8. Execute Terraform:

    $ terraform apply
    

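As referenced in step 6, you can attach a lifecycle policy to the state bucket so that only recent object versions are kept. A minimal sketch using gsutil is shown below; keeping 10 newer versions is an illustrative choice, and the lifecycle.json file name is arbitrary:

$ cat <<EOF > ./lifecycle.json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 10}
    }
  ]
}
EOF
$ gsutil lifecycle set ./lifecycle.json gs://${PROJECT_ID}-tf-state

This deletes an old state version once more than 10 newer versions of the same object exist; adjust the threshold to your needs.
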
Terraform will create the required infrastructure including the database, service accounts, storage bucket, keys, and secrets. As a one-time operation, Terraform will also migrate the database schema and build/deploy the initial set of services on Cloud Run. Terraform does not manage the lifecycle of those resources beyond their initial creation.
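
If you want to sanity-check the initial deployment, one option (not part of the official instructions) is to list the Cloud Run services that Terraform created:

$ gcloud run services list --project "${PROJECT_ID}"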

Custom hosts

Using custom hosts (domains) for the services requires a manual step of updating DNS entries. Run Terraform once and get the lb_ip entry. Then, update your DNS provider to point the A records to that IP address. Give DNS time to propagate and then re-apply Terraform. DNS must be working for the certificates to provision.
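
For example, assuming lb_ip is exposed as a root-level Terraform output, you can read the address with:

$ terraform output lb_ip

Point an A record for each custom host at that address at your DNS provider, then re-run terraform apply once the records have propagated.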

Local development and testing example deployment

The default Terraform deployment is a production-ready, high-traffic deployment. For local development and testing, you may want to use a less powerful setup:

# terraform/terraform.tfvars
project                  = "..."
cloudsql_tier            = "db-custom-1-3840"
cloudsql_disk_size_gb    = 16
cloudsql_max_connections = 256

Changing Regions

The target cloud region for each resource type is exposed as a Terraform variable in vars.tf. Each region or location variable may be changed; however, they are not necessarily independent. The comments for each variable note the required dependencies and link to the associated docs page listing the valid values.

Note that not all resources used by this project are currently available in all regions. Bringing up infrastructure in different regions also needs careful consideration, as the geographic location of resources impacts service performance.
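
As a sketch only: the exact variable names must come from vars.tf, but changing a region typically amounts to overriding the relevant variables in terraform.tfvars, for example (the variable name and value below are illustrative and may not match this repository):

# terraform/terraform.tfvars
# NOTE: illustrative only -- check vars.tf for the real variable names,
# their dependencies, and the docs pages listing valid values.
region = "europe-west1"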

Developing in a Project with the Verification Server Also Provisioned by Terraform

WARNING: Deploying both servers in the same project is strongly discouraged; do it only if you know what you are doing.

When developing in a project where the verification server is already provisioned, there will be some resource conflicts that prevent the key server from being provisioned by Terraform. Fixing this is not a priority, since doing so is not desired in production; instead, the known problems and solutions are listed here in case they are ever needed:

  • State file

    • Cause: both deployments write their state files into the same GCS bucket, ${PROJECT_ID}-tf-state, and will conflict with each other
    • Solution: use another bucket or GCS location, e.g. replace ${PROJECT_ID}-tf-state with ${PROJECT_ID}-key-tf-state in the steps above (see the sketch after this list)
  • Resources with identical names

    • Cause: terraform apply fails when a resource to be provisioned already exists but is not in the Terraform state, so any resource with an identical name across the two Terraform definitions will cause terraform apply to fail. So far the known resources with duplicate names are:
      • google_secret_manager_secret.db-secret
      • google_compute_global_address.private_ip_address
      • google_vpc_access_connector.connector
    • Solution: rename these resources in the Terraform configurations, e.g. by appending a random string such as a database suffix
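
For example, following the state-bucket rename suggested above, the bucket creation and backend configuration from the earlier steps would change along these lines (a sketch only; the -key- suffix simply mirrors the suggestion above):

$ gsutil mb -p ${PROJECT_ID} gs://${PROJECT_ID}-key-tf-state
$ gsutil versioning set on gs://${PROJECT_ID}-key-tf-state

$ cat <<EOF > ./state.tf
terraform {
  backend "gcs" {
    bucket = "${PROJECT_ID}-key-tf-state"
  }
}
EOF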