Generic-service helm chart

Prerequisites

1. Helm v3 client

This is installed and used within our CircleCI pipelines, but it is also useful to have installed locally for troubleshooting. See https://helm.sh/docs/intro/install/
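
To confirm a local install, checking that the client reports a v3 version is enough:

helm version --short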

2. A namespace in the Cloud Platform cluster

See the guide here for more details: https://user-guide.cloud-platform.service.justice.gov.uk/documentation/getting-started/env-create.html#creating-a-cloud-platform-environment

See example Kubernetes namespace files here: https://github.com/ministryofjustice/cloud-platform-environments/tree/main/namespaces/live-1.cloud-platform.service.justice.gov.uk/digital-prison-services-dev

3. HTTPS certificate for ingress resource

In addition to the namespace above, ensure a valid LetsEncrypt TLS cert has been generated by Cloud Platform's certbot. Official instructions are here: https://user-guide.cloud-platform.service.justice.gov.uk/documentation/other-topics/custom-domain-cert.html#obtaining-a-certificate

Example here: https://github.com/ministryofjustice/cloud-platform-environments/blob/main/namespaces/live-1.cloud-platform.service.justice.gov.uk/digital-prison-services-dev/07-certificates.yaml

4. Kubernetes secrets

If the application needs to access secrets as part of the deployment, these must be loaded into the Cloud Platform Kubernetes cluster prior to deployment.

See official guide here: https://user-guide.cloud-platform.service.justice.gov.uk/documentation/deploying-an-app/add-secrets-to-deployment.html

Also, see the namespace_secrets example further below.
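
As a quick illustrative sketch only (the secret and key names here are hypothetical, and on Cloud Platform secrets are more commonly created via Terraform in cloud-platform-environments), a secret can be created manually with kubectl:

kubectl -n [your namespace] create secret generic project-name \
  --from-literal=APPINSIGHTS_INSTRUMENTATIONKEY=[instrumentation key]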

How to use this chart

Each project should define an umbrella chart (in most cases essentially an empty Helm chart) which specifies this chart as a dependency.

The file/folder structure is as follows, with more details below on file contents:

helm_deploy
helm_deploy/values-[environment].yaml (1 per environment)
helm_deploy/[project name]
helm_deploy/[project name]/Chart.yaml
helm_deploy/[project name]/.helmignore
helm_deploy/[project name]/values.yaml

helm_deploy/[project name]/templates/ (optional)

(Optionally include the templates/ folder for project-specific resources not installed by the generic-service chart, e.g. cronjobs.)

Example Chart.yaml

apiVersion: v2
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: [PROJECT NAME HERE]
version: 0.1.0

dependencies:
  - name: generic-service
    version: 1.0.5
    repository: https://ministryofjustice.github.io/hmpps-helm-charts
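
After adding or updating the dependency, pull it into the umbrella chart's charts/ directory (this also refreshes Chart.lock):

helm dependency update helm_deploy/[project name]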

Setting project wide values

helm_deploy/[project name]/values.yaml

The values here override the default values set in the generic-service chart (see the values.yaml in this folder of the repository).

This file will contain values that are the same across all environments.

Example project values.yaml file:

---
generic-service:
  nameOverride: project-name

  image:
    repository: quay.io/hmpps/project-name
    port: 8080

  ingress:
    enabled: true
    tlsSecretName: [name of secret for ingress TLS cert]
    path: /

  # Environment variables to load into the deployment
  env:
    JAVA_OPTS: "-Xmx512m"
    SERVER_PORT: "8080"
    SPRING_PROFILES_ACTIVE: "logstash"
    APPLICATIONINSIGHTS_CONNECTION_STRING: "InstrumentationKey=$(APPINSIGHTS_INSTRUMENTATIONKEY)"

  # Pre-existing kubernetes secrets to load as environment variables in the deployment.
  # namespace_secrets:
  #   [name of kubernetes secret]:
  #     [name of environment variable as seen by app]: [key of kubernetes secret to load]

  namespace_secrets:
    project-name:
      APPINSIGHTS_INSTRUMENTATIONKEY: "APPINSIGHTS_INSTRUMENTATIONKEY"
      AP_ARN: "arn?" # optional

  # Pre-existing kubernetes secrets to load as mounted file(s) within pod/container

Mounting Secrets

See the Kubernetes documentation on using secrets as files: https://kubernetes.io/docs/concepts/configuration/secret/

  volumes:
    - name: secrets
      secret:
        secretName: "k8s-secret-name"
        items:
          - key: secret-key
            path: secret-file-name
  volumeMounts:
    - name: secrets
      mountPath: /app/secrets
      readOnly: true

This configuration will create a file at /app/secrets/secret-file-name containing the content of the k8s secret.

When loading secrets as mounted volumes inside a container, the pre-existing kubernetes secret should look like the following, as per the example above:

kind: Secret
type: Opaque
apiVersion: v1
data:
  secret-key: [base64 encoded file contents]
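
One way to create a secret of this shape, with secret-key holding the contents of a local file (the file name here is hypothetical), is:

kubectl -n [your namespace] create secret generic k8s-secret-name \
  --from-file=secret-key=./local-secret-file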

Injecting env into batch yamls

You can inject the set of environment variables defined for the pods into other application yamls, such as batch jobs:

{{- include "deployment.envs" (index .Values "generic-service") | nindent 12 }}
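
For example, a project-specific CronJob template placed in the optional templates/ folder might use the include like this. The job name, image and schedule are hypothetical, and the nindent value must match the indentation level of the container spec in your own yaml, so treat this as a sketch:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: project-name-batch-job
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: batch
              image: quay.io/hmpps/project-name:latest
              {{- include "deployment.envs" (index .Values "generic-service") | nindent 14 }}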

Setting environment specific values

helm_deploy/values-[environment].yaml

This file should only contain values that differ between environments.

Example of helm_deploy/values-[environment].yaml file:

---
generic-service:
  replicaCount: 2

  ingress:
    hosts:
      - project-name-dev.hmpps.service.justice.gov.uk
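
The CircleCI pipeline normally layers the environment file over the chart defaults at deploy time; the equivalent local helm invocation (the release and namespace names here are placeholders) would be along the lines of:

helm upgrade --install project-name helm_deploy/[project name] \
  --namespace [your namespace] \
  --values helm_deploy/values-dev.yaml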

Prison Postgres database restore cronjob

The NOMIS pre-production database gets refreshed from production approximately every two weeks. It is normally a good idea to copy the other Prison databases at the same time so that the pre-production environment is in sync. Setting

---
postgresDatabaseRestore:
  enabled: true

in your values-prod.yaml will create a scheduled job that runs every four hours in production only. This checks whether there is a newer version of the NOMIS database since the last database restore and, if so, does another restore. The pre-production credentials should be injected into the production namespace; see ministryofjustice/cloud-platform-environments#8325 for an example PR. Both production and pre-production credentials should then be added in a namespace_secrets: section; see the values.yaml in this repository for an example of the secrets.

If you have set up a schema separate from the default 'public' schema and want to refresh that schema, you must additionally supply the SCHEMA_TO_RESTORE environment variable in the env: section (again, see the values.yaml for an example).
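
For example, alongside your other environment variables (the schema name here is hypothetical):

env:
  SCHEMA_TO_RESTORE: "my_schema"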

Manually running the database restore cronjob

The restore cronjob script only runs if there is a newer NOMIS database, so we need to override the configuration to force the run. We do that by using jq to amend the json, adding in a FORCE_RUN=true environment variable:

kubectl create job --dry-run=client \
  --from=cronjob/hmpps-nomis-visits-mapping-service-postgres-restore \
  hmpps-nomis-visits-mapping-service-postgres-restore-<user> -o "json" \
  | jq '.spec.template.spec.containers[0].env += [{"name": "FORCE_RUN", "value": "true"}]' \
  | kubectl apply -f -

will trigger the job, which dumps the production database and imports it into pre-production. Job progress can then be seen by running kubectl logs -f on the newly created pod.

Information about the last successful restore is stored in a restore_status table in pre-production. To find out when the last restore ran, connect to the pre-production database and view the contents of that table.
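
For example, once connected to the pre-production database with a SQL client:

SELECT * FROM restore_status;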

Scheduled downtime

For cost-saving purposes, MOJ Cloud Platform provides an option to shut down RDS databases overnight in non-production environments. Check the user guide for more information.

In addition to shutting down the database, this chart also provides an option to schedule shutdown and startup of pods.

Service Account

To enable this feature, you first need to add a scheduled-downtime-serviceaccount Service Account to your namespace, with permissions to scale your deployment.

Example: scheduled-downtime.tf

module "scheduled_downtime_service_account" {
  source = "github.com/ministryofjustice/cloud-platform-terraform-serviceaccount?ref=0.8.1"

  namespace          = var.namespace
  kubernetes_cluster = var.kubernetes_cluster

  serviceaccount_name  = "scheduled-downtime-serviceaccount"
  role_name            = "scheduled-downtime-serviceaccount-role"
  rolebinding_name     = "scheduled-downtime-serviceaccount-rolebinding"
  serviceaccount_rules = [
    {
      api_groups = ["apps"]
      resources  = ["deployments"]
      verbs      = ["get"]
    },
    {
      api_groups = ["apps"]
      resources  = ["deployments/scale"]
      verbs      = ["get", "update", "patch"]
    }
  ]
}
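
Once the Terraform has been applied, you can confirm the service account exists in your namespace with:

kubectl -n [your namespace] get serviceaccount scheduled-downtime-serviceaccount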

Configuration values

Once you have a service account, add the following to your values-dev.yaml and values-preprod.yaml to enable the cron jobs:

scheduledDowntime:
  enabled: true

By default, this will shut down pods between 10pm and 6:30am UTC on weekdays, and all day on weekends. 6:30am was chosen because the RDS startup happens between 6am and 6:30am. To change this schedule, update the startup and shutdown values:

---
scheduledDowntime:
  enabled: true
  startup: '0 6 * * 1-5' # Start at 6am UTC Monday-Friday
  shutdown: '0 22 * * 1-5' # Stop at 10pm UTC Monday-Friday
  serviceAccountName: scheduled-downtime-serviceaccount # This must match the service account name in the Terraform module

Retrying messages on a dead letter queue

The hmpps-spring-boot-sqs project provides an endpoint for retrying all messages on all dead letter queues. Setting the following value will add a cronjob to your service that calls the retry endpoint every 10 minutes.

---
retryDlqCronjob:
  enabled: true
  retryDlqSchedule: "*/20 * * * *" # only set this if you want to override the default schedule of every 10 minutes

If you have configured scheduled downtime, the cronjob will not run during the downtime. Again, you can override the default cron schedule when scheduled downtime is enabled:

---
scheduledDowntime:
  retryDlqSchedule: "*/45 * * * 1-3"
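
As with the database restore job, the retry cronjob can also be triggered manually. The cronjob name below is hypothetical; list the cronjobs in your namespace first to find the real one:

kubectl get cronjobs
kubectl create job --from=cronjob/project-name-retry-dlq project-name-retry-dlq-<user>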