Jenkins shared pipeline library to be used for deployment in Kubernetes clusters.
- Generic entry point for Jenkinsfile - the generic entry point for the continuous delivery pipelines. See spec
- UPP entry point for Jenkinsfile - the main entry point for Continuous Delivery in UPP clusters. See spec
- PAC entry point for Jenkinsfile - the main entry point for Continuous Delivery in PAC clusters. See spec
- Install Helm chart - this is the step used by the generic job for installing a Helm chart. See spec
- Build and deploy in team envs - this is the step that handles the building and deployment into the team envs. See spec
- Build and deploy in upper envs - this is the step that handles the building and deployment into the upper envs (staging and prod). See spec
- Diff & sync 2 envs - this is the main step used in the Diff & Sync 2 k8s envs job. It can be used to keep the team envs in sync with the prod ones, or when provisioning a new environment.
- Update cluster using the provisioner - this is the main step used in the Update a Kubernetes cluster using the Provisioner job. It can be used for updating the CoreOS version in a cluster.
- Update Dex configs - this is the main step used in the Update Dex Config job that updates the Dex configurations in multiple clusters at once. For more information on Dex, see Content auth
On every helm install/upgrade the following values are automatically inserted:
- `region`: the region where the targeted cluster lives. Examples: `eu`, `us`
- `target_env`: the name of the environment as defined in the Environment registry. Examples: `k8s`, `prod`
- `__ext.target_cluster.sub_domain`: the DNS subdomain of the targeted cluster. This is computed from the mapped API server declared in the EnvsRegistry. Examples: `upp-prod-publish-us`, `pac-prod-eu`
- For every cluster in the targeted environment, the cluster URLs are exposed as `cluster.${cluster_label}.url` values. Example: `--set cluster.delivery.url=https://upp-k8s-dev-delivery-eu.ft.com --set cluster.publishing.url=https://upp-k8s-dev-publish-eu.ft.com`

NOTE: in the future all these values will be moved under the `__ext` namespace to avoid clashes with other developer-introduced values.
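In practice, these injected values amount to a set of `--set` flags appended to every helm install/upgrade. Below is a minimal Groovy sketch of how such flags could be assembled; the method name is hypothetical and the real pipeline code may build them differently:

```groovy
// Hypothetical sketch: assembles the automatically injected --set flags.
// The value names match the ones documented above; everything else is illustrative.
String autoInjectedFlags(String region, String targetEnv, String subDomain) {
    ["--set region=${region}",
     "--set target_env=${targetEnv}",
     "--set __ext.target_cluster.sub_domain=${subDomain}"].join(' ')
}

assert autoInjectedFlags('eu', 'prod', 'upp-prod-publish-eu') ==
       '--set region=eu --set target_env=prod --set __ext.target_cluster.sub_domain=upp-prod-publish-eu'
```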
When provisioning a new environment, Jenkins needs to "see" it in order to be able to deploy to it. The following steps make the environment visible to Jenkins.
- Create a new branch for this repository
- Add the definition of the new environment in the EnvsRegistry.groovy. Here's an example:

```groovy
Environment prod = new Environment()
prod.name = Environment.PROD_NAME
prod.slackChannel = "#k8s-pipeline-notif"
prod.regions = ["eu", "us"]
prod.clusters = [Cluster.DELIVERY, Cluster.PUBLISHING, Cluster.NEO4J]
prod.clusterToApiServerMap = [
    ("eu-" + Cluster.DELIVERY)  : "https://upp-prod-delivery-eu-api.ft.com",
    ("us-" + Cluster.DELIVERY)  : "https://upp-prod-delivery-us-api.ft.com",
    ("eu-" + Cluster.PUBLISHING): "https://upp-prod-publish-eu-api.ft.com",
    ("us-" + Cluster.PUBLISHING): "https://upp-prod-publish-us-api.ft.com",
    ("eu-" + Cluster.NEO4J)     : "https://upp-prod-neo4j-eu-api.ft.com",
    ("us-" + Cluster.NEO4J)     : "https://upp-prod-neo4j-us-api.ft.com"
]
```
Here are the characteristics of an Environment:
- It has a name and a Slack channel for notifications.
- It might be spread across multiple AWS regions.
- In each region, it might have multiple clusters (stacks).
- For each cluster (stack) we must define the URL of the K8S API server.
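Given the EnvsRegistry example above, resolving a cluster's API server is a map lookup keyed by `"${region}-${cluster}"`. A sketch, assuming the `Cluster` constants are plain strings such as `"delivery"`:

```groovy
// Sketch only: assumes Cluster.DELIVERY is the string "delivery",
// matching the "${region}-${cluster}" keys used in clusterToApiServerMap.
String region = "eu"
String apiServer = prod.clusterToApiServerMap[region + "-" + Cluster.DELIVERY]
// e.g. "https://upp-prod-delivery-eu-api.ft.com" for the prod environment above
```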
The name of the environment is very important, as it is correlated with the env names from the Helm chart app-configs folder and with the ones in the Github releases for team environments. This is why the name must contain only alphanumeric characters: `-` and `_` are not allowed. Valid names may be: k8s, xp, myteam, rjteam
- Don't forget to add the newly defined environment to the `envs` list in the EnvsRegistry class.
- Define in Jenkins the credentials needed for accessing the K8S API servers. For each API server in the environment, Jenkins needs one key to access it, so you must create one Jenkins credential per cluster of type `Secret text` with the id `ft.k8s-auth.${full-cluster-name}.token` (example: `ft.k8s-auth.upp-k8s-dev-delivery-eu.token`). This is the token of the Jenkins service account from the Kubernetes cluster.
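The credential id convention can be derived mechanically from the full cluster names. A hedged sketch (the helper name is hypothetical, not part of the library):

```groovy
// Hypothetical helper: builds the expected Jenkins credential ids,
// following the ft.k8s-auth.${full-cluster-name}.token convention.
List<String> authCredentialIds(List<String> fullClusterNames) {
    fullClusterNames.collect { name -> "ft.k8s-auth.${name}.token".toString() }
}

assert authCredentialIds(["upp-k8s-dev-delivery-eu"]) ==
       ["ft.k8s-auth.upp-k8s-dev-delivery-eu.token"]
```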
- Define in Jenkins the credentials with the TLS assets of the cluster. These will be used when updating the Kubernetes cluster using the Update a Kubernetes cluster Jenkins job. The credential must be named `ft.k8s-provision.${full-cluster-name}.credentials` (example: `ft.k8s-provision.upp-k8s-dev-delivery-eu.credentials`). The type must be `Secret file`, and the zip should be fetched from LastPass.
- Push the branch and create a Pull Request.
- After merge, add the new environment to the Jenkins jobs:
Steps:
- Make sure you have the Groovy language support plugin enabled in IntelliJ
- Import the project from the Maven POM: File -> New -> Project from existing sources -> go to project folder & select -> choose External model -> Maven
- Set `vars` and `intellij-gdsl` as Source folders

With this setup you will have completion in Groovy files for out-of-the-box functions injected by pipeline plugins in Jenkins. This might help you in some cases.
You have 2 options:
- Check out the pipeline syntax page. Go to any pipeline job & click "Pipeline Syntax". Here is a link to such a page. This page generates snippets that you can paste into your script.
- Use IntelliJ with GDSL (see setup above). This might not always be useful, as the parameters are maps.
- Prefer docker images that you can control over Jenkins plugins. Depending on Jenkins plugins makes Jenkins hard to upgrade. The pipeline steps support running logic inside docker containers, so they are recommended, especially if you need command line tools. As an example, we're using the k8s-cli-utils docker image for making kubectl or helm calls instead of relying on a plugin that installs these utilities on the Jenkins slaves.
- Always declare types and avoid using `def`.
- Use the @NonCPS annotation for methods that use objects that are not serializable. See the docs here and a practical example in this Stack Overflow question.
- To test some code you can "Replay" a job run and place the code changes directly in the window. See How to test and roll out pipeline changes.
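As an illustration of the @NonCPS guideline: iterating a map produces non-serializable `Map.Entry` iterators, so the iteration can be wrapped in an annotated method that returns a plain serializable value. A sketch (the method name is hypothetical):

```groovy
import com.cloudbees.groovy.cps.NonCPS

// Hypothetical helper: the Map.Entry iterator is not serializable, so the
// iteration runs inside a @NonCPS method that returns a plain String.
@NonCPS
String formatClusterUrls(Map<String, String> urls) {
    urls.collect { label, url -> "--set cluster.${label}.url=${url}" }.join(' ')
}
```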
The pipeline has several integration points for achieving its goals. For this it keeps the secret data (API keys, usernames & passwords) as Jenkins credentials. The whole list of credentials set in Jenkins can be accessed here.
See Pipeline Integration points for details.
The pipeline code tries to keep the used plugins to a minimum, and uses docker images for command line tools. The plugins used by the pipeline code are:
- HTTP Request Plugin for making HTTP requests from various integrations, like Slack.
- Lockable Resources Plugin for updating the `index.yaml` file of the Helm repository.
- Mask Passwords Plugin for masking sensitive input data in the logs.
By default the Groovy pipelines run in Jenkins in a Sandbox that limits the methods and objects you can use in the code. This is done by the Script Security Plugin. Since this is annoying and devs might not know how to overcome this, we decided to disable this behavior by using the Permissive Script Security Plugin.
When a new app is created that needs Continuous Delivery on our Kubernetes clusters, you can enable this by creating a new multibranch job in Jenkins following these steps:
- Login into Jenkins using the AD credentials
- Go to the appropriate folder for the platform. For UPP go to `UPP: Pipelines for application deployments` and for PAC go to `PAC: Pipelines for application deployments`
- Click the "New Item" link on the left side
- A template job is already defined in Jenkins, so in the new item dialog fill in the following:
- Configure the job: fill in the display name with "{replace-with-app-name} dev and release pipeline" and the 2 Git branch sources.
- Click save
If you've just created a new branch, a new tag, or a new commit on a branch that should be picked up by Jenkins, you have 2 options:
- Wait for Jenkins to pick it up. It is set to scan all repos every 2 minutes.
- Trigger the scanning of the multibranch pipeline manually. Go to the multibranch pipeline job (like the aggregate-concept-transformer job) and click `Scan Multibranch Pipeline Now` on the left-hand side of the screen.