From 076c94c96b90965e70218431f05f05415d43ac3d Mon Sep 17 00:00:00 2001
From: Sebastian
Date: Fri, 26 Jan 2024 13:02:24 +0100
Subject: [PATCH 1/3] fix mariadb/mysql cli usage

---
 content/en/docs/attaching-a-database/_index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/en/docs/attaching-a-database/_index.md b/content/en/docs/attaching-a-database/_index.md
index 9d792c9e..ee22b33a 100644
--- a/content/en/docs/attaching-a-database/_index.md
+++ b/content/en/docs/attaching-a-database/_index.md
@@ -578,7 +578,7 @@ oc rsh --namespace
 This command shows how to drop the whole database:
 
 ```bash
-mysql -u$MYSQL_USER -p$MYSQL_PASSWORD -h$MARIADB_SERVICE_HOST $MYSQL_DATABASE
+mariadb -u$MYSQL_USER -p$MYSQL_PASSWORD -h$MARIADB_SERVICE_HOST $MYSQL_DATABASE
 ```
 
 ```bash
@@ -590,7 +590,7 @@ exit
 Import a dump:
 
 ```bash
-mysql -u$MYSQL_USER -p$MYSQL_PASSWORD -h$MARIADB_SERVICE_HOST $MYSQL_DATABASE < /tmp/dump.sql
+mariadb -u$MYSQL_USER -p$MYSQL_PASSWORD -h$MARIADB_SERVICE_HOST $MYSQL_DATABASE < /tmp/dump.sql
 ```
 
 Check your app to see the imported "Hellos".

From 1c25d8c1321ed49616800ca4d8d5feb676f19193 Mon Sep 17 00:00:00 2001
From: Sebastian
Date: Fri, 26 Jan 2024 13:05:51 +0100
Subject: [PATCH 2/3] use correct ns in quota lab

---
 .../resourcequotas-and-limitranges/_index.md | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md b/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
index f5437288..3098db0e 100644
--- a/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
+++ b/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
@@ -32,13 +32,13 @@ Defining ResourceQuotas makes sense when the cluster administrators want to have
 In order to check for defined quotas in your Namespace, simply see if there are any of type ResourceQuota:
 
 ```bash
-{{% param cliToolName %}} get resourcequota --namespace
+{{% param cliToolName %}} get resourcequota --namespace -quota
 ```
 
 To show in detail what kinds of limits the quota imposes:
 
 ```bash
-{{% param cliToolName %}} describe resourcequota --namespace
+{{% param cliToolName %}} describe resourcequota --namespace -quota
 ```
 
 {{% onlyWhenNot openshift %}}
@@ -149,7 +149,7 @@ Remember to use the namespace `-quota-test`, otherwise this lab will n
 Analyse the LimitRange in your Namespace (there has to be one, if not you are using the wrong Namespace):
 
 ```bash
-{{% param cliToolName %}} describe limitrange --namespace
+{{% param cliToolName %}} describe limitrange --namespace -quota
 ```
 
 The command above should output this (name and Namespace will vary):
@@ -166,7 +166,7 @@ Container cpu - - 10m 100m -
 Check for the ResourceQuota in your Namespace (there has to be one, if not you are using the wrong Namespace):
 
 ```bash
-{{% param cliToolName %}} describe quota --namespace
+{{% param cliToolName %}} describe quota --namespace -quota
 ```
 
 The command above will produce an output similar to the following (name and namespace may vary)
@@ -208,7 +208,7 @@ spec:
 Apply this resource with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace
+{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace -quota
 ```
 
 {{% alert title="Note" color="info" %}}
@@ -218,7 +218,7 @@ You have to actively terminate the following command pressing `CTRL+c` on your k
 Watch the Pod's creation with:
 
 ```bash
-{{% param cliToolName %}} get pods --watch --namespace
+{{% param cliToolName %}} get pods --watch --namespace -quota
 ```
 
 You should see something like the following:
@@ -236,7 +236,7 @@ stress2much 0/1 CrashLoopBackOff 1 20s
 The `stress2much` Pod was OOM (out of memory) killed. We can see this in the `STATUS` field. Another way to find out why a Pod was killed is by checking its status. Output the Pod's YAML definition:
 
 ```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace -quota
 ```
 
 Near the end of the output you can find the relevant status part:
@@ -255,7 +255,7 @@ Near the end of the output you can find the relevant status part:
 So let's look at the numbers to verify the container really had too little memory. We started the `stress` command using the parameter `--vm-bytes 85M` which means the process wants to allocate 85 megabytes of memory. Again looking at the Pod's YAML definition with:
 
 ```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace -quota
 ```
 
 reveals the following values:
@@ -279,7 +279,7 @@ Let's fix this by recreating the Pod and explicitly setting the memory request t
 First, delete the `stress2much` pod with:
 
 ```bash
-{{% param cliToolName %}} delete pod stress2much --namespace
+{{% param cliToolName %}} delete pod stress2much --namespace -quota
 ```
 
 Then create a new Pod where the requests and limits are set:
@@ -314,7 +314,7 @@ spec:
 And apply this again with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_stress.yaml --namespace
+{{% param cliToolName %}} apply -f pod_stress.yaml --namespace -quota
 ```
 
 {{% alert title="Note" color="info" %}}
@@ -356,7 +356,7 @@ spec:
 ```
 
 ```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace -quota
 ```
 
 We are immediately confronted with an error message:
@@ -370,7 +370,7 @@ The default request value of 16 MiB of memory that was automatically set on the
 Let's have a closer look at the quota with:
 
 ```bash
-{{% param cliToolName %}} get quota --output yaml --namespace
+{{% param cliToolName %}} get quota --output yaml --namespace -quota
 ```
 
 which should output the following YAML definition:
@@ -421,7 +421,7 @@ spec:
 And apply with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace -quota
 ```
 
 Even though the limits of both Pods combined overstretch the quota, the requests do not and so the Pods are allowed to run.

From 732a6cd619a75f0016a18f153ce677e716876f0a Mon Sep 17 00:00:00 2001
From: Sebastian
Date: Fri, 26 Jan 2024 13:09:29 +0100
Subject: [PATCH 3/3] use tls in ingress

---
 content/en/docs/exposing-a-service/ingress.template.yaml | 3 +++
 content/en/docs/scaling/ingress.template.yaml            | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/content/en/docs/exposing-a-service/ingress.template.yaml b/content/en/docs/exposing-a-service/ingress.template.yaml
index 0d99be7a..28d8a3c9 100644
--- a/content/en/docs/exposing-a-service/ingress.template.yaml
+++ b/content/en/docs/exposing-a-service/ingress.template.yaml
@@ -15,3 +15,6 @@ spec:
                 name: example-web-go
                 port:
                   number: 5000
+  tls:
+    - hosts:
+        - example-web-go-.
\ No newline at end of file
diff --git a/content/en/docs/scaling/ingress.template.yaml b/content/en/docs/scaling/ingress.template.yaml
index c7c6e374..825863ee 100644
--- a/content/en/docs/scaling/ingress.template.yaml
+++ b/content/en/docs/scaling/ingress.template.yaml
@@ -15,3 +15,6 @@ spec:
                 name: example-web-app
                 port:
                   number: 5000
+  tls:
+    - hosts:
+        - example-web-app-.
\ No newline at end of file