Commit: update worker node notes using traefik as LB.
balchua committed Feb 20, 2021
1 parent 80fe3ab commit 54172fe
Showing 2 changed files with 100 additions and 20 deletions.
12 changes: 6 additions & 6 deletions example/main.tf
```
module "microk8s" {
  source                       = "../"
  worker_node_count            = "2"
  os_image                     = "ubuntu-20-04-x64"
  controller_size              = "s-4vcpu-8gb"
  controller_disksize          = "10"
  worker_disksize              = "10"
  region                       = "sgp1"
  worker_size                  = "s-4vcpu-8gb"
  dns_zone                     = "geeks.sg"
  microk8s_channel             = "latest/edge"
  cluster_token                = "PoiuyTrewQasdfghjklMnbvcxz123409"
  cluster_token_ttl_seconds    = 3600
  digitalocean_ssh_fingerprint = var.digitalocean_ssh_fingerprint
```
108 changes: 94 additions & 14 deletions worker-node.md
# Add a worker node only with MicroK8s
This document describes the steps to join a worker-only node into your Highly Available MicroK8s cluster.

This has the main advantage that you can avoid the resource overhead of the control plane, which can run to gigabytes of memory and significant CPU cycles. Those resources are better allocated to your workloads than to Kubernetes itself.

Before you begin, you need to have the following in place:

Follow the instructions in MicroK8s [documentation](https://microk8s.io/docs/hig

## Load balancer IP

**Follow the steps below if you need to use a cloud provider load balancer.**

On each of your HA MicroK8s nodes, add the load balancer IP to the file `/var/snap/microk8s/current/certs/csr.conf.template`.

For example:
```
subjectAltName=@alt_names
```
Note that **IP.99 = 167.172.5.46** is the load balancer IP.

## Simple Load Balancer per node

The steps below use a load balancer installed on each node.

In this step, we will use [`Traefik`](https://doc.traefik.io/) to load balance requests across the Kubernetes API servers.

* Download the [Traefik binary](https://github.com/traefik/traefik/releases) and place it in the `/usr/local/bin` directory.
* Configure `traefik.yaml`.

You need to create the `traefik.yaml` configuration file in `/etc/traefik/traefik.yaml`, for example:

```yaml
## Static configuration
entryPoints:
  apiserver:
    address: ":16443"
providers:
  file:
    filename: /etc/traefik/providers.yaml
    watch: true
```
Then create the provider configuration at `/etc/traefik/providers.yaml` (the path referenced in `traefik.yaml` above).

```yaml
tcp:
  routers:
    Router-1:
      # Listen to any SNI
      rule: "HostSNI(`*`)"
      service: "kube-apiserver"
      tls:
        passthrough: true
  services:
    kube-apiserver:
      loadBalancer:
        servers:
          - address: "10.130.0.2:16443"
          - address: "10.130.0.3:16443"
          - address: "10.130.0.4:16443"
```

Each address is the IP address and port of one of the API servers.

* Start Traefik manually, or set up a systemd service for it.
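If you go the systemd route, a unit file along these lines can work. This is a minimal sketch: the unit name, description, and restart policy are assumptions, not part of the original notes.

```
[Unit]
Description=Traefik load balancer for the MicroK8s apiserver
After=network-online.target

[Service]
ExecStart=/usr/local/bin/traefik --configFile=/etc/traefik/traefik.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Saved as, for example, `/etc/systemd/system/traefik.service`, it can be enabled with `systemctl enable --now traefik`.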

## Install MicroK8s

On your worker node, install MicroK8s like usual.

### For version 1.20

```
snap install microk8s --classic --channel 1.20/stable
```

You need to stop all the services on this worker node.

#### For versions 1.19 or 1.20

```
systemctl stop snap.microk8s.daemon-apiserver
systemctl stop snap.microk8s.daemon-apiserver-kicker.service
systemctl stop snap.microk8s.daemon-scheduler.service
systemctl stop snap.microk8s.daemon-kubelet.service
systemctl stop snap.microk8s.daemon-proxy.service
```
#### For the kubelite version

Stop the kubelite service:

```
systemctl stop snap.microk8s.daemon-kubelite.service
```

### Token generation and known_tokens

```
KzVIdjBkUWFNYStQc01xb09lMXM1VEFRUVAxSHIxQ3I5UHk5bjZiSVdidz0K
```
_Keep the generated random string._
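A token like the one above can be produced with `openssl`. The exact generation method is not shown in this commit; any sufficiently random string should work, so treat this as one reasonable option:

```shell
# Generate a 32-byte random token, base64-encoded (44 characters)
openssl rand -base64 32
```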

On each of your **control plane** nodes, edit the file `/var/snap/microk8s/current/credentials/known_tokens.csv` and add the kubelet and kube-proxy tokens:

```
KzVIdjBkUWFNYStQc01xb09lMXM1VEFRUVAxSHIxQ3I5UHk5bjZiSVdidz0K,system:kube-proxy,kube-proxy
```

Copy the credentials to the worker node, for example:

```
scp /tmp/proxy.config root@workernode:/var/snap/microk8s/current/credentials/proxy.config
```

Before starting the kubelet and proxy, you need to modify the token located in `/var/snap/microk8s/current/credentials/kubelet.config` and `/var/snap/microk8s/current/credentials/proxy.config`.


**If you are using the [node based simple load balancer](#simple-load-balancer-per-node), simply use `server: https://127.0.0.1:16443` in `/var/snap/microk8s/current/credentials/kubelet.config` and `/var/snap/microk8s/current/credentials/proxy.config`.**

As an example:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBVENDQWVtZ0F3SUJBZ0lKQUpGcnFTaDNvTThWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQmN4RlRBVEJnTlYKQkFNTURERXdMakUxTWk0eE9ETXVNVEFlRncweU1UQXlNVEl3TVRBME5ERmFGdzB6TVRBeU1UQXdNVEEwTkRGYQpNQmN4RlRBVEJnTlZCQU1NRERFd0xqRTFNaTR4T0RNdU1UQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFONTh6V2JlODdPK0FDUE9sbnlJVnlUWU0zaHZDTWQ0VGNucnVQcFllVHhwSzlLT1o5djEKc0pYeWNZejBEQ293TThnUEcydlRSRWI5UHg2OU0zZmhWV053Z0hMLzhUZnlKTkxpZUE1SmhtMXcrMUdOQkxvSgpkOEdXTE1LaDcwK1JWQlNzblhLSXRxUTY0aFhoZVVZV1J3QW1IU0ZCL3ZkL1B4ZWwvelN6M3loRXA0YXlhbjhMCjVVUEVpL0tkWVdwK0dJTWpJSzROQmEzMW9oVlYwQnBlLzlzTGdlNEcyNzM0em50NjdZSS9seWg4LzR1UnpuZnEKTndHRVdEeDhlV1ZWNjMrUWltNmIzRklUdG5nS0Y3UUlaV2hXN0xzMlZ2aGkrMGp1RFFSRmZNY1JYaXJpZ3lCMwoxNURzWnAwMFFzb3VQTHVPRGRlbE9waVdQR1hvUzlMSmJpY0NBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGSnBRCnlEaGJGK3hnSDJOL216ZjdQanpvNDdzak1COEdBMVVkSXdRWU1CYUFGSnBReURoYkYreGdIMk4vbXpmN1Bqem8KNDdzak1Bd0dBMVVkRXdRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZ3SDNvOU9hUnQ0NDlNdQppRUxBenRKQVE1ZHhjSWM2STBUdkhvVVBYOWwvemJRQ3hPQ3ExT1Z4a2M0TXRHTXA3TktlMDZ2UWFVSzAzVnhTCm5yMktYeFdwckVxNGFTMUdHc21vOEVDQUtZOEpUVXpjUkNBa0lNcjBPcHlWM0RKc3NXNWVHRGVvaVJESGY1RnAKc3d3VUZ4REVSWFFlb0ZNV1FDYWJMQTNzdVl0enBQZVdWLzJQeHVsbEJMaXBseFhEMk8wcllLUHVzT0FTeDk1MApVSnRTMmwzTFpENXRoTUM2eG1LT2FYSDNhT0FLWjNZMEhVWUN6VGhPaUdLMXV6cDFIcjI5LzRweUQrbGVVeHNyCkZWMDZYZ1JESStYNFZTaWNVYVEzeG16U1EyYm1qVVFNc3RTdytId0VaR2tRTS9OZ1BqaURxZ2tqbHhoVDJndmQKWnFLMFhZRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://your-load-balancer-ip  # or https://127.0.0.1:16443 if using the simple node based LB
  name: microk8s-cluster
contexts:
- context:
users:
- user:
    token: KzVIdjBkUWFNYStQc01xb09lMXM1VEFRUVAxSHIxQ3I5UHk5bjZiSVdidz0K
```
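The token substitution can also be scripted. Below is a minimal sketch; `replace_token` is a hypothetical helper, not part of MicroK8s, and it assumes the config keeps the token on a `token:` line as in the example above:

```shell
# Hypothetical helper: overwrite the value of the "token:" field in a config file.
replace_token() {
  local file="$1" token="$2"
  # Replace everything after "token: " with the new token value, in place.
  sed -i "s|token: .*|token: ${token}|" "$file"
}

# On the worker node you would run it against the copied credentials, e.g.:
# replace_token /var/snap/microk8s/current/credentials/kubelet.config "<your-generated-token>"
# replace_token /var/snap/microk8s/current/credentials/proxy.config "<your-generated-token>"
```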


### Additional Kubelet configuration

When enabling DNS, you need to start the worker node's kubelet with these additional arguments/parameters.

In the file `/var/snap/microk8s/current/args/kubelet`, add at the bottom:

```
--cluster-dns=10.152.183.10
```


#### Prevent the control plane from starting

The procedure above does not prevent the control plane services from starting when the node is restarted or when the snap is refreshed.

To prevent this, create the file `/var/snap/microk8s/current/var/lock/clustered.lock` on each worker node. Its presence ensures that control plane services such as the apiserver, controller manager, and scheduler do not start when the snap is updated or when you run `microk8s stop` and `microk8s start`.

### Start the kubelet and proxy
#### Starting the service on 1.19 or 1.20

On each of the worker nodes:

```
systemctl start snap.microk8s.daemon-kubelet.service
systemctl start snap.microk8s.daemon-proxy.service
```

#### Starting the service on kubelite version

```
systemctl start snap.microk8s.daemon-kubelite.service
```

## Check the cluster

From one of the control plane nodes, cordon off the control plane so that workloads are scheduled only on the workers:

`microk8s kubectl cordon mk8s-cp-01 mk8s-cp-02 mk8s-cp-03`


```console
# microk8s kubectl get no -o wide
NAME         STATUS                     ROLES    AGE     VERSION                     INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
mk8s-cp-02   Ready,SchedulingDisabled   <none>   6d23h   v1.20.2-36+a377b7383d340b   10.130.0.3    <none>        Ubuntu 20.04.1 LTS   5.4.0-51-generic   containerd://1.3.7
mk8s-wk-01   Ready                      <none>   6d23h   v1.20.2-36+a377b7383d340b   10.130.0.5    <none>        Ubuntu 20.04.1 LTS   5.4.0-51-generic   containerd://1.3.7
mk8s-cp-01   Ready,SchedulingDisabled   <none>   6d23h   v1.20.2-36+a377b7383d340b   10.130.0.2    <none>        Ubuntu 20.04.1 LTS   5.4.0-51-generic   containerd://1.3.7
mk8s-cp-03   Ready,SchedulingDisabled   <none>   6d23h   v1.20.2-36+a377b7383d340b   10.130.0.4    <none>        Ubuntu 20.04.1 LTS   5.4.0-51-generic   containerd://1.3.7

```

Below is the utilization of the kubelet and kube-proxy running on a 1 CPU, 2 GB VM. As you can see, the node is very much dedicated to your workloads and not to Kubernetes.
Compare that to a control plane node resource usage.

![controlplane|690x158](docs/assets/controlplane.png)

Now you can run MicroK8s in a resource-constrained environment. Profit!!!
