This repository has been archived by the owner on May 16, 2023. It is now read-only.

Commit

Merge pull request #40 from powerhome/add-resources-for-initContainers
Add resources to InitContainers
Crazybus authored Jan 25, 2019
2 parents c4f4923 + 6717f56 commit 7654096
Showing 4 changed files with 42 additions and 11 deletions.
10 changes: 5 additions & 5 deletions elasticsearch/README.md
@@ -18,7 +18,7 @@ This helm chart is a lightweight way to configure and run our official [Elastics
* The default storage class for GKE is `standard` which by default will give you `pd-ssd` type persistent volumes. This is network attached storage and will not perform as well as local storage. If you are using Kubernetes version 1.10 or greater you can use [Local PersistentVolumes](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd) for increased performance
* The chart deploys a statefulset and by default will do an automated rolling update of your cluster. It does this by waiting for the cluster health to become green after each instance is updated. If you prefer to update manually you can set [`updateStrategy: OnDelete`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete)
* It is important to verify the JVM heap size in `esJavaOpts` and to set the CPU/Memory `resources` to something suitable for your cluster
* To simplify the chart and its maintenance, each set of node groups is deployed as a separate helm release. Take a look at the [multi](./examples/multi) example to get an idea of how this works. Without doing this it isn't possible to resize persistent volumes in a statefulset. Setting it up this way makes it possible to add more nodes with a new storage size and then drain the old ones. It also lets the user decide which node groups to update first when doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure Elasticsearch. It exposes ways to set environment variables and mount secrets inside the container (see the sketch below). Doing this makes it much easier for this chart to support multiple versions with minimal changes.
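
For example, a minimal `values.yaml` sketch (the `ES_JAVA_OPTS` entry and the certificate secret are illustrative assumptions, mirroring the commented examples in this chart's `values.yaml`):
```
extraEnvs:
  - name: ES_JAVA_OPTS
    value: "-Xmx1g -Xms1g"

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
```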

## Installing
@@ -27,7 +27,7 @@ This helm chart is a lightweight way to configure and run our official [Elastics
```
helm repo add elastic https://helm.elastic.co
```
* Install it
```
helm install --name elasticsearch elastic/elasticsearch --version 6.5.4-alpha3
```
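* To install with your own configuration, pass a values file (standard Helm 2 syntax; `my-values.yaml` is a hypothetical file)
```
helm install --name elasticsearch elastic/elasticsearch --version 6.5.4-alpha3 -f my-values.yaml
```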
@@ -52,6 +52,7 @@ This helm chart is a lightweight way to configure and run our official [Elastics
| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` |
| `esJavaOpts` | [Java options](https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html) for Elasticsearch. This is where you should configure the [jvm heap size](https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html) | `-Xmx1g -Xms1g` |
| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 100m`<br>`requests.memory: 2Gi`<br>`limits.cpu: 1000m`<br>`limits.memory: 2Gi` |
| `initResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the initContainer in the statefulset (see the example after this table) | `{}` |
| `networkHost` | Value for the [network.host Elasticsearch setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html) | `0.0.0.0` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for statefulsets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | `accessModes: [ "ReadWriteOnce" ]`<br>`storageClassName: standard`<br>`resources.requests.storage: 30Gi` |
| `antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
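
For example, to give the init container explicit limits through the new `initResources` value (this mirrors the commented defaults the commit adds to `values.yaml`):
```
initResources:
  limits:
    cpu: "25m"
    memory: "128Mi"
  requests:
    cpu: "25m"
    memory: "128Mi"
```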
@@ -115,7 +116,7 @@ A cluster with X-Pack security enabled
```
kubectl exec -ti $(kubectl get pods -l release=helm-es-security -o name | awk -F'/' '{ print $NF }' | head -n 1) bash
```

* Install the X-Pack license
```
curl -XPUT 'http://localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @/usr/share/elasticsearch/config/license/license.json
@@ -132,7 +133,7 @@ A cluster with X-Pack security enabled

### Local development environments

This chart is designed to run on production scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run it against local Kubernetes environments such as Minikube. Below are some examples of how to get this working locally.

#### Minikube

@@ -200,4 +201,3 @@ To run the goss tests against the default example:
cd examples/default
make goss
```

6 changes: 4 additions & 2 deletions elasticsearch/templates/statefulset.yaml
@@ -99,14 +99,16 @@ spec:
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
containers:
- name: "{{ template "name" . }}"
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
exec:
command:
- sh
- -c
- |
@@ -124,7 +126,7 @@ spec:
fi
curl -XGET -s -k --fail ${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
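With the `initResources` values used in the test below, the rendered init container would look roughly like this (a hand-written sketch rather than captured `helm template` output; the container name, image tag and sysctl value are assumptions based on the chart's usual defaults):
```
initContainers:
- name: configure-sysctl
  securityContext:
    privileged: true
  image: "docker.elastic.co/elasticsearch/elasticsearch:6.5.4"
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  resources:
    limits:
      cpu: "25m"
      memory: "128Mi"
    requests:
      cpu: "25m"
      memory: "128Mi"
```
When `initResources` is left at its `{}` default, `toYaml` renders an empty map under `resources:`, so no requests or limits are applied to the init container.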
25 changes: 23 additions & 2 deletions elasticsearch/tests/elasticsearch_test.py
@@ -324,6 +324,29 @@ def test_adding_a_node_selector():
r = helm_template(config)
assert r['statefulset'][uname]['spec']['template']['spec']['nodeSelector']['disktype'] == 'ssd'

def test_adding_resources_to_initcontainer():
config = '''
initResources:
limits:
cpu: "25m"
memory: "128Mi"
requests:
cpu: "25m"
memory: "128Mi"
'''
r = helm_template(config)
i = r['statefulset'][uname]['spec']['template']['spec']['initContainers'][0]

assert i['resources'] == {
'requests': {
'cpu': '25m',
'memory': '128Mi'
},
'limits': {
'cpu': '25m',
'memory': '128Mi'
}
}

def test_adding_a_node_affinity():
config = '''
@@ -418,5 +441,3 @@ def test_adding_in_es_config():
assert {'mountPath': '/usr/share/elasticsearch/config/log4j2.properties', 'name': 'esconfig', 'subPath': 'log4j2.properties'} in s['containers'][0]['volumeMounts']

assert 'configchecksum' in r['statefulset'][uname]['spec']['template']['metadata']['annotations']
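
Assuming the repository's standard pytest setup for these template tests (the exact invocation may differ), the new test can be run on its own using pytest's `-k` keyword filter:
```
pytest elasticsearch/tests/elasticsearch_test.py -k initcontainer
```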


12 changes: 10 additions & 2 deletions elasticsearch/values.yaml
@@ -37,7 +37,7 @@ extraEnvs:
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts:
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
@@ -56,6 +56,14 @@ resources:
cpu: "1000m"
memory: "2Gi"

initResources: {}
# limits:
# cpu: "25m"
#   memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"

networkHost: "0.0.0.0"

volumeClaimTemplate:
@@ -67,7 +75,7 @@ volumeClaimTemplate:

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
