Merge pull request #1684 from GeorgianaElena/helm3_docs
[WIP] Make the docs assume helm3
consideRatio authored Sep 4, 2020
2 parents dd4fd8b + 5551d3d commit b92c0a5
Showing 11 changed files with 186 additions and 190 deletions.
4 changes: 2 additions & 2 deletions doc/source/administrator/debug.rst
@@ -110,7 +110,7 @@ To fix this, let's add a tag to our ``config.yaml`` file::

Then run a helm upgrade::

helm upgrade jhub jupyterhub/jupyterhub --version=v0.6 -f config.yaml
helm upgrade --cleanup-on-fail jhub jupyterhub/jupyterhub --version=v0.6 -f config.yaml

where ``jhub`` is the helm release name (substitute the release name that you
chose during setup).
@@ -188,4 +188,4 @@ communicate with the proxy pod API, likely because of a problem in the

3. Redeploy the helm chart::

helm upgrade jhub jupyterhub/jupyterhub -f config.yaml
helm upgrade --cleanup-on-fail jhub jupyterhub/jupyterhub -f config.yaml
2 changes: 1 addition & 1 deletion doc/source/administrator/optimization.md
@@ -74,7 +74,7 @@ situations:

**NOTE**: With this enabled, your `helm upgrade` will take a long time if you
introduce a new image, as it will wait for the pulling to complete. We
recommend that you add `--timeout 600` or similar to your `helm upgrade`
recommend that you add `--timeout 10m0s` or similar to your `helm upgrade`
command to give it enough time.
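
As a sketch, with an assumed release name `jhub`, the full upgrade command would then look like:

```bash
helm upgrade --cleanup-on-fail --timeout 10m0s jhub jupyterhub/jupyterhub -f config.yaml
```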

The hook-image-puller is enabled by default. To disable it, use the
`prePuller.hook.enabled` chart setting.
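A minimal sketch of the corresponding `config.yaml` change (appending via a heredoc; adjust to your own workflow):

```bash
cat >> config.yaml <<EOF
prePuller:
  hook:
    enabled: false
EOF
```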
12 changes: 3 additions & 9 deletions doc/source/administrator/security.md
@@ -159,17 +159,11 @@ http://ssllabs.com/ssltest/analyze.html?d=<YOUR-DOMAIN>

## Secure access to Helm

In its default configuration, helm pretty much allows root access to all other
pods running in your cluster. See this [Bitnami Helm security article](https://engineering.bitnami.com/articles/helm-security.html)
for more information. As a consequence, the default allows all users in your cluster to pretty much have root access to your whole cluster!
Helm 3 supports the security, identity, and authorization features of modern Kubernetes. Helm’s permissions are evaluated using your kubeconfig file. Cluster administrators can restrict user permissions at whatever granularity they see fit.

You can mitigate this by limiting public access to the Tiller API. To do so, use the following command:
Read more about organizing cluster access using kubeconfig files in the
[Kubernetes docs](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).

```bash
kubectl --namespace=kube-system patch deployment tiller-deploy --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'
```

This limit shouldn't affect helm functionality in any form.
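
As an illustration of the Helm 3 model described above, a cluster admin could scope a user's permissions to a single namespace with RBAC (a sketch; the role, namespace, and user names are placeholders):

```bash
# Allow 'alice' to manage common workload resources, but only in the 'jhub' namespace
kubectl create role helm-jhub --namespace jhub \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=deployments,services,pods,configmaps,secrets
kubectl create rolebinding helm-jhub-alice --namespace jhub \
  --role=helm-jhub --user=alice
```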

## Audit Cloud Metadata server access

6 changes: 3 additions & 3 deletions doc/source/administrator/upgrading.md
@@ -63,13 +63,13 @@ a production system!
To run the upgrade:

```
helm upgrade <YOUR-HELM-RELEASE-NAME> jupyterhub/jupyterhub --version=<RELEASE-VERSION> -f config.yaml
helm upgrade --cleanup-on-fail <YOUR-HELM-RELEASE-NAME> jupyterhub/jupyterhub --version=<RELEASE-VERSION> -f config.yaml
```

For example, to upgrade to v0.6, enter the following, substituting `<YOUR-HELM-RELEASE-NAME>` with the release name you chose:

```
helm upgrade <YOUR-HELM-RELEASE-NAME> jupyterhub/jupyterhub --version=v0.6 -f config.yaml
helm upgrade --cleanup-on-fail <YOUR-HELM-RELEASE-NAME> jupyterhub/jupyterhub --version=v0.6 -f config.yaml
```

### Database
@@ -147,7 +147,7 @@ If the upgrade is failing on a test system or a system that does not serve users
you can try deleting the helm chart using:

```
helm delete <YOUR-HELM-RELEASE-NAME> --purge
helm delete <YOUR-HELM-RELEASE-NAME>
```

`helm list` may be used to find `<YOUR-HELM-RELEASE-NAME>`.
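
For instance, assuming a release named `jhub` deployed to a namespace of the same name (both names are placeholders):

```bash
# Helm 3 scopes releases to namespaces; list across all of them to find yours
helm list --all-namespaces
# In Helm 3, 'helm delete' is an alias of 'helm uninstall'
helm delete jhub --namespace jhub
```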
3 changes: 2 additions & 1 deletion doc/source/customizing/extending-jupyterhub.rst
@@ -22,7 +22,8 @@ The general method to modify your Kubernetes deployment is to:
RELEASE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--version=0.8.2 \
--values config.yaml
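
After the upgrade completes, one way to confirm the change rolled out is to watch the pods restart (a sketch, assuming the hub was deployed to a ``jhub`` namespace):

.. code:: bash

   kubectl get pod --namespace jhub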

7 changes: 6 additions & 1 deletion doc/source/index.rst
@@ -21,6 +21,11 @@ This documentation is for jupyterhub chart version |release|, which deploys Jupy

This version of the chart requires kubernetes ≥1.11 and helm ≥2.11.

.. note::

   Helm 2 has been deprecated since November 2019, and
   `will receive bug fixes until August 13, 2020 <https://helm.sh/blog/covid-19-extending-helm-v2-bug-fixes>`_.
   Consequently, the Helm references in this documentation assume Helm v3.
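
To confirm which Helm version is on your path, you can run the following (it should print a ``v3.x.y`` version string):

.. code:: bash

   helm version --short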

.. _about-guide:

@@ -32,7 +37,7 @@ While doing this, you will gain valuable experience with:

* **A cloud provider** such as Google Cloud, Microsoft Azure, Amazon EC2, IBM Cloud...
* **Kubernetes** to manage resources on the cloud
* **Helm** to configure and control the packaged JupyterHub installation
* **Helm v3** to configure and control the packaged JupyterHub installation
* **JupyterHub** to give users access to a Jupyter computing environment
* **A terminal interface** on some operating system

106 changes: 16 additions & 90 deletions doc/source/setup-jupyterhub/setup-helm.rst
@@ -6,132 +6,58 @@ Setting up Helm
`Helm <https://helm.sh/>`_, the package manager for Kubernetes, is a useful tool
for installing, upgrading, and managing applications on a Kubernetes cluster.
Helm packages are called *charts*.
We will be installing and managing JupyterHub on
our Kubernetes cluster using a Helm chart.
We will be installing and managing JupyterHub on our Kubernetes cluster using a Helm chart.

Charts are abstractions describing how to install packages onto a Kubernetes
cluster. When a chart is deployed, it works as a templating engine to populate
multiple `yaml` files for package dependencies with the required variables, and
then runs `kubectl apply` to apply the configuration to the resource and install
the package.
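
As a concrete illustration of this templating step, a chart's manifests can be rendered locally without installing anything (a sketch, assuming the jupyterhub chart repository has been added and a ``config.yaml`` exists):

.. code:: bash

   # Render the chart's templates locally and print the resulting manifests
   helm template jhub jupyterhub/jupyterhub --values config.yaml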

Helm has two parts: a client (`helm`) and a server (`tiller`). Tiller runs
inside of your Kubernetes cluster as a pod in the kube-system namespace. Tiller
manages both the *releases* (installations) and *revisions* (versions) of charts deployed
on the cluster. When you run `helm` commands, your local Helm client sends
instructions to `tiller` in the cluster that in turn make the requested changes.

.. note::

These instructions are for Helm 2.
Helm 3 includes several major breaking changes and is not yet officially
supported, but :doc:`preliminary instructions are available for testing
<setup-helm3>`.
If you previously installed Z2JH using Helm 2, it is worth noting that
Helm 3 includes several major **breaking changes**. See the
`Helm 3 FAQ <https://helm.sh/docs/faq/>`_ for more information.

For **migrating from Helm v2 to v3**, check out the official
`Helm guide <https://helm.sh/docs/topics/v2_v3_migration/>`_.

Installation
------------

While several `methods to install Helm
<https://v2.helm.sh/docs/using_helm/#installing-helm>`_ exists, the
<https://helm.sh/docs/intro/install/>`_ exist, the
simplest way to install Helm is to run Helm's installer script in a terminal:

.. code:: bash

   curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

.. _helm-rbac:

Initialization
--------------

After installing helm on your machine, initialize Helm on your Kubernetes
cluster:

1. Set up a `ServiceAccount
<https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/>`_
for use by `tiller`.

.. code-block:: bash

   kubectl --namespace kube-system create serviceaccount tiller

2. Give the `ServiceAccount` full permissions to manage the cluster.

.. note::

If you know your kubernetes cluster does not have RBAC enabled, you **must** skip this step.
Most users can ignore this note.

.. code-block:: bash

   kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

See `our RBAC documentation
<../administrator/security.html#use-role-based-access-control-rbac>`_ for more information.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
3. Initialize `helm` and `tiller`.
* The minimum supported version of Helm in Z2JH is `3.2.0`.

.. code-block:: bash
* Helm 3 uses the same security mechanisms as other Kubernetes clients such as `kubectl`.

helm init --service-account tiller --history-max 100 --wait
This command only needs to run once per Kubernetes cluster; it will create a
`tiller` deployment in the kube-system namespace and set up your local `helm`
client.

This command installs and configures the `tiller` part of Helm (the whole
project, not the CLI) on the remote Kubernetes cluster. Later, when you want
to deploy changes with `helm` (the local CLI), it will talk to `tiller`
and tell it what to do. `tiller` then executes these instructions from
within the cluster.

We limit the history to 100 previous installs, as very long histories slow
down helm commands a lot.

.. note::

If you wish to install `helm` on another computer, you won't need to set up
`tiller` again, but you still need to initialize `helm`:

.. code-block:: bash

   helm init --client-only

Secure Helm
-----------

Ensure that `tiller` is secure from access inside the cluster:

.. code:: bash

   kubectl patch deployment tiller-deploy --namespace=kube-system --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'

`tiller`'s port is exposed in the cluster without authentication, and if you probe
this port directly (i.e. by bypassing `helm`) then `tiller`'s permissions can be
exploited. This step forces `tiller` to listen to commands from localhost (i.e.
`helm`) *only*, so that e.g. other pods inside the cluster cannot ask `tiller` to
install a new chart that grants them arbitrary, elevated RBAC privileges to
exploit. `More details here <https://engineering.bitnami.com/articles/helm-security.html>`_.

Verify
------

You can verify that you have the correct version and that it installed properly
by running:
You can verify that it is installed properly by running:

.. code:: bash

   helm version
   helm list

Within a minute or so, once `tiller` on the cluster is ready, it should be able
to provide output like the below. Make sure you have at least version 2.11.0 and
that the client (`helm`) and server (`tiller`) versions match!
You should see an empty list since no Helm charts have been installed:

.. code-block:: bash

   Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
   Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
   NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION

Next Step
---------

Congratulations, Helm is now set up! Let's continue with :ref:`setup-jupyterhub`!

145 changes: 145 additions & 0 deletions doc/source/setup-jupyterhub/setup-helm2.rst
@@ -0,0 +1,145 @@
:orphan:

.. _setup-helm2:

Setting up Helm2
================

.. warning::

   Helm 2 is no longer supported by Zero to JupyterHub and shouldn't be used when
   setting up new clusters. Helm 2 has been deprecated since November 2019, and
   `will receive bug fixes until August 13, 2020 <https://helm.sh/blog/covid-19-extending-helm-v2-bug-fixes>`_.

`Helm <https://helm.sh/>`_, the package manager for Kubernetes, is a useful tool
for installing, upgrading, and managing applications on a Kubernetes cluster.
Helm packages are called *charts*.
We will be installing and managing JupyterHub on
our Kubernetes cluster using a Helm chart.

Charts are abstractions describing how to install packages onto a Kubernetes
cluster. When a chart is deployed, it works as a templating engine to populate
multiple `yaml` files for package dependencies with the required variables, and
then runs `kubectl apply` to apply the configuration to the resource and install
the package.

Helm has two parts: a client (`helm`) and a server (`tiller`). Tiller runs
inside of your Kubernetes cluster as a pod in the kube-system namespace. Tiller
manages both the *releases* (installations) and *revisions* (versions) of charts deployed
on the cluster. When you run `helm` commands, your local Helm client sends
instructions to `tiller` in the cluster that in turn make the requested changes.

.. note::

These instructions are for Helm 2.
Helm 3 includes several major breaking changes and is not yet officially
supported, but :doc:`preliminary instructions are available for testing
<setup-helm3>`.

Installation
------------

While several `methods to install Helm
<https://v2.helm.sh/docs/using_helm/#installing-helm>`_ exist, the
simplest way to install Helm is to run Helm's installer script in a terminal:

.. code:: bash

   curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

.. _helm-rbac:

Initialization
--------------

After installing helm on your machine, initialize Helm on your Kubernetes
cluster:

1. Set up a `ServiceAccount
<https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/>`_
for use by `tiller`.

.. code-block:: bash

   kubectl --namespace kube-system create serviceaccount tiller

2. Give the `ServiceAccount` full permissions to manage the cluster.

.. note::

If you know your kubernetes cluster does not have RBAC enabled, you **must** skip this step.
Most users can ignore this note.

.. code-block:: bash

   kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

See `our RBAC documentation
<../administrator/security.html#use-role-based-access-control-rbac>`_ for more information.

3. Initialize `helm` and `tiller`.

.. code-block:: bash

   helm init --service-account tiller --history-max 100 --wait

This command only needs to run once per Kubernetes cluster; it will create a
`tiller` deployment in the kube-system namespace and set up your local `helm`
client.

This command installs and configures the `tiller` part of Helm (the whole
project, not the CLI) on the remote Kubernetes cluster. Later, when you want
to deploy changes with `helm` (the local CLI), it will talk to `tiller`
and tell it what to do. `tiller` then executes these instructions from
within the cluster.

We limit the history to 100 previous installs, as very long histories slow
down helm commands a lot.
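
Before proceeding, you can wait for the `tiller` deployment to become ready (a sketch; ``tiller-deploy`` is the deployment that ``helm init`` creates in the kube-system namespace):

.. code-block:: bash

   kubectl --namespace kube-system rollout status deployment tiller-deploy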

.. note::

If you wish to install `helm` on another computer, you won't need to set up
`tiller` again, but you still need to initialize `helm`:

.. code-block:: bash

   helm init --client-only

Secure Helm
-----------

Ensure that `tiller` is secure from access inside the cluster:

.. code:: bash

   kubectl patch deployment tiller-deploy --namespace=kube-system --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'

`tiller`'s port is exposed in the cluster without authentication, and if you probe
this port directly (i.e. by bypassing `helm`) then `tiller`'s permissions can be
exploited. This step forces `tiller` to listen to commands from localhost (i.e.
`helm`) *only*, so that e.g. other pods inside the cluster cannot ask `tiller` to
install a new chart that grants them arbitrary, elevated RBAC privileges to
exploit. `More details here <https://engineering.bitnami.com/articles/helm-security.html>`_.
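
To double-check that the patch took effect, you can inspect the container command with ``kubectl``'s JSONPath output (a sketch):

.. code:: bash

   kubectl --namespace kube-system get deployment tiller-deploy \
       -o jsonpath='{.spec.template.spec.containers[0].command}'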

Verify
------

You can verify that you have the correct version and that it installed properly
by running:

.. code:: bash

   helm version

Within a minute or so, once `tiller` on the cluster is ready, it should be able
to provide output like the below. Make sure you have at least version 2.11.0 and
that the client (`helm`) and server (`tiller`) versions match!

.. code-block:: bash

   Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
   Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Next Step
---------

Congratulations, Helm is now set up! Let's continue with :ref:`setup-jupyterhub`!