
Add config for hub/proxy/autohttps container's securityContext #1708

Conversation

@mriedem (Contributor) commented Jul 2, 2020

This makes it possible to override the `securityContext`
for the hub container spec, similar to how the hub container
`resources` are overridable via `values.yaml`. By default the
`securityContext` is the same as before.

One use case for this is to be able to specify `capabilities`
in order to run py-spy [1] on the hub container.

One downside to this approach is that anyone overriding the
hub container `securityContext` may need to duplicate the
`runAsUser` and `allowPrivilegeEscalation` entries, which means
they could fall out of alignment with the upstream chart. An
alternative to this full-replacement approach is to add an
optional `extraContainerSecurityContext` entry, similar to how
`extraVolumeMounts` works. The downside there is that it could
eventually conflict with what is not "extra" in the parent chart.
Another alternative is adding a very specific and optional
`containerCapabilities` value which, if set, *only* sets the
`securityContext.capabilities` value in the container spec.

For simplicity and flexibility this opts for the full-replacement
approach, assuming the hub container `securityContext` will not change
frequently.

[1] https://github.com/benfred/py-spy#how-do-i-run-py-spy-in-kubernetes

Closes #1677
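
For reference, the full-replacement approach would render the whole block from `values.yaml` in the hub deployment template. A rough sketch of what that looks like (not the actual diff from this PR):

```yaml
# Sketch only -- not the actual diff from this PR. The whole
# securityContext block is rendered from values, so a user-supplied
# hub.containerSecurityContext replaces the defaults wholesale.
securityContext:
  {{- .Values.hub.containerSecurityContext | toYaml | nindent 4 }}
```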

welcome bot commented Jul 2, 2020

Thanks for submitting your first pull request! You are awesome! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please make sure you followed the pull request template, as this will help us review your contribution more quickly.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@mriedem (Contributor, Author) commented Jul 2, 2020

So with this update to the values.yaml to add capabilities:

  containerSecurityContext:
    runAsUser: *hubUid
    # Don't allow any process to execute as root inside the container
    allowPrivilegeEscalation: false
    capabilities:
      add:
      - SYS_PTRACE

I get this template output:

osboxes@osboxes:~/jupyter/zero-to-jupyterhub-k8s/jupyterhub$ helm template . --set proxy.secretToken=nobodycares -x templates/hub/deployment.yaml
---
# Source: jupyterhub/templates/hub/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hub
  labels:
    component: hub
    app: jupyterhub
    release: release-name
    chart: jupyterhub-0.0.1-set.by.chartpress
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      component: hub
      app: jupyterhub
      release: release-name
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        component: hub
        app: jupyterhub
        release: release-name
        hub.jupyter.org/network-access-proxy-api: "true"
        hub.jupyter.org/network-access-proxy-http: "true"
        hub.jupyter.org/network-access-singleuser: "true"
      annotations:
        # This lets us autorestart when the secret changes!
        checksum/config-map: 224bd4582593c15178c1e810671c53ecd6375050f6d987c81d8918c8c86e8efd
        checksum/secret: 1222d7d6e69fe267655a5350a839bbf85e049f7f5ce3c4c4b55168258b2c374e
    spec:
      nodeSelector: {}
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: hub.jupyter.org/node-purpose
                    operator: In
                    values: [core]
      volumes:
        - name: config
          configMap:
            name: hub-config
        - name: secret
          secret:
            secretName: hub-secret
        - name: hub-db-dir
          persistentVolumeClaim:
            claimName: hub-db-dir
      serviceAccountName: hub
      securityContext:
        fsGroup: 1000
      containers:
        - name: hub
          image: jupyterhub/k8s-hub:set-by-chartpress
          command:
            - jupyterhub
            - --config
            - /etc/jupyterhub/jupyterhub_config.py
            - --upgrade-db
          volumeMounts:
            - mountPath: /etc/jupyterhub/jupyterhub_config.py
              subPath: jupyterhub_config.py
              name: config
            - mountPath: /etc/jupyterhub/z2jh.py
              subPath: z2jh.py
              name: config
            - mountPath: /etc/jupyterhub/config/
              name: config
            - mountPath: /etc/jupyterhub/secret/
              name: secret
            - mountPath: /srv/jupyterhub
              name: hub-db-dir
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              add:
              - SYS_PTRACE
            runAsUser: 1000
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: HELM_RELEASE_NAME
              value: "release-name"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONFIGPROXY_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hub-secret
                  key: proxy.token
          ports:
            - name: http
              containerPort: 8081
          readinessProbe:
            initialDelaySeconds: 0
            periodSeconds: 2
            httpGet:
              path: /hub/health
              port: http

@consideRatio (Member) commented:

Thanks for your work, I think this PR makes sense and is sustainable enough long term to maintain.

Note that for the end user of the chart to change this, it would be something like:

# my-values.yaml
hub:
  containerSecurityContext:
    capabilities:
      add:
      - SYS_PTRACE

The YAML anchor in the Helm chart's values.yaml will be unavailable to a user working with my-values.yaml. I suggest we do some Helm logic just before we render this securityContext with toYaml, by manipulating it: we force the runAsUser entry to be .Values.hub.uid.

Would you like to try that? One could either directly edit .Values.hub.containerSecurityContext by assigning its runAsUser from .Values.hub.uid, or one could make a copy of it first and modify the copy. If you search for := you can see examples of assignment in Helm templating logic.
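
A minimal sketch of that copy-and-modify idea, using Sprig's `merge` and `dict` functions (this is an illustration, not the code that was actually merged):

```yaml
{{- /* Sketch: build the rendered securityContext from a fresh dict so
       .Values itself is never mutated; Sprig's merge gives precedence
       to its first (dest) argument, so runAsUser is forced to
       .Values.hub.uid regardless of what the user supplied. */}}
{{- $securityContext := merge (dict "runAsUser" .Values.hub.uid) .Values.hub.containerSecurityContext }}
securityContext:
  {{- $securityContext | toYaml | nindent 2 }}
```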

@mriedem (Contributor, Author) commented Jul 3, 2020

The YAML anchor in the Helm chart's values.yaml will be unavailable to a user working with my-values.yaml. I suggest we do some Helm logic just before we render this securityContext with toYaml, by manipulating it: we force the runAsUser entry to be .Values.hub.uid.

Do you want the same thing to happen with allowPrivilegeEscalation: false, i.e. hard-code it in regardless of what's in containerSecurityContext?

Would you like to try that? One could either directly edit .Values.hub.containerSecurityContext by assigning its runAsUser from .Values.hub.uid, or one could make a copy of it first and modify the copy. If you search for := you can see examples of assignment in Helm templating logic.

Yeah I can try that. I figured if someone overrode containerSecurityContext then they'd just assign runAsUser and allowPrivilegeEscalation and whatever else themselves.

Also, if we take this approach of setting values in .Values.hub.containerSecurityContext in the deployment template, should we just take one of the alternative approaches laid out in the PR description to make it a more targeted change that doesn't touch runAsUser or allowPrivilegeEscalation?

@mriedem (Contributor, Author) commented Jul 6, 2020

Should there be an entry in the CHANGELOG.md for this? Or are those rolled up at release time based on the commit history?

This removes hub.uid from values.yaml and hard-codes the one
place it was used (hub.containerSecurityContext.runAsUser). A
deprecation note is added which can later be changed to a hard
fail if desired.
@mriedem (Contributor, Author) commented Jul 6, 2020

Would you like to try that? One could either directly edit .Values.hub.containerSecurityContext by assigning its runAsUser from .Values.hub.uid, or one could make a copy of it first and modify the copy. If you search for := you can see examples of assignment in Helm templating logic.

If we deprecate and remove hub.uid from values.yaml how does this change? Does it mean that we need to hard-code the runAsUser value in the hub deployment template itself? I would think that's a step backward if people have been overriding hub.uid for whatever reason. The more I think about this, the more I think we should just do this mentioned alternative:

an alternative to this full replacement approach is to add an
optional extraContainerSecurityContext entry similar to how
extraVolumeMounts works.

Thoughts?

If `hub.uid` is set to 0 then the deprecation note won't appear,
because the condition evaluates to false, so use `hasKey` to see
if the `hub.uid` entry exists regardless of its value.
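
The `hasKey` check described there could look roughly like this in a template (a sketch of the eventual hard-fail variant mentioned above, not the deprecation note as merged):

```yaml
{{- /* Sketch: hasKey is true even when hub.uid is 0, unlike a plain
       truthiness test such as `if .Values.hub.uid`. */}}
{{- if hasKey .Values.hub "uid" }}
{{- fail "hub.uid is deprecated; set hub.containerSecurityContext.runAsUser instead" }}
{{- end }}
```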
@mriedem (Contributor, Author) commented Jul 6, 2020

Test failures in Travis look unrelated but I can't restart the builds.

@consideRatio consideRatio force-pushed the issues/1677-hub-container-securityContext branch from 543515a to d1daf01 Compare July 9, 2020 15:24
@consideRatio (Member) commented:

I added a commit that:

  • ensures .Values.hub.uid gets priority over the default value we have in hub.containerSecurityContext.runAsUser
  • makes this possible for the proxy and autohttps pods as well

Ready for merge in my mind, what do you think @mriedem ?

Comment on lines +173 to +174
containerSecurityContext:
allowPrivilegeEscalation: false
@mriedem (Contributor, Author) commented:

This is technically new since it wasn't defined before for traefik correct? Maybe that's fine?

@consideRatio (Member) commented Jul 9, 2020


True; since it passes the tests it should be fine. It depends on what's going on inside the traefik container, though, I think. Hmm, I'm not too confident about this actually, but this is common good practice and the traefik Docker image is a well-used, official Docker Hub image.

@consideRatio consideRatio changed the title Allow overriding hub container securityContext Add config for hub/proxy/autohttps container's securityContext Jul 9, 2020
@consideRatio consideRatio merged commit d080b1d into jupyterhub:master Jul 9, 2020
welcome bot commented Jul 9, 2020

Congrats on your first merged pull request in this project! 🎉
Thank you for contributing, we are very proud of you! ❤️

@consideRatio (Member) commented:

Thank you for your work on this @mriedem and congratulations on your first PR to the repo!!! ❤️ 🎉

@mriedem (Contributor, Author) commented Jul 9, 2020

Thank you for your work on this @mriedem and congratulations on your first PR to the repo!!! ❤️ 🎉

Thanks for helping this along and yeah your changes look good to me, thanks for adding those. I wanted to see if I could fetch this change and do a local template build to see what the security context would look like with the capabilities changes we want to make but it sounds like this should work.

Only remaining question I have is if there is an upcoming release of the chart planned or does that happen automatically?

@mriedem mriedem deleted the issues/1677-hub-container-securityContext branch July 9, 2020 18:30
@mriedem (Contributor, Author) commented Jul 9, 2020

Only remaining question I have is if there is an upcoming release of the chart planned or does that happen automatically?

Oh I see, there are dev builds available:

https://jupyterhub.github.io/helm-chart/#development-releases-jupyterhub

so I guess I'm asking if there are plans on doing a stable release anytime soon?

https://jupyterhub.github.io/helm-chart/#jupyterhub

@consideRatio (Member) commented:

so I guess I'm asking if there are plans on doing a stable release anytime soon?

Can't say =/

I wanted to see if I could fetch this change and do a local template build to see what the security context would look like with the capabilities changes we want to make but it sounds like this should work.

You can always run helm template ~/jupyterhub --values dev-config.yaml from the git repo root, and update dev-config.yaml to see if your templates render as you want them to. It's also useful to know of the --show-only flag, which helps you isolate a certain template's output.
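
As a concrete example of those flags (the chart path and values filename are assumptions, adjust to your checkout):

```shell
# Render only the hub deployment template with custom values applied,
# then inspect the securityContext in the output.
helm template ./jupyterhub \
  --values dev-config.yaml \
  --show-only templates/hub/deployment.yaml
```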

@mriedem (Contributor, Author) commented Jul 10, 2020

You can always run helm template ~/jupyterhub --values dev-config.yaml from the git repo root, and update dev-config.yaml to see if your templates render as you want them to. It's also useful to know of the --show-only flag, which helps you isolate a certain template's output.

That would indeed be a lot simpler than what I was thinking, I keep forgetting how you can just use --set and pass additional values files. Anyway, we pulled in the latest dev build yesterday and our capabilities container security context value is working like a charm so thanks again.

That does make me think: we're unconditionally installing py-spy [1], but without the capabilities enabled to use it in the hub container it's kind of useless. I wonder if people would find it useful to add a value flag (hub.pyspy.enabled or something) so that we could set up the capabilities for the hub container security context as part of the main chart. We have our fix downstream, but that would make it available generically to everyone using z2jh. If that would be valuable, I could create a follow-up change.

[1] py-spy==0.3.3  # via -r requirements.in

@consideRatio (Member) commented:

It may be valuable for the end user to have such a configuration option, but it's a pure convenience config that adds complexity (like the complexity for hub.uid) to the Helm chart, so I wouldn't want us to add it.

Documentation is of relevance though!

@meeseeksmachine commented:

This pull request has been mentioned on Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/scheduler-insufficient-memory-waiting-errors-any-suggestions/5314/4

Successfully merging this pull request may close these issues.

Allow custom securityContext in hub