
Directories provisioned by hostPath provisioner are only writeable by root #1990

Closed
yuvipanda opened this issue Sep 20, 2017 · 31 comments
Labels
area/mount kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@yuvipanda
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.21.0
  • OS (e.g. from /etc/os-release): ubuntu 17.04
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): minikube-v0.20.0.iso
  • Install tools:
  • Others:

What happened:

  1. I provision a PVC
  2. It is dynamically bound to a hostPath volume by the minikube provisioner
  3. A pod is created that mounts the PVC
  4. The process in the pod runs as uid 1000, with fsGroup 1000 as well
  5. The process cannot write to the PVC mount, since it is only writeable by root

Since we don't want to allow escalating privileges in the pod, we can't use the PVC mount at all.

What you expected to happen:

Some way of specifying in the PVC what uid / gid the hostPath should be owned by, so we can write to it.

How to reproduce it (as minimally and precisely as possible):

kubectl apply -f the following file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "0"
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
    - name: test 
      persistentVolumeClaim:
        claimName: test
  containers:
  - image: busybox:latest
    name: notebook
    volumeMounts: 
     - mountPath: /home/test
       name: test
    command: ["/bin/sh", "-c", "touch /home/test/hi"]
  securityContext:
    fsGroup: 1000
    runAsUser: 1000

It fails with the following output:

touch: /home/test/hi: Permission denied

If you set the fsGroup and runAsUser to 0, it succeeds.

@yuvipanda
Contributor Author

Perhaps an annotation for the PVC that sets ownership? Or mode?

@yuvipanda
Contributor Author

According to this line in the hostPath provisioner:

if err := os.MkdirAll(path, 0777); err != nil {

it looks like the PVC directory should be created with 0777 permissions, but in reality:

$ ls -lhsd pvc-d55626b9-9e3b-11e7-a572-08002772c173/
4.0K drwxr-xr-x 2 root root 4.0K Sep 20 19:46 pvc-d55626b9-9e3b-11e7-a572-08002772c173/

@yuvipanda
Contributor Author

I'm now convinced this is because the process runs with the default umask of 022, so the 0777 mode ends up as 0755 instead.

We could drop the umask to 0000 just before this call and then restore it afterwards.
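
For illustration, a minimal Go sketch of that umask approach (not the actual minikube code; makeWorldWritableDir is a hypothetical helper name):

package main

import (
	"fmt"
	"os"
	"syscall"
)

// makeWorldWritableDir creates path (and any missing parents) and ensures it
// ends up with mode 0777 by clearing the process umask around the MkdirAll
// call. syscall.Umask is Unix-only, and the umask is process-wide, so this is
// racy if other goroutines create files or directories concurrently.
func makeWorldWritableDir(path string) error {
	old := syscall.Umask(0)  // clear umask so MkdirAll's 0777 is not masked down to 0755
	defer syscall.Umask(old) // restore the previous umask afterwards
	return os.MkdirAll(path, 0777)
}

func main() {
	if err := makeWorldWritableDir("/tmp/hostpath-provisioner/pvc-example"); err != nil {
		fmt.Println(err)
	}
}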

@dlorenc
Contributor

dlorenc commented Sep 21, 2017

What about doing something like this:
https://github.com/kubernetes/kubernetes/blob/9e223539290b5401a3b912ea429147a01c3fda75/pkg/volume/util/atomic_writer.go#L360

where we'd set the permissions via a call to chmod after creation, rather than setting/resetting the process-level umask?
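
For comparison, a minimal Go sketch of that chmod-after-creation approach (again hypothetical, not the actual minikube patch); os.Chmod applies the mode exactly as given, regardless of the process umask:

package main

import (
	"fmt"
	"os"
)

// provisionPath creates the PV directory and then chmods it explicitly.
// MkdirAll's 0777 argument is filtered by the process umask (022 -> 0755),
// but the follow-up Chmod sets the final permissions unaffected by the umask.
func provisionPath(path string) error {
	if err := os.MkdirAll(path, 0777); err != nil {
		return err
	}
	return os.Chmod(path, 0777)
}

func main() {
	if err := provisionPath("/tmp/hostpath-provisioner/pvc-example"); err != nil {
		fmt.Println(err)
	}
}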

@dlorenc
Contributor

dlorenc commented Sep 21, 2017

Thanks for figuring this out, by the way!

@yuvipanda
Contributor Author

That works too, and might be better than fiddling with umask (since umask is process-wide, afaik)! I'll amend the patch later today.

@aaron-prindle added the kind/bug and area/mount labels Sep 21, 2017
@yuvipanda
Contributor Author

@dlorenc np, and thanks for reviewing the PR so quickly!

@antoineco

This was supposed to fix the fsGroup compatibility, but doesn't seem to.

minikube v0.24.1

With the following securityContext

      securityContext:
        fsGroup: 20003

and the following PVC template

  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi

the host directories are still created with the following mode
drwxr-xr-x 2 root root 4096 Dec 1 16:47 /tmp/hostpath-provisioner/pvc-541614b7-d6b7-11e7-a722-36d29dc40439

@sonnysideup

I'm seeing the same issue running minikube version: v0.24.1. I'm dynamically creating a couple of PVCs/PVs when launching a StatefulSet. This, in turn, is using the default storage provisioner (k8s.io/minikube-hostpath).

@antoineco

@dlorenc @yuvipanda would it be possible to reopen this issue?

@dlorenc reopened this Dec 5, 2017
@greglanthier

greglanthier commented Dec 13, 2017

I suspect I've bumped into the same issue seen by @yuvipanda.

The process I followed is slightly different but the end results are the same: a hostPath volume is created in /tmp/hostpath-provisioner with permissions that deny write access to processes in containers that run with a non-root id.

  • Minikube version v0.24.1
  • OS Ubuntu 16.04.2 LTS
  • VM Driver virtualbox
  • ISO version minikube-v0.23.6.iso
  • Helm v2.6.1

What happened:

  1. I started minikube: minikube start
  2. I initialised Helm: helm init
  3. I installed the Redis Helm chart: helm install stable/redis --name=my-release

Ultimately the Redis pod failed to start up. The pod logs contained something like this:

Welcome to the Bitnami redis container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
Send us your feedback at containers@bitnami.com

nami    INFO  Initializing redis
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/redis'

The Docker image used by the Redis Helm chart launches the Redis daemon as uid 1001. During its initialisation the pod encounters permission errors while attempting to create files on a persistent volume.

The Redis pod uses a persistent volume that ultimately maps to a directory on the minikube VM that is created with permissions 0755 owned by root:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                      STORAGECLASS   REASON    AGE
pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9   8Gi        RWO            Delete           Bound     default/my-release-redis   standard                 29m
$ minikube ssh "ls -l /tmp/hostpath-provisioner/"
total 4
drwxr-xr-x 2 root root 4096 Dec 13 21:35 pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9
$ 

If I chmod 0777 pvc-8fd0125d-e04d-11e7-b721-0800271a7cc9, the Redis pod starts up properly.

I don't know what the best option for a fix would be - although I'm not sure this is a bug.

There has been a fair amount of debate in other issues (see kubernetes/kubernetes#2630, kubernetes/charts#976, and others) that makes me hesitant to advocate for a umask- or chmod-type change, since I don't know what implications making a hostPath volume globally readable/writable by all containers would have. If it's safe enough, this seems like a reasonable path of least resistance.

Allowing some customisation of mountOptions when creating a persistentVolume in minikube could help (e.g. create the hostPath with an owner/group id of n) - at least that's what I first tried to do - but it doesn't look like mountOptions are supported by the storage provider used by minikube yet.

@chancez
Member

chancez commented Dec 22, 2017

The issue is that the volume provisioner isn't really responsible for the mounted volume's permissions; the kubelet is. The same problem exists for basically all external volume provisioners that don't have a mounter implementation in core. Local volumes are, I think, the only supported volume type with a provisioner outside of core but a mounter implemented in core.

I don't know what the best option is, but it seems that if local volumes get better support, then perhaps minikube should switch to using the local volume provisioner instead of the hostpath-provisioner, and then that may resolve most of these issues.

No matter what, even if the hostpath provisioner can set proper permissions (777 by default, or by letting the storageClass specify the permissions), the user/group of the volume will still not match the pod's fsGroup, which can still break things that assume a particular user.

@greglanthier

Yup, thank you @chancez. Your summary confirms what I’ve gleaned from the K8s docs here.

I’m thinking of submitting a PR for the Redis Helm chart that would allow consumers to override the runAsUser and fsGroup settings - but that feels like a hack.

I don’t have enough experience with this sort of thing to have a feeling for the right approach to this scenario.


@chancez
Member

chancez commented Dec 22, 2017

I think being able to set those values will help in many cases. I use that to make Jenkins not fail on minikube when using PVCs, but I also have serverspec tests to validate that Jenkins comes up correctly, and currently, while things work, my tests fail in minikube because the owner/group on the files is root, so it's not a silver bullet.

sleshchenko added a commit to sleshchenko/che that referenced this issue Feb 5, 2018
It is required to work around an issue with PVC on minikube where mounted
folders are not writable for non-root users kubernetes/minikube#1990
sleshchenko added a commit to sleshchenko/che that referenced this issue Feb 6, 2018
It is required to work around an issue with PVC on minikube where mounted
folders are not writable for non-root users kubernetes/minikube#1990
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 22, 2018
@antoineco

/remove-lifecycle stale

@cneberg

cneberg commented Dec 6, 2018

I hit this same issue on Rancher Kubernetes and found my way here through a Google search while looking for a solution.

In case it helps others, here is the workaround I used. Create an init container one level above where you want to mount your writable directory (I want /data/myapp/submission, so I create a volume at /data/myapp), then in that container's command create the submission directory and chown it to the user's numeric uid. The account and uid do not need to exist in the init container. When the main container(s) come up, the directory you want to write to will have the correct ownership and you can use it as expected.

initContainers:
- name: init-myapp
  image: registry.hub.docker.com/library/busybox:latest
  command: ['sh', '-c', 'mkdir -p /data/myapp/submission/ && chown 1049 /data/myapp/submission/']
  volumeMounts:
  - name: submission
    mountPath: "/data/myapp/"

Originally I had tried chown-ing the mount itself, not a directory below it - the behavior in that case was odd: it acted as if it could write files, but they silently disappeared after creation.

@weisjohn

Observed this issue today; there doesn't seem to be any workaround other than init containers.

@monokal

monokal commented Mar 23, 2019

Also bumped into this "Permission denied" error when mounting a hostPath PersistentVolume into a container that uses a non-root USER.

This isn't an issue with vanilla Docker and a named volume on my local host: if I chown some_user:some_group in the Dockerfile itself, the permissions/ownership seem to persist even after the volume is mounted at runtime.

@AkihiroSuda
Member

Should this be reopened?

@monokal

monokal commented Apr 23, 2019

I think so, @AkihiroSuda - the only workaround I found was to grant my USER sudo privileges in order to chown the mount at runtime, which pretty much negates the point of using a non-root user.

@d3vpasha

d3vpasha commented Sep 6, 2021

I still have this issue with minikube. The fsGroup configuration does not apply, and the volume I mounted using hostPath still has root as its owner and group. I have no choice other than using initContainers to change the owner with a plain old "chown".

@eleaar

eleaar commented Sep 29, 2021

Hi, I'm also running into this issue with some Helm charts that explicitly forbid running their containers as root (e.g. bitnami/kube-prometheus). Could this be reopened?

@mj3c

mj3c commented Nov 26, 2021

Also having this issue. Started a minikube cluster (with --driver=kvm2 --nodes=2), tried to deploy Prometheus with Helm, but prometheus-server fails to start because it can't write any data in the mounted volume.

Funnily enough, I found that if the pod gets scheduled on the control plane node (minikube), it starts successfully, but if it's scheduled on the second node (minikube-m02), the problem appears. It seems like the hostpath provisioner is not working properly on all nodes. (#11765)

@gonzojive

Please reopen.

@leqii-com

Everything else is working, but the pod won't start. Does anyone know how to fix this? The pod error is as follows:
/opt/bitnami/scripts/librediscluster.sh: line 202:
/bitnami/redis/data/nodes.sh: Permission denied

@stevester94

As a workaround I am deploying a DaemonSet that mounts the hostpath-provisioner directory and sets all subdirectories to 777 every second.

apiVersion: v1
kind: Namespace
metadata:
  name: minikube-pv-hack
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: minikube-pv-hack
  namespace: minikube-pv-hack
spec:
  selector:
    matchLabels:
      name: minikube-pv-hack
  template:
    metadata:
      labels:
        name: minikube-pv-hack
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: minikube-pv-hack
        image: registry.access.redhat.com/ubi8:latest
        command:
        - bash
        - -c
        - |
          while : ; do
            chmod 777 /target/*
            sleep 1
          done
        volumeMounts:
        - name: host-vol
          mountPath: /target
      volumes:
      - name: host-vol
        hostPath:
          path: /tmp/hostpath-provisioner/default

@vjm

vjm commented May 18, 2024

Is there any solution here? I am encountering the same problems.
