
Add a flag for setting init containers to change permissions on the mounts #2337

Closed
holdenk opened this issue Aug 1, 2021 · 7 comments
holdenk commented Aug 1, 2021

Proposed change

Add a flag (defaulting to true) that adds an init container which changes the permissions on the mounts to match the security context.

Alternative options

Users can manually configure their own init containers.

Who would use this feature?

Anyone running a more recent version of k3s, or any PVC provisioner that locks down volume permissions.

Related: see k3s v1.21.3 ( https://github.com/k3s-io/k3s/releases/tag/v1.21.3%2Bk3s1 ) and "Changes the permission of local storage pods" ( k3s-io/k3s#3548 )

(Optional): Suggest a solution

Add default init container (have it be able to be disabled by a flag).

holdenk commented Aug 1, 2021

For anyone who runs into the same problem as me, this showed up as:

[E 2021-08-01 04:57:49.757 JupyterHub app:2973]
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2970, in launch_instance_async
        await self.initialize(argv)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2505, in initialize
        self.init_db()
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 1703, in init_db
        dbutil.upgrade_if_needed(self.db_url, log=self.log)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/dbutil.py", line 112, in upgrade_if_needed
        orm.check_db_revision(engine)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/orm.py", line 771, in check_db_revision
        current_table_names = set(inspect(engine).get_table_names())
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/inspection.py", line 64, in inspect
        ret = reg(subject)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/reflection.py", line 182, in _engine_insp
        return Inspector._construct(Inspector._init_engine, bind)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/reflection.py", line 117, in _construct
        init(self, bind)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/reflection.py", line 128, in _init_engine
        engine.connect().close()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3096, in connect
        return self._connection_cls(self, close_with_result=close_with_result)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 92, in __init__
        else engine.raw_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3175, in raw_connection
        return self._wrap_pool_connect(self.pool.connect, _connection)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3145, in _wrap_pool_connect
        Connection._handle_dbapi_exception_noconnection(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 2004, in _handle_dbapi_exception_noconnection
        util.raise_(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 211, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/base.py", line 3142, in _wrap_pool_connect
        return fn()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 301, in connect
        return _ConnectionFairy._checkout(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 761, in _checkout
        fairy = _ConnectionRecord.checkout(pool)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 419, in checkout
        rec = pool._do_get()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/impl.py", line 259, in _do_get
        return self._create_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 247, in _create_connection
        return _ConnectionRecord(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 362, in __init__
        self.__connect()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 605, in __connect
        pool.logger.debug("Error on connect(): %s", e)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
        compat.raise_(
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/compat.py", line 211, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/base.py", line 599, in __connect
        connection = pool._invoke_creator(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/create.py", line 578, in connect
        return dialect.connect(*cargs, **cparams)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/default.py", line 584, in connect
        return self.dbapi.connect(*cargs, **cparams)
    sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
    (Background on this error at: http://sqlalche.me/e/14/e3q8)
    
[D 2021-08-01 04:57:49.763 JupyterHub application:834] Exiting application: jupyterhub

in the hub logs.

I worked around it by adding:

hub:
  initContainers:
    - name: volume-mount-hack
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /srv/jupyterhub"]
      volumeMounts:
        - name: pvc
          mountPath: /srv/jupyterhub
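The underlying failure is generic, not Kubernetes-specific: SQLite cannot create its database file inside a directory the hub process cannot write to. A minimal local reproduction (a sketch; `try_create_db` is a hypothetical helper, and the failure only shows up when run as a non-root user):

```python
import os
import sqlite3
import tempfile

def try_create_db(directory):
    """Try to create a SQLite database file in `directory`.

    Returns None on success, or the error message on failure.
    """
    try:
        conn = sqlite3.connect(os.path.join(directory, "jupyterhub.sqlite"))
        conn.close()
        return None
    except sqlite3.OperationalError as e:
        return str(e)

d = tempfile.mkdtemp()
os.chmod(d, 0o500)  # r-x only: the process cannot create files here
# As a non-root user this prints "unable to open database file",
# the same message as in the hub traceback above.
print(try_create_db(d))
```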

holdenk commented Aug 1, 2021

Then, if you run into the same issue when launching the user container, the fix is a bit more gnarly because the volume names are dynamic, but the following did the trick for me:

hub:
  initContainers:
    - name: volume-mount-hack
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /srv/jupyterhub"]
      volumeMounts:
        - name: pvc
          mountPath: /srv/jupyterhub
  extraConfig:
    preSpawnHook: |
      async def my_pre_spawn_hook(spawner):
          username = spawner.user.name
          # Fix up the user volume permissions with an init container
          spawner.init_containers = [{
              "name": "fix-fs-perms",
              "image": "busybox",
              "command": ["sh", "-c", "chown -R 1000:1000 /user"],
              "volumeMounts": [{
                  "name": "volume-" + username,
                  "mountPath": "/user"
              }]
          }]
      c.Spawner.pre_spawn_hook = my_pre_spawn_hook
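Since the `volume-<username>` naming pattern depends on how the chart is configured, a more defensive variant can derive the names from the spawner's own volume list instead. This is a sketch; `build_fix_perms_init_containers` is a hypothetical helper, not part of kubespawner:

```python
def build_fix_perms_init_containers(volumes, uid=1000, gid=1000):
    """Build one busybox init container per volume, chown-ing its mount.

    `volumes` is a list of Kubernetes volume dicts in the shape kubespawner
    keeps in `spawner.volumes`; only the "name" key is used here.
    """
    containers = []
    for i, vol in enumerate(volumes):
        mount_path = f"/fix-perms/{i}"
        containers.append({
            "name": f"fix-fs-perms-{i}",
            "image": "busybox",
            "command": ["sh", "-c", f"chown -R {uid}:{gid} {mount_path}"],
            "volumeMounts": [{"name": vol["name"], "mountPath": mount_path}],
        })
    return containers
```

Inside the pre-spawn hook this would become `spawner.init_containers = build_fix_perms_init_containers(spawner.volumes)`, with no assumption about the volume naming scheme.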

(Note: specifying fsGid to affect the security context doesn't work, despite the kubespawner API docs implying it will run a chmod/chown. I'm guessing that's because the chown is attempted as the jovyan user, who doesn't have permission.)

manics commented Aug 1, 2021

fsGid is passed to the Kubernetes storage provisioner, which can automatically set the permissions on the volume to ensure the user has access before it's even mounted. As you've discovered, not all storage controllers do this.

I don't think we should automatically override the volume permissions as they may have been changed after the volume is provisioned. In addition it requires running the init container as root which is blocked on some clusters.

I think this is a good addition to the docs, though. I'm not sure if it should be added as a non-default option in the Helm chart; @consideRatio, what do you think?
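For context, kubespawner's fsGid ends up as the pod-level `fsGroup` in the Kubernetes PodSecurityContext; whether the volume contents are actually re-owned before the containers start depends on the volume plugin honoring `fsGroup`. A sketch of the resulting pod spec fragment (field names per the Kubernetes API; values illustrative):

```yaml
# Sketch: what fsGid maps to on the user pod. A volume plugin that honors
# fsGroup recursively changes group ownership of the volume contents to
# this GID before the containers start; not all provisioners support it.
spec:
  securityContext:
    fsGroup: 1000
```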

manics commented Aug 1, 2021

Note it sounds like the K3s behaviour is a bug: k3s-io/k3s#3704
Only the parent directory should be root-owned 0700; the subdirectories should be writable.

Maybe the fix will be in the next release? k3s-io/k3s#3721

holdenk commented Aug 1, 2021

That would be rad if it gets fixed in K3s. I know the Bitnami MinIO Helm chart has a built-in option to work around this, so it might be a more common PVC permission bug, but I'm not sure since I only run K3s and EKS these days.

flokli commented Aug 10, 2021

This should get fixed in the next k3s release (next Wednesday): k3s-io/k3s#3704 (comment)

@consideRatio

Thanks for your investigative work everyone involved! I'll close this as a no-action for this Helm chart.
