
Add support for volume scopes #22077

Merged 2 commits into moby:master on Jun 6, 2016
Conversation

@cpuguy83 (Member)

Add support for volume scopes

This is similar to network scopes, where a volume can be either local or global.
A global volume is one that exists across the entire cluster, whereas a local volume exists on a single engine.

Includes fixes to the plugin RPC generator to support passing structs to the plugin, which was required for this PR.

@cpuguy83 changed the title from "Remote volplugin caps" to "Add support for volume scopes" Apr 15, 2016
@cpuguy83 (Member Author)

Global networks are stored in the KV store, but for volumes we already rely on the volume plugins to tell us everything about the volumes, so there is seemingly no requirement to do that here.

> `{}`
>
> Get the list of volumes registered with the plugin.

(Member)

Possibly copy-pasta; I think this should say something about "Capabilities"?

@cpuguy83 (Member Author)

Note for reviewers, most of the change here is fixing known limitations in the plugin RPC parser/generator + lots of new tests.

@cpuguy83 (Member Author)

docker/engine-api#216

@vdemeester (Member)

@cpuguy83 Am I able to set the scope on a volume for a plugin that is not "multi-host"? (like on a volume that uses the local driver)

@cpuguy83 (Member Author)

@vdemeester It will be local.
All scopes default to local unless the volume driver says it is global.

@vdemeester (Member)

@cpuguy83 ah ok good ! Thanks :)

@vdemeester (Member)

@cpuguy83 docker/engine-api#216 merged, you can add a vendoring commit 😛

@cpuguy83 (Member Author)

cpuguy83 commented May 4, 2016

@vdemeester #22422

@vdemeester (Member)

@cpuguy83 ah right 😝

@cpuguy83 (Member Author)

cpuguy83 commented May 4, 2016

Vendoring updated; this one is ready.

@thaJeztah added this to the 1.12.0 milestone May 4, 2016
```go
}
// import paths have quotes already added in, so need to remove them for name comparison
name = strings.TrimPrefix(name, "\"")
name = strings.TrimSuffix(name, "\"")
```

(Member)

nit: could be just

```go
name = strings.Trim(name, `"`)
```

@thaJeztah (Member)

> Supported scopes are global and local. Any other value in Scope will be ignored and assumed to be local.

Would it be better to error out if an unknown scope is provided? I'm never really comfortable with silently ignoring invalid values; it could easily lead to backward-compatibility issues if a value is ignored now but becomes valid in the future.

@cpuguy83 (Member Author)

cpuguy83 commented May 5, 2016

@thaJeztah Instead, what we can do is just not register the driver if it's providing an invalid scope.
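That registration-time check could look roughly like this. A sketch only: `validateScope` is a hypothetical name, and the empty-string case follows the "default to local" behavior described earlier in the thread:

```go
package main

import "fmt"

// validateScope accepts the two supported scopes, defaults an unset scope
// to "local", and rejects anything else so the driver is not registered.
// The function name and exact error text are illustrative assumptions.
func validateScope(scope string) (string, error) {
	switch scope {
	case "", "local":
		return "local", nil
	case "global":
		return "global", nil
	default:
		return "", fmt.Errorf("volume driver returned invalid scope %q", scope)
	}
}

func main() {
	for _, s := range []string{"", "local", "global", "cluster"} {
		scope, err := validateScope(s)
		fmt.Printf("%q -> scope=%q err=%v\n", s, scope, err)
	}
}
```

Rejecting at registration keeps invalid values from being silently reinterpreted later, which addresses the backward-compatibility concern above.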

@thaJeztah (Member)

Can you fix the copy/paste typo here: #22077 (comment)?

@cpuguy83 (Member Author)

cpuguy83 commented Jun 5, 2016

@thaJeztah haha, (finally) fixed.

@thaJeztah (Member)

592029

LGTM

@thaJeztah (Member)

@cpuguy83 looks like the tests need to be updated to not use '

Now handles `package.Type` and `*package.Type`
Fixes parsing issues with slice and map types.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This is similar to network scopes where a volume can either be `local`
or `global`. A `global` volume is one that exists across the entire
cluster where as a `local` volume exists on a single engine.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
@cpuguy83 (Member Author)

cpuguy83 commented Jun 5, 2016

COBRA!

Fixed.

@vdemeester merged commit 28dde09 into moby:master Jun 6, 2016
@kolyshkin (Contributor)

@cpuguy83 @thaJeztah Is anyone working on adding this to https://github.com/docker/go-plugins-helpers? If no one is doing it, maybe I should give it a try.

@cpuguy83 deleted the remote_volplugin_caps branch June 8, 2016
@cpuguy83 (Member Author)

cpuguy83 commented Jun 8, 2016

Help would be much appreciated.
Swarm could also use a treatment.

@thaJeztah (Member)

not that I'm aware of (@cpuguy83?) so contributions welcome!

@tiagoalves83

At this time, is it possible to test global volumes?
I am trying to get NFS volumes to work in swarm mode, without success.
See issue #24484.

@cpuguy83 (Member Author)

@tiagoalves83 scopes are for external tooling, like github.com/docker/swarm; swarm-mode will not be making use of this.

Also note that the local volume driver can mount an NFS share: `docker volume create --opt type=nfs --opt device=:/export/path --opt o=addr=1.2.3.4`, for example.

@govint

govint commented Jul 13, 2016

What if a plugin is serving up a set of datastores and some of those are local datastores? How should the plugin report its scope: global (because there are global datastores) or local? And what if the plugin only has local datastores accessible to it (no NFS, for example)?

The user has no control over where volumes are placed anyway. Is there any plan to enumerate a plugin's datastores and allow users to specify placement?

@cpuguy83

@cpuguy83 (Member Author)

@govint Always send local scope? It's probably not the best idea to have a plugin serving two roles.

What we'll probably end up doing for swarm mode is something like having the orchestrator pick a node then ask the plugin "does this node have access to the requested data?"

@tiagoalves83

@cpuguy83 NFS local volumes are not working on swarm workers, just the manager.

My environment:
macOS running docker-machine 0.7.0, an NFS server (192.168.99.1), and docker cli 1.12.0-rc4.
I have 2 nodes, swmaster-1 (192.168.99.100) and swnode-1 (192.168.99.101), both running boot2docker 1.12.0-rc4.
From within the swarm nodes (`docker-machine ssh ...`) I can successfully mount NFS:

```shell
docker-machine ssh swmaster-1
sudo mkdir /test-nfs
sudo mount -t nfs 192.168.99.1:/Volumes/HDD/tmp /test-nfs
sudo ls /test-nfs  # OK, lists all files
exit

docker-machine ssh swnode-1
sudo mkdir /test-nfs
sudo mount -t nfs 192.168.99.1:/Volumes/HDD/tmp /test-nfs
sudo ls /test-nfs  # OK, lists all files
exit
```

Now, using docker (swarm mode):

```shell
eval $(docker-machine env swmaster-1)
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.99.1,rw --opt device=:/Volumes/HDD/tmp --name nfs-vol-tmp
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-1 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-2 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-3 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-4 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-5 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-6 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
docker service create --endpoint-mode dnsrr --mount type=volume,source=nfs-vol-tmp,target=/mount --name test-nfs-7 alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
```

Then, inside swmaster-1, the volume is mounted and working:

```shell
docker@swmaster-1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
f7ff90bb6324        alpine:latest       "/bin/sh -c 'while tr"   3 minutes ago       Up 3 minutes                            test-nfs-7.1.9e2q6954xjjbqdwpz14k5dji5
be174c6dcf31        alpine:latest       "/bin/sh -c 'while tr"   3 minutes ago       Up 3 minutes                            test-nfs-6.1.928csw1stg2gpx8d7r3dieiqr
67b01f475bfb        alpine:latest       "/bin/sh -c 'while tr"   4 minutes ago       Up 4 minutes                            test-nfs-3.1.6dv7j6mxmo4y6pkxim6yn2705
0046534afcdf        alpine:latest       "/bin/sh -c 'while tr"   4 minutes ago       Up 4 minutes                            test-nfs-2.1.8loforolv5b34rlztfxwhwjvu

docker@swmaster-1:~$ docker exec f7ff90bb6324 /bin/sh -c "ls /mount" # OK, lists files
docker@swmaster-1:~$ docker exec be174c6dcf31 /bin/sh -c "ls /mount" # OK, lists files
docker@swmaster-1:~$ docker exec 67b01f475bfb /bin/sh -c "ls /mount" # OK, lists files
docker@swmaster-1:~$ docker exec 0046534afcdf /bin/sh -c "ls /mount" # OK, lists files
```

Now, for containers created on swnode-1, the volume is not working:

```shell
docker@swnode-1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3d38d66326e2        alpine:latest       "/bin/sh -c 'while tr"   16 seconds ago      Up 15 seconds                           test-nfs-5.1.ap0nhivlvifunar8z9n9333hx
a17e411d9690        alpine:latest       "/bin/sh -c 'while tr"   45 seconds ago      Up 43 seconds                           test-nfs-4.1.1lud5nyumwcfxn6yrurctkj1i
ee13c9b9ed05        alpine:latest       "/bin/sh -c 'while tr"   5 minutes ago       Up 5 minutes                            test-nfs-1.1.cvc75ku4zyqg5mfz2dz94v718

docker@swnode-1:~$ docker exec 3d38d66326e2 /bin/sh -c "ls /mount" # NOT OK, empty folder
docker@swnode-1:~$ docker exec a17e411d9690 /bin/sh -c "ls /mount" # NOT OK, empty folder
docker@swnode-1:~$ docker exec ee13c9b9ed05 /bin/sh -c "ls /mount" # NOT OK, empty folder
```

`docker volume ls` and `docker volume inspect` on swnode-1 show the same info as swmaster-1 for the nfs-vol-tmp volume, except for the Labels field, which is null on swnode-1 and {} on swmaster-1:

```shell
docker@swmaster-1:~$ docker volume inspect nfs-vol-tmp
[
    {
        "Name": "nfs-vol-tmp",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/nfs-vol-tmp/_data",
        "Labels": {},
        "Scope": "local"
    }
]

docker@swnode-1:~$ docker volume inspect nfs-vol-tmp
[
    {
        "Name": "nfs-vol-tmp",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/nfs-vol-tmp/_data",
        "Labels": null,
        "Scope": "local"
    }
]
```

Something I noticed is that when I remove the volume from the swarm (`docker volume rm nfs-vol-tmp`), the volume is still listed by `docker volume ls` on the worker node swnode-1. Is that normal?

@tiagoalves83

Same issue in version 1.12.0: #25202

@cpuguy83 (Member Author)

@tiagoalves83 this has nothing to do with this PR.
If you are having a problem, please open an issue with all the details so we can take a look.
Thanks.

kolyshkin added a commit to kolyshkin/docker-volume-ploop that referenced this pull request Nov 15, 2016
Since Docker 1.12, volumes can report their scope as either "global"
or "local". Global scope means the volume is available on all nodes in
a Docker Swarm cluster (so Swarm should act accordingly; for example,
it should not try to remove a global volume on each node).

By default (or when started with -scope auto), the scope is autodetected:
if the driver's home is on Virtuozzo Storage, global scope is set and
reported; otherwise it is assumed to be local. This can be overridden
from the command line ("-scope local" or "-scope global").

For more details, see moby/moby#22077

Signed-off-by: Kir Kolyshkin <kir@openvz.org>
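The `-scope auto|local|global` policy the commit message describes can be sketched as follows. `resolveScope` and `homeIsShared` are illustrative names invented here; the real driver detects whether its home is on Virtuozzo Storage:

```go
package main

import (
	"flag"
	"fmt"
)

// resolveScope implements the policy from the commit message: an explicit
// -scope value wins; "auto" falls back to detection. homeIsShared stands
// in for the real "is the driver's home on Virtuozzo Storage?" check.
func resolveScope(scope string, homeIsShared bool) string {
	if scope != "auto" {
		return scope // explicit -scope local / -scope global override
	}
	if homeIsShared {
		return "global"
	}
	return "local"
}

func main() {
	scope := flag.String("scope", "auto", `volume scope: "local", "global", or "auto"`)
	flag.Parse()
	// Detection stubbed to false for this sketch:
	fmt.Println("reporting scope:", resolveScope(*scope, false))
}
```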