Container config not propagated when registry uses old V2.1 manifest #1641

Closed
chanseokoh opened this issue Apr 17, 2019 · 8 comments · Fixed by #1644
@chanseokoh
Member

Container configuration is normally propagated from the base image, but not when using quay.io/sdase/openjdk-runtime:8-hotspot as a base image. docker inspect shows that the base image does have Env, Labels, etc.

This may be related to Quay using the old manifest V2.1; when I retag the base image, push it to Docker Hub (which generates manifest V2.2), and use it as a base image, it works.

@chanseokoh chanseokoh self-assigned this Apr 17, 2019
@chanseokoh
Member Author

This is expected because, unlike when dealing with a V2.2 manifest, the only thing we add when creating Image<Layer> from a V2.1 manifest is the layers. In JsonToImageTranslator:

  public static Image<Layer> toImage(V21ManifestTemplate manifestTemplate) ... {
    Image.Builder<Layer> imageBuilder = Image.builder(V21ManifestTemplate.class);

    // V21 layers are in reverse order of V22. (The first layer is the latest one.)
    for (DescriptorDigest digest : Lists.reverse(manifestTemplate.getLayerDigests())) {
      imageBuilder.addLayer(new DigestOnlyLayer(digest));
    }

    // Nothing else we add.

    return imageBuilder.build();
  }

In V2.1, the top-most history.v1Compatibility field (corresponding to the latest layer) seems to carry the container configuration. I'm not sure if we can always expect that. (A sketch of reading it follows the manifest below.)

{
   "schemaVersion": 1, 
   "tag": "8-hotspot", 
   "name": "sdase/openjdk-runtime", 
   "architecture": "amd64", 
   "fsLayers": [
      {
         "blobSum": "sha256:01f204195488f0edd59fe12e05f9bed7a6c6aa66a37e5e4b0470ec138804ae6d"
      }, 
      {
         "blobSum": "sha256:c88a6d0aeb7afb5a9dc9d8446ff7c58e240b2745a53bc7637d08de7b3d74036a"
      }
   ], 
   "history": [
      {
         "v1Compatibility": "{\"architecture\":\"amd64\",\"config\":{\"User\":\"1001\",\"Env\":[\"JAVA_HOME=/opt/openjdk\",\"PATH=/opt/openjdk/bin\"],\"Entrypoint\":[\"/opt/openjdk/bin/java\"],\"Cmd\":[\"-version\"],\"Labels\":{\"org.opencontainers.image.authors\":\"SDA SE Engineers \\u003ccloud@sda-se.com\\u003e\",\"org.opencontainers.image.description\":\"OpenJDK runtime version 8.202.0 with HotSpot powered by AdoptOpenJDK\",\"org.opencontainers.image.licenses\":\"AGPL-3.0\",\"org.opencontainers.image.revision\":\"93b07519d06996e9e10fee2635cefdcaa02bb24c\",\"org.opencontainers.image.source\":\"https://github.com/SDA-SE/openjdk-runtime\",\"org.opencontainers.image.title\":\"OpenJDK runtime\",\"org.opencontainers.image.url\":\"https://quay.io/sdase/openjdk-runtime\",\"org.opencontainers.image.vendor\":\"SDA SE Open Industry Solutions\",\"org.opencontainers.image.version\":\"8.202.0-hotspot\"}},\"created\":\"2019-04-17T14:02:34.79404123Z\",\"id\":\"7b405ae1ef71e07bc7bb5e0d7901741cc2db881a307d06a41e5908f42fad8dfa\",\"os\":\"linux\",\"parent\":\"81274ba4cf39e9bc6536dfd8ba8e1b9278b895dd9fd8ac286276edaaa689f6c6\"}"
      },
      {  
         "v1Compatibility": "{\"id\":\"81274ba4cf39e9bc6536dfd8ba8e1b9278b895dd9fd8ac286276edaaa689f6c6\",\"created\":\"2019-04-17T14:00:49.699769867Z\",\"container_config\":{\"Cmd\":[\"\"]}}"
      }
   ]
}
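
For illustration only (this is not the actual fix in #1644), here is a minimal standalone sketch of pulling the container configuration out of the top-most history.v1Compatibility entry with Jackson; the class and method names are invented for this example:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.ArrayList;
import java.util.List;

public class V21ConfigSketch {

  /**
   * Returns the Env entries embedded in the first (top-most) history entry of a
   * V2.1 manifest, or an empty list if there is no such entry.
   */
  static List<String> extractEnv(String v21ManifestJson) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode manifest = mapper.readTree(v21ManifestJson);

    JsonNode history = manifest.path("history");
    if (!history.isArray() || history.size() == 0) {
      return new ArrayList<>();
    }

    // The first history entry corresponds to the latest layer and, at least in the
    // Quay-generated manifest above, carries the container configuration.
    String v1Compatibility = history.get(0).path("v1Compatibility").asText();
    JsonNode config = mapper.readTree(v1Compatibility).path("config");

    List<String> env = new ArrayList<>();
    for (JsonNode entry : config.path("Env")) {
      env.add(entry.asText());
    }
    return env;
  }
}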

This is effectively #1634, but I'll leave #1634 open, as setting appropriate history may require some thought and more work.

@hendrikhalkow

I can confirm that the issue only appears with Quay.io, not with Docker.io. As a workaround, I copy the base image into the local Docker daemon via skopeo before running Jib. However, this requires Docker to be running (I haven't found out how to skopeo-copy the base image into the jib.baseImageCache directory).

skopeo copy \
  docker://quay.io/sdase/openjdk-runtime:8-hotspot \
  docker-daemon:sdase/openjdk-runtime:8-hotspot

gradle jibDockerBuild

skopeo copy \
  docker-daemon:quay.io/${TARGET_IMAGE} \
  docker://quay.io/${TARGET_IMAGE}  

@chanseokoh
Member Author

skopeo copy \
  docker://quay.io/sdase/openjdk-runtime:8-hotspot \
  docker-daemon:sdase/openjdk-runtime:8-hotspot

I doubt the above will do anything different, because Jib doesn't make use of the Docker daemon's cache at all. Jib will pull sdase/openjdk-runtime:8-hotspot from Docker Hub even if your local Docker daemon has it.

I haven't found out how to skopeo-copy the base image into jib.baseImageCache directory

This won't work anyway, because Jib caches neither an image manifest nor a container configuration.

@hendrikhalkow

I doubt the above will do anything different, because Jib doesn't make use of the Docker daemon's cache at all.

Not even when you do gradle jibDockerBuild?

@chanseokoh
Member Author

No. Jib never looks into the Docker daemon cache. The idea has been around for a while though: #1468, #718 (comment)

@chanseokoh chanseokoh changed the title Container config not propagated Container config not propagated when registry uses old V2.1 manifest Apr 17, 2019
@hendrikhalkow

As of now Quay.io is testing V2.2 support. Since yesterday, it is enabled for quay.io/sdase. However, the issue still persists. Proof that V2.2 is enabled:

manifest_digest="$( curl -Ls https://quay.io/api/v1/repository/sdase/openjdk-runtime \
  | jq -r '.tags["8-hotspot"].manifest_digest' )"

curl -Ls "https://quay.io/api/v1/repository/sdase/openjdk-runtime/manifest/${manifest_digest}" \
  | jq -r '.manifest_data' | jq

outputs:

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 1254,
    "digest": "sha256:ff724a3013d24eb430142f71430f1a5cb126c2aa2b636878794e11cdc9056d47"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 57420166,
      "digest": "sha256:33ed000c44f0f27644a6e2a173a3aefb1a2e0f31a5a3661cc8a0c24b25b16f54"
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 41212859,
      "digest": "sha256:ea174d0fc4671ffc534b62def5cbee6fc0d09bfb325287b6772315757dc8a167"
    }
  ]
}
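
To contrast with V2.1: under a V2.2 manifest the container configuration is not embedded in history entries but is a separate blob referenced by config.digest, which can be fetched from the registry's standard /v2/<name>/blobs/<digest> endpoint. The snippet below is only an illustration of that difference; the Bearer-token handshake a registry may require and all error handling are omitted, and the class name is invented.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchV22Config {

  public static void main(String[] args) throws Exception {
    String registry = "https://quay.io";
    String repository = "sdase/openjdk-runtime";
    // config.digest from the V2.2 manifest shown above
    String configDigest =
        "sha256:ff724a3013d24eb430142f71430f1a5cb126c2aa2b636878794e11cdc9056d47";

    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request =
        HttpRequest.newBuilder(
                URI.create(registry + "/v2/" + repository + "/blobs/" + configDigest))
            // .header("Authorization", "Bearer " + token)  // token flow omitted here
            .build();

    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
    // The body is the container configuration JSON (Env, Entrypoint, Labels, ...).
    System.out.println(response.body());
  }
}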

chanseokoh added a commit that referenced this issue Apr 23, 2019
@chanseokoh
Member Author

chanseokoh commented Apr 23, 2019

@hendrikhalkow I see that the container configuration is propagated when using your base image. Maybe a stale image is hanging around in your environment?

      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>1.1.2</version>
        <configuration>
          <from>
            <image>quay.io/sdase/openjdk-runtime:8-hotspot</image>
          </from>
          <container>
            <entrypoint>INHERIT</entrypoint>
          </container>
        </configuration>
      </plugin>
$ mvn clean compile jib:dockerBuild
$ docker inspect helloworld:1
            "Env": [
                "PATH=/opt/openjdk/bin",
                "JAVA_HOME=/opt/openjdk"
            ],
            "Cmd": [
                "-version"
            ],
            "Image": "",
            "Volumes": {},
            "WorkingDir": "",
            "Entrypoint": [
                "/opt/openjdk/bin/java"
            ],
            "OnBuild": null,
            "Labels": {}

@hendrikhalkow

You are right – it was just a caching issue. Enabling V2.2 solved the issue.
