jibDockerBuild fails to pull source image from gcr #991

Closed
LenGillespie opened this issue Sep 17, 2018 · 21 comments

@LenGillespie

Description of the issue: Unable to build a Docker image from a custom base image hosted in gcr with gradle.plugin.com.google.cloud.tools:jib-gradle-plugin:0.9.10

I've got Docker on the path: PATH=/Applications/Docker.app/Contents/Resources/bin:...

I already have the image locally after executing:

  • gcloud auth configure-docker
  • gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
  • docker pull gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8

Environment:

  • Mac El Capitan 10.11.6
  • Docker version 18.06.1-ce, build e68fc7a

jib-gradle-plugin Configuration:
Both my account and the service IAM account have the "Storage Object Viewer" role. I tried two authentication approaches:

jib {
    from {
        image = "gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8"
        auth {
            username = '_json_key'
            password = file("${project.rootDir}/ir-devops-playground-33d34617c277.json").text
        }
    }
}

&

jib {
    from {
        image = "gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8"
        auth {
            username = 'oauth2accesstoken'
            password = 'gcloud auth print-access-token'.execute().text.trim()
        }
    }
}

Log output:

./gradlew :impactradius-account:jibDockerBuild --stacktrace

Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':impactradius-account:jibDockerBuild'.

Caused by: org.gradle.api.GradleException: Build to Docker daemon failed
        at com.google.cloud.tools.jib.gradle.BuildDockerTask.buildDocker(BuildDockerTask.java:118)
       ....
Caused by: java.io.IOException: 'docker load' command failed with output: 
        at com.google.cloud.tools.jib.docker.DockerClient.load(DockerClient.java:110)
        at com.google.cloud.tools.jib.builder.steps.LoadDockerStep.afterPushBaseImageLayerFuturesFuture(LoadDockerStep.java:92)
        at com.google.common.util.concurrent.CombinedFuture$CallableInterruptibleTask.runInterruptibly(CombinedFuture.java:181)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)

Additional Information:
Originally I tried pulling the image from our insecure Nexus Docker repo with allowInsecureRegistries=true, but when that didn't work I pushed the image up to gcr.

@chanseokoh
Member

I think pulling the image is working and the image is successfully built. Note that Jib does not load the base image into your local Docker daemon to build an image.

The error message is saying that Jib failed to run "docker load", which loads the built image into your local Docker daemon. The IOException is supposed to include the error output of "docker load", but I see that it's empty, and I wonder why. What is clear is that running the "docker load" command from Jib failed.
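
For reference, a minimal, hypothetical sketch (not Jib's actual code) of the flow described above: Jib builds the image tarball itself and then shells out to docker load to hand the result to the local daemon. Class and path names below are placeholders.

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Illustration only: "build a tarball, then docker load it" as described above. */
public class DockerLoadSketch {

  public static void main(String[] args) throws IOException, InterruptedException {
    // Placeholder path to the tarball Jib produced.
    Path imageTar = Paths.get("build/jib-image.tar");

    // Hand the built image to the local Docker daemon, as Jib does after building.
    Process process =
        new ProcessBuilder("docker", "load", "--input", imageTar.toString())
            .inheritIO() // show docker's own progress and errors on this console
            .start();

    int exitCode = process.waitFor();
    if (exitCode != 0) {
      // This is roughly where the empty error message in the report comes from.
      throw new IOException("'docker load' command failed with exit code " + exitCode);
    }
  }
}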

@coollog
Contributor

coollog commented Sep 17, 2018

Hi @LenGillespie, can you try the instructions at https://github.com/GoogleContainerTools/jib/tree/master/jib-gradle-plugin#build-an-image-tarball and see if you can manually load the image into Docker?

@LenGillespie
Author

./gradlew :impactradius-account:jibBuildTar
Starting a Gradle Daemon, 1 stopped Daemon could not be reused, use --status for details
Parallel execution with configuration on demand is an incubating feature.

Containerizing application to file at '.../impactradius-account/build/jib-image.tar'...

Getting base image gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8...
Building dependencies layer...
Building resources layer...
Building classes layer...
The base image requires auth. Trying again for gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8...
Retrieving registry credentials for gcr.io...
Finalizing...
Building image to tar file...

Container entrypoint set to [java, -Djava.awt.headless=true, -Djava.util.logging.manager=com.caucho.log.LogManagerImpl, -Destalea.queue.directory=build/queues, -Duser.timezone=UTC, -Xmx512m, -Xss1m, -cp, /app/resources/:/app/classes/:/app/libs/*, com.caucho.server.resin.Resin]

Built image tarball at .../impactradius-account/build/jib-image.tar

docker load --input impactradius-account/build/jib-image.tar
316f8828bd6b: Loading layer [==================================================>]  3.796MB/3.796MB
f8adf56d4a49: Loading layer [==================================================>]  183.6MB/183.6MB
7489554cc1d6: Loading layer [==================================================>]  823.6kB/823.6kB
fc9892481b49: Loading layer [==================================================>]  7.131MB/7.131MB
too many non-empty layers in History section

@chanseokoh
Member

I think the number of history elements does not match the number of diff_ids in config.json: https://github.com/docker/distribution/blob/master/manifest/schema1/config_builder.go#L129

@LenGillespie can you upload the content of config.json in build/jib-image.tar? For example, tar -axf jib-image.tar config.json -O
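
To make the mismatch concrete, here is a minimal, hypothetical sketch of the rule being checked (mirroring the idea in the linked config_builder.go, not Docker's actual Go code): every history entry without "empty_layer": true must correspond to one diff_id.

import java.util.List;

/** Illustration only; the names here are made up for this sketch. */
public class HistoryLayerCheck {

  record HistoryEntry(String createdBy, boolean emptyLayer) {}

  static void validate(List<HistoryEntry> history, List<String> diffIds) {
    long nonEmptyHistory = history.stream().filter(h -> !h.emptyLayer()).count();
    if (nonEmptyHistory > diffIds.size()) {
      // Corresponds to the "too many non-empty layers in History section" error above.
      throw new IllegalStateException(
          nonEmptyHistory + " non-empty history entries vs " + diffIds.size() + " diff_ids");
    }
  }

  public static void main(String[] args) {
    try {
      // Two non-empty history entries but only one diff_id, like the reported image.
      validate(
          List.of(
              new HistoryEntry("/bin/sh -c mkdir -p /opt", false),
              new HistoryEntry("/bin/sh -c #(nop) CMD [\"/bin/sh\"]", true),
              new HistoryEntry("jib-gradle-plugin", false)),
          List.of("sha256:73046094a9b8..."));
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage()); // "2 non-empty history entries vs 1 diff_ids"
    }
  }
}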

@LenGillespie
Author

Converted to txt since GitHub didn't accept json:
config.txt

@coollog coollog added this to the v0.9.11 milestone Sep 18, 2018
@chanseokoh
Member

Yeah, the number of history entries is larger than the number of layer entries. We need to investigate what is going on.

And we should also fix the issue where the Docker error output is not being captured.
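
For the error-capture part, a minimal sketch (not Jib's actual implementation) of reading the subprocess's stderr so a failing docker load surfaces a useful message instead of an empty one; the class and method names are made up:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

/** Illustration only: include docker load's stderr in the thrown exception. */
public class CaptureDockerError {

  static void dockerLoad(String tarPath) throws IOException, InterruptedException {
    Process process =
        new ProcessBuilder("docker", "load", "--input", tarPath)
            .redirectOutput(ProcessBuilder.Redirect.INHERIT) // let stdout flow so it cannot block
            .start();

    // Read stderr fully before waiting so the error text is available when the process exits.
    String errorOutput;
    try (InputStream stderr = process.getErrorStream()) {
      errorOutput = new String(stderr.readAllBytes(), StandardCharsets.UTF_8);
    }

    if (process.waitFor() != 0) {
      throw new IOException("'docker load' command failed with output: " + errorOutput);
    }
  }
}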

@chanseokoh
Member

Oh, I should have excluded the empty-layer history entries when counting.

@chanseokoh
Member

There are 17 history entries. Among them, 5 are empty layers ("empty_layer": true). So, there are 12 history entries for non-empty layers. Then, there are 11 layers (11 diff_ids). So the numbers don't match.

"history":[
   1  {"created":"2018-07-06T14:14:06.165546783Z","created_by":"/bin/sh -c #(nop) ADD file:25f61d70254b9807a40cd3e8d820f6a5ec0e1e596de04e325f6a33810393e95a in / "},
   2  {"created":"2018-07-06T14:14:06.393355914Z","created_by":"/bin/sh -c #(nop)  CMD [\"/bin/sh\"]","empty_layer":true},
   3  {"created":"2018-08-13T08:37:54.8105367Z","created_by":"/bin/sh -c #(nop)  USER root","empty_layer":true},
   4  {"created":"2018-08-13T08:37:56.080005Z","created_by":"/bin/sh -c mkdir -p /deployments /opt"},
   5  {"created":"2018-08-13T08:37:56.3973599Z","created_by":"/bin/sh -c #(nop)  ENV JAVA_APP_DIR=/deployments JAVA_VERSION_MAJOR=8 JAVA_VERSION_MINOR=181 JAVA_VERSION_BUILD=13 JAVA_PACKAGE=jdk JAVA_JCE=standard JAVA_HOME=/opt/jdk JAVA_HASH=96a7b8442fe848ef90c96a2fad6ed6d1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/jdk/bin GLIBC_VERSION=2.23-r3 LANG=C.UTF-8 TZ=:/etc/localtime","empty_layer":true},
   6  {"created":"2018-08-13T08:43:45.2872392Z","created_by":"/bin/sh -c set -ex  && apk upgrade --update  && apk add --update libstdc++ curl ca-certificates bash fuse unionfs-fuse  && for pkg in glibc-${GLIBC_VERSION} glibc-bin-${GLIBC_VERSION} glibc-i18n-${GLIBC_VERSION}; do curl -sSL https://github.com/andyshinn/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/${pkg}.apk -o /tmp/${pkg}.apk; done  && apk add --allow-untrusted /tmp/*.apk  && rm -v /tmp/*.apk  && ( /usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 C.UTF-8 || true )  && echo \"export LANG=C.UTF-8\" > /etc/profile.d/locale.sh  && /usr/glibc-compat/sbin/ldconfig /lib /usr/glibc-compat/lib  && curl -jksSL -b \"oraclelicense=a\" -o /tmp/java.tar.gz       http://download.oracle.com/otn-pub/java/jdk/8u${JAVA_VERSION_MINOR}-b${JAVA_VERSION_BUILD}/${JAVA_HASH}/jdk-8u${JAVA_VERSION_MINOR}-linux-x64.tar.gz  && gunzip /tmp/java.tar.gz  && tar -C /opt -xf /tmp/java.tar  && ln -s /opt/jdk1.${JAVA_VERSION_MAJOR}.0_${JAVA_VERSION_MINOR} /opt/jdk  && if [ \"${JAVA_JCE}\" == \"unlimited\" ]; then echo \"Installing Unlimited JCE policy\" >&2  &&     curl -jksSLH \"Cookie: oraclelicense=accept-securebackup-cookie\" -o /tmp/jce_policy-${JAVA_VERSION_MAJOR}.zip             http://download.oracle.com/otn-pub/java/jce/${JAVA_VERSION_MAJOR}/jce_policy-${JAVA_VERSION_MAJOR}.zip      && cd /tmp && unzip /tmp/jce_policy-${JAVA_VERSION_MAJOR}.zip      && cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /opt/jdk/jre/lib/security;     fi  && sed -i s/#networkaddress.cache.ttl=-1/networkaddress.cache.ttl=10/ $JAVA_HOME/jre/lib/security/java.security  && apk del glibc-i18n  && rm -rf /opt/jdk/*src.zip            /opt/jdk/lib/missioncontrol            /opt/jdk/lib/visualvm            /opt/jdk/lib/*javafx*            /opt/jdk/jre/plugin            /opt/jdk/jre/bin/javaws            /opt/jdk/jre/bin/jjs            /opt/jdk/jre/bin/orbd            /opt/jdk/jre/bin/pack200            /opt/jdk/jre/bin/policytool            /opt/jdk/jre/bin/rmid            /opt/jdk/jre/bin/rmiregistry            /opt/jdk/jre/bin/servertool            /opt/jdk/jre/bin/tnameserv            /opt/jdk/jre/bin/unpack200            /opt/jdk/jre/lib/javaws.jar            /opt/jdk/jre/lib/deploy*            /opt/jdk/jre/lib/desktop            /opt/jdk/jre/lib/*javafx*            /opt/jdk/jre/lib/*jfx*            /opt/jdk/jre/lib/amd64/libdecora_sse.so            /opt/jdk/jre/lib/amd64/libprism_*.so            /opt/jdk/jre/lib/amd64/libfxplugins.so            /opt/jdk/jre/lib/amd64/libglass.so            /opt/jdk/jre/lib/amd64/libgstreamer-lite.so            /opt/jdk/jre/lib/amd64/libjavafx*.so            /opt/jdk/jre/lib/amd64/libjfx*.so            /opt/jdk/jre/lib/ext/jfxrt.jar            /opt/jdk/jre/lib/ext/nashorn.jar            /opt/jdk/jre/lib/oblique-fonts            /opt/jdk/jre/lib/plugin.jar            /tmp/* /var/cache/apk/*  && echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf  && echo \"securerandom.source=file:/dev/urandom\" >> $JAVA_HOME/jre/lib/security/java.security"},
   7  {"created":"2018-08-13T08:53:18.0164483Z","created_by":"/bin/sh -c #(nop) ADD file:b05048ad63e5a7aee97caeecd283bc4323541dd3023a2b14459af2989cf8ea1e in /opt/run-java-options "},
   8  {"created":"2018-08-13T08:53:29.9849311Z","created_by":"/bin/sh -c mkdir -p /opt/agent-bond  && curl http://central.maven.org/maven2/io/fabric8/agent-bond-agent/1.0.2/agent-bond-agent-1.0.2.jar           -o /opt/agent-bond/agent-bond.jar  && chmod 444 /opt/agent-bond/agent-bond.jar  && chmod 755 /opt/run-java-options  && apk del curl"},
   9  {"created":"2018-08-13T08:53:30.3137568Z","created_by":"/bin/sh -c #(nop) ADD file:7e9f850e5d485a729e4c2994c5a44588a4131707220fa86a7ef43bff6dad7c6e in /opt/agent-bond/ "},
  10  {"created":"2018-08-13T08:53:30.7001534Z","created_by":"/bin/sh -c #(nop)  EXPOSE 8080 8778 9779","empty_layer":true},
  11  {"created":"2018-08-13T08:53:31.4206354Z","created_by":"/bin/sh -c #(nop) COPY multi:961a0c1e47ec8a2483a367d757720b53036a2175075682d8da9fc522e8de6428 in /deployments/ "},
  12  {"created":"2018-08-13T08:53:32.6617529Z","created_by":"/bin/sh -c chmod 755 /deployments/run-java.sh /deployments/java-default-options /deployments/container-limits /deployments/debug-options"},
  13  {"created":"2018-08-13T08:53:33.1435034Z","created_by":"/bin/sh -c #(nop) COPY file:906399e2c4c0b4ce988751a43554be48530ac76b4f654288b6a3e4fbbdce4d0b in /usr/local/bin "},
  14  {"created":"2018-08-13T08:53:33.8613112Z","created_by":"/bin/sh -c #(nop)  CMD [\"/deployments/run-java.sh\"]","empty_layer":true},
  15  {"created":"1970-01-01T00:00:00Z","author":"Jib","created_by":"jib-gradle-plugin"},
  16  {"created":"1970-01-01T00:00:00Z","author":"Jib","created_by":"jib-gradle-plugin"},
  17  {"created":"1970-01-01T00:00:00Z","author":"Jib","created_by":"jib-gradle-plugin"}
],
"rootfs":{   
  "type":"layers",
  "diff_ids":[
     1 "sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c",
     2 "sha256:0ab31c64fe45c0aa59282b5174e45195d72710b5f8bc8044d550c8ccdbc35f94",
     3 "sha256:265133943c24dea6ca818b4593f86e7de54c48b658f2d3cbe2d1d85e49b1a4b7",
     4 "sha256:345cc7f7b9d710ed5e29a0f6a87ed144ecc618424775c91753e76f8315d23f6b",
     5 "sha256:58868b82fad6130a076baa3d789077dd8a1ccfe09c4ab59f93b13506e6037e4d",
     6 "sha256:785a71c89fb5472c7af1febf3a7405428cb699f8a7d11397fc2f4db3d05ab12e",
     7 "sha256:091688edce8e6f8cd425b94916343785d3e98fc7bd87673f7b265b7fbaf42915",
     8 "sha256:316f8828bd6b5814f1475adb6a5abbc163e44bf98c54236db9da9a270ff05258",
     9 "sha256:f8adf56d4a49fab0148d98ead4757ac74d8838589bb11f06a55b7d1a9b40ceae",
    10 "sha256:7489554cc1d6ed5fa536acc37fb6751bb6df806cf5835f56b88ddffcf862438f",
    11 "sha256:fc9892481b49871c2b1ce6a5d33044461f9c3c458fcfb0321cba95a3c72302fb"
  ]}}

@briandealwis
Member

@LenGillespie can you please also include the output of the following:

docker inspect gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8
docker history gcr.io/ir-devops-playground/oracle-jdk1.8:alpine3.8

(you might have to docker pull it first)

@briandealwis
Member

@LenGillespie's docker load shows it loading 4 layers, but the Jib history and build output show only three layers being built. Where did layer 316f8828bd6b5814f1475adb6a5abbc163e44bf98c54236db9da9a270ff05258 come from?

@LenGillespie
Author

history.txt
inspect.txt

@chanseokoh
Member

chanseokoh commented Sep 20, 2018

316f8828bd6b is the last layer of the base image, so that's OK. I think I found why this is happening.

From inspect.txt, there are 9 layers in the base image. The base image has 14 history entries, among which 5 are empty, so the numbers match (9 = 14 - 5). All the information is consistent with config.txt from the Jib-built image, so there is no discrepancy so far.

        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c",
                "sha256:0ab31c64fe45c0aa59282b5174e45195d72710b5f8bc8044d550c8ccdbc35f94",
                "sha256:265133943c24dea6ca818b4593f86e7de54c48b658f2d3cbe2d1d85e49b1a4b7",
                "sha256:345cc7f7b9d710ed5e29a0f6a87ed144ecc618424775c91753e76f8315d23f6b",
                "sha256:58868b82fad6130a076baa3d789077dd8a1ccfe09c4ab59f93b13506e6037e4d",
                "sha256:785a71c89fb5472c7af1febf3a7405428cb699f8a7d11397fc2f4db3d05ab12e",
                "sha256:091688edce8e6f8cd425b94916343785d3e98fc7bd87673f7b265b7fbaf42915",
                "sha256:091688edce8e6f8cd425b94916343785d3e98fc7bd87673f7b265b7fbaf42915",
                "sha256:316f8828bd6b5814f1475adb6a5abbc163e44bf98c54236db9da9a270ff05258"
            ]
        },

The problem, I think, is that the base image has duplicate layers:

                "sha256:091688edce8e6f8cd425b94916343785d3e98fc7bd87673f7b265b7fbaf42915",
                "sha256:091688edce8e6f8cd425b94916343785d3e98fc7bd87673f7b265b7fbaf42915",

And it looks like Jib removes the duplicate, so we are left with one fewer layer. The question is: are duplicate layers allowed by the spec? That is, is it Jib or the base image that does not follow the image spec?

@chanseokoh
Member

chanseokoh commented Sep 20, 2018

It's indeed possible to have duplicate layers with the same ID.

FROM registry:2

RUN touch /file
RUN chmod 755 /file
RUN chmod 755 /file
$ docker build -t image-test .
$ docker inspect image-test
...
"RootFS": {
    "Type": "layers",
     "Layers": [
        ...
        "sha256:6c7980b6df74cb40f7d3c199bafffd847ab0d87e6105fa1ed5b68fded06289c1",
        "sha256:c97e6cee912653f76dbbc3e82d39593f0f6d00d2a102eb630dce0ca35a25fe85",
        "sha256:c97e6cee912653f76dbbc3e82d39593f0f6d00d2a102eb630dce0ca35a25fe85"
    ]
},

@briandealwis
Member

ImageLayers.Builder maintains the layers as a LinkedHashSet.
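
A tiny sketch of the consequence (digests truncated for readability, not Jib's actual code): a set keyed on the layer digest silently collapses the two identical 091688ed... entries from the base image above, leaving one fewer layer than the history expects.

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Illustration only. */
public class DuplicateLayerDemo {
  public static void main(String[] args) {
    List<String> baseImageDiffIds =
        List.of("sha256:785a71c8...", "sha256:091688ed...", "sha256:091688ed...", "sha256:316f8828...");

    Set<String> layers = new LinkedHashSet<>(baseImageDiffIds);
    System.out.println(layers.size()); // 3, not 4 -- one fewer layer than the history expects
  }
}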

@briandealwis briandealwis self-assigned this Sep 20, 2018
@briandealwis
Member

I guess the question here is: do we eliminate duplicate layers and rewrite the layer history, or do we keep both the layers and the history as they are? There are no substantial space savings from eliminating the extra layers.

@coollog
Contributor

coollog commented Sep 20, 2018

Duplicate layers can exist but they are not necessary for producing the intended container file system. Only the last of any duplicate layers needs to be there to define the same container. I think we might want to just maintain the layers as is (with duplicates) so that the intended history remains associated with them.

@chanseokoh
Member

Yeah, I think we need to retain the history of how each layer was created even if some history entries produced duplicate layers, which means we need to retain the duplicate layers too so that the numbers match.

@briandealwis just keep in mind that there was a reason that we used LinkedHashSet to remove duplicate layers: #739
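
A sketch of one possible shape of the change (illustrative only, not the actual Jib patch): keep every layer occurrence in order so the layer list lines up with the history entries, and deduplicate only where uniqueness actually matters, such as when pushing blobs.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Illustration only; class and method names are made up for this sketch. */
public class OrderedLayers {

  private final List<String> layersInOrder = new ArrayList<>(); // may contain duplicates

  void addLayer(String diffId) {
    layersInOrder.add(diffId); // never drop duplicates: history count must match layer count
  }

  List<String> forImageConfig() {
    return List.copyOf(layersInOrder); // written to rootfs.diff_ids as-is
  }

  Set<String> uniqueBlobsToPush() {
    return new LinkedHashSet<>(layersInOrder); // each distinct blob only needs to be pushed once
  }
}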

@coollog
Contributor

coollog commented Sep 20, 2018

I think once we get the new cache mechanism in, the problem in #739 should not happen anymore, since we won't have cache metadata with a list for each cache entry.

@coollog
Contributor

coollog commented Sep 27, 2018

Hi @LenGillespie , we have released version 0.9.11 with a fix for this issue.

@LenGillespie
Author

Thanks. I ended up rebuilding the image with --squash to get around the issue.

@chanseokoh
Member

@LenGillespie thanks for the update. Good to know that --squash can serve as a workaround here.
