docker push image to harbor fails with unauthorized: unauthorized to access repository #20865

Open
microyahoo opened this issue Aug 21, 2024 · 4 comments

@microyahoo
Contributor

microyahoo commented Aug 21, 2024

Hi, we deploy Harbor with Helm. When we try to push an image from CI to Harbor, we get the following error output. The issue cannot be reproduced consistently; it occurs intermittently over time.

$ docker login -u "${SIMULATION_REGISTRY_USER}" -p "${SIMULATION_REGISTRY_PASSWORD}" reg.deeproute.ai;
00:00
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

++ docker push reg.deeproute.ai/deeproute-simulation/simulation-release-image-snapshot:1.19.0-1385250
The push refers to repository [reg.deeproute.ai/deeproute-simulation/simulation-release-image-snapshot]
d73d0cb11baf: Preparing
028cc842dca9: Preparing
86ab8490d694: Preparing
557c1326e3fe: Preparing
96c906d59015: Preparing
ef8698acd427: Preparing
a226656deeca: Preparing
a5434c69463c: Preparing
ef8698acd427: Waiting
51337ccf7aea: Preparing
a5434c69463c: Waiting
a226656deeca: Waiting
9c04cff1ea42: Preparing
88f2e499f0c8: Preparing
8a98eb98e93d: Preparing
9c04cff1ea42: Waiting
effef6d2500b: Preparing
51337ccf7aea: Waiting
8a98eb98e93d: Waiting
cc012dea378e: Preparing
effef6d2500b: Waiting
5e2f58a28a21: Preparing
cc012dea378e: Waiting
f609131a4bfa: Preparing
88f2e499f0c8: Waiting
e7f500b1d55d: Preparing
5e2f58a28a21: Waiting
5ec6e3b96045: Preparing
6e55d8b8377a: Preparing
f609131a4bfa: Waiting
1cfe21ac83e8: Preparing
952c5f5c45dc: Preparing
5ec6e3b96045: Waiting
5f70bf18a086: Preparing
1cfe21ac83e8: Waiting
952c5f5c45dc: Waiting
397460360ef8: Preparing
54a486f85c19: Preparing
5f70bf18a086: Waiting
10fdc4e74558: Preparing
54a486f85c19: Waiting
6e55d8b8377a: Waiting
e5134e1691fb: Preparing
e7f500b1d55d: Waiting
397460360ef8: Waiting
7b547be20767: Preparing
10fdc4e74558: Waiting
e5134e1691fb: Waiting
94e4c1b7c395: Preparing
7b547be20767: Waiting
9cbd6771086f: Preparing
94e4c1b7c395: Waiting
fe4547415f0a: Preparing
327a0a0a8bce: Preparing
e68fc1e8f1c6: Preparing
7368782be4c2: Preparing
31814479dad0: Preparing
548a79621a42: Preparing
9cbd6771086f: Waiting
7368782be4c2: Waiting
fe4547415f0a: Waiting
327a0a0a8bce: Waiting
e68fc1e8f1c6: Waiting
31814479dad0: Waiting
548a79621a42: Waiting
unauthorized: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push
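
To narrow down whether the failure comes from Harbor's token service (core) or from the registry, the token request that docker performs under the hood can be replayed by hand while the push is failing. A minimal sketch, assuming Harbor's default token-service name harbor-registry (the exact realm and service are advertised in the Www-Authenticate header of GET /v2/):

# Inspect the auth challenge the registry advertises (realm + service).
curl -sI https://reg.deeproute.ai/v2/ | grep -i www-authenticate

# Request a pull/push token for the failing repository directly from the token service.
# 'harbor-registry' is Harbor's default token-service name; adjust it if the
# challenge above reports something different.
curl -s -u "${SIMULATION_REGISTRY_USER}:${SIMULATION_REGISTRY_PASSWORD}" \
  "https://reg.deeproute.ai/service/token?service=harbor-registry&scope=repository:deeproute-simulation/simulation-release-image-snapshot:pull,push"

If this call itself intermittently fails, or returns a token that the registry then rejects, the problem is on the Harbor side rather than in the Docker client, and repeating it against each core pod individually (for example via kubectl port-forward) can show whether only some replicas misbehave.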

harbor version: v2.10.0

harbor deployments

root@master1:/home/devops# kubectl get pods -n harbor-server -o wide
NAME                                 READY   STATUS    RESTARTS   AGE    IP               NODE                                    NOMINATED NODE   READINESS GATES
harbor-core-c599cc8c7-8szgv          1/1     Running   0          32d    10.233.120.175   node08.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-core-c599cc8c7-n9g6k          1/1     Running   0          128d   10.233.94.24     node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-core-c599cc8c7-r6zls          1/1     Running   0          128d   10.233.102.92    node03.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-exporter-75dc6cfbbc-wtvpd     1/1     Running   0          128d   10.233.94.176    node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-jobservice-7469c85bf8-5kjjb   1/1     Running   0          32d    10.233.120.209   node08.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-jobservice-7469c85bf8-phr7t   1/1     Running   0          128d   10.233.102.112   node03.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-jobservice-7469c85bf8-s6fkt   1/1     Running   0          128d   10.233.94.11     node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-portal-8dd69fcb5-7qjct        1/1     Running   0          128d   10.233.94.157    node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-registry-89bb77dbd-27n9k      2/2     Running   0          128d   10.233.94.112    node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-registry-89bb77dbd-f67b8      2/2     Running   0          32d    10.233.125.189   node05.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-registry-89bb77dbd-pskm2      2/2     Running   0          128d   10.233.107.232   node06.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>
harbor-trivy-0                       1/1     Running   0          128d   10.233.94.215    node04.prod-k8s.bm.pd.sz.deeproute.ai   <none>           <none>

I tried to find some clues in the Docker client logs, shown below:

Aug 21 02:36:48 10-3-8-63.maas dockerd[1740]: time="2024-08-21T02:36:48.181921955Z" level=error msg="Upload failed: unauthorized: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push"
Aug 21 02:36:48 10-3-8-63.maas dockerd[1740]: time="2024-08-21T02:36:48.182336799Z" level=info msg="Attempting next endpoint for push after error: unauthorized: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push: unauthorized to access repository: deeproute-simulation/simulation-release-image-snapshot, action: push"
root@10-3-8-63:/etc# docker version
Client: Docker Engine - Community
 Version:           25.0.5
 API version:       1.44
 Go version:        go1.21.8
 Git commit:        5dc9bcc
 Built:             Tue Mar 19 15:05:20 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          25.0.5
  API version:      1.44 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       e63daec
  Built:            Tue Mar 19 15:05:20 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.6.34
  GitCommit:        e9e2c7707933f32aa891dda794a1df36a6ec7aee
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa9203-dirty
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

harbor logs (attached screenshots): 20240821-154926, 20240821-154917

The issue goharbor/harbor-helm#1205 mentioned that the clocks of the nodes running core/registry must be in sync. I have checked NTP and run date on all nodes, but had no luck (output below; a pod-level check is sketched after it).

deeproute@10-9-8-162:~$ ansible -i /home/deeproute/zhengliang/ops-playbooks/projects/prod-k8s-cluster/inventory.ini kube-node -m shell -a 'date'
node04.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node05.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node02.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node01.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node03.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
10-10-12-93.maas | CHANGED | rc=0 >>
Wed 21 Aug 2024 07:52:23 AM UTC
10-10-12-94.maas | CHANGED | rc=0 >>
Wed 21 Aug 2024 07:52:23 AM UTC
node06.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node07.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024
node08.prod-k8s.bm.pd.sz.deeproute.ai | CHANGED | rc=0 >>
Wed Aug 21 07:52:23 UTC 2024

harbor portal config file

apiVersion: v1
data:
  nginx.conf: |-
    worker_processes 2;
    pid /tmp/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        client_body_temp_path /tmp/client_body_temp;
        proxy_temp_path /tmp/proxy_temp;
        fastcgi_temp_path /tmp/fastcgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;
        scgi_temp_path /tmp/scgi_temp;
        server {
            listen 8080;
            listen [::]:8080;
            server_name  localhost;
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            include /etc/nginx/mime.types;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            location /devcenter-api-2.0 {
                try_files $uri $uri/ /swagger-ui-index.html;
            }
            location / {
                try_files $uri $uri/ /index.html;
            }
            location = /index.html {
                add_header Cache-Control "no-store, no-cache, must-revalidate";
            }
        }
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: harbor-server
    meta.helm.sh/release-namespace: harbor-server
  creationTimestamp: "2024-02-29T02:24:10Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: portal-conf
  namespace: harbor-server
  resourceVersion: "1091105143"
  selfLink: /api/v1/namespaces/harbor-server/configmaps/portal-conf
  uid: 223882e1-662f-41bc-bca5-cd131d06f566
@ainy0293

ainy0293 commented Aug 25, 2024

Hi, I have the same problem.

Pushing an image returns unauthorized; after multiple attempts, it eventually succeeds.

My Harbor version is v2.11.1.

@MinerYang
Contributor

MinerYang commented Aug 26, 2024

Please make sure you use the same token-service private key in all the core pods, either the default one at /etc/core/private_key.pem or the one referenced by the env TOKEN_PRIVATE_KEY_PATH.
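
A quick way to verify this is to hash the key file in every core pod and confirm the digests match. A minimal sketch that hashes locally, so only cat is needed inside the containers (the path is the default /etc/core/private_key.pem; adjust it if TOKEN_PRIVATE_KEY_PATH points elsewhere):

# Every core pod must report the same digest for the token-signing key.
for p in $(kubectl -n harbor-server get pods -l app=harbor-core -o name); do
  echo -n "$p  "
  kubectl -n harbor-server exec "$p" -- cat /etc/core/private_key.pem | sha256sum
done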

@microyahoo
Contributor Author

microyahoo commented Aug 26, 2024

Hi @MinerYang, yes, we set the same private key, as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "18"
    meta.helm.sh/release-name: harbor-server
    meta.helm.sh/release-namespace: harbor-server
    reloader.stakater.com/auto: "true"
  creationTimestamp: "2024-02-29T02:24:10Z"
  generation: 20
  labels:
    app: harbor-core
    app.kubernetes.io/managed-by: Helm
    project: harbor
  name: harbor-core
  namespace: harbor-server
  resourceVersion: "1264401268"
  selfLink: /apis/apps/v1/namespaces/harbor-server/deployments/harbor-core
  uid: 98269ccd-74db-46f7-80f9-5e3d7914b2c1
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: harbor-core
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        dep.configmap.hash/app-conf: xxx
      creationTimestamp: null
      labels:
        app: harbor-core
        project: harbor
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - harbor-core
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - env:
        - name: DR_META_K8S_CLUSTER_ENV
          value: production
        - name: DR_META_K8S_CLUSTER_NAME
          value: prod-k8s-cluster
        - name: METRIC_SUBSYSTEM
          value: core
        - name: PORT
          value: "8080"
        - name: STAKATER_CORE_ENV_SECRET
          value: cb3de260f885a88fb66ce9b748b4afdb3b3a6d03
        envFrom:
        - secretRef:
            name: harbor-env
        - secretRef:
            name: core-env
        image: reg.deeproute.ai/deeproute-public/harbor-core:v2.10.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /api/v2.0/ping
            port: core
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 120
          successThreshold: 1
          timeoutSeconds: 10
        name: core
        ports:
        - containerPort: 8080
          name: core
          protocol: TCP
        - containerPort: 8001
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 2
          httpGet:
            path: /api/v2.0/ping
            port: core
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          limits:
            cpu: "8"
            memory: 16Gi
          requests:
            cpu: "8"
            memory: 16Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/core//app.conf
          name: app-conf
          subPath: app.conf
        - mountPath: /etc/core//private_key.pem
          name: private-key
          subPath: private_key.pem
        - mountPath: /etc/core/token
          name: psc
        - mountPath: /etc/core//key
          name: secret-key
          subPath: key
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 10000
        runAsUser: 10000
      terminationGracePeriodSeconds: 120
      volumes:
      - emptyDir: {}
        name: psc
      - configMap:
          defaultMode: 420
          name: app-conf
        name: app-conf
      - name: private-key
        secret:
          defaultMode: 420
          secretName: private-key
      - name: secret-key
        secret:
          defaultMode: 420
          secretName: secret-key
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-02-29T02:24:10Z"
    lastUpdateTime: "2024-04-15T02:45:13Z"
    message: ReplicaSet "harbor-core-c599cc8c7" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-07-19T12:33:37Z"
    lastUpdateTime: "2024-07-19T12:33:37Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 20
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
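
Besides all core pods mounting the same secret, the signing key also has to correspond to the certificate the registry uses to validate tokens; with three registry replicas, a stale or mismatched certificate on just one of them would produce exactly this kind of intermittent unauthorized error. A minimal sketch comparing the RSA modulus of both, assuming the harbor-helm defaults (/etc/core/private_key.pem in core, /etc/registry/root.crt in the registry container named registry, pods labelled app=harbor-registry) and openssl available on the machine running kubectl:

# Modulus of the key core signs tokens with.
kubectl -n harbor-server exec deploy/harbor-core -- cat /etc/core/private_key.pem \
  | openssl rsa -noout -modulus | sha256sum

# Modulus of the certificate each registry pod validates tokens against.
for p in $(kubectl -n harbor-server get pods -l app=harbor-registry -o name); do
  echo -n "$p  "
  kubectl -n harbor-server exec "$p" -c registry -- cat /etc/registry/root.crt \
    | openssl x509 -noout -modulus | sha256sum
done

All of these digests should be identical; any pod reporting a different value is the one rejecting the tokens.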

@microyahoo
Contributor Author

similar issues:
goharbor/harbor-helm#174
#12135
