Network using podman in podman #22791

Closed
yellowhat opened this issue May 23, 2024 · 6 comments · Fixed by containers/common#2020
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
network: Networking related issue or feature

Comments

@yellowhat

yellowhat commented May 23, 2024

Issue Description

Hi,
I would like to run kind inside a podman container.

I can reproduce the error by creating and using a network with podman from inside a podman container:

$ podman run -it --rm --privileged quay.io/podman/stable:v5.0.2
[root@86e34ba1cabb /]# podman network create a
a
[root@86e34ba1cabb /]# podman run -it --net a alpine hostname 
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob d25f557d7f31 done   | 
Copying config 1d34ffeaf1 done   | 
Writing manifest to image destination
ERRO[0002] IPAM error: failed to open database /run/containers/storage/networks/ipam.db: open /run/containers/storage/networks/ipam.db: no such file or directory 
ERRO[0002] Unmounting partially created network namespace for container 745b971a2ab377704314c7ef87125a9f531e3bd3c64b6562fb638e392fb46039: failed to remove ns path: remove /run/user/0/netns/netns-9b156638-d83d-8804-9404-f4124fd14a83: device or resource busy 
Error: netavark: IO error: failed to create aardvark-dns directory: No such file or directory (os error 2)

Is this expected?

Thanks

Steps to reproduce the issue

  1. podman run -it --rm --privileged quay.io/podman/stable:v5.0.2
  2. podman network create a
  3. podman run -it --net a alpine hostname

Describe the results you received

ERRO[0002] IPAM error: failed to open database /run/containers/storage/networks/ipam.db: open /run/containers/storage/networks/ipam.db: no such file or directory 
ERRO[0002] Unmounting partially created network namespace for container 745b971a2ab377704314c7ef87125a9f531e3bd3c64b6562fb638e392fb46039: failed to remove ns path: remove /run/user/0/netns/netns-9b156638-d83d-8804-9404-f4124fd14a83: device or resource busy 
Error: netavark: IO error: failed to create aardvark-dns directory: No such file or directory (os error 2)

Describe the results you expected

The container is created and runs.

podman info output

Client:       Podman Engine
Version:      5.0.3
API Version:  5.0.3
Go Version:   go1.22.2
Built:        Fri May 10 02:00:00 2024
OS/Arch:      linux/amd64
host:
  arch: amd64
  buildahVersion: 1.35.4
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 92.82
    systemPercent: 1.26
    userPercent: 5.92
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    version: "40"
  eventLogger: journald
  freeLocks: 2030
  hostname: sharko
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.8.10-300.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 12717789184
  memTotal: 24813404160
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-3.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.15-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.15
      commit: e6eacaf4034e84185fd8780ac9262bbf57082278
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240510.g7288448-1.fc40.x86_64
    version: |
      pasta 0^20240510.g7288448-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 5h 56m 5.00s (Approximately 0.21 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/user/.local/share/containers/storage
  graphRootAllocated: 510139912192
  graphRootUsed: 51379908608
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 67
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.3
  Built: 1715299200
  BuiltTime: Fri May 10 02:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.3


### Podman in a container

Yes

### Privileged Or Rootless

Privileged

### Upstream Latest Release

Yes

### Additional environment details


### Additional information

yellowhat added the kind/bug label May 23, 2024
@Luap99
Member

Luap99 commented May 23, 2024

Can you add --log-level debug to the podman run -it --net a ... command and provide the full output, please?

@yellowhat
Author

yellowhat commented May 23, 2024

Sure:

logs
[root@fb041043307f /]# podman run -it --net a --log-level debug alpine hostname
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run -it --net a --log-level debug alpine hostname) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
INFO[0000] Using sqlite as database backend             
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /run/containers/storage       
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/libpod                    
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: imagestore=/var/lib/shared          
DEBU[0000] overlay: imagestore=/usr/lib/containers/storage 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend file              
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 49             
DEBU[0000] Pulling image alpine (policy: missing)       
DEBU[0000] Looking up image "alpine" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf" 
DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "localhost/alpine:latest" ...         
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]localhost/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "registry.fedoraproject.org/alpine:latest" ... 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]registry.fedoraproject.org/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "registry.access.redhat.com/alpine:latest" ... 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]registry.access.redhat.com/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "quay.io/alpine:latest" ...           
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]quay.io/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "docker.io/library/alpine:latest" ... 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID 
DEBU[0000] Trying "alpine" ...                          
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Attempting to pull candidate docker.io/library/alpine:latest for alpine 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" 
DEBU[0000] Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf) 
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
DEBU[0000] Copying source image //alpine:latest to destination image [overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest 
DEBU[0000] Using registries.d directory /etc/containers/registries.d 
DEBU[0000] Trying to access "docker.io/library/alpine:latest" 
DEBU[0000] No credentials matching docker.io/library/alpine found in /run/containers/0/auth.json 
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.config/containers/auth.json 
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.docker/config.json 
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.dockercfg 
DEBU[0000] No credentials for docker.io/library/alpine found 
DEBU[0000]  No signature storage configuration found for docker.io/library/alpine:latest, using built-in default file:///var/lib/containers/sigstore 
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io 
DEBU[0000] GET https://registry-1.docker.io/v2/         
DEBU[0000] Ping https://registry-1.docker.io/v2/ status 401 
DEBU[0000] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&service=registry.docker.io 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/latest 
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.list.v2+json" 
DEBU[0001] Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite 
DEBU[0001] Source is a manifest list; copying (only) instance sha256:216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b for current system 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b 
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.v2+json" 
DEBU[0001] IsRunningImageAllowed for image docker:docker.io/library/alpine:latest 
DEBU[0001]  Using default policy section                
DEBU[0001]  Requirement 0: allowed                      
DEBU[0001] Overall: allowed                             
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
Getting image source signatures
DEBU[0001] Reading /var/lib/containers/sigstore/library/alpine@sha256=216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b/signature-1 
DEBU[0001] Not looking for sigstore attachments: disabled by configuration 
DEBU[0001] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0001] ... will first try using the original manifest unmodified 
DEBU[0001] Checking if we can reuse blob sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true 
DEBU[0001] Failed to retrieve partial blob: convert_images not configured 
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f 
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f 
Copying blob d25f557d7f31 [--------------------------------------] 0.0b / 3.5MiB (skipped: 0.0b = 0.00%)
Copying blob d25f557d7f31 [--------------------------------------] 0.0b / 3.5MiB | 0.0 b/s
Copying blob d25f557d7f31 done   | 
Copying blob d25f557d7f31 done   | 
DEBU[0002] No compression detected                      
DEBU[0002] Compression change for blob sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 ("application/vnd.docker.container.image.v1+json") not supported 
DEBU[0002] Using original blob without modification     
Copying config 1d34ffeaf1 done   | 
Writing manifest to image destination
DEBU[0002] setting image creation date to 2024-05-22 18:18:12.052034407 +0000 UTC 
DEBU[0002] created new image ID "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" with metadata "{}" 
DEBU[0002] added name "docker.io/library/alpine:latest" to image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Pulled candidate docker.io/library/alpine:latest successfully 
DEBU[0002] Looking up image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage 
DEBU[0002] Trying "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" ... 
DEBU[0002] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Found image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" as "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage 
DEBU[0002] Found image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" as "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1) 
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Looking up image "alpine" in local containers storage 
DEBU[0002] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0002] Trying "docker.io/library/alpine:latest" ... 
DEBU[0002] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage 
DEBU[0002] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1) 
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 
DEBU[0002] using systemd mode: false                    
DEBU[0002] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0002] Successfully loaded network a: &{a 9ccb0856370ddfdea57b7bb9477701ff419bd56f62fafb0d00b516a440ee2714 bridge podman1 2024-05-23 15:38:22.45350875 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[] map[] map[driver:host-local]} 
DEBU[0002] Successfully loaded 2 networks               
DEBU[0002] Allocated lock 0 for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" 
DEBU[0002] Created container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" 
DEBU[0002] Container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" has work directory "/var/lib/containers/storage/overlay-containers/b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575/userdata" 
DEBU[0002] Container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" has run directory "/run/containers/storage/overlay-containers/b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575/userdata" 
DEBU[0002] Handling terminal attach                     
INFO[0002] Received shutdown.Stop(), terminating!        PID=29
DEBU[0002] Enabling signal proxying                     
DEBU[0002] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/K7SXMMZESE6ZSY262VMYIDLKNL,upperdir=/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/diff,workdir=/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/work,nodev,fsync=0 
DEBU[0002] Made network namespace at /run/user/0/netns/netns-b8d9b493-1423-df26-84d6-9f7d8c601986 for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 
DEBU[0002] Mounted container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" at "/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/merged" 
DEBU[0002] Created root filesystem for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 at /var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/merged 
DEBU[0002] Creating rootless network namespace at "/run/containers/storage/networks/rootless-netns/rootless-netns" 
DEBU[0002] pasta arguments: --config-net --pid /run/containers/storage/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/containers/storage/networks/rootless-netns/rootless-netns 
DEBU[0002] The path of /etc/resolv.conf in the mount ns is "/etc/resolv.conf" 
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network a
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1F40FC92DA241 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-1F40FC92DA241
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1F40FC92DA241
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-1F40FC92DA241 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 10.89.0.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
ERRO[0002] IPAM error: failed to open database /run/containers/storage/networks/ipam.db: open /run/containers/storage/networks/ipam.db: no such file or directory 
ERRO[0002] Unmounting partially created network namespace for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575: failed to remove ns path: remove /run/user/0/netns/netns-b8d9b493-1423-df26-84d6-9f7d8c601986: device or resource busy 
DEBU[0002] Unmounted container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" 
DEBU[0002] Network is already cleaned up, skipping...   
DEBU[0002] Cleaning up container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 
DEBU[0002] Network is already cleaned up, skipping...   
DEBU[0002] Container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 storage is already unmounted, skipping... 
DEBU[0002] ExitCode msg: "netavark (exit code 1): io error: failed to create aardvark-dns directory: no such file or directory (os error 2)" 
Error: netavark (exit code 1): IO error: failed to create aardvark-dns directory: No such file or directory (os error 2)
DEBU[0002] Shutting down engines    

@yellowhat
Author

yellowhat commented May 24, 2024

Using an older version seems to work:

$ podman run -it --rm --privileged quay.io/podman/stable:v4.9.4
...
[root@d80777d4e6fe /]# podman network create a
a
[root@d80777d4e6fe /]# podman run -it --net a alpine hostname
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob d25f557d7f31 done   | 
Copying config 1d34ffeaf1 done   | 
Writing manifest to image destination
WARN[0002] Path "/run/secrets/etc-pki-entitlement" from "/etc/containers/mounts.conf" doesn't exist, skipping 
WARN[0002] Path "/run/secrets/rhsm" from "/etc/containers/mounts.conf" doesn't exist, skipping 
d80777d4e6fe

adelton pushed a commit to adelton/kind-in-pod that referenced this issue May 24, 2024
@Luap99
Member

Luap99 commented May 27, 2024

Can you unset _CONTAINERS_USERNS_CONFIGURED in the container, i.e. run with --unsetenv _CONTAINERS_USERNS_CONFIGURED on the outer container?

I am not sure why this is set in our images by default, as this is an internal detail and does not look correct to me at all.

@yellowhat
Author

It works:

$ podman run -it --rm --privileged --unsetenv _CONTAINERS_USERNS_CONFIGURED  quay.io/podman/stable:v5.0.2
[root@2215684bcb22 /]# env | grep _CON
[root@2215684bcb22 /]# podman network create a
a
[root@2215684bcb22 /]# podman run -it --net a alpine hostname
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob d25f557d7f31 done   | 
Copying config 1d34ffeaf1 done   | 
Writing manifest to image destination
2215684bcb22

@Luap99
Member

Luap99 commented May 27, 2024

Ack, I will create a fix. It looks like we only set it to an empty string while the internal code sets a real value, so I should ignore the empty value, I guess.

Luap99 added the network label May 27, 2024
Luap99 added a commit to Luap99/common that referenced this issue May 27, 2024
For some unknown reason the podman container image sets the
_CONTAINERS_USERNS_CONFIGURED env to an empty value. I don't know what
the purpose of this is, but it will trigger the check here, which is wrong
when the container is privileged.

To fix this, check that the value is set to done, like it is by the reexec
logic. Also make sure the lock dir uses the same condition to stay
consistent.

Fixes containers/podman#22791

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
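As an illustration of the fix described in that commit message, here is a minimal, hypothetical Go sketch (not the actual containers/common code; the helper name is invented) of checking the variable's value rather than its mere presence:

```go
package main

import (
	"fmt"
	"os"
)

// usernsEnvName is the variable podman's reexec logic uses; the reexec
// code sets it to "done" once the rootless user namespace is configured.
const usernsEnvName = "_CONTAINERS_USERNS_CONFIGURED"

// inConfiguredUserNS is a hypothetical helper. A pre-fix style check such as
// os.Getenv(usernsEnvName) != "" would also match the empty value exported
// by the podman container image; checking for "done" only matches the value
// written by the reexec logic, per the commit message above.
func inConfiguredUserNS() bool {
	return os.Getenv(usernsEnvName) == "done"
}

func main() {
	if inConfiguredUserNS() {
		fmt.Println("configured rootless userns: use the rootless-netns paths")
	} else {
		fmt.Println("not reexec'd: use the regular run-root paths")
	}
}
```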
Luap99 added a commit to Luap99/common that referenced this issue May 27, 2024
Luap99 added a commit to Luap99/common that referenced this issue May 31, 2024
hswong3i pushed a commit to alvistack/containers-common that referenced this issue Jun 5, 2024
adelton pushed a commit to adelton/kind-in-pod that referenced this issue Jun 8, 2024
adelton pushed a commit to adelton/kind-in-pod that referenced this issue Jun 20, 2024
stale-locking-app bot added the locked - please file new issue/PR label Aug 27, 2024
stale-locking-app bot locked as resolved and limited conversation to collaborators Aug 27, 2024