podman: container engine docker: StartHost failed #7992

Closed

elegos opened this issue May 4, 2020 · 16 comments
Labels

co/podman-driver: podman driver issues
kind/bug: Categorizes issue or PR as related to a bug.
kind/documentation: Categorizes issue or PR as related to documentation.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-solution-message: Issues where offering a solution for an error would be helpful.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

elegos (Contributor) commented May 4, 2020

OS: Fedora 32
minikube: v1.10.0-beta.2 (with #7962 applied so that the podman driver works)

Steps to reproduce the issue:

  1. minikube start --driver=podman

Full output of failed command:

    ~ : minikube start --alsologtostderr --driver=podman
I0504 15:34:57.038952  149480 start.go:99] hostinfo: {"hostname":"localhost.localdomain","uptime":13514,"bootTime":1588585783,"procs":486,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"32","kernelVersion":"5.6.8-300.fc32.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"5a3e2727-374c-4665-8b07-e67a1fc66448"}
I0504 15:34:57.040062  149480 start.go:109] virtualization: kvm host
😄  minikube v1.10.0-beta.2 on Fedora 32
I0504 15:34:57.040283  149480 driver.go:253] Setting default libvirt URI to qemu:///system
I0504 15:34:57.040287  149480 notify.go:125] Checking for updates...
I0504 15:34:57.084872  149480 podman.go:97] podman version: 1.9.1
✨  Using the podman (experimental) driver based on user configuration
I0504 15:34:57.084941  149480 start.go:206] selected driver: podman
I0504 15:34:57.084946  149480 start.go:579] validating driver "podman" against <nil>
I0504 15:34:57.084953  149480 start.go:585] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0504 15:34:57.085001  149480 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0504 15:34:57.085098  149480 cli_runner.go:108] Run: sudo podman system info --format json
I0504 15:34:57.166752  149480 start_flags.go:231] Using suggested 3900MB memory alloc based on sys=15992MB, container=15992MB
I0504 15:34:57.166919  149480 start_flags.go:553] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0504 15:34:57.167023  149480 cache.go:103] Beginning downloading kic artifacts for podman with docker
I0504 15:34:57.167035  149480 cache.go:115] Driver isn't docker, skipping base-image download
I0504 15:34:57.167054  149480 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0504 15:34:57.167084  149480 preload.go:96] Found local preload: /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0504 15:34:57.167096  149480 cache.go:47] Caching tarball of preloaded images
I0504 15:34:57.167112  149480 preload.go:122] Found /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0504 15:34:57.167121  149480 cache.go:50] Finished verifying existence of preloaded tar for  v1.18.1 on docker
I0504 15:34:57.167374  149480 profile.go:156] Saving config to /home/elegos/.minikube/profiles/minikube/config.json ...
I0504 15:34:57.167496  149480 lock.go:35] WriteFile acquiring /home/elegos/.minikube/profiles/minikube/config.json: {Name:mkf0fc1747a7eda8f54bf02b38aabf21182a31cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0504 15:34:57.167904  149480 cache.go:125] Successfully downloaded all kic artifacts
I0504 15:34:57.167933  149480 start.go:223] acquiring machines lock for minikube: {Name:mk54bbd76b9ba071d84e6139eee3a3cd7ecc36f4 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0504 15:34:57.168053  149480 start.go:227] acquired machines lock for "minikube" in 104.541µs
I0504 15:34:57.168075  149480 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}
I0504 15:34:57.168134  149480 start.go:104] createHost starting for "" (driver="podman")
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
I0504 15:34:57.168383  149480 start.go:140] libmachine.API.Create for "minikube" (driver="podman")
I0504 15:34:57.168416  149480 client.go:161] LocalClient.Create starting
I0504 15:34:57.168455  149480 main.go:110] libmachine: Reading certificate data from /home/elegos/.minikube/certs/ca.pem
I0504 15:34:57.168488  149480 main.go:110] libmachine: Decoding PEM data...
I0504 15:34:57.168512  149480 main.go:110] libmachine: Parsing certificate...
I0504 15:34:57.168673  149480 main.go:110] libmachine: Reading certificate data from /home/elegos/.minikube/certs/cert.pem
I0504 15:34:57.168705  149480 main.go:110] libmachine: Decoding PEM data...
I0504 15:34:57.168725  149480 main.go:110] libmachine: Parsing certificate...
I0504 15:34:57.169166  149480 cli_runner.go:108] Run: sudo podman ps -a --format {{.Names}}
W0504 15:34:57.232508  149480 oci.go:149] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0504 15:34:57.232742  149480 cli_runner.go:108] Run: sudo podman info --format "'{{json .SecurityOptions}}'"
I0504 15:34:57.232619  149480 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0504 15:34:57.232832  149480 preload.go:96] Found local preload: /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0504 15:34:57.232849  149480 kic.go:133] Starting extracting preloaded images to volume ...
I0504 15:34:57.232961  149480 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0504 15:34:57.233048  149480 kic.go:136] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: exec: "docker": executable file not found in $PATH
stdout:

stderr:
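
(Note: the failure above is the first visible problem. Even though the podman driver is selected, minikube shells out to docker run to pre-extract the preload tarball into the "minikube" volume, and because docker is not on the host the step aborts with exec: "docker": executable file not found in $PATH. A sketch of what the equivalent invocation under podman might look like; this is simply the logged docker command with "sudo podman" substituted, not what minikube actually runs:

    sudo podman run --rm --entrypoint /usr/bin/tar \
      -v /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
      -v minikube:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 \
      -I lz4 -xvf /preloaded.tar -C /extractDir

The run continues anyway, so this step on its own is non-fatal; the fatal error comes later, when docker.service is restarted inside the container.)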
I0504 15:34:57.320586  149480 cli_runner.go:108] Run: sudo podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var:exec --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10
I0504 15:34:57.599378  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Running}}
I0504 15:34:57.676962  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:34:57.748337  149480 oci.go:203] the created container "minikube" has a running status.
I0504 15:34:57.748363  149480 kic.go:157] Creating ssh key for kic: /home/elegos/.minikube/machines/minikube/id_rsa...
I0504 15:34:57.907711  149480 kic_runner.go:177] podman (temp): /home/elegos/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0504 15:34:58.179558  149480 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0504 15:34:58.179605  149480 kic_runner.go:112] Args: [sudo podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0504 15:34:58.314045  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:34:58.387273  149480 machine.go:86] provisioning docker machine ...
I0504 15:34:58.387323  149480 ubuntu.go:166] provisioning hostname "minikube"
I0504 15:34:58.387463  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:58.460786  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:34:58.461152  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 45151 <nil> <nil>}
I0504 15:34:58.461176  149480 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0504 15:34:58.587349  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0504 15:34:58.587554  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:58.656598  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:34:58.656798  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 45151 <nil> <nil>}
I0504 15:34:58.656829  149480 main.go:110] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0504 15:34:58.767082  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0504 15:34:58.767112  149480 ubuntu.go:172] set auth options {CertDir:/home/elegos/.minikube CaCertPath:/home/elegos/.minikube/certs/ca.pem CaPrivateKeyPath:/home/elegos/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/elegos/.minikube/machines/server.pem ServerKeyPath:/home/elegos/.minikube/machines/server-key.pem ClientKeyPath:/home/elegos/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/elegos/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/elegos/.minikube}
I0504 15:34:58.767140  149480 ubuntu.go:174] setting up certificates
I0504 15:34:58.767152  149480 provision.go:82] configureAuth start
I0504 15:34:58.767225  149480 cli_runner.go:108] Run: sudo podman inspect -f {{.NetworkSettings.IPAddress}} minikube
I0504 15:34:58.833473  149480 provision.go:131] copyHostCerts
I0504 15:34:58.833534  149480 exec_runner.go:91] found /home/elegos/.minikube/cert.pem, removing ...
I0504 15:34:58.833601  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/cert.pem --> /home/elegos/.minikube/cert.pem (1078 bytes)
I0504 15:34:58.833729  149480 exec_runner.go:91] found /home/elegos/.minikube/key.pem, removing ...
I0504 15:34:58.833767  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/key.pem --> /home/elegos/.minikube/key.pem (1675 bytes)
I0504 15:34:58.833859  149480 exec_runner.go:91] found /home/elegos/.minikube/ca.pem, removing ...
I0504 15:34:58.833895  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/ca.pem --> /home/elegos/.minikube/ca.pem (1038 bytes)
I0504 15:34:58.833977  149480 provision.go:105] generating server cert: /home/elegos/.minikube/machines/server.pem ca-key=/home/elegos/.minikube/certs/ca.pem private-key=/home/elegos/.minikube/certs/ca-key.pem org=elegos.minikube san=[10.88.0.42 localhost 127.0.0.1]
I0504 15:34:58.903723  149480 provision.go:159] copyRemoteCerts
I0504 15:34:58.903760  149480 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0504 15:34:58.903792  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:58.979497  149480 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:45151 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0504 15:34:59.060599  149480 ssh_runner.go:215] scp /home/elegos/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0504 15:34:59.078762  149480 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server.pem --> /etc/docker/server.pem (1119 bytes)
I0504 15:34:59.092786  149480 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0504 15:34:59.109814  149480 provision.go:85] duration metric: configureAuth took 342.648328ms
I0504 15:34:59.109839  149480 ubuntu.go:190] setting minikube options for container-runtime
I0504 15:34:59.110053  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:59.184557  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:34:59.184764  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 45151 <nil> <nil>}
I0504 15:34:59.184786  149480 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0504 15:34:59.293410  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0504 15:34:59.293431  149480 ubuntu.go:71] root file system type: overlay
I0504 15:34:59.293577  149480 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0504 15:34:59.293649  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:59.360567  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:34:59.360753  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 45151 <nil> <nil>}
I0504 15:34:59.360886  149480 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0504 15:34:59.477934  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0504 15:34:59.478122  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:34:59.546555  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:34:59.546752  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 45151 <nil> <nil>}
I0504 15:34:59.546788  149480 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0504 15:34:59.819139  149480 main.go:110] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-05-04 13:34:59.476638053 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

I0504 15:34:59.819208  149480 ubuntu.go:192] Error setting container-runtime options during provisioning ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-05-04 13:34:59.476638053 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
I0504 15:34:59.819263  149480 machine.go:89] provisioned docker machine in 1.431961376s
I0504 15:34:59.819277  149480 client.go:164] LocalClient.Create took 2.650853649s
I0504 15:35:01.819406  149480 start.go:107] duration metric: createHost completed in 4.651258866s
I0504 15:35:01.819436  149480 start.go:74] releasing machines lock for "minikube", held for 4.651368427s
🤦  StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-05-04 13:34:59.476638053 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

I0504 15:35:01.820342  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:01.889975  149480 stop.go:36] StopHost: minikube
✋  Stopping "minikube" in podman ...
I0504 15:35:01.890412  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
🛑  Powering off "minikube" via SSH ...
I0504 15:35:01.962605  149480 cli_runner.go:108] Run: sudo podman exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0504 15:35:03.121146  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:03.193291  149480 oci.go:504] container minikube status is Stopped
I0504 15:35:03.193320  149480 oci.go:516] Successfully shutdown container minikube
I0504 15:35:03.193332  149480 stop.go:84] shutdown container: err=<nil>
I0504 15:35:03.193365  149480 main.go:110] libmachine: Stopping "minikube"...
I0504 15:35:03.193525  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:03.271461  149480 stop.go:56] stop err: Machine "minikube" is already stopped.
I0504 15:35:03.271495  149480 stop.go:59] host is already stopped
🔥  Deleting "minikube" in podman ...
I0504 15:35:04.271779  149480 cli_runner.go:108] Run: sudo podman inspect -f {{.Id}} minikube
I0504 15:35:04.345543  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:04.420994  149480 cli_runner.go:108] Run: sudo podman exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0504 15:35:04.483544  149480 oci.go:496] error shutdown minikube: sudo podman exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 255
stdout:

stderr:
Error: can only create exec sessions on running containers: container state improper
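(Note: this error looks benign: the container was already stopped by the earlier sudo init 0, so the retried shutdown has no running container to exec into. The status check that follows confirms it, and the same check can be run by hand:

    sudo podman inspect minikube --format '{{.State.Status}}'

which at this point reports Stopped, matching the "container minikube status is Stopped" line below.)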
I0504 15:35:05.483784  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:05.555946  149480 oci.go:504] container minikube status is Stopped
I0504 15:35:05.555964  149480 oci.go:516] Successfully shutdown container minikube
I0504 15:35:05.556019  149480 cli_runner.go:108] Run: sudo podman rm -f -v minikube
I0504 15:35:05.679420  149480 cli_runner.go:108] Run: sudo podman inspect -f {{.Id}} minikube
I0504 15:35:10.749762  149480 start.go:223] acquiring machines lock for minikube: {Name:mk54bbd76b9ba071d84e6139eee3a3cd7ecc36f4 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0504 15:35:10.750038  149480 start.go:227] acquired machines lock for "minikube" in 241.651µs
I0504 15:35:10.750069  149480 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}
I0504 15:35:10.750128  149480 start.go:104] createHost starting for "" (driver="podman")
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
I0504 15:35:10.750280  149480 start.go:140] libmachine.API.Create for "minikube" (driver="podman")
I0504 15:35:10.750306  149480 client.go:161] LocalClient.Create starting
I0504 15:35:10.750335  149480 main.go:110] libmachine: Reading certificate data from /home/elegos/.minikube/certs/ca.pem
I0504 15:35:10.750379  149480 main.go:110] libmachine: Decoding PEM data...
I0504 15:35:10.750400  149480 main.go:110] libmachine: Parsing certificate...
I0504 15:35:10.750533  149480 main.go:110] libmachine: Reading certificate data from /home/elegos/.minikube/certs/cert.pem
I0504 15:35:10.750557  149480 main.go:110] libmachine: Decoding PEM data...
I0504 15:35:10.750575  149480 main.go:110] libmachine: Parsing certificate...
I0504 15:35:10.750969  149480 cli_runner.go:108] Run: sudo podman ps -a --format {{.Names}}
I0504 15:35:10.812411  149480 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
W0504 15:35:10.812450  149480 oci.go:149] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0504 15:35:10.812479  149480 preload.go:96] Found local preload: /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0504 15:35:10.812494  149480 kic.go:133] Starting extracting preloaded images to volume ...
I0504 15:35:10.812532  149480 cli_runner.go:108] Run: sudo podman info --format "'{{json .SecurityOptions}}'"
I0504 15:35:10.812640  149480 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0504 15:35:10.812712  149480 kic.go:136] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: exec: "docker": executable file not found in $PATH
stdout:

stderr:
I0504 15:35:10.892477  149480 cli_runner.go:108] Run: sudo podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var:exec --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10
I0504 15:35:11.186247  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Running}}
I0504 15:35:11.259517  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:11.336552  149480 oci.go:203] the created container "minikube" has a running status.
I0504 15:35:11.336580  149480 kic.go:157] Creating ssh key for kic: /home/elegos/.minikube/machines/minikube/id_rsa...
I0504 15:35:11.540263  149480 kic_runner.go:177] podman (temp): /home/elegos/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0504 15:35:11.810697  149480 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0504 15:35:11.810736  149480 kic_runner.go:112] Args: [sudo podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0504 15:35:11.955969  149480 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 15:35:12.025463  149480 machine.go:86] provisioning docker machine ...
I0504 15:35:12.025501  149480 ubuntu.go:166] provisioning hostname "minikube"
I0504 15:35:12.025699  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:12.098955  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:35:12.099260  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 44023 <nil> <nil>}
I0504 15:35:12.099285  149480 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0504 15:35:12.217145  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0504 15:35:12.217203  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:12.291566  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:35:12.291800  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 44023 <nil> <nil>}
I0504 15:35:12.291844  149480 main.go:110] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0504 15:35:12.395228  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0504 15:35:12.395259  149480 ubuntu.go:172] set auth options {CertDir:/home/elegos/.minikube CaCertPath:/home/elegos/.minikube/certs/ca.pem CaPrivateKeyPath:/home/elegos/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/elegos/.minikube/machines/server.pem ServerKeyPath:/home/elegos/.minikube/machines/server-key.pem ClientKeyPath:/home/elegos/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/elegos/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/elegos/.minikube}
I0504 15:35:12.395284  149480 ubuntu.go:174] setting up certificates
I0504 15:35:12.395296  149480 provision.go:82] configureAuth start
I0504 15:35:12.395364  149480 cli_runner.go:108] Run: sudo podman inspect -f {{.NetworkSettings.IPAddress}} minikube
I0504 15:35:12.468583  149480 provision.go:131] copyHostCerts
I0504 15:35:12.468648  149480 exec_runner.go:91] found /home/elegos/.minikube/ca.pem, removing ...
I0504 15:35:12.468709  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/ca.pem --> /home/elegos/.minikube/ca.pem (1038 bytes)
I0504 15:35:12.468837  149480 exec_runner.go:91] found /home/elegos/.minikube/cert.pem, removing ...
I0504 15:35:12.468876  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/cert.pem --> /home/elegos/.minikube/cert.pem (1078 bytes)
I0504 15:35:12.469000  149480 exec_runner.go:91] found /home/elegos/.minikube/key.pem, removing ...
I0504 15:35:12.469040  149480 exec_runner.go:98] cp: /home/elegos/.minikube/certs/key.pem --> /home/elegos/.minikube/key.pem (1675 bytes)
I0504 15:35:12.469123  149480 provision.go:105] generating server cert: /home/elegos/.minikube/machines/server.pem ca-key=/home/elegos/.minikube/certs/ca.pem private-key=/home/elegos/.minikube/certs/ca-key.pem org=elegos.minikube san=[10.88.0.43 localhost 127.0.0.1]
I0504 15:35:12.637922  149480 provision.go:159] copyRemoteCerts
I0504 15:35:12.637961  149480 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0504 15:35:12.637998  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:12.708636  149480 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:44023 SSHKeyPath:/home/elegos/.minikube/machines/minikube/id_rsa Username:docker}
I0504 15:35:12.788202  149480 ssh_runner.go:215] scp /home/elegos/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0504 15:35:12.803590  149480 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server.pem --> /etc/docker/server.pem (1119 bytes)
I0504 15:35:12.816714  149480 ssh_runner.go:215] scp /home/elegos/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0504 15:35:12.832292  149480 provision.go:85] duration metric: configureAuth took 436.985866ms
I0504 15:35:12.832310  149480 ubuntu.go:190] setting minikube options for container-runtime
I0504 15:35:12.832443  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:12.908584  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:35:12.908715  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 44023 <nil> <nil>}
I0504 15:35:12.908727  149480 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0504 15:35:13.016194  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0504 15:35:13.016220  149480 ubuntu.go:71] root file system type: overlay
I0504 15:35:13.016375  149480 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0504 15:35:13.016449  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:13.082735  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:35:13.082925  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 44023 <nil> <nil>}
I0504 15:35:13.083064  149480 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0504 15:35:13.198455  149480 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0504 15:35:13.198587  149480 cli_runner.go:108] Run: sudo podman inspect -f "{{range .NetworkSettings.Ports}}{{if eq .ContainerPort 22}}{{.HostPort}}{{end}}{{end}}" minikube
I0504 15:35:13.264489  149480 main.go:110] libmachine: Using SSH client type: native
I0504 15:35:13.264684  149480 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5c0] 0x7bf590 <nil>  [] 0s} 127.0.0.1 44023 <nil> <nil>}
I0504 15:35:13.264718  149480 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0504 15:35:13.573632  149480 main.go:110] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-05-04 13:35:13.196922328 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

I0504 15:35:13.573766  149480 ubuntu.go:192] Error setting container-runtime options during provisioning ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-05-04 13:35:13.196922328 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
I0504 15:35:13.573919  149480 machine.go:89] provisioned docker machine in 1.548432908s
I0504 15:35:13.573941  149480 client.go:164] LocalClient.Create took 2.823625435s
I0504 15:35:15.574123  149480 start.go:107] duration metric: createHost completed in 4.823980833s
I0504 15:35:15.574154  149480 start.go:74] releasing machines lock for "minikube", held for 4.824100254s
😿  Failed to start podman container. "minikube start" may fix it: creating host: create: provisioning: ssh command error:
[ssh command error identical to the one above: same docker.service unit diff and restart failure]

I0504 15:35:15.574398  149480 exit.go:58] WithError(error provisioning host)=Failed to start host: creating host: create: provisioning: ssh command error:
[ssh command error identical to the one above: same docker.service unit diff and restart failure]
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
        /usr/lib/golang/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ade51e, 0x17, 0x1d98a00, 0xc0001f5900)
        /home/elegos/Development/minikube/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ae98a0, 0xc000584a80, 0x0, 0x1)
        /home/elegos/Development/minikube/cmd/minikube/cmd/start.go:161 +0xa7f
github.com/spf13/cobra.(*Command).execute(0x2ae98a0, 0xc000584a70, 0x1, 0x1, 0x2ae98a0, 0xc000584a70)
        /home/elegos/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2ae88e0, 0x0, 0x1, 0xc000049d20)
        /home/elegos/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /home/elegos/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /home/elegos/Development/minikube/cmd/minikube/cmd/root.go:108 +0x6a4
main.main()
        /home/elegos/Development/minikube/cmd/minikube/main.go:66 +0xea

❌  [DOCKER_RESTART_FAILED] error provisioning host Failed to start host: creating host: create: provisioning: ssh command error:
[ssh command error identical to the one above: same docker.service unit diff and restart failure]

💡  Suggestion: Remove the incompatible --docker-opt flag if one was provided
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/7070

Thank you :)

@elegos changed the title from "minikube start, driver Podman: StartHost failed" to "minikube start, driver Podman, container engine docker: StartHost failed" on May 4, 2020
@afbjorklund
Collaborator

Any chance of getting some systemd logs from within the container?

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

@afbjorklund added the co/podman-driver, kind/bug and priority/awaiting-more-evidence labels on May 4, 2020
@elegos
Contributor Author

elegos commented May 4, 2020

Deleted the previous message as I was wrong: minikube's pod is online.

systemctl status docker.service

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2020-05-04 15:07:06 UTC; 2h 10min ago
     Docs: https://docs.docker.com
  Process: 255 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 (code=exited, status=1/FAILURE)
 Main PID: 255 (code=exited, status=1/FAILURE)
      CPU: 95ms

May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889438149Z" level=warning msg="Your kernel does not support cgroup memory limit"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889459960Z" level=warning msg="Unable to find cpu cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889470450Z" level=warning msg="Unable to find blkio cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889479900Z" level=warning msg="Unable to find cpuset cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889488840Z" level=warning msg="mountpoint for pids not found"
May 04 15:07:06 minikube dockerd[255]: failed to start daemon: Devices cgroup isn't mounted
May 04 15:07:06 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 04 15:07:06 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
May 04 15:07:06 minikube systemd[1]: Failed to start Docker Application Container Engine.
May 04 15:07:06 minikube systemd[1]: docker.service: Consumed 95ms CPU time.

journalctl -xe

-- Logs begin at Mon 2020-05-04 15:07:04 UTC, end at Mon 2020-05-04 15:08:13 UTC. --
May 04 15:07:04 minikube systemd-journald[66]: Journal started
-- Subject: The journal has been started
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- The system journal process has started up, opened the journal
-- files for writing and is now ready to process requests.
May 04 15:07:04 minikube systemd-journald[66]: Runtime journal (/run/log/journal/cbb92813c7114c9c8a3eeb6698eb5ada) is 8.0M, max 799.6M, 791.6M free.
-- Subject: Disk space used by the journal
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- Runtime journal (/run/log/journal/cbb92813c7114c9c8a3eeb6698eb5ada) is currently using 8.0M.
-- Maximum allowed usage is set to 799.6M.
-- Leaving at least 1.1G free (of currently available 7.8G of disk space).
-- Enforced usage limit is thus 799.6M, of which 791.6M are still available.
-- 
-- The limits controlling how much disk space is used by the journal may
-- be configured with SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=,
-- RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize= settings in
-- /etc/systemd/journald.conf. See journald.conf(5) for details.
May 04 15:07:04 minikube systemd-sysctl[67]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory
May 04 15:07:04 minikube systemd-sysusers[70]: Creating group systemd-coredump with gid 999.
May 04 15:07:04 minikube systemd-sysusers[70]: Creating user systemd-coredump (systemd Core Dumper) with uid 999 and gid 999.
May 04 15:07:04 minikube systemd[1]: Starting Flush Journal to Persistent Storage...
-- Subject: A start job for unit systemd-journal-flush.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- A start job for unit systemd-journal-flush.service has begun execution.
-- 
-- The job identifier is 5.
May 04 15:07:04 minikube systemd[1]: Reached target System Initialization.
-- Subject: A start job for unit sysinit.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- A start job for unit sysinit.target has finished successfully.
-- 
-- The job identifier is 4.
May 04 15:07:04 minikube systemd[1]: Starting Docker Socket for the API.
-- Subject: A start job for unit docker.socket has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- A start job for unit docker.socket has begun execution.
-- 
-- The job identifier is 42.
May 04 15:07:04 minikube systemd[1]: Started Daily Cleanup of Temporary Directories.
-- Subject: A start job for unit systemd-tmpfiles-clean.timer has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- A start job for unit systemd-tmpfiles-clean.timer has finished successfully.
-- 
-- The job identifier is 44.
root@minikube:/# journalctl -u docker | cat
-- Logs begin at Mon 2020-05-04 15:07:04 UTC, end at Mon 2020-05-04 15:08:13 UTC. --
May 04 15:07:04 minikube systemd[1]: Starting Docker Application Container Engine...
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.782763140Z" level=info msg="Starting up"
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.784125288Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.784151378Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.784182298Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.784203808Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.784291199Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000863700, CONNECTING" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.804886280Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000863700, READY" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805785595Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805814195Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805842385Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805876366Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805954246Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00015fbf0, CONNECTING" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.805965456Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.806184827Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00015fbf0, READY" module=grpc
May 04 15:07:04 minikube dockerd[78]: time="2020-05-04T15:07:04.808675392Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064843934Z" level=warning msg="Your kernel does not support cgroup memory limit"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064871224Z" level=warning msg="Unable to find cpu cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064880534Z" level=warning msg="Unable to find blkio cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064889024Z" level=warning msg="Unable to find cpuset cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064896434Z" level=warning msg="mountpoint for pids not found"
May 04 15:07:05 minikube dockerd[78]: failed to start daemon: Devices cgroup isn't mounted
May 04 15:07:05 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 04 15:07:05 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
May 04 15:07:05 minikube systemd[1]: Failed to start Docker Application Container Engine.
May 04 15:07:05 minikube systemd[1]: docker.service: Consumed 90ms CPU time.
May 04 15:07:06 minikube systemd[1]: docker.service: Service RestartSec=100ms expired, scheduling restart.
May 04 15:07:06 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
May 04 15:07:06 minikube systemd[1]: Stopped Docker Application Container Engine.
May 04 15:07:06 minikube systemd[1]: docker.service: Consumed 0 CPU time.
May 04 15:07:06 minikube systemd[1]: Starting Docker Application Container Engine...
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.783797800Z" level=info msg="Starting up"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786070633Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786102314Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786131734Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786152954Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786256234Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007813a0, CONNECTING" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.786561366Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007813a0, READY" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787288371Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787323401Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787343851Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787367971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787421131Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009951e0, CONNECTING" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787449192Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.787635543Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009951e0, READY" module=grpc
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.788649229Z" level=info msg="Processing signal 'terminated'"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.790407599Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807233738Z" level=warning msg="Your kernel does not support cgroup memory limit"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807259198Z" level=warning msg="Unable to find cpu cgroup in mounts"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807268768Z" level=warning msg="Unable to find blkio cgroup in mounts"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807277058Z" level=warning msg="Unable to find cpuset cgroup in mounts"
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807284768Z" level=warning msg="mountpoint for pids not found"
May 04 15:07:06 minikube dockerd[213]: failed to start daemon: Devices cgroup isn't mounted
May 04 15:07:06 minikube dockerd[213]: time="2020-05-04T15:07:06.807958322Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 04 15:07:06 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 04 15:07:06 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
May 04 15:07:06 minikube systemd[1]: Stopped Docker Application Container Engine.
May 04 15:07:06 minikube systemd[1]: docker.service: Consumed 95ms CPU time.
May 04 15:07:06 minikube systemd[1]: Starting Docker Application Container Engine...
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.858047965Z" level=info msg="Starting up"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859497934Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859521274Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859537844Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859547324Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859596224Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00093e200, CONNECTING" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.859811156Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00093e200, READY" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860455069Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860481660Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860501350Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860515310Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860567910Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009936d0, CONNECTING" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.860803262Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0009936d0, READY" module=grpc
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.864184021Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889438149Z" level=warning msg="Your kernel does not support cgroup memory limit"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889459960Z" level=warning msg="Unable to find cpu cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889470450Z" level=warning msg="Unable to find blkio cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889479900Z" level=warning msg="Unable to find cpuset cgroup in mounts"
May 04 15:07:06 minikube dockerd[255]: time="2020-05-04T15:07:06.889488840Z" level=warning msg="mountpoint for pids not found"
May 04 15:07:06 minikube dockerd[255]: failed to start daemon: Devices cgroup isn't mounted
May 04 15:07:06 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 04 15:07:06 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
May 04 15:07:06 minikube systemd[1]: Failed to start Docker Application Container Engine.
May 04 15:07:06 minikube systemd[1]: docker.service: Consumed 95ms CPU time.

@afbjorklund
Collaborator

afbjorklund commented May 4, 2020

It seems to have some issues finding the cgroups... Are you perhaps still running cgroups v2?

May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064843934Z" level=warning msg="Your kernel does not support cgroup memory limit"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064871224Z" level=warning msg="Unable to find cpu cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064880534Z" level=warning msg="Unable to find blkio cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064889024Z" level=warning msg="Unable to find cpuset cgroup in mounts"
May 04 15:07:05 minikube dockerd[78]: time="2020-05-04T15:07:05.064896434Z" level=warning msg="mountpoint for pids not found"
May 04 15:07:05 minikube dockerd[78]: failed to start daemon: Devices cgroup isn't mounted

https://bugzilla.redhat.com/show_bug.cgi?id=1746355
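
A quick way to check which cgroup hierarchy the host is actually running (a generic check, not minikube-specific):

# cgroup2fs means the unified (v2) hierarchy; tmpfs means legacy v1 / hybrid
stat -fc %T /sys/fs/cgroup/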

@afbjorklund
Collaborator

afbjorklund commented May 4, 2020

It booted OK here, but I had already changed my system configuration back to cgroups v1 for docker...

Tested with Fedora 32, using either docker (moby-engine) 19.03.8 or podman 1.9.1.

@elegos
Contributor Author

elegos commented May 4, 2020

I'm not really into kernel development, but I suppose the host's cgroup setup affects the guest's too, as AFAIK the kernel is shared. Fedora 32 runs cgroup v2 by default, so I'm going to switch back to v1 with the extra kernel parameter systemd.unified_cgroup_hierarchy=0 (see the grubby command below)... and here I thought podman was cgroup v2 ready :P - I'll let you know ASAP, but if this is the issue, I hope containerd and then docker will gain cgroups v2 support!
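
For reference, on Fedora the kernel argument can be added with grubby (a sketch, assuming the default GRUB setup):

# append the parameter to every installed kernel's command line
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# the change takes effect on the next boot
sudo reboot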

@afbjorklund
Collaborator

You can try --container-runtime=cri-o instead, but I'm not sure whether Kubernetes is v2-ready.

https://medium.com/nttlabs/cgroup-v2-596d035be4d7
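
For example, combined with the podman driver:

minikube start --driver=podman --container-runtime=cri-o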

@elegos
Contributor Author

elegos commented May 4, 2020

I've already tried cri-o and got different errors (and yes, with containerd too :P ). With CRI-O it stalls on a connection timeout, but that's another bug!

@afbjorklund
Collaborator

afbjorklund commented May 4, 2020

AFAIK, containerd's status is pretty much the same as docker's - most of the work lies within runc.

https://github.com/opencontainers/runc

Fedora switched to using crun instead, which has cgroups v2 support, including for cri-o.

https://github.com/containers/crun
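
One way to check which OCI runtime podman is configured to use (the exact output layout varies between podman versions):

sudo podman info | grep -i -A 2 ociruntime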

@afbjorklund
Collaborator

You say either and I say either,
You say neither and I say neither
Either, either Neither, neither
Let's call the whole thing off.

You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, Tomato, tomahto.
Let's call the whole thing off

@afbjorklund added the kind/documentation, needs-solution-message and priority/important-longterm labels and removed the priority/awaiting-more-evidence label on May 4, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 2, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 1, 2020
@medyagh
Member

medyagh commented Sep 23, 2020

Podman is an experimental driver; I suggest using our docker driver.

@medyagh changed the title from "minikube start, driver Podman, container engine docker: StartHost failed" to "podman: container engine docker: StartHost failed" on Sep 23, 2020
@sharifelgamal removed the lifecycle/rotten label on Sep 30, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 29, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 28, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
