
feat: add VMware driver support for new minikube ISO #16796

Merged
merged 2 commits into kubernetes:master on Jul 12, 2023

Conversation

lbogdan
Contributor

@lbogdan lbogdan commented Jun 30, 2023

Resolves #16221

This PR adds VMware driver support for the new minikube ISO, which removed the password for the docker user in #15849.

This ISO change broke the VMware driver, because the driver initializes the VM using ssh and vmrun commands that assume the docker user's password is tcuser.

It uses the same approach as the QEMU driver: it creates a tar stream containing the "boot2docker, please format-me" magic string and the public key in .ssh/authorized_keys, and writes it to the beginning of the VM disk, where the automount script detects and uses it during the first boot.
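
For illustration, here is a minimal, self-contained sketch of that seeding step (not the actual minikube helper; the function name, file paths, and exact tar layout are simplified assumptions):

package main

import (
	"archive/tar"
	"bytes"
	"os"
)

const magicString = "boot2docker, please format-me"

// seedDisk writes a small tar stream to the very beginning of a raw disk file.
// The first entry carries the magic string, so the ISO's automount script knows
// to format the disk; the authorized_keys entry carries the SSH public key.
func seedDisk(diskPath string, pubKey []byte) error {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)

	// Magic-string entry first, so first boot knows to format the disk.
	if err := tw.WriteHeader(&tar.Header{Name: magicString, Size: int64(len(magicString))}); err != nil {
		return err
	}
	if _, err := tw.Write([]byte(magicString)); err != nil {
		return err
	}
	// .ssh directory and authorized_keys containing the public key.
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
		return err
	}
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))}); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}

	// Write the tar stream at offset 0 of the raw disk image.
	f, err := os.OpenFile(diskPath, os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteAt(buf.Bytes(), 0)
	return err
}

func main() {
	pubKey, err := os.ReadFile("id_rsa.pub") // illustrative key path
	if err != nil {
		panic(err)
	}
	if err := seedDisk("minikube.rawdisk", pubKey); err != nil { // illustrative disk path
		panic(err)
	}
}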

To be able to make changes to docker-machine-driver-vmware, this PR vendors its master branch into pkg/drivers/vmware, moving config/config.go to the parent folder and into the vmware package. For the actual changes, see the second commit.

@linux-foundation-easycla

linux-foundation-easycla bot commented Jun 30, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: lbogdan / name: Bogdan Luca (380359c)
  • ✅ login: spowelljr / name: Steven Powell (9502f96)

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Jun 30, 2023
@k8s-ci-robot
Contributor

Welcome @lbogdan!

It looks like this is your first PR to kubernetes/minikube 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/minikube has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 30, 2023
@k8s-ci-robot
Contributor

Hi @lbogdan. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jun 30, 2023
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jun 30, 2023
@minikube-bot
Collaborator

Can one of the admins verify this patch?

@medyagh
Member

medyagh commented Jun 30, 2023

@lbogdan thank you very much for this contribution, do you mind sharing the output of "minikube start" before / after this PR?

@medyagh
Member

medyagh commented Jun 30, 2023

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 30, 2023
@lbogdan
Contributor Author

lbogdan commented Jun 30, 2023

Sure!

Without this PR (latest minikube; the VMware driver was disabled in #16233):

PS C:\Users\Bogdan> minikube version
minikube version: v1.30.1
commit: 08896fd1dc362c097c925146c4a0d0dac715ace0
PS C:\Users\Bogdan> minikube start --driver vmware
😄  minikube v1.30.1 on Microsoft Windows 11 Pro 10.0.22621.1848 Build 22621.1848
✨  Using the vmware driver based on user configuration

❌  Exiting due to DRV_UNSUPPORTED: Due to security improvements to minikube the VMware driver is currently not supported. Available workarounds are to use a different driver or downgrade minikube to v1.29.0.

    We are accepting community contributions to fix this, for more details on the issue see: https://github.com/kubernetes/minikube/issues/16221

With this PR on top of master:

PS C:\Users\Bogdan> minikube start --driver vmware
😄  minikube v1.30.1 on Microsoft Windows 11 Pro 10.0.22621.1848 Build 22621.1848
✨  Using the vmware driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating vmware VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
PS C:\Users\Bogdan> kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    control-plane   36s   v1.27.3   192.168.126.128   <none>        Buildroot 2021.02.12   5.10.57          docker://24.0.2

@lbogdan
Contributor Author

lbogdan commented Jun 30, 2023

There are quite a few lint errors caused by the vendoring of docker-machine-driver-vmware. I'm not that proficient in Go, but I'll try to take a look tomorrow.

But before I do, please confirm vendoring is the way to go here.


@afbjorklund
Collaborator

There are quite a few lint errors caused by the vendoring of docker-machine-driver-vmware. I'm not that proficient in Go, but I'll try to take a look tomorrow.

But before I do, please confirm vendoring is the way to go here.

We are working towards lifting all the drivers out, so it is probably not the way to go (long term).

It is, of course, a perfectly reasonable way to test it, so that one doesn't have to only use "docker-machine".

But we made a related change to stop using the external binary, in anticipation of "minikube-machine".

I would have expected this password-removal fix to happen in docker-machine-driver-vmware?
I have not compared the new driver with the old one, but thought it would be more of a patch / PR...

What we do for docker/machine and for the other drivers is replace them with a patched fork.
Maybe backporting the fixes to machine-drivers/docker-machine-driver-vmware would be one way.


Eventually, we want to start using the new organization (minikube-machine).

Not sure where it would live, separately or in the main repo?

The reason for forking libmachine and all the drivers is both that upstream is dead and that we want to add a new API.
Currently the drivers and provisioners are forked in the minikube mono-repo, and that is not easy to work with.

Related to this, we probably want to lift "minikube-iso" out again, since it has become a very big component.
It is also in need of some refactoring, especially after the arch work, and should become more standalone.

@lbogdan: finally, thanks for fixing the vmware driver!

I think we want something similar for the parallels driver.

@lbogdan
Contributor Author

lbogdan commented Jul 1, 2023

Maybe backporting the fixes to machine-drivers/docker-machine-driver-vmware would be one way.

This would probably make the most sense, but considering there has been no activity there for over a year, and the "fix" is unrelated to the actual docker-machine driver, it will probably take some time to get merged, if ever.

Another approach would be to fork it, but that would mean managing a new repository.

I would suggest going with vendoring it (is this the same as what you call "forked in the minikube mono-repo"?) in the short term, and then including it in the process of lifting the drivers out to minikube-machine.

Unrelated: I see most of the Jenkins integration tests failing (flakes?). Is this a known issue, and is anyone looking into it?

@afbjorklund
Collaborator

afbjorklund commented Jul 1, 2023

I would suggest going with vendoring it (is this the same as what you call "forked in the minikube mono-repo"?) in the short term, and then including it in the process of lifting the drivers out to minikube-machine.

If by vendoring you mean copying it into the minikube repository under pkg/drivers, then it is not the same thing. If you include it in go.mod, it will end up under vendor/ after running "go mod vendor"; that is what vendoring usually means these days...

The main difference is that in docker-machine, the drivers were standalone gRPC programs and did not share code. The "Config" structure was never exported, and all the settings were transferred using the docker-machine string Flags.
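
As a self-contained illustration of that old flag-based contract (simplified stand-in types and an illustrative flag name, not the actual libmachine/mcnflag API):

package main

import "fmt"

// Flag is a stand-in for a docker-machine string flag: a named, string-typed setting.
type Flag struct {
	Name  string
	Usage string
}

// DriverOptions is how docker-machine handed parsed flag values back to a driver.
type DriverOptions interface {
	String(key string) string
}

// options is a trivial map-backed DriverOptions for this example.
type options map[string]string

func (o options) String(key string) string { return o[key] }

// vmwareDriver keeps its config private; it is only populated through flags.
type vmwareDriver struct {
	vmrunPath string
}

// GetCreateFlags declares the settings the driver accepts.
func (d *vmwareDriver) GetCreateFlags() []Flag {
	return []Flag{{Name: "vmware-vmrun-path", Usage: "path to the vmrun binary"}}
}

// SetConfigFromFlags copies the parsed string values into the private config.
func (d *vmwareDriver) SetConfigFromFlags(opts DriverOptions) error {
	d.vmrunPath = opts.String("vmware-vmrun-path")
	return nil
}

func main() {
	d := &vmwareDriver{}
	_ = d.SetConfigFromFlags(options{"vmware-vmrun-path": "/usr/local/bin/vmrun"})
	fmt.Println(d.vmrunPath)
}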

Now in minikube, the driver is referenced in the code base (from the "registry") and all the code is vendored in.

pkg/minikube/registry/drvs/hyperkit/hyperkit.go:        "k8s.io/minikube/pkg/drivers/hyperkit"
pkg/minikube/registry/drvs/none/none.go:        "k8s.io/minikube/pkg/drivers/none"
pkg/minikube/registry/drvs/qemu2/qemu2.go:      "k8s.io/minikube/pkg/drivers/qemu"
pkg/minikube/registry/drvs/ssh/ssh.go:  "k8s.io/minikube/pkg/drivers/ssh"
// This is duplicate of kvm.Driver. Avoids importing the kvm2 driver, which requires cgo & libvirt.
type kvmDriver struct

* commit 118783c removed k8s.io/minikube/pkg/drivers/kvm

pkg/minikube/registry/drvs/hyperv/hyperv.go:	"github.com/docker/machine/drivers/hyperv"
pkg/minikube/registry/drvs/parallels/parallels.go:	parallels "github.com/Parallels/docker-machine-parallels/v2"
pkg/minikube/registry/drvs/virtualbox/virtualbox.go:	"github.com/docker/machine/drivers/virtualbox"
pkg/minikube/registry/drvs/vmware/vmware.go:	vmware "github.com/machine-drivers/docker-machine-driver-vmware/pkg/drivers/vmware"

So the main difference was to build the drivers in as well, except for those where it makes sense to link them separately:

  • hyperkit (due to linking with some Mac libraries)

  • kvm (due to requiring linking with the C libraries)

@lbogdan
Contributor Author

lbogdan commented Jul 1, 2023

I've now also fixed the lint errors.

In the process, I added error checks to the vmrun commands used to initialize the VM; those commands were failing because of the removed password, so bubbling up the errors made minikube start fail.

So I went ahead and removed all ssh and vmrun commands (and all related code) that assumed password access.

@afbjorklund
Collaborator

The main outstanding question is whether all drivers should be included in minikube-machine/machine, or whether some should still live in their own repositories. Either way, they will all be included by minikube (in some way), just like they were before.

i.e. do we create a directory:

minikube-machine/machine/pkg/drivers/vmware

or do we create a repository:

minikube-machine/minikube-machine-driver-vmware

@lbogdan
Contributor Author

lbogdan commented Jul 1, 2023

Oh, OK, then I guess I had a more inclusive understanding of vendoring, meaning "copying a dependency's source files into the project". 🙂

@afbjorklund
Collaborator

Oh, OK, then I guess I had a more inclusive understanding of vendoring, meaning "copying a dependency's source files into the project". 🙂

I'm not sure the terms are properly defined anywhere, as long as we understand what we are talking about. The roadmap item would be to move "minikube-machine" (including libmachine, the drivers, and minikube-iso) out of the monorepo. With the increased focus on the KIC drivers, it could make more sense to have the VM and ISO parts separate.

There is some bigger-picture discussion that should happen during roadmap planning, or perhaps in the Slack channel.

@lbogdan
Contributor Author

lbogdan commented Jul 1, 2023

Well, I'm only interested in fixing VMware support as quickly as possible, so let me know if and how I can help get this into the next release.

@afbjorklund
Collaborator

afbjorklund commented Jul 1, 2023

Well, I'm only interested in fixing VMware support as quickly as possible, so let me know if and how I can help get this into the next release.

Patching github.com/machine-drivers/docker-machine-driver-vmware v0.1.5 would be a faster route.

i.e. create a fork of the repository, add the needed changes to a branch, and use that with a replace directive.
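
For illustration, the replace in minikube's go.mod would point the upstream module path at the fork; the line that eventually landed in this PR (shown in the review diff further down) looks like this:

replace github.com/machine-drivers/docker-machine-driver-vmware => github.com/lbogdan/docker-machine-driver-vmware v0.2.0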

@afbjorklund afbjorklund left a comment
Collaborator

Would prefer patching the docker-machine-driver-vmware repository, instead of removing it:

diff --git a/go.mod b/go.mod
index 4637bb9f6f5..4a8c18806c8 100644
--- a/go.mod
+++ b/go.mod
@@ -33,7 +33,6 @@ require (
 	github.com/johanneswuerbach/nfsexports v0.0.0-20200318065542-c48c3734757f
 	github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
 	github.com/klauspost/cpuid v1.2.0
-	github.com/machine-drivers/docker-machine-driver-vmware v0.1.5
 	github.com/mattbaird/jsonpatch v0.0.0-20200820163806-098863c1fc24
 	github.com/mattn/go-isatty v0.0.19
 	github.com/mitchellh/go-ps v1.0.0

@lbogdan
Contributor Author

lbogdan commented Jul 1, 2023

Sounds good!

Should I fork it under my GitHub lbogdan username, or do you want to fork it under the (kubernetes? minikube-machine?) organization and give me access?

@lbogdan
Contributor Author

lbogdan commented Jul 2, 2023

While testing on macOS, I've noticed a small issue: the minikube part of the VMware driver checks for vmrun to be in the PATH:

func status() registry.State {
	_, err := exec.LookPath("vmrun")
	if err != nil {
		return registry.State{Error: err, Fix: "Install vmrun", Doc: "https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/"}
	}
	return registry.State{Installed: true, Healthy: true}
}
but the driver has additional logic on macOS (and on Windows, too) to look for it even if it's not in the PATH.

I was thinking that we could export SetVmwareCmd() and use it to check for vmrun in status(). Alternatively, we could export a new CheckVmrun() function. What do you think?

@afbjorklund
Collaborator

You actually don't have to tag your fork; Go will "make something up" for a version if you just give it the commit.

Normally we don't tag forks, since if upstream wakes up from its coma we would suddenly end up with two v0.2.0 tags.
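
For example, such a pseudo-version in go.mod would look roughly like this (the 12-character hash is the fork commit linked later in this thread; the timestamp portion is illustrative):

github.com/machine-drivers/docker-machine-driver-vmware => github.com/lbogdan/docker-machine-driver-vmware v0.0.0-20230702000000-a391c48b14d5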

@afbjorklund
Collaborator

Fixing the duplicate check in the "registry" sounds like the most obvious fix, if it is currently broken (can't find vmrun).

It is not possible to export new functions in libmachine, which is one of the reasons we want minikube-machine.

@lbogdan
Contributor Author

lbogdan commented Jul 2, 2023

Can't we export (capitalize the first letter of) SetVmwareCmd() in the vmware package in docker-machine-driver-vmware, and then do the following in minikube/pkg/minikube/registry/drvs/vmware/vmware.go?

func status() registry.State {
	cmd := vmware.SetVmwareCmd("vmrun")
	if cmd == "vmrun" || cmd == "" { // if it can't find it, it just returns the argument (or an empty string, on Windows)
		return registry.State{Error: errors.New("vmrun not found"), Fix: "Install VMware", Doc: "https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/"}
	}
	return registry.State{Installed: true, Healthy: true}
}

Later edit: tried it and it works fine.

@lbogdan
Contributor Author

lbogdan commented Jul 2, 2023

Fixing the duplicate check in the "registry" sounds like the most obvious fix, if it is currently broken (can't find vmrun).

If you meant removing the vmrun registry check entirely, the error will be a bit different / more confusing IMO:

With updated check:

PS C:\Users\Bogdan> .\Downloads\minikube.exe start --driver vmware
😄  minikube v1.30.1-761-g36238f402 on Microsoft Windows 11 Pro 10.0.22621.1928 Build 22621.1928
✨  Using the vmware driver based on user configuration

🤷  Exiting due to PROVIDER_VMWARE_NOT_FOUND: The 'vmware' provider was not found: vmrun not found
💡  Suggestion: Install VMware
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/

Without check:

PS C:\Users\Bogdan> .\Downloads\minikube.exe start --driver vmware
😄  minikube v1.30.1-761-g36238f402 on Microsoft Windows 11 Pro 10.0.22621.1928 Build 22621.1928
✨  Using the vmware driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating vmware VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🔥  Deleting "minikube" in vmware ...
🤦  StartHost failed, but will try again: creating host: create: creating: open C:\Users\Bogdan\.minikube\machines\minikube\minikube-tmp-flat.vmdk: The system cannot find the file specified.
🔄  Restarting existing vmware VM for "minikube" ...
😿  Failed to start vmware VM. Running "minikube delete" may fix it: driver start: exec: no command

❌  Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: exec: no command

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

@afbjorklund
Collaborator

Actually, I was thinking more like fixing the check, by adding the missing paths on the missing platforms.

missed in commit 3c48985

@lbogdan
Contributor Author

lbogdan commented Jul 2, 2023

So just go ahead and duplicate all the logic from docker-machine-driver-vmware, for Windows and macOS?

Windows - reads from the Windows registry: https://github.com/lbogdan/docker-machine-driver-vmware/blob/a391c48b14d523058072d491f0fec2296dd461e4/pkg/drivers/vmware/vmware_windows.go#L40-L70

macOS - looks into an additional path: https://github.com/lbogdan/docker-machine-driver-vmware/blob/a391c48b14d523058072d491f0fec2296dd461e4/pkg/drivers/vmware/vmware_darwin.go#L39-L53
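
As an illustration only, duplicating just the macOS part of that lookup in the registry's status() might look like this sketch (the Fusion path mirrors the driver's darwin lookup linked above; function name is made up):

package vmware

import (
	"fmt"
	"os/exec"
)

// lookupVmrun checks the PATH first, then the default VMware Fusion install
// location, mirroring the driver's darwin-specific logic.
func lookupVmrun() (string, error) {
	if p, err := exec.LookPath("vmrun"); err == nil {
		return p, nil
	}
	// exec.LookPath tries a name containing a slash directly instead of searching the PATH.
	fusion := "/Applications/VMware Fusion.app/Contents/Library/vmrun"
	if p, err := exec.LookPath(fusion); err == nil {
		return p, nil
	}
	return "", fmt.Errorf("vmrun not found in PATH or at %s", fusion)
}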

@afbjorklund
Collaborator

Yeah, not great either... Having some feature for it in the new API would be good, I think?

Could use the same thing for VBoxManage or qemu-system-x86_64/qemu-system-aarch64 etc

@afbjorklund
Collaborator

It is just that at some point in the future we want to restore the feature of decoupling the "registry" and the drivers...

So we should try not to import any code from the drivers outside of the libmachine API... including the Config.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 5, 2023
@medyagh
Member

medyagh commented Jul 5, 2023

(quoting @lbogdan's before / after "minikube start" output from above)

Thank you @lbogdan, I see @afbjorklund is already reviewing the PR; once @afbjorklund approves it, you have my approval too :)
Thank you for your contribution.

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jul 12, 2023
@@ -235,7 +235,7 @@ replace (
 	github.com/Parallels/docker-machine-parallels/v2 => github.com/minikube-machine/machine-driver-parallels/v2 v2.0.1
 	github.com/briandowns/spinner => github.com/alonyb/spinner v1.12.7
 	github.com/docker/machine => github.com/minikube-machine/machine v0.0.0-20230610170757-350a83297593
-	github.com/machine-drivers/docker-machine-driver-vmware => github.com/minikube-machine/machine-driver-vmware v0.1.5
+	github.com/machine-drivers/docker-machine-driver-vmware => github.com/lbogdan/docker-machine-driver-vmware v0.2.0
Member

@lbogdan do you mind making this PR to the new Org https://github.com/minikube-machine/machine-driver-vmware ?

Contributor Author

Yes, I will get to it on Friday, or over the weekend at the latest.

Member

To get the release out in time, we'll merge this and then change it back to our fork once the change gets merged upstream.


@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jul 12, 2023
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 12, 2023
@lbogdan
Contributor Author

lbogdan commented Jul 12, 2023

Please don't merge this yet, I want to get back to it on Friday, or over the weekend at the latest.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16796) |
+----------------+----------+---------------------+
| minikube start | 52.8s    | 53.7s               |
| enable ingress | 28.5s    | 28.4s               |
+----------------+----------+---------------------+

Times for minikube start: 52.6s 54.8s 53.2s 50.5s 53.1s
Times for minikube (PR 16796) start: 53.1s 52.4s 55.1s 54.3s 53.4s

Times for minikube ingress: 27.8s 29.3s 28.4s 27.9s 28.9s
Times for minikube (PR 16796) ingress: 28.3s 27.9s 29.4s 27.9s 28.4s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16796) |
+----------------+----------+---------------------+
| minikube start | 25.8s    | 25.4s               |
| enable ingress | 48.7s    | 48.8s               |
+----------------+----------+---------------------+

Times for minikube start: 26.0s 25.5s 25.8s 26.2s 25.5s
Times for minikube (PR 16796) start: 25.0s 25.7s 26.3s 26.2s 24.0s

Times for minikube ingress: 48.4s 48.4s 49.4s 48.9s 48.4s
Times for minikube (PR 16796) ingress: 49.4s 48.4s 48.9s 48.9s 48.4s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16796) |
+----------------+----------+---------------------+
| minikube start | 22.7s    | 24.1s               |
| enable ingress | 28.8s    | 31.4s               |
+----------------+----------+---------------------+

Times for minikube ingress: 31.6s 31.4s 17.9s 31.4s 31.4s
Times for minikube (PR 16796) ingress: 31.4s 31.4s 31.4s 31.5s 31.4s

Times for minikube start: 23.7s 21.6s 21.0s 24.0s 23.3s
Times for minikube (PR 16796) start: 23.7s 24.0s 24.1s 24.2s 24.7s

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lbogdan, medyagh, spowelljr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment | Failed Tests | Flake Rate (%)
Hyperkit_macOS | TestImageBuild/serial/Setup (gopogh) | 1.82 (chart)
Hyperkit_macOS | TestNetworkPlugins/group/enable-default-cni/Start (gopogh) | 2.42 (chart)
Docker_Linux_crio | TestPause/serial/SecondStartNoReconfiguration (gopogh) | 12.35 (chart)
QEMU_macOS | TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (gopogh) | 38.22 (chart)

To see the flake rates of all tests by environment, click here.

@spowelljr spowelljr merged commit de791e7 into kubernetes:master Jul 12, 2023
23 checks passed
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/S Denotes a PR that changes 10-29 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Recently built minikube fails to start with vmware driver
7 participants