
deploy_minikube.sh: Adding support for Centos8 #6073

Merged: 1 commit merged into noobaa:master from liran-update-minikube-script on Jul 6, 2020

Conversation

@liranmauda (Contributor) commented on Jul 5, 2020:

Explain the changes

deploy_minikube.sh:

  • Adding support for CentOS 8
  • Moving minikube and kubectl from /usr/local/bin to /usr/bin

sudo dnf -y update && sudo dnf -y install socat conntrack
dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io --nobest
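A minimal sketch of the likely follow-up to the hunk above: installing docker-ce does not start the daemon, so the script presumably also enables the service before minikube runs (an assumption, not shown in this excerpt):

# Assumed follow-up to the dnf install above (not quoted from the PR):
# start the Docker daemon and confirm it is reachable before minikube runs.
sudo systemctl enable --now docker
sudo docker info >/dev/null && echo "docker daemon is up"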
Reviewer:
Why docker-ce and not podman and/or buildah?

@liranmauda (Contributor, Author):

Originally this was for deploying on Ubuntu (as it is for the Travis runs).
We can change to podman when running on CentOS.

@liranmauda (Contributor, Author):

By the look of it, minikube support for podman is still WIP. We can see 2 things:

  1. Using the podman driver is experimental.
  2. Even though the driver is podman, the runtime is docker, and changing it to cri-o still looks for docker ("Fedora 31 vm-driver=podman fail to start trying to start docker service", kubernetes/minikube#6795).

We are using MINIKUBE_VERSION=v1.8.2 and KUBERNETES_VERSION=v1.17.3 due to kubernetes/minikube#7828

Running with podman results in:

+ main@./1.sh:70 cat /root/.minikube/config/config.json
{
    "WantNoneDriverWarning": false,
    "WantUpdateNotification": false,
    "container-runtime": "cri-o",
    "driver": "podman",
    "vm-driver": "none"
}+ main@./1.sh:72 minikube version
minikube version: v1.8.2
commit: eb13446e786c9ef70cb0a9f85a633194e62396a1
+ main@./1.sh:74 minikube start --kubernetes-version=v1.17.3
😄  minikube v1.8.2 on Centos 8.1.1911
    ▪ MINIKUBE_VERSION=v1.8.2
✨  Using the podman (experimental) driver based on user configuration
E0705 15:30:53.797867   10377 cache.go:106] Error downloading kic artifacts:  error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Once podman support in minikube is better and we are able to upgrade the minikube version, we can switch to podman on CentOS 8.
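For contrast with the failing podman run above, here is a minimal sketch of the docker-based flow the script sticks with; the download URL and flags are standard minikube usage for these pinned versions, assumed rather than copied from the PR:

# Assumed docker-based flow using the pinned versions mentioned above.
MINIKUBE_VERSION=v1.8.2
KUBERNETES_VERSION=v1.17.3
curl -Lo minikube "https://storage.googleapis.com/minikube/releases/${MINIKUBE_VERSION}/minikube-linux-amd64"
chmod +x minikube && sudo mv minikube /usr/bin/
# vm-driver=none runs Kubernetes directly on the host against the local Docker daemon.
sudo minikube start --vm-driver=none --kubernetes-version="${KUBERNETES_VERSION}"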

@liranmauda (Contributor, Author):

I have opened a new issue to track this: #6075

Contributor:

I have seen similar problems with minikube+podman; it does not really work well yet.

SELinux_status=$(sestatus | grep "SELinux status" | awk -F ":" '{print $2}' | xargs)
if [ "${SELinux_status}" == "enabled" ]
then
    sudo setenforce 0
fi
Reviewer:

oy :(

Why?

@liranmauda (Contributor, Author):

Currently, kubeadm on CentOS needs SELinux to be disabled:

kubernetes/minikube#6014 (comment)
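For completeness, setenforce 0 only lasts until reboot; a common companion step, sketched here as an assumption rather than something this PR adds, makes the change persistent:

# Assumed persistence step (not in this PR): switch SELinux to permissive
# in the config file so the setting survives reboots.
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config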

Reviewer:

Does that happen with podman as well?

Reviewer:

(I mean, somehow, magically it is working in RHEL, no?)

@liranmauda (Contributor, Author):

We should look into it once we can switch to podman (see the previous comment, #6073 (comment)).

Contributor:

There are some locations where minikube stores executables, and these try to access configuration files. SELinux prevents that. I am not sure whether the location that minikube uses is dynamic or predictable. In the second case, creating the directories in advance and setting appropriate labels might work (a rough sketch follows below).

This obviously needs some more research. Moving to Permissive mode is acceptable for the moment.
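A rough, untested sketch of the pre-labeling idea from the comment above; the path and the container_file_t type are assumptions:

# Untested sketch: pre-create minikube's state directory and label it so
# confined processes may access it (semanage needs policycoreutils-python-utils).
sudo mkdir -p /root/.minikube
sudo semanage fcontext -a -t container_file_t '/root/.minikube(/.*)?'
sudo restorecon -Rv /root/.minikube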

@liranmauda changed the title from "deploy_minikube.sh: Adding support for Centos8" to "deploy_minikube.sh: Adding support for Centos8 dbg" on Jul 5, 2020
@liranmauda (Contributor, Author):

Adding dbg to the subject to debug the failed deploy on Travis.

@liranmauda changed the title from "deploy_minikube.sh: Adding support for Centos8 dbg" back to "deploy_minikube.sh: Adding support for Centos8" on Jul 5, 2020
@liranmauda force-pushed the liran-update-minikube-script branch from 9860f67 to ca28bcc on July 5, 2020 13:11
@liranmauda marked this pull request as ready for review on July 5, 2020 13:14
@liranmauda (Contributor, Author):

@nixpanic @mykaul can you have a look?

@nixpanic (Contributor) left a comment:

You should probably put kubectl in /usr/bin like minikube.

When running in a Vagrant VM (as user 'vagrant'), kubectl does not work:

[vagrant@localhost vagrant]$ kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[vagrant@localhost vagrant]$ sudo /usr/local/bin/kubectl cluster-info
Kubernetes master is running at https://192.168.121.2:8443
KubeDNS is running at https://192.168.121.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

There is a /root/.kube directory, but not one in /home/vagrant.

This does not need to be a blocker in case everything is intended to be run as root.
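The missing per-user kubeconfig can be worked around with the usual kubeadm-style copy, sketched here as a possible fix rather than something the PR does:

# Possible workaround (not part of this PR): give the non-root user its own
# kubeconfig so kubectl stops falling back to localhost:8080.
mkdir -p "$HOME/.kube"
sudo cp /root/.kube/config "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"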

@nixpanic (Contributor) left a comment:

For now, we will assume we are running as root.

With that being the case, things look good, but you'll have to place kubectl in a location where sudo kubectl works on CentOS (not /usr/local/bin).
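The likely culprit is sudo's secure_path, which on CentOS typically omits /usr/local/bin; the exact default is an assumption, so check the local sudoers first:

# sudo resolves commands only from secure_path; inspect it, then install
# kubectl (assumed to be in the current directory) somewhere it covers.
sudo grep -R secure_path /etc/sudoers /etc/sudoers.d 2>/dev/null
sudo install -m 0755 kubectl /usr/bin/kubectl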

@liranmauda force-pushed the liran-update-minikube-script branch from ca28bcc to 4e7fefc on July 6, 2020 09:14
deploy_minikube.sh:
- Adding support for Centos8
- Moving minikube and kubectl from /usr/local/bin to /usr/bin
@liranmauda force-pushed the liran-update-minikube-script branch from 4e7fefc to a8a792e on July 6, 2020 09:29
@nixpanic (Contributor) left a comment:

Thanks for the corrections, it works for me in a Vagrant VM 👍

@liranmauda requested a review from jackyalbo on July 6, 2020 12:33
@liranmauda merged commit 51b6613 into noobaa:master on Jul 6, 2020
@liranmauda deleted the liran-update-minikube-script branch on October 27, 2020 14:36