This repository has been archived by the owner on Aug 6, 2021. It is now read-only.

after stopping and starting again cluster kyma does not start #3

Closed
valentinvieriu opened this issue Jul 16, 2020 · 2 comments · Fixed by #12
Labels
question Further information is requested

Comments

@valentinvieriu
Contributor

As the cluster consumes resources, I've tried to stop it when not needed using k3d stop -n kyma, and to start it again with k3d start -n kyma.
Even after the whole cluster stabilises following the start (no pods in the CrashLoopBackOff state), trying to log in to the Console gives:
(screenshot of the Console login error attached)
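
For reference, the same sequence as commands, with a check that the pods have settled; the kubectl command is an added illustration, not part of the original report:

k3d stop -n kyma                      # stop the cluster while it is not needed
k3d start -n kyma                     # start it again later
kubectl get pods --all-namespaces     # wait until no pod is stuck in CrashLoopBackOff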

@pbochynski
Collaborator

pbochynski commented Jul 17, 2020

You need to re-apply the CoreDNS patch. Set all the environment variables (the export section at the beginning of the script) and patch the CoreDNS ConfigMap again after k3d start -n kyma.

export KUBECONFIG="$(k3d get-kubeconfig -n='kyma')"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /k3d-registry)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"
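
The coredns-patch.tpl template itself is not shown in this thread. A plausible sketch, assuming it only adds a NodeHosts entry mapping the in-cluster registry hostname to the registry container's IP (the exact contents and the registry.localhost hostname are assumptions):

# Hypothetical sketch of coredns-patch.tpl - the real template ships with the Kyma script.
cat >coredns-patch.tpl <<'EOF'
data:
  NodeHosts: |
    REGISTRY_IP registry.localhost
EOF

The sed line above then substitutes the concrete IP before the ConfigMap is patched.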

Related issue: k3d-io/k3d#229

@anishj0shi added the question (Further information is requested) label on Jul 24, 2020
@felipekunzler

Note that as of k3d v3.0 the workaround above failed for me. I used this instead:

export KUBECONFIG="$(k3d kubeconfig merge kyma --switch-context)"
REGISTRY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' /registry.localhost)
sed "s/REGISTRY_IP/$REGISTRY_IP/" coredns-patch.tpl >coredns-patch.yaml
kubectl -n kube-system patch cm coredns --patch "$(cat coredns-patch.yaml)"
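
A quick sanity check (my suggestion, not mentioned in the thread) is to confirm that the registry IP now shows up in the CoreDNS ConfigMap:

kubectl -n kube-system get cm coredns -o yaml | grep "$REGISTRY_IP"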
