[kibana] initContainer: configure-kibana-token: Back-off restarting failed container: #1714
fix proposal: #1715
Relates to #1679 (comment)
Yes, that's what I'm trying to balance. From my memory, we already had some issues with multiple replicas for Kibana, and it wasn't really supported. However, I can't find any reference to that in the old GitHub tickets, so I'm not sure whether we can stick to that and merge your PR, or whether we need to find a way to address multiple replicas.
If we need to handle multiple replicas/pods, there are different ways to do it: for example, we can run a job on a pre-install hook, then we can mount the secret into the Kibana container directly.
Indeed, I think the best solution is to use a pre-install job that creates the token and stores it in a secret. Then mount the secret into all Kibana pods, and finally remove the token + secret in a post-delete job. I was already able to test the secret creation from a pod using the Kubernetes API. Now I'm trying to write a Node.js script to do all the pre-install steps.
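The pre-install steps above can be sketched as a small shell script. This is a hedged sketch only: the `create_token_and_secret` helper, the `$ES_URL` value, the sed-based JSON parsing, and the token/secret names are assumptions for illustration, not the chart's actual implementation.

```shell
#!/bin/sh
# Hypothetical pre-install job sketch: create a Kibana service token in
# Elasticsearch and store it in a Kubernetes secret that every Kibana pod
# can mount. Helper name, URL, and parsing approach are assumptions.
set -e

create_token_and_secret() {
  es_url="$1"; token_name="$2"; secret_name="$3"
  # Create a service token for the elastic/kibana service account.
  response=$(curl -sk -u "$ELASTIC_USERNAME:$ELASTIC_PASSWORD" \
    -XPOST "$es_url/_security/service/elastic/kibana/credential/token/$token_name")
  # Pull the token value out of the JSON response, which looks like
  # {"created":true,"token":{"name":"...","value":"AAEAA..."}}
  token_value=$(printf '%s' "$response" | sed -n 's/.*"value":"\([^"]*\)".*/\1/p')
  # Store it in a secret; Kibana pods mount this instead of each pod
  # trying to create its own token.
  kubectl create secret generic "$secret_name" \
    --from-literal="token=$token_value"
}
```

A post-delete job would then do the reverse: delete the secret and the service token.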
PR in progress => #1720 (still a few things to fix 🤞🏻)
Chart version:
8.4.1 (from the main branch at this point in history)
Kubernetes version:
1.21
Kubernetes provider:
GKE (Google Kubernetes Engine)
Describe the bug:
Kibana's initContainer configure-kibana-token keeps crashing forever.

Steps to reproduce:
The initContainer configure-kibana-token keeps crashing forever.

Expected behavior:
The new Kibana pod's initContainer configure-kibana-token completes successfully.

Provide logs and/or server output (if relevant):
configure-kibana-token initContainer logs before crashing:

This init container creates a token for Kibana's service account and saves it for Kibana's main container.
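As a rough illustration of that hand-off (hedged: the `save_token` helper, the output path, and the config key are assumptions for illustration, not the chart's actual code), the init container conceptually does something like:

```shell
#!/bin/sh
# Hypothetical sketch: persist the freshly created token to a file on a
# volume shared with the main Kibana container. The file path and helper
# name are assumptions; elasticsearch.serviceAccountToken is the kibana.yml
# setting that consumes such a token.
save_token() {
  token_value="$1"; out_file="$2"
  printf 'elasticsearch.serviceAccountToken: %s\n' "$token_value" > "$out_file"
}
```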
If I run a similar command from within the Elasticsearch pods:

```
curl -k -u $ELASTIC_USERNAME:$ELASTIC_PASSWORD -XPOST "https://localhost:9200/_security/service/elastic/kibana/credential/token/mykibana8-kibana?pretty"
```
I get the following response:

If I manually delete that token:

```
curl -k -u $ELASTIC_USERNAME:$ELASTIC_PASSWORD -XDELETE "https://localhost:9200/_security/service/elastic/kibana/credential/token/mykibana8-kibana?pretty"
```

then the pod can start. But again, if that pod dies, the next one gets stuck the same way.
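One way to break that crash loop is to make the token creation idempotent: delete any stale token left by a previous pod before creating a new one. This is a sketch of that idea only, not necessarily what the linked fix proposal does; the `recreate_token` helper and `$ES_URL` are hypothetical names.

```shell
#!/bin/sh
# Hypothetical idempotent variant: delete any leftover token from a dead
# pod first, so the subsequent POST no longer conflicts with an existing
# token of the same name.
recreate_token() {
  es_url="$1"; token_name="$2"
  base="$es_url/_security/service/elastic/kibana/credential/token/$token_name"
  # The delete may fail when no token exists yet; that is fine.
  curl -sk -u "$ELASTIC_USERNAME:$ELASTIC_PASSWORD" -XDELETE "$base" || true
  curl -sk -u "$ELASTIC_USERNAME:$ELASTIC_PASSWORD" -XPOST "$base"
}
```

The trade-off is that every restarted pod invalidates the previous token, which is one reason the pre-install-job approach discussed above scales better to multiple replicas.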