This repository has been archived by the owner on May 16, 2023. It is now read-only.

cannot deploy elasticsearch to kubernetes #775

Closed
carlmacdiarmada opened this issue Aug 10, 2020 · 12 comments
Labels
bug Something isn't working elasticsearch

Comments

@carlmacdiarmada

Hi,

I've been trying to deploy Elasticsearch to Kubernetes using Helm, but the test pod keeps failing with exit code 6 or 7 after about 10 seconds. When I get the logs, it's simply:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0curl: (6) Could not resolve host: elasticsearch-master; Unknown error

This is using the vanilla Helm chart from your repo, no changes, on AKS and on an on-prem Kubernetes cluster.
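For reference, the two exit codes being hit map to specific curl failures. Below is a purely illustrative helper (not part of the chart) that keeps them straight, plus commented-out cluster-side checks; the service name and namespace are assumed from the default chart values:

```shell
#!/usr/bin/env sh
# Illustrative helper mapping the exit codes seen in the test pod to their
# meanings, per curl(1). Not part of the chart.
explain_curl_exit() {
  case "$1" in
    6) echo "could not resolve host: DNS lookup for the service name failed" ;;
    7) echo "failed to connect: the name resolved but nothing accepted the connection" ;;
    *) echo "see 'man curl' for exit code $1" ;;
  esac
}

explain_curl_exit 6
explain_curl_exit 7

# Cluster-side checks (need a live cluster; names assume default chart values):
#   kubectl get svc elasticsearch-master -n default        # does the Service exist?
#   kubectl get endpoints elasticsearch-master -n default  # any ready pods behind it?
```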

Output of `kubectl describe` on the test pod:

Name:               elasticsearch-qnuwm-test
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-pool2-46846346-vmss000002/10.240.0.8
Start Time:         Mon, 10 Aug 2020 09:38:43 +0100
Labels:             <none>
Annotations:        helm.sh/hook=test-success
Status:             Failed
IP:                 10.244.3.6
Containers:
  elasticsearch-mqroo-test:
    Container ID:  docker://abcdd32eaff2263b17141b2fd7a1a2ed80801cd869b8472976e64c5a52658b49
    Image:         docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    Image ID:      docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:54b6af874560621c7791a0845359f2013b42592b18f38857b22fd18246f8afd1
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      #!/usr/bin/env bash -e
curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'

    State:          Terminated
      Reason:       Error
      Exit Code:    6
      Started:      Mon, 10 Aug 2020 09:38:44 +0100
      Finished:     Mon, 10 Aug 2020 09:39:05 +0100
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzz6d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-vzz6d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vzz6d
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                    Message
  ----    ------     ----  ----                                    -------
  Normal  Scheduled  44m   default-scheduler                       Successfully assigned default/elasticsearch-qnuwm-test to aks-pool2-46846346-vmss000002
  Normal  Pulled     44m   kubelet, aks-pool2-46846346-vmss000002  Container image "docker.elastic.co/elasticsearch/elasticsearch:7.8.1" already present on machine
  Normal  Created    44m   kubelet, aks-pool2-46846346-vmss000002  Created container elasticsearch-mqroo-test
  Normal  Started    44m   kubelet, aks-pool2-46846346-vmss000002  Started container elasticsearch-mqroo-test

Thanks

@Arnaud-Francois-Fausse

Same problem: the deployment hangs as shown below.
The log of the first pod is:
Error from server (BadRequest): container "elasticsearch" in pod "elasticsearch-master-0" is waiting to start: PodInitializing

Console output:

NAME: elasticsearch
LAST DEPLOYED: Wed Aug 12 04:57:11 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
elasticsearch-master-0  0/1    Pending  0         0s
elasticsearch-master-1  0/1    Pending  0         0s
elasticsearch-master-2  0/1    Pending  0         0s

==> v1/Service
NAME                           TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)            AGE
elasticsearch-master           ClusterIP  10.104.151.181  <none>       9200/TCP,9300/TCP  0s
elasticsearch-master-headless  ClusterIP  None            <none>       9200/TCP,9300/TCP  0s

==> v1/StatefulSet
NAME                  READY  AGE
elasticsearch-master  0/3    0s

==> v1beta1/PodDisruptionBudget
NAME                      MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
elasticsearch-master-pdb  N/A            1                0                    0s

NOTES:

  1. Watch all cluster members come up.
    $ kubectl get pods --namespace=default -l app=elasticsearch-master -w
  2. Test cluster health using Helm test.
    $ helm test elasticsearch --cleanup

sysadmin@potomak:~$ kubectl get pods --namespace=default -l app=elasticsearch-master -w
NAME                    READY  STATUS     RESTARTS  AGE
elasticsearch-master-0  0/1    Init:0/1   0         23s
elasticsearch-master-1  0/1    Init:0/1   0         23s
elasticsearch-master-2  0/1    Init:0/1   0         23s
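Pods stuck in `Init:0/1` usually mean the chart's init container has not completed. A couple of commands to inspect it might look like the following sketch; the init container name `configure-sysctl` matches the default chart values but may differ in a customized setup:

```shell
# Inspect why the pod is stuck in PodInitializing (look at Events at the bottom).
kubectl describe pod elasticsearch-master-0 --namespace=default

# Logs of the init container; "configure-sysctl" is the name used by the
# default chart values (it sets vm.max_map_count) -- adjust if customized.
kubectl logs elasticsearch-master-0 --namespace=default -c configure-sysctl
```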

@jmlrt
Member

jmlrt commented Aug 25, 2020

Hi @carlmacdiarmada, thanks for submitting this issue.

Can you provide more details about your environment by answering all the questions in the bug report template?

@jmlrt jmlrt added bug Something isn't working elasticsearch labels Aug 25, 2020
@giddel

giddel commented Sep 11, 2020

Same here. `helm test` does not take into account that ELASTIC_USER / ELASTIC_PASSWORD are set at deployment time and API access is secured, so it cannot authenticate :-(
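A manual version of the chart's health check with basic auth added might look like the sketch below. The secret name `elasticsearch-credentials`, the key `password`, and the user `elastic` are assumptions; substitute whatever your deployment actually created:

```shell
# Sketch only: the chart's health check with basic auth added.
# The secret name "elasticsearch-credentials" and the user "elastic" are
# assumptions -- adjust to how credentials were created in your deployment.
ES_PASSWORD=$(kubectl get secret elasticsearch-credentials \
  -o jsonpath='{.data.password}' | base64 -d)

curl -u "elastic:${ES_PASSWORD}" --fail \
  'http://elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
```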

@botelastic

botelastic bot commented Dec 10, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@jmlrt
Member

jmlrt commented Dec 10, 2020

Still valid 👍
I was able to reproduce it; the Helm test pod will need to be fixed to handle authentication.

@botelastic botelastic bot removed the triage/stale label Dec 10, 2020
@botelastic

botelastic bot commented Mar 10, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@jmlrt
Member

jmlrt commented Mar 17, 2021

still valid

@botelastic botelastic bot removed the triage/stale label Mar 17, 2021
@botelastic

botelastic bot commented Jun 15, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@jmlrt
Member

jmlrt commented Jun 23, 2021

still valid

@botelastic botelastic bot removed the triage/stale label Jun 23, 2021
@botelastic

botelastic bot commented Sep 21, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@jmlrt
Member

jmlrt commented Sep 21, 2021

still valid

@botelastic botelastic bot removed the triage/stale label Sep 21, 2021
@framsouza
Contributor

I assume this was fixed by #1384; feel free to reopen if that's not the case.
