
At times few Kube2iam pods ending up in pending state #348

Open
Pa1tiriveedhi opened this issue Sep 28, 2022 · 0 comments
After upgrading to EKS 1.22, we noticed that several kube2iam pods end up in a Pending state: the nodes run out of CPU as the other microservices are scheduled onto them first. We are not sure why the other microservices win the race onto the nodes before kube2iam gets initiated.

We have tried multiple options, including upgrading the kube2iam Helm chart and implementing a node taint (wish/nodetaint). Can someone please suggest solutions for us to look into further? Thanks.

Pods getting no headroom on the node:

kube2iam-78chr 0/1 Pending 0 105m

Event:
Warning FailedScheduling 115s (x97 over 95m) default-scheduler 0/26 nodes are available: 1 Insufficient cpu, 25 node(s) didn't match Pod's node affinity/selector.

The pod-level config is in the attachment:
kube2iam.docx
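Not from the original report, but a common mitigation for this symptom is to give the kube2iam DaemonSet a high scheduling priority so the scheduler preempts ordinary workloads instead of leaving kube2iam Pending, and to keep its CPU request small. The sketch below assumes kube2iam runs as a DaemonSet in `kube-system`; the priority class name, image tag, and resource values are illustrative:

```yaml
# Hypothetical sketch: a high PriorityClass so kube2iam can preempt
# ordinary pods when a node is short on CPU.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: kube2iam-critical    # hypothetical name
value: 1000000
globalDefault: false
description: "Schedule kube2iam ahead of ordinary workloads"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kube2iam
  template:
    metadata:
      labels:
        app: kube2iam
    spec:
      priorityClassName: kube2iam-critical
      tolerations:
        - operator: Exists   # tolerate all node taints so the pod can land anywhere
      containers:
        - name: kube2iam
          image: jtblin/kube2iam:latest   # pin a real tag in practice
          resources:
            requests:
              cpu: 10m       # small request reduces "Insufficient cpu" failures
              memory: 32Mi
```

Note that the event message also says 25 of 26 nodes "didn't match Pod's node affinity/selector"; if a nodeSelector or affinity rule was added to the DaemonSet, it is worth checking that the target nodes actually carry the expected labels.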
