feat: retry generic controller startup even if its resources are not discovered #142

Closed
AmitKumarDas opened this issue May 21, 2020 · 1 comment · Fixed by #143

Assignees
AmitKumarDas

Labels
backward compatibility · config · gctl Generic Controller · released · usecase Can/Does Metac solve this usecase


AmitKumarDas commented May 21, 2020

Problem Statement: As a DevOps admin, I want to use metac for backward compatibility use cases. I want Metac to keep retrying discovery of CRDs & their respective custom resources that are not yet available in the Kubernetes cluster until certain other upgrade path(s) are complete. In some cases an upgrade might take a few hours to a few weeks or more. There may also be cases where some teams want to keep using older CRDs & custom resources even after upgrading their metac controllers.

On the whole, I want metac to understand new as well as old versions of resources, even when they are not yet available in the Kubernetes cluster.

@AmitKumarDas AmitKumarDas self-assigned this May 21, 2020
@AmitKumarDas AmitKumarDas added the config, gctl Generic Controller, usecase Can/Does Metac solve this usecase, and backward compatibility labels May 21, 2020
AmitKumarDas pushed a commit that referenced this issue May 21, 2020
… mode

This commit closes #142

A new boolean flag named retry-indefinitely-to-start is added that lets
metac retry indefinitely to start all configured generic controllers.
Retries will happen even when one or more configured Kubernetes resources
are not available in the Kubernetes cluster. This feature will help
teams run metac for backward & forward compatibility use cases.

Signed-off-by: AmitKumarDas <amit.das@mayadata.io>
AmitKumarDas pushed a commit that referenced this issue May 22, 2020
This commit closes #142

A new boolean flag named 'retry-indefinitely-to-start' is added that
lets metac retry indefinitely to start all the configured generic
controllers. When this flag is set to true, retries happen even when
one or more configured Kubernetes custom resource definitions are not
available in the cluster. Once these custom resource definitions are
available & discovered by metac, the retries stop completely.

When this flag is not set, which is also the default setting, the metac
binary panics (i.e. the metac container stops running) if any of the
configured controllers makes use of custom resources whose definitions
are not yet available in the cluster. The binary panics after exhausting
the configured timeout (which defaults to 30 minutes).

This feature helps teams manage the transient phases during controller
upgrades. One such scenario is when the upgraded controllers make use
of new custom resource definition(s) that are yet to be applied to
the cluster.

Signed-off-by: AmitKumarDas <amit.das@mayadata.io>
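
For illustration, here is a minimal sketch of how a startup loop gated by such a flag could look. This is not metac's actual implementation: the startGenericControllers function, the retry interval, and the 'start-timeout' flag name are assumptions made for the example; only the 'retry-indefinitely-to-start' flag name and the 30-minute default timeout come from the commit message above.

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

var (
	// Mirrors the 'retry-indefinitely-to-start' flag described in the
	// commit message above.
	retryIndefinitelyToStart = flag.Bool(
		"retry-indefinitely-to-start",
		false,
		"retry forever to start generic controllers whose resources are not yet discovered",
	)
	// Hypothetical flag for the startup timeout; only the 30 minute
	// default comes from the commit message.
	startTimeout = flag.Duration(
		"start-timeout",
		30*time.Minute,
		"give up (and panic) after this duration when retries are bounded",
	)
)

// startGenericControllers is a stand-in for metac's real startup logic.
// It returns an error while any configured resource is still undiscovered.
func startGenericControllers() error {
	// ... discovery & controller start would go here ...
	return nil
}

func main() {
	flag.Parse()

	deadline := time.Now().Add(*startTimeout)
	for {
		err := startGenericControllers()
		if err == nil {
			fmt.Println("all generic controllers started")
			return
		}
		if !*retryIndefinitelyToStart && time.Now().After(deadline) {
			// Default behaviour: crash the container once the timeout
			// is exhausted.
			panic(fmt.Sprintf("failed to start controllers within %s: %v", *startTimeout, err))
		}
		time.Sleep(5 * time.Second)
	}
}
```

In a Deployment manifest, such a flag would typically be passed as a container argument (e.g. --retry-indefinitely-to-start=true) alongside the rest of metac's arguments.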
AmitKumarDas pushed further commits that referenced this issue on May 26, 2020 with the same commit message as above.
AmitKumarDas commented:

🎉 This issue has been resolved in version 0.4.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
