Priority expander now relies on listers instead of direct watch API #1843
Conversation
/assign @MaciekPytel
func getPriorityExpandersConfigMapLister(kubeClient kube_client.Interface, stopChannel <-chan struct{},
	namespace string) v1lister.ConfigMapLister {
	// FIXME: how to be sure the reflector completed at least one run and switch to the line below?
@MaciekPytel I have a problem here: how do I make sure the reflector has completed at least one sync loop before I access the lister? I want to validate the configmap before starting.
If this can't be done easily, I'll probably have to give up on startup-time validation and assume the reflector will be ready by the time the first call to Best comes. But even that smells like a race condition.
The CA loop generally takes a few seconds before it gets to the expander, and the other listers must already have synced for a scale-up to have any chance of happening. Since you have a fallback strategy and the race is quite unlikely, I'm OK with merging this without logic to explicitly make sure the lister has initialized.
If you think you really need to make sure: I think the LastSyncResourceVersion() check you do below is unfortunately the best way (at least the best way I know of). I don't like creating the lister in here, though. This file is really not the place for all those client-go internals. If you really want to sync, I'd rather you created your own configmap lister interface in utils/kubernetes/listers.go (like we do for pod listers) and exposed a HasSynced() method.
	return nil, errors.ToAutoscalerError(errors.InternalError, err)
}
return priority.NewStrategy(initialPriorities, priorityChangesChan, autoscalingKubeClients.LogRecorder)
// TODO: how to get proper termination info? It seems other listers do the same here
Other listers seem to be doing the same; please make sure it's OK to leave the termination channel like that.
It's fine. Currently you can't change the expander at runtime and you can't stop the autoscaler main loop without killing the process, so we never need to stop listers.
And one important question here: is there any comprehensive doc around about how to use the client-go machinery?
I'm not aware of any such doc. I'm not an expert on client-go though; it may be worth asking directly on that repo or in the Kubernetes Slack sig-apimachinery channel. As far as I know, Kubernetes documentation is mostly limited to how to use Kubernetes itself; documentation for developing against it is minimal and the barrier to entry for contributing is quite high.
OK, I didn't want to leave the code there, just to make sure first how to solve this. In that case I'm in favor of not running the configmap validation at startup and leaving it until the first Best call.
ping @MaciekPytel
Sorry for the delay. I'm OK with what you proposed in #1843 (comment).
@MaciekPytel Done, the on-init config check is removed. Please review again.
One comment, otherwise LGTM (though the nature of the comment suggests you haven't actually run the code; if that's true, please test before merging).
// NewConfigMapListerForNamespace builds a configmap lister for the passed namespace (including all).
func NewConfigMapListerForNamespace(kubeClient client.Interface, stopchannel <-chan struct{},
	namespace string) v1lister.ConfigMapLister {
	listWatcher := cache.NewListWatchFromClient(kubeClient.AppsV1().RESTClient(), "configmaps", namespace, fields.Everything())
ConfigMap is in core/v1, not apps/v1. Does that work for you? It seems like it shouldn't.
Yeah, my bad. I actually ran the tests before removing the test code here; I was sure it was exactly the same as in Listers, but it wasn't. Fixing, and I'll re-run the tests once more. Thanks for catching it!
@MaciekPytel please check now
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: MaciekPytel.
This is a rework of #1801.
The changes are as follows:
Please review @MaciekPytel