simplify DeleteNode logic by removing an extra Mutex #3573

Merged

Conversation

@ysy2020 (Contributor) commented Oct 1, 2020

Previously, two mutexes were used in the logic for removing a node from a node pool. Now only one mutex is used, which simplifies the logic and can shorten the time it takes to remove a node, since a node no longer always has to wait a fixed length of time before being deleted.
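
A minimal Go sketch of the single-mutex pattern described above, assuming hypothetical NodeGroup and deleteNodesFromPool names rather than the provider's actual code:

```go
package sketch

import "sync"

// NodeGroup is a stand-in for the provider's node group type; only the
// locking pattern matters here.
type NodeGroup struct {
	// A single mutex now guards the whole cluster-update/delete path.
	clusterUpdateLock *sync.Mutex
}

// DeleteNodes takes the cluster-update lock directly. With the former
// second mutex removed, a deletion no longer waits out a fixed interval
// behind an extra lock before it can start.
func (ng *NodeGroup) DeleteNodes(names []string) error {
	ng.clusterUpdateLock.Lock()
	defer ng.clusterUpdateLock.Unlock()
	return deleteNodesFromPool(names)
}

// deleteNodesFromPool stands in for the real node-pool API call.
func deleteNodesFromPool(names []string) error {
	_ = names
	return nil
}
```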

@k8s-ci-robot added the cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA) and size/M (Denotes a PR that changes 30-99 lines, ignoring generated files) labels on Oct 1, 2020
@ysy2020 (Contributor, author) commented Oct 1, 2020

/assign @RainbowMango

@ysy2020 (Contributor, author) commented Oct 1, 2020

/assign @MaciekPytel

@ysy2020 (Contributor, author) commented Oct 5, 2020

Any comments?

@ysy2020 (Contributor, author) commented Oct 5, 2020

/assign @kevin-wangzefeng

@@ -185,7 +185,6 @@ func getAutoscaleNodePools(manager *huaweicloudCloudManager, opts config.Autosca
 }

 clusterUpdateLock := sync.Mutex{}
-deleteMux := sync.Mutex{}

 // Given our current implementation just support single node pool,

A Member commented on this diff:

No problem here, but I need to point out that clusterUpdateLock and deleteMux should be defined inside the loop, so that we can ensure the same mutex isn't shared between different node groups.

The Contributor Author replied:

Yeah you're right. I'll move the clusterUpdateLock inside the for loop below. Thank you for the comment!
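
A minimal sketch of this suggestion, continuing the hypothetical types from the sketch above; Pool and buildNodeGroups are illustrative names, not the provider's actual API:

```go
// buildNodeGroups creates one NodeGroup per node pool, defining the
// mutex inside the loop so that no two node groups ever share the
// same lock.
func buildNodeGroups(pools []Pool) []*NodeGroup {
	groups := make([]*NodeGroup, 0, len(pools))
	for range pools {
		// Defined inside the loop: a fresh mutex per node group.
		clusterUpdateLock := sync.Mutex{}
		groups = append(groups, &NodeGroup{
			clusterUpdateLock: &clusterUpdateLock,
		})
	}
	return groups
}

// Pool is a stand-in for a node-pool description.
type Pool struct{ Name string }
```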

@RainbowMango (Member) commented:

/lgtm
/approve

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Oct 9, 2020

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on Oct 9, 2020
@k8s-ci-robot merged commit e017121 into kubernetes:master on Oct 9, 2020