Remove artificial pod creation/update/delete rate limiting #2128
@@ -75,7 +76,8 @@ steps:
       action: start
       checkIfPodsAreUpdated: {{$CHECK_IF_PODS_ARE_UPDATED}}
       labelSelector: group = load
-      operationTimeout: 15m
+      operationTimeout: {{$operationTimeout}}
+      exactOperationTimeout: true
I think we could move exactOperationTimeout to be a CL2 flag/env that changes how WaitForPods works for the whole test, instead of for particular measurements.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: marseel, mborsz. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing …
/test pull-perf-tests-clusterloader2-e2e-gce-scale-performance-manual
/hold cancel
It looks like the change guarding doesn't work: the generated config leaks global100qps: https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/perf-tests/2128/pull-perf-tests-clusterloader2/1562701334413578240/artifacts/generatedConfig_load.yaml
@@ -168,7 +169,13 @@ steps:
     params:
       actionName: "create"
       namespaces: {{$namespaces}}
+    {{if .RATE_LIMIT_POD_CREATION}}
This should be `$RATE_LIMIT_POD_CREATION`.
Right now, the test tries to spread out pod operations to generate the expected load on the control plane. This throttling was introduced when the master could handle at most 10 pods per second and would overload at higher throughput.
Now it can handle a throughput of 100 pods per second (we have at least a few manual runs with this PR; the metrics look comparable to regular runs).
Let's remove the artificial throttling.
The feature is controlled by CL2_RATE_LIMIT_POD_CREATION (default: true) to keep the current default behavior.
We will gradually switch all tests to false and then remove the env var.
/assign @marseel