
service_creation_latency: limit number of calls, exit on success #2314

Merged · 2 commits · Sep 19, 2023

Conversation

cezarygerard (Contributor) commented Sep 8, 2023

/kind cleanup

@k8s-ci-robot added the kind/cleanup, cncf-cla: yes, and size/XS labels on Sep 8, 2023
aojea (Member) commented Sep 18, 2023

/assign @aojea

@@ -393,6 +393,7 @@ func (p *pingChecker) run() {
 			success++
 			if success == pingChecks {
 				p.creationTimes.Set(key, phaseName(reachabilityPhase, p.svc.Spec.Type), time.Now())
+				return
aojea (Member) commented Sep 18, 2023

I don't think it is necessarily wrong to assert 10 times, maybe it is indeed excessive, but the main problem here is that this does an exec per command.

Another flaw in this code is that if no IPs are obtained, this will be considered a success.

It seems the code in updateObject waits until an ingress.IP is created.

Oh
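
[Editor's note] For illustration, a minimal, self-contained sketch of the empty-IP flaw described above. The checkOnce and ping helpers are hypothetical stand-ins, not the measurement's actual API; the point is that with zero iterations of the range loop, err stays nil and the attempt is counted as a success.

package main

import "fmt"

// checkOnce mirrors the shape of the original loop: curl every IP and report
// success if no call returned an error. With an empty slice the loop body
// never runs, err stays nil, and the function reports success anyway.
func checkOnce(ips []string, ping func(ip string) error) bool {
	var err error
	for _, ip := range ips {
		if err = ping(ip); err != nil {
			break
		}
	}
	return err == nil
}

func main() {
	ping := func(ip string) error { return fmt.Errorf("unreachable: %s", ip) }
	fmt.Println(checkOnce(nil, ping))                  // true: empty list counted as success
	fmt.Println(checkOnce([]string{"10.0.0.1"}, ping)) // false: a real failure is detected
}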

Member commented

oh, so this never returned before 🤦

aojea (Member) commented Sep 18, 2023

oh, it does return, see lines 354-356

https://github.com/kubernetes/perf-tests/pull/2314/files#diff-89087237f4615aee550b7d157c898b60525acb6063585068ce42884480131616R354-R356

			if _, exists := p.creationTimes.Get(key, phaseName(reachabilityPhase, p.svc.Spec.Type)); exists {
				return
			}

cezarygerard (Contributor, Author) commented

We should get rid of those lines in favor of the simple return here:

			if success == pingChecks {
				p.creationTimes.Set(key, phaseName(reachabilityPhase, p.svc.Spec.Type), time.Now())
				return
			}

aojea (Member) commented Sep 18, 2023

Agreed, it is confusing to do the check at the beginning if we are explicitly setting it here ... unless there are multiple of these running in parallel? Do you know if that is even possible? Does the same service get checked several times?
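
[Editor's note] For context, a standalone sketch of the first-writer-wins behavior that the Get check at the top of the loop would provide if checkers for the same service ever did overlap. The recordOnce type and the key string are hypothetical, not the measurement's real creationTimes gatherer.

package main

import (
	"fmt"
	"sync"
	"time"
)

// recordOnce stands in for creationTimes guarded by a Get-before-Set check:
// only the first successful checker for a given key records a timestamp, so
// overlapping checkers cannot overwrite each other's result.
type recordOnce struct {
	mu    sync.Mutex
	times map[string]time.Time
}

func (r *recordOnce) record(key string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, exists := r.times[key]; exists {
		return false // reachability already recorded for this service
	}
	r.times[key] = time.Now()
	return true
}

func main() {
	r := &recordOnce{times: map[string]time.Time{}}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // simulate overlapping checkers for one service
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("checker %d recorded: %v\n", id, r.record("svc-1/reachability"))
		}(i)
	}
	wg.Wait()
}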

aojea (Member) commented Sep 18, 2023

Please see the suggestion below:

diff --git a/clusterloader2/pkg/measurement/common/service_creation_latency.go b/clusterloader2/pkg/measurement/common/service_creation_latency.go
index 6b69dedfb..8176851e0 100644
--- a/clusterloader2/pkg/measurement/common/service_creation_latency.go
+++ b/clusterloader2/pkg/measurement/common/service_creation_latency.go
@@ -345,7 +345,6 @@ func (p *pingChecker) run() {
                klog.Errorf("%s: meta key created error: %v", p.callerName, err)
                return
        }
-       success := 0
        for {
                select {
                case <-p.stopCh:
@@ -358,7 +357,6 @@ func (p *pingChecker) run() {
                        pod, err := execservice.GetPod()
                        if err != nil {
                                klog.Warningf("call to execservice.GetPod() ended with error: %v", err)
-                               success = 0
                                time.Sleep(pingBackoff)
                                continue
                        }
@@ -377,23 +375,23 @@ func (p *pingChecker) run() {
                                }
                                port = p.svc.Spec.Ports[0].Port
                        }
+                       if len(ips) == 0 {
+                               klog.Warningf("no ips found for service: %+v", p.svc)
+                               time.Sleep(pingBackoff)
+                               continue
+                       }
                        for _, ip := range ips {
-                               address := net.JoinHostPort(ip, fmt.Sprint(port))
-                               command := fmt.Sprintf("curl %s", address)
+                               command := fmt.Sprintf(`date; for i in $(seq 1 %d); do echo "$(date) Try: ${i}"; curl -g -q -s --max-time 15 --connect-timeout 1 %s; echo; done`, pingChecks, net.JoinHostPort(ip, fmt.Sprint(port)))
                                _, err = execservice.RunCommand(context.TODO(), pod, command)
                                if err != nil {
                                        break
                                }
                        }
                        if err != nil {
-                               success = 0
                                time.Sleep(pingBackoff)
                                continue
                        }
-                       success++
-                       if success == pingChecks {
-                               p.creationTimes.Set(key, phaseName(reachabilityPhase, p.svc.Spec.Type), time.Now())
-                       }
+                       p.creationTimes.Set(key, phaseName(reachabilityPhase, p.svc.Spec.Type), time.Now())
                }
        }
 }
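
[Editor's note] To make the tradeoff concrete, a small standalone program that prints the batched shell command the suggested diff would send in a single exec, replacing pingChecks separate kubectl exec round trips. The IP, port, and pingChecks values below are invented for illustration; in the measurement they come from the service spec and the pingChecks constant.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Invented example values.
	pingChecks := 10
	addr := net.JoinHostPort("10.96.0.10", fmt.Sprint(80))

	// Same format string as in the suggestion: one shell loop that curls the
	// service pingChecks times inside a single exec into the test pod.
	command := fmt.Sprintf(`date; for i in $(seq 1 %d); do echo "$(date) Try: ${i}"; curl -g -q -s --max-time 15 --connect-timeout 1 %s; echo; done`, pingChecks, addr)
	fmt.Println(command)
}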

aojea (Member) commented Sep 18, 2023

/assign @wojtek-t

Doing 3 checks instead of 10 does not sound bad, especially since each check is a new kubectl exec call, which is very expensive; an alternative is to do the multiple checks in one call to the pod, as in #2314 (comment).

The logic to return from the forever loop checks for the exit condition at the beginning of the loop; I do not know whether this plugin may check the same service in parallel and the check is needed to avoid overlap, see #2314 (comment).

wojtek-t (Member) commented
/lgtm
/approve

@k8s-ci-robot added the lgtm label on Sep 19, 2023
k8s-ci-robot (Contributor) commented
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cezarygerard, wojtek-t

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Sep 19, 2023
@k8s-ci-robot merged commit e353d7c into kubernetes:master on Sep 19, 2023
7 checks passed