usm: Refactor GoTLS monitor with new uprobe attacher #29309
base: main
Conversation
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: inv create-vm --pipeline-id=46280041 --os-family=ubuntu
Note: This applies to commit db488cb
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 521bc73
Regression Detector: ✅
Bounds Checks: ✅
No significant changes in experiment optimization goals
Confidence level: 90.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.76 | [+0.69, +0.82] | 1 | Logs |
➖ | idle | memory utilization | +0.60 | [+0.55, +0.65] | 1 | Logs |
➖ | basic_py_check | % cpu utilization | +0.48 | [-3.42, +4.37] | 1 | Logs |
➖ | idle_all_features | memory utilization | +0.33 | [+0.24, +0.42] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.21 | [-0.28, +0.71] | 1 | Logs |
➖ | otel_to_otel_logs | ingress throughput | +0.20 | [-0.60, +1.01] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.01 | [-0.32, +0.35] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.01 | [-0.18, +0.19] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | +0.00 | [-0.22, +0.22] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.10, +0.10] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.02 | [-0.26, +0.22] | 1 | Logs |
➖ | file_tree | memory utilization | -0.05 | [-0.18, +0.07] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.66 | [-1.39, +0.06] | 1 | Logs |
➖ | pycheck_lots_of_tags | % cpu utilization | -3.45 | [-6.93, +0.03] | 1 | Logs |
Bounds Checks Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | idle | memory_usage | 10/10 | |
Explanation
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we flag a change in performance as a "regression" -- a change worth investigating further -- only if all of the following criteria are true (see the sketch after this list):
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
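To make the decision rule concrete, here is a minimal Go sketch of those three criteria. The field names, the `isRegression` helper, and the example values are illustrative only; the 5.00% threshold and the 90% CI semantics come from the explanation above, not from the detector's actual code.

```go
package main

import "fmt"

// Experiment summarizes one A/B comparison as described above.
// Field names here are invented for the example, not the detector's schema.
type Experiment struct {
	Name      string
	DeltaMean float64 // Δ mean %, comparison variant minus baseline variant
	CILow     float64 // lower bound of the 90% CI on Δ mean %
	CIHigh    float64 // upper bound of the 90% CI on Δ mean %
	Erratic   bool    // marked "erratic" in the experiment's configuration
}

// isRegression applies the three criteria from the explanation:
// |Δ mean %| ≥ 5.00%, the 90% CI excludes zero, and the experiment
// is not configured as erratic.
func isRegression(e Experiment) bool {
	bigEnough := e.DeltaMean >= 5.0 || e.DeltaMean <= -5.0
	ciExcludesZero := e.CILow > 0 || e.CIHigh < 0
	return bigEnough && ciExcludesZero && !e.Erratic
}

func main() {
	// Example using the pycheck_lots_of_tags row from the table above:
	// Δ mean % = -3.45, CI = [-6.93, +0.03].
	e := Experiment{Name: "pycheck_lots_of_tags", DeltaMean: -3.45, CILow: -6.93, CIHigh: 0.03}
	fmt.Println(isRegression(e)) // false: |Δ| < 5% and the CI contains zero
}
```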
/trigger-ci --variable RUN_ALL_BUILDS=true --variable RUN_KITCHEN_TESTS=true --variable RUN_E2E_TESTS=on --variable RUN_UNIT_TESTS=on --variable RUN_KMT_TESTS=on
🚂 Gitlab pipeline started: pipeline #44284926
Force-pushed from 9d22d6f to ebd3dd5
[Fast Unit Tests Report] On pipeline 46280041 (CI Visibility). The following jobs did not run any unit tests:
Jobs:
If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help
Force-pushed from ea0c480 to 662673e
Force-pushed from 662673e to bef17bc
Force-pushed from 6b54dd9 to 116b805
Please deploy it to a large staging cluster for a day or two, so we can verify nothing was missed
```go
var _ uprobes.BinaryInspector = &GoTLSBinaryInspector{}

// Inspect extracts the metadata required to attach to a Go binary from the ELF file at the given path.
func (p *GoTLSBinaryInspector) Inspect(fpath utils.FilePath, requests []uprobes.SymbolRequest) (map[string]bininspect.FunctionMetadata, bool, error) {
```
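As an aside, the first line of the snippet is Go's standard compile-time interface assertion. A tiny self-contained illustration of the idiom, with all names invented for the example:

```go
package main

import "fmt"

// Greeter plays the role of uprobes.BinaryInspector in this example.
type Greeter interface{ Greet() string }

type English struct{}

func (English) Greet() string { return "hello" }

// This declaration compiles to nothing, but breaks the build if English
// ever stops satisfying Greeter -- the same trick as the var _ line above.
var _ Greeter = English{}

func main() { fmt.Println(English{}.Greet()) }
```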
Why do we need both a boolean and an error return value? They seem mutually exclusive.
The idea was to distinguish between the case where a binary is incompatible and can't be probed (e.g., an x86 binary on ARM) and actual errors reading/parsing the file. Although, to be honest, I'm not sure how useful that distinction is now, as in the end the only thing the UprobeAttacher does is log both errors, just with a different message. I think I can take that out in a separate PR and just use a single error return.
You can create (and return) a specific error for that use case.
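For instance, a minimal sketch of that suggestion using a sentinel error. The `ErrIncompatibleBinary` name and the `inspect` helper are hypothetical illustrations of the pattern, not the code this PR ends up with:

```go
package inspector

import (
	"errors"
	"fmt"
)

// ErrIncompatibleBinary is a hypothetical sentinel error marking binaries
// that cannot be probed at all (e.g., an x86 binary on an ARM host), as
// opposed to genuine failures reading or parsing the file.
var ErrIncompatibleBinary = errors.New("binary is not compatible with this inspector")

// inspect is an illustrative stand-in for the real Inspect method,
// returning a single error instead of a (bool, error) pair.
func inspect(path string) error {
	// ... open and parse the ELF file here ...
	compatible := false // placeholder for the real architecture check
	if !compatible {
		return fmt.Errorf("%s: %w", path, ErrIncompatibleBinary)
	}
	return nil
}

// Callers can then branch on the sentinel with errors.Is:
//
//	if err := inspect(path); errors.Is(err, ErrIncompatibleBinary) {
//		// log at debug level and skip the binary
//	} else if err != nil {
//		// a real read/parse failure worth surfacing
//	}
```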
Removed in a separate PR: #29990
```go
PerformInitialScan:             false,
EnablePeriodicScanNewProcesses: false,
```
@gjulianm why are those values set to false?
The previous GoTLS attacher had the same behavior AFAICT: it didn't do an initial scan of all running processes, nor did it periodically check for processes we had missed due to dropped events. It can be changed as needed.
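For context, a sketch of what flipping those flags could look like. Only the two field names are quoted from the PR; the `attacherConfig` stand-in type and everything around it are assumptions for illustration:

```go
package main

import "fmt"

// attacherConfig is an illustrative stand-in for the real attacher
// configuration; only the two field names below come from the snippet above.
type attacherConfig struct {
	PerformInitialScan             bool // scan processes already running at startup
	EnablePeriodicScanNewProcesses bool // periodically re-scan for processes missed due to dropped events
}

func main() {
	// Enabling both would make the attacher catch pre-existing processes
	// and recover from missed process events.
	cfg := attacherConfig{
		PerformInitialScan:             true,
		EnablePeriodicScanNewProcesses: true,
	}
	fmt.Printf("%+v\n", cfg)
}
```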
What does this PR do?
As a follow-up to #27663, this PR modifies the GoTLS monitor to use the new uprobe attacher.
Motivation
Simplify code and use a single type to manage uprobe attachments.
Additional Notes
Performance tests were run in the load-test environment, with no significant changes.
- soak_test: Dashboard, Resource usage, Accuracy
- processes: Dashboard, Resource usage
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes