
Performance Test changes for EE #8453

Merged: 19 commits into main from upstream-builder-healthchecks on Jul 11, 2023

Conversation
Conversation


@bewebi bewebi commented Jul 7, 2023

Description

  • Factor out and export BenchmarkConfig and GenerateBenchmarkDesc() for use in EE
  • Replace the benchmarkMatchers field in BenchmarkConfig with GhaMatchers and LocalMatchers
    • Add a GetMatchers() member function to BenchmarkConfig that returns the appropriate matchers depending on run context
  • Reduce log level in performance tests
  • Add WithHealthChecks() function to UpstreamBuilder

Context

These changes are designed to allow EE to reuse the performance test infrastructure developed for OSS.

The use of different benchmarking targets depending on the test environment is inspired by this implementation in EE.
Note that if an entry is missing matchers for the machine the tests are being run on, the experiment will still take place and its results will be logged; however, no assertions will be made and the entry will trivially pass.
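That "trivially pass" behavior can be sketched as follows. This is an illustrative stand-in, not the PR's actual helper — the function name and signature are invented for the example:

```go
package main

import "fmt"

// runBenchmarkEntry sketches the behavior described above: the experiment
// always runs and its result is always logged, but assertions only fire
// when the current environment actually has matchers configured.
func runBenchmarkEntry(name string, medianSeconds float64, matchers []func(float64) bool) (passed bool) {
	fmt.Printf("benchmark %q: median=%.3fs\n", name, medianSeconds) // always logged
	if len(matchers) == 0 {
		// No matchers for this machine: no assertions, entry trivially passes.
		return true
	}
	for _, m := range matchers {
		if !m(medianSeconds) {
			return false
		}
	}
	return true
}

func main() {
	// No matchers configured for this environment → logged but trivially passes.
	fmt.Println(runBenchmarkEntry("translate-upstreams", 1.8, nil))
}
```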

See solo-io/solo-projects#5157 for proposed use of these changes in EE

Log level is reduced in order to eliminate the noise of logging from each translation iteration.
Here is a run of the performance tests without all of the noisy logs.
Here is a run of the performance tests as of 50a22e4.

My local environment is much faster than GHA, hence the local targets that are 25x faster than the existing GHA targets.
Additionally, the existing GHA targets do not account for the lowered log level, which also improves performance.
These targets are (still) mainly intended for demonstration purposes and are not tied to customer requirements or other external expectations.

Checklist:

  • I included a concise, user-facing changelog (for details, see https://github.com/solo-io/go-utils/tree/main/changelogutils) which references the issue that is resolved.
  • If I updated APIs (our protos) or helm values, I ran make -B install-go-tools generated-code to ensure there will be no code diff
  • I followed guidelines laid out in the Gloo Edge contribution guide
  • I opened a draft PR or added the work in progress label if my PR is not ready for review
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works

@solo-changelog-bot

Issues linked to changelog:
https://github.com/solo-io/solo-projects/issues/5038

@github-actions github-actions bot added the keep pr updated signals bulldozer to keep pr up to date with base branch label Jul 7, 2023
// Tests are run as part of the "Nightly" action in a GHA using the default Linux runner
// More info on that machine can be found here: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources
// When developing new tests, users should manually run that action in order to test performance under the same parameters
// Results can then be found in the logs for that instance of the action
- var _ = Describe("Translation - Benchmarking Tests", decorators.Performance, Label(labels.Performance), func() {
+ var _ = Describe("Translation - Benchmarking Tests", Ordered, decorators.Performance, Label(labels.Performance), func() {
any reason this needs to be ordered?


BeforeAll and AfterAll can only be present in Ordered blocks. Since the tests are all in a Table and the benchmarking measurement locks the OS thread, I didn't really see a downside to this, whereas switching the log level in BeforeEach and AfterEach seemed like overkill and didn't really capture the intended logic.


Do all performance tests need to be Ordered, or just this one? If it's all of them, could we bake this into the Performance decorator so other devs don't have to remember it?


The requirement to be Ordered is purely because we're using BeforeAll and AfterAll.

@inFocus7 inFocus7 left a comment


Only one {nit / open question / thought}; other than that: reasonable PR and I'll :lgtm: afterward 👍🏼

test/helpers/benchmark.go (resolved)
projects/gloo/pkg/translator/performance_test.go (resolved)
test/helpers/benchmark.go (resolved)
test/helpers/benchmark.go (resolved)
test/helpers/benchmark.go (resolved)
sam-heilbron previously approved these changes Jul 10, 2023
test/helpers/benchmark.go (resolved)
sam-heilbron previously approved these changes Jul 11, 2023
@sam-heilbron sam-heilbron left a comment

Nice work! 🚀

@inFocus7 inFocus7 left a comment

small nit, happy to approve if we don't want to go through CI again

projects/gloo/pkg/translator/performance_test.go (resolved)
@inFocus7 inFocus7 left a comment
🚀

@soloio-bulldozer soloio-bulldozer bot merged commit e3277bd into main Jul 11, 2023
14 of 15 checks passed
@soloio-bulldozer soloio-bulldozer bot deleted the upstream-builder-healthchecks branch July 11, 2023 15:44