Performance Test changes for EE #8453
Conversation
Issues linked to changelog:
…-io/gloo into upstream-builder-healthchecks
```diff
// Tests are run as part of the "Nightly" action in a GHA using the default Linux runner
// More info on that machine can be found here: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources
// When developing new tests, users should manually run that action in order to test performance under the same parameters
// Results can then be found in the logs for that instance of the action
-var _ = Describe("Translation - Benchmarking Tests", decorators.Performance, Label(labels.Performance), func() {
+var _ = Describe("Translation - Benchmarking Tests", Ordered, decorators.Performance, Label(labels.Performance), func() {
```
any reason this needs to be ordered?
`BeforeAll` and `AfterAll` can only be present in Ordered blocks, and since the tests are all in a Table and the benchmarking measurement locks the OS thread, I didn't really see a downside to this, whereas switching the log level in `BeforeEach` and `AfterEach` seemed like overkill and didn't really capture the intended logic.
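To make the trade-off concrete, here is a minimal Ginkgo v2 sketch of the pattern under discussion; the log-level plumbing is a hypothetical stand-in for whatever the suite actually uses:

```go
package benchmarks_test

import (
	. "github.com/onsi/ginkgo/v2"
	"go.uber.org/zap/zapcore"
)

// Hypothetical log-level plumbing, standing in for the suite's real helpers.
var logLevel = zapcore.DebugLevel

func setLogLevel(l zapcore.Level) { logLevel = l }

// BeforeAll/AfterAll may only appear inside an Ordered container, so the
// container is marked Ordered and the log level is switched once around the
// whole table rather than per-entry in BeforeEach/AfterEach.
var _ = Describe("Translation - Benchmarking Tests", Ordered, func() {
	BeforeAll(func() {
		setLogLevel(zapcore.ErrorLevel) // silence per-iteration translation logs
	})

	AfterAll(func() {
		setLogLevel(zapcore.DebugLevel) // restore the previous level
	})

	DescribeTable("translation benchmarks",
		func(numUpstreams int) {
			// each entry's benchmark runs with the reduced log level
		},
		Entry("10 upstreams", 10),
	)
})
```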
Do all performance tests need to be Ordered, or just this one? If it's all of them, could we bake this into the Performance decorator, so other devs don't have to remember it?
The requirement to be Ordered is purely because we're using a `BeforeAll` and `AfterAll`.
Only one {nit / open question / thought}; other than that: reasonable PR and I'll :lgtm: afterward 👍🏼
Nice work! 🚀
small nit, happy to approve if we don't want to go through CI again
🚀
Description
- Exported `BenchmarkConfig` and `GenerateBenchmarkDesc()` for use in EE
- Replaced the `benchmarkMatchers` field in `BenchmarkConfig` with `GhaMatchers` and `LocalMatchers`
- Added a `GetMatchers()` function to `BenchmarkConfig` which returns the appropriate matchers depending on run context (see the sketch after this list)
- Added a `WithHealthChecks()` function to `UpstreamBuilder`
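Since the list above is terse, here is a minimal sketch of the shape those changes imply. The type, field, and method names come from the PR description; the run-context detection via the `GITHUB_ACTIONS` environment variable is an assumption for illustration, not necessarily what the suite actually does:

```go
package benchmarking

import (
	"os"

	"github.com/onsi/gomega/types"
)

// BenchmarkConfig holds separate matcher sets for the two environments the
// tests run in. Field and method names follow the PR description; the
// GITHUB_ACTIONS check is an assumed detection mechanism.
type BenchmarkConfig struct {
	GhaMatchers   []types.GomegaMatcher // targets for the GitHub Actions runner
	LocalMatchers []types.GomegaMatcher // targets for developer machines
}

// GetMatchers returns the matcher set appropriate for the current run context.
func (c BenchmarkConfig) GetMatchers() []types.GomegaMatcher {
	if os.Getenv("GITHUB_ACTIONS") == "true" {
		return c.GhaMatchers
	}
	return c.LocalMatchers
}
```

Keeping both matcher sets on the config lets OSS and EE share the same table entries while asserting against environment-appropriate targets.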
Context
These changes are designed to allow EE to reuse performance test infrastructure developed for OSS
The use of different targets for benchmarking depending on test running environment is inspired by this implementation in EE
Note that if an entry is missing matchers for the machine the tests are being run on, the experiment will still take place and results will be logged; however, no assertions will be made and the entry will trivially pass
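A hedged sketch of what that could look like at the assertion site, reusing the hypothetical `BenchmarkConfig` from above (the helper name and logging are illustrative):

```go
package benchmarking

import (
	"fmt"
	"time"

	. "github.com/onsi/gomega"
)

// assertBenchmark logs the measured result unconditionally, then applies
// whatever matchers exist for the current run context; if none are
// configured for this machine, the loop never executes and the entry
// trivially passes.
func assertBenchmark(median time.Duration, config BenchmarkConfig) {
	fmt.Printf("median translation time: %v\n", median) // results are always logged
	for _, matcher := range config.GetMatchers() {
		Expect(median).To(matcher) // assertions happen only when matchers exist
	}
}
```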
See solo-io/solo-projects#5157 for proposed use of these changes in EE
The log level is reduced in order to eliminate the noise from each translation iteration's logging
Here is a run of the performance tests without all of the noisy logs
Here is a run of the performance tests as of 50a22e4
My local environment is much faster than GHA, hence the local targets, which are 25x faster than the existing GHA targets
Additionally, the existing GHA targets do not account for the lowered log level, which also improves performance
These targets are (still) mainly intended for demonstration purposes and are not tied to customer requirements or other external expectations
Checklist:
- `make -B install-go-tools generated-code` to ensure there will be no code diff