Create Elasticsearch Reporters for indexing test results #20948

Open

LeeDr opened this issue Jul 18, 2018 · 6 comments
Labels: Team:QA, test

Comments

@LeeDr
Contributor

LeeDr commented Jul 18, 2018

Describe the feature:
We have an ongoing issue with flaky tests. While these are mostly UI tests, I think it would benefit us to have the capability to load all automated test results into an Elasticsearch instance, so we can drink our own champagne and use Kibana to quickly see the biggest issues.

Describe a specific use case for the feature:
Generally speaking, most test frameworks have configurable reporters that can log to the console, create jUnit test output, produce HTML reports, etc., and provide a mechanism for plugging in another reporter.

We can use this issue to define requirements and then individual PRs to implement the reporters.

Frameworks:

  • Functional Test Runner (jUnit results are uploaded now)
  • Elastic Stack Test Framework (ESTF)
  • integration-test framework
  • new Webdriver.IO framework

It would be great if we could collect a consistent set of fields in the reporter for each framework, but they may not all have the same data available. Maybe we just come up with the superset of what we'd like, and each framework can try to capture as much of that as possible.
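As a rough illustration only (not Kibana's actual code), here is what a pluggable Mocha-style reporter that bulk-indexes results into Elasticsearch might look like. The index name, the `ES_URL` environment variable, and the field handling are all assumptions:

```ts
// Hypothetical sketch: a Mocha reporter that buffers per-test documents
// and bulk-indexes them into Elasticsearch when the run ends.
import Mocha from 'mocha';
import { Client } from '@elastic/elasticsearch';

const TEST_RESULTS_INDEX = 'test-results'; // placeholder index name

export class EsReporter extends Mocha.reporters.Base {
  private docs: Array<Record<string, unknown>> = [];

  constructor(runner: Mocha.Runner) {
    super(runner);
    runner.on('pass', (test) => this.collect(test, 'Pass'));
    runner.on('fail', (test, err) => this.collect(test, 'Fail', err));
    runner.on('pending', (test) => this.collect(test, 'Skip'));
    // NOTE: a real reporter would need to ensure this finishes before exit
    runner.on('end', () => void this.flush());
  }

  private collect(test: Mocha.Test, result: string, err?: Error) {
    this.docs.push({
      timestamp: new Date().toISOString(),
      testname: test.fullTitle(), // every describe() level plus the test name
      suitename: test.parent?.fullTitle(),
      result,
      error: err?.stack,
      duration: result === 'Pass' ? test.duration : undefined, // only passes log duration
    });
  }

  private async flush() {
    const es = new Client({ node: process.env.ES_URL ?? 'http://localhost:9200' });
    await es.helpers.bulk({
      datasource: this.docs,
      onDocument: () => ({ index: { _index: TEST_RESULTS_INDEX } }),
    });
  }
}
```

Fields like version, branch, os, and browser would come from the environment or the CI job rather than the test runner itself.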

Mapping:

| name | type | comment |
| --- | --- | --- |
| timestamp | date | date/time of the execution of the test |
| version | string (or keyword?) | e.g. 6.4.0, 6.4.0-SNAPSHOT, or 6.3.2-0b85b411 |
| branch | keyword | e.g. 6.x |
| testname | string | made up of each level of describe plus the test name, like `kibana - settings app - creating and deleting default index - index pattern creation - should have index pattern in page header`? |
| suitename | string | each level of describe but not the test name? |
| result | keyword | Pass/Fail/Skip |
| error | text | blank if Pass or Skip, stack trace if Fail |
| screenshot | url | blank if Pass or Skip, link to the screenshot in an S3 bucket if Fail |
| hostname | keyword | could be the Jenkins worker |
| runId | number | we need some kind of job number to group a test run, like the Jenkins job # |
| jenkinsUrl | url | might as well store a link to the Jenkins job if we can |
| os | keyword | currently Jenkins runs CentOS and Ubuntu, but eventually we'll add Windows |
| browser | keyword | Chrome, Firefox, IE |
| duration | number | only passing tests log the duration of the test; we can use this to detect changes in performance over time |
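For reference, here is a minimal sketch of creating an index with this mapping via the JavaScript client. The index name is a placeholder, and since Elasticsearch has no dedicated url type, those columns are mapped as keyword:

```ts
import { Client } from '@elastic/elasticsearch';

async function createTestResultsIndex() {
  const es = new Client({ node: 'http://localhost:9200' });
  await es.indices.create({
    index: 'test-results', // placeholder name
    body: {
      mappings: {
        properties: {
          timestamp: { type: 'date' },
          version: { type: 'keyword' },    // keyword keeps e.g. 6.4.0-SNAPSHOT aggregatable
          branch: { type: 'keyword' },
          testname: { type: 'text' },
          suitename: { type: 'text' },
          result: { type: 'keyword' },     // Pass/Fail/Skip
          error: { type: 'text' },
          screenshot: { type: 'keyword' }, // S3 URL; ES has no url type
          hostname: { type: 'keyword' },
          runId: { type: 'long' },
          jenkinsUrl: { type: 'keyword' },
          os: { type: 'keyword' },
          browser: { type: 'keyword' },
          duration: { type: 'long' },      // ms; only present for passing tests
        },
      },
    },
  });
}
```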
@LeeDr added the test label Jul 18, 2018
@rashmivkulkarni
Contributor

  • screenshots on failure
  • index.html? (is it a good idea to capture the entire DOM on failure?)
  • any ignored methods, for example, disabled/skipped methods?

@snide added the Team:QA label Aug 7, 2018
@calidude25

This looks like an awesome idea, as many organizations have a litany of test frameworks.

@LeeDr
Contributor Author

LeeDr commented Apr 16, 2020

I just mentioned this topic to @dmlemeshko today in the context of gathering functional test durations for the purpose of detecting changes in performance.

For performance measurements we really only need the passing tests because the test duration isn't logged for failing tests (which makes sense). But for other purposes, having the failing tests, and maybe even the stack trace would be great.
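To make the "detect changes in performance" idea concrete, a query like the following (a sketch against the hypothetical mapping above, using the v7 JavaScript client) would chart average duration per day for a single test:

```ts
import { Client } from '@elastic/elasticsearch';

// Sketch: average duration per day for one test, to spot performance drift.
async function durationTrend(testname: string) {
  const es = new Client({ node: 'http://localhost:9200' });
  const { body } = await es.search({
    index: 'test-results', // placeholder index name
    body: {
      size: 0,
      query: {
        bool: {
          filter: [
            { term: { result: 'Pass' } }, // only passing tests record a duration
            { match: { testname } },
          ],
        },
      },
      aggs: {
        per_day: {
          date_histogram: { field: 'timestamp', calendar_interval: 'day' },
          aggs: { avg_duration: { avg: { field: 'duration' } } },
        },
      },
    },
  });
  return body.aggregations.per_day.buckets;
}
```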

@LeeDr
Contributor Author

LeeDr commented Apr 16, 2020

@wayneseymour this is VERY similar to what you were looking into.

Along this topic, Jenkins CI jobs are already loading failed test data into build-stats cluster. We could make a change to also load passing test data into build-stats with the duration?

@spalger @brianseeders what do you think about that ^ ?

Right now the failure-* index in build-stats has several different products. But for project: kibana could we also load passing tests and their duration?

One question would be, do we want to collect the passing test duration on all PR CI jobs, or only a small subset of daily/hourly builds?

@spalger
Contributor

spalger commented Apr 16, 2020

I don't think we will be able to update runbld to index successful builds. But I also think we would benefit from reworking the structure of the data that's stored in ES to include things like links to the builds/pipeline steps that generated the report, which might be possible if we take control of indexing the results in the junit files (especially if @brianseeders is able to get independent workspaces for each worker figured out).

@brianseeders
Contributor

#62515 - this PR adds functionality to the FTR to output start/stop timestamps, durations, and success/fail to a JSON file after test execution. The data is broken down by each individual test file. Nothing is really done with the data as part of this PR besides sticking it in a file.

#62253 - In this messy draft PR, I'm taking the metrics output by the previous PR and uploading them to GCS. I then download them at the start of a CI job, and use them to automatically create an optimized functional test plan, which eliminates the need for ciGroups to be defined. The functional tests currently run in about 1h 10m this way.

After this work is finished up, the plan is to start indexing the data in ES (which cluster is undecided) and outputting information to PR comments if a PR slows a suite down too much. I haven't really thought too much about the indexing part yet, but I would like to become less dependent on runbld overall.
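For anyone curious how recorded durations can drive an automatic test plan, a simple approach is greedy longest-first packing of test files into N parallel groups. This is only an illustration of the idea, not the actual implementation in #62253, and the metrics shape is assumed:

```ts
// Assumed shape of the per-file metrics described above
interface FileMetric {
  file: string;
  duration: number; // ms from the previous run
}

// Greedy longest-processing-time packing: sort files longest-first and always
// assign the next file to the group with the smallest running total.
function planGroups(metrics: FileMetric[], groupCount: number): string[][] {
  const groups = Array.from({ length: groupCount }, () => ({
    total: 0,
    files: [] as string[],
  }));
  for (const m of [...metrics].sort((a, b) => b.duration - a.duration)) {
    const lightest = groups.reduce((min, g) => (g.total < min.total ? g : min));
    lightest.files.push(m.file);
    lightest.total += m.duration;
  }
  return groups.map((g) => g.files);
}
```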
