
Use pytest hooks for test execution instead of "scraping" the JUnit XML results. #12973

Closed
karrtikr opened this issue Jul 15, 2020 · 12 comments
Labels
area-testing feature-request Request for new features or functionality

Comments

@karrtikr

karrtikr commented Jul 15, 2020

I couldn't find an existing open issue; if we have one, feel free to close this one against it.

Note that we already use hooks for test discovery; this issue is for test execution. The output from executing tests is not stable, so the assumptions we make when scraping it can break things. (E.g. #12403, pytest-dev/pytest#7463, #7265)

API details: pytest-dev/pytest#7463 (comment)

Similar issue for unittest framework: #10119
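For illustration, the hook-based approach already used for discovery can be sketched roughly as follows (a hypothetical standalone plugin, not the extension's actual code; the temp test file is just for demonstration). Collection hooks hand the plugin structured test items, so no output parsing is needed:

```python
import pathlib
import tempfile

import pytest


class DiscoveryPlugin:
    def __init__(self):
        self.node_ids = []

    def pytest_collection_modifyitems(self, session, config, items):
        # Each collected item carries a stable nodeid like "test_demo.py::test_ok".
        self.node_ids = [item.nodeid for item in items]


# Create a throwaway test file to discover.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "test_demo.py").write_text("def test_ok():\n    assert True\n")

plugin = DiscoveryPlugin()
pytest.main(["--collect-only", "-q", str(tmp)], plugins=[plugin])
print(plugin.node_ids)
```

The same pattern (an in-process plugin object passed to pytest.main) is what this issue proposes for execution results.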

@karrtikr karrtikr added feature-request Request for new features or functionality area-testing needs decision triage-needed Needs assignment to the proper sub-team labels Jul 15, 2020
@ghost ghost removed the triage-needed Needs assignment to the proper sub-team label Jul 15, 2020
@DonJayamanne

Here you go #1132
I thought we were already doing that today.

@karrtikr
Author

@DonJayamanne This one is for execution, not discovery. We already do that for discovery.

@ghost ghost added the triage-needed Needs assignment to the proper sub-team label Aug 26, 2020
@luabud luabud added the needs spike Label for issues that need investigation before they can be worked on. label Aug 26, 2020
@ghost ghost removed the triage-needed Needs assignment to the proper sub-team label Aug 26, 2020
@luabud
Member

luabud commented Aug 26, 2020

@karrtikr I believe we don't scrape the output from execution, do we?
cc @ericsnowcurrently

@karrtikr
Author

We do scrape the output from execution, which is something we probably shouldn't do.

@ericsnowcurrently
Member

Are you talking about what goes on in TestRunner.run() and its use of execObservable()?

Perhaps particularly relevant:

result.out.subscribe(
    (output) => {
        stdOut += output.out;
        // If the test runner python module is not installed we'll have something in stderr.
        // Hence track that separately and check at the end.
        if (output.source === 'stderr') {
            stdErr += output.out;
        }
        if (options.outChannel) {
            options.outChannel.append(output.out);
        }
    },
    reject,
    async () => {
        // If the test runner python module is not installed we'll have something in stderr.
        if (
            moduleName &&
            pythonExecutionServicePromise &&
            ErrorUtils.outputHasModuleNotInstalledError(moduleName, stdErr)
        ) {
            const pythonExecutionService = await pythonExecutionServicePromise;
            const isInstalled = await pythonExecutionService.isModuleInstalled(moduleName);
            if (!isInstalled) {
                return reject(new ModuleNotInstalledError(moduleName));
            }
        }
        resolve(stdOut);
    }
);

Regardless, it sounds like you have specific code in mind. Where is it?

@karrtikr
Author

I had initially encountered one particular place where the code scrapes testFunction.testFunction.traceback to figure out the test filePath:

if (testFunction.testFunction.traceback) {
    const fileMatches =
        testFunction.testFunction.traceback.match(/^((\.\.[\\\/])*.+\.py)\:(\d+)\:.*$/gim) || [];
    for (const fileDetailsMatch of fileMatches) {
        const fileDetails = fileDetailsMatch.split(':');
        let filePath = fileDetails[0];
        filePath = path.isAbsolute(filePath) ? filePath : path.resolve(rootDirectory, filePath);
        const fileUri = Uri.file(filePath);
And secondly, yes, I looked at what goes on in TestRunner.run() and saw that we execute the pytest module. We collect results in a JUnit XML file, which is then parsed here:

function getJunitResults(parserResult: any): TestSuiteResult | undefined {
    // This is the newer JUnit XML format (e.g. pytest 5.1 and later).
    const fullResults = parserResult as { testsuites: { testsuite: TestSuiteResult[] } };
    if (!fullResults.testsuites) {
        return (parserResult as { testsuite: TestSuiteResult }).testsuite;
    }
    const junitSuites = fullResults.testsuites.testsuite;
    if (!Array.isArray(junitSuites)) {
        throw Error('bad JUnit XML data');
    }
    if (junitSuites.length === 0) {
        return;
    }
    if (junitSuites.length > 1) {
        throw Error('got multiple XML results');
    }
    return junitSuites[0];
}

We don't scrape terminal output, so it's not as bad, but the JUnit XML format can change as well; that happened to us once in #6990 and broke running tests for everybody.
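To illustrate the two layouts the parser above has to handle (a Python sketch with assumed, hand-written XML samples, mirroring the TypeScript): pytest 5.1 and later wrap results in a <testsuites> root, while older versions emit a bare <testsuite> root, and a parser written against only one shape silently breaks on the other.

```python
import xml.etree.ElementTree as ET

# Newer layout (pytest >= 5.1): a <testsuites> wrapper element.
NEW_FORMAT = """<testsuites><testsuite name="pytest" tests="1">
  <testcase classname="test_demo" name="test_ok" time="0.001"/>
</testsuite></testsuites>"""

# Older layout: a bare <testsuite> root.
OLD_FORMAT = """<testsuite name="pytest" tests="1">
  <testcase classname="test_demo" name="test_ok" time="0.001"/>
</testsuite>"""


def get_junit_suite(xml_text):
    root = ET.fromstring(xml_text)
    if root.tag == "testsuite":
        return root  # older format: the root is the suite itself
    suites = root.findall("testsuite")  # newer <testsuites> wrapper
    if len(suites) != 1:
        raise ValueError("expected exactly one <testsuite> element")
    return suites[0]


for text in (NEW_FORMAT, OLD_FORMAT):
    suite = get_junit_suite(text)
    print(suite.get("name"), suite.get("tests"))
```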

@ericsnowcurrently ericsnowcurrently changed the title Use pytest hooks for test execution instead of scraping the output Use pytest hooks for test execution instead of "scraping" the JUnit XML results. Aug 28, 2020
@nadavmisgav

nadavmisgav commented Sep 26, 2020

Is anyone working on this? This would also help show each test's result as soon as it finishes, instead of waiting for all tests to complete, right?

@karrtikr
Author

karrtikr commented Sep 28, 2020

If the pytest hooks provide that feature, then yes, but I'm not aware if they do.

@nadavmisgav

One can use the pytest_runtest_logreport hook to capture the result of each stage ("setup", "call", "teardown"). It receives a TestReport object that holds the outcome of the test along with other data such as captured logs and the test duration.
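A minimal sketch of that hook (a hypothetical plugin, not the extension's code; the temp test file is only for demonstration). The hook fires per test as it finishes, which is what would enable reporting results incrementally:

```python
import pathlib
import tempfile

import pytest


class ResultCollector:
    def __init__(self):
        self.results = {}

    def pytest_runtest_logreport(self, report):
        # The hook fires once per stage; report.when is "setup", "call",
        # or "teardown". The "call" stage carries the test's actual outcome.
        if report.when == "call":
            self.results[report.nodeid] = (report.outcome, report.duration)


# Create a throwaway test file with one passing and one failing test.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "test_demo.py").write_text(
    "def test_pass():\n    assert True\n\n"
    "def test_fail():\n    assert False\n"
)

collector = ResultCollector()
pytest.main(["-q", str(tmp)], plugins=[collector])
for node_id, (outcome, duration) in sorted(collector.results.items()):
    print(node_id, outcome)
```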

@karrtikr
Author

@eleanorjboyd Are we still scraping test outputs for execution?

@eleanorjboyd
Member

Yes, we are still scraping output; this will be fixed in the rewrite. I will reference this in the meta issue I am tracking here.

@eleanorjboyd
Member

Closing since this is fixed by the testing rewrite, thanks!

@github-actions github-actions bot removed the needs spike Label for issues that need investigation before they can be worked on. label Jul 14, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 14, 2023

6 participants