CI: Add code coverage #2410

Open
xlc opened this issue Nov 20, 2023 · 15 comments
Labels
T10-tests This PR/Issue is related to tests.

Comments

xlc (Contributor) commented Nov 20, 2023

So we know whether there are missing tests. It seems test coverage can be very poor for many pallets; e.g. #2405 did not break any existing tests in pallet-xcm.

bkchr (Member) commented Nov 20, 2023

This issue is really not actionable in any way. In most cases bugs highlight missing test cases; there is a natural connection between broken things and missing tests.

If you have a better issue with more ideas, please open it.

@bkchr bkchr closed this as completed Nov 20, 2023
xlc (Contributor, Author) commented Nov 20, 2023

I mean measuring test coverage, so we will know if there is code that's completely untested. It is not the most useful metric, but it's better than not having it.

xlc (Contributor, Author) commented Nov 20, 2023

For example, we have this in ORML: open-web3-stack/open-runtime-module-library#966 (comment)

xlc (Contributor, Author) commented Nov 22, 2023

@bkchr can you reopen this, unless you need me to provide more details?

bkchr (Member) commented Nov 23, 2023

I don't follow the assumption that showing test coverage will prevent the introduction of bugs. These metrics are quite obscure, especially since we also run things in wasm, which will not be tracked by this tool and will thus artificially lower the reported coverage.

xlc (Contributor, Author) commented Nov 23, 2023

100% coverage does not mean no bugs, but uncovered code always warrants an extra look. For example, for a PR that introduces a new pallet, if coverage is available, I will spend extra time reviewing the untested code and making sure there is a good reason it is not covered by unit tests. This is especially important for ensuring consistently high code quality in such a big repo. As you might have guessed, I am a bit disappointed with the quality of some of the code in this repo and want to find good ways to improve it without requiring me or you to review all the code.

bkchr (Member) commented Nov 23, 2023

Okay fine.

@bkchr bkchr reopened this Nov 23, 2023
@bkchr bkchr changed the title test coverages CI: Add code coverage Nov 23, 2023
bkchr (Member) commented Nov 23, 2023

@alvicsam maybe something you would like to work on? Not sure we should use codecov.

alvicsam (Contributor) commented

Sure, I'll take a look. Out of curiosity: why not codecov?

bkchr (Member) commented Nov 23, 2023

IDK, we can also use it. I'm not sure about their plans/costs, but if they are free for open-source projects, we can probably use them.
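
For context, a minimal sketch of what a Codecov-based coverage job could look like (purely illustrative: the job name, tool choice of cargo-llvm-cov, and action versions are assumptions, not the repo's actual pipeline):

```yaml
# Hypothetical GitHub Actions workflow — a sketch, not the project's config.
name: coverage
on: [pull_request]
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # cargo-llvm-cov wraps -Cinstrument-coverage and report generation
      - name: Install cargo-llvm-cov
        run: cargo install cargo-llvm-cov
      - name: Generate lcov report
        run: cargo llvm-cov --workspace --lcov --output-path lcov.info
      # Codecov's free open-source plan accepts lcov files via their action
      - name: Upload to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: lcov.info
          fail_ci_if_error: false
```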

mrcnski (Contributor) commented Nov 25, 2023

Ah, I think this would be useful. Not that we have to test every line, but it can help us determine what has been overlooked in the tests. In async backing, for example, it turned out we had whole functions that were entirely untested. More reading here.

There is just one problem: coverage chokes on wasm, so not all crates can use it. And FYI, we have some instructions in the Contribution doc about code coverage. They are pretty old, but worth checking because there are some workarounds for wasm. (They didn't work for me, but maybe it was something on my end.)

alvicsam (Contributor) commented Jan 9, 2024

I experimented in this PR. After I added the -Cinstrument-coverage rust flag, some tests fail:

  • polkadot-dispute-distribution tests::receive_rate_limit_is_enforced (example)
  • staging-node-cli::benchmark_machine_works benchmark_machine_fails_with_slow_hardware (example)
  • polkadot-node-core-dispute-coordinator tests::participation_requests_reprioritized_for_newly_included (this one sometimes passes and sometimes fails because it runs out of memory)

Also, with this flag the tests take 2x as long (about 1–1.5h).
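
A rough sketch of the kind of instrumented job described above, assuming grcov is used to turn the raw profiles into an lcov report (the job name, grcov usage, and timeout are illustrative assumptions, not the actual PR's configuration):

```yaml
# Hypothetical GitLab CI job — a sketch of the experiment, not the real pipeline.
test-coverage:
  stage: test
  variables:
    RUSTFLAGS: "-Cinstrument-coverage"
    # Each test process writes its own .profraw file
    LLVM_PROFILE_FILE: "target/coverage/%p-%m.profraw"
  script:
    - cargo test --workspace
    # Aggregate the .profraw files into an lcov report
    - grcov . --binary-path ./target/debug/ -s . -t lcov --ignore-not-existing -o lcov.info
  # Instrumented runs took roughly 2x as long in the experiment above
  timeout: 3h
```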

ggwpez (Member) commented Jan 9, 2024

I don't know if it makes sense to run the node tests for coverage. Does the coverage generator even realize that we are spawning another binary inside the test? I guess not.

xlc (Contributor, Author) commented Jan 9, 2024

Yeah, I don't think we really need coverage for the integration tests.

@ggwpez ggwpez added the T10-tests This PR/Issue is related to tests. label Mar 13, 2024
AndWeHaveAPlan (Contributor) commented

Some updates. These features are already on master:

  • Code coverage preparations #4387 — code coverage jobs, on GitLab for now, but they will soon move to GitHub with the rest of the pipeline
  • Conditional required checks #4544 — along with codecov flags and nextest filters, this allows us to only run tests related to changed code

So, if everyone is okay with this, we could try to move the tests to coverage jobs during the GHA migration.
