I checked the coverage presented on Codecov and noticed a few common mistakes in the coverage collection/configuration.
Look at https://app.codecov.io/gh/aio-libs/aiobotocore/commit/fd5f917524ce98ce5b48b062f2d78815cebe60d6/tree?search=tests%2F&displayType=list and you'll see that the test folder has a bunch of dead code. This ain't good — it means that some tests you might think run actually don't. This kind of thing happens when you copy a test but forget to rename the function, so the first one gets shadowed and never runs. Or some logic in the helpers is broken.
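The shadowing failure mode can be sketched in a few lines (the test name here is made up). Python silently rebinds the function name, so the first body becomes dead code that a runner like pytest will never collect — exactly the kind of red lines coverage on the test folder exposes:

```python
# Illustrative sketch: a copy-pasted test that was never renamed.

def test_roundtrip():
    return "first"   # imagine real assertions here -- this body never runs

def test_roundtrip():  # same name: silently shadows the definition above
    return "second"

print(test_roundtrip())  # only the second definition survives
```

Linters can catch this too (flake8 reports it as F811), but coverage on tests catches it even when linting is lax.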
Some bits might be expected to never run and that's fine; however, having those lines marked as red makes it harder to rely on the coverage data.
And so it's recommended to require 100% coverage on the test folder. This is achievable by using line exclusions and "no cover" / "no branch" pragmas that make coveragepy completely ignore the lines you don't want to matter, so that they don't contribute to the coverage measurements at all.
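As a sketch, the exclusions can live in `pyproject.toml` (section names per coveragepy's config support; the exact patterns below are generic examples, not this project's list — `exclude_also` assumes a reasonably recent coverage.py):

```toml
[tool.coverage.report]
fail_under = 100
# Regexes here are excluded from measurement entirely, in addition to
# the default "pragma: no cover":
exclude_also = [
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
```

For one-off lines, the inline `# pragma: no cover` and `# pragma: no branch` comments do the same job without touching the config.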
I also noticed some mentions of Python 2 in vendored tests and some possibly outdated branches.
So to fix this, it's important to review the uncovered tests/helpers and add pragmas where reasonable (including a justification comment for each). In some cases, you'll find that you can just remove excessive exception handling or helpers that are never used.
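A pragma-with-justification might look like this (the helper name is made up; the point is that every exclusion explains itself to future reviewers):

```python
from pathlib import Path
import tempfile

def remove_if_possible(path: Path) -> None:
    """Delete a file, tolerating it already being gone."""
    try:
        path.unlink()
    except FileNotFoundError:  # pragma: no cover -- benign race, not worth a test
        pass

# Demo: the happy path is covered; the excluded line needs no test.
tmp = Path(tempfile.mkdtemp()) / "scratch.txt"
tmp.write_text("data")
remove_if_possible(tmp)
print(tmp.exists())  # False
```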
Once you clean that up, configure Codecov to specifically require 100% on the tests and make the check required.
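A `codecov.yml` sketch for that (the status-check name `tests` is arbitrary; adjust the path to the actual layout):

```yaml
coverage:
  status:
    project:
      tests:
        target: 100%
        paths:
          - "tests/"
```

Making the resulting check required is then done in the repository's branch protection settings, not in Codecov itself.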
Going further, it's a good idea to apply similar treatment to the runtime code — although there it's better to actually cover the cases with tests, and pragma justifications should be rarer. One corner case you might find is branch coverage dependent on an OS or a Python version. For these, there are coveragepy plugins that provide custom pragmas letting you mark certain lines as not needing coverage under the respective conditions. This could be the second stage, though. At some point you might want to make use of covdefaults when the code base is ready.
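For the OS/Python-version case, a sketch assuming the third-party `coverage-conditional-plugin` (the rule name `py-gte-311` below is an arbitrary label that becomes a pragma; check the plugin's docs for the exact config shape):

```toml
[tool.coverage.run]
plugins = ["coverage_conditional_plugin"]

[tool.coverage.coverage_conditional_plugin.rules]
# Lines marked `# pragma: py-gte-311` are excluded when measured on >= 3.11:
py-gte-311 = "sys_version_info >= (3, 11)"
```

Then a branch only reachable on older interpreters, such as an `else:  # pragma: py-gte-311` fallback import, stops counting as uncovered when the suite runs under 3.11.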
Remember: this ain't about the coverage metric ripped out of context — this is about setting up the tooling to help you spot problems. It would also make Codecov reports more stable: the patch/project percentages will stop being affected by the number of lines in the project changing in PRs (which influences the ratio and contributes to visual confusion when it looks like you've got the entire patch covered but the metric ends up skewed).
> [!CAUTION]
> This is a popular topic of @webknjaz's rants 🤪
ctx:
- https://nedbatchelder.com/blog/200507/sometimes_the_automation_really_knows_best
- https://nedbatchelder.com/blog/200710/flaws_in_coverage_measurement
- https://nedbatchelder.com/blog/201106/running_coverage_on_your_tests
- https://nedbatchelder.com/blog/201908/dont_omit_tests_from_coverage
- https://nedbatchelder.com/blog/202008/you_should_include_your_tests_in_coverage