
Fix tests skipped during setup phase reporting with no result #102

Open
nprizal wants to merge 1 commit into main from te-5406-test-doesnt-have-result-when-skipped-during-setup-phase

Conversation


nprizal commented Apr 13, 2026

Context

This was originally reported as a test-engine-client issue where OnlyMutedFailures() returns false due to unrecognized test statuses. I traced the root cause back to the test collector itself.

When a test is skipped during the setup phase (e.g. via @pytest.mark.skip, @pytest.mark.skipif, or pytest.skip() in a fixture), the call phase never runs. The pytest_runtest_logreport hook fires with report.when="setup" and report.outcome="skipped", but the existing condition only captured setup failures, not setup skips:

# before
if report.when == 'call' or (report.when in ('setup', 'teardown') and report.failed):

This meant update_test_result was never called, so the test's result stayed None. When serialized to JSON, the result key was omitted entirely. The test-engine-client then treated this as "unknown", which broke the status accounting in RunResult.Status() and caused OnlyMutedFailures() to return false.
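The fixed condition can be sketched as a standalone predicate (`should_capture` is a hypothetical helper for illustration; the real plugin inlines the check, and the attribute names follow pytest's `TestReport`):

```python
from types import SimpleNamespace

def should_capture(report):
    """Decide whether a phase report should update the stored result."""
    if report.when == 'call':
        # Call-phase outcomes are always authoritative.
        return True
    if report.when in ('setup', 'teardown') and report.failed:
        # Errors raised outside the test body.
        return True
    if report.when == 'setup' and report.skipped:
        # The fix: a skip during setup means the call phase never runs,
        # so this report is the only chance to record "skipped".
        return True
    return False

# Simulated TestReport objects (real ones come from pytest_runtest_logreport)
setup_skip = SimpleNamespace(when='setup', failed=False, skipped=True)
setup_pass = SimpleNamespace(when='setup', failed=False, skipped=False)
```

With the old condition, `setup_skip` would fall through and leave the result unset; the added branch is what captures it.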

Changes

  • Added setup-skipped tests to the capture condition so they are correctly recorded as "skipped" in the JSON payload, with a corresponding unit test.

Test plan

  • New unit test test_pytest_runtest_logreport_skip_in_setup passes
  • Full test suite (92 tests) passes
  • Run pytest with the --json option on a test suite that includes @pytest.mark.skip tests and verify the JSON output contains "result": "skipped" for those tests

Tests skipped via @pytest.mark.skip, @pytest.mark.skipif, or
pytest.skip() in fixtures produce a setup-phase report with
outcome="skipped". The logreport condition only captured setup
failures, not setup skips, so the result stayed None and was
omitted from the JSON payload — displayed as "unknown" in the UI.

Also restructured the capture condition with clearer comments
explaining when each phase's outcome is meaningful.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
nprizal changed the title from "[TE-5406] Fix tests skipped during setup phase reporting with no result" to "Fix tests skipped during setup phase reporting with no result" Apr 13, 2026
nprizal self-assigned this Apr 13, 2026
nprizal marked this pull request as ready for review April 13, 2026 23:00
@jasonwbarnett

The fix here is correct — nice catch tracing this back from the test-engine-client. This shouldn't hold up the PR, but while reviewing I noticed a few related things worth flagging.


as_json() silently drops the result key when it's None

This is the real systemic issue. The fix closes one path to result=None, but payload.py:236-247 will silently omit the "result" key for any code path that leaves result unset:

if isinstance(self.result, TestResultPassed):
    attrs["result"] = "passed"
if isinstance(self.result, TestResultFailed):
    attrs["result"] = "failed"
    ...
if isinstance(self.result, TestResultSkipped):
    attrs["result"] = "skipped"
# None → key is just absent

A small safety net here would close off the entire class of bugs, not just this instance:

if "result" not in attrs:
    logger.warning("Test %s has no result set", self.name)
    attrs["result"] = "unknown"
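As a self-contained sketch of that fallback (the `TestData` class and its fields are illustrative stand-ins, not the collector's actual API):

```python
import logging

logger = logging.getLogger(__name__)

class TestData:
    """Minimal stand-in for the collector's per-test record."""

    def __init__(self, name, result=None):
        self.name = name
        self.result = result  # "passed" / "failed" / "skipped" / None

    def as_json(self):
        attrs = {"name": self.name}
        if self.result is not None:
            attrs["result"] = self.result
        # Safety net: never emit a payload with the key silently absent.
        if "result" not in attrs:
            logger.warning("Test %s has no result set", self.name)
            attrs["result"] = "unknown"
        return attrs
```

Any code path that forgets to set a result then produces a visible `"unknown"` plus a warning, rather than a missing key.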

Subtests can also leave result=None

Passing/skipped subtests intentionally don't set a result — the design relies on the parent's call-phase report arriving later. If that report is ever lost (plugin conflict, xdist race, crash), the test ships with result=None and hits the same downstream issue. The as_json() safety net above would cover this too.

finalize_test doesn't validate result before shipping

finalize_test() moves tests to the payload without checking if result was set. A warning log here would make future occurrences visible instead of silent.
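A hypothetical version of that check (the function signature and field names are assumptions for illustration, not the plugin's actual code):

```python
import logging
from types import SimpleNamespace

logger = logging.getLogger(__name__)

def finalize_test(test, payload):
    """Move a finished test into the outgoing payload, flagging gaps."""
    if test.result is None:
        # Make the missing result visible instead of shipping it silently.
        logger.warning("Test %s finalized with no result", test.name)
    payload.append(test)

payload = []
finalize_test(SimpleNamespace(name="test_x", result=None), payload)
```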

Minor: if/if/if in update_test_result

The three outcome checks are independent if statements rather than if/elif/elif. Works fine today since pytest outcomes are mutually exclusive, but elif would make that invariant explicit.
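A sketch of the `elif` form (the outcome flags follow pytest's `TestReport`; the surrounding function body is assumed):

```python
from types import SimpleNamespace

def update_test_result(test, report):
    """Record the outcome; elif encodes that outcomes are mutually exclusive."""
    if report.passed:
        test.result = "passed"
    elif report.failed:
        test.result = "failed"
    elif report.skipped:
        test.result = "skipped"

# Fake report/test objects for illustration
test = SimpleNamespace(result=None)
update_test_result(test, SimpleNamespace(passed=False, failed=False, skipped=True))
```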


None of these are blockers — the fix as-is solves the reported issue. The as_json() fallback is the one I'd most advocate for as a follow-up since it's a one-line safety net against the whole category.

nprizal requested a review from a team April 14, 2026 05:09
