Test Execution Metrics — Pass Rate, Coverage and Progress Tracking

When stakeholders ask “how is testing going?”, they are really asking three things: how much have you done, how much is left, and how does it look so far? Test execution metrics answer these questions with data instead of guesswork. They turn vague progress updates into precise, actionable reports that help the team make informed decisions about scope, schedule, and release readiness.

Core Test Execution Metrics — Calculating and Interpreting Them

These are the metrics that every QA engineer should be able to calculate, chart, and explain to non-technical stakeholders.

# Calculating core test execution metrics from real sprint data

sprint_data = {
    "total_planned":    120,
    "executed":         102,
    "not_executed":      18,
    "passed":            89,
    "failed":             9,
    "blocked":            4,
    "total_requirements": 25,
    "requirements_with_tests": 23,
}

# Helper: percentage with a zero-division guard (day one of a sprint,
# before any tests have run, "executed" can legitimately be zero)
def pct(part, whole):
    return (part / whole) * 100 if whole else 0.0

# 1. Test Execution Progress — how much of the plan has been run
execution_progress = pct(sprint_data["executed"], sprint_data["total_planned"])

# 2. Test Pass Rate — share of *executed* tests that passed
pass_rate = pct(sprint_data["passed"], sprint_data["executed"])

# 3. Test Fail Rate
fail_rate = pct(sprint_data["failed"], sprint_data["executed"])

# 4. Blocked Rate
blocked_rate = pct(sprint_data["blocked"], sprint_data["executed"])

# 5. Requirements Coverage — requirements with at least one test case
req_coverage = pct(sprint_data["requirements_with_tests"], sprint_data["total_requirements"])

# 6. Tests Not Executed (risk indicator)
not_executed_pct = pct(sprint_data["not_executed"], sprint_data["total_planned"])

print("Test Execution Metrics — Sprint Report")
print("=" * 55)
print(f"  Total Planned:         {sprint_data['total_planned']}")
print(f"  Executed:              {sprint_data['executed']}")
print(f"  Not Executed:          {sprint_data['not_executed']}")
print()
print(f"  Execution Progress:    {execution_progress:.1f}%")
print(f"  Pass Rate:             {pass_rate:.1f}%")
print(f"  Fail Rate:             {fail_rate:.1f}%")
print(f"  Blocked Rate:          {blocked_rate:.1f}%")
print(f"  Requirements Coverage: {req_coverage:.1f}%")
print(f"  Not Executed Risk:     {not_executed_pct:.1f}%")

# Interpret the results
print("\n\nInterpretation:")
if pass_rate >= 95:
    print("  Pass rate is excellent (>= 95%). Release candidate looks strong.")
elif pass_rate >= 85:
    print("  Pass rate is acceptable (85-95%). Review failed cases for severity.")
else:
    print("  Pass rate is below threshold (< 85%). Investigate root causes.")

if blocked_rate > 5:
    print(f"  WARNING: {blocked_rate:.1f}% blocked rate is high. Resolve blockers urgently.")

if not_executed_pct > 10:
    print(f"  WARNING: {not_executed_pct:.1f}% tests not executed. Risk of untested areas.")

Note: The pass rate is the most quoted metric but also the most misunderstood. A 95% pass rate means 95% of executed tests passed — it says nothing about the 18 tests that were never executed. If those 18 tests cover critical payment scenarios, the 95% pass rate creates false confidence. Always report pass rate alongside execution progress and the nature of unexecuted tests. A 95% pass rate with 100% execution is far more meaningful than 95% with only 60% execution.

Tip: Track execution progress as a daily burn-down chart. Plot “tests remaining” on the y-axis and “sprint day” on the x-axis. This visual shows whether testing is on pace to complete by sprint end. If the line flattens (stalls), you can see it immediately and escalate — rather than discovering on the last day that 30% of tests are still pending.
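The stall detection described in the tip can also be automated. A minimal sketch, assuming a hypothetical list of daily "tests remaining" counts (in practice these would come from your test management tool); the function name `detect_stall` and its thresholds are illustrative:

```python
# Hypothetical daily "tests remaining" counts for a 10-day sprint
remaining_by_day = [120, 108, 95, 95, 95, 70, 45, 20, 5, 0]

def detect_stall(remaining, window=2, min_drop=1):
    """Flag sprint days where execution has flattened: fewer than
    `min_drop` tests were completed over the previous `window` days."""
    stalled_days = []
    for day in range(window, len(remaining)):
        if remaining[day - window] - remaining[day] < min_drop:
            stalled_days.append(day + 1)  # report as a 1-based sprint day
    return stalled_days

print(f"Stalled on sprint day(s): {detect_stall(remaining_by_day)}")
```

Plotting `remaining_by_day` against sprint day gives the burn-down chart itself; the stall detector is the same signal in code form, useful for an automated daily alert.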

Warning: A high pass rate does not automatically mean high quality. If your test cases are too weak (testing only happy paths with trivial inputs), they will pass even when the software has significant defects. Pass rate is only meaningful when the test cases themselves are well-designed — covering positive, negative, and boundary scenarios as discussed in Chapter 4. Combining pass rate with defect metrics gives a more accurate picture of quality.
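One way to make that combination concrete is a simple release gate that checks defect severity alongside pass rate. A sketch with hypothetical numbers; the thresholds are illustrative, not a standard:

```python
# Hypothetical sprint results: high pass rate, but open critical defects
pass_rate = 95.0
open_defects = {"critical": 2, "major": 3, "minor": 8}

# A pass-rate threshold alone would wave this build through...
meets_pass_threshold = pass_rate >= 95
# ...but checking defect severity as well blocks the release.
release_ready = meets_pass_threshold and open_defects["critical"] == 0

print(f"Pass threshold met: {meets_pass_threshold}")
print(f"Release ready:      {release_ready}")
```

The gate encodes the warning above: the same 95% pass rate produces opposite release decisions depending on what the defect data says.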

Common Mistakes

Mistake 1 — Reporting pass rate without execution progress

❌ Wrong: “Our pass rate is 98%!” (but only 40 of 120 tests have been executed)

✅ Correct: “We have executed 40 of 120 tests (33% progress) with a 98% pass rate on executed tests. 80 tests remain, including critical payment and security scenarios.”
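The corrected wording can be generated directly from the raw counts, which keeps reports consistent from sprint to sprint. A small sketch; the function name `progress_summary` is illustrative:

```python
def progress_summary(planned: int, executed: int, passed: int) -> str:
    """Build a one-line status message that always pairs pass rate
    with execution progress, guarding against division by zero."""
    progress = executed / planned * 100 if planned else 0.0
    pass_rate = passed / executed * 100 if executed else 0.0
    return (f"We have executed {executed} of {planned} tests "
            f"({progress:.0f}% progress) with a {pass_rate:.0f}% pass rate "
            f"on executed tests. {planned - executed} tests remain.")

print(progress_summary(120, 40, 39))
```

Because the message is derived from the same numbers every time, a stakeholder can never receive the pass rate without the execution progress attached.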

Mistake 2 — Ignoring blocked tests in the report

❌ Wrong: Counting blocked tests as “not executed” and hoping they will be resolved before the report is due.

✅ Correct: Reporting blocked tests separately with the reason for each block (environment down, missing test data, dependency on another team). Blocked tests are a risk that stakeholders need to know about.
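Grouping blocked tests by reason takes only a few lines. A sketch with hypothetical test IDs and block reasons:

```python
from collections import Counter

# Hypothetical blocked tests, each with the reason for the block
blocked_tests = [
    ("TC-045", "environment down"),
    ("TC-046", "environment down"),
    ("TC-101", "missing test data"),
    ("TC-112", "dependency on another team"),
]

# Count how many tests each reason blocks, most frequent first
reason_counts = Counter(reason for _, reason in blocked_tests)
for reason, count in reason_counts.most_common():
    print(f"  {reason}: {count} test(s)")
```

Sorting by frequency surfaces the blocker worth escalating first; here, the environment outage blocks the most tests.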

🧠 Test Yourself

A sprint report shows: 100 tests planned, 60 executed, 57 passed, 3 failed. The pass rate is 95%. Should the team feel confident about release quality?