When stakeholders ask “how is testing going?”, they are really asking three things: how much have you done, how much is left, and how does it look so far? Test execution metrics answer these questions with data instead of guesswork. They turn vague progress updates into precise, actionable reports that help the team make informed decisions about scope, schedule, and release readiness.
Core Test Execution Metrics — Calculating and Interpreting Them
These are the metrics that every QA engineer should be able to calculate, chart, and explain to non-technical stakeholders.
```python
# Calculating core test execution metrics from real sprint data
sprint_data = {
    "total_planned": 120,
    "executed": 102,
    "not_executed": 18,
    "passed": 89,
    "failed": 9,
    "blocked": 4,
    "total_requirements": 25,
    "requirements_with_tests": 23,
}

# 1. Test Execution Progress
execution_progress = (sprint_data["executed"] / sprint_data["total_planned"]) * 100

# 2. Test Pass Rate
pass_rate = (sprint_data["passed"] / sprint_data["executed"]) * 100

# 3. Test Fail Rate
fail_rate = (sprint_data["failed"] / sprint_data["executed"]) * 100

# 4. Blocked Rate
blocked_rate = (sprint_data["blocked"] / sprint_data["executed"]) * 100

# 5. Requirements Coverage
req_coverage = (sprint_data["requirements_with_tests"] / sprint_data["total_requirements"]) * 100

# 6. Tests Not Executed (risk indicator)
not_executed_pct = (sprint_data["not_executed"] / sprint_data["total_planned"]) * 100

print("Test Execution Metrics — Sprint Report")
print("=" * 55)
print(f"  Total Planned:         {sprint_data['total_planned']}")
print(f"  Executed:              {sprint_data['executed']}")
print(f"  Not Executed:          {sprint_data['not_executed']}")
print()
print(f"  Execution Progress:    {execution_progress:.1f}%")
print(f"  Pass Rate:             {pass_rate:.1f}%")
print(f"  Fail Rate:             {fail_rate:.1f}%")
print(f"  Blocked Rate:          {blocked_rate:.1f}%")
print(f"  Requirements Coverage: {req_coverage:.1f}%")
print(f"  Not Executed Risk:     {not_executed_pct:.1f}%")

# Interpret the results
print("\n\nInterpretation:")
if pass_rate >= 95:
    print("  Pass rate is excellent (>= 95%). Release candidate looks strong.")
elif pass_rate >= 85:
    print("  Pass rate is acceptable (85-95%). Review failed cases for severity.")
else:
    print("  Pass rate is below threshold (< 85%). Investigate root causes.")

if blocked_rate > 5:
    print(f"  WARNING: {blocked_rate:.1f}% blocked rate is high. Resolve blockers urgently.")

if not_executed_pct > 10:
    print(f"  WARNING: {not_executed_pct:.1f}% tests not executed. Risk of untested areas.")
```
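The calculations above divide by `executed` and `total_planned`, which would crash on a sprint where nothing has run yet. If you report these numbers every sprint, it is worth wrapping them in a small reusable helper that guards against zero denominators. A minimal sketch (`summarize_execution` and `pct` are hypothetical names, not part of any library):

```python
def summarize_execution(data):
    """Compute the six core execution metrics as percentages.

    Returns 0.0 for any metric whose denominator is zero, so an
    empty or just-started run produces a report instead of a crash.
    """
    def pct(part, whole):
        # Guard: avoid ZeroDivisionError on empty runs
        return (part / whole) * 100 if whole else 0.0

    return {
        "execution_progress": pct(data["executed"], data["total_planned"]),
        "pass_rate": pct(data["passed"], data["executed"]),
        "fail_rate": pct(data["failed"], data["executed"]),
        "blocked_rate": pct(data["blocked"], data["executed"]),
        "req_coverage": pct(data["requirements_with_tests"],
                            data["total_requirements"]),
        "not_executed_pct": pct(data["not_executed"], data["total_planned"]),
    }

# Same sprint data as above
metrics = summarize_execution({
    "total_planned": 120, "executed": 102, "not_executed": 18,
    "passed": 89, "failed": 9, "blocked": 4,
    "total_requirements": 25, "requirements_with_tests": 23,
})
print(f"Pass rate: {metrics['pass_rate']:.1f}%")
```

The helper also makes it trivial to compute the same report for several sprints in a loop and chart the trend.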
Common Mistakes
Mistake 1 — Reporting pass rate without execution progress
❌ Wrong: “Our pass rate is 98%!” (but only 40 of 120 tests have been executed)
✅ Correct: “We have executed 40 of 120 tests (33% progress) with a 98% pass rate on executed tests. 80 tests remain, including critical payment and security scenarios.”
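The correct status line is easy to generate from the same raw counts, so there is no excuse for reporting pass rate in isolation. A minimal sketch, using counts that mirror the 40-of-120 example above (39 of the 40 executed tests passing):

```python
# Counts taken from the example scenario above
executed, total_planned, passed = 40, 120, 39

progress = executed / total_planned * 100   # how far along execution is
pass_rate = passed / executed * 100         # quality of what has run so far

status = (f"Executed {executed} of {total_planned} tests "
          f"({progress:.0f}% progress) with a {pass_rate:.1f}% pass rate "
          f"on executed tests; {total_planned - executed} tests remain.")
print(status)
```

The key design point: both denominators appear in the same sentence, so a stakeholder cannot read "98%" without also seeing "33% progress".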
Mistake 2 — Ignoring blocked tests in the report
❌ Wrong: Counting blocked tests as “not executed” and hoping they will be resolved before the report is due.
✅ Correct: Reporting blocked tests separately with the reason for each block (environment down, missing test data, dependency on another team). Blocked tests are a risk that stakeholders need to know about.
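A blocked-test section of the report can be as simple as a list of (test, reason) pairs rendered alongside the count. A sketch with invented test IDs and the block reasons named in the text above:

```python
# Hypothetical blocked tests; IDs are illustrative, reasons match
# the categories mentioned above
blocked_tests = [
    {"id": "TC-204", "reason": "test environment down"},
    {"id": "TC-311", "reason": "missing test data"},
    {"id": "TC-350", "reason": "dependency on another team"},
]

report = [f"Blocked: {len(blocked_tests)} test(s) — each needs an owner:"]
for t in blocked_tests:
    report.append(f"  {t['id']}: {t['reason']}")
print("\n".join(report))
```

Listing the reason per test turns "4 blocked" from a footnote into an actionable risk register: each line points at who can unblock it.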