Reading Performance Test Results

📋 Table of Contents
  1. Latency, Throughput and Error Metrics
  2. Common Mistakes

Running performance tests is only useful if you can correctly interpret the results. Understanding latency distributions, throughput, errors and resource usage is the foundation of any meaningful analysis.

Latency, Throughput and Error Metrics

Latency metrics show how long requests take (often via percentiles), throughput indicates how many requests are processed per second, and error metrics capture failed or slow responses. Looking at these together, rather than in isolation, helps you judge whether the system meets its targets under a given load.
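A minimal sketch of how these three metrics can be derived from the same set of raw request records; the sample data, field layout, and nearest-rank percentile method are illustrative assumptions, not from a real test run.

```python
# Deriving latency percentiles, throughput, and error rate from
# per-request records. Data is a made-up 10-second sample window.

def percentile(sorted_values, p):
    """Nearest-rank percentile on an already-sorted list."""
    k = max(0, min(len(sorted_values) - 1,
                   round(p / 100 * len(sorted_values)) - 1))
    return sorted_values[k]

# (latency_ms, succeeded) per request
records = [(180, True), (220, True), (250, True), (640, True),
           (900, False), (210, True), (1100, True), (230, True)]
duration_s = 10

latencies = sorted(ms for ms, _ in records)
p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
throughput = len(records) / duration_s
error_rate = sum(1 for _, ok in records if not ok) / len(records)

print(f"P50={p50} ms  P95={p95} ms  P99={p99} ms")
print(f"throughput={throughput:.1f} req/s  error rate={error_rate:.1%}")
```

Reading these together tells a fuller story: here throughput and P50 look fine, but the P99 and the single failed request point at the same slow tail.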

Example performance summary:
- Load: 200 virtual users, ~800 requests/second
- Latency: P50 = 220 ms, P95 = 650 ms, P99 = 1100 ms
- Error rate: 0.7% (mostly timeouts on /checkout)
- CPU: 70–80% on app servers
- DB connections: near pool limit during peak
Note: Always tie metrics back to the specific test conditions (load pattern, duration, environment) to avoid incorrect comparisons.
Tip: Plot latency percentiles over time, not just as single aggregates, so you can see how behaviour changes during ramp-up, peak and ramp-down.
Warning: A low error rate with very high latency may still violate SLAs; success is not just about avoiding errors.
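The warning above can be made concrete with a small check that gates on both tail latency and errors; the thresholds and the `meets_sla` helper are hypothetical, chosen only to illustrate the idea.

```python
# An SLA check must pass BOTH the tail-latency budget and the
# error-rate budget; thresholds here are illustrative.

def meets_sla(p95_ms, error_rate, max_p95_ms=500, max_error_rate=0.01):
    return p95_ms <= max_p95_ms and error_rate <= max_error_rate

# Low error rate but high tail latency still fails the SLA:
print(meets_sla(p95_ms=650, error_rate=0.002))  # False
# Both within budget passes:
print(meets_sla(p95_ms=420, error_rate=0.005))  # True
```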

Careful reading of these core metrics prevents misinterpretation and guides deeper investigation.

Common Mistakes

Mistake 1 โ€” Looking only at averages

This hides tail behaviour.

โŒ Wrong: Reporting only average response time.

โœ… Correct: Include P95 and P99 latencies to capture worst-case user experience.
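A quick numerical illustration of why this matters, using an invented distribution where 10% of requests are very slow:

```python
# An average can look healthy while the tail is bad.
latencies_ms = [100] * 90 + [3000] * 10  # 10% of requests take 3 s
n = len(latencies_ms)
ordered = sorted(latencies_ms)

avg = sum(ordered) / n                 # 390 ms: looks acceptable
p95 = ordered[int(0.95 * n) - 1]       # 3000 ms: 1 in 10 users waits 3 s
p99 = ordered[int(0.99 * n) - 1]       # 3000 ms

print(f"avg={avg:.0f} ms  P95={p95} ms  P99={p99} ms")
```

The 390 ms average hides the fact that every tenth user waits three full seconds.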

Mistake 2 โ€” Ignoring test conditions when comparing runs

This leads to false conclusions.

โŒ Wrong: Comparing runs with different loads or environments as if they were equivalent.

โœ… Correct: Compare like with like, or explicitly note differences in conditions.
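One way to enforce this is to tag every run with its conditions and check them before comparing results; the field names and the `comparable` helper below are illustrative assumptions.

```python
# Tag each run with its test conditions, and refuse a direct
# comparison when the conditions differ. Field names are made up.
run_a = {"rps": 800, "env": "staging", "duration_s": 600, "p95_ms": 650}
run_b = {"rps": 400, "env": "prod",    "duration_s": 300, "p95_ms": 480}

CONDITION_KEYS = ("rps", "env", "duration_s")

def comparable(a, b, keys=CONDITION_KEYS):
    return all(a[k] == b[k] for k in keys)

if comparable(run_a, run_b):
    print("P95 delta:", run_b["p95_ms"] - run_a["p95_ms"], "ms")
else:
    diffs = [k for k in CONDITION_KEYS if run_a[k] != run_b[k]]
    print("Not directly comparable; conditions differ:", diffs)
```

Here the lower P95 of `run_b` says nothing by itself: it was measured at half the load, in a different environment, over a shorter run.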

🧠 Test Yourself

Why is it important to consider both latency percentiles and error rates together?