Running performance tests is only useful if you can correctly interpret the results. Understanding latency distributions, throughput, error rates, and resource usage is the foundation of any meaningful analysis.
Latency, Throughput and Error Metrics
Latency metrics show how long requests take (often via percentiles), throughput indicates how many requests are processed per second, and error metrics capture failed or slow responses. Looking at these together, rather than in isolation, helps you judge whether the system meets its targets under a given load.
Example performance summary:
- Load: 200 virtual users, ~800 requests/second
- Latency: P50 = 220 ms, P95 = 650 ms, P99 = 1100 ms
- Error rate: 0.7% (mostly timeouts on /checkout)
- CPU: 70–80% on app servers
- DB connections: near pool limit during peak
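A summary like the one above is derived from raw per-request data. The sketch below shows one way to compute the headline numbers; the latency samples, error count, and measurement window are invented for illustration, and the percentile uses a simple nearest-rank method (real tools may interpolate differently):

```python
# Hypothetical per-request latencies (ms) from one measurement window.
latencies_ms = [180, 210, 220, 240, 260, 310, 420, 650, 880, 1100]
errors = 3          # failed requests, e.g. timeouts
total = len(latencies_ms) + errors
duration_s = 5.0    # wall-clock length of the window

def percentile(values, p):
    """Nearest-rank percentile over a sorted copy of `values`."""
    ordered = sorted(values)
    rank = round(p / 100 * (len(ordered) - 1))
    return ordered[rank]

print(f"throughput : {total / duration_s:.1f} req/s")
print(f"P50 latency: {percentile(latencies_ms, 50)} ms")
print(f"P95 latency: {percentile(latencies_ms, 95)} ms")
print(f"P99 latency: {percentile(latencies_ms, 99)} ms")
print(f"error rate : {errors / total:.1%}")
```

Reporting throughput, percentiles, and error rate from the same window keeps the numbers comparable with each other.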
Careful reading of these core metrics prevents misinterpretation and guides deeper investigation.
Common Mistakes
Mistake 1 – Looking only at averages
Averages hide tail behaviour: a small fraction of very slow requests can leave the mean looking healthy.
❌ Wrong: Reporting only average response time.
✅ Correct: Include P95 and P99 latencies to capture worst-case user experience.
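A minimal sketch with invented numbers shows how badly an average can mask the tail, assuming a workload where 5% of requests are slow:

```python
import statistics

# Illustrative latencies in ms (hypothetical data): 95% fast, 5% very slow.
latencies = [100] * 95 + [2000] * 5

mean_ms = statistics.mean(latencies)   # 195.0 ms, which looks acceptable
p99_ms = sorted(latencies)[98]         # 99th of 100 samples: 2000 ms

print(f"mean: {mean_ms} ms, P99: {p99_ms} ms")
```

Here the mean is roughly ten times lower than the P99, so a report showing only the average would miss that one in twenty users waits two full seconds.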
Mistake 2 – Ignoring test conditions when comparing runs
Comparing runs executed under different conditions leads to false conclusions about regressions or improvements.
❌ Wrong: Comparing runs with different loads or environments as if they were equivalent.
✅ Correct: Compare like with like, or explicitly note differences in conditions.
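One way to enforce this is to attach condition metadata to each run and refuse a direct comparison when the conditions differ. The run dictionaries and key names below are hypothetical, chosen only to illustrate the guard:

```python
# Hypothetical run metadata; the keys listed are the test conditions
# that must match before latency numbers are directly comparable.
def condition_diffs(run_a, run_b, keys=("virtual_users", "environment", "dataset")):
    """Return the condition keys on which two runs differ."""
    return [k for k in keys if run_a.get(k) != run_b.get(k)]

run1 = {"virtual_users": 200, "environment": "staging", "dataset": "v3", "p95_ms": 650}
run2 = {"virtual_users": 400, "environment": "staging", "dataset": "v3", "p95_ms": 900}

diffs = condition_diffs(run1, run2)
if diffs:
    print(f"Runs differ in {diffs}; P95 deltas are not directly comparable.")
else:
    print(f"Conditions match; P95 delta: {run2['p95_ms'] - run1['p95_ms']} ms")
```

In this example the second run used twice the load, so the higher P95 may simply reflect the heavier load rather than a regression; the guard forces that difference to be acknowledged.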