Analysing Results and Finding Bottlenecks

Collecting performance data is only the first step; the real value comes from analysing results and identifying bottlenecks. Effective analysis connects metrics to user experience and pinpoints where improvements will have the most impact.

Key Metrics and Dashboards

Important metrics include response time percentiles (such as p50, p95, p99), throughput (requests per second), error rates, and resource utilisation (CPU, memory, disk, network). Dashboards that correlate these metrics over time let you see how the system behaves as load changes.
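
As a minimal sketch of computing these summaries (the record fields "latency_ms" and "ok" are illustrative, not a specific tool's output format), the key numbers can be derived directly from raw per-request results:

```python
# Minimal sketch: summarising a load-test run from raw per-request records.
# The record layout ("latency_ms", "ok") is hypothetical; adapt to your tool.

def percentile(sorted_values, pct):
    """Nearest-rank percentile over a pre-sorted list."""
    index = max(0, round(pct / 100 * len(sorted_values)) - 1)
    return sorted_values[index]

def summarise(records, duration_seconds):
    latencies = sorted(r["latency_ms"] for r in records)
    errors = sum(1 for r in records if not r["ok"])
    return {
        "p50_ms": percentile(latencies, 50),
        "p95_ms": percentile(latencies, 95),
        "p99_ms": percentile(latencies, 99),
        "throughput_rps": len(records) / duration_seconds,
        "error_rate": errors / len(records),
    }

# Example: four requests observed over a two-second window.
records = [
    {"latency_ms": 40, "ok": True},
    {"latency_ms": 55, "ok": True},
    {"latency_ms": 300, "ok": False},
    {"latency_ms": 48, "ok": True},
]
print(summarise(records, duration_seconds=2))
```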

Example analysis steps

- Check overall error rate during the test.
- Examine response time percentiles at different load levels.
- Correlate spikes with CPU, memory, or I/O saturation.
- Look for specific endpoints that degrade faster than others (see the per-endpoint sketch after the notes below).

Note: Percentiles are often more informative than averages, because they reveal how slower requests affect users.

Tip: Capture test configurations (scenarios, versions, environments) alongside results so you can compare runs accurately over time.

Warning: Drawing conclusions from a single noisy run can be misleading; where possible, repeat key tests to confirm patterns.
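
As one hedged illustration of the last step (the data layout, endpoint names, and latency values are invented for the example), comparing a percentile per endpoint across two load levels highlights which paths degrade fastest:

```python
# Illustrative sketch: compare p95 latency per endpoint at low vs high load.
# The input shape (endpoint -> {load level -> latencies in ms}) is hypothetical.
from statistics import quantiles

results = {
    "/search":   {"low": [80, 95, 110, 120], "high": [300, 450, 700, 900]},
    "/checkout": {"low": [60, 70, 75, 90],   "high": [95, 110, 120, 140]},
}

for endpoint, by_load in results.items():
    p95_low = quantiles(by_load["low"], n=100)[94]    # 95th of 99 cut points
    p95_high = quantiles(by_load["high"], n=100)[94]
    ratio = p95_high / p95_low
    print(f"{endpoint}: p95 {p95_low:.0f} ms -> {p95_high:.0f} ms ({ratio:.1f}x)")
```

Here "/search" would stand out: its p95 grows several times faster under load than "/checkout", so it is the first place to drill in.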

Finding bottlenecks often requires drilling into specific layers: application code, databases, caches, external dependencies, or network links. Tools such as APM (Application Performance Monitoring), database query profilers, and log analysis platforms help you trace slow paths.
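
For example, a quick pass over access logs can surface the slowest paths before reaching for heavier tooling. This sketch assumes a simplified log line ending in the response time in milliseconds; real formats vary by server and the parsing must be adapted:

```python
# Hypothetical sketch: find the slowest endpoints from access-log lines.
# Assumes each line is "METHOD PATH STATUS LATENCY_MS"; adjust for your format.
from collections import defaultdict

log_lines = [
    "GET /api/orders 200 45",
    "GET /api/orders 200 620",
    "GET /api/users 200 30",
    "POST /api/orders 500 1200",
]

by_path = defaultdict(list)
for line in log_lines:
    method, path, status, latency_ms = line.split()
    by_path[path].append(int(latency_ms))

# Rank paths by their worst observed latency.
for path, latencies in sorted(by_path.items(), key=lambda kv: -max(kv[1])):
    print(f"{path}: max {max(latencies)} ms, n={len(latencies)}")
```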

From Symptoms to Root Causes

Start by identifying which operations are slow or error-prone, then investigate which components they depend on. Look for shared resources such as database tables or external APIs that may be causing systemic slowdown. Work with developers and SREs to validate hypotheses and plan fixes.
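
One simple way to make the shared-resource check concrete (the trace data and dependency names here are entirely illustrative) is to count how often each dependency appears across slow traces:

```python
# Hypothetical sketch: group slow operations by the dependencies they touch,
# to spot a shared resource behind a systemic slowdown.
from collections import Counter

# Illustrative trace data: (operation, dependencies touched, latency in ms).
slow_traces = [
    ("GET /orders",  ["orders_db", "auth_api"], 900),
    ("GET /history", ["orders_db"], 750),
    ("POST /orders", ["orders_db", "payment_api"], 1100),
]

dependency_hits = Counter()
for _operation, dependencies, _latency_ms in slow_traces:
    dependency_hits.update(dependencies)

# A dependency shared by most slow traces is a strong root-cause candidate.
for dependency, count in dependency_hits.most_common():
    print(f"{dependency}: appears in {count} of {len(slow_traces)} slow traces")
```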

Common Mistakes

Mistake 1: Focusing only on one metric

No single metric tells the entire story.

โŒ Wrong: Looking only at average response time.

✅ Correct: Consider percentiles, errors, and resource usage together.
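
A tiny numeric example (with made-up values) shows why: when most requests are fast but a few are very slow, the average looks healthy while the tail does not.

```python
# Illustrative: an average can hide a painful tail.
from statistics import mean

latencies_ms = [50] * 18 + [2000] * 2  # 10% of requests are very slow
print(f"average: {mean(latencies_ms):.0f} ms")  # 245 ms, looks tolerable
p95 = sorted(latencies_ms)[round(0.95 * len(latencies_ms)) - 1]
print(f"p95: {p95} ms")                         # 2000 ms, one in ten users suffers
```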

Mistake 2: Ignoring context when comparing runs

Changes in environment or configuration can skew comparisons.

โŒ Wrong: Comparing tests without noting differences in versions or setups.

✅ Correct: Keep records of conditions for each test run.
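
One lightweight way to do this (the field names and file path are just an example; adapt them to your tooling) is to save a small metadata record next to each result file:

```python
# Illustrative sketch: save run context alongside results for fair comparisons.
import json
from datetime import datetime, timezone

run_metadata = {
    "scenario": "checkout_peak_load",  # which test scenario ran
    "app_version": "1.4.2",            # build under test
    "environment": "staging",          # where it ran
    "virtual_users": 200,              # load configuration
    "started_at": datetime.now(timezone.utc).isoformat(),
}

with open("run_metadata.json", "w") as f:
    json.dump(run_metadata, f, indent=2)
```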

🧠 Test Yourself

How should teams approach analysing performance test results?