Production observability is not just for incident response; it is also a powerful input into test design. By studying real traffic patterns, common errors, and slow paths, you can align testing effort with what users actually experience.
Using Production Data to Guide Testing
Dashboards, logs, and traces reveal which endpoints are most heavily used, which errors happen most often, and where latency spikes occur. This helps you prioritise regression tests, exploratory charters, and performance checks around the most critical behaviours.
Examples of observability-informed test ideas:
- Create tests for the top N most-used endpoints.
- Design scenarios that reproduce frequent error codes.
- Add performance checks around slow queries or services.
- Explore user journeys that correlate with high drop-off.
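The first two ideas above can be sketched as a small log-analysis script. The record format and endpoint names here are illustrative assumptions, not a standard export format; in practice the data would come from your dashboard or log aggregator.

```python
from collections import Counter

# Hypothetical access-log records as (endpoint, status_code) pairs,
# e.g. exported from a log aggregator.
log_records = [
    ("/api/orders", 200), ("/api/orders", 200), ("/api/orders", 500),
    ("/api/login", 200), ("/api/login", 401), ("/api/login", 401),
    ("/api/profile", 200),
]

def top_endpoints(records, n=2):
    """Return the n most-used endpoints: candidates for regression tests."""
    return [ep for ep, _ in Counter(ep for ep, _ in records).most_common(n)]

def frequent_errors(records, n=2):
    """Return the most common error status codes (>= 400) to reproduce in tests."""
    errors = Counter(code for _, code in records if code >= 400)
    return [code for code, _ in errors.most_common(n)]

print(top_endpoints(log_records))    # ['/api/orders', '/api/login']
print(frequent_errors(log_records))  # [401, 500]
```

Even a rough count like this is often enough to reorder a regression suite so the highest-traffic flows run first.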
Over time, you can build a feedback loop where new incidents lead to new tests, and recurring patterns inform improvements in both observability and test coverage.
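One lightweight way to keep that feedback loop honest is to record, for each incident, whether a regression test was actually added, and flag the gaps. The record shape and field names below are assumptions for illustration.

```python
# Hypothetical incident records; 'regression_test' links an incident to the
# test added in response. None marks a gap in the feedback loop.
incidents = [
    {"id": "INC-101", "pattern": "timeout on checkout",
     "regression_test": "test_checkout_timeout"},
    {"id": "INC-102", "pattern": "duplicate payment",
     "regression_test": None},
]

def untested_incidents(incident_list):
    """Return ids of incidents that never resulted in a new test."""
    return [i["id"] for i in incident_list if i["regression_test"] is None]

print(untested_incidents(incidents))  # ['INC-102']
```

Running a check like this in review meetings turns "we should add a test for that" into a visible backlog item.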
Bridging the Gap Between Environments
Production behaviour also highlights where test environments differ too much from reality: for example, missing integrations, different data volumes, or disabled features. QA can use these insights to advocate for environment improvements or targeted production testing under controlled conditions.
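A rough way to make such gaps visible is to diff simple environment descriptors. The keys and values below are illustrative assumptions; real descriptors would be assembled from config files or infrastructure inventory.

```python
# Hypothetical environment descriptors, e.g. assembled from config files.
production = {"payment_gateway": "enabled", "search_index": "enabled",
              "row_count": 50_000_000}
staging = {"payment_gateway": "mocked", "search_index": "enabled",
           "row_count": 10_000}

def environment_gaps(prod, test_env):
    """Report settings where the test environment diverges from production."""
    return {key: (prod[key], test_env.get(key))
            for key in prod if test_env.get(key) != prod[key]}

print(environment_gaps(production, staging))
# {'payment_gateway': ('enabled', 'mocked'), 'row_count': (50000000, 10000)}
```

Each reported gap is either a known, accepted trade-off or an argument for improving the environment; the point is to make that decision explicit.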
Common Mistakes
Mistake 1: Designing tests without looking at real usage
Testing effort ends up misaligned with how the product is actually used.
❌ Wrong: Spending most time on rarely used paths while high-traffic flows are under-tested.
✅ Correct: Use production data on traffic and errors to shape test priorities.
Mistake 2: Treating incidents as isolated events
The same failure patterns tend to recur across releases.
❌ Wrong: Fixing issues without adding tests or improving signals.
✅ Correct: Turn each incident into new tests and observability improvements.