Performance thresholds and quality gates in pipelines turn test results into clear go/no-go signals. Instead of manually inspecting graphs, teams can codify rules that fail builds when performance degrades beyond agreed limits.
Using Thresholds as Pipeline Gates
With k6, thresholds directly control the process exit code: when any threshold fails, k6 exits non-zero and CI marks the job as failed. JMeter has no built-in equivalent, so teams typically add a post-processing step that parses the JTL results file and compares the metrics against agreed limits before deciding the job status.
// Example: k6 thresholds as quality gates
import http from 'k6/http';

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<800'], // fail the run if p95 latency reaches 800 ms
    http_req_failed: ['rate<0.01'],   // fail the run if the error rate reaches 1%
  },
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder endpoint for the example
}
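When this script runs in CI, a breached threshold makes k6 exit with a non-zero code, so the job fails without any extra scripting. For JMeter, the equivalent gate is a small post-processing script that reads the JTL file and exits non-zero when a limit is breached. Below is a minimal sketch (a hypothetical gate.js; it assumes CSV-format JTL output with default column headers and naively splits on commas, so a production version should use a real CSV parser):

// Example: JTL post-processing gate (hypothetical gate.js)
const fs = require('fs');

const lines = fs.readFileSync(process.argv[2], 'utf8').trim().split('\n');
const header = lines[0].split(',');
const elapsedIdx = header.indexOf('elapsed'); // response time in ms
const successIdx = header.indexOf('success'); // 'true' or 'false'

const samples = lines.slice(1).map((line) => line.split(','));
const durations = samples
  .map((cols) => Number(cols[elapsedIdx]))
  .sort((a, b) => a - b);
const p95 = durations[Math.floor(durations.length * 0.95)];
const errorRate =
  samples.filter((cols) => cols[successIdx] !== 'true').length / samples.length;

// Same limits as the k6 example: p95 under 800 ms, error rate under 1%
if (p95 >= 800 || errorRate >= 0.01) {
  console.error(`Gate failed: p95=${p95} ms, error rate=${(errorRate * 100).toFixed(2)}%`);
  process.exit(1); // non-zero exit fails the CI job
}
console.log(`Gate passed: p95=${p95} ms, error rate=${(errorRate * 100).toFixed(2)}%`);

In the pipeline, chain it after the test run, for example jmeter -n -t plan.jmx -l results.jtl followed by node gate.js results.jtl, so the job fails whenever the gate script does.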
Integrating with Broader Quality Gates
Performance gates can complement other CI checks such as unit tests, static analysis and code coverage. For example, a release pipeline may require all functional tests to pass, static analysis gates to be green and performance thresholds to be met before deployment (one way to wire this up is sketched after the list below).
Example release gate conditions:
- All automated tests pass (unit, API, UI)
- Quality gate in SonarQube: passed
- Performance smoke test: thresholds met
- No new critical security vulnerabilities
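As one way to express such a chain, here is a minimal GitHub Actions sketch; the job names, smoke.js script and deploy script are placeholders, and the same gating pattern applies to GitLab CI, Jenkins or any CI system that skips dependent jobs when an upstream job fails:

# Hypothetical release workflow: each gate is a job, deploy needs them all
name: release
on:
  push:
    branches: [main]
jobs:
  functional-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test                  # unit, API and UI suites
  perf-smoke:
    runs-on: ubuntu-latest
    needs: functional-tests                      # run only after functional tests pass
    steps:
      - uses: actions/checkout@v4
      # k6's exit code carries the threshold verdict
      - run: docker run --rm -i grafana/k6 run - <smoke.js
  deploy:
    runs-on: ubuntu-latest
    needs: [functional-tests, perf-smoke]        # deploy only when every gate is green
    steps:
      - run: ./scripts/deploy.sh                 # placeholder deploy step

Because each gate is its own job, a red performance smoke test blocks the deploy job outright rather than producing a warning someone has to notice.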
By treating performance limits as first-class gates, you ensure that regressions are surfaced and discussed before reaching production.
Common Mistakes
Mistake 1: Setting thresholds without baseline data
Thresholds picked without data produce noisy gates: builds fail for reasons unrelated to real regressions, and teams learn to ignore them.
❌ Wrong: Picking numbers that do not match current behaviour or hardware.
✅ Correct: Measure current performance first, then set thresholds slightly tighter than the baseline. For example, if the measured p95 is 650 ms, a limit of 800 ms leaves headroom for normal variance while still catching real regressions.
Mistake 2: Using too many fine-grained gates early on
Too many gates at once overwhelm teams with failures they cannot prioritise.
❌ Wrong: Blocking releases on many metrics before pipelines are stable.
✅ Correct: Start with a small set of impactful gates, such as p95 latency and error rate, and expand gradually.