Multi-Scenario Tests and Tag-Based Analysis

Many performance experiments involve multiple traffic patterns or user journeys running at once, and k6 scenarios plus tags make this manageable. You can run different flows in parallel and then analyse metrics per scenario or tag.

Combining Multiple Scenarios in One Script

By defining several scenarios with different executors and exec functions, you can simulate, for example, steady background traffic plus spikes for a specific feature. Tags on requests or custom metrics let you separate results by flow or business area.

export const options = {
  scenarios: {
    background_browse: {
      executor: 'constant-vus',
      vus: 30,
      duration: '10m',
      exec: 'browseFlow',
    },
    promo_spike: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1s',
      stages: [
        { target: 50, duration: '2m' },
        { target: 200, duration: '3m' },
      ],
      preAllocatedVUs: 50,
      maxVUs: 300,
      exec: 'checkoutFlow',
    },
  },
};

export function browseFlow() { /* ... */ }
export function checkoutFlow() { /* ... */ }

Note: Each scenario can have its own thresholds, tags and execution function, giving fine-grained control over how you model and evaluate different flows.
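As a sketch of per-scenario thresholds, you can filter a built-in metric by k6's automatic `scenario` tag; the scenario names below match the options block above, while the latency and error-rate targets are illustrative assumptions:

```javascript
// Sketch: per-scenario thresholds via the built-in `scenario` tag.
// Targets (400ms, 800ms, 1%) are example values, not recommendations.
export const options = {
  // ...scenarios as defined above...
  thresholds: {
    // Latency goals evaluated only against requests from each scenario
    'http_req_duration{scenario:background_browse}': ['p(95)<400'],
    'http_req_duration{scenario:promo_spike}': ['p(95)<800'],
    // Error-rate goal for the spike flow only
    'http_req_failed{scenario:promo_spike}': ['rate<0.01'],
  },
};
```

This lets the steady browse traffic and the promo spike fail or pass independently, rather than sharing one global threshold.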

Tag-Based Analysis of Results

k6 automatically tags metrics with scenario names, and you can add your own tags (such as journey or endpoint) to group results. In backends like Prometheus or InfluxDB, you can filter and aggregate metrics by these tags for targeted analysis.

import http from 'k6/http';

http.get(`${BASE_URL}/products`, { tags: { journey: 'browse' } });
http.post(`${BASE_URL}/checkout`, body, { tags: { journey: 'checkout' } });

Tip: Define a small, consistent set of tag keys and values so dashboards and alerts remain understandable.
Warning: Overusing highly cardinal tags (for example per-user IDs) can explode metric volume and slow down backends.
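One way to keep tags consistent and cardinality low is to route all tagging through a small helper that only accepts a fixed vocabulary; this is a sketch in plain JavaScript, and the journey names are assumptions for illustration:

```javascript
// Sketch: enforce a fixed, low-cardinality tag vocabulary.
// The allowed journey names below are examples, not a k6 convention.
const JOURNEYS = ['browse', 'checkout', 'search'];

function journeyTags(journey, extra = {}) {
  // Fail fast on unknown tags instead of silently creating new series
  if (!JOURNEYS.includes(journey)) {
    throw new Error(`Unknown journey tag: ${journey}`);
  }
  return { journey, ...extra };
}

// Usage inside a k6 script (sketch):
// http.get(`${BASE_URL}/products`, { tags: journeyTags('browse') });
```

Because the helper throws on unknown values, a typo like `journeyTags('chekout')` fails at test time instead of polluting your metrics backend with a new tag value.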

Multi-scenario testing and tag-based analysis help you see which journeys or features are struggling under load, instead of relying only on global averages.

Common Mistakes

Mistake 1: Running each journey in a completely separate script

Testing each flow in isolation hides interactions: a checkout spike may only degrade browsing when both flows hit the same system at the same time.

❌ Wrong: Testing flows only in isolation.

✅ Correct: Use multi-scenario scripts when interactions between flows matter.

Mistake 2: Using inconsistent or ad hoc tags

Inconsistent tag names make results hard to compare across runs, dashboards and teams.

❌ Wrong: Changing tag names frequently or using too many variations.

✅ Correct: Standardise tag usage across scripts and teams.

🧠 Test Yourself

How do scenarios and tags work together in k6?