Scaling k6 with Distributed or Cloud Setups
For large-scale tests, a single k6 instance may not generate enough load, or the client machine itself (CPU, memory, network sockets) may become the bottleneck before the target system does. Distributed and cloud execution options let you scale k6 horizontally and run tests closer to your users or infrastructure.
You can run k6 on multiple machines and coordinate them yourself or with orchestration tools like Kubernetes, or use a hosted service such as k6 Cloud, which handles distribution for you. Container images and infrastructure-as-code make it easier to provision consistent, reproducible load-generation environments.
# Example: running k6 in Docker (the image is now published as grafana/k6; loadimpact/k6 is the legacy name)
docker run --rm -i grafana/k6 run - < scripts/checkout.js
# Example: Kubernetes Job (conceptual; the script must be mounted into the pod, e.g. from a ConfigMap)
# apiVersion: batch/v1
# kind: Job
# metadata:
#   name: k6-checkout
# spec:
#   template:
#     spec:
#       restartPolicy: Never
#       containers:
#         - name: k6
#           image: grafana/k6
#           args: ["run", "/scripts/checkout.js"]
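If you coordinate machines by hand rather than through an orchestrator, k6's execution segments can split one test's workload deterministically across instances. A minimal sketch, assuming the same scripts/checkout.js is available on both machines:

```shell
# Each machine runs the same script but owns a different segment of the total load.
# Machine 1: first half of the workload
k6 run --execution-segment "0:1/2" --execution-segment-sequence "0,1/2,1" scripts/checkout.js
# Machine 2: second half of the workload
k6 run --execution-segment "1/2:1" --execution-segment-sequence "0,1/2,1" scripts/checkout.js
```

Each instance keeps its own metrics, so you must aggregate results yourself, for example by streaming both runs to a shared output backend.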
Distributed execution unlocks realistic, high-volume scenarios such as regional traffic patterns or global campaigns.
Common Mistakes
Mistake 1 – Scaling k6 clients before fixing script or environment issues
Scaling up only amplifies existing problems.
❌ Wrong: Running huge tests with unvalidated scripts.
✅ Correct: Fix correctness and stability at small scale first.
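The "small scale first" advice can be encoded directly in the script. A minimal smoke-test sketch (the VU count, iteration count, and threshold below are illustrative assumptions, not values from this guide):

```javascript
// Smoke-test settings: one virtual user, a handful of iterations,
// and a threshold that fails the run if more than 1% of requests error.
// In a real k6 script this object would be declared as `export const options`.
const options = {
  vus: 1,          // a single virtual user: we are testing correctness, not load
  iterations: 10,  // just enough iterations to exercise the full flow
  thresholds: {
    http_req_failed: ['rate<0.01'], // the same failure criteria you want at scale
  },
};
```

Only once a run like this passes cleanly is it worth scaling out.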
Mistake 2 – Ignoring network proximity and latency
Load generated far from your users adds artificial latency and skews results.
❌ Wrong: Running all load from a distant region when your users are local.
✅ Correct: Place load generators in regions that reflect real user locations when possible.
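With k6 Cloud, regional placement can be declared in the script itself via load-zone distribution; for self-managed setups you instead choose regions when provisioning the load generators. A sketch assuming k6 Cloud's ext.loadimpact options (the zone names and percentages are illustrative assumptions):

```javascript
// Split cloud-generated load 60/40 across two regions.
// Zone identifiers follow k6 Cloud's "provider:country:city" form;
// in a real k6 script this object would be declared as `export const options`.
const options = {
  ext: {
    loadimpact: {
      distribution: {
        usTraffic: { loadZone: 'amazon:us:ashburn', percent: 60 },
        euTraffic: { loadZone: 'amazon:ie:dublin', percent: 40 },
      },
    },
  },
};
```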