Bringing Docker into CI pipelines lets you run tests against realistic, containerised environments on every change. Instead of relying on fragile shared servers, you can spin up fresh containers in GitHub Actions or other CI tools as part of your jobs.
Running Dockerised Services in GitHub Actions
GitHub-hosted runners already include Docker, so you can build images and start compose stacks directly in workflow steps. A common pattern is to build or pull images, start the stack in the background and then run your API or UI tests against it.
# .github/workflows/docker-tests.yml
name: Dockerised Tests

on:
  pull_request:
    branches: [ main ]

jobs:
  docker-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd="pg_isready -U testuser"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install deps
        run: npm ci
      - name: Run API tests against Docker DB
        env:
          DATABASE_URL: postgres://testuser:testpass@localhost:5432/testdb
        run: npm test -- api
The services: section in GitHub Actions makes it easy to attach containers such as databases to your jobs without writing full compose files. For more complex setups, you can run docker compose up from a step instead of using the services shortcut.
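As a minimal sketch of that compose-based alternative (the file name, service name and test command here are assumptions, not part of the workflow above):

```yaml
# Hypothetical steps that replace the services: shortcut with a compose stack
steps:
  - uses: actions/checkout@v4
  - name: Start stack
    run: docker compose -f docker-compose.yml up -d --wait   # --wait blocks until healthchecks pass
  - name: Run tests
    run: npm test -- api
  - name: Tear down
    if: always()                                             # clean up even when tests fail
    run: docker compose -f docker-compose.yml down -v
```

The if: always() on teardown matters: without it, a failing test step leaves the stack running and skips cleanup.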
Common Mistakes
Mistake 1: Starting tests before containers are ready
This creates flaky failures.
❌ Wrong: Running tests immediately after the containers start.
✅ Correct: Use health checks, waits or readiness scripts so tests only run against fully initialised services.
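A readiness script can be as small as a retry loop around a probe command. This is a sketch: the wait_until helper and its timings are our own naming, and the commented pg_isready call simply mirrors the health check used in the workflow above.

```shell
#!/usr/bin/env bash
# wait_until: retry a readiness command until it succeeds or attempts run out.
# Usage: wait_until <max_attempts> <delay_seconds> <command> [args...]
wait_until() {
  local attempts=$1 delay=$2
  shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@" >/dev/null 2>&1; then
      return 0            # probe succeeded: service is ready
    fi
    sleep "$delay"
  done
  return 1                # gave up after all attempts
}

# Hypothetical CI usage (pg_isready ships with the Postgres client tools):
# wait_until 30 2 pg_isready -h localhost -p 5432 -U testuser
```

Returning a non-zero exit code on timeout means the CI step fails fast with a clear cause, instead of the test suite failing later with confusing connection errors.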
Mistake 2: Treating CI Docker config differently from local setups
This causes drift.
❌ Wrong: Maintaining completely separate configs for local and CI environments.
✅ Correct: Reuse the same compose files or images wherever possible, with minimal CI-specific overrides.
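One common way to keep those overrides minimal is a small compose override file layered on top of the base file. The file and service names below are illustrative, not taken from the workflow above:

```yaml
# docker-compose.ci.yml - CI-only overrides layered on top of docker-compose.yml
services:
  api:
    build:
      cache_from:
        - ghcr.io/example/api:latest   # hypothetical registry image used for layer caching in CI
    environment:
      CI: "true"
```

In CI you would then run docker compose -f docker-compose.yml -f docker-compose.ci.yml up -d, so the base compose file stays the single source of truth for both local and CI environments.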