Parameterised Cross-Browser Execution — Running Every Test on Every Browser

You have a DriverFactory that creates any browser. Now you need to run your 200 tests on Chrome, Firefox, and Edge — without writing 600 test functions. pytest provides two mechanisms for this: fixture parameterisation (the browser fixture yields multiple browsers) and CI job matrix parameterisation (the pipeline runs separate jobs per browser). Both approaches run the same test code on every browser automatically.

Two Approaches to Cross-Browser Parameterisation

The choice between fixture-level and CI-level parameterisation depends on whether you want cross-browser results in a single test run or in separate parallel pipeline jobs.

import pytest
from utils.driver_factory import DriverFactory


# ── Approach 1: Fixture Parameterisation ──
# Each test runs N times — once per browser — in a single pytest run
# Best for: local development, small suites, all-in-one reports

@pytest.fixture(params=["chrome", "firefox", "edge"], scope="function")
def browser(request):
    driver = DriverFactory.create(browser_name=request.param)
    yield driver
    driver.quit()


# This test automatically runs 3 times: Chrome, Firefox, Edge
def test_login_valid(browser):
    # Identical test code — browser fixture handles the variation
    browser.get("https://www.saucedemo.com")
    browser.find_element("id", "user-name").send_keys("standard_user")
    browser.find_element("id", "password").send_keys("secret_sauce")
    browser.find_element("id", "login-button").click()
    assert "inventory" in browser.current_url

# pytest output:
#   test_login_valid[chrome]   PASSED
#   test_login_valid[firefox]  PASSED
#   test_login_valid[edge]     PASSED


# ── Approach 2: CI Job Matrix (recommended for large suites) ──
# Each browser gets its own CI job running in parallel
# Best for: CI/CD, large suites, independent browser reports

# conftest.py — single browser per run, set by CLI or env var
# @pytest.fixture(scope="function")
# def browser(request):
#     browser_name = request.config.getoption("--browser", default="chrome")
#     driver = DriverFactory.create(browser_name=browser_name)
#     yield driver
#     driver.quit()
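The commented fixture above reads a `--browser` option, which pytest only recognises if a `pytest_addoption` hook registers it. A minimal sketch (option names are assumptions, chosen to match the `pytest` invocation in the CI workflow below):

```python
# conftest.py — register the CLI options the single-browser fixture
# and the CI matrix job rely on. Option names (--browser, --grid-url)
# are assumptions; adjust to your project's conventions.
def pytest_addoption(parser):
    parser.addoption(
        "--browser",
        action="store",
        default="chrome",
        help="Browser to test against: chrome, firefox, or edge",
    )
    parser.addoption(
        "--grid-url",
        action="store",
        default=None,
        help="Selenium Grid URL (e.g. http://localhost:4444); "
             "None means create a local driver",
    )
```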

GITHUB_ACTIONS_MATRIX = """
# .github/workflows/cross-browser.yml
name: Cross-Browser Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chrome, firefox, edge]
      fail-fast: false    # Continue other browsers if one fails

    services:
      selenium:
        image: selenium/standalone-${{ matrix.browser }}:latest
        ports: ['4444:4444']
        options: --shm-size=2g

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install -r requirements.txt
      - run: |
          pytest tests/ \\
            --browser ${{ matrix.browser }} \\
            --grid-url http://localhost:4444 \\
            -n 4 \\
            --html=reports/${{ matrix.browser }}-report.html \\
            --junitxml=reports/${{ matrix.browser }}-results.xml
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: ${{ matrix.browser }}-reports
          path: reports/
"""

# Comparison of both approaches
COMPARISON = [
    {
        "aspect": "Execution model",
        "fixture_param": "Sequential within one pytest run (chrome → firefox → edge)",
        "ci_matrix": "Parallel CI jobs (chrome, firefox, edge run simultaneously)",
    },
    {
        "aspect": "Total time (200 tests x 3 browsers)",
        "fixture_param": "3x longer (600 tests in sequence)",
        "ci_matrix": "Same as single browser (3 parallel jobs)",
    },
    {
        "aspect": "Report",
        "fixture_param": "Single report with browser suffix on each test",
        "ci_matrix": "Separate reports per browser (easier to isolate failures)",
    },
    {
        "aspect": "Best for",
        "fixture_param": "Local development, small suites (< 50 tests)",
        "ci_matrix": "CI/CD, large suites (100+ tests), parallel execution",
    },
]

print("Cross-Browser Parameterisation Approaches")
print("=" * 70)
for c in COMPARISON:
    print(f"\n{c['aspect']}")
    print(f"  Fixture: {c['fixture_param']}")
    print(f"  CI:      {c['ci_matrix']}")

Note: The CI matrix approach with fail-fast: false is critical for cross-browser testing. Without it, a failure in the Chrome job would cancel the Firefox and Edge jobs — you would never know if those browsers also had the same defect or different ones. Setting fail-fast: false lets all browser jobs complete independently, giving you a complete cross-browser picture in every run.

Tip: Use the fixture parameterisation approach during local development for quick feedback ("does my new test work on Firefox?") and the CI matrix approach for production pipeline runs. They are not mutually exclusive — your conftest.py can support both modes: parameterised when --all-browsers is passed, single-browser when --browser chrome is passed.
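One hedged sketch of such a dual-mode conftest.py (the `--browser` and `--all-browsers` option names follow the tip above; `pytest_generate_tests` is pytest's standard hook for conditional parameterisation):

```python
# conftest.py — dual-mode browser selection (sketch). Tests request a
# browser_name parameter; the hook decides how many values it gets.
ALL_BROWSERS = ["chrome", "firefox", "edge"]


def pytest_addoption(parser):
    parser.addoption("--browser", action="store", default="chrome",
                     help="Single browser, e.g. for CI matrix jobs")
    parser.addoption("--all-browsers", action="store_true", default=False,
                     help="Parameterise every test over all browsers")


def pytest_generate_tests(metafunc):
    # Only tests (or fixtures) that consume browser_name are affected.
    if "browser_name" in metafunc.fixturenames:
        if metafunc.config.getoption("--all-browsers"):
            metafunc.parametrize("browser_name", ALL_BROWSERS)
        else:
            metafunc.parametrize(
                "browser_name", [metafunc.config.getoption("--browser")]
            )
```

A browser fixture would then take browser_name as an argument and call DriverFactory.create(browser_name=browser_name), so the same fixture serves both modes.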

Warning: Fixture parameterisation with 3 browsers triples your test count and execution time in a single run. A 200-test suite becomes 600 tests. With pytest-xdist parallel execution (-n 4), this means 600 tests across 4 workers — each worker processes 150 tests instead of 50. Ensure your Grid or machine has enough capacity for the increased load, or use the CI matrix approach to distribute the load across separate jobs.
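The warning's load arithmetic made explicit (numbers from the text; the even split is an approximation, since pytest-xdist schedules tests dynamically rather than pre-assigning equal shares):

```python
# Back-of-envelope load estimate for fixture parameterisation
# combined with pytest-xdist (-n 4).
tests = 200
browsers = 3
workers = 4

total = tests * browsers        # every test runs once per browser
per_worker = total // workers   # approximate even split across workers

print(total, per_worker)        # 600 150
```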

Common Mistakes

Mistake 1 — Duplicating test files for each browser

❌ Wrong: test_login_chrome.py, test_login_firefox.py, test_login_edge.py — three files with identical logic.

✅ Correct: One test_login.py with a parameterised browser fixture or a CI matrix that runs the same file on each browser.

Mistake 2 — Using fail-fast in cross-browser CI matrix

❌ Wrong: fail-fast: true — Chrome failure cancels Firefox and Edge jobs, hiding cross-browser defects.

✅ Correct: fail-fast: false — all browser jobs run to completion, giving a complete picture of which browsers are affected by each defect.

🧠 Test Yourself

A team has 200 Selenium tests. They want to run them on Chrome, Firefox, and Edge in their CI pipeline with minimum total execution time. Which approach is best?