CI/CD Pipeline and Cross-Browser Execution — Deploying Your Framework

A framework without a CI/CD pipeline is a local experiment. Wiring your capstone into GitHub Actions transforms it into a production-grade quality gate — tests run automatically on every push, cross-browser coverage is verified, and results are published as downloadable artifacts. This lesson walks through the complete CI/CD configuration for the capstone, including parallel execution, cross-browser testing, and quality gate enforcement.

Capstone CI/CD Pipeline — GitHub Actions

The pipeline has two jobs: lint (fast feedback) and a matrix test job that runs the full suite on both Chrome and Firefox.

# Capstone CI/CD pipeline configuration

GITHUB_ACTIONS = """
# .github/workflows/ci.yml
name: QA Capstone CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install flake8
      - run: flake8 pages/ tests/ utils/ --max-line-length=120

  test:
    needs: lint
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        browser: [chrome, firefox]

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install -r requirements.txt

      - name: Run tests
        env:
          BROWSER: ${{ matrix.browser }}
          HEADLESS: 'true'
          BASE_URL: 'https://www.saucedemo.com'
        run: |
          pytest tests/ \\
            -n 2 \\
            --reruns 1 \\
            --html=reports/${{ matrix.browser }}-report.html \\
            --self-contained-html \\
            -v

      - name: Upload reports
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: ${{ matrix.browser }}-reports
          path: reports/
          retention-days: 14
"""

# Pipeline features explained
PIPELINE_FEATURES = [
    {
        "feature": "Lint job runs first (fast feedback)",
        "why": "Catches syntax errors and style violations in 10 seconds — before spending "
               "minutes on test execution. Developers get instant feedback on code quality.",
    },
    {
        "feature": "Matrix strategy: Chrome + Firefox",
        "why": "Two parallel jobs test cross-browser compatibility. fail-fast: false ensures "
               "both browsers run to completion even if one fails.",
    },
    {
        "feature": "Environment variables for configuration",
        "why": "BROWSER, HEADLESS, and BASE_URL are injected from the pipeline — the framework "
               "reads them via Settings class. No code changes needed between local and CI.",
    },
    {
        "feature": "pytest -n 2 (parallel within each browser job)",
        "why": "Two test workers per browser job. Combined with the 2-browser matrix, this gives "
               "4 parallel test sessions total.",
    },
    {
        "feature": "--reruns 1 (single retry for transient failures)",
        "why": "CI environments are noisier than local. One retry handles transient issues "
               "without masking real defects.",
    },
    {
        "feature": "if: always() on artifact upload",
        "why": "Reports and screenshots are uploaded even when tests fail — essential for "
               "debugging CI failures without local reproduction.",
    },
]
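The "environment variables for configuration" row above can be sketched as a small settings object. Note this is a hypothetical `Settings` class, not the capstone's actual implementation — it simply shows how BROWSER, HEADLESS, and BASE_URL might be read with sensible local defaults so the same code works both on a laptop and in CI:

```python
import os


class Settings:
    """Reads pipeline-injected environment variables, falling back to
    local-development defaults when a variable is not set."""

    def __init__(self) -> None:
        self.browser = os.getenv("BROWSER", "chrome").lower()
        # Env vars are always strings, so 'true'/'false' must be parsed.
        self.headless = os.getenv("HEADLESS", "false").lower() == "true"
        self.base_url = os.getenv("BASE_URL", "https://www.saucedemo.com")
```

Locally, instantiating `Settings()` with no env vars set gives a headed Chrome session; in CI, the `env:` block of the workflow overrides all three values per matrix entry with no code change.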

# Cross-browser execution summary
CROSS_BROWSER = {
    "Chrome job": "Primary browser — runs all tests, generates Chrome report",
    "Firefox job": "Cross-browser — runs all tests in parallel with Chrome",
    "Reports": "Separate HTML reports per browser, downloadable from Actions artifacts",
    "Total time": "~5-8 minutes (both browsers run simultaneously)",
}
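One way a framework can act on the BROWSER matrix value is a small factory that maps each entry to its launch flags. The helper below is an illustrative sketch — the flag strings are real Chrome/Firefox switches, but the function itself is hypothetical; a real framework would feed these into Selenium's ChromeOptions or FirefoxOptions:

```python
def launch_flags(browser: str, headless: bool) -> list[str]:
    """Return command-line switches for the requested browser.

    Raises ValueError for anything outside the CI matrix, so a typo
    in the workflow file fails loudly instead of silently defaulting.
    """
    if browser == "chrome":
        flags = ["--window-size=1920,1080"]
        if headless:
            flags.append("--headless=new")  # Chrome's current headless mode
        return flags
    if browser == "firefox":
        return ["-headless"] if headless else []
    raise ValueError(f"Unsupported browser: {browser!r}")
```

Failing fast on an unknown browser name is deliberate: a misspelled matrix entry like `chorme` should break the job at startup, not quietly run every test in the default browser.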

print("Capstone CI/CD Pipeline Features:")
for feat in PIPELINE_FEATURES:
    print(f"\n  {feat['feature']}")
    print(f"    Why: {feat['why']}")
Note: The needs: lint directive on the test job means tests only run if linting passes. This prevents wasting CI minutes on test execution when the code has syntax errors or style violations. The lint job completes in 10-15 seconds — if it fails, the developer gets instant feedback and can fix the issue before tests are even attempted. This two-stage pipeline (lint then test) is a standard CI best practice.
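The gating behavior of needs: lint can be mimicked locally with a tiny runner. The sketch below is hypothetical — it uses `py_compile` as a dependency-free stand-in for flake8 — but the control flow mirrors the pipeline: the cheap check runs first, and the expensive test stage is skipped entirely if it fails:

```python
import subprocess
import sys


def run_gated(lint_targets: list[str]) -> int:
    """Run a fast syntax check first; only invoke pytest if it passes.

    Mirrors the CI pipeline's two-stage structure (needs: lint)."""
    lint = subprocess.run(
        [sys.executable, "-m", "py_compile", *lint_targets],
        capture_output=True,
    )
    if lint.returncode != 0:
        # Fast feedback: report the lint failure and never start the tests.
        print("Lint stage failed -- skipping tests:")
        print(lint.stderr.decode())
        return lint.returncode
    tests = subprocess.run([sys.executable, "-m", "pytest", "tests/", "-q"])
    return tests.returncode
```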
Tip: Add a badge to your README showing the CI status: ![CI](https://github.com/username/repo/actions/workflows/ci.yml/badge.svg). A green badge on your portfolio project signals to reviewers and interviewers that your tests actually run and pass — not just that the code exists. It is a small detail that demonstrates professionalism and CI/CD competence.
Warning: If your capstone targets a public practice app like SauceDemo, availability is handled for you. If you target your own application instead, you must either deploy it inside the pipeline (for example, via Docker Compose) or point the tests at a persistently hosted staging environment. A CI pipeline that fails because the target app is down defeats the purpose of demonstrating CI/CD skills.

Common Mistakes

Mistake 1 — Not using if: always() for artifact uploads

❌ Wrong: Reports are only uploaded on success — failure artifacts (the ones you need most) are lost.

✅ Correct: if: always() ensures reports, screenshots, and logs are uploaded regardless of test outcome.

Mistake 2 — Using fail-fast: true in the browser matrix

❌ Wrong: Chrome failure cancels the Firefox job — you never learn if Firefox also has issues.

✅ Correct: fail-fast: false — both browser jobs complete independently for a full cross-browser picture.

🧠 Test Yourself

Why does the capstone CI pipeline run linting as a separate job before tests?