A test suite that passes or fails with no evidence is like a medical test that gives a result with no lab report. When a test fails in CI at 3 AM, the team needs to diagnose the failure without reproducing it. Rich reports with screenshots, logs, step traces and environment details transform a cryptic “AssertionError” into a complete diagnostic package. This lesson covers the three pillars of test evidence: reports, logs, and screenshots.
Three Pillars — Reports, Logs and Screenshots
Professional frameworks produce evidence automatically on every test run, with additional detail captured on failures.
import logging
import os
from datetime import datetime

import pytest

# ── PILLAR 1: HTML Reports (pytest-html) ──
# Install: pip install pytest-html
# Run: pytest --html=reports/report.html --self-contained-html

# ── PILLAR 2: Allure Reports (rich interactive reports) ──
# Install: pip install allure-pytest
# Run: pytest --alluredir=reports/allure-results
# View: allure serve reports/allure-results

# ── PILLAR 3: Logging ──
# Configure in conftest.py
LOG_DIR = "reports/logs"
os.makedirs(LOG_DIR, exist_ok=True)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    handlers=[
        logging.FileHandler(f"{LOG_DIR}/test_{datetime.now():%Y%m%d_%H%M%S}.log"),
        logging.StreamHandler(),
    ],
)
logger = logging.getLogger("selenium_framework")


# ── Automatic screenshot on failure ──
# conftest.py hook
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Attach to the test item for use in fixtures
    setattr(item, f"rep_{report.when}", report)


@pytest.fixture(autouse=True)
def capture_failure_screenshot(request, browser):
    yield
    # After the test completes, check whether it failed
    if hasattr(request.node, "rep_call") and request.node.rep_call.failed:
        test_name = request.node.name.replace("[", "_").replace("]", "_")
        screenshot_dir = "reports/screenshots"
        os.makedirs(screenshot_dir, exist_ok=True)
        path = f"{screenshot_dir}/{test_name}.png"
        browser.save_screenshot(path)
        logger.error(f"FAIL: {test_name} — screenshot saved: {path}")
        # For Allure reports, attach the screenshot
        try:
            import allure
            allure.attach.file(path, name="failure_screenshot",
                               attachment_type=allure.attachment_type.PNG)
        except ImportError:
            pass


# ── Logging inside page objects ──
class BasePage:
    def __init__(self, driver):
        self.driver = driver
        self.log = logging.getLogger(self.__class__.__name__)

    def click(self, locator):
        self.log.info(f"Clicking: {locator}")
        element = self.driver.find_element(*locator)
        element.click()
        self.log.info(f"Clicked: {locator}")

    def type_text(self, locator, text):
        # Mask values typed into password fields before logging them
        masked = text if "password" not in str(locator).lower() else "****"
        self.log.info(f"Typing '{masked}' into: {locator}")
        element = self.driver.find_element(*locator)
        element.clear()
        element.send_keys(text)


# ── Report types comparison ──
REPORT_TOOLS = [
    {
        "tool": "pytest-html",
        "output": "Single HTML file with embedded screenshots",
        "setup": "pip install pytest-html; run with --html=report.html",
        "best_for": "Quick, lightweight reports; email-friendly single file",
    },
    {
        "tool": "Allure",
        "output": "Interactive web dashboard with categories, history, attachments",
        "setup": "pip install allure-pytest; allure serve results/",
        "best_for": "Team dashboards; CI/CD integration; trend analysis",
    },
    {
        "tool": "Custom logging",
        "output": "Timestamped log files with step-by-step execution trace",
        "setup": "Python logging module — no external dependency",
        "best_for": "Debugging; audit trail; integration with log aggregators",
    },
]

print("Test Evidence — Tools Comparison")
print("=" * 60)
for tool in REPORT_TOOLS:
    print(f"\n {tool['tool']}")
    print(f"  Output: {tool['output']}")
    print(f"  Setup: {tool['setup']}")
    print(f"  Best for: {tool['best_for']}")
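The credential-masking rule inside type_text can be exercised in isolation. A minimal sketch — the mask_for_log helper name is my own, introduced only for this demonstration, but it mirrors the exact check the page object uses:

```python
import logging
from io import StringIO

def mask_for_log(locator, text):
    # Hypothetical helper mirroring type_text's rule: any locator
    # whose string form mentions "password" gets its value redacted.
    return "****" if "password" in str(locator).lower() else text

# Capture log output in memory to inspect what would be written to disk
stream = StringIO()
log = logging.getLogger("BasePage.demo")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(stream))

log.info("Typing '%s' into: %s", mask_for_log(("id", "password"), "s3cret"), ("id", "password"))
log.info("Typing '%s' into: %s", mask_for_log(("id", "username"), "alice"), ("id", "username"))

output = stream.getvalue()
assert "s3cret" not in output   # the real password never reaches the log
assert "****" in output         # only the redacted placeholder is recorded
assert "alice" in output        # non-sensitive values are logged in full
```

Pulling the rule into one small function also gives the framework a single place to extend it later (e.g. adding "token" or "secret" to the blocklist).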
The pytest_runtest_makereport hook is the standard mechanism for detecting test failures and triggering automatic evidence capture. It runs after each test phase (setup, call, teardown) and attaches the result to the test item. The capture_failure_screenshot fixture checks rep_call.failed after the test body completes and saves a screenshot only on failure. This hook-and-fixture combination is the production pattern used by most professional pytest Selenium frameworks.
Note the masking in BasePage.type_text: when the locator refers to a password field, the log records **** instead of the actual value. A simple check — if "password" in str(locator).lower() — prevents credentials from appearing in log files that may be stored in CI artefacts, shared via dashboards, or uploaded to cloud log aggregators. Security-conscious logging is a framework-level responsibility, not a per-test concern.
By default, screenshots are captured only on failure; some frameworks additionally expose an opt-in option such as --screenshot-all that temporarily enables pass-state screenshots, which is useful when debugging flaky visual behaviour.
Common Mistakes
Mistake 1 — No screenshots on failure in CI/CD
❌ Wrong: CI reports show “test_checkout FAILED: AssertionError” with no visual evidence. The developer must reproduce the failure locally to understand what happened.
✅ Correct: CI reports include a screenshot captured at the moment of failure, showing the exact page state — wrong text, missing element, error modal, or unexpected redirect. Diagnosis takes seconds instead of hours.
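One way to get that screenshot into the CI report itself is pytest-html's extras mechanism. The sketch below is an assumption-laden illustration, not verbatim framework code: it assumes pytest-html ≥ 4 (where the report attribute is extras; in 3.x it was report.extra), and it reuses the screenshot path written by the fixture shown earlier (simplified — that fixture also sanitises brackets in parametrised test names):

```
# conftest.py — embed the failure screenshot into the HTML report itself
import base64
import os

import pytest
from pytest_html import extras

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        # Path the capture_failure_screenshot fixture writes to (simplified)
        path = f"reports/screenshots/{item.name}.png"
        if os.path.exists(path):
            with open(path, "rb") as fh:
                b64 = base64.b64encode(fh.read()).decode()
            # Base64-embedded image survives --self-contained-html
            report.extras = getattr(report, "extras", []) + [extras.png(b64)]
```

Because the image is embedded as base64, the single-file report can be attached to an email or CI artefact with the visual evidence intact.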
Mistake 2 — Logging every Selenium command without filtering
❌ Wrong: Logging every find_element, get_attribute, and is_displayed call — the log becomes thousands of lines of noise.
✅ Correct: Logging meaningful actions: “Clicking login button”, “Typing username”, “Navigating to /checkout”. Internal framework operations (waits, retries) are logged at DEBUG level, visible only when needed.
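The INFO-vs-DEBUG split needs nothing beyond the standard logging module. A minimal sketch (logger name and messages are illustrative) showing how framework chatter stays out of the log until the level is lowered:

```python
import logging
from io import StringIO

# In-memory handler so the demo can inspect what was recorded
stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))

log = logging.getLogger("framework.demo")
log.addHandler(handler)
log.setLevel(logging.INFO)                   # normal runs: meaningful actions only

log.info("Clicking login button")
log.debug("wait_for_element: retry 1/10")    # suppressed at INFO

log.setLevel(logging.DEBUG)                  # debugging a flaky test: show everything
log.debug("wait_for_element: retry 2/10")    # now recorded

lines = stream.getvalue().splitlines()
assert lines == [
    "[INFO] Clicking login button",
    "[DEBUG] wait_for_element: retry 2/10",
]
```

In a real run the same switch is usually flipped from the command line (e.g. pytest's --log-cli-level=DEBUG) rather than in code, so the noise is opt-in per invocation.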