Chapter 60 introduced accessibility testing with cypress-axe for Cypress projects. This lesson takes a broader view: accessibility testing is not tool-specific — it is a discipline that combines automated scans, manual keyboard testing, screen reader verification, and cognitive accessibility evaluation. Whether you use Selenium, Cypress, Playwright, or no automation at all, the accessibility testing methodology is the same.
Comprehensive Accessibility Testing — Beyond Automated Scans
A complete accessibility testing strategy has four layers, each catching different categories of WCAG violations.
# Four-layer accessibility testing strategy
A11Y_LAYERS = [
    {
        "layer": "Layer 1 — Automated Scans (30-40% of violations)",
        "tools": "axe-core (via cypress-axe, @axe-core/playwright, or the axe CLI), WAVE, Lighthouse",
        "catches": [
            "Missing alt text on images",
            "Missing form labels",
            "Insufficient colour contrast",
            "Incorrect heading hierarchy (h1 → h3, skipping h2)",
            "Missing ARIA roles and landmarks",
            "Duplicate IDs",
        ],
        "integration": "Add to every functional test: cy.checkA11y() (cypress-axe) or an axe scan via @axe-core/playwright",
        "time": "Seconds per page — fully automated",
    },
    {
        "layer": "Layer 2 — Keyboard Testing (20-25% of violations)",
        "tools": "Your keyboard — Tab, Shift+Tab, Enter, Space, Escape, Arrow keys",
        "catches": [
            "Elements unreachable by Tab key",
            "Focus not visible (no outline on focused elements)",
            "Focus order illogical (jumps randomly around the page)",
            "Keyboard traps (focus enters but cannot leave)",
            "Modal focus not trapped (Tab escapes the modal)",
            "Interactive elements not operable by Enter/Space",
        ],
        "integration": "Manual testing during sprint; automate critical flows with Cypress/Playwright",
        "time": "5-10 minutes per page — manual with selective automation",
    },
    {
        "layer": "Layer 3 — Screen Reader Testing (20-25% of violations)",
        "tools": "VoiceOver (macOS), NVDA (Windows, free), JAWS (Windows, paid), TalkBack (Android)",
        "catches": [
            "Announcements that do not make sense in context",
            "Dynamic content changes not announced (missing aria-live)",
            "Form errors not announced to screen reader users",
            "Custom components (tabs, accordions) not navigable",
            "Images described as 'image' instead of meaningful alt text",
            "Decorative images announced unnecessarily",
        ],
        "integration": "Manual testing by QA — quarterly or pre-release for key flows",
        "time": "30-60 minutes per user flow — fully manual",
    },
    {
        "layer": "Layer 4 — Cognitive and Content Accessibility (10-15%)",
        "tools": "Human review, readability checkers, plain language guidelines",
        "catches": [
            "Error messages that do not explain how to fix the problem",
            "Complex language requiring high reading level",
            "Inconsistent navigation patterns across pages",
            "Time limits without extension options",
            "Content that flashes or auto-plays without pause controls",
        ],
        "integration": "UX review during design; QA review during testing",
        "time": "15-30 minutes per feature — human evaluation",
    },
]
# WCAG 2.1 conformance levels
WCAG_LEVELS = {
    "Level A (minimum)": "Basic accessibility — must be met for any claim of conformance",
    "Level AA (standard)": "The target for most organisations and legal requirements",
    "Level AAA (enhanced)": "Highest level — recommended for specialised audiences, not required",
}
# Legal requirements (2025-2026)
LEGAL = [
    "European Accessibility Act (EAA) — effective June 2025, requires digital accessibility for products/services in the EU",
    "Americans with Disabilities Act (ADA) — US courts interpret this to cover websites and apps",
    "Section 508 — US federal agencies must make digital content accessible",
    "EN 301 549 — European standard referencing WCAG 2.1 Level AA",
    "Accessibility lawsuits increased 300%+ from 2018-2024 in the US",
]
print("Accessibility Testing Layers:")
for layer in A11Y_LAYERS:
    print(f"\n  {layer['layer']}")
    print(f"  Tools: {layer['tools']}")
    print(f"  Time: {layer['time']}")
    print("  Catches:")
    for item in layer["catches"][:3]:
        print(f"    - {item}")
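Layer 1's contrast checks are worth understanding beyond the scanner output. The sketch below implements the WCAG 2.1 relative-luminance and contrast-ratio formulas directly (function names are my own); Level AA requires at least 4.5:1 for normal-size text.

```python
def _linearise(channel: int) -> float:
    # Linearise an 8-bit sRGB channel per the WCAG 2.1 definition.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    # Weighted sum of linearised R, G, B channels.
    r, g, b = (_linearise(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05), always >= 1.
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))   # 21.0
# Mid-grey #767676 on white sits right at the 4.5:1 AA boundary.
print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))
```

This is the same arithmetic axe-core performs when it flags "insufficient colour contrast", so it can double as a quick check on proposed colour palettes before any page exists.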
Common Mistakes
Mistake 1 — Equating automated scan results with “accessible”
❌ Wrong: “Our Lighthouse accessibility score is 98. The application is accessible.”
✅ Correct: “Automated scans pass. We also verified keyboard navigation, tested with VoiceOver on critical flows, and reviewed error messages for clarity. Automated scores cover 30-40%; the remaining 60-70% requires manual verification.”
Mistake 2 — Testing accessibility only on desktop
❌ Wrong: Running axe-core on desktop Chrome and declaring accessibility compliance.
✅ Correct: Testing on desktop (keyboard + screen reader) AND mobile (touch target sizes, screen reader with TalkBack/VoiceOver, zoom to 200%). Mobile accessibility barriers are distinct from desktop barriers.
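One mobile-specific check — touch-target size — is easy to script once element bounding boxes have been collected (for example from a driver's element-rect API). A minimal sketch, assuming a hypothetical list of element dicts and the 44×44 CSS-pixel target recommended by WCAG 2.5.5 Target Size (Level AAA) and common platform guidelines:

```python
def undersized_targets(elements, min_px=44):
    """Return names of elements whose tap area is below min_px square.

    `elements` is a hypothetical list of dicts with 'name', 'width',
    and 'height' in CSS pixels, e.g. built from each element's
    bounding box as reported by the automation driver.
    """
    return [el["name"] for el in elements
            if el["width"] < min_px or el["height"] < min_px]

buttons = [
    {"name": "submit", "width": 48, "height": 48},
    {"name": "close-icon", "width": 24, "height": 24},
]
print(undersized_targets(buttons))  # ['close-icon']
```

Checks like this catch the small icon buttons that pass every desktop scan yet are unusable for motor-impaired users on a phone — one concrete example of why mobile barriers are distinct from desktop barriers.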