With limited time in each sprint, how do you decide what to test and how? The Agile Testing Quadrants, originally defined by Brian Marick and popularised by Lisa Crispin and Janet Gregory, provide a framework for categorising testing activities into four quadrants along two dimensions: whether the tests support the team (guide development) or critique the product (evaluate quality), and whether they are technology-facing or business-facing. The model helps a team avoid over-investing in one type of testing while neglecting the others.
The Four Agile Testing Quadrants
Each quadrant represents a category of testing with different goals, audiences, and techniques. A balanced sprint includes activities from all four quadrants.
# Agile Testing Quadrants — Brian Marick / Crispin-Gregory model
QUADRANTS = [
    {
        "quadrant": "Q1 — Technology-Facing, Supporting the Team",
        "purpose": "Guide development with fast, automated feedback",
        "tests": [
            "Unit tests (pytest, JUnit, Jest)",
            "Component tests",
            "API contract tests",
        ],
        "who": "Developers (with QA input on coverage)",
        "automation": "Fully automated — runs on every commit",
        "sprint_role": "Foundation — ensures code correctness at the lowest level",
    },
    {
        "quadrant": "Q2 — Business-Facing, Supporting the Team",
        "purpose": "Validate that features meet business requirements",
        "tests": [
            "Functional tests derived from acceptance criteria",
            "Story-level tests and examples",
            "BDD scenarios (Gherkin/Cucumber)",
            "Prototypes and wireframe reviews",
        ],
        "who": "QA engineers + Product Owner",
        "automation": "Partially automated — key scenarios automated, rest manual",
        "sprint_role": "Core QA work — confirms stories do what the business needs",
    },
    {
        "quadrant": "Q3 — Business-Facing, Critiquing the Product",
        "purpose": "Evaluate product quality through human judgement",
        "tests": [
            "Exploratory testing",
            "Usability testing",
            "User acceptance testing (UAT)",
            "Alpha / beta testing",
        ],
        "who": "QA engineers + end users + UX team",
        "automation": "Manual — requires human observation and creativity",
        "sprint_role": "Discovers defects that scripted tests miss",
    },
    {
        "quadrant": "Q4 — Technology-Facing, Critiquing the Product",
        "purpose": "Evaluate non-functional quality attributes",
        "tests": [
            "Performance / load testing (JMeter, k6)",
            "Security testing (OWASP ZAP, Burp Suite)",
            "Scalability testing",
            "Infrastructure and reliability testing",
        ],
        "who": "QA engineers + DevOps + security specialists",
        "automation": "Tool-driven — automated scans and load generators",
        "sprint_role": "Ensures the product works well under real-world conditions",
    },
]

for q in QUADRANTS:
    print(f"\n{'=' * 60}")
    print(f" {q['quadrant']}")
    print(f"{'=' * 60}")
    print(f" Purpose:     {q['purpose']}")
    print(f" Who:         {q['who']}")
    print(f" Automation:  {q['automation']}")
    print(f" Sprint role: {q['sprint_role']}")
    print(" Tests:")
    for t in q["tests"]:
        print(f"   - {t}")
Common Mistakes
Mistake 1 — Treating the quadrants as sequential phases
❌ Wrong: “We do Q1 in week one, Q2 in week two, Q3 in week three and Q4 in week four.”
✅ Correct: “All four quadrants are active throughout the sprint. Q1 runs on every commit. Q2 tests are executed as stories are completed. Q3 exploratory sessions are scheduled mid-sprint. Q4 performance checks happen before sprint review.”
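One way to visualise "all four quadrants active throughout the sprint" is to lay the activities over a sprint calendar. The sketch below does this for a hypothetical two-week (ten-working-day) sprint; the specific day ranges are illustrative assumptions, not a prescribed schedule.

```python
# Illustrative sketch: quadrant activities overlapping across one sprint.
# The day ranges below are example assumptions, not a recommended plan.
SPRINT_DAYS = 10
ACTIVITIES = [
    ("Q1 unit/API tests on every commit",          range(1, SPRINT_DAYS + 1)),
    ("Q2 acceptance tests as stories complete",    range(2, SPRINT_DAYS + 1)),
    ("Q3 exploratory sessions mid-sprint",         range(4, 8)),
    ("Q4 performance checks before sprint review", range(8, SPRINT_DAYS + 1)),
]

for day in range(1, SPRINT_DAYS + 1):
    # List every quadrant activity that is live on this day
    active = [name for name, days in ACTIVITIES if day in days]
    print(f"Day {day:2}: {', '.join(active)}")
```

Printed out, the schedule shows quadrants overlapping rather than queuing: by mid-sprint three of the four are running at once.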
Mistake 2 — Assigning quadrants rigidly to specific roles
❌ Wrong: “Q1 is the developer’s job; Q2-Q4 are the tester’s job.”
✅ Correct: “Quality is a whole-team responsibility. Developers contribute to Q1 and review Q2 scenarios. Testers focus on Q2, Q3 and coordinate Q4. The product owner participates in Q3 usability sessions. Everyone owns quality.”