Agile Testing Quadrants — A Framework for Choosing What to Test and How

With limited time in each sprint, how do you decide what to test and how? The Agile Testing Quadrants, originally defined by Brian Marick and popularised by Lisa Crispin and Janet Gregory, provide a framework for categorising all testing activities into four quadrants along two dimensions: whether the tests support the team (guide development) or critique the product (evaluate quality), and whether they are technology-facing or business-facing. The model helps you avoid over-investing in one type of testing while neglecting the others.

The Four Agile Testing Quadrants

Each quadrant represents a category of testing with different goals, audiences, and techniques. A balanced sprint includes activities from all four quadrants.

# Agile Testing Quadrants — Brian Marick / Crispin-Gregory model

QUADRANTS = [
    {
        "quadrant": "Q1 — Technology-Facing, Supporting the Team",
        "purpose": "Guide development with fast, automated feedback",
        "tests": [
            "Unit tests (pytest, JUnit, Jest)",
            "Component tests",
            "API contract tests",
        ],
        "who": "Developers (with QA input on coverage)",
        "automation": "Fully automated — runs on every commit",
        "sprint_role": "Foundation — ensures code correctness at the lowest level",
    },
    {
        "quadrant": "Q2 — Business-Facing, Supporting the Team",
        "purpose": "Validate that features meet business requirements",
        "tests": [
            "Functional tests derived from acceptance criteria",
            "Story-level tests and examples",
            "BDD scenarios (Gherkin/Cucumber)",
            "Prototypes and wireframe reviews",
        ],
        "who": "QA engineers + Product Owner",
        "automation": "Partially automated — key scenarios automated, rest manual",
        "sprint_role": "Core QA work — confirms stories do what the business needs",
    },
    {
        "quadrant": "Q3 — Business-Facing, Critiquing the Product",
        "purpose": "Evaluate product quality through human judgement",
        "tests": [
            "Exploratory testing",
            "Usability testing",
            "User acceptance testing (UAT)",
            "Alpha / beta testing",
        ],
        "who": "QA engineers + end users + UX team",
        "automation": "Manual — requires human observation and creativity",
        "sprint_role": "Discovers defects that scripted tests miss",
    },
    {
        "quadrant": "Q4 — Technology-Facing, Critiquing the Product",
        "purpose": "Evaluate non-functional quality attributes",
        "tests": [
            "Performance / load testing (JMeter, k6)",
            "Security testing (OWASP ZAP, Burp Suite)",
            "Scalability testing",
            "Infrastructure and reliability testing",
        ],
        "who": "QA engineers + DevOps + security specialists",
        "automation": "Tool-driven — automated scans and load generators",
        "sprint_role": "Ensures the product works well under real-world conditions",
    },
]

for q in QUADRANTS:
    print(f"\n{'='*60}")
    print(f"  {q['quadrant']}")
    print(f"{'='*60}")
    print(f"  Purpose:     {q['purpose']}")
    print(f"  Who:         {q['who']}")
    print(f"  Automation:  {q['automation']}")
    print(f"  Sprint role: {q['sprint_role']}")
    print("  Tests:")
    for t in q['tests']:
        print(f"    - {t}")
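To make Q1 concrete, here is a minimal unit test of the kind that quadrant describes: fast, fully automated, and run on every commit. The function under test (`apply_discount`) and the test names are hypothetical illustrations, not part of the quadrant model.

```python
# Hypothetical Q1 example: a pytest-style unit test that gives
# fast feedback on every commit. apply_discount is illustrative.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_invalid_percent():
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Run with `pytest` in CI so every commit gets the Q1 safety net the table describes.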

Note: The most common imbalance is teams that invest heavily in Q1 (unit tests) and Q2 (functional tests) but neglect Q3 (exploratory testing) and Q4 (non-functional testing). Automated tests verify that the software works as designed, but only exploratory testing (Q3) discovers defects that nobody anticipated — the “unknown unknowns.” A balanced sprint allocates time for all four quadrants: Q1 and Q2 provide the safety net, Q3 provides creative discovery, and Q4 provides confidence in production readiness.

Tip: Map your team’s current testing activities to the four quadrants. If an entire quadrant is empty, that is a gap worth discussing in the retrospective. For example, if Q4 is empty, your team has no performance or security testing — a significant risk for any public-facing application. Use the quadrants as a visual tool to advocate for balanced testing coverage during sprint planning.
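The mapping exercise in the tip can be sketched as a short script: list each current testing activity with its quadrant, then flag any quadrant with no activities. The activity list below is a hypothetical example; substitute your own team's activities.

```python
# Sketch of the quadrant-gap exercise. The activity-to-quadrant
# mapping here is a hypothetical example team.
ACTIVITIES = {
    "unit tests in CI": "Q1",
    "BDD scenarios for acceptance criteria": "Q2",
    "mid-sprint exploratory session": "Q3",
    # note: no Q4 (performance/security) activities recorded
}

covered = set(ACTIVITIES.values())
gaps = [q for q in ("Q1", "Q2", "Q3", "Q4") if q not in covered]

for q in gaps:
    print(f"Gap: no activities in {q} — raise in the retrospective")
```

For this example team the script reports a Q4 gap, which the tip above identifies as a significant risk for a public-facing application.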

Warning: Q3 (exploratory testing) cannot be replaced by automation. Automated tests check for known conditions — they verify that the software does what you told it to do. Exploratory testing uses human creativity to discover what the software does that nobody expected. Cutting exploratory testing because “we have 90% automated coverage” is a false economy that lets unpredictable defects escape to production.

Common Mistakes

Mistake 1 — Treating the quadrants as sequential phases

❌ Wrong: “We do Q1 in week one, Q2 in week two, Q3 in week three and Q4 in week four.”

✅ Correct: “All four quadrants are active throughout the sprint. Q1 runs on every commit. Q2 tests are executed as stories are completed. Q3 exploratory sessions are scheduled mid-sprint. Q4 performance checks happen before sprint review.”

Mistake 2 — Assigning quadrants rigidly to specific roles

❌ Wrong: “Q1 is the developer’s job, Q2-Q4 is the tester’s job.”

✅ Correct: “Quality is a whole-team responsibility. Developers contribute to Q1 and review Q2 scenarios. Testers focus on Q2, Q3 and coordinate Q4. The product owner participates in Q3 usability sessions. Everyone owns quality.”

🧠 Test Yourself

A team has strong unit test coverage (Q1) and functional tests (Q2) but no exploratory testing (Q3). What risk does this create?