Writing Your First Test Plan — A Step-by-Step Walkthrough

You now understand what a test plan is, its sections, how it differs from a strategy, and how risk-based thinking shapes your approach. In this lesson, you will write a complete lightweight test plan from scratch using a practical template. This is the exact exercise that interviewers give to QA candidates and that team leads expect new hires to be able to do within their first week.

Step-by-Step — Writing a Test Plan for a Password Reset Feature

Your team has been asked to test a new “Forgot Password” feature. Users can request a password reset link via email, click the link, set a new password, and log in with the new credentials. Here is how to build the test plan, section by section.

# Complete lightweight test plan — Password Reset Feature

test_plan = {
    "identifier": "TP-PWD-RESET-v1.0",
    "author": "Junior QA Engineer",
    "date": "2026-03-15",
    "project": "StackStore — Forgot Password Feature (Sprint 14)",

    # Section 1: Objective
    "objective": (
        "Verify that registered users can securely reset their password "
        "via email link, and that the feature handles error conditions "
        "(expired links, invalid emails, weak passwords) correctly."
    ),

    # Section 2: Scope
    "in_scope": [
        "Forgot Password form (email input + submit)",
        "Reset email delivery and link generation",
        "Reset link expiry (valid for 30 minutes)",
        "New password form (with validation rules)",
        "Login with new password after reset",
        "Error handling: unregistered email, expired link, weak password",
    ],
    "out_of_scope": [
        "Login page (tested in Sprint 12, no changes)",
        "Registration flow (separate feature)",
        "Password reset via SMS (Phase 2 — not yet built)",
    ],

    # Section 3: Test Approach
    "approach": {
        "functional": "Manual — 12 test cases covering happy path and error conditions",
        "security": "Verify reset tokens are single-use, check for token leakage in URLs",
        "usability": "Verify error messages are clear and actionable",
        "automation": "Not for this sprint — add to regression suite in Sprint 15",
    },

    # Section 4: Environment
    "environment": {
        "server": "Staging (https://staging.stackstore.com)",
        "browsers": ["Chrome 120+", "Firefox 121+", "Safari 17+"],
        "email": "Mailhog on staging for intercepting reset emails",
    },

    # Section 5: Entry Criteria
    "entry_criteria": [
        "Feature code merged to staging branch",
        "Build deployed to staging and smoke test passes",
        "Mailhog configured and verified",
        "Test accounts created with known emails",
    ],

    # Section 6: Exit Criteria
    "exit_criteria": [
        "All 12 test cases executed",
        "100% pass rate for critical cases (happy path + security)",
        "Pass rate >= 90% overall",
        "Zero open P1/P2 defects",
        "Reset tokens confirmed as single-use",
    ],

    # Section 7: Risks
    "risks": [
        {
            "risk": "Mailhog service on staging may be unreliable",
            "mitigation": "Verify Mailhog before test execution; fallback to checking DB for token",
        },
        {
            "risk": "Reset link expiry (30 min) is hard to test in a short session",
            "mitigation": "Ask dev to add a test-only config flag that shortens expiry to 1 minute",
        },
    ],

    # Section 8: Schedule
    "schedule": {
        "test_design": "Mar 15 (0.5 day)",
        "environment_verify": "Mar 15 (0.5 day)",
        "execution": "Mar 16 (1 day)",
        "defect_retest": "Mar 17 (0.5 day)",
        "sign_off": "Mar 17 PM",
    },

    # Section 9: Resources
    "resources": [
        {"person": "Junior QA Engineer", "role": "Write and execute all test cases"},
        {"person": "Senior QA Engineer", "role": "Review test cases, approve sign-off"},
        {"person": "DevOps", "role": "Mailhog setup and staging deployment"},
    ],
}

# Print the plan summary
print("=" * 60)
print(f"  TEST PLAN: {test_plan['identifier']}")
print(f"  Project:   {test_plan['project']}")
print("=" * 60)
print(f"\nObjective: {test_plan['objective']}")
print(f"\nIn Scope: {len(test_plan['in_scope'])} items")
print(f"Out of Scope: {len(test_plan['out_of_scope'])} items")
print(f"Test Cases: {test_plan['approach']['functional']}")
print("\nExit Criteria:")
for c in test_plan['exit_criteria']:
    print(f"  ✓ {c}")
print(f"\nRisks: {len(test_plan['risks'])} identified with mitigations")
print(f"\nSchedule: {test_plan['schedule']['test_design']} → {test_plan['schedule']['sign_off']}")
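Before circulating a plan like the one above, it is worth sanity-checking that no section was left out or left empty. A minimal sketch of such a check (the `REQUIRED_SECTIONS` list and `validate_plan` helper are illustrative conveniences, not part of any testing standard):

```python
# Quick completeness check for a lightweight test plan dict.
# REQUIRED_SECTIONS and validate_plan are illustrative helpers,
# not part of any testing standard.
REQUIRED_SECTIONS = [
    "identifier", "objective", "in_scope", "out_of_scope",
    "approach", "environment", "entry_criteria", "exit_criteria",
    "risks", "schedule", "resources",
]

def validate_plan(plan: dict) -> list[str]:
    """Return a list of problems; an empty list means the plan is complete."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if section not in plan:
            problems.append(f"Missing section: {section}")
        elif not plan[section]:  # catches empty string, list, or dict
            problems.append(f"Empty section: {section}")
    return problems

# An early draft with gaps: objective not yet written, most sections missing.
draft = {"identifier": "TP-PWD-RESET-v1.0", "objective": "", "risks": []}
for issue in validate_plan(draft):
    print(f"  ! {issue}")
```

Running a check like this on every draft catches the "forgot the exit criteria" class of mistake before a reviewer has to.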
Note: Notice how the risk section identifies a practical testing challenge — the 30-minute link expiry is difficult to test within a normal session. The mitigation (asking developers for a test-only configuration flag) is a real technique used by professional QA teams. Test-specific configuration flags, seeded test data, and mock services are all legitimate tools that make features testable without changing production behaviour.
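The expiry mitigation can be sketched in a few lines. Assuming the application reads the token lifetime from configuration (the `RESET_TOKEN_TTL_MINUTES` name and `is_token_valid` helper are hypothetical), a test-only override makes the expired-link case reachable within a normal session:

```python
from datetime import datetime, timedelta

# Hypothetical config value: 30 minutes in production. A test-only
# override shortens it so that expiry is testable in one session.
RESET_TOKEN_TTL_MINUTES = 30

def is_token_valid(issued_at: datetime, now: datetime,
                   ttl_minutes: int = RESET_TOKEN_TTL_MINUTES) -> bool:
    """A reset token is valid until ttl_minutes after it was issued."""
    return now < issued_at + timedelta(minutes=ttl_minutes)

issued = datetime(2026, 3, 16, 10, 0)

# Production behaviour: valid after 29 minutes, expired after 31.
print(is_token_valid(issued, issued + timedelta(minutes=29)))  # True
print(is_token_valid(issued, issued + timedelta(minutes=31)))  # False

# Test-only override (ttl_minutes=1): expiry reachable in two minutes.
print(is_token_valid(issued, issued + timedelta(minutes=2), ttl_minutes=1))  # False
```

The important property is that the override changes only the configured lifetime, never the validity logic itself, so the production code path is exactly what gets tested.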
Tip: Save every test plan you write as a template for future use. After a few projects, you will have a library of plans covering different feature types — authentication, payments, search, CRUD operations. Starting from a proven template cuts your planning time dramatically and ensures you do not forget sections that applied to similar features in the past.
Warning: Do not list automation as part of your test approach if you do not have the time, tools or skills to deliver it. Promising automated tests in the test plan and then delivering only manual execution erodes trust with stakeholders. It is far better to say “automation is out of scope for this sprint — we will add these cases to the regression suite in Sprint 15” and deliver on that commitment.

Common Mistakes

Mistake 1 — Writing test plans in isolation without developer or PO input

❌ Wrong: The QA engineer writes the entire test plan alone and shares it only after testing has begun.

✅ Correct: The QA engineer drafts the test plan and reviews it with the developer (for technical accuracy), the product owner (for scope agreement), and the QA lead (for approach validation) before execution starts.

Mistake 2 — Forgetting to define what “environment ready” means

❌ Wrong: The entry criteria say "environment is ready" without specifying what that means — leading to testing on a broken staging server.

✅ Correct: The entry criteria say "build deployed to staging, smoke test passes (login + homepage load), Mailhog verified to capture emails, test accounts seeded in database."
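Entry criteria this concrete can even be scripted as a pre-flight checklist. A minimal sketch, assuming each criterion maps to a small check function returning True or False (the functions below are stubs standing in for real probes such as an HTTP ping of staging, a Mailhog API query, or a database lookup for seeded accounts):

```python
# Pre-flight checklist runner: each entry criterion is a callable
# returning True/False. These are stubs; real checks would probe
# staging over HTTP, query Mailhog's API, or hit the database.
def build_deployed_to_staging() -> bool:
    return True   # stub: would ping the staging server

def smoke_test_passes() -> bool:
    return True   # stub: would run login + homepage checks

def mailhog_capturing_emails() -> bool:
    return False  # stub: would send and fetch a probe email via Mailhog

def test_accounts_seeded() -> bool:
    return True   # stub: would query the staging database

ENTRY_CHECKS = [
    build_deployed_to_staging,
    smoke_test_passes,
    mailhog_capturing_emails,
    test_accounts_seeded,
]

def environment_ready() -> bool:
    """Run every entry check and report; ready only if all pass."""
    results = {check.__name__: check() for check in ENTRY_CHECKS}
    for name, passed in results.items():
        print(f"  {'PASS' if passed else 'FAIL'}  {name}")
    return all(results.values())

print("Environment ready:", environment_ready())
```

The payoff is the same as in the prose version: "ready" becomes a named list of verifiable conditions, so a single failing check (here, Mailhog) blocks execution instead of being discovered mid-test.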

🧠 Test Yourself

You are writing a test plan for a feature that requires testing a time-sensitive link (expires after 30 minutes). What is the best approach?