STLC in Practice — Entry Criteria, Exit Criteria and Real-World Workflow

Theory without practice is quickly forgotten. In this lesson, you will apply the STLC to a realistic project scenario — a user registration feature for an e-commerce platform. You will define entry and exit criteria for each phase, identify the artefacts produced, and see how skipping phases creates real problems. This exercise mirrors the type of thinking your team lead will expect from you on day one.

STLC in Action — Testing a User Registration Feature

Your team has been asked to test a new user registration feature. The developers are in the Implementation phase. Here is how the STLC unfolds for this feature, with concrete entry and exit criteria at each step.

# Practical STLC walkthrough for a user registration feature

project = "User Registration Feature — E-Commerce Platform"

stlc_walkthrough = {
    "Phase 1 — Requirements Analysis": {
        "entry": "User story US-042 is written and accepted by the product owner",
        "activities": [
            "Review acceptance criteria: valid email, password 8+ chars, unique username",
            "Clarify: what happens with duplicate emails? → PO says show error message",
            "Add to RTM: 4 functional requirements, 2 non-functional (performance, security)",
        ],
        "exit": "RTM covers all 6 requirements; PO has answered all clarification questions",
        "artefact": "RTM with 6 traced requirements",
    },
    "Phase 2 — Test Planning": {
        "entry": "RTM is approved; sprint timeline is confirmed (2 weeks)",
        "activities": [
            "Scope: functional tests (manual), security tests (SQL injection, XSS)",
            "Tools: browser DevTools, Postman for API-level registration tests",
            "Estimate: 3 person-days for test design, 2 for execution",
        ],
        "exit": "Test plan reviewed in sprint planning meeting",
        "artefact": "Lightweight test plan (1-page in team wiki)",
    },
    "Phase 3 — Test Case Design": {
        "entry": "Test plan approved; UI mockups available",
        "activities": [
            "Write 15 test cases covering happy path, validation errors, edge cases",
            "Prepare test data: valid emails, duplicate emails, SQL injection strings",
            "Peer review: another tester reviews all 15 cases",
        ],
        "exit": "All 15 test cases reviewed and approved; test data prepared",
        "artefact": "15 test cases in test management tool + test data sheet",
    },
    "Phase 4 — Environment Setup": {
        "entry": "Test cases ready; staging server specs confirmed",
        "activities": [
            "Deploy latest build to staging environment",
            "Verify database is clean (no leftover test accounts)",
            "Run 3 smoke tests: page loads, form submits, confirmation email sends",
        ],
        "exit": "All 3 smoke tests pass on staging",
        "artefact": "Verified staging environment",
    },
    "Phase 5 — Test Execution": {
        "entry": "Environment verified; build deployed and smoke-tested",
        "activities": [
            "Execute 15 test cases; 13 pass, 2 fail",
            "File 2 defect reports: BUG-101 (duplicate email accepted), BUG-102 (no max-length on name field)",
            "Developers fix BUG-101 (P1); re-test passes",
            "BUG-102 deferred to next sprint (P3)",
            "Run regression suite: 13/13 pass",
        ],
        "exit": "13/15 pass, 1 fixed and re-tested, 1 deferred with stakeholder approval",
        "artefact": "Execution report + 2 defect tickets",
    },
    "Phase 6 — Test Closure": {
        "entry": "Execution complete; all critical defects resolved",
        "activities": [
            "Generate test summary: 93% pass rate, 1 deferred low-priority defect",
            "Lessons learned: duplicate-email validation was missing from the spec — PO will add it to the template",
            "Archive test cases and defect reports",
        ],
        "exit": "Test summary signed off; artefacts archived",
        "artefact": "Test summary report",
    },
}

print(f"Project: {project}")
for phase, details in stlc_walkthrough.items():
    print(f"\n{phase}")
    print(f"  Entry:    {details['entry']}")
    print(f"  Exit:     {details['exit']}")
    print(f"  Artefact: {details['artefact']}")

Note: Notice how Phase 1 uncovered a gap — the original requirements did not specify what happens when a user registers with a duplicate email. This is exactly the kind of defect that requirements analysis catches before any code is written. In this walkthrough, the tester asked the product owner, got a clear answer, and added it to the RTM. Without this step, the developer might have made their own assumption, leading to a defect discovered much later during test execution.
Tip: Keep your entry and exit criteria realistic and measurable. “All tests pass” is rarely achievable as an exit criterion — instead use quantifiable thresholds like “95% of critical test cases pass, no open P1/P2 defects.” This gives stakeholders a clear, data-driven basis for the go/no-go release decision.
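A measurable exit criterion like this can be expressed as a small go/no-go check. The 95% threshold below comes from the tip; treating it as a function parameter is a sketch, not a mandated implementation:

```python
def release_gate(passed_critical: int, total_critical: int,
                 open_p1_p2: int, critical_threshold: float = 0.95) -> bool:
    """Go/no-go decision from quantifiable exit criteria:
    at least `critical_threshold` of critical test cases pass,
    and no P1/P2 defects remain open."""
    if total_critical == 0:
        return False  # nothing executed yet -> cannot judge release readiness
    pass_rate = passed_critical / total_critical
    return pass_rate >= critical_threshold and open_p1_p2 == 0

# Walkthrough numbers: all 14 critical cases pass after the BUG-101 fix,
# and the only open defect (BUG-102) is P3, so no P1/P2 remain open.
print(release_gate(passed_critical=14, total_critical=14, open_p1_p2=0))  # True
print(release_gate(passed_critical=13, total_critical=14, open_p1_p2=0))  # False
```

Because the thresholds are explicit, a failed gate tells stakeholders exactly which criterion blocked the release, rather than a vague "testing isn't done yet."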
Warning: Deferring defects to the next sprint is a legitimate practice, but it must be a conscious, documented decision — not a way to hide unfinished work. Every deferred defect should have a stakeholder-approved justification, a target sprint for resolution, and a risk assessment explaining what could go wrong if it is not fixed. Untracked deferrals accumulate into technical debt that eventually undermines product quality.
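One lightweight way to keep deferrals honest is to require every deferred defect record to carry the fields the warning lists. The structure and field names below are a sketch, not a mandated format:

```python
# Fields every deferred defect must carry before the deferral is accepted.
REQUIRED_DEFERRAL_FIELDS = {"justification", "target_sprint", "risk_assessment", "approved_by"}

def deferral_is_documented(defect: dict) -> bool:
    """A deferral counts as documented only if every required field is present and non-empty."""
    return all(defect.get(field) for field in REQUIRED_DEFERRAL_FIELDS)

# BUG-102 from the walkthrough, filled out with hypothetical details:
bug_102 = {
    "id": "BUG-102",
    "priority": "P3",
    "justification": "No max-length on name field; cosmetic, no data loss",
    "target_sprint": "Sprint 14",  # hypothetical target sprint
    "risk_assessment": "Very long names may break the profile-page layout",
    "approved_by": "Product owner",
}

print(deferral_is_documented(bug_102))                        # True
print(deferral_is_documented({"id": "BUG-103", "priority": "P3"}))  # False: untracked deferral
```

A check like this could run as part of sprint closure, so an undocumented deferral blocks sign-off instead of quietly becoming technical debt.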

Common Mistakes

Mistake 1 — Using vague exit criteria like “testing is done”

❌ Wrong: “Testing is complete — we tested everything.”

✅ Correct: “Testing is complete: 15/15 cases executed, 14 passed, 1 deferred (P3, stakeholder-approved), 0 open P1/P2 defects, regression suite 100% green.”

Mistake 2 — Not archiving test artefacts during Test Closure

❌ Wrong: Finishing the sprint and immediately moving to the next feature without saving test cases, results, or defect reports.

✅ Correct: Archiving all artefacts in the team wiki or test management tool so that future regression testing, audits, and onboarding new team members can reference the work that was done.

🧠 Test Yourself

During STLC Phase 5 (Test Execution), two out of fifteen test cases fail. One defect is critical (P1) and one is low priority (P3). What is the best course of action?