If you have ever wondered why experienced testers make different decisions than beginners, the answer often lies in the seven principles of software testing. Defined by ISTQB and refined over decades, these principles are not academic theory — they are practical truths that prevent wasted effort, unrealistic expectations and missed defects. Memorising the names is easy; understanding why each principle matters is what makes you effective on the job.
The Seven Principles — What They Mean in Practice
The seven testing principles form the philosophical backbone of the profession. They apply equally to manual testing, automated testing and every hybrid approach in between.
- Testing shows the presence of defects, not their absence. You can find bugs, but you can never guarantee there are none left.
- Exhaustive testing is impossible. With virtually infinite input combinations, you must use risk-based prioritisation and techniques like equivalence partitioning to select the most valuable tests.
- Early testing saves time and money. A defect found in requirements costs a fraction of one found in production.
- Defects cluster together. A small number of modules typically contain the majority of defects — Pareto’s 80/20 rule applies to software quality.
- The pesticide paradox. Running the same tests repeatedly will stop finding new defects. Test cases must be regularly reviewed and updated.
- Testing is context dependent. How you test a medical device is fundamentally different from how you test a social media app.
- Absence-of-errors fallacy. A defect-free product that does not meet user needs is still a failure.
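The clustering principle lends itself to a quick numerical sketch. The module names and defect counts below are hypothetical, chosen only to show the Pareto-style skew you typically see in real defect data:

```python
# Hypothetical defect counts per module, illustrating defect clustering.
defect_counts = {
    "payments": 48,
    "auth": 31,
    "reporting": 9,
    "search": 6,
    "profile": 4,
    "settings": 2,
}

total = sum(defect_counts.values())  # 100 defects overall
# The two busiest modules (2 of 6, roughly a third of the codebase)...
top_two = sorted(defect_counts.values(), reverse=True)[:2]
share = sum(top_two) / total
print(f"Top 2 of 6 modules hold {share:.0%} of all defects")
# Output: Top 2 of 6 modules hold 79% of all defects
```

In practice, a report like this is a strong hint about where to concentrate regression testing and code review effort.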
```python
# Demonstrating Principle 2: Exhaustive testing is impossible
# A simple form with 3 fields can produce an enormous input space.
username_options = 100    # 100 distinct usernames
password_options = 1000   # 1,000 distinct passwords
role_options = 5          # 5 user roles

total_combinations = username_options * password_options * role_options
print(f"Total input combinations: {total_combinations:,}")
# Output: Total input combinations: 500,000

# In a real application with 20+ fields, the number exceeds
# trillions — proving exhaustive testing is impossible.
# Testers use techniques like equivalence partitioning and
# boundary value analysis to select a manageable, high-value
# subset of tests.
```
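Equivalence partitioning and boundary value analysis, mentioned in the comments above, can be sketched concretely. The validation rule here is hypothetical: an age field that accepts integers from 18 to 65 inclusive.

```python
# Hypothetical rule: an "age" field accepts integers from 18 to 65 inclusive.
MIN_AGE, MAX_AGE = 18, 65

def is_valid_age(age: int) -> bool:
    return MIN_AGE <= age <= MAX_AGE

# Equivalence partitioning: test one representative value per partition
# instead of every possible integer.
partitions = {
    "below range": 10,   # invalid partition
    "in range": 40,      # valid partition
    "above range": 90,   # invalid partition
}

# Boundary value analysis: values at and adjacent to each boundary,
# where off-by-one defects tend to cluster.
boundaries = [17, 18, 19, 64, 65, 66]

for label, value in partitions.items():
    print(f"{label:>12}: {value} -> valid={is_valid_age(value)}")
for value in boundaries:
    print(f"boundary {value} -> valid={is_valid_age(value)}")
```

Nine test values replace millions of possible inputs while still exercising every partition and every boundary edge.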
Common Mistakes
Mistake 1 — Assuming 100% test coverage means zero defects
❌ Wrong: “We have 100% code coverage, so there are no bugs.”
✅ Correct: “100% code coverage means every line was executed during testing, but it says nothing about whether every logical condition, boundary or integration path was tested. Coverage is a useful metric, not a guarantee.”
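A minimal sketch makes the point. The discount function below is hypothetical; the two assertions execute every line of it (100% line coverage), yet a boundary defect survives untested:

```python
# Hypothetical rule: orders of 100 or more get 10% off.
def discount(order_total: float) -> float:
    if order_total > 100:          # bug: should be >= 100
        return order_total * 0.9
    return order_total

# These two tests execute both branches, so line coverage is 100%...
assert discount(150) == 135.0
assert discount(50) == 50.0

# ...but the boundary case slips through: an order of exactly 100
# gets no discount, even though the rule says it should.
print(discount(100))  # Output: 100
```

Full coverage told us every line ran; it told us nothing about the untested boundary at exactly 100.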
Mistake 2 — Never updating the test suite (ignoring the pesticide paradox)
❌ Wrong: Running the exact same 200 regression tests for a year without adding or modifying any.
✅ Correct: Reviewing the regression suite each sprint, retiring obsolete tests and adding new cases based on recent changes and reported defects.
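One way to make that review habit concrete is to track when each test was last looked at and flag stale ones. Everything below is a hypothetical sketch — the test names, the quarterly review policy and the fixed "today" date are all assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical regression-suite records: test name plus last review date.
suite = [
    {"name": "test_login_happy_path", "last_reviewed": date(2024, 1, 10)},
    {"name": "test_checkout_discount", "last_reviewed": date(2025, 6, 2)},
    {"name": "test_legacy_export", "last_reviewed": date(2023, 3, 15)},
]

REVIEW_INTERVAL = timedelta(days=90)  # assumed policy: review each quarter
today = date(2025, 7, 1)              # fixed date so the example is reproducible

stale = [t["name"] for t in suite
         if today - t["last_reviewed"] > REVIEW_INTERVAL]
print("Tests overdue for review:", stale)
# Flags test_login_happy_path and test_legacy_export as candidates
# for review, update or retirement.
```

Even a simple report like this counters the pesticide paradox by forcing the suite to evolve alongside the product.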