A test plan tells your team what you're testing, how you're testing it, and when you'll be done. It's not a novel. Most useful test plans fit on one to three pages, and teams that keep them short tend to actually follow them.
The PractiTest State of Testing 2025 report found that test case maintenance is the top challenge QA teams face year after year. Part of that problem is bad planning. When you don't define scope upfront, you end up with test cases scattered across features nobody changed, while the areas that actually need attention go untested.
## What goes into a test plan
A test plan answers five questions. If it doesn't answer all five, it's incomplete. If it answers more than these five, it's probably too long.
### 1. What are we testing?
List the features and areas under test. Be specific. "The checkout flow" is better than "the application." If a feature didn't change in this release, say so explicitly and note whether you'll regression-test it or skip it.
Bad scope definition: "Test all features of the application."
Good scope definition: "Test the new guest checkout flow (ticket PROJ-1234), the updated payment error handling (PROJ-1240), and regression on the existing cart page."
### 2. What are we NOT testing?
This is the section most people skip, and it's arguably the most important one. Explicitly listing what's out of scope prevents the "but I thought someone was testing that" conversation three days before release.
Out-of-scope examples: "Admin panel (no changes this sprint), mobile app (separate test cycle), third-party payment gateway internals (we test our integration, not Stripe's logic)."
### 3. How are we testing it?
Define your approach. Manual? Automated? Both? Which browser/device combinations? What environments?
For most teams, this looks like: manual testing on staging, Chrome and Firefox, desktop and mobile viewports. If you have automation, note which suites run as part of CI and which areas still need manual coverage.
### 4. Who is testing?
Name people or roles. "QA team" is vague. "Maria runs the checkout tests, Alex handles the payment integration, guest checkout goes to an external tester via guest access link" is actionable.
When external testers or stakeholders need to run test cases, TestRush guest access lets them execute tests through a link without creating an account. No onboarding friction, no extra licenses.
### 5. When does testing start and end?
Set dates. "Testing begins when the build is ready" is a trap because builds are never ready on time. Instead: "Testing starts March 12. If the build isn't on staging by March 12, we escalate to the PM. Testing ends March 15 or when all P0 test cases pass, whichever comes later."
## A practical test plan template
Here's a template you can actually use. I've seen teams paste this into a wiki page or straight into their test management tool's description field.
Release: v2.4.0
Test window: March 12-15, 2026
Build: staging-2.4.0-rc1
QA lead: [Name]
In scope:
- Guest checkout flow (new)
- Payment error messages (updated)
- Cart page regression
Out of scope:
- Admin panel (unchanged)
- Mobile native app
- Performance testing (separate cycle)
Approach:
- Manual testing on staging
- Browsers: Chrome 120+, Firefox 119+, Safari 17+
- Devices: Desktop, mobile viewport (375px)
- Automation: existing Playwright suite for login and search (runs in CI)
Test cases:
- Guest checkout: 18 cases, tagged "checkout" and "smoke"
- Payment errors: 12 cases, tagged "payments"
- Cart regression: 25 cases, tagged "regression"
- Total: 55 cases
Schedule:
- March 12: Smoke run (checkout + payments, smoke tag)
- March 13-14: Full regression
- March 15: Retest failed items, sign-off
Entry criteria: Build deployed to staging, test data seeded
Exit criteria: Zero P0 failures, all P1 failures have tickets
Risks:
- Payment gateway sandbox may be unstable (happened last sprint)
- Guest checkout has no automated tests yet
That's it. One page. Everyone on the team can read it in two minutes and know what they're responsible for.
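One reason the template names browser versions instead of saying "all browsers" is that the environment matrix multiplies fast. A quick sketch of how the approach section's lists expand into concrete combinations (the labels are illustrative, taken from the template above):

```python
from itertools import product

# Browser and viewport lists from the plan's approach section.
browsers = ["Chrome 120", "Firefox 119", "Safari 17"]
viewports = ["desktop (1280px)", "mobile (375px)"]

# Each (browser, viewport) pair is one environment the manual pass must cover.
matrix = list(product(browsers, viewports))
```

Three browsers times two viewports is already six environments per test case; "all browsers" would be unbounded.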
## Real example: e-commerce release
Let me walk through how a real team might build a plan for a feature release. Say your company sells shoes online, and this sprint adds a "buy now, pay later" option to checkout.
Step 1: Define scope from the ticket. The feature ticket says: "Add Klarna integration to checkout. User selects 'Pay Later' at payment step. Order is created with pending payment status. Klarna webhook confirms payment within 24 hours."
From that, your test areas are: the UI (payment option appears and is selectable), the integration (order creation with correct payload to Klarna), and the webhook handler (order status updates when Klarna confirms).
Step 2: Identify what you're skipping. You're not testing Klarna's internal logic. You're not retesting the existing credit card flow unless the code changed. You're not testing the mobile app because this feature is web-only for now.
Step 3: Write the test cases. For the UI: does the option appear? Does it disappear when the cart total is under $35 (Klarna's minimum)? Does selecting it change the order summary text? For the integration: does the API call go out with the right amount and currency? What happens on timeout? For the webhook: does a valid webhook update the order? Does an invalid signature get rejected?
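The webhook signature case is worth pinning down precisely, because "invalid signature gets rejected" is easy to hand-wave. A minimal sketch of what that test might check, assuming an HMAC-SHA256 signing scheme; the secret, payload shape, and verification helper here are hypothetical stand-ins, not Klarna's actual scheme:

```python
import hashlib
import hmac

SECRET = b"test-webhook-secret"  # hypothetical shared secret for the sandbox

def verify_signature(payload: bytes, signature: str) -> bool:
    """Accept the webhook only if its HMAC-SHA256 signature matches."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, signature)

def test_valid_signature_accepted():
    payload = b'{"order_id": "1234", "status": "confirmed"}'
    good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    assert verify_signature(payload, good)

def test_invalid_signature_rejected():
    payload = b'{"order_id": "1234", "status": "confirmed"}'
    assert not verify_signature(payload, "deadbeef")
```

Writing the case at this level of detail also surfaces questions for the developer early, such as which header carries the signature and what the handler returns on rejection.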
In TestRush, you'd create a script with three headers ("Pay Later UI", "Klarna Integration", "Webhook Handler") and child items under each. Tag the first group "smoke" so you can run a quick check before diving into the full suite.
Step 4: Assign and schedule. UI tests go to whoever knows the checkout page best. Integration tests go to someone comfortable with API testing. The webhook tests might need a developer's help to simulate callbacks.
> 45% of QA teams still haven't integrated AI into their testing workflows, despite most saying it's critical for their future. (PractiTest State of Testing, 2025)
Step 5: Define done. All smoke tests pass. All P0 cases (order creation, payment confirmation) pass. P1 failures (edge cases like currency formatting) have Jira tickets and are approved to ship by the PM.
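"Define done" criteria like these can be checked mechanically at sign-off time instead of argued about in a meeting. A minimal sketch, assuming each result is a dict with a priority, a status, and an optional ticket; the data shape is hypothetical, not a TestRush or Jira data model:

```python
def ready_to_ship(results):
    """Exit criteria: every P0 case passes; every P1 failure has a ticket."""
    for r in results:
        if r["priority"] == "P0" and r["status"] != "pass":
            return False
        if r["priority"] == "P1" and r["status"] == "fail" and not r.get("ticket"):
            return False
    return True

results = [
    {"priority": "P0", "status": "pass"},                         # order creation
    {"priority": "P0", "status": "pass"},                         # payment confirmation
    {"priority": "P1", "status": "fail", "ticket": "PROJ-1301"},  # currency edge case
]
```

Here `ready_to_ship(results)` is true: both P0 cases pass and the one P1 failure is ticketed. Add an unticketed P1 failure or a failing P0 and it flips to false.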
## When test plans go wrong
I've seen a few patterns that consistently cause problems.
The 20-page plan nobody reads. Some organizations require IEEE 829-format test plans. These documents define everything from "test item transmittal reports" to "test incident reports." They satisfy compliance auditors and collect dust everywhere else. If your company requires this format, write the full document for compliance and extract a one-page summary for the people actually doing the testing.
The plan that never updates. A test plan written on day one of a two-week sprint is often wrong by day five. Features get descoped, priorities shift, builds arrive late. Treat the plan as a living document. Update it when scope changes instead of pretending everything is going according to the original plan.
No exit criteria. Without clear exit criteria, testing never officially ends. Someone always wants "one more pass." Define what done looks like before you start: zero P0 failures, specific pass rates, or a sign-off from the QA lead.
## Lightweight plans for agile teams
If your team ships weekly, a full test plan per release is overkill. Here's what works instead:
Write a test plan per feature, not per sprint. Each feature gets three to five sentences: what's in scope, what approach, and what's the exit criteria. Store this in the script description in your test management tool, not in a separate document.
Use tags as your plan. Tag test cases with "sprint-42" or "payments-v2" and you have an instant filtered view of what needs to run. In TestRush, you can start a test run filtered by tag, so your "plan" is just a well-organized set of tags and a one-line note about when to run them.
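The tag-as-plan idea is just a filter over your case list. A sketch with hypothetical case data, not TestRush's actual API, to show how little machinery the approach needs:

```python
cases = [
    {"id": "TC-1", "title": "Guest checkout happy path", "tags": {"checkout", "smoke", "sprint-42"}},
    {"id": "TC-2", "title": "Declined card message", "tags": {"payments", "sprint-42"}},
    {"id": "TC-3", "title": "Cart quantity update", "tags": {"regression"}},
]

def plan_run(cases, tag):
    """Build a test run from every case carrying the given tag."""
    return [c["id"] for c in cases if tag in c["tags"]]
```

`plan_run(cases, "sprint-42")` yields `["TC-1", "TC-2"]`, and that filtered list is the sprint's plan: no separate document to keep in sync.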
Lisa Crispin said it well: "The whole team is responsible for quality, not just the testers." A lightweight plan that developers and PMs can read in 30 seconds gets more buy-in than a formal document they never open.
## Common mistakes
- Writing the plan after testing starts. The plan exists to guide testing, not to document what already happened. If you write it retroactively, you've missed the point. Write it when the feature is being specced, not when the build is already on staging.
- Listing "all browsers" without being specific. "Test on all browsers" means nobody tests anywhere systematically. Pick three browsers, name the versions, and test those. You can expand later if you find issues.
- Skipping the "out of scope" section. When everything is in scope, nothing is prioritized. The out-of-scope section is how you protect your team from scope creep during the test window. See our guide to test prioritization strategies for a framework on what to test first when time is limited.
- Not connecting test cases to the plan. A plan that says "test checkout" without linking to the actual test cases is a wishlist, not a plan. Reference specific scripts or case IDs so anyone can go from plan to execution without guessing. Our guide on how to write test cases covers the structure that makes cases easy to link and find.
## FAQ
### How often should I update a test plan?
Update it whenever scope changes. A feature gets descoped mid-sprint? Remove it from the plan. A new edge case surfaces during development? Add it. The plan is useful precisely because it reflects reality. A stale plan is worse than no plan because it gives false confidence.
### Can AI help write test plans?
Yes, and this is one of the better uses of AI in QA. Feed your feature requirements into an AI agent via MCP and it can draft a plan with scope, test areas, and even generate the test cases for each area. You still review and adjust, but the first draft takes minutes instead of an hour.
### What tools do I need for test planning?
At minimum, a place to store your plan and a place to store your test cases. Some teams use a wiki for plans and a separate tool for cases. Others keep everything in the test management tool. TestRush lets you store the plan in the script description and the test cases as items within the same script, so there's no context switching between the plan and the execution.
### Should test plans include test data?
Yes, at least reference it. If your checkout tests need a user with a specific address or a product in a certain category, note where that data lives. "Test data: see seed script in /scripts/test-data/checkout.sql" saves testers from guessing or creating duplicate data.
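A seed reference is easiest to honor when the seed itself is tiny and scripted. A self-contained sketch using an in-memory SQLite database; the schema and values are hypothetical stand-ins for whatever a real checkout.sql would create:

```python
import sqlite3

# Hypothetical seed data for the checkout tests: one known user, one known product.
SEED_SQL = """
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, country TEXT);
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, category TEXT, price_cents INTEGER);
INSERT INTO users VALUES (1, 'checkout-tester@example.com', 'DE');
INSERT INTO products VALUES (1, 'Trail Runner', 'shoes', 8900);
"""

def seeded_db():
    """Return a connection with the checkout test data already loaded."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SEED_SQL)
    return conn

conn = seeded_db()
user_count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Because the seed is versioned alongside the tests, every tester starts from the same known state instead of improvising data on staging.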
Need a place to store test plans alongside your test cases? Start free with TestRush or explore the demo to see how scripts and runs work together.