Regression testing checklist: what to include and what to skip

Regression testing catches bugs introduced by new changes. Here's a checklist for building effective regression suites without testing everything.

TestRush Team · March 28, 2026 · 10 min read

Regression testing is re-running your existing tests after code changes to verify that nothing previously working is now broken. The concept is simple. Doing it well is where most teams struggle.

The common failure mode isn't skipping regression testing entirely. It's running a bloated suite that takes two days, produces flaky results, and makes people dread releases. Or the opposite: running a tiny ad-hoc set of tests because nobody maintained the suite, and finding out in production that the checkout flow broke because someone refactored the user session module.

PractiTest's State of Testing 2025 report lists test case maintenance as one of the top challenges QA teams face. Regression suites are where that pain hits hardest, because they grow with every feature and shrink only when someone deliberately prunes them.

What belongs in a regression suite

Your regression suite isn't "all your test cases." It's a curated subset focused on catching breakage from new changes. Here's a framework for deciding what goes in.

Core user workflows

Every regression suite starts here. These are the paths your users take most often, the ones where failure means lost revenue or lost trust:

  • Authentication flow: sign up, log in, log out, password reset, session expiration
  • Primary value action: whatever your product exists to do (placing an order, sending a message, creating a report)
  • Payment and billing: charges, refunds, subscription changes, invoice generation
  • Data CRUD: can users create, read, update, and delete their core data?

If any of these break, you'll hear about it within minutes of deploying. They're non-negotiable.

Integration points

Regressions love hiding at boundaries between systems:

  • API contracts: do your endpoints still return the expected shape and status codes?
  • Third-party services: payment gateways, email providers, auth providers (OAuth, SSO)
  • Database interactions: migrations that alter schemas, queries that touch modified tables
  • Cross-feature dependencies: features that share state, components, or data (changing user profiles might break notification preferences)

A common pattern: Developer A changes the user model. Developer B's feature reads from the user model. Nobody realizes they're connected until the regression run catches it. Or doesn't, because the test wasn't in the suite.
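One lightweight way to guard an API contract is a shape check over the response body. Here's a minimal sketch in Python; the field names are hypothetical stand-ins for whatever your clients actually depend on:

```python
# Sketch of a contract check that guards the response shape of an API
# endpoint. The field names here are hypothetical; list whatever your
# clients actually rely on.
EXPECTED_USER_FIELDS = {"id", "email", "created_at"}

def contract_violations(payload: dict, expected: set) -> set:
    """Return the expected fields missing from an API response body."""
    return expected - payload.keys()

# In a regression test you would fetch the live response and assert:
#   assert contract_violations(resp.json(), EXPECTED_USER_FIELDS) == set()
sample = {"id": 42, "email": "a@example.com"}  # simulated response body
print(contract_violations(sample, EXPECTED_USER_FIELDS))  # -> {'created_at'}
```

A check like this fails the moment Developer A's refactor drops a field Developer B's feature reads, which is exactly the boundary bug described above.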

Recently buggy areas

If a feature had bugs in the last 2-3 releases, it deserves extra regression coverage. Defects cluster. The module that broke last month is more likely to break again than one that's been stable for a year.

Check your test run history for patterns: which features fail most often? Those areas need permanent spots in your regression suite, even if they haven't changed in the current release.
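That history check is easy to script. A sketch, assuming each run record carries a feature name and an outcome (map these from whatever your test tool exports):

```python
# Sketch: mine run history for failure hot spots. The record format
# (feature, outcome) is an assumption -- adapt it to your tool's export.
from collections import Counter

def failure_hotspots(runs, top=3):
    """Count failures per feature area across past runs."""
    counts = Counter(r["feature"] for r in runs if r["outcome"] == "fail")
    return counts.most_common(top)

history = [
    {"feature": "checkout", "outcome": "fail"},
    {"feature": "checkout", "outcome": "pass"},
    {"feature": "profile", "outcome": "fail"},
    {"feature": "checkout", "outcome": "fail"},
]
print(failure_hotspots(history))  # -> [('checkout', 2), ('profile', 1)]
```

The features at the top of that list earn permanent regression slots.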

Test case maintenance remains a top-3 challenge for QA teams, with regression suites being the primary growth area — PractiTest State of Testing, 2025

Platform and environment coverage

If your application runs on multiple platforms, your regression suite needs to cover the differences:

  • Browser matrix: Chrome, Firefox, Safari, Edge (at minimum the top 2 by your analytics)
  • Mobile responsive: key workflows on mobile viewport sizes
  • OS-specific behavior: if your app has desktop clients or OS-dependent features
  • Environment parity: staging vs production configuration differences

You don't need to run every test on every platform. Run your full suite on the primary platform, then run a focused cross-browser/cross-device pass for UI-dependent tests.
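That "full suite on primary, focused pass elsewhere" rule can be encoded directly in your run planning. A sketch, where the browser list and the "ui" tag are assumptions to replace with your own analytics and taxonomy:

```python
# Sketch: decide which platforms each test runs on. The browser list and
# the "ui" tag are assumptions -- drive them from your analytics instead.
PRIMARY = "chrome"
CROSS_BROWSER = ["chrome", "firefox", "safari"]

def platforms_for(test):
    """Full matrix only for UI-dependent tests; everything else runs once."""
    return CROSS_BROWSER if "ui" in test["tags"] else [PRIMARY]

print(platforms_for({"name": "login layout", "tags": ["ui", "regression"]}))
print(platforms_for({"name": "api contract", "tags": ["regression"]}))
```

The same split keeps the matrix from multiplying your suite size by the number of browsers.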

What to leave out

Knowing what to exclude is just as important as knowing what to include. A regression suite that tries to cover everything ends up covering nothing well.

One-time verification tests

Tests written to verify a specific bug fix or one-off feature validation don't belong in the permanent regression suite. They served their purpose. Archive them.

Example: "Verify that the export CSV button works after the fix deployed on March 12." This was a bug fix verification. Once confirmed, the underlying functionality (CSV export) should be covered by a general regression test, not a time-stamped one-off.

Low-traffic admin features

Admin panels, internal tools, and settings pages that rarely change and are used by a small number of people carry low regression risk. Test them quarterly, not every release.

Cosmetic and copy tests

Checking button colors, font sizes, and marketing copy changes doesn't belong in a regression suite. These change frequently (often intentionally) and create noise when they fail. Handle visual consistency through design reviews or separate visual testing if you need it.

Deprecated features

If a feature is on its way out, don't keep its test cases in the regression suite. They'll fail eventually because of intentional removals, confusing the results.

Building the checklist

Here's a practical checklist you can adapt for your team. Not every item applies to every product, but it's a solid starting point.

Pre-run checklist

Before starting a regression run:

  1. Identify what changed. Review the changelog, commit log, or release notes for the build.
  2. Map impact areas. Which feature areas could be affected by those changes, even indirectly?
  3. Select the right tag filter. Full regression or targeted? Use test prioritization to decide.
  4. Verify test environment. Is the staging environment running the correct build? Is test data in place?
  5. Assign testers. Who runs what? Split by feature area for parallel execution.
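The tag filtering in step 3 is just a subset match. A minimal sketch; the tag names below are examples, not a fixed taxonomy:

```python
# Sketch: tag-based suite selection for targeted regression runs.
# Tag names are examples -- use whatever taxonomy your team agreed on.
def select(cases, required_tags):
    """Keep only test cases carrying every required tag."""
    return [c for c in cases if required_tags <= set(c["tags"])]

suite = [
    {"title": "login", "tags": ["smoke", "regression"]},
    {"title": "refund flow", "tags": ["regression", "billing"]},
    {"title": "export CSV", "tags": ["one-off"]},
]
# Targeted run: only billing-related regression cases.
print(select(suite, {"regression", "billing"}))
```

A full regression run is the same call with just `{"regression"}` as the filter.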

During-run checklist

While executing:

  1. Follow the script. Execute steps as written, don't improvise (save exploratory testing for separate sessions).
  2. Log failures immediately. Don't batch them for later. Note the exact step, expected result, actual result.
  3. Distinguish bugs from environment issues. "Service unavailable" might be a deployment problem, not a regression.
  4. Flag blocked tests. If a prerequisite is broken, mark downstream tests as blocked rather than skipping them silently.
  5. Note flaky tests. Tests that pass on retry are still problems. Track them for later investigation.

Post-run checklist

After completing the run:

  1. Review all failures. Are they real regressions, flaky tests, or environment issues?
  2. File bugs for confirmed regressions. Include steps to reproduce, expected vs actual, and the test item reference.
  3. Report the summary. Pass rate, critical failures, blocked areas, and a go/no-go recommendation.
  4. Update the suite. Remove tests that are obsolete, add tests for gaps you discovered.
  5. Archive the run. Results become part of your testing history for future reference.
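The summary in step 3 reduces to a few numbers and a recommendation. A sketch; the "no critical failures and at least 95% passing" threshold is an assumption, so set your own bar:

```python
# Sketch of a post-run summary: pass rate plus a go/no-go flag.
# The 95%-and-no-critical-failures threshold is an assumption.
def summarize(results):
    executed = [r for r in results if r["outcome"] in ("pass", "fail")]
    passed = sum(1 for r in executed if r["outcome"] == "pass")
    critical_fails = [r for r in executed
                      if r["outcome"] == "fail" and r.get("critical")]
    rate = passed / len(executed) if executed else 0.0
    return {"pass_rate": rate,
            "critical_failures": len(critical_fails),
            "go": rate >= 0.95 and not critical_fails}

results = [{"outcome": "pass"}] * 19 + [{"outcome": "fail", "critical": False}]
print(summarize(results))  # 95% pass rate, no critical failures -> go
```

Blocked tests are deliberately excluded from the denominator, mirroring step 4 of the during-run checklist.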

In TestRush, regression runs are tag-filtered executions of your test scripts. Tag items as "regression," start a run with that filter, and use keyboard shortcuts (1=pass, 2=fail, arrows to navigate) to move through the suite fast. Each run is saved with full results for historical comparison.

Keeping the suite healthy

A regression suite that isn't maintained becomes a liability. Here's how to prevent that.

Quarterly pruning

Every quarter, review your regression suite:

  • Remove tests for deleted features. If the feature doesn't exist anymore, the test is noise.
  • Consolidate overlapping tests. Two tests that verify the same thing with slightly different data can usually be merged.
  • Update stale steps. UI changes, renamed buttons, restructured pages — if the steps don't match the product, testers waste time figuring out what the step means now.
  • Recalibrate priority. A feature that was high-risk last quarter might be stable now. Move it down. Something new might need to move up.

The flaky test problem

Flaky tests — tests that sometimes pass and sometimes fail with no code change — are regression suite poison. They train testers to ignore failures ("oh, that one always fails, just re-run it"). When a real regression shows up in a test that's been flaky, nobody notices.

Fix or remove flaky tests. If a test fails more than twice in a row without a code change, investigate. Common causes: timing dependencies, shared test data, environment configuration drift. If it can't be stabilized within a sprint, pull it from the suite and fix it as a separate task.
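Flakiness is detectable from run history alone: any test that both passed and failed across recent runs without a code change deserves a look. A sketch, assuming a simple name-to-outcome-history mapping:

```python
# Sketch: flag tests whose outcome flipped across recent runs with no
# code change. The input format (name -> outcome history) is an assumption.
def flaky_tests(histories):
    """A test is suspect if it both passed and failed in the window."""
    return sorted(name for name, outcomes in histories.items()
                  if "pass" in outcomes and "fail" in outcomes)

recent = {
    "login": ["pass", "pass", "pass"],
    "search": ["pass", "fail", "pass"],   # flipped: investigate
    "export": ["fail", "fail", "fail"],   # consistent failure: real bug?
}
print(flaky_tests(recent))  # -> ['search']
```

Note that a consistently failing test is not flaky; it's either a real regression or an obsolete test, and either way it needs a different fix.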

Growing the suite with each release

After each release, ask: "Did we miss any regressions in production?" If yes, write a test case for it and add it to the suite. The best regression suites are built from real production incidents, not theoretical "what if" scenarios.

James Bach said, "Good testing is a challenging intellectual process." Your regression suite is a record of what your team has learned about where your product breaks. Each test case represents a past failure or a known risk. That's why maintenance matters: a stale suite is your team forgetting what it learned.

Common mistakes

  1. Running the entire test suite as "regression." Your full test suite includes smoke tests, edge cases, exploratory scenarios, and one-off verifications. A regression suite is a subset. If you're running everything every time, you're wasting time on low-value tests and slowing down releases.

  2. Never removing test cases. Suites only grow. Nobody wants to delete a test because "what if we need it?" The result is a 500-item suite where 100 items are for features that no longer exist. Schedule quarterly pruning and actually do it.

  3. No tag strategy. Without tags, you can't run subsets of your regression suite. You're forced into all-or-nothing runs. Tag tests as "smoke," "regression," "critical" so you can filter runs by priority when time is tight.

  4. Ignoring flaky tests. Every ignored flaky test reduces trust in the suite. After enough "oh, just ignore that one" comments, testers start ignoring real failures too. Fix flaky tests aggressively.

  5. Skipping the impact analysis. Running regression tests without knowing what changed is inefficient. You spend equal time on areas that couldn't possibly be affected and areas that are high risk. Spend 10 minutes reviewing the changelog before starting. It saves hours of unfocused testing.
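The impact analysis in mistake 5 can be semi-automated with a hand-maintained dependency map from modules to feature areas. A sketch; the map below is entirely hypothetical and has to be curated for your own codebase:

```python
# Sketch of impact analysis: map changed modules to the feature areas
# that need regression attention. The dependency map is a hand-maintained
# assumption, not something derived automatically.
DEPENDS_ON = {
    "checkout": {"user_session", "payments"},
    "notifications": {"user_profile"},
    "reports": {"reports_engine"},
}

def impacted_areas(changed_modules):
    """Feature areas whose dependencies intersect the changed modules."""
    return sorted(area for area, deps in DEPENDS_ON.items()
                  if deps & set(changed_modules))

print(impacted_areas(["user_session"]))  # -> ['checkout']
```

Even a rough map like this turns "run everything" into "run the checkout regression tag first," which is where the hours get saved.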

FAQ

What is regression testing?

Regression testing verifies that new code changes haven't broken existing functionality. You re-run test cases for features that should still work after the update. It catches side effects, the bugs nobody intended to introduce. Most teams run regression tests before every release, with smaller subsets running more frequently.

How many test cases should a regression suite have?

It depends on the size and complexity of your product. A typical SaaS application with 10-15 feature areas might have 150-300 regression test cases. The goal isn't a specific number but a suite that covers your core workflows, integration points, and historically buggy areas while still being runnable within your release cycle. If your suite takes longer than a day to execute, consider splitting it into tiers (prioritization helps here).

How often should I update my regression suite?

After every release. Add tests for any regressions found in production. Remove tests for deprecated features. Update steps that no longer match the product. Quarterly, do a deeper review to consolidate overlapping tests and recalibrate priorities.

Can AI help with regression testing?

Yes. AI agents through MCP can analyze your existing test suite, identify gaps in regression coverage, and generate new test cases for areas that lack coverage. They can also flag likely redundancies and suggest tests to prune. The human still decides what goes in and out, but AI speeds up the analysis. See how AI test case generation works.


Want to build a regression suite with tag-filtered runs and keyboard-fast execution? Start your free trial or try the live demo.

Frequently asked questions


How often should regression tests run?

At minimum, before every release. Many teams also run a subset (smoke tests) on every build or deploy. The frequency depends on your release cadence. Daily deploys need automated or fast manual regression. Monthly releases can afford longer, more thorough passes.

Should regression testing be manual or automated?

Both. Automate stable, repetitive checks like login flows, API responses, and data validations. Keep manual tests for UX evaluation, visual consistency, and complex user journeys that are hard to script. Most teams end up with a mix, automating 40-60% of their regression suite.

What's the difference between regression testing and retesting?

Retesting verifies that a specific bug fix works. Regression testing verifies that the fix didn't break anything else. After a developer fixes a login bug, retesting checks the login. Regression testing checks that payments, user profiles, and notifications still work too.
