Regression testing is the process of re-running existing tests after making changes to the code, to make sure the changes didn’t break anything that was working before.
Why is regression testing necessary? As modern applications grow more complex and interconnected, a small code update can ripple through the system and cause unforeseen issues elsewhere. Therefore, whenever the codebase is updated, regression tests must be executed against the existing features to check whether the update breaks anything unexpectedly.
A bug found during a regression test is called a regression.
Let's imagine that you're testing an e-commerce app called VeryGoodEcommerce.
The current features of your app include:
One day, the dev team decided to add a new feature that allows users to apply discount codes at checkout. It's a fascinating feature, and beta users love it.
However, we can never be sure how that shiny new feature interacts with the existing ones.
What if it breaks the Product browsing? What if it breaks the Price calculation?
That is why regression testing is needed. Instead of testing the new feature, you run tests for the five existing features to make sure that the new Discount feature didn't negatively affect them.
If the discount feature negatively affects some existing feature (for example, price calculation), we call that a regression. The dev team is immediately informed so they can troubleshoot the issue.
If the discount feature does not negatively affect anything, there is no regression, and everyone is happy. The new feature is then rolled out for users to enjoy.
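To make this concrete, here is a minimal sketch of what such a regression check could look like in Python's `unittest`. The `calculate_total` function and the `DISCOUNT_CODES` table are hypothetical stand-ins for VeryGoodEcommerce's price-calculation feature, not a real implementation:

```python
import unittest

# Hypothetical price-calculation module; names are illustrative assumptions.
DISCOUNT_CODES = {"SAVE10": 0.10}

def calculate_total(prices, discount_code=None):
    """Sum item prices, then apply an optional percentage discount."""
    total = sum(prices)
    if discount_code is not None:
        total *= 1 - DISCOUNT_CODES.get(discount_code, 0.0)
    return round(total, 2)

class PriceCalculationRegressionTests(unittest.TestCase):
    """Existing tests re-run after the Discount feature ships."""

    def test_total_without_discount_unchanged(self):
        # Pre-existing behavior must survive the new feature.
        self.assertEqual(calculate_total([10.00, 5.50]), 15.50)

    def test_unknown_code_does_not_alter_price(self):
        # A bad discount code must not change the original total.
        self.assertEqual(calculate_total([20.00], discount_code="BOGUS"), 20.00)
```

If any of these pre-existing assertions fail after the Discount feature is merged, you have found a regression. You would run the suite with `python -m unittest`.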
📚 Read More: Regression Testing in Agile: A Complete Guide
Typically, regression testing is done when:
The terms "retesting" and "regression testing" are often confused. It is important to differentiate them:
In simpler terms:
You can think of "new feature testing" as the opposite of "regression testing". Put simply:
Together they ensure quality across the entire application. Here is a simple comparison table:
| Aspect | Regression Testing | New Feature Testing |
|---|---|---|
| Purpose | Ensures that existing functionality continues to work after changes (e.g., bug fixes, enhancements, or new features). | Validates that a newly developed feature works as intended before release. |
| Scope | Covers previously implemented features, checking if they still function correctly. | Focuses on the new feature’s functionality, edge cases, and user experience. |
| When It's Done | After code changes, deployments, or integrations. | During development and before the feature is merged into production. |
| Methodology | Automated tests are often used for efficiency, especially in CI/CD pipelines. | Often involves manual and exploratory testing in addition to automation. |
| Example | A new UI update is added, and regression tests ensure that login, checkout, and other core functions still work. | A new "dark mode" is introduced, so new feature testing ensures it works correctly across different devices and settings. |
First, your team needs to identify what was changed and where it happened. From our experience, the most common reasons for code changes are usually:
Not all changes have the same level of risk. Doing regression testing for everything would be ideal, but in reality, time, budget, and resources are limited. Prioritization helps ensure you're focusing on areas that are most likely to break, reducing wasted effort on areas that aren’t impacted by recent changes.
Some features or components of an application are more critical than others. Modifications that have an impact on core features, or those that significantly change how the application works, should always be the top priority.
Consider:
📚 Read More: How to Build a Regression Test Plan?
This step is crucial because deciding when to start regression testing ensures that the environment is ready, the codebase is stable, and the test objectives are well-defined. Common entry points for this are:
Exit points, on the other hand, define when testing can be considered complete. They ensure that sufficient testing has been done to meet quality standards before the software is released or moved to the next phase.
In this step, we deep-dive into the plan formulated in step 3 and categorize the test cases by several factors:
📚 Read More: 10 Best Practices for Automated Regression Testing
Having test environments at hand at all times is important for frequent regression testing.
As new code is developed constantly, environments need to be stable and ready to test so they don't interfere with the planned testing schedule. In addition, a poorly configured environment can cause tests to fail for the wrong reasons, leading to missed defects and false positives/negatives.
📚 Read More: How to Set Up Test Environments
At this stage, all test cases are ready for execution. Teams can schedule test cases to run based on the test plan. Certain test cases can even be scheduled to run periodically in intervals over the entire development cycle. Time-based test execution allows teams to have greater quality control over the constant changes of their application.
📚 Read More: Test execution: A Complete Guide
This stage gives important insights for future test runs. Comprehensive analytics generated from test results enable QA managers and other key stakeholders to quantify testing efficiency, assess resource utilization, and measure the effectiveness of the testing strategy.
Here is what a test report may look like. In Katalon TestOps, after running regression test suites, QA teams can access a dashboard with rich insights about the effectiveness of that test run.
Yes, regression testing should be automated. Here's why:
However, sometimes manual regression testing still makes sense:
Exploratory testing: When intuition, curiosity, and user empathy are needed.
UI/UX validation: Human eyes still catch layout glitches and visual regressions better.
Highly dynamic interfaces: Where the cost of automation maintenance outweighs the benefit.
Short-lived features: If a feature will be deprecated soon, don’t waste time automating its tests.
📚 Read More: Manual Regression Testing vs. Automated Regression Testing
Let's meet Alex Martins, who has 20+ years in software quality and test automation at Fortune 500 companies, sharing his thoughts on how to automate regression tests when your team is spending all of its resources on new feature testing:
Focus on the parts of your app that break the most or matter the most, like checkout in e-commerce. Use code coverage tools, bug history, and your own experience to decide what needs testing. Don’t just test everything blindly.
Not every test should be automated. Automate the tests that are:
Repetitive
Stable
High-impact
Good examples: login, form submission, backend calculations. Leave the messy, unpredictable stuff (like exploratory testing) to humans.
To decide which test cases are suitable for automation, we usually use a Test Case Selection Matrix, which is a practical scoring system that helps you evaluate test cases based on meaningful criteria. It brings objectivity and structure to your automation decisions by focusing on five key factors:
Run Frequency – how often the test is executed
Stability – how consistently it behaves across builds
Business Criticality – how important the feature is to users or revenue
Reusability – whether the logic is shared with other tests
Manual Effort – how painful it is to do by hand
Here's how it works:
Score each factor from 0 to 1
Add up the total (max = 5)
Set a cutoff (e.g., 3.5+ = worth automating)
Use the matrix during backlog grooming, test planning, or automation roadmap reviews to focus your efforts where they matter most.
Here's an example:
| Test Case | Run Freq | Stability | Business Critical | Reusable | Manual Effort | Score | Automate? |
|---|---|---|---|---|---|---|---|
| Login Flow | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 5.0 | ✅ Yes |
| Newsletter Popup UI | 0.2 | 0.3 | 0.2 | 0.2 | 0.1 | 1.0 | ❌ No |
You can totally establish a different weighting system depending on your needs.
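The scoring described above is straightforward to turn into code. Here is a small sketch in Python; the factor names and the 3.5 cutoff mirror the matrix above, while the equal default weights and the sample score values are illustrative assumptions:

```python
# Sketch of the Test Case Selection Matrix: five factors, each scored 0..1.
FACTORS = ["run_freq", "stability", "business_critical", "reusable", "manual_effort"]

def automation_score(scores, weights=None):
    """Sum per-factor scores; optional weights let you tune the matrix."""
    weights = weights or {f: 1.0 for f in FACTORS}
    return round(sum(scores[f] * weights[f] for f in FACTORS), 2)

def should_automate(scores, cutoff=3.5, weights=None):
    """Apply the cutoff rule (e.g., 3.5+ means worth automating)."""
    return automation_score(scores, weights) >= cutoff

# Sample scores matching the example table above.
login_flow = {f: 1.0 for f in FACTORS}
newsletter_popup = {"run_freq": 0.2, "stability": 0.3, "business_critical": 0.2,
                    "reusable": 0.2, "manual_effort": 0.1}

print(automation_score(login_flow))        # 5.0 -> automate
print(automation_score(newsletter_popup))  # 1.0 -> skip
```

Passing a custom `weights` dict is one way to implement the different weighting systems mentioned above, for instance doubling the weight of `business_critical` for revenue-facing flows.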
You’ve got two paths:
Build your own test automation framework
Use off-the-shelf tools (like Katalon, Selenium, etc.)
Pick based on your team’s skills, time, and long-term goals.
📝 Here's a list of best automated regression testing tools we've curated
There are 3 common ways to run automated tests:
Batch runs: Group similar tests together.
Scheduled runs: Run tests on a timer (e.g., every night).
CI/CD runs: Run tests automatically with each code update.
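As a rough illustration of the "batch runs" idea, here is a toy Python runner that groups tests by tag and executes one group at a time. The test names and tags are assumptions for illustration; in practice a framework like pytest or Katalon handles this for you, and the same entry point could be invoked from a nightly scheduler or a CI/CD pipeline:

```python
# Toy batch runner: tests are registered under tags, and a whole tag's
# worth of tests runs together as one batch.
def test_login(): assert True
def test_checkout(): assert True
def test_search(): assert True

SUITE = {
    "smoke": [test_login, test_search],
    "checkout": [test_checkout],
}

def run_batch(tag):
    """Run every test registered under a tag; return (passed, failed) counts."""
    passed = failed = 0
    for test in SUITE.get(tag, []):
        try:
            test()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed

print(run_batch("smoke"))  # (2, 0)
```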
Decide where to run tests: local machines, mobile devices, cloud platforms, or headless browsers. If you can’t afford all the devices, use cloud services like Katalon TestCloud.
This technique involves re-running the entire test suite to ensure that recent changes have not introduced new defects.
Although highly reliable, this technique is expensive and time-consuming, without much room for strategic considerations. This makes it less practical for large applications with frequent updates.
Instead of testing everything, this method selects a subset of test cases that are most relevant to the modified code. It balances cost and efficiency by focusing only on affected functionalities.
This technique ranks test cases based on their importance and likelihood of detecting critical defects. High-priority test cases are executed first, ensuring that significant issues are caught early.
A combination of regression test selection and test case prioritization, this approach optimizes efficiency by selecting only the most critical test cases while ensuring they are executed in order of priority.
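A minimal sketch of this hybrid approach in Python might look like the following. The change-to-test mapping and the priority values are illustrative assumptions, not a prescribed scheme:

```python
# Hybrid technique: select only the tests affected by changed modules,
# then execute them in descending priority order.
TEST_CATALOG = [
    # (test name, modules it covers, priority: higher runs first)
    ("test_checkout_total", {"pricing", "cart"}, 9),
    ("test_apply_discount", {"pricing"}, 8),
    ("test_product_search", {"search"}, 5),
    ("test_profile_page", {"accounts"}, 3),
]

def select_and_prioritize(changed_modules):
    """Keep tests touching any changed module, sorted by descending priority."""
    affected = [t for t in TEST_CATALOG if t[1] & changed_modules]
    return [name for name, _, _ in sorted(affected, key=lambda t: -t[2])]

print(select_and_prioritize({"pricing"}))
# ['test_checkout_total', 'test_apply_discount']
```

The selection step keeps the suite small, and the prioritization step ensures that if the run is cut short, the highest-impact checks have already executed.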
Manual regression testing can be tedious and inefficient, especially when the same test steps need to be repeated for each iteration. Using a regression testing tool is a more effective approach. These tools enable you to create an automated regression test suite that can be run in batches whenever a new build is available.
Katalon has helped QA teams around the world quickly automate regression tests for web applications, APIs, and mobile apps.
With a Katalon license, Automation Testers and QA teams can:
Selenium is a popular tool for regression and web application testing due to its flexibility and compatibility with major browsers (Chrome, Safari, Firefox) and operating systems (macOS, Windows, Linux).
Key features:
Selenium's effectiveness depends on the coding skills of testers: experienced users can maximize its potential, while less technical teams may face challenges.
📝 Read More: Katalon vs Selenium: A Detailed Comparison
Playwright is a modern end-to-end testing framework known for its speed and versatility across multiple languages (JavaScript, TypeScript, Python, C#, Java). It’s widely seen as a strong alternative to both Selenium and Cypress.
Highlights:
Seamless execution across Chromium, Firefox, and WebKit with noticeably faster test runs compared to Selenium. Works well for both web automation and API testing.
Ships with auto-waiting, screenshot capture, tracing, and a powerful codegen tool for recording interactions and generating test code. Great for teams wanting immediate productivity out-of-the-box.
Official bindings for JavaScript/TypeScript, Python, Java, and .NET (C#). TypeScript is the most mature and best-documented, though Python and C# also work reliably for most scenarios.
Easy parallel execution and test isolation at the process level without requiring third-party tools. This is a key advantage over Cypress, especially for scaling test suites.
Auto-waiting for UI elements reduces the need for custom sleeps or explicit waits.
Designed from the ground up for modern web apps. Works well with GitLab, Azure, Docker, and WSL, though some users report rougher setup experiences in non-TypeScript environments.
Regression testing is key to improving the overall quality of the product and the user experience. The right regression testing tools help you identify surfaced defects and eliminate them early in the pipeline.
With Katalon, you can write regression tests in no-code, low-code, and even full-code mode for maximum flexibility, schedule them, and execute them across environments.