You’ve built a stunning bridge, an architectural marvel. But before cars can cross it, you need to make sure it can withstand the weight of traffic, handle weather changes, and hold up over time. That’s where testing comes in. And just like building that bridge, you need a strategy for testing your software.
For QA managers, knowing how to craft a good test strategy is crucial. It’s not just about running tests, but about being strategic: knowing what to test, how to test it, and, most importantly, why you’re testing it in the first place.
In this article, we’ll walk you through everything you need to know to build a robust test strategy.
A test strategy is a high-level document that outlines the overall approach and guiding principles for software testing. It provides a structured framework that directs the testing team on how to efficiently and effectively conduct tests, ensuring that all key aspects of the software are validated.
We identify 3 major types of test strategies:
The first is the balance between static and dynamic testing. Static testing involves evaluating code or documents without executing them (catching issues early), while dynamic testing requires actually running the software to validate its behavior in real scenarios.
Why balance them? With static testing, we want to catch defects early, whereas with dynamic testing, we want to verify the software’s real-world behavior.
Without static testing, bugs become so deeply entrenched in the codebase that they take considerable time, effort, and money to resolve. Without dynamic testing, even code that looks correct on paper can fail in practice due to overlooked execution paths or integration problems.
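To make the contrast concrete, here is a minimal Python sketch with hypothetical functions: a static type checker such as mypy flags the first bug without executing anything, while the second bug passes type checking and only surfaces when a test actually runs the code.

```python
# Hypothetical functions illustrating static vs. dynamic defect detection.

def total_price(prices: list[float]) -> float:
    return sum(prices)

def demo() -> None:
    # Static testing: a type checker (e.g., mypy) flags this call
    # as an error without ever running the program.
    total_price("19.99")  # error: str is not list[float]

def last_n_orders(orders: list[str], n: int) -> list[str]:
    # Dynamic testing territory: this type-checks fine, but n == 0
    # returns ALL orders, because orders[-0:] equals orders[0:].
    return orders[-n:]

def test_last_n_orders_with_zero() -> None:
    assert last_n_orders(["a", "b", "c"], 0) == []  # fails when executed
```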
The second is the balance between preventive and reactive testing: anticipating potential defects versus responding to issues after they have occurred.
Why balance them? Preventive measures are great, but they usually focus on known risks and expected issues. The cost of fixing defects is lower since they are caught before being embedded deep in the system.
However, no team can predict every problem upfront. It pays to leave room for the notorious unknown unknowns: edge cases that surface only when the system is operational or being integrated with other components. That’s exactly where a reactive test strategy shines.
The third is the balance between manual testing and automation testing.
Why balance them? Manual testing is a great starting point. A common best practice is to run exploratory testing sessions to find bugs. After that, through an automation feasibility assessment, the team decides whether a specific test scenario is worth automating. If the answer is yes, the team can leverage automation test scripts and testing tools to automate it for future executions, as in the sketch below.
Learn More: How To Do Automation Testing?
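For instance, suppose exploratory sessions keep returning to the login flow and the feasibility assessment says it is worth automating. A minimal pytest-plus-Selenium sketch might look like this; the URL, element locators, and credentials are all placeholders:

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def browser():
    # One fresh Chrome session per test, closed afterwards.
    driver = webdriver.Chrome()
    yield driver
    driver.quit()

def test_valid_login_shows_dashboard(browser):
    browser.get("https://staging.example.com/login")   # placeholder URL
    browser.find_element(By.ID, "email").send_keys("qa@example.com")
    browser.find_element(By.ID, "password").send_keys("not-a-real-password")
    browser.find_element(By.ID, "submit").click()
    assert "Dashboard" in browser.title
```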
Start your test strategy with the concept of the test pyramid, which consists of 3 levels: unit tests at the base, integration tests in the middle, and end-to-end (E2E) tests at the top.
How do you incorporate this into your test strategy? The pyramid can be read as the rough share each type of test should take in your automated suite: E2E tests should account for no more than 5-10% of the overall test suite, integration tests for approximately 15-20%, and unit tests for up to 70-80%.
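As an illustration, here is a minimal pytest sketch of the two lower levels; the discount logic is hypothetical, and the `unit`/`integration` markers would need to be registered in your pytest configuration:

```python
import sqlite3
import pytest

def apply_discount(price: float, percent: float) -> float:
    # Pure business logic: the cheap, fast base of the pyramid.
    return round(price * (1 - percent / 100), 2)

@pytest.mark.unit  # the bulk of the suite (~70-80%)
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

@pytest.mark.integration  # fewer tests (~15-20%): touches a real dependency
def test_discount_survives_storage_round_trip():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    conn.execute("INSERT INTO orders VALUES (1, ?)",
                 (apply_discount(100.0, 10),))
    (total,) = conn.execute("SELECT total FROM orders WHERE id = 1").fetchone()
    assert total == 90.0

# E2E tests (~5-10%) would drive the full UI in a real browser; they are
# deliberately the smallest slice because they are slow and brittle.
```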
After that, you’ll need to identify the objectives. What exactly are you trying to achieve? What does success look like? For example, one test session may validate functional aspects, while another targets non-functional aspects (security, performance, usability).
These objectives usually come from project requirements. Review the project’s requirements, user stories, and business objectives to align the strategy with the desired outcomes of the software.
From those objectives, we develop a test scope.
Scope in a test strategy outlines the boundaries of what will and will not be tested during the software development process. It defines the areas, features, or components that will be the focus of testing. You’ll need to list down:
- The features and components to be tested (in scope)
- The features and components that will not be tested (out of scope)
Why is this necessary? We want to prevent testing scope creep, where the team spends effort testing unnecessary features. Deciding which areas to test is highly strategic when it comes to meeting sprint requirements.
We also want to prioritize critical areas. Ask yourself:
- Which features would cause the most business damage if they failed?
- Which areas are used most heavily by end users?
- Which components have changed recently or have a history of defects?
There are so many types of testing to choose from, each serving a different purpose. Broadly, they fall into two groups: functional testing, which validates features against requirements, and non-functional testing, which covers qualities such as performance, security, and usability.
Learn More: 15 Types of QA Testing You Should Know
Agile is the go-to approach for most QA teams today. Instead of treating testing as a separate phase, it is integrated throughout the development process. Testing occurs continuously at each step, enabling testers to work closely with developers to ensure fast, frequent, and high-quality delivery.
Agile also allows for shift-left testing, where you essentially “push” testing to earlier stages of development and weave testing into the development work itself.
Read More: What is Shift Left Testing? How To Shift Left Your Test Strategy?
Entry and exit criteria act as checkpoints to ensure the product is both stable and testable before major testing efforts begin, and ready to move forward once testing is complete. There are 2 types: entry criteria and exit criteria.
Entry criteria are the conditions that must be fulfilled before system testing can begin, such as:
- All core features are implemented and basic functionality works
- 100% of unit tests pass
- A code freeze is in effect and all code is checked into the repository
- All known bugs are logged in the bug-tracking system
Exit criteria are the conditions that must be met before system testing is considered complete and the product can proceed to the next stage, such as:
- All planned system tests have been executed
- All critical “happy path” scenarios pass
- Builds are generated successfully for all supported platforms
- No showstopper defects remain, and open bug counts are below the agreed thresholds
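Criteria like these are easy to encode as an automated gate, for example in a CI pipeline. A minimal sketch, with made-up counts and thresholds borrowed from the sample strategy later in this article:

```python
def exit_criteria_met(results: dict) -> bool:
    # In practice these counts would come from your test management tool.
    return (
        results["tests_executed"] == results["tests_planned"]
        and results["critical_failures"] == 0   # zero showstoppers
        and results["open_major_bugs"] <= 5     # agreed thresholds
        and results["open_minor_bugs"] <= 10
    )

nightly = {
    "tests_planned": 420, "tests_executed": 420,
    "critical_failures": 0, "open_major_bugs": 3, "open_minor_bugs": 7,
}
print("Ready to move on?", exit_criteria_met(nightly))  # True
```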
The test environment is where the actual testing takes place. It should mirror the production environment as closely as possible, with additional tools and features to assist testers in their job. It has 2 major parts: hardware and software.
For performance testing specifically, you’ll also need to set up the network components to simulate real-world networking conditions (network bandwidth, latency simulations, proxy settings, firewalls, VPN configurations, or network protocols).
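For a rough simulation, Selenium’s Chrome driver exposes basic network emulation, as in the sketch below (the URL is a placeholder; for serious load testing you would reach for a dedicated tool such as JMeter):

```python
from selenium import webdriver

driver = webdriver.Chrome()
# Roughly emulate a 3G connection: added latency plus capped throughput.
driver.set_network_conditions(
    offline=False,
    latency=100,                          # extra round-trip delay, in ms
    download_throughput=750 * 1024 // 8,  # ~750 kbps, in bytes/second
    upload_throughput=250 * 1024 // 8,    # ~250 kbps, in bytes/second
)
driver.get("https://staging.example.com")
print(driver.title)
driver.quit()
```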
Here’s an example for you:
| Category | Mobile Testing | Web Testing |
|---|---|---|
| Hardware | iPhone 13 Pro (iOS 15); iPad Air (iOS 14); Google Pixel 6 (Android 12); Samsung Galaxy S21 (Android 11) | Windows 10: Intel Core i7, 16GB RAM, 256GB SSD; macOS Monterey: Apple M1 Chip, 16GB RAM, 512GB SSD |
| Software | Google Chrome (across versions) | |
| Network Config | Simulate 3G, 4G, 5G, and high-latency environments; Wi-Fi and Ethernet connections | |
| Database | MySQL 8.0 | |
| CI/CD Integration | Jenkins or GitLab CI | |
If you go with manual testing, you’ll need a test management system to keep track of all of those manual test results. Most commonly, teams use Jira as an all-in-one project management tool to help with bug tracking.
After testing, testers can document each bug in a shared spreadsheet, such as a Google Sheet, as sketched below.
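Here is a minimal sketch of that lightweight approach, logging each bug as a row; the column names are purely illustrative:

```python
import csv
from datetime import date

bug = {
    "id": "BUG-042",
    "summary": "Cart total not updated after removing an item",
    "severity": "Major",
    "steps": "Add two items; remove one; total is stale",
    "status": "Open",
    "reported": date.today().isoformat(),
}

with open("bug_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(bug.keys()))
    if f.tell() == 0:   # brand-new file: write the header row first
        writer.writeheader()
    writer.writerow(bug)
```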
Of course, as you scale, this approach proves to be inefficient. At a certain point, a dedicated test management system that seamlessly integrates with other test activities (including test creation, execution, reporting) is a better option.
This system should also support automation testing. Imagine a system where you can write your test scripts; store all of your test objects, test data, and artifacts; run your tests in the environment of your choice; and then generate detailed reports on your findings.
To achieve such a level of comprehensiveness, testers have 2 options: build their own framework from open-source components, or adopt a ready-made testing platform.
Each comes with its own advantages and disadvantages. The former is highly customizable but requires a significant level of technical expertise to pull off, while the latter offers out-of-the-box features you can enjoy immediately, though some investment is required.
The real question is: what is the ROI you can get if you go with one of those choices? Check out our article on how to calculate test automation ROI.
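The core of the calculation is simple. A back-of-the-envelope sketch, where every number is an assumption you would replace with your own estimates:

```python
# All inputs below are made-up, per-year figures.
tooling_cost = 10_000        # licenses and infrastructure
script_dev_hours = 300       # effort to build the automated suite
maintenance_hours = 100      # ongoing upkeep of the scripts
hourly_rate = 50             # fully loaded cost per engineer-hour
manual_hours_saved = 1_200   # manual execution time avoided

investment = tooling_cost + (script_dev_hours + maintenance_hours) * hourly_rate
savings = manual_hours_saved * hourly_rate

roi = (savings - investment) / investment
print(f"Automation ROI: {roi:.0%}")  # 100% with these example numbers
```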
Test deliverables are what success looks like in concrete form; they follow from the test objectives you set out earlier.
Define what artifacts and documents should be produced during the testing process to communicate the progress and findings of testing activities. As the test strategy is a high-level document, you don’t need to go into the minute details of each deliverable; a brief outline of the items the team wants to create is enough.
Establish the key performance indicators (KPIs) and success metrics for the project. These metrics are not only the means to measure the efficiency and quality of the testing process but also provide a common goal and a shared language among team members.
Some common testing metrics include:
- Test coverage (share of features and code exercised by tests)
- Defect density (defects per 1,000 lines of code)
- Defect leakage (share of defects that escape into production)
- Test pass/fail rate
These metrics will later be visualized in a test report.
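These metrics are straightforward to compute once the raw counts are in hand. A minimal sketch using the standard formulas (all counts are made up):

```python
defects_found_in_testing = 48
defects_found_in_production = 2
lines_of_code = 52_000
tests_passed, tests_run = 410, 420

# Defect density: defects per 1,000 lines of code (KLOC).
defect_density = defects_found_in_testing / (lines_of_code / 1000)

# Defect leakage: share of all defects that escaped into production.
defect_leakage = defects_found_in_production / (
    defects_found_in_testing + defects_found_in_production) * 100

pass_rate = tests_passed / tests_run * 100

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"Defect leakage: {defect_leakage:.1f}%")
print(f"Pass rate: {pass_rate:.1f}%")
```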
List out the potential risks and clear plans to mitigate them, or even contingency plans to adopt in case these risks do show up in reality.
Testers generally conduct a risk analysis (risk score = probability of occurrence × impact) to decide which risks should be addressed first.
For example, after planning, the team realizes that the timeline is extremely tight and that they lack the technical expertise to deliver the objectives. This is a high-probability, high-impact scenario, and they must have a contingency plan: change the objectives, invest in the team’s expertise, or outsource entirely to meet the delivery date.
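Scoring risks this way takes only a few lines, and sorting by the score yields the priority order. A minimal sketch, with hypothetical risks rated on a 1-5 scale:

```python
# Risk score = probability x impact, both rated 1-5 here.
risks = [
    {"risk": "Tight timeline, missing expertise", "prob": 5, "impact": 5},
    {"risk": "Payment gateway instability",       "prob": 3, "impact": 5},
    {"risk": "Cross-browser inconsistencies",     "prob": 4, "impact": 2},
]

for r in sorted(risks, key=lambda r: r["prob"] * r["impact"], reverse=True):
    print(f'{r["prob"] * r["impact"]:>2}  {r["risk"]}')
```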
All of these items in the document should be carefully reviewed by the business team, the QA Lead, and the Development Team Lead. From this document, you will be able to develop detailed test plans for sub-projects, or for each iteration of the sprint.
Read More: Types of Automation Testing: A Beginner’s Guide
Here’s a sample test strategy for your reference:
| Section | Details |
|---|---|
| 1. Product, Revision, and Overview | Product Name: E-commerce Web Application. Revision: v1.5. Overview: The product is an online e-commerce platform allowing users to browse products, add items to the cart, and make secure purchases. It includes a responsive web interface, a mobile app, and backend systems for inventory and payment processing. |
| 2. Product History | Previous Versions: v1.0, v1.2, v1.3. Defect History: Previous versions had issues with payment gateway integration and cart item persistence. These issues have been addressed through unit and integration testing, leading to improvements in overall system reliability. |
| 3. Features to Be Tested | User Features: product search, cart functionality, user registration, and checkout. Application Layer: frontend (React.js), backend (Node.js), database (MySQL), API integrations. Mobile App: shopping experience, push notifications. Server: load balancing, database synchronization. |
| 4. Features Not to Be Tested | Third-party Loyalty Program Integration: will be tested in a separate release cycle. Legacy Payment Method: no longer supported and excluded from testing in this release. |
| 5. Configurations to Be Tested | Mobile: iPhone 13 (iOS 15), Google Pixel 6 (Android 12). Desktop: Windows 10, macOS Monterey. Browsers: Chrome 95, Firefox 92, Safari 15, Edge 95. Excluded Configurations: older versions of Android (<9.0) and iOS (<12). |
| 6. Environmental Requirements | Hardware: real mobile devices, desktop systems (Intel i7, Apple M1). Network: simulated network conditions (3G, 4G, Wi-Fi). Software: testing tools (Selenium, Appium, JMeter). Servers: cloud-hosted environments on AWS for testing scalability. |
| 7. System Test Methodology | Unit Testing: verify core functions like search, add-to-cart. Integration Testing: test interactions between the cart, payment systems, and inventory management. System Testing: full end-to-end user scenarios (browse, add to cart, checkout, receive confirmation). Performance Testing: stress testing with JMeter to simulate up to 5,000 concurrent users. Security Testing: OWASP ZAP for vulnerability detection. |
| 8. Initial Test Requirements | Test Strategy: written by QA personnel and reviewed by the product team. Test Environment Setup: environments must be fully configured, including staging servers, test data, and mock payment systems. Test Data: create dummy users and product listings for system-wide testing. |
| 9. System Test Entry Criteria | Basic Functionality Works: all core features (search, login, cart) must function. Unit Tests Passed: 100% of unit tests must pass without error. Code Freeze: all features must be implemented and code must be checked into the repository. Known Bugs Logged: all known issues must be posted to the bug-tracking system. |
| 10. System Test Exit Criteria | All System Tests Executed: all planned tests must be executed. Pass Critical Scenarios: all "happy path" scenarios (user registration, product purchase) must pass. Successful Build: executable builds must be generated for all supported platforms. Zero Showstopper Bugs: no critical defects or blockers. Maximum Bug Threshold: no more than 5 major bugs and 10 minor bugs. |
| 11. Test Deliverables | Test Plan: detailed plan covering system, regression, and performance tests. Test Cases: documented test cases in Jira/TestRail. Test Execution Logs: record of all tests executed. Defect Reports: bug-tracking system reports from Jira. Test Coverage Report: percentage of features and code covered by tests. |
| 12. Testing Measurements & Metrics | Test Coverage: target 95% coverage across unit, integration, and system tests. Defect Density: maintain a defect density of < 1 defect per 1,000 lines of code. Performance: ensure 2-second or less response times for key transactions. Defect Leakage: ensure no more than 2% defect leakage into production. |
| 13. Risks | Payment Gateway Instability: could cause transaction failures under high load. Cross-Browser Issues: potential inconsistencies across older browser versions. High User Load: performance degradation with more than 5,000 concurrent users. Security: risk of vulnerabilities due to new user authentication features. |
| 14. References | Product Documentation: internal API documentation for developers. Test Tools Documentation: Selenium and JMeter configuration guides. External References: OWASP guidelines for security testing. |
When creating a test strategy document, you can build a table with all of the items listed above and hold a brainstorming session with the key stakeholders (project manager, business analyst, QA Lead, and Development Team Lead) to fill in the necessary information for each item. The sample table above works well as a starting point.
The test strategy document gives a higher level perspective than the test plan, and contents in the test plan must be aligned with the direction of the test strategy.
A test strategy provides general methods for assuring product quality, tailored to different software types, organizational needs, quality policy compliance, and the overall testing approach. The test plan, on the other hand, is created for a specific project and considers its goals, stakeholders, and risks. In Agile development, a master plan can be made for the project, with specific sub-plans for each iteration.
The table below provides a detailed comparison between the two:
|  | Test Strategy | Test Plan |
|---|---|---|
| Purpose | Provides a high-level approach, objectives, and scope of testing for a software project | Specifies detailed instructions, procedures, and specific tests to be conducted |
| Focus | Testing approach, test levels, types, and techniques | Detailed test objectives, test cases, test data, and expected results |
| Audience | Stakeholders, project managers, senior testing team members | Testing team members, test leads, testers, and stakeholders involved in testing |
| Scope | Entire testing effort across the project | Specific phase, feature, or component of the software |
| Level of Detail | Less detailed and more abstract | Highly detailed, specifying test scenarios, cases, scripts, and data |
| Flexibility | Allows flexibility in accommodating changes in project requirements | Relatively rigid and less prone to changes during the testing phase |
| Longevity | Remains relatively stable throughout the project lifecycle | Evolves throughout the testing process, incorporating feedback and adjustments |
Katalon is a comprehensive solution that supports test planning, creation, management, execution, maintenance, and reporting for web, API, desktop, and mobile applications across a wide variety of environments, all in one place and with minimal programming skills required. You can use Katalon to support any test strategy without having to adopt and manage extra tools across teams.
Katalon allows QA teams to move quickly from manual testing to automation testing thanks to its built-in keywords. These keywords are essentially ready-to-use code snippets that you can drag and drop to construct a full test script without having to write any code. There is also a record-and-playback feature that records the sequence of actions you take on your screen, then turns it into an automated test script that you can re-execute across a wide range of environments.
After that, all of the test objects, test cases, test suites, and test artifacts created are managed in a centralized Object Repository, enabling better test management. You can even map automated tests to existing manual tests thanks to Jira and Xray integration.
For test execution, Katalon makes it easy to run tests in parallel across browsers, devices, and operating systems, with everything related to setup and maintenance already preconfigured. AI-powered features such as Smart Wait, self-healing, scheduling, and parallel execution keep test maintenance effortless.
Finally, for test reporting, Katalon generates detailed analytics on coverage, release, flakiness, and pass/fail trend reports to make faster, more confident decisions.