A Complete Guide to Choosing Test Cases for Automation
Although automation testing is becoming more popular, automation teams still face certain difficulties, including selecting which test cases to automate. According to the 2019 World Quality Report, 24% of teams encounter obstacles that prevent them from deciding on the right test scenarios. This article details how to choose test cases for automation and get the most out of your testing journey.
Test Cases and Their Importance
According to the Test Automation Landscape 2020 report, 50% of test projects are automated, and that number is expected to grow. Automation testing not only reduces the risk of human error in testing, but also increases test coverage and speed, thereby improving the ROI of the whole project.
In software testing, a test case is a detailed document of specifications, inputs, steps, testing conditions, and expected outcomes for the execution of a software test on the application under test (AUT). A single feature needs at least two test cases to produce the desired coverage: a positive test case, where the input is correct, and a negative test case, where the input is incorrect. In practice, most testers combine multiple test cases into a test suite for broader coverage, which also makes execution and maintenance easier.
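As an illustration, here is a minimal sketch of a positive and a negative test case in Python with pytest; the `validate_email` function is a hypothetical placeholder for whatever feature is under test.

```python
def validate_email(address: str) -> bool:
    """Hypothetical feature under test: a deliberately naive email check."""
    return "@" in address and "." in address.split("@")[-1]

def test_valid_email_is_accepted():
    # Positive test case: correct input, the feature should accept it
    assert validate_email("user@example.com") is True

def test_malformed_email_is_rejected():
    # Negative test case: incorrect input, the feature should reject it
    assert validate_email("user-at-example") is False
```

In a test suite, these two cases would sit alongside other cases for the same feature and run together.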
With that said, you can see the importance of selecting test cases for automation. Not all tests can or should be automated, and the chosen test cases are the foundation of your automation tool selection and execution. If you select unsuitable tests for automation, you risk wasting time and resources rather than saving them.
Identifying Test Cases for Automation
First, testers should solidify their understanding of two elements: the AUT's testing requirements and their team's capabilities. Teams can then weigh those facts against the benefits of automation testing and identify the areas where automation can be the answer.
With that in mind, there are some crucial factors that testing teams must consider when choosing test cases to automate:
- Execution time and testing frequency of the test cases: if both are high, the test cases are strong candidates for automation. The same principle applies to test suites, with the number of test cases in the suite as an additional factor.
- Resource requirements: this includes the number of devices, browsers, operating systems, platforms, and databases involved in running the test cases. It also depends on the level of user involvement required in the test (the higher it is, the less suitable the test is for automation).
- The test case's characteristics: determine whether the test cases in question have a well-defined expected outcome that testing software can verify, or whether the features under test are new and flaky. The criticality and complexity of the features also matter: the more critical or complex a feature is, the stronger the case for automating its tests to avoid human error.
- Value versus downside (or ROI): the downsides of automation should not outweigh its value. The value here includes the time, insights, and manual effort saved; the downsides include implementation cost, maintenance cost, and the loss of human flexibility. According to the State of Quality Report 2024, 64% of participants reported an ROI of 20% or more. A minimal scoring sketch based on these factors follows this list.
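To make these factors concrete, here is a minimal, hypothetical scoring sketch in Python; the fields, weights, and thresholds are illustrative assumptions rather than an established formula.

```python
from dataclasses import dataclass

@dataclass
class TestCandidate:
    name: str
    runs_per_release: int   # how often the test is executed
    manual_minutes: float   # manual execution time per run
    environments: int       # devices/browsers/OS combinations covered
    deterministic: bool     # has a stable, well-defined expected outcome
    critical: bool          # covers a business-critical feature

def automation_score(tc: TestCandidate) -> float:
    """Rough, illustrative priority score: higher means a stronger automation candidate."""
    if not tc.deterministic:
        return 0.0  # flaky or exploratory checks are poor automation candidates
    score = tc.runs_per_release * tc.manual_minutes  # repetitive + slow = big savings
    score *= max(tc.environments, 1)                 # scale by the environment matrix
    if tc.critical:
        score *= 1.5                                 # weight critical features higher
    return score

candidates = [
    TestCandidate("login regression", 30, 5, 4, True, True),
    TestCandidate("one-off UI exploration", 1, 20, 1, False, False),
]
for tc in sorted(candidates, key=automation_score, reverse=True):
    print(f"{tc.name}: {automation_score(tc):.0f}")
```

A team would tune the fields and weights to its own context; the point is simply to rank candidates by expected savings and risk rather than by gut feeling.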
Test Cases to Automate
Before automating any test cases, teams must carefully compare them against a set of criteria. Below are the types of test cases recommended for automation.
- Regression tests (smoke tests, sanity tests, etc.): these tests consume significant time and resources, as they form the backbone of each release's testing process.
- Performance tests (load tests, stress tests, etc.): they are repetitive and time-consuming to run at the desired coverage.
- Data-driven tests and tests of the AUT's critical features: automation minimizes the potential for human error in handling test data or in the product's critical components (see the data-driven sketch after this list).
- Other test cases to automate include integration tests, API tests, unit tests, cross-browser tests, etc.
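As an illustration of a data-driven test, here is a minimal sketch using pytest's parametrize decorator; the `calculate_discount` function and its rules are hypothetical stand-ins for the AUT logic under test.

```python
import pytest

def calculate_discount(order_total: float) -> float:
    """Hypothetical AUT logic: 10% off orders of 100 or more, otherwise no discount."""
    return order_total * 0.10 if order_total >= 100 else 0.0

# Each tuple is one data row: the same test logic runs against many inputs,
# covering positive and negative cases without duplicating code.
@pytest.mark.parametrize("order_total, expected_discount", [
    (100.0, 10.0),   # boundary: discount kicks in
    (250.0, 25.0),   # typical discounted order
    (99.99, 0.0),    # just below the threshold
    (0.0, 0.0),      # empty order
])
def test_discount(order_total, expected_discount):
    assert calculate_discount(order_total) == pytest.approx(expected_discount)
```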
Test Cases Not to Automate
Although automation testing is a promising path to Quality at Speed, automating every test case will do more harm than good. To avoid overdoing or misusing automation testing, there are some golden rules about which tests are unsuitable for automation:
- Exploratory and ad-hoc tests: because these tests lack concrete criteria for testing software to evaluate, they are the least feasible to automate and are likely to produce false results.
- User experience tests (usability testing): testing software is unlikely to mimic a human's exact emotions and reactions when using an app.
- Intermittent tests and redundant, low-risk tests: intermittent tests produce unreliable results when automated, and redundant, low-risk tests rarely justify the effort. Just because you can automate a test doesn't mean you should.
- Anti-automation features: this goes without saying; features designed to resist automation (e.g., CAPTCHA) should not be automated.
Measurement for Automation Evaluation
- Before testing:
Identify the parameters on which you will evaluate candidates for automation. Teams can then break the AUT down into module-level test cases and measure them against the criteria above. Before finalizing your selection, perform an ROI estimate as a final filter and as a baseline for evaluating automation results.
- After testing:
Compare the expected ROI calculated beforehand against the actual outcome (a minimal ROI sketch follows below). Other measurements, specific to each level (project, department, company, etc.), can also be calculated to evaluate the effectiveness of the automation process.
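As a minimal sketch of such an ROI estimate, assuming ROI is computed as (savings - investment) / investment and using purely hypothetical figures:

```python
def automation_roi(manual_minutes_per_run: float, runs: int,
                   hourly_rate: float, implementation_cost: float,
                   maintenance_cost: float) -> float:
    """Illustrative ROI: (savings - investment) / investment. All inputs are assumptions."""
    savings = (manual_minutes_per_run / 60) * runs * hourly_rate
    investment = implementation_cost + maintenance_cost
    return (savings - investment) / investment

# Hypothetical example: a 15-minute manual test run 200 times per year,
# at $50/hour, costing $1,500 to automate and $500/year to maintain.
roi = automation_roi(15, 200, 50, 1500, 500)
print(f"Estimated ROI: {roi:.0%}")  # 25% in this example
```

After the release cycle, the same formula can be re-run with actual execution counts and costs to compare against the estimate.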
Conclusion
Automation testing has greatly increased testing speed and quality over the past decades. Combined with recent AI/ML integration, automation is likely to become even more widespread. In such a landscape, knowing how to select test cases for automation is the first step toward maximizing the efficiency of your testing journey.