The Landscape of AI-Enabled Test Automation Tools
A significant event occurred at midnight on December 31, 1999. It marked the beginning of a millennium in which aircraft would not fall out of the sky and bank ATMs would not spew millions of dollars from their cash withdrawal slots. Preemptive software testing and remediation on a global scale gave the automated world some assurance that the machines would continue to run as they had for decades. One of the most impressive aspects of the Year 2000 software maintenance effort was that development teams conducted all testing manually.
Now, arguably, software is far more complex, differentiated, and plentiful. Whether B2B or B2C, software makers are confronted with development and test goals that far outstrip what was underway 25 years ago. Artificial intelligence (AI) is one of the means by which software makers hope to resolve the challenges of an ever-changing IT landscape.
Katalon AI researchers recently reviewed the state of AI-enabled software platforms in their academic paper, "A Review of AI-Augmented End-to-End Test Automation Tools," and determined that AI development is still in its infancy while AI-enabled software tools are plentiful. Furthermore, the industry is making great strides in aiding software makers to improve the efficiency and effectiveness of their wares. As such, opportunities for growth and maturity abound for AI-enabled software testing.
Where AI Can Make a Difference
Software testing is a craft. It is a complex and time-consuming process that requires development teams to have a great deal of experience and expertise on staff. The multitude and variety of software products under constant development and modification, though, have outstripped the resources available to address the challenge. This is where testing tool makers see an opportunity to introduce AI and machine learning (ML).
The dream, and the hype, is that AI will one day cut humans out of the testing and analysis loop. It is not yet possible to automate every stage, however; humans still play a vital role in testing activities such as planning, management, and reporting. So test tool vendors are focusing their AI development efforts on the aspects of testing where AI makes a significant impact. The Katalon AI research group's paper rounds up current products and examines their functionality across test case generation, test data generation, test execution, test maintenance, and root cause analysis.
AI-Enabled Test Techniques
Test Script Generation
Creating test cases manually can be challenging, slow, and inefficient. AI/ML capabilities can greatly speed test script generation, and testers can use them to build both visual and functional test cases.
For example, an automated test case generator can take a web element such as a button and create relevant test cases for validation. In this scenario, AI functionality such as natural language processing (NLP) or computer vision can be used to understand the system under test (SUT). The automated test kit can generate multiple test cases whenever there is an update to the product's features, which helps ensure the application's functionality still works as expected.
Another advantage of automated generation of test cases is a self-maintenance capability. The automated test kit can automatically select and keep records and metadata for web elements that it will interact with. It can also update existing test cases to avoid false positives.
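The generation idea above can be sketched as a simple rule-based generator that turns an element's metadata into candidate test cases. This is a minimal illustration only; the element descriptors, rules, and case wording are invented and do not reflect any vendor's actual API:

```python
# Minimal sketch: derive candidate validation test cases from a web
# element's metadata. Real tools would use NLP or computer vision to
# understand the SUT; here simple rules stand in for that analysis.

def generate_test_cases(element):
    """Return human-readable test cases for a web element descriptor."""
    cases = []
    if element["tag"] == "button":
        cases.append(f"Click '{element['label']}' and verify the expected action fires")
        cases.append(f"Verify '{element['label']}' is visible and enabled")
    elif element["tag"] == "input":
        cases.append(f"Enter valid data into '{element['label']}' and submit")
        cases.append(f"Leave '{element['label']}' empty and expect a validation error")
    return cases

login_button = {"tag": "button", "label": "Log in"}
for case in generate_test_cases(login_button):
    print(case)
```

Rerunning the generator after each feature update would regenerate cases for any new or changed elements, which is the self-maintenance behavior described below.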
Test Data Generation
Test data generation is one of the most onerous aspects of manual testing. Nevertheless, testers must provide valid test data for a variety of scenarios to verify that applications work under all conditions. Typical scenarios for business applications include logging in through a form, registering a new account, and entering a recipient's address.
Development teams can generate test data based on project specifications or source code. For instance, they may create possible combinations of data based on a previously collected dataset. They may also use search-based data generation, or heuristic approaches.
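As a rough sketch of combination-based data generation, the snippet below expands representative values for each form field into a full set of test records. The field names and values are illustrative, not drawn from any real dataset:

```python
# Sketch: generate test data as every combination of representative
# values per field (valid, boundary, and invalid values mixed in).
from itertools import product

field_values = {
    "username": ["alice", "", "a" * 256],   # valid, empty, oversized
    "country":  ["US", "DE"],
    "age":      [0, 18, 120],               # boundary values
}

# Each combination of one value per field becomes one test record.
records = [dict(zip(field_values, combo))
           for combo in product(*field_values.values())]
print(len(records))  # 3 * 2 * 3 = 18 combinations
```

Heuristic and search-based approaches prune or mutate such combinations rather than enumerating them all, which matters once the full product becomes too large to run.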
AI/ML can help QA engineers scan large code bases to better understand the contexts for their tests. Automation may probe areas in more depth than manual testers can, and AI/ML may also identify critical issues for test coverage that would otherwise escape human inspection.
Test Execution
Humans may find it challenging to execute test scripts manually across different browsers and environments. Testing across different operating environments is time-consuming and difficult, and AI/ML may be able to address these issues.
AI/ML can help teams gain efficiencies by:
- Automatically managing, prioritizing, and scheduling test cases. AI/ML can determine which tests need to be run on different devices, operating environments, and configurations.
- Running regression tests only for significant actions in a software application, which can save time and effort, and provide assurance the app is still running as expected.
- Automatically verifying the application with multiple combinations of devices, operating systems, and browsers. Verification can increase test coverage and execution speed.
- Saving humans significant effort and resources for executing test cases.
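One common prioritization signal is historical failure rate: run the tests most likely to fail first, so defects surface early. The sketch below orders a suite that way; the test names and run history are invented for illustration:

```python
# Sketch: prioritize test cases by historical failure rate so the most
# failure-prone tests execute first. History figures are illustrative.

history = {
    "test_login":    {"runs": 50, "failures": 8},
    "test_checkout": {"runs": 50, "failures": 20},
    "test_search":   {"runs": 50, "failures": 1},
}

def failure_rate(name):
    record = history[name]
    return record["failures"] / record["runs"]

# Highest failure rate first.
ordered = sorted(history, key=failure_rate, reverse=True)
print(ordered)  # ['test_checkout', 'test_login', 'test_search']
```

Real schedulers combine many more signals (recent code changes, device coverage, execution cost), but the ranking principle is the same.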
Test Maintenance
Manual testers must constantly update test scripts to keep up with changes to an application's source code. Unfortunately, the selectors used to interact with a web UI can be fragile, causing tests to break even after minor user interface (UI) modifications. AI/ML can reduce testing redundancy and breakage for manual testers.
AI/ML engines can introduce a "self-healing" function into testing to address fragility and breakage. When a script fails, the self-healing mechanism analyzes possible alternative options and selects the one most similar to the object previously used.
AI/ML uses various techniques to streamline test maintenance: data analytics, visual hints, NLP, or other heuristic approaches. The techniques are intended to identify objects in a script even after the objects have changed.
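A minimal sketch of the self-healing idea: when the recorded selector no longer matches, pick the candidate on the page most similar to it. Plain string similarity stands in here for the richer signals (visual hints, NLP, data analytics) that real tools apply, and the selector names are illustrative:

```python
# Sketch of a "self-healing" fallback for broken selectors: choose the
# candidate selector most similar to the one the script recorded.
from difflib import SequenceMatcher

def heal_selector(broken, candidates):
    """Return the candidate selector most similar to the broken one."""
    return max(candidates,
               key=lambda c: SequenceMatcher(None, broken, c).ratio())

recorded = "button#submit-order"          # selector saved in the script
on_page = ["button#submit-order-btn",     # renamed after a UI change
           "a#cancel",
           "input#order-qty"]
print(heal_selector(recorded, on_page))   # button#submit-order-btn
```

A production tool would also keep the element metadata mentioned earlier (tag, label, position) and weigh several such signals before committing to a replacement.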
Root Cause Analysis
Root cause analysis is a quality control measure used to identify what went wrong in software testing and determine the reasons behind it. However, tracing how such failures occur can consume a great deal of QA engineers' time.
AI/ML can help with root cause analysis by identifying the test cases that are impacted by a feature change. AI-enabled software can then trace issues back to the affected user stories and feature requirements.
The approach enables QA engineers to avoid wasting time on false positive error reports.
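One way to sketch this tracing is a map from feature changes to the tests and user stories they touch; a failing test then points back to its likely causes. The feature names, test names, and story IDs below are invented for illustration:

```python
# Sketch: trace failing tests back to the feature changes (and their
# user stories) most likely responsible. The mapping is illustrative;
# real tools derive it from code analysis and change history.

feature_map = {
    "checkout-redesign": {"tests": {"test_checkout", "test_cart"},
                          "story": "US-142"},
    "search-filters":    {"tests": {"test_search"},
                          "story": "US-157"},
}

def suspect_features(failed_tests):
    """Return (feature, story) pairs whose tests overlap the failures."""
    failed = set(failed_tests)
    return [(name, info["story"])
            for name, info in feature_map.items()
            if info["tests"] & failed]

print(suspect_features(["test_checkout"]))  # [('checkout-redesign', 'US-142')]
```

Because failures outside any mapped feature produce no suspects, this structure also helps flag probable false positives for separate review.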
The New Software Testing Landscape
Clearly, all the AI/ML-enabled software tools discussed in the paper "A Review of AI-Augmented End-to-End Test Automation Tools" have their strengths. Compared to manual and even rote automated testing, AI/ML-enhanced test kits can greatly improve the testing process.
Vendors have created the kits to automate testing activities, and to improve product quality and delivery time. In every case, the products can detect bugs and errors, maintain existing test cases, and generate new test cases much faster than humans.
The tools, however, become less effective with CI/CD pipelines when the system under test is constantly changing. In response, Katalon research offers a map to a test automation landscape that is constantly evolving.