There is no doubt about it: Artificial Intelligence (AI) and Machine Learning (ML) have changed the way we think about software testing. Ever since the introduction of ChatGPT, a disruptive AI-powered language model, a wide range of AI-augmented technologies has emerged, and the benefits they bring surely can’t be ignored.
In this article, we will show you how to leverage AI/ML in software testing to take your QA game to the next level.
First, let’s define the concept of AI/ML properly.
According to the definition from Google Cloud:
With that in mind, AI/ML in software testing is the use of AI/ML technologies to assist software testing activities.
According to the State of Software Quality Report 2025, test case generation is the most common application of AI for both manual testing and automation testing, followed by test data generation. You can download the report for the latest insights in the industry.
There are many ways we can use AI/ML to power up our software testing, and the key to unlocking those capabilities is knowing what these technologies can potentially do, then finding creative ways to incorporate them into your day-to-day testing tasks. Note that there are 3 major approaches when choosing an AI/ML system to incorporate into your software testing:
The final decision on which approach to use depends on the vision of the organization and the team. Whichever one you choose, there are generally 5 areas where an AI/ML system can contribute the most:
We all know how disruptive ChatGPT has been to the software engineering industry. Basic coding tasks have now been handed to ChatGPT.
Similarly, in software testing, we can ask ChatGPT to write test cases for us. Keep in mind: ChatGPT is not a one-size-fits-all solution. Sometimes it works out of the box, but when the requirements get too complex, you need to provide extra context so it can better execute your prompt.
For example, I asked ChatGPT to “write a Selenium unit test to check if clicking a button on https://exampleweb[dot]com leads to the correct link https://exampleweb[dot]com/example-destination”, and ChatGPT provided a fairly well-written code snippet with a detailed explanation of each step, as well as a clear assertion to verify that the destination link is indeed the expected URL. It is actually impressive!
Traditionally, these automated test scripts had to be written by skilled testers using test automation frameworks. With AI, this process is sped up significantly.
On top of that, ongoing maintenance is necessary whenever the source code changes, or else the scripts fall out of sync with the updated application, producing wrong test results. This is a common challenge for automation testers, who rarely have enough resources to constantly update their test scripts in an ever-changing Agile environment.
Thankfully, when incorporating AI/ML in software testing, you can now use simple language prompts to guide the AI in crafting tests for specific scenarios and speed up your test maintenance work.
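One common maintenance technique behind this is often called “self-healing” locators: when the primary locator breaks after a UI change, the tool falls back to alternative candidates. The toy sketch below illustrates the idea with a plain dictionary standing in for a DOM query; real AI-powered tools rank candidates with learned models rather than trying a fixed list in order, and all names here are invented.

```python
# Toy "self-healing" element lookup: try the primary locator, then fall back
# to alternatives recorded from earlier runs. All locators are hypothetical.

def find_element(page: dict, locators: list):
    """page maps locator strings to elements (a stand-in for a real DOM query).

    Returns the first matching element and the locator that found it.
    """
    for locator in locators:
        if locator in page:
            return page[locator], locator
    raise LookupError(f"No locator matched: {locators}")

# The button's id changed from 'buy-btn' to 'purchase-btn' after a UI update;
# the fallback list still finds it, so the test keeps running.
page = {"#purchase-btn": "<button>Buy</button>", ".cart-link": "<a>Cart</a>"}
element, used = find_element(page, ["#buy-btn", "#purchase-btn"])
```

The payoff is that a renamed ID no longer fails the whole suite; the test heals itself and can report which fallback it used so the script can be updated later.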
Imagine you're running an eCommerce website with thousands of products, different user paths, and constant updates to the website. It is a huge challenge to cover all possible scenarios, even with repetitive areas automatically tested.
This is where Machine Learning comes into play. At first, you need to feed the AI with data about how users interact with your website. This data includes things like what products they view, what actions they take, and when they abandon their carts.
As the AI gathers and analyzes this data over time, it starts noticing patterns. It recognizes that certain products are frequently viewed together, and that users tend to abandon their carts during specific steps in the checkout process.
And that's exactly what TrueTest from Katalon is doing. It leverages AI/ML technologies to map user journeys and identify important scenarios to be tested. After that, it auto-generates automated test cases for those scenarios, drastically speeding up test creation.
Personally, I think this is one of the most helpful use cases of AI. To cover the most scenarios, you need a gigantic volume of highly varied data.
Manually creating this amount of data is time-consuming, and you can't use real consumer data either, because of data privacy concerns.
That's when AI comes in.
Let’s use the eCommerce website example. For global eCommerce websites that ship across borders, there are generally variations in the way shipping fees are calculated, such as:
You can use AI to generate a list of addresses in the regions where your business operates. If you want to go into minute detail (shipping weight, time zones, additional fees, etc.), just tell the AI and save yourself hours of manual work.
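Even without an LLM, the shape of such generated test data is easy to picture. Below is a hedged sketch of a deterministic generator for cross-border shipping cases; the regions, fee rules, and field names are all invented for illustration, and an LLM (or a library like Faker) would produce far richer, more realistic records.

```python
# Sketch: reproducible synthetic test data for shipping-fee calculation.
# REGIONS and the fee formula are made-up assumptions, not real business rules.
import random

REGIONS = {
    "US":   {"base_fee": 5.0, "per_kg": 1.2},
    "EU":   {"base_fee": 7.5, "per_kg": 1.5},
    "APAC": {"base_fee": 9.0, "per_kg": 2.0},
}

def make_shipping_cases(n, seed=42):
    rng = random.Random(seed)  # seeded so the data set is reproducible
    cases = []
    for i in range(n):
        region = rng.choice(sorted(REGIONS))
        weight_kg = round(rng.uniform(0.1, 20.0), 2)
        rule = REGIONS[region]
        cases.append({
            "id": i,
            "region": region,
            "weight_kg": weight_kg,
            # Expected fee doubles as the test oracle for the checkout logic.
            "expected_fee": round(rule["base_fee"] + rule["per_kg"] * weight_kg, 2),
        })
    return cases

cases = make_shipping_cases(100)
```

Seeding the generator matters: a failing case can be reproduced exactly on the next run instead of vanishing into randomness.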
In the past, human testers had to rely on their eyes to find visual differences between how the UI looked before a release and how it appeared after.
The problem arises when we want to automate visual testing, which involves comparing screenshots of the UI before and after a release:
AI can learn whether certain zones should be “ignored” even when they change, and whether the differences it notices between the two screenshots are acceptable.
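The mechanics can be illustrated with a minimal diff that skips volatile zones (say, an ad banner or a timestamp). In this sketch, screenshots are modeled as small 2-D grids of pixel values; real visual-testing tools work on full images and learn the ignore zones rather than taking them as hand-written rectangles.

```python
# Minimal visual-diff sketch: report changed pixels, skipping ignore zones.
# Grids and zone coordinates are toy stand-ins for real screenshots.

def diff_pixels(before, after, ignore_zones):
    """Return (row, col) coordinates that differ outside any ignore zone.

    ignore_zones: list of (top, left, bottom, right) rectangles, inclusive.
    """
    diffs = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (pb, pa) in enumerate(zip(row_b, row_a)):
            ignored = any(t <= r <= b and left <= c <= right
                          for (t, left, b, right) in ignore_zones)
            if pb != pa and not ignored:
                diffs.append((r, c))
    return diffs

before = [[0, 0, 0],
          [0, 1, 0]]
after  = [[0, 9, 0],   # change at (0, 1) falls inside the ignore zone
          [0, 1, 5]]   # change at (1, 2) is a real regression
diffs = diff_pixels(before, after, ignore_zones=[(0, 1, 0, 1)])
```

Only the change outside the ignore zone is reported, which is exactly the behavior you want from “smart” visual comparison: noisy regions stay quiet, genuine regressions still surface.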
AI-powered testing is still in its early stages, yet it’s already solving many of the long-standing challenges in traditional automation testing.
As the technology matures, we can expect it to become an essential part of every testing process. Today, AI represents the future, but soon, it will be the standard. Testers who embrace it early will stay ahead of the curve.
Here’s how AI/ML makes a difference:
As AI accelerates development, developers write more code at higher speeds, creating more potential for defects. AI testing helps testers match that pace and maintain quality.
As applications integrate AI features, they introduce new quality concerns like bias, explainability, and adaptive behavior. AI-based testing provides the intelligence to address these evolving issues.
AI speeds up test creation, strengthens test maintenance, and enhances reliability across updates.
It offers smarter analytics and recommendations, helping teams make more informed decisions.
It streamlines the entire testing process, improving efficiency and reducing human error.
Most importantly, AI doesn’t replace testers; it empowers them. Think of it as a cognitive extension: an intelligent assistant that works continuously, learns from every test cycle, and helps testers solve problems faster and smarter.
Having AI/ML in software testing comes with added responsibilities that we should embrace. Some of the challenges that testers should be aware of include:
AI seems like magic (in some ways it is), but the key to using it well is being practical in your approach. Build a solid understanding of the AI/ML systems you’re working with, understand your own workflows, then find ways to integrate the two. AI should streamline your workflow and simplify the tasks you previously spent a lot of time on, and it can only do that if you understand both sides correctly.
AI systems take time to develop and to learn the tasks you assign them.
Treat the AI as a blank canvas that you can gradually train to perform complex tasks. It is even better to have a dedicated plan for how you want to integrate AI into your workflow.
You don’t have to take big leaps. Even non-disruptive baby steps work: move at your own pace.
When working with AI, and especially generative AI, it is crucial to provide well-structured, precise input prompts so the model generates accurate and relevant outputs. This gives you a measure of control over the probabilistic nature of the system.
Prompt engineering is all about bringing context, specifications, and boundaries to the table, and that is a real skill in itself.
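One lightweight way to practice this is to make the three ingredients explicit in code. The sketch below builds a prompt from named context, specification, and boundary sections; the section labels and example wording are illustrative conventions, not any standard format.

```python
# Sketch of a structured prompt builder: context, task, and constraints as
# explicit sections, per the prompt-engineering advice above. All section
# names and example text are illustrative assumptions.

def build_prompt(context, specification, boundaries):
    rules = "\n".join(f"- {b}" for b in boundaries)
    return (
        f"Context:\n{context}\n\n"
        f"Task:\n{specification}\n\n"
        f"Constraints:\n{rules}"
    )

prompt = build_prompt(
    context="Checkout page of an eCommerce site; Selenium + pytest suite.",
    specification=("Write a test that verifies the cart total updates "
                   "after applying a discount code."),
    boundaries=[
        "Use explicit waits, never fixed sleeps",
        "Assert on the displayed total, not internal state",
    ],
)
```

Templating prompts this way also makes them reviewable and reusable, the same way a test fixture is: teammates can see exactly what context and boundaries the model was given.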
No matter what, AI is only a tool, and it is a truly powerful one when used in concert with testers.
Testers will not be replaced by AI, but rather, they should and will be empowered by the technology. The more skilled and experienced the tester is, the more benefits they can extract out of these tools. Bring your own creativity and originality to the table, and let AI propel those ideas to another level.
As Alex Martins perfectly puts it:
Testing using AI and testing for AI systems are 2 completely different topics.
Katalon is a comprehensive AI-augmented software quality management platform, orchestrating the entire software testing lifecycle, from test creation and execution to maintenance and reporting, across web, API, and mobile applications.
A forerunner in the AI testing landscape, Katalon continuously enriches its product portfolio with these innovative AI-infused capabilities, empowering global QA teams with unprecedented accuracy and operational efficiency. Learn more about Katalon AI testing here.