
The Future of Software Testing With AI

May 22, 2024


ChatGPT paved the way for a new world where we can confidently say that AI is here to stay. 

It is a revolutionary piece of tech that has influenced many industries, including testing. With AI, we can expect new testing best practices, where QA teams leverage it to enhance the testing experience, creating test cases faster, better, and smarter.

However, many QA teams are still hesitant. They are not yet leveraging AI to its fullest potential. 

In this article, we will show you how QA teams can benefit from adopting and integrating AI into their testing lifecycle, and how testers can seize this opportunity to level up their testing experience.

Towards an autonomous software testing benchmark

Before we dive into the topic, let’s establish a benchmark to gauge the level of AI integration into testing.

Adopting AI can be seen as an effort to make testing more autonomous. The more autonomous a system is, the more it can operate, make decisions, and perform tasks without human intervention. That is exactly what any AI is built for. It is equipped with a near-human capacity to identify patterns and make non-deterministic decisions.

All QA teams can be placed somewhere on the autonomy spectrum, which spans from no autonomy (i.e., all decisions have to be made by humans) to complete autonomy (all decisions can be made by machines).

One question arises here: as a QA team, how do you define your level of autonomy? This is where the autonomous software testing model (ASTM) comes in.

The model is inspired by the levels of autonomy defined for self-driving vehicles, as the figure below shows.

Levels of autonomy

Source: Katalon

The ASTM sets out six levels of AI integration into testing.

At level zero, QA teams are doing only manual testing, probably with the assistance of a test case management system. The deeper they integrate automation technology into QA processes, the higher up they move on the ladder.

At the highest level, five, we see full AI/ML involvement in the design, implementation, and decision-making of tests:

  • Level 0. Manual testing: Human testers make all decisions.
  • Level 1. Assisted test automation: Human testers use automated testing tools or write scripts to perform the interaction on their behalf. They still have to actively create and maintain those scripts.
  • Level 2. Partial test automation: Both humans and computers engage in testing activities and propose potential decision choices; however, the majority of testing decisions are still made by humans.
  • Level 3. Integrated automated testing: At this stage, the computer generates a list of decision alternatives, chooses one for action, and proceeds only if the human approves.
  • Level 4. Intelligent automated testing: The computer generates decision alternatives, evaluates and selects the optimal one, and performs testing actions accordingly. Human intervention is still an option if necessary.
  • Level 5. Autonomous testing: The computer assumes complete control over the testing process for the system under test (SUT), which includes decision-making and the execution of all testing actions. At this stage, human testers do not intervene.

The neat part is that you can incorporate AI even at the first stage: manual testing. For example, testers can use generative AI to help them create test steps from a scenario. If they are not familiar with a testing framework, AI can generate a script in the framework of their choice within a few seconds.
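As an illustration, here is the kind of script a generative AI can produce from the scenario “verify that a valid user can log in,” written with pytest and Selenium; the URL and element IDs are hypothetical placeholders:

```python
# Illustrative only: the kind of test an LLM can generate from the
# scenario "verify that a valid user can log in".
# The URL and element IDs below are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_valid_login(driver):
    driver.get("https://example.com/login")  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("s3cret!")
    driver.find_element(By.ID, "login-button").click()
    # Pass/fail criterion: a successful login lands on the dashboard
    assert "dashboard" in driver.current_url
```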

That is just one among the many applications of AI in software testing.

The nature of AI in software testing

The core of applying AI to software testing is based on the idea that AI can search for a solution to your problem. Those problems can include generating test data for a data-driven test scenario, generating fully executable test scripts, explaining a complex code snippet, or categorizing a list of test cases into the right groups.

AI achieves this by drawing conclusions from the vast wealth of data on which it has been trained. It identifies patterns in the prompt you provide and generates the answer with the highest probability of being accurate and relevant.

Ideally, we don’t want it to stop there. Over time, as the AI learns more and more about the application under test (AUT) that it is working on, we want it to be able to consolidate everything it has tested into “knowledge” about the AUT. This newly acquired knowledge allows it to gradually run tests with far less intervention from human testers than in the earlier stages.

ChatGPT works in exactly the same way. If a team is dedicated enough, they can custom-train an AI to support their testing project. Creating an entire AI from scratch is an extremely daunting task, so a better alternative is to build on an existing one.

Benefits of generative AI in software testing

Once you have your AI up and running, you should see how its unique capabilities can unleash new possibilities for your QA team.

Improved test coverage and efficiency

As the application grows in complexity, the number of test cases and scenarios to consider also grows accordingly. To meet the ever-rising demand from users, stakeholders need to include more and more requirements for QA teams during the test planning stage. 

However, testers sometimes run into blind spots and miss critical test scenarios. AI can step in as a test case generator, filling the gaps that testers may have missed.

QA teams can go even further and integrate the AI with their application or system, feeding it data so that it can give tailored recommendations on what to test, which areas to focus on, and which areas to optimize. Essentially, they turn generative AI into an advisor, and it is up to the human testers to decide whether to follow its recommendations.

Having more test cases is great, but then comes the question of efficiency. In the limited timeframe of an Agile sprint, automation testing is a default option if you want to balance high test coverage with short time-to-market. Automation testing does have its challenges, and AI can be leveraged to address those. 

Here are some ideas for you to level up automation testing with AI:

  • Provide AI with a natural language prompt to generate a test case. It is crucial to be highly specific about the language you want the script written in, the assertions, the pass/fail criteria, and any data needed for the test to run properly.
  • Employ AI algorithms to automatically update test scripts. This helps accommodate changes in the application’s UI. For example, if the selector of a certain UI element is updated, the AI can choose an alternative selector based on other attributes of that element. This is known as a self-healing mechanism.
  • Generate diverse and realistic test data to use in data-driven testing sessions. This approach meets the need for extensive data without resorting to real user data, which can raise privacy and security concerns (see the sketch after this list).
  • Apply AI-powered visual testing, where testers leverage computer vision algorithms to automatically compare the actual UI with its expected version and identify visual issues. The AI can also be engineered to avoid false positives, i.e., to know which visual issues create actual UX problems and which do not.
  • Generate valuable insights and recommendations. Once you have executed all of the test cases, AI can also assist in generating insights by analyzing the metrics and providing recommendations for improvement.
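
To make the test data idea concrete, here is a minimal sketch using the Faker library to produce privacy-safe synthetic rows for a hypothetical data-driven checkout test; the file name and columns are made up:

```python
# Minimal sketch: privacy-safe synthetic rows for a data-driven test,
# using the Faker library. The file name and columns are hypothetical.
import csv
from faker import Faker

fake = Faker()

with open("checkout_test_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "street_address", "credit_card"])
    for _ in range(100):
        writer.writerow([
            fake.name(),
            fake.email(),
            fake.street_address(),
            fake.credit_card_number(),
        ])
```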

Enhanced bug detection

All of the capabilities of generative AI mentioned above should translate into enhanced bug detection.

With its robust pattern recognition capabilities, AI can be leveraged to scan through code repositories and identify recurring patterns often associated with common programming errors. This is especially helpful when you have a large and complex codebase with thousands of lines of code to work with. It can also help you detect logical errors that may not be immediately evident through traditional static code analysis.

After bug detection comes bug diagnosis: finding the root cause of a bug and suggesting possible fixes. Bug diagnosis can be seen as a digital form of detective work where you need a sharp mindset and strong technical know-how to connect the dots and localize the problem area. Before AI, this was purely manual work that could quickly become time-consuming. With AI, you can distill complex bug reports and trace a bug to the specific lines of code most likely to be its root cause.
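
To make this concrete, here is a minimal sketch of that workflow, assuming the OpenAI Python client; the model name and file paths are placeholders rather than part of any specific testing tool:

```python
# Sketch of LLM-assisted bug diagnosis, assuming the OpenAI Python
# client. The model name and file paths are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

traceback_text = open("failing_test_traceback.txt").read()
suspect_code = open("checkout_service.py").read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whatever your team has access to
    messages=[
        {"role": "system",
         "content": "You are a debugging assistant. Identify the most "
                    "likely root cause and point to the specific lines."},
        {"role": "user",
         "content": f"Traceback:\n{traceback_text}\n\nCode:\n{suspect_code}"},
    ],
)
print(response.choices[0].message.content)
```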

In fact, you can also provide the AI with previous bug fixes and code reviews so it can learn about your application's recurring issues. The insights it gains from these can then be applied when diagnosing new bugs.

This is especially helpful for developers, who often become deeply immersed in the process of creating their applications. This immersion can lead to a certain frame of mind where they view the code and application from a particular perspective — one that is shaped by their understanding of the architecture, design decisions, and implementation details. Large language models (LLMs) help them break out of such blind spots and find new perspectives on the code.

Accelerated software development

AI is already helping software engineers produce more code at a much faster rate, which demands that testing teams speed up correspondingly. Together, they accelerate the entire software development process without compromising on quality.

This is truly a game-changer. Developers essentially got an upgrade when their tedious, time-consuming tasks were handed off to AI. With the freed-up bandwidth, they can focus their effort and intelligence on more challenging problems, allowing for more software creation without having to hire new developers.

However, as of now, AI remains far from being capable of writing the entirety of the code and test script in place of the software engineers and testers. AI in software development and testing still needs a human touch. This is the necessary first step to making AI more intelligent in the future.

The future of software testing with AI

The future of software testing is intelligent. AI is set to transform software testing by automating repetitive tasks, generating smarter test cases, and much more.

AI software testing tools boost efficiency

AI can help manual testers level up their game. Manual testing has three inherent drawbacks:

  1. Repetitiveness
  2. Time-consuming nature
  3. Lack of consistency across testers

AI can be the ticket out of that labyrinth, taking you from labor-intensive, repetitive tasks to smart, more efficient processes. It starts from the very first steps, where you can use generative AI to suggest the necessary test cases for a specific scenario.

Not just that, AI-powered testing tools can also facilitate faster test execution through automated test case prioritization and optimization, focusing on high-impact areas first. Another good option with AI in manual testing is to ask it to provide intelligent recommendations and insights derived from analyzing vast amounts of testing data.
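
As an illustration of prioritization, here is a toy heuristic, not any vendor's actual algorithm, that ranks test cases by recent failure rate, boosted when they cover recently changed files; all names and weights are made up:

```python
# Toy prioritization heuristic (illustrative only): rank test cases by
# recent failure rate, boosted when they cover recently changed files.
# All names and weights are made up.
from dataclasses import dataclass, field


@dataclass
class TestCase:
    name: str
    recent_runs: int
    recent_failures: int
    covered_files: set = field(default_factory=set)


def priority(test: TestCase, changed_files: set) -> float:
    failure_rate = test.recent_failures / max(test.recent_runs, 1)
    change_overlap = len(test.covered_files & changed_files)
    return failure_rate + 0.5 * change_overlap  # arbitrary weight


tests = [
    TestCase("test_checkout", recent_runs=20, recent_failures=5,
             covered_files={"checkout.py"}),
    TestCase("test_login", recent_runs=20, recent_failures=0,
             covered_files={"auth.py"}),
]
changed = {"checkout.py"}

# Run the riskiest tests first
for t in sorted(tests, key=lambda t: priority(t, changed), reverse=True):
    print(f"{t.name}: {priority(t, changed):.2f}")
```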

AI makes automation testing easier

You can leverage AI to generate test scripts, saving yourself a lot of time and effort. The trick is to master prompt engineering.

If you are familiar with ChatGPT, Bard, or other LLM-based solutions, you will know that the quality of the output (here, the test scripts) is deeply tied to the quality of the input prompt. Knowing how to write a good prompt puts you one step closer to efficient AI-powered automation testing.

When prompting for your automation efforts, make sure to follow these best practices (a worked example follows the list):

  • Provide clear examples in your prompt. This clarifies your intent and keeps the AI from going off on unnecessary creative tangents (or sets it free from creative limits, depending on how you look at it). The end goal is to be targeted with your prompts.
  • Give formatting directions for your response.
  • Be specific with your requirements. For example, let it know your assertions, the acceptance criteria, the programming language, the testing framework, and the environment you want to test on.
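
For instance, a prompt that applies all three practices might look like the sketch below; the scenario, URL, and criteria are made-up placeholders:

```python
# A made-up example prompt that applies all three practices above;
# the scenario, URL, and criteria are placeholders.
prompt = """
Write an automated test in Python using pytest and Selenium.

Scenario: a registered user adds one item to the cart and checks out.
Environment: staging, Chrome, base URL https://staging.example.com
Assertions:
  - the cart badge shows "1" after the item is added
  - the order confirmation page contains the text "Thank you"
Pass/fail criteria: fail on any assertion error or timeout over 10 seconds.

Format the response as a single runnable Python file with no commentary,
following the structure of the example test below:

def test_add_to_cart(driver):
    ...
"""
```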

Not just that, AI also solves one critical issue with automation testing: test maintenance, especially in web testing. Websites are updated constantly, and test scripts written against a specific element can quickly fail when the code changes.

Let’s say you have a button with the ID “checkout-button” that initiates the checkout workflow and takes users to the payment gateway. This ID is how your test script locates the button.

However, if this ID changes for any reason, the test is very likely to fail, since the script can no longer fetch the element with the “checkout-button” ID. Having to update a large number of test cases every time the code changes (which happens quite frequently) is counterproductive.

AI can help through a self-healing mechanism: it automatically applies the element’s new locators and keeps the entire script up to date with each change made to the product’s design.
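
To show the mechanism, here is a minimal Selenium sketch in which the fallback locators are hard-coded; a real AI-driven tool would infer these alternatives from the element’s other attributes, and all locator values here are hypothetical:

```python
# Minimal sketch of a self-healing locator: try the primary ID first,
# then fall back to other attributes if the ID has changed. A real
# AI-driven tool would infer the fallbacks; here they are hard-coded,
# and all locator values are hypothetical.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

CHECKOUT_LOCATORS = [
    (By.ID, "checkout-button"),                            # primary
    (By.CSS_SELECTOR, "button[data-action='checkout']"),   # fallback 1
    (By.XPATH, "//button[normalize-space()='Checkout']"),  # fallback 2
]


def find_with_healing(driver, locators=CHECKOUT_LOCATORS):
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # element not found; try the next candidate
    raise NoSuchElementException("No locator matched the checkout button")

# Usage: find_with_healing(driver).click()
```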

AI gradually upgrades automation to autonomous testing

As more and more information is fed into the AI, it can gradually upgrade your automation testing to autonomous testing. Looking back at the autonomous software testing model, you can see that as long as some form of automation is applied, you are making progress on the scale. In a way, autonomous testing is the automation of automation testing.

Having an AI system that continuously learns new patterns from your application under test is like having a virtual assistant that analyzes data for you. Thanks to this, it can adapt itself to changes in the product. Over time, it should be able to generate not just better but also more organization-specific test data and scenarios.

Challenges of adopting AI for software testing

While AI offers exciting possibilities for software testing, there are some challenges left to overcome.

AI skepticism

AI is great, but developers and QA teams still hold a certain level of skepticism, and they have good reason to adopt this stance.

At the end of the day, the core of what AI does is generate the best possible answer by predicting word after word, having learned the relationships between words, phrases, and concepts from a vast amount of text. There is no actual “intelligence” behind these systems; rather, they are an advanced form of autocomplete.

Moreover, the transition to AI can sometimes be a messy one, bringing disruption, new SOPs, and unfamiliarity. There is also the recurring myth that AI, once advanced enough, will take over all technical jobs.

The truth is that AI only minimizes the effort needed from developers and testers. Certain types of testing, such as exploratory testing, still require a lot of human creativity and ingenuity. The struggles of adopting AI are totally worth it since testers gain so much more than they lose.

Initial investment into AI training data

To have a custom-made AI that suits your needs, some effort must be invested in the training process. This is a resource-intensive activity, requiring not just training data and computing power but also time and the right talent with AI expertise. The energy required to maintain even a decently good model is also immense.

To overcome this, you can build your AI on a pre-trained foundation model, essentially fine-tuning it to perform specific tasks to cater to your testing needs. These models are already trained on extensive datasets, providing a solid foundation for various tasks. 

Through fine-tuning, users can customize these models to suit specific requirements or industry nuances. This approach essentially combines the advantages of the initial training with the flexibility to tailor the model according to specific needs.
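
As a rough illustration, fine-tuning a hosted foundation model can be as simple as the sketch below, assuming the OpenAI fine-tuning API; the file name, data format, and base model are placeholders to adapt to your own setup:

```python
# Rough sketch of fine-tuning a hosted foundation model on your own
# test-case examples, assuming the OpenAI fine-tuning API. The file
# name, data format, and base model are placeholders.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted examples, e.g. pairs of test
# scenarios and the scripts your team considers correct.
training_file = client.files.create(
    file=open("test_case_examples.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed base model
)
print(job.id, job.status)
```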

Ethical considerations of AI

Where can we draw the line between ethical and unethical use of AI? The practice of using historical social data to train AI raises concerns about data bias.

The result of such bias is that the training outcome may mirror the societal picture the dataset represents, and in the long run, it can further reinforce societal stereotypes.

Test your limits!

Adopting AI for software testing is undoubtedly an ambitious and futuristic endeavor that is sure to change the way we think about and do testing.

Yes, the transition can be messy with so many emerging terminologies, concepts, and discussions, but it will surely bring a host of interesting benefits as well.

Learn how test automation simplifies software testing and ensures higher software quality with minimal manual effort.

Edited by Jigmee Bhutia


Hy Nguyen is a content marketing specialist at Katalon. A tech enthusiast with a passion for writing, Hy writes for QA personas, focusing on topics such as the application of AI, software testing, software development, and business. In his spare time, he loves board games, karaoke, and watching comedy.
