ML-Based Test Automation Represents Future for QA Teams
Algorithms can eliminate flaky scripts and shorten time to value
The transformation of software development and testing has only just begun. As we enter a new decade, traditional manual and automated testing will remain essential, but there are new alternatives with artificial intelligence (AI) and machine learning (ML) entering the ring.
In recent years, organizations sought to quickly deliver value to customers. This led to a shift from Waterfall to Agile and DevOps methodologies. To succeed in a DevOps culture, organizations need a high degree of automation present throughout their entire pipeline of activities, including testing.
While these organizations found ways to build and develop software in smaller chunks (e.g., sprints, a squad team structure, other DevOps processes), the QA element has remained largely unchanged.
This is where new AI and ML algorithms come into play.
Limits of traditional test automation
Until recently, QA teams and developers leveraged leading open-source test automation frameworks such as Selenium, Appium, and other code-based scripting solutions. These solutions have great practices, knowledge bases, and documentation. That said, there are a few common issues associated with code-based scripting solutions. They:
- Tend to be flaky and unstable over time as the app changes
- Require coding skills and familiarity with the development environments of test engineers and developers (e.g., IntelliJ, Eclipse, etc.), which limits who can contribute
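The first issue, flakiness, often comes down to how tests locate elements. The sketch below illustrates the difference with a toy page model; `find_by_index` and `find_by_id` are hypothetical helpers standing in for real locator strategies, not any framework's API:

```python
# Illustrative sketch: why layout-dependent locators are flaky.
# The "page" is a plain list of element dicts standing in for a real DOM.

def find_by_index(elements, index):
    """Brittle: depends on the page layout staying frozen."""
    return elements[index]

def find_by_id(elements, element_id):
    """Stable: survives reordering and insertions."""
    return next(e for e in elements if e["id"] == element_id)

page_v1 = [
    {"id": "logo", "text": "Acme"},
    {"id": "login-btn", "text": "Log in"},
]

# A later release inserts a banner above the button.
page_v2 = [{"id": "promo-banner", "text": "Sale!"}] + page_v1

# The index-based locator now resolves the wrong element...
assert find_by_index(page_v1, 1)["id"] == "login-btn"
assert find_by_index(page_v2, 1)["id"] != "login-btn"

# ...while the id-based locator still finds the button.
assert find_by_id(page_v2, "login-btn")["text"] == "Log in"
```

The same app change that silently breaks the index-based lookup leaves the id-based one untouched, which is why unstable locators surface as run-to-run flakiness rather than clean failures.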
Given these issues, QA teams are seeking more stable alternatives that are easier to script and maintain, and offer faster time to value or feedback. For many software testing teams, the answer is ML-based test automation solutions.
However, ML-based test automation is not a magic bullet. When is the right time to use ML-based test automation? When should you stick with a traditional method? It depends on the use case.
| Aspect | Traditional test automation | ML-based test automation |
| --- | --- | --- |
| Test creation | Define manual flows, BDD style | Record test flows (usually no coding) |
| Maintenance | Changes required proactively | Self-healing/correction automatically handled |
| Maturity | High (including guidelines, documents) | Emerging (web more advanced than mobile) |
| Testing types | API, load, functional | Mostly functional and API |
Use cases for ML-based test automation
Organizations cannot and should not completely shift their test strategy to ML-based testing. Development and testing teams should assess when ML-based testing is right for them, with clear KPIs and success metrics spelled out.
The following are four of the top use cases for ML-based test automation. These can serve as a starting point to find other use cases:
- Eliminate specific, flaky code-based test scripts
- Provide business testers an alternative for test automation creation
- Increase test automation coverage
- Accelerate time to create and maintain test automation
Eliminate specific flaky code-based test scripts
Flaky, code-based test scripts are a killer for your digital quality and are often the result of poor coding practices. Flakiness erodes confidence in test automation scripts. How do you know if you have flaky test scripts? Here are some indicators:
- Test results are inconsistent from run to run or platform to platform
- Tests aren’t using stable object locators
- Tests don’t properly handle environment-related implications (e.g., pop-ups, interrupts, etc.)
If you have flaky scripts, identify your testing bottlenecks and where you’re not getting value from code-based test automation.
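The first indicator, inconsistent results, can be measured directly by rerunning a test and checking its pass rate. A minimal sketch, with a seeded random stand-in for a genuinely flaky check (e.g., one racing an async pop-up):

```python
import random

def pass_rate(test_fn, runs=20):
    """Run a test repeatedly; any pass rate below 1.0 marks it as flaky."""
    passes = sum(1 for _ in range(runs) if test_fn())
    return passes / runs

rng = random.Random(42)  # seeded so the sketch is reproducible
flaky_test = lambda: rng.random() > 0.3   # passes roughly 70% of the time
stable_test = lambda: True

assert pass_rate(stable_test) == 1.0
assert pass_rate(flaky_test) < 1.0   # inconsistent run to run -> flag for review
```

Tests that fail this check intermittently, rather than deterministically, are the ones worth replacing or rebuilding first.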
Provide business testers an alternative for test automation creation
Test automation suffers from low success rates these days. In addition to flaky, code-based test scripts, there are two main reasons:
- Developers and test engineers are pressed for time
- Agile feature teams lack the skills to create automation scripts within sprints
The lack of skills in Agile feature teams represents an opportunity for data scientists and business testers. These non-developers can leverage ML-based tools to create robust test automation scripts for functional and exploratory testing through simple record-and-playback flows.
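Record-and-playback, the creation model most ML-based tools expose to non-developers, can be sketched in a few lines. The `Recorder` and `ToyApp` classes here are hypothetical stand-ins, not a real tool's API:

```python
# Minimal record-and-playback sketch: capture user actions once,
# then replay them as an automated script.

class Recorder:
    def __init__(self):
        self.steps = []

    def record(self, action, target, value=None):
        self.steps.append((action, target, value))

    def replay(self, app):
        for action, target, value in self.steps:
            getattr(app, action)(target, value)

class ToyApp:
    """A stand-in for the application under test."""
    def __init__(self):
        self.fields = {}

    def type(self, target, value):
        self.fields[target] = value

    def click(self, target, _=None):
        if target == "submit":
            self.fields["submitted"] = True

# A business tester "records" a login flow once...
rec = Recorder()
rec.record("type", "username", "alice")
rec.record("type", "password", "s3cret")
rec.record("click", "submit")

# ...and it replays as an automated test, with no scripting on their part.
app = ToyApp()
rec.replay(app)
assert app.fields == {"username": "alice", "password": "s3cret", "submitted": True}
```

The real tools layer ML on top of this loop (stabilizing locators, healing broken steps), but the authoring experience remains this simple capture-then-replay cycle.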
Increase test automation coverage
When you replace manual testing with ML-based test automation, you’ll likely increase the overall test automation coverage and reduce the risk of defects escaping into production. That’s great, but you still need to ensure your team works efficiently and drives value. Properly scope the ML-based test automation suite with team members to avoid duplicates and focus on problematic areas.
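The scoping step above, pruning duplicates between the existing suite and the ML-recorded one, can be automated by normalizing each flow's steps and comparing keys. A minimal sketch with hypothetical flow data:

```python
# Sketch: flag duplicate flows before merging ML-recorded tests into the suite.
# Flows are normalized step lists; identical keys indicate duplicate coverage.

def normalize(steps):
    return tuple(s.strip().lower() for s in steps)

def find_duplicates(suite):
    seen, dupes = {}, []
    for name, steps in suite.items():
        key = normalize(steps)
        if key in seen:
            dupes.append((seen[key], name))
        else:
            seen[key] = name
    return dupes

suite = {
    "manual_login": ["Open /login", "Type user", "Click Submit"],
    "ml_login":     ["open /login", "type user", "click submit"],  # same flow, recorded by the ML tool
    "checkout":     ["Open /cart", "Click Pay"],
}

assert find_duplicates(suite) == [("manual_login", "ml_login")]
```

Running a check like this during suite scoping keeps coverage growth from turning into redundant maintenance work.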
You also must consider how teams will view the two methods’ results. Teams must strive towards a consistent quality dashboard that shows both test automation reports in a single view so management can assess the overall product quality with ease.
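A single-view dashboard ultimately reduces to merging the two suites' results into one summary. The report shapes below are assumptions for illustration; real tools emit formats like JUnit XML or JSON that would be parsed first:

```python
# Sketch: combine code-based and ML-based results into one quality summary.

code_based = {"passed": 120, "failed": 5}
ml_based   = {"passed": 45,  "failed": 2}

def combined_summary(*reports):
    total  = sum(r["passed"] + r["failed"] for r in reports)
    passed = sum(r["passed"] for r in reports)
    return {"total": total, "passed": passed, "pass_rate": round(passed / total, 3)}

summary = combined_summary(code_based, ml_based)
assert summary == {"total": 172, "passed": 165, "pass_rate": 0.959}
```

Management then reads one pass rate for the product, regardless of which automation method produced each result.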
Accelerate time to create and maintain test automation
On average, ML-based test automation is six times faster than code-based testing, which means faster time to value.
What makes ML-based test automation so much faster? Code-based testing requires the developer to build the proper environment (e.g., Selenium Grid), set up the prerequisites through code and debug the code from within the IDE. This takes significant time, skills and effort — and it’s not a one-time investment. As the product changes, the developer must continually update the code.
On the other hand, ML-based test creation is typically a record-and-playback process with built-in self-healing algorithms. This generally does not require heavy maintenance, unless there are significant changes in the element locators or the product itself.
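The self-healing idea can be sketched as a fallback lookup: when the recorded locator no longer matches, pick the candidate element sharing the most attributes with the one originally recorded. The scoring heuristic here is deliberately naive; real tools use learned models:

```python
# Sketch of a self-healing locator: if the primary id lookup fails,
# fall back to the closest match on the remaining recorded attributes.

def heal(elements, wanted):
    """Return the element sharing the most attributes with the recorded one."""
    def score(e):
        return sum(1 for k, v in wanted.items() if e.get(k) == v)
    best = max(elements, key=score)
    return best if score(best) > 0 else None

def locate(elements, wanted):
    # Primary strategy: exact id match, as captured at recording time.
    for e in elements:
        if e.get("id") == wanted.get("id"):
            return e
    return heal(elements, wanted)  # id changed -> self-heal instead of failing

recorded = {"id": "login-btn", "text": "Log in", "tag": "button"}
new_page = [
    {"id": "promo", "text": "Sale!", "tag": "div"},
    {"id": "signin-btn", "text": "Log in", "tag": "button"},  # id was renamed
]

# A plain id locator would fail here; the healed lookup still finds the button.
assert locate(new_page, recorded)["id"] == "signin-btn"
```

This is why such scripts generally avoid heavy maintenance: routine attribute renames are absorbed automatically, and only deep product changes force a re-recording.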
However, ML-based tools are less mature than code-based tools. As a result, there is less flexibility and integration with other tools and frameworks. You should consider this last point as you scope out where to apply ML-based automated testing.
The future of ML-based test automation
You can expect a lot of change in the ML-based test automation realm over the next few years. For starters, ML tools are evolving, and the next one to two years are critical for DevOps teams to adopt, integrate and change their processes to bring these tools into their SDLC.
Teams will need new methodologies to determine when they should use ML-based versus the traditional code-based options. In addition, the tools will need to seamlessly integrate into existing CI/CD tools and frameworks as well as reporting structures.
Lastly, ML tools will evolve to cover additional testing types outside of functional testing, such as security testing.
I recommend exploring how ML-based test automation can complement existing code-based practices, and identifying the top challenges these tools can best address. With the right approach, ML-based test automation can immediately increase the value of your software development cycle in the new decade.