
Introducing Applause’s First Agentic Workflow

It’s no secret that AI, particularly agentic AI, is changing the way organizations build and test apps. As a testing partner for many of the world’s technology innovators, we have a front-row seat as those developments unfold. But we’re not just watching from the sidelines – we’re incorporating agentic workflows into our own roadmap.

In a May 2024 Forbes blog on functional testing, I noted that as digital products grow more complicated, with more dynamic elements, testing grows more complex as well. Many organizations, reasonably, turn to automation to improve coverage while controlling costs. But automation, like other types of testing, becomes harder and more intricate as products evolve. Automation alone cannot guarantee great digital quality; it can, however, help considerably.

With that in mind, Applause’s engineering team began experimenting with agentic workflows to rapidly develop test automation solutions. We set out to accelerate customer onboarding by using AI systems to reduce human involvement in project setup, GitHub configuration, and even the creation of initial starter tests built on the open-source Applause automation framework.

This system automates the creation and execution of web-based test scripts using components and agents, and it also creates the project structure and GitHub repository for the customer. For customers who have opted in, we’ve already incorporated generative AI into our test case management system to rewrite and update test cases; this workflow operates at a much higher level than asking AI for a function that does one specific thing. It combines distinct components and calls on AI to reason and process information in addition to generating test cases and scripts. It relies on multiple models, including both small language models and large language models.

The basic workflow (sketched in code below the list):

  • Capturing user inputs and web page data (component)
  • Generating a structured test plan (agentic workflow)
  • Translating the test plan into executable Java code (agentic workflow)
  • Executing the tests and analyzing results (component)
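In code, the end-to-end shape might look something like the minimal sketch below. Every type and method name here is hypothetical – the internal implementation isn’t public – but it shows how the two deterministic components bracket the two agentic stages:

```java
// A minimal sketch of the four-stage pipeline. Every name below is
// hypothetical; the real implementation is internal to Applause.
import java.util.List;
import java.util.Map;

record Mission(String targetUrl, List<String> features, Integer testCount) {}
record PageData(String html, String domXml, byte[] screenshot) {}
record TestPlan(List<String> testCases) {}
record GeneratedSuite(Map<String, String> javaSources) {}
record RunReport(int passed, int failed) {}

interface PageCapture   { PageData collect(String url); }                  // component
interface TestPlanAgent { TestPlan plan(Mission m, PageData d); }          // agentic (LLM)
interface CodeGenAgent  { GeneratedSuite toJava(TestPlan p); }             // agentic (LLM)
interface TestRunner    { RunReport executeAndAnalyze(GeneratedSuite s); } // component

class AgenticTestWorkflow {
    private final PageCapture capture;
    private final TestPlanAgent planner;
    private final CodeGenAgent codegen;
    private final TestRunner runner;

    AgenticTestWorkflow(PageCapture c, TestPlanAgent p, CodeGenAgent g, TestRunner r) {
        capture = c; planner = p; codegen = g; runner = r;
    }

    RunReport run(Mission mission) {
        PageData data = capture.collect(mission.targetUrl()); // Step 1: component
        TestPlan plan = planner.plan(mission, data);          // Step 2: agentic
        GeneratedSuite suite = codegen.toJava(plan);          // Step 3: agentic
        return runner.executeAndAnalyze(suite);               // Step 4: component
    }
}
```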

Let’s walk through each of these steps in more detail.

Step 1: Capturing user inputs

User input defines the high-level mission. The user can specify features, test cases, or the number of tests to create – or let the AI figure it out – then point it at an app or a page.
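As a concrete illustration, reusing the hypothetical Mission record from the pipeline sketch above, a mission can pin everything down or leave fields null to mean “let the AI decide”:

```java
import java.util.List;

// Hypothetical mission shape (same as in the pipeline sketch); null fields
// mean "let the AI decide."
record Mission(String targetUrl, List<String> features, Integer testCount) {

    public static void main(String[] args) {
        Mission fullySpecified = new Mission(
                "https://shop.example.com/checkout",       // page under test
                List.of("coupon codes", "guest checkout"), // features to cover
                10);                                       // number of tests to create

        Mission autoExplore = new Mission(
                "https://shop.example.com", null, null);   // AI figures out the rest

        System.out.println(fullySpecified);
        System.out.println(autoExplore);
    }
}
```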

Step 2: Generating a structured test plan

The system takes all the information from that page or app to figure out what the asset is and what types of tests it needs to create. It has to look at the HTML, pull out the XML, capture screenshots, reason about what it sees, come up with recommended test cases, and then create a test plan.
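Here’s a rough sketch of that capture-and-reason step. It assumes Selenium for the page capture; the LlmClient interface and the prompt are ours, standing in for the model round-trip:

```java
// Sketch of the page-analysis step. Selenium is real; LlmClient and the
// prompt are hypothetical stand-ins for the model round-trip.
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class PageAnalyzer {
    interface LlmClient { String complete(String prompt); } // hypothetical client

    private final LlmClient llm;
    PageAnalyzer(LlmClient llm) { this.llm = llm; }

    String recommendTestPlan(String url) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get(url);
            String html = driver.getPageSource();       // raw HTML of the page
            byte[] screenshot = ((TakesScreenshot) driver)
                    .getScreenshotAs(OutputType.BYTES); // for a vision-capable model
            // Ask the model to reason about the page before planning tests.
            String prompt = """
                    You are a QA planner. From the HTML below, identify what
                    the asset is, recommend test cases, and emit a structured
                    test plan as JSON.
                    """ + html;
            return llm.complete(prompt); // screenshot omitted in this text-only sketch
        } finally {
            driver.quit();
        }
    }
}
```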

Step 3: Writing test scripts

At this point, the system draws on its understanding of the Applause automation framework, which is open source. It has to write tests that match those test cases and produce working, runnable Java code.
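The real output builds on the Applause automation framework, whose specifics are beyond the scope of this post, so a plain JUnit 5 + Selenium test stands in below as a hypothetical example of the kind of runnable Java this stage emits:

```java
// Hypothetical example of generated output: a plain JUnit 5 + Selenium test.
// The page URL and element IDs are made up for illustration.
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginPageGeneratedTest {
    private WebDriver driver;

    @BeforeEach
    void setUp() { driver = new ChromeDriver(); }

    @AfterEach
    void tearDown() { driver.quit(); }

    @Test
    void loginFormIsPresent() {
        driver.get("https://shop.example.com/login"); // hypothetical target page
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
        assertTrue(driver.findElement(By.id("password")).isDisplayed());
    }
}
```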

Step 4: Executing tests and analyzing results

As the final step in the workflow, the system executes the automated tests and analyzes the results to determine which tests passed or failed.
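If you picture the generated suite landing as JUnit classes, this step can be sketched with the JUnit Platform Launcher API – run the class, then triage the summary (the agentic analysis is reduced here to a printout):

```java
// Sketch of the execute-and-analyze step using the JUnit Platform Launcher.
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;
import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

class GeneratedSuiteRunner {
    static TestExecutionSummary runAndAnalyze(Class<?> generatedTestClass) {
        LauncherDiscoveryRequest req = request()
                .selectors(selectClass(generatedTestClass))
                .build();
        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(req);

        TestExecutionSummary summary = listener.getSummary();
        System.out.printf("passed=%d failed=%d%n",
                summary.getTestsSucceededCount(), summary.getTestsFailedCount());
        summary.getFailures().forEach(f ->
                System.out.println(f.getTestIdentifier().getDisplayName()
                        + " -> " + f.getException()));
        return summary;
    }
}
```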

For now, the workflow is deliberately less autonomous: it doesn’t iterate on or refine its own output – that’s planned for a future generation. As we developed our first agentic workflow, we wanted to limit risk. To that end, we focused on:

  • Clearly scoped rules, permissions and access boundaries
  • Audit logs for every agent action (sketched in code after this list)
  • Sandboxed execution environments
  • Strict access control and API rate limiting
  • Thorough testing with humans in the loop
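As one example of these safeguards, here is a minimal sketch of audit logging for agent actions; the wrapper and its names are ours, not the production implementation:

```java
// Minimal sketch of one safeguard: an audit log around every agent action.
// All names are hypothetical.
import java.time.Instant;

class AuditedAgent {
    interface AgentAction { String execute(); }   // hypothetical unit of agent work

    private final String agentName;
    AuditedAgent(String agentName) { this.agentName = agentName; }

    String perform(String actionName, AgentAction action) {
        log("START", actionName, null);
        try {
            String result = action.execute();
            log("OK", actionName, null);
            return result;
        } catch (RuntimeException e) {
            log("FAIL", actionName, e.getMessage());
            throw e;   // surface the failure to a human reviewer
        }
    }

    private void log(String status, String actionName, String detail) {
        // In production this would go to an append-only store, not stdout.
        System.out.printf("%s agent=%s action=%s status=%s %s%n",
                Instant.now(), agentName, actionName, status,
                detail == null ? "" : detail);
    }
}
```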

The process is a quick start for our new opt-in customers, designed to create an automation project from scratch with initial running tests. We’re breaking the agentic workflow into phases and exposing each one individually. This lets users execute a single piece of the process and review or refine the data, effectively adding more human-in-the-loop touchpoints to deliver high-quality tests. From there, our automation experts take the project, review the work, and continue to refine and expand it to meet the customer’s requirements, including coverage needs.
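A rough sketch of that phase-by-phase exposure, with a human review gate between stages (again, all names hypothetical):

```java
// Sketch of phase-by-phase exposure: each stage's draft output pauses for
// human review before anything downstream sees it. All names hypothetical.
import java.util.function.Function;

class PhasedWorkflow {
    /** A human reviewer may accept, edit, or replace a phase's draft output. */
    interface Reviewer { String review(String phaseName, String draft); }

    private final Reviewer reviewer;
    PhasedWorkflow(Reviewer reviewer) { this.reviewer = reviewer; }

    String runPhase(String phaseName, Function<String, String> phase, String input) {
        String draft = phase.apply(input);        // agent or component produces a draft
        return reviewer.review(phaseName, draft); // human touchpoint: approve or refine
    }
}
```

A caller might run `runPhase("test-plan", planner::generate, pageData)` and hand the draft plan to a reviewer before any code is generated.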

We’re looking forward to evolving this workflow to become more autonomous. We’re examining additional business processes from an agentic workflow perspective to identify places where we can make them smarter, better and more valuable to the business – and our customers.

Published On: July 16, 2025
