
Introducing Applause’s First Agentic Workflow

It’s no secret that AI, particularly agentic AI, is changing the way organizations build and test apps. As a testing partner for many of the world’s technology innovators, we have a front-row seat as those developments unfold. But we’re not just sitting back and watching from the sidelines – we’re incorporating agentic workflows into our own roadmap.

In a May 2024 Forbes blog on functional testing, I noted that as digital products grow more complicated, with more dynamic elements, testing becomes more complex as well. Many organizations, reasonably, turn to automation to improve coverage while controlling costs. But automation, like other types of testing, gets harder and more intricate over time. Automation alone cannot guarantee great digital quality; it can, however, definitely help.

With that in mind, Applause’s engineering team began experimenting with agentic workflows to help quickly develop test automation solutions. We set out to accelerate customer onboarding by using AI systems to reduce human involvement in project setup, GitHub configuration, and even the creation of initial starter tests built on the open-source Applause automation framework.

This system automates the process of creating and executing web-based test scripts using components and agents, and it also creates the project structure and GitHub repository for the customer. For customers who have opted in to participate, we’ve already incorporated generative AI into our test case management system to rewrite and update test cases; this workflow operates at a much higher level than simply asking AI for a function that does one specific thing. It includes several components and calls for AI to carry out reasoning and processing in addition to generating test cases and scripts, and it relies on multiple models, including both small language models and large language models.

The basic workflow:

  • Capturing user inputs and web page data (component)
  • Generating a structured test plan (agentic workflow)
  • Translating the test plan into executable Java code (agentic workflow)
  • Executing the tests and analyzing results (component)
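
At a high level, these four phases form a pipeline where each step consumes the previous step’s artifact. Here is a minimal sketch of that shape – all names are hypothetical and illustrative, not Applause’s actual implementation:

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical pipeline sketch: each phase transforms the artifact produced
// by the previous phase. Real phases call models and tools; these stubs just
// label the hand-off so the data flow is visible.
class AgenticPipeline {
    static String captureInputs(String url)   { return "inputs(" + url + ")"; }
    static String generatePlan(String in)     { return "plan(" + in + ")"; }
    static String generateCode(String plan)   { return "code(" + plan + ")"; }
    static String executeTests(String code)   { return "results(" + code + ")"; }

    static String run(String url) {
        List<Function<String, String>> phases = List.of(
            AgenticPipeline::captureInputs,
            AgenticPipeline::generatePlan,
            AgenticPipeline::generateCode,
            AgenticPipeline::executeTests);
        String artifact = url;
        for (Function<String, String> phase : phases) {
            artifact = phase.apply(artifact);  // each step's output feeds the next
        }
        return artifact;
    }

    public static void main(String[] args) {
        // prints results(code(plan(inputs(https://example.com))))
        System.out.println(run("https://example.com"));
    }
}
```

Modeling each phase as a function over a shared artifact is also what makes it possible to expose phases individually for human review, as described later.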

Let’s walk through each of these steps in more detail.

Step 1: Capturing user inputs

User input defines the high-level mission. The user can specify features, test cases, or the number of tests to create – or let the AI decide – and then point it at an app or a page.
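
Because every scoping detail is optional except the target, the mission spec might be modeled along these lines – a hypothetical sketch, not Applause’s actual data model:

```java
import java.util.Optional;

// Hypothetical mission spec: only the target URL is required. Every other
// field is optional, so an empty value means "let the AI decide".
record MissionSpec(String targetUrl,
                   Optional<String> feature,
                   Optional<Integer> testCount) {

    // Convenience factory: point the workflow at a page and nothing else.
    static MissionSpec forUrl(String url) {
        return new MissionSpec(url, Optional.empty(), Optional.empty());
    }

    // True when the user specified no scope, leaving it to the AI.
    boolean aiDecidesScope() {
        return feature.isEmpty() && testCount.isEmpty();
    }
}
```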

Step 2: Generating a structured test plan

The system gathers all the information from that page or app to determine what the asset is and what types of tests it needs to create. It examines the HTML, extracts the XML, captures screenshots, reasons over that data, recommends test cases, and then creates a test plan.
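
The key property of this step is that the model’s output is structured rather than free text, so later phases can consume it mechanically. A hypothetical sketch of that structure (element names and case-naming are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical typed test case: name, ordered steps, expected outcome.
record PlannedCase(String name, List<String> steps, String expected) {}

// Hypothetical test plan that later phases (code generation, execution)
// can iterate over.
class TestPlan {
    final List<PlannedCase> cases = new ArrayList<>();

    // Sketch of deriving candidate cases from interactive elements found
    // on the page. In the real workflow a model proposes these; here we
    // derive them mechanically just to show the target shape.
    static TestPlan fromElements(List<String> interactiveElements) {
        TestPlan plan = new TestPlan();
        for (String el : interactiveElements) {
            plan.cases.add(new PlannedCase(
                "verify_" + el,
                List.of("locate " + el, "interact with " + el),
                el + " responds without error"));
        }
        return plan;
    }
}
```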

Step 3: Writing test scripts

At this point, the system has an understanding of the open-source Applause automation framework. It now has to write tests that match those test cases and produce working, runnable Java code.
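
The Applause framework’s API isn’t shown here, so as a stand-in, here is a hypothetical sketch of the translation step – rendering one structured test case into Java source text. In the real workflow a model does this generation; the point is only the target shape:

```java
import java.util.List;

// Hypothetical translation step: render a planned test case into the text
// of a Java test method. Step descriptions become placeholder comments that
// a model (or an engineer) would replace with real framework calls.
class CodeGenerator {
    static String toJavaMethod(String name, List<String> steps) {
        StringBuilder sb = new StringBuilder();
        sb.append("    @Test\n");
        sb.append("    public void ").append(name).append("() {\n");
        for (String step : steps) {
            sb.append("        // ").append(step).append("\n");
        }
        sb.append("    }\n");
        return sb.toString();
    }
}
```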

Step 4: Executing tests and analyzing results

As the final step in the workflow, the system executes the automated tests and analyzes the test results to determine which tests passed or failed.
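
The analysis side of this step reduces raw outcomes to a pass/fail summary. A minimal sketch, assuming outcomes arrive as a map of test name to result (not Applause’s actual reporting format):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical result analysis: collapse per-test outcomes into counts of
// passed and failed tests.
class ResultAnalyzer {
    static Map<String, Long> summarize(Map<String, Boolean> outcomes) {
        Map<String, Long> summary = new LinkedHashMap<>();
        summary.put("passed", outcomes.values().stream().filter(p -> p).count());
        summary.put("failed", outcomes.values().stream().filter(p -> !p).count());
        return summary;
    }
}
```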

The workflow is currently less autonomous in that it doesn’t iterate or go back and refine itself; that’s planned for a future generation. As we developed our first agentic workflow, we wanted to limit risks. To that end, we focused on:

  • Clearly scoped rules, permissions and access boundaries
  • Audit logs for every agent action
  • Sandboxed execution environments
  • Strict access control and API rate limiting
  • Thorough testing with humans in the loop
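
Of these safeguards, API rate limiting is the easiest to picture concretely. A minimal token-bucket sketch of the kind that could cap an agent’s API calls (this is illustrative, not Applause’s actual implementation):

```java
// Minimal token-bucket rate limiter: calls drain tokens, time refills them.
// When the bucket is empty, tryAcquire() returns false and the caller must
// back off – capping how fast an agent can hit an API.
class TokenBucket {
    private final int capacity;
    private final double refillPerSecond;
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(int capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, never beyond capacity.
        tokens = Math.min(capacity,
                tokens + (now - lastRefillNanos) / 1e9 * refillPerSecond);
        lastRefillNanos = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```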

The process is a quick start for our new opt-in customers, designed to create an automation project from scratch with initial running tests. We’re breaking up the agentic workflow and exposing each phase individually. This allows users to execute a single piece of the process and review or refine the data, effectively adding more human-in-the-loop touchpoints to deliver high-quality tests. From here, our automation experts take that project, review the work, and continue to refine and expand it to meet the customer’s requirements, including coverage needs.

We’re looking forward to evolving this workflow to become more autonomous. We’re examining additional business processes from an agentic workflow perspective to identify places where we can make them smarter, better and more valuable to the business – and our customers.

Published: July 16, 2025
