
Highlights From The 2025 State of Digital Quality Benchmark Survey

Does it work? 

It seems like a simple question. But it’s the launch pad for the entire software testing and QA process – does it work when someone does X? Does it work on these devices? Does it work under specific conditions? Does it work easily, intuitively, quickly, consistently? The list goes on.  

For Applause’s annual State of Digital Quality series, we want to understand some related questions: How do teams determine whether an application works? How are they measuring quality? What types of tests and QA measures are most common? How are they changing over time? Where are the challenges in the drive to release high-quality digital experiences quickly, time and time again?

This blog post highlights some of the key findings from this year’s survey of more than 2,100 software development and testing professionals around the world. You can read the full report here. 

1. Organizations rely on a variety of tests, QA measures and metrics to evaluate digital quality.

Of the five most common indicators used to measure digital quality, four center on customer feedback and behavior:

  • customer satisfaction research: 59.8%
  • customer sentiment and feedback: 51% 
  • test coverage: 39.9%
  • number of customer support tickets: 39.4%
  • increase in activity (logins, purchases, etc.): 37.4%

It’s no surprise, then, that teams prioritize user experience, usability and user acceptance testing among the top testing types. Unit testing, integration testing and checking for bugs in staging are the most common quality control activities.  

Though most organizations rely on an internal QA team to carry out testing (73.4%), engineering (35.6%) and DevOps (31%) also conduct tests. In addition, 33.3% of respondents listed crowdtesting as part of their quality control activities.

Teams are also increasingly embedding testing throughout the process. While 41.7% of respondents to our previous survey reported they only tested and gathered feedback at the testing stage of the SDLC, this year just 14.7% said they limit testing to a single stage. 

2. In just a short time, AI has become a crucial part of testing at many organizations.

This year, 59.6% stated that their organization uses AI in the testing process – a 96% increase over the number doing so in last year’s AI survey. Currently, 70.3% say they use it to create test cases. Other top use cases are creating test scripts for automation (54.8%) and analyzing test outcomes to recommend improvements (47.7%).

AI is also helping teams improve coverage, prioritize test cases based on risk and usage patterns, and recommend improvements to test plans and code. Using AI can often free up time for members of busy dev and QA teams to focus on more strategic priorities. Humans remain an essential part of QA, despite AI’s rapid adoption. People still train and test AI systems, and evaluate (and often act on) AI outputs. Human judgment is critical for many subjective tests and aspects of QA that demand creativity. While agentic and generative AI may become more deeply integrated into testing in the next few years, don’t expect humans to be fully removed from the QA process any time soon. 

3. While teams are gaining efficiency and improving coverage through AI and automation, they still encounter obstacles on the path to excellent digital quality.

We added a new section to this year’s survey, asking respondents to rank the difficulty of various testing challenges. Challenges spanned three categories: skills and resource constraints, scale and coverage, and documentation. The top challenge was lack of time for sufficient testing prior to release, with 36.8% finding this very or extremely challenging. Testing in inconsistent or unstable environments and keeping up with rapidly changing requirements were other common causes for concern – and ones less likely to be solved by AI.

Rob Mason, Applause CTO, said: “Speed matters — but not at the cost of quality. The most competitive teams integrate functional testing early and often, using AI and automation to accelerate coverage without cutting corners. But tooling alone isn’t enough. Combining structured automation with real-world testing and continuous feedback loops ensures faster releases don’t mean riskier ones.”

Teams must constantly assess the effectiveness of their quality assurance activities and adjust their strategies to remain competitive. Balancing investments in AI and automation with human-in-the-loop testing will be imperative for teams looking to cover all the bases as user expectations rise ever higher.   

 

Special Report
The State of Digital Quality in Functional Testing 2025

Published On: September 17, 2025
