
Which Metrics Are Organizations Using to Monitor Digital Quality?

If you ask a group of software engineers and quality assurance professionals what defines digital quality, you’ll probably get a wide variety of answers – including plenty of overlap, along with some outliers. It can be hard to distill a shared definition, as so many different factors shape each user’s perceptions of what makes software good. User opinions vary wildly, with people prioritizing different aspects of the experience. Still, you’d be hard-pressed to find a developer or QA pro who believes digital quality can’t be measured somehow, difficult though it may be to define.

So what metrics are teams using to evaluate digital quality and help them develop successful software products? Applause surveyed uTest community members who hold jobs in software engineering, QA, Ops, product or DevOps to better understand how different organizations are measuring quality and using that data to guide testing efforts, improve software reliability, and boost user satisfaction.

Explore some common digital quality metrics

Many organizations start by looking at coverage metrics. Visibility into the percentage of code coverage, unit test coverage, integration test coverage, and end-to-end test coverage in place helps reveal gaps and can showcase opportunities to incorporate testing at all levels of development. Assessing coverage across different devices, OSes and networks can also be a useful data point. Of 4,554 respondents to this question in Applause’s survey, 38.5% reported that their organization uses test coverage as a way to measure digital quality.
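The coverage figures above reduce to simple ratios. As a minimal, hypothetical sketch (the function names are illustrative, not from the survey or any specific coverage tool):

```python
def line_coverage(covered_lines: int, total_lines: int) -> float:
    """Percentage of executable lines exercised by tests."""
    if total_lines == 0:
        return 0.0  # no executable lines: report zero rather than divide by zero
    return 100.0 * covered_lines / total_lines

# Example: 850 of 1,000 executable lines hit by the test suite
print(line_coverage(850, 1000))  # → 85.0
```

In practice, tools such as coverage reporters compute this per file and per branch; the point is that the raw number says nothing about whether those tests assert anything meaningful.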

While understanding coverage is a valuable starting point, this metric only tells part of the story. Focusing too narrowly on coverage percentages without considering the efficacy of test cases can lead to a false sense of security or a distorted view of an application’s overall quality.

Defect metrics provide another view into an application’s health. Defect density, open/close rates, and time to fix all allow development and QA teams a way to assess quality: 36.4% of survey respondents reported that their organizations use the number of defects reaching production as a key metric.

Reducing the number of defects at all stages of development – and detecting problems earlier in the SDLC – certainly shows improvement in overall quality and the effectiveness of the QA process. The severity of the defects must factor into the equation as well. Users may tolerate some issues that cause minor inconvenience, but they’re unlikely to forgive a catastrophic failure. If your team and tests are catching minor defects while major issues slip through unnoticed and unaddressed, there’s still a problem. Monitoring the quantity of defects alone can’t provide an accurate health assessment without context around severity or impact.
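Two of the defect metrics mentioned above have standard, simple definitions. A hedged sketch (function names and sample figures are hypothetical, not from the survey):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def escape_rate(prod_defects: int, total_defects: int) -> float:
    """Percentage of all known defects that reached production
    instead of being caught earlier in the SDLC."""
    if total_defects == 0:
        return 0.0
    return 100.0 * prod_defects / total_defects

# Example: 12 defects across a 48,000-line codebase,
# 3 of 60 total defects discovered in production
print(defect_density(12, 48_000))  # → 0.25 defects per KLOC
print(escape_rate(3, 60))          # → 5.0 percent
```

As the paragraph above notes, both numbers need severity weighting to be trustworthy: one escaped critical failure outweighs a dozen escaped cosmetic issues.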

In addition, 1,709 survey respondents indicated that their organizations use test case reporting and metrics to measure quality KPIs. One comment summarizes the ways most are using this data: “To measure the progress of testing activities over time, to analyze the efficiency of test cases in identifying defects, and to ensure that each requirement has associated test cases.”

User experience (UX) metrics, including user satisfaction scores, usability testing results, and user engagement metrics, offer insight into some of the more qualitative aspects of digital quality. Though user opinions may be subjective, analyzing trends and preferences can guide improvements that enhance the overall experience. Survey respondents shared some of the ways organizations quantify user experience, including:

  • Customer satisfaction research: 55.5%
  • Customer sentiment/feedback: 47%
  • Net Promoter Score (NPS): 14.4%
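Of these, NPS has a fixed formula: respondents rate likelihood to recommend on a 0–10 scale, and the score is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch (the function name and sample counts are illustrative):

```python
def nps(promoters: int, passives: int, detractors: int) -> int:
    """Net Promoter Score: % promoters (scores 9-10) minus
    % detractors (scores 0-6); passives (7-8) dilute both."""
    total = promoters + passives + detractors
    if total == 0:
        return 0
    return round(100 * promoters / total) - round(100 * detractors / total)

# Example: 100 survey responses - 60 promoters, 25 passives, 15 detractors
print(nps(60, 25, 15))  # → 45
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters).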

User behavior also provides insight into quality. Among survey respondents, 36.9% track the number of customer support tickets while 32.5% look at increases in activity such as purchases or logins as a sign of success. Another 28% consider engagement rates on websites/apps, and 13.5% cite adoption rates as a metric their organizations monitor around quality.

Feedback from outside the organization offers intelligence as well, including app store ratings (23.6%) and reviews on third-party sites (15%).

Metrics that focus on business value and results also play a role in evaluating quality. Respondents cited the following business KPIs as factors in how they assess overall quality:

  • Revenue: 29.6%
  • Attach rates/upsell/cross-sell: 12.8%
  • Customer retention rates: 25.4%
  • Time to market: 16.7%

Overall, most organizations are using a variety of methods and metrics to evaluate digital quality. Balancing quantitative data with qualitative insights allows teams to make informed decisions about software quality. Analysis of trends over time and expert judgment about which metrics to prioritize are also key.

The survey mentioned throughout this blog post is part of Applause’s ongoing exploration of the State of Digital Quality. Watch for the full report in the second half of the year.


Published: March 8, 2024
