The Path to Functional Testing Excellence
In the webinar “Getting to Functional Testing Excellence, A Proven Path,” Applause experts shared findings from the 2024 State of Digital Quality report and ways for organizations to advance their functional testing programs. The webinar outlined our functional testing framework, walked through trends in functional testing across different industries and use cases, and closed with ways to build the business case for digital quality investments. Read on for some highlights from the session, or watch the recording.
David Carty, Senior Content Manager at Applause, facilitated a conversation with Adonis Celestine, Senior Director, Automation Practice Lead, and Alexander Waldmann, Manager, International Solutions Consulting.
Key Findings from the State of Digital Quality Report
- Measuring success is still a challenge. Most organizations rely on a variety of metrics and KPIs to try to understand where they are in their digital quality journey. But despite relying on multiple KPIs, many teams still have an incomplete picture of how well they’re doing. Fewer than one-third of survey respondents reported that their organization has comprehensive documentation for their test cases and test plans.
- Most teams see the fundamental value of clearly defined test methodologies, accurate documentation, and testing and feedback throughout the SDLC. Yet these strategic imperatives are often sacrificed in favor of speed.
- Despite being a priority, accessibility resources are still lacking. In our 2024 accessibility and inclusive design survey, 42% of respondents rated accessibility as a top priority for their organization. However, 29% do not involve people with disabilities directly in the design or testing process. Furthermore, 44% of respondents say that they have limited or no in-house resources for accessibility testing.
- High-quality user experiences make or break generative AI. While Gen AI holds the potential for hyper-personalization, the best way for organizations to make customers feel recognized is to focus on UX that accounts for a variety of preferences. Our survey found that 27% of Gen AI users have swapped one service for another due to UX and performance issues.
How do you measure digital quality?
Applause surveys found that most organizations focus on metrics such as customer sentiment and satisfaction, test coverage and the number of defects reaching production. Adonis pointed out that developers, testers and end users have different perspectives on what defines quality. He said he often sees organizations chasing vanity metrics that don’t add any clear value, like the number of test cases, number of test cases automated, or the number of bugs found.
“If you look at these numbers, none of these give a clear answer to what is the quality of the product,” Adonis said. “So the actual answer of what is the quality can only come from the consumer of the product.” He recommends that teams focus on capturing feedback from users to help them assess quality and whether or not a product or feature is successful.
Alex talked about the importance of the “why” in testing. In his role in pre-sales, he has frequent conversations with customers and prospects looking for a certain type of testing. But all too often, the people he’s speaking with can’t explain why they have a specific ask. Ultimately, they’re asking for something that doesn’t get them where they want to be.
“We always ask for the why, what they are trying to achieve, and the effect they’re looking for,” Alex said. “Then we’re turning it into these business metrics, saying, OK, you want to be releasing faster, that’s cool. That’s something an engineering person might care about. But how does the business care? Do you really have a benefit if you ship things every three days instead of every four? Is that a material impact to your base? That’s the questioning we have to do.”
Testing and collecting feedback earlier in the SDLC
The State of Digital Quality Report found that most teams are still testing and automating at dev and test stages rather than making quality a priority throughout the SDLC. Adonis pointed out that many people talk about shift left, having a proper quality architecture, test pyramids, doing more tests on unit level, and so on. But in reality, for many teams, testing is purely reactive and done much later in the life cycle.
“We need to still shift left a lot. A lot of organizations are not able to do it because they don’t have a proper quality strategy. Testing and feedback should start right from the ideation process,” Adonis said. He then shared a list of questions organizations should consider at different stages.
- Planning/ideation: Is the idea correct? What are we building? Why are we building it? Who is my end user?
- Defining requirements: What constitutes a requirement? Are proper acceptance criteria tied to it?
- Development process: For test-driven development, do we have the test cases in place? Can we automate a few so that as soon as you develop the code, you can execute and see if the quality is at the right level?
- Production: What are you monitoring, and how?
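The development-stage questions above, about having test cases in place before the code exists, can be sketched with a minimal test-first example. This is an illustration only; the function and values here are hypothetical and not taken from the webinar.

```python
# Test-first sketch: the test below is written before (and drives) the
# implementation. The names and numbers are hypothetical examples.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # These assertions existed first; the code was written to pass them.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(59.99, 0) == 59.99

test_apply_discount()
```

Once tests like these run in a CI/CD pipeline, every commit gets an immediate quality signal rather than waiting for a later test phase.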
Alex emphasized the importance of making sure the right people are involved in developing acceptance criteria and determining how best to test. “During planning, defining requirements, and also the design phase, you need QA professionals to help you understand what could go wrong. It could be something as simple as someone clicking on a button twice. And you definitely don’t want a second car to be ordered online because that’s very expensive.”
“That’s often a discrepancy we see, where the management says, we need to ship faster and we are OK to break some things,” Alex said. But then they don’t provide clear context or make conscious decisions describing what they’re willing to let break.
Alex also described how AI is changing the static code review process for some teams. “Tools like Microsoft Copilot warn you as you write code that honestly, that might be a bad idea. You might not want to write code like this, and then you can fix it right at that moment,” he said. It’s more efficient than developers getting feedback from QA several days later, when they’ve lost the context of what they just built and must find their way back into the code.
The Functional Testing Framework
Applause has developed a functional testing framework that provides a path to excellent digital quality. We created this plan based on our experience working with many different clients globally. There are four levels:
- Emergence: A lack of formal systems, processes and documentation; the organization has no consistent methodology or approach to quality.
- Essentials: The early stages of defining and documenting processes and procedures; the business is establishing some consistency and structure around test efforts.
- Expansion: Clear processes, some reporting and a variety of testing types are in place; the focus is on coverage, scalability and efficiency across the organization.
- Excellence: Quality is embedded across the organization; testing and feedback occur throughout the SDLC, and quality is built into all products and experiences from end to end.
Functional Testing Emergence
At the emergence stage, teams typically have inconsistency around test cases and environments. There’s a lot of reliance on dogfooding and some exploratory testing, but the goals aren’t clearly defined. At this stage, teams are generally being more reactive than proactive in terms of quality.
Alex explained that when companies are testing at the emergence level, they often overlook different steps in the process or only follow the happy path. “They probably either don’t have test cases or what they call test cases have nothing to do with test cases. It’s more like scenarios: at a high level, place an order. Does it work?”
These teams don’t put thought into the customer journey, and the steps a customer needs to take, Alex said. “It’s all material impact to the customer satisfaction, which was the top metric in the previous slides. But here, we’re just looking at mechanically testing some things and checking a box that we tested this. It’s good to go.”
Adonis added that exploratory testing is often the starting point for teams to realize that they need a more advanced quality strategy. For example, in the retail sector, someone may have a Shopify website and sell into a few different markets. When they try to expand, they struggle because issues are reaching customers. When they realize customers are leaving partway through the checkout process, they need to find a solution to chase down the bugs. “That’s when this emergence layer comes in, where we start with exploratory testing, trying to stop the bugs first before we define a proper quality process.”
Functional Testing Essentials
At the essentials stage, organizations are establishing some consistency. They’re laying the groundwork for a more comprehensive QA approach. Some shift left and pre-production testing are happening, along with some test automation. Part of consistency includes clearly written test cases and a defined device coverage matrix.
Alex expanded on the importance of having good test cases and the right test coverage. “Does the developer that writes the code also dictate what’s being tested for? You can disable a QA team by telling them to test only the things that you built… that will not reflect the actual end user behavior,” he said.
In many cases, Alex said, customers at this stage may have test cases, but “they’re not portable. They’re not in a format that makes it easy to be executed and document that the tests were run and what the results were. If you have an Excel sheet, how do you track the quality of build over build over time, for example?” He sees test case management as an essential bridge to reach the next level of quality.
Functional Testing Expansion
Here, teams have a firmer grasp of QA practices and strategy. They’re consistently leveraging automation and measuring digital quality KPIs.
“This is a place where QA gets a bit more formal,” said Adonis. “You have written down test cases. You know your strategy. You know what you want to test. And that’s the first step towards doing proactive quality assurance. You can do quality assurance within the sprint as and when feature development happens. Of course, this relies heavily on automation and doing things early, embedding things into your CI/CD pipeline, having your DevOps way of working clearly defined. All these things will help you to really shift left. And we see a lot of customers who are in the digital space who have been doing DevOps for four or five years now. They are into this expansion space, where they have a baseline control of their quality, and they want to move on to the next level.”
“Quite often, what I see with a customer is they want to start at the emergence level, right at the beginning, and they immediately want to jump with automation and AI. I always advise them, quality assurance is like a journey. You need to do each of those steps perfectly first before you can move on to the next level. So this is a good place, where customers are really thinking about speed versus quality, and how do we balance it.” – Adonis Celestine
Functional Testing Excellence
The excellence stage describes a truly shared approach to digital quality across different functions and teams. This stage includes testing throughout the SDLC. The customer voice is present and prioritized. In addition, teams here are striking a balance between manual, exploratory and automated testing, applying each where it fits best. They’re getting sophisticated with reporting and taking action on those findings.
Adonis said that he sees a lot of big tech companies at that excellence level. “That’s the ultimate. When I say it’s an ultimate, it’s not the end. It’s a journey. If you look at these high-tech companies who are at this level, they have a better control of quality. They know what to expect. They move on from proactive quality assurance into even predictive quality assurance. That’s when they know what kind of bugs they can expect in a release.”
At this stage, the conversation focuses on where to put the next investment, Alex said. “What kind of dollar spend gets you a higher quality of the product at the end and a higher customer satisfaction? What are the bugs still escaping? When are they escaping? Is that now maybe market specific?”
Building the Business Case for Quality
The key, according to Alex, is whether teams can effectively make the case for more investment, whether that’s more automation, more QA headcount, or involving people with disabilities to offer an accessibility lens from a UX perspective. “Anyone involved in these conversations needs to talk the business language and make sure they actually think beforehand how this ties to executive priorities. We want to put further money into this level and get to 102%. What does it bring the company, more customers, more revenue, cost avoidance? These ROI dimensions are really critical here.”
Adonis shared that the C-level leaders he works with typically share three objectives: they want to reduce time to market while improving quality and the user experience, and they want to reduce cost. Any business case for investment in quality needs to center on those goals.
David walked through two different approaches to demonstrating ROI:
- Cost reductions such as boosting internal efficiency, finding defects earlier where they are less costly, and reducing strain on call centers or customer support
- Revenue gains through innovating faster, minimizing friction, securing revenue channels, and reducing customer churn
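The two ROI approaches above reduce to simple arithmetic: add up the cost reductions and revenue gains, subtract the investment, and divide by the investment. The sketch below shows the shape of that calculation; every figure in it is a hypothetical placeholder, not data from the report or webinar.

```python
# Back-of-the-envelope ROI sketch for a quality investment.
# All numbers are hypothetical placeholders for illustration.

def quality_roi(investment, defects_prevented, cost_per_escaped_defect,
                support_savings, retained_revenue):
    """Return (net benefit, ROI ratio) for a quality investment."""
    benefit = (defects_prevented * cost_per_escaped_defect  # cost reduction
               + support_savings                            # cost reduction
               + retained_revenue)                          # revenue gain
    net = benefit - investment
    return net, net / investment

net, roi = quality_roi(
    investment=250_000,             # annual spend on the testing program
    defects_prevented=120,          # bugs caught before production
    cost_per_escaped_defect=1_500,  # avg. cost of a defect found in prod
    support_savings=60_000,         # fewer support/call-center tickets
    retained_revenue=90_000,        # churn avoided
)
print(f"net benefit ${net:,}, ROI {roi:.0%}")  # net benefit $80,000, ROI 32%
```

Framing the numbers this way maps directly onto the two approaches above: the first three inputs are cost reductions, the last is a revenue gain, and the output is an ROI ratio a CFO can interrogate.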
In terms of speeding new product and market launches, Alex pointed out that competition has intensified recently. “Especially with AI-powered development now, a single person can whip out a product, an app that rivals the feature set you have. Then maybe they have something cooler. And suddenly, you’re competing with someone: your product launch cannot have any flaws in it.”
In organizations that invest in significant marketing campaigns for new launches, QA can use that to build the business case. “They want to be sure if they spend millions on announcing this new app, launching in this market, that it’s actually working,” Alex said.
With the right data, Applause staff can help calculate ROI figures that will stand up to scrutiny from a CFO. Based on your organization’s priorities and some research, we can work together to build the business case that works for you. Contact us today.