Why Manual Testing Has a Place in Your Regression Test Suite
Automation goes a long way, but it isn’t the best fit for every regression-testing use case
While automated regression testing is helpful, some parts of your system might be impossible to automate. One obvious example is CAPTCHA, the challenge-and-response test used across the web to tell human and computer visitors apart. It is specifically engineered not to be bypassed by automation tools.
Some aspects of automated regression testing are just as tricky and time-consuming. Automation assumes predictable, repeatable test scenarios, but regression testing by definition exercises a system that is constantly changing.
Automated tests of all kinds are, essentially, another code base that requires a significant time commitment to maintain, especially on new projects where features change often. In a fluid development environment, tests easily break or fail for the wrong reasons. This can eat up valuable time for the development team, which has to keep validating and fixing the tests. Attempting to automate too much usually results in test suites that fall behind development, become obsolete and stop delivering value. Bottom line: overdoing test automation can result in a poor ROI.
Automation simply cannot replicate the ease with which humans can make subjective comparisons and think creatively, driven by curiosity and countless external conditions. For example, verifying that the correct text is displayed when a button is clicked is straightforward — this is the kind of regression test that makes sense to automate and should deliver great ROI — but how would we discover that the text now appears in the wrong position if the user mistakenly clicks and drags the button? Or what if the outline of the button now looks "a bit strange" when the page is zoomed in a browser?
Today, we have a vast array of choices, from Android to iOS, from Windows to macOS to Linux, on displays ranging from 3” to 60”, and we have to support all of these choices. While many aspects of the testing can be automated, automating the regression tests for all of these platforms and devices can be time-consuming to manage, and prone to missing the “alternative path” where users do not take the journey that the product and engineering teams intended. It's not just the effort of setting up and running tests for hundreds of potential device configurations — it's setting up, running and maintaining those tests for every build on every device.
There is value in automating many parts of regression testing, but when teams automate their full regression testing suite, they end up chasing their tail and become inefficient.
When to Leverage Manual Regression Testing
With a team of experienced manual testers working in parallel, you can receive real-world feedback on both the happy path and the alternative path, where users do not operate as the product manager and engineer originally intended. This kind of exploratory testing is only possible with manual testing — writing test scripts for an unknown alternative path is inherently impossible. Of course, after an alternative path is discovered to have caused a bug, you could then write test automation scripts to cover that scenario going forward.
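As a sketch of that last step, suppose a manual tester discovers that pasting a quantity with surrounding whitespace — an alternative path nobody scripted — breaks an order form. Once found, the scenario can be locked in as an automated regression test so it stays covered in every future build. All names here are hypothetical, for illustration only:

```python
# Hypothetical order-form helper. A manual tester found it crashed when
# a user pasted "  3 " (quantity with surrounding whitespace).
def parse_quantity(raw: str) -> int:
    """Parse a user-entered quantity; fixed to tolerate pasted whitespace."""
    cleaned = raw.strip()  # the fix for the discovered alternative path
    if not cleaned.isdigit():
        raise ValueError(f"invalid quantity: {raw!r}")
    return int(cleaned)

# Regression tests written *after* the bug was found, so this alternative
# path is exercised automatically from now on.
def test_quantity_tolerates_pasted_whitespace():
    assert parse_quantity("  3 ") == 3

def test_quantity_rejects_non_numeric_input():
    try:
        parse_quantity("three")
    except ValueError:
        pass  # expected: invalid input is rejected, not silently accepted
    else:
        raise AssertionError("expected ValueError")
```

The point is the workflow, not the code: exploration finds the bug, automation guards against its return.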
What does a manual tester have to offer that test automation doesn't? To answer this, let’s look closer at some specific test types.
Both web and mobile apps must look and work well across a wide range of devices. Your app’s functionality could break or degrade in subtly different ways on different devices, and it’s difficult to write automated tests for all of them. Devices are becoming more diverse, especially in the Android market. For example: if an update to your app makes the text overflow the screen of Realme C2 users in India, your automated regression test suite probably won’t notice — but a manual tester would see it immediately.
Manual testers can provide subjective descriptions of quality regressions that automated tests won’t catch. An automated test won’t tell you that the new release of your app feels slower or scrolls more choppily on a specific device. But a manual tester will give detailed feedback on what went wrong.
And as mentioned, you can only write automated regression tests for regressions you’ve anticipated. If a change breaks your application in a way you’ve never seen or even thought of, automated tests won’t catch it — but a manual tester can spot parts of your app that look broken, even if it’s not on the official test plan.
Exploratory testing is an effective way of finding bugs in an application's business logic. Business logic anticipates the user requirements for the system, but it's inherently biased by internal expectations. Business logic tends to define unit tests — and these will only catch what the developer has anticipated. A human tester can tease out subtle bugs through novel scenarios and unanticipated use cases, perhaps even revealing unseen user requirements and business opportunities.
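A minimal sketch of that bias, using an entirely hypothetical discount rule ("10% off orders over $100"): the unit tests encode exactly what the spec anticipated, while an exploratory tester probes the cases the spec never pinned down.

```python
# Hypothetical business rule: 10% off orders over $100.
def discounted_total(subtotal: float) -> float:
    if subtotal > 100:
        return round(subtotal * 0.9, 2)
    return subtotal

# Anticipated cases — the unit tests the rule itself suggests:
assert discounted_total(50) == 50       # below threshold, no discount
assert discounted_total(200) == 180.0   # above threshold, 10% off

# What an exploratory tester might try instead:
# exactly $100 — a boundary the spec never decided ("over" vs "or more"),
# and a refund (negative subtotal) — a path nobody scripted at all.
assert discounted_total(100) == 100     # no discount; is that intended?
assert discounted_total(-20) == -20     # rule silently skips refunds
```

Both "surprising" assertions pass, which is precisely the problem: the code is self-consistent, and only a human questioning the requirements would notice that the behavior may not match what the business actually meant.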
Optimal Test Coverage is a Blend
We’re living in an era where high-quality software is no longer optional. Preventing defects is especially important – users won’t always notice when your application improves, but they definitely notice when it breaks.
You need to use an approach to regression testing that will identify defects along all paths that users could take, including the ones you can’t anticipate. Your approach to regression testing should be whatever gives you the most confidence that relevant scenarios are covered, and provides useful feedback quickly. If you take one lesson away from this ebook, it’s that automated regression testing is extremely valuable, but it’s not a golden hammer — there are scenarios where manual regression testing is the better path.
Today, the path to this optimal test coverage is through a blend of manual and automated testing. The trick is to have the option to automate where it's efficient and to take advantage of a human-driven process where it's useful. Manual regression testing is key to delivering good software – it ensures that applications function correctly, and catches the regressions and bugs that automated tests miss.