Touring vs. Exploring: The Value of Exploratory Testing

John Kotzian

Think of your manual testing strategy like visiting a new country

One of the traits I like to see in a software tester is insatiable curiosity. Testers who routinely ask, “What happens if I do this?” — rather than simply checking for the expected outcome of a written test case — are setting themselves up for success. These are the testers who routinely log hard-to-find and critical bugs.

Whereas test case execution adheres to predetermined, documented paths with expected outcomes, exploring an application under test is an interactive process: the tester actively learns what the app does and, more importantly, discovers what it should not do.

Here’s an example of the differences between test case execution — or ‘touring’ an app — and exploratory testing.

Touring vs Exploring

Scenario 1

Imagine for a moment that you want to visit a country for the first time and understand its culture. Any country will do for the purposes of this exercise. You go to a travel agency and book a tour. The tour consists of a pre-planned itinerary of the country’s sights and landmarks.

You know what you’re expecting to see before you see it, as it’s on the itinerary, and each site you visit on this tour has a time limit — any deviation from the timeline could prevent you from seeing them all. In this scenario, you never have a meaningful interaction with a native of the country, or get to experience the country the way someone who lives there does.

Scenario 2

Now imagine you travel to this country with no plans other than where you plan to rest your head at night. You check into your hotel; you feel a bit peckish, so you head out for dinner. As you walk down the street, you see the many options the town has to offer: a pub, a bakery, a fancy bistro and a small local restaurant among them. Of the many options, you decide to check out the small local restaurant. When you walk in, the owners happily greet you and provide you with a table and menu to order from.

You tell the server you are new to town and would like to try some local cuisine. You ask for recommendations and place an order for an appetizer and an entrée. When the food arrives, the server mentions that the town is having its annual festival tomorrow, which you may enjoy. You taste the appetizer and decide it’s not something you like, but the entrée is sublime. You think to yourself that you’d like to check out the festival tomorrow, but tonight you’ll sample the wares over at the pub you saw.

Learning vs. Following

The previous scenarios offer an example of the difference between touring and exploring. In the first scenario, taking the tour will show you the sights and you may learn something, but it is a tailored experience. You will learn what the guides want you to learn. You are following a detailed plan. From a software testing perspective, this is akin to executing a test script.

In the second scenario, you are actively interacting with the people, places and experiences the country has to offer — and potentially changing your intended actions based on those interactions. You are learning about the country’s culture by engaging with it. This is more similar to exploratory testing.

Putting the two side by side, you clearly learn more, and get a better feel for day-to-day life in the country, when you explore rather than take a guided tour.

The Human Equation

When I explore an application, I like to think like a human. Yes, I know that sounds funny, but humans do not always follow the path they are given and that’s when real problems can crop up. Here’s an example of a bug I found where the development team put their trust in the human equation to do its job.

I was asked to test a phone application for a national sports league. I received a suite of test cases to follow. Most of these test cases passed and some failed — if I had solely been executing a test case, my job as a tester would’ve likely ended there.

However, one area really piqued my interest, and I wanted to explore and test a bit further than the test case allowed for. This area of the application gave the user up-to-date player statistics. I called the developer and asked where the data for this section comes from and what would happen to the application if some data was missing.

The developer told me not to worry about it — the data comes in from the league itself in a flat file and is simply imported to the application to be displayed. I continued to press the question, however, asking how the application handles the potential for missing data. They didn’t really have an answer since it had never happened before. I replied that they were basically risking their entire user experience on the hope that a human in some league office wasn’t having a bad day and wouldn’t make a mistake.

After a few conversations along these lines, he finally relented and we set up a test to systematically run through the inputs for each field with a variety of data values, including missing data. The first 10 or so runs went smoothly; the application handled the inputs as expected. When we ran the test with the missing data, things got very quiet on his end of the call. After a few long seconds, I heard him say, “Uh oh.”

It appeared that the application did not handle the missing data very well at all. In fact, not only did the application freeze, the entire phone froze and had to be rebooted in order to recover from the error. I drove home my point by asking him for analytics on how many of the application’s users had this phone model. He replied that the number was in the millions. This is when I asked him if he still had trust in the human on the other end of that data file. I am happy to say that his answer was “no” and the application was eventually updated to validate the incoming data and handle errors gracefully.
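The fix described above, validating the incoming data and handling errors gracefully, can be sketched in a few lines. The article doesn’t specify the real file format or field names, so the ones below (`player_id`, `name`, `points`) are hypothetical; the point is that malformed rows are collected and reported rather than allowed to crash the app:

```python
import csv
import io

# Hypothetical required fields for a player-statistics record; the real
# league feed format is not described in the article.
REQUIRED_FIELDS = ("player_id", "name", "points")

def parse_stats(raw_text):
    """Parse a flat stats file, skipping malformed rows instead of crashing.

    Returns (valid_rows, errors) so the caller can display what it can
    and log what it can't: graceful degradation rather than a freeze.
    """
    valid, errors = [], []
    reader = csv.DictReader(io.StringIO(raw_text))
    for line_no, row in enumerate(reader, start=2):  # header is line 1
        missing = [f for f in REQUIRED_FIELDS if not (row.get(f) or "").strip()]
        if missing:
            errors.append(f"line {line_no}: missing {', '.join(missing)}")
            continue
        try:
            row["points"] = int(row["points"])
        except ValueError:
            errors.append(f"line {line_no}: non-numeric points {row['points']!r}")
            continue
        valid.append(row)
    return valid, errors

# A feed with a blank name on one row and a garbled points value on another:
feed = "player_id,name,points\n7,Jo Smith,31\n8,,12\n9,Al Cruz,n/a\n"
rows, problems = parse_stats(feed)
print(len(rows), len(problems))  # -> 1 2
```

The same set of malformed feeds also doubles as a regression suite: running each one through the parser and asserting that it returns errors instead of raising is exactly the kind of systematic missing-data test described in the story.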

This story shows how pushing beyond a test script, with an engaged tester who thinks critically about the app and is willing to explore the “what ifs,” can uncover critical issues.

Conducting exploratory testing at scale

I would argue that most organizations should include exploratory testing as part of their holistic test strategy. But doing this at scale can be time-consuming and inefficient when an organization relies solely on internal resources.

Part of the fun of working for Applause is that we act as a supplement to development organizations. Our testing teams, sourced from our global community, are trained to explore apps and websites to uncover the hard-to-find, significant bugs that test cases don’t cover. Some of our customers provide guidance on specific areas to explore, which narrows our focus while still giving testers the freedom to go down unscripted paths. When a team of a dozen testers executes scripted test cases, conducts guided exploratory testing, and still can’t find issues, that’s when you can be confident your app or website is ready for the real world.

As you review your testing strategy, think of it this way: are your testers following a tour guide, or exploring on their own?
