Testing in Two Days
About This Episode
For software testers, it’s always crunch time. Striking the balance between effective and expedient testing isn’t always easy, but it’s necessary in our increasingly fast-moving digital world. Speed must never come at the expense of digital quality. Cutting back on quality always comes with consequences that directly affect the business and the bottom line.
Amy Reichert, freelance QA SME, tutor and writer, has made a career of testing efficiently and effectively. She’s felt the crunch, and she’s seen fellow testers thrive or collapse under the pressure.
Amy joins the Ready, Test, Go. podcast to discuss how to fit all of your testing into increasingly tight windows. She also discusses how to best work with developers to create a shared quality-first culture that benefits organizations well into the future.
Amy Reichert is a freelance QA engineer, SME, tutor and consultant. She has nearly two decades of experience in software testing as an analyst, team lead, engineer and tutor. Amy is also a freelance technical writer, including for Applause, where she writes about software testing topics.
(This transcript has been edited for brevity.)
David Carty: Over two decades in software testing, Amy Reichert has trained the best of them.
Amy Reichert: Well, I guess, with perseverance and a calm and assertive attitude.
Carty: She knows how to tap into their strengths.
Reichert: Well, you have to keep in mind, when you're training, you can't outmuscle one of them. So regardless of who you are, you're not going to force one to do anything.
Carty: She even knows the power of motivation, and snacks can go a long way.
Reichert: Sliced apples, carrots. They make cookies and treats.
Carty: Oh, right. We're talking about horses. Amy Reichert also trains horses. Probably should've led with that.
At any rate, just like managing software testers, horses have different personalities and require different ways of communicating.
Reichert: One broke just about every piece of equipment I had just to go in a simple circle to the left, because he was so right-handed. He refused to go to the left. He broke everything I owned, from the bridle to the reins to the harness, everything. Just because he didn't want to go left.
You have to be clever, not necessarily manipulative, but clever in communicating and encouraging them to do what you want them to do, without having to force them.
Carty: Amy's training was in dressage, which involves very specific motions and movements. Perhaps that's not too dissimilar from a software launch. A simple, yet coordinated balance of tasks falling right into place, just so.
Reichert: Everything is very focused on finesse. You don't see the cues. You don't really see the rider doing anything. You don't see all the work they're putting into it. You just see the reaction of the horse and the steps that they take.
So, [that’s] similar to testing or a release. You don't see the actions of the testers. You see a little bit of the developers' actions, but mostly you don't see all the background work that goes into it, like support and testing.
Carty: This is the Ready, Test, Go. podcast brought to you by Applause. I'm David Carty.
Today's guest is horse wrangler and software testing expert Amy Reichert. Over the years, Amy has done it all, working as an analyst, team lead, engineer and tutor. Amy prides herself on performing thorough testing in half of the allotted time. Now she is a QA tutor and consultant for DevMountain and a writer, teaching the next generation of testers how to get their work done more efficiently. Let's talk with Amy.
Today, we're talking about having effective two-day testing periods. But, first, I'd like to get an idea of the industry trend. So, what has that been like in terms of how much testing time an organization has and how dramatically has that been reduced over the years?
Reichert: When I first started, like 20 some years ago, we were more in a Waterfall methodology, so you would do a much larger release. So there was much more planning, much more documentation, the coding period and then the testing period. So you'd get a month [or] 4 to 6 weeks to actually test an application through. And you'd be testing a lot more stuff, but you'd have enough time to cover it — usually.
Reichert: Over the years, though, as you moved into Agile methodology and then continuous deployment — even faster than Agile — you get less and less time. So with Agile, if you have any regression testing at all, it's usually much more condensed. It's gone to anywhere from five to ten days. And, to me, the most common amount of time I actually get is about two days, tops.
Carty: Gotcha. And you can't test everything in this time frame, right? That's for sure. So how should you begin to prioritize those tests to be able to make sure you're getting the most important things done?
Reichert: Well, I think it depends on the experience of your team. So if you have an experienced testing team, or at least maybe not so much experience, but familiarity with the application, both the back end and the front end — the more experience or understanding of the application the tester has [enables them to] go off script and basically use different techniques. So instead of testing script by script by script and prioritizing those scripts, you can prioritize the functional areas you want to test.
Carty: Because they know where the bodies are buried, right? They know where the problems are with the product. So they can begin to kind of probe in ways that a less-experienced tester might not be able to jump into and try out.
Reichert: Yes, exactly. The more familiar [they are] with where the fragile parts of the code are, which parts of the code customers use the most, and where those fragile gaps are between connections or integrated pieces of the application.
Carty: And you mentioned the customer component, the user component as well. You can prioritize according to customer feedback. So, how do you know what sorts of defects or areas might take precedence there?
Reichert: Well, I would usually like to talk to a support person who's familiar with the app, who's taking in the calls from customers and find out just how annoying certain sections of the application are. Where do they find the most defects? What are the most critical defects they find? And make sure you always look for those so they don't repeat. No one wants to experience it twice.
Carty: Let's talk about running some tests parallel with development. Now, this often means that a developer is running unit tests, which they might want to do or might not want to do. So how can you work to build trust between developers and testers to get them talking and collaborating toward that same end goal and help the testing team really deal with that tight time crunch that we're talking about?
Reichert: I usually start by directly communicating with the developers. So finding out how they want to be contacted is a first step because some people do not like to be surprised. When you work in the office, you don't want to just walk over and show up. Do they prefer you to IM them, send them a text message, send them an email or schedule time with them? Whatever works for them so it doesn't interrupt their train of thought when they're coding.
Carty: So, you're saying don't jump up from behind the computer screen and yell, ‘Surprise!’
Reichert: No. No, they don't like that.
But if you contact them and you show interest. So, for example, when we're testing in a short timeframe and I think I see an issue, I'll go in and do some analysis first. Then I go to them and say, ‘Hey, is this really an issue, or is this something in the setup of the test server? Can you show me how it works in the back end?’ And I think the more open you are to learning — but doing some of the work first — the better response you get.
Carty: And how do you deal with it if you're not getting a lot of receptiveness back from the developer?
Reichert: Well, often there's usually more than one. So I'll go to another developer and see if I can get further. Or I'll circle back and say, ‘Okay, give me some advice on what I can troubleshoot on my own,’ and see if that helps them. Because you'll get some developers who feel like, if you ask them a question, then they're doing the testing for you, which isn't really true. But I can do more research and then come back, and I will do that if I can. But, if not, I find another developer.
Carty: Right. Might as well cast a wide net.
Carty: Great. So you're already rushing to get all these tests done in 48 hours. How does exploratory testing fit into that? Like we talked about, I'm sure if you're an experienced tester, you might have some particular areas that you might aim for based on your past experience with the product. But does it vary from one testing period to the next, or do you try to fit some of that in to run parallel with automated tests? What's the best way to approach that?
Reichert: What I really like to do is use exploratory testing in conjunction with automated smoke tests or automated regression tests, if those exist. So when the developers are testing — I mean, we usually start by kicking off the automation, but we don't wait for it to end. We just go ahead and jump in with the exploratory testing, because automated tests will fail sometimes for no particular reason, and it sucks down a lot of time to figure out why they failed, whether it's really a script issue or an actual defect. So what we'll do is we'll kick them off, and then we will go ahead and start our exploratory testing. And that kind of helps you save time, because then I can use the exploratory tests to get a lot more coverage, if that makes sense. I can cover the UI, the back end and the front end, all of it, without interrupting my flow.
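The workflow Amy describes — start the automation, then explore without waiting on it — can be sketched in a few lines of Python. This is a minimal illustration, not her team's actual tooling: the suite command below is a hypothetical placeholder that just prints a summary, standing in for a real runner invocation such as pytest or a Playwright suite.

```python
import subprocess
import sys

def kick_off_suite(command):
    """Launch the automated suite without blocking, so exploratory
    testing can start immediately instead of waiting on automation."""
    return subprocess.Popen(
        command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    )

# Hypothetical placeholder command; swap in your real test runner.
suite = kick_off_suite([sys.executable, "-c", "print('42 passed, 0 failed')"])

# ... exploratory testing happens here, in parallel with the suite ...

# Circle back for results once the manual session wraps up.
output, _ = suite.communicate()
summary = output.decode().strip()
print(summary)
```

The point of the design is simply that `Popen` returns immediately, so the human session and the automated run overlap instead of queuing behind each other.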
Carty: Right. And part of what we're talking about is the logistics issue where you just have so many tests to get through. But then you're also dealing with people, right? So, 48 hours, you hope everybody is up to the task. You hope everybody's ready to go. But, realistically, you could run into a motivation issue or something similar to that [which] can really drag down productivity at a really tough time. So is there anything that you can do to incentivize or help push testers through the process, through to the finish line if they're running into difficulties?
Reichert: I kind of try to make a game out of it. Not a game out of defects, necessarily, but: who can find the biggest defect? Who can find the most critical defect in the program? Or who can actually take down the system, make it hang, make the connections fail? Whatever it is. Especially [with] security testing. That's always fun. Who can break into it, take it down?
So we kind of make a game out of it. It gives people some motivation, because I can't usually offer them more money. And that's not always motivating [anyway]. Some people are motivated, again, like horses, with food, encouragement, vacations, whatever you want. But that's not always realistic. So I try to make a game out of it, and when people find really good defects, then at least internally to the testing team — or the software development team, if we're Agile — we celebrate that.
Carty: So, kind of like a bug hunt.
Reichert: Yeah, like a bug hunt. But we don't call it that because it offends developers. We call it a game or a challenge.
Carty: Right. Gotcha. So let's say we're hitting the stretch run, right? Final couple hours in the testing period and there is still so much to test. How do you hit the gas pedal and make sure you're getting to all that important stuff that's left before you get to launch?
Reichert: I think you just keep going. It takes some perseverance, staying calm and not getting sucked into the chaos — just steady on testing. Use your exploratory tests, have someone check on the automation. We just keep soldiering on, I guess you could call it.
Carty: Let's say you do have a hair-on-fire testing emergency, something that is definitely causing a problem, something out of the norm. How do you handle that? Is it possible to ask for more time? Under what circumstances do you ask for more time? How do you maneuver that situation?
Reichert: Well, when you find a problem like that, where everyone's running around in circles with their hair on fire, you usually try to stop. Instead of having a developer merely jump in and fix it, we actually sit down as a team and discuss it, at least for a few minutes — make sure we're not making any obvious mistakes that we don't see right away that will cause more defects, and then we won't be able to release.
So depending on how long it takes to fix and test, you could ask for a day or half a day. Whether you get it usually depends on how bad the defect is. But if there's at least one of those hair-on-fire defects, then usually you can push for at least a few more hours to make sure it really is fixed and doesn't cause any additional repercussions.
Carty: Right. So it might even sound counterintuitive, but in order to help get the job done, sometimes you have to slow down to make sure you're not making the situation worse.
Reichert: Exactly. It's very important to slow down and talk it over before you make changes, because a fix based only on the developer's experience or the tester's experience may not account for the full implications of [the defect]. Make sure that's well thought out before we fix it.
Carty: Right. And I know you mentioned perseverance before, but if you could pick one key characteristic that a tester really needs to help deal with the pressure of these situations, because it can be pressure-packed sometimes, what is that characteristic that you would pick?
Reichert: I think the ability to remain calm. Not asleep, but calm and be able to keep chaos at bay. So, you can be calm and still working hard and doing your thing. Calm and focused, but you're not responding to the chaos or the hair-on-fire, and you're not going to panic or stress out.
Carty: And let me ask a devil's advocate question, if you don't mind, Amy. We say that we're getting used to two-day testing periods. If it seems like [defects] are getting through that shouldn't be, or that testing is insufficient, how do you go about trying to find more time for future testing periods? How do you circle the wagons in order to get what you need to properly test the app and make sure things are ready for launch in the future?
Reichert: Well, I'm a big proponent of continuous testing, so continuously regression testing. Say we don't get all the tests done. Then when we're starting our next sprint or iteration, while we're waiting for code or stories to come across for testing, I'd like everyone to start regression testing. So let's finish all the tests we didn't finish and see if we find anything. Let's take them, create a test suite, and just start on it. Everybody picks tests as they can and runs them while we're going through development, so that you're always looking for defects, always looking for failures. You don't wait, because there may be times you don't get the two days. If you continuously test, there's less chance a defect slips through.
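The continuous-regression habit Amy describes — a shared pool of unfinished tests that anyone with idle time can draw from — can be sketched as a small data structure. This is an illustrative sketch only; the class name and test IDs below are hypothetical, not part of any real tool.

```python
class RegressionBacklog:
    """Pool of regression tests left unfinished from the last release.

    Testers draw from it during development downtime, so coverage keeps
    accruing even when the formal test window shrinks to two days."""

    def __init__(self, test_ids):
        self.pending = set(test_ids)
        self.done = set()

    def pick(self):
        """Hand out the next pending test, or None when the pool is empty."""
        return self.pending.pop() if self.pending else None

    def record(self, test_id):
        """Mark a test as executed so nobody repeats it this cycle."""
        self.done.add(test_id)

# Hypothetical test IDs; in practice these would map to scripts or suites.
backlog = RegressionBacklog(["login-regression", "checkout-flow", "report-export"])
while (test := backlog.pick()) is not None:
    backlog.record(test)  # in practice: run the test and log pass/fail

print(sorted(backlog.done))  # -> ['checkout-flow', 'login-regression', 'report-export']
```

A shared set (or its equivalent in a test-management tool) keeps testers from duplicating effort while the backlog steadily drains between sprints.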
Carty: And in what other ways are testers like horses?
Reichert: Oh, stubborn. [They] can be stubborn. You'll have testers who work really rapidly and can cover a lot of ground and do it productively, but they may miss some things. And then you'll have testers — or horses — who have to have every little thing in place. If they take a piece of functionality, they have to break it down to the minute level. So getting something tested takes a great deal of time, but it's actually more thoroughly done, if that makes sense. You have to balance between speed and thorough testing.
Carty: Okay, Amy, lightning round here. I'm going to ask you a few quick questions. First off, in one sentence, can you tell me what digital quality means to you?
Reichert: Digital quality means achieving exceptional customer experience with an application.
Carty: But what will digital experiences look like five years from now?
Reichert: Well, I think you'll see people using applications more easily, on mobile devices primarily, and then you'll have a lot of intersection with AI or machine learning and virtual reality. Those will all be wrapped together to produce data and more data analytics, but I think that will be helpful. You'll get digital apps that maybe solve problems for you, or help you solve problems.
Carty: What is your favorite app to use in your downtime?
Reichert: Actually, the app I like the most — and it's still kind of clunky, but it's gotten better — is the ESPN Fantasy Football app. It's very handy. And like I said, it's still a little clunky, could use some improvement, but it's gotten a lot better.
Carty: What is something that you are hopeful for?
Reichert: Oh, less tragedy, less war, or less distractions that don't need to be there.
Carty: I hear you on that. All right, Amy. Well, this has been fun. Thank you so much for joining us.
Reichert: Yeah, thank you for having me.
Carty: I'd like to once again thank our guest, Amy Reichert. You can read some of her work at Applause.com/blog. We'd also like to thank you for tuning in to our first episode, and there's lots more to come. If you'd like to reach out, please contact us at email@example.com. That's plural, firstname.lastname@example.org. Until next time.