
Getting Started on Automation with Applause

SauceCon, Sauce Labs’ annual user conference, starts today. The event brings together the global community of Sauce Labs users and automated testing experts to learn from each other and level up their automated testing and continuous delivery skills.

Applause partners with Sauce Labs to deliver automated testing to customers on Sauce Labs’ real device cloud. By combining the world’s largest crowdtesting community of functional testers and automation experts with the world’s largest device cloud, Applause enables customers to rapidly validate and optimize their mobile applications and web properties in a way that traditional approaches cannot match.

Given that I’ll be at SauceCon, I figured this would be the perfect opportunity to explain how Applause does automation in more detail. This is just part one of our series on automation, so make sure to be on the lookout for more on the topic in the coming weeks.

The Beginning of the Automation Journey…

A robust automation framework for mobile application testing starts with the selection of the device interaction library; in Applause’s case, that library is Appium. On top of it, you need to consider three broad components to get started on your journey: technical enhancements, procedural implementations, and finally “proofs” and reporting. This post gives you an overview of each. Later in the series, we’ll explore these key components in fine detail.

1. Key Technical Enhancements

Out of the box, Appium doesn’t cover all of the technical requirements for your automation framework. You’ll need to choose architectural patterns, such as the Page Object Model pattern, the Page Factory pattern, and a locator management pattern. These patterns keep the framework DRY (Don’t Repeat Yourself), readable, and maintainable. You must also consider device cloud integration, driver management, error handling, bug tracking integration, test data management, and other enhancements that increase stability and robustness.
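To make the Page Object Model pattern concrete, here is a minimal sketch in Python. The `LoginPage` class, its locators, and the stub driver are all hypothetical illustrations — a real framework would pass in an Appium WebDriver instead of the stub — but the structure is the point: locators live in one place, and tests call intent-level methods rather than touching elements directly.

```python
class StubDriver:
    """Stands in for an Appium driver so this sketch is self-contained."""
    def __init__(self):
        self.actions = []  # records every interaction for inspection

    def find_element(self, by, locator):
        self.actions.append((by, locator))
        return StubElement(self, locator)


class StubElement:
    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def send_keys(self, text):
        self.driver.actions.append(("send_keys", self.locator, text))

    def click(self):
        self.driver.actions.append(("click", self.locator))


class LoginPage:
    """Page object: locators are centralized, tests stay readable."""
    # Hypothetical locators for illustration only.
    USERNAME = ("accessibility id", "username_field")
    PASSWORD = ("accessibility id", "password_field")
    SUBMIT = ("accessibility id", "login_button")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


driver = StubDriver()
LoginPage(driver).log_in("qa_user", "s3cret")
```

If the login screen’s locators change, only `LoginPage` needs updating — every test that calls `log_in` stays untouched, which is exactly the maintainability the pattern buys you.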

2. Procedural Implementations

Once you have a solid framework that meets your organizational and business needs, you must consider how to execute the automated tests in an automatic fashion. This covers procedural and operational concerns, which generally include CI/CD integration, ROI analysis, test coverage analysis, and development hygiene such as coding standards and code review procedures.
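ROI analysis, one of the procedural concerns above, can be sketched as a back-of-the-envelope calculation. All figures and the formula below are illustrative assumptions, not Applause’s actual model: net savings is manual effort avoided minus the cost to build and maintain the automation.

```python
import math

def automation_roi(build_hours, maintain_hours_per_run,
                   manual_hours_per_run, runs, hourly_rate=75.0):
    """Return (net savings in dollars, break-even run count) for a test suite.

    Assumes a flat hourly rate and per-run maintenance cost -- purely
    illustrative numbers, not a real costing model.
    """
    cost = (build_hours + maintain_hours_per_run * runs) * hourly_rate
    saved = manual_hours_per_run * runs * hourly_rate
    net = saved - cost
    per_run_gain = manual_hours_per_run - maintain_hours_per_run
    # Break-even: how many runs before savings cover the build cost.
    break_even = math.ceil(build_hours / per_run_gain) if per_run_gain > 0 else None
    return net, break_even

# Example: 40 hours to build, 0.5 h/run to maintain, replacing 4 h of
# manual testing per run, executed 100 times.
net, break_even = automation_roi(40, 0.5, 4, 100)
```

A sketch like this makes the CI/CD connection obvious: the more often the pipeline runs the suite, the faster automation pays for itself.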

3. Proofs and Reporting

Now that the framework is created and the operational concerns are addressed, the final piece is focused on deriving value from all that hard work. These are the “proofs” and reporting that maximize the return on your investment. In general, they include test analytics, performance metrics, test result reporting, insights, and automatic alerts.
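A minimal sketch of test result reporting with an automatic alert, in the spirit of the “proofs” above. The result format and the 95% pass-rate threshold are assumptions for illustration, not a prescribed standard.

```python
from collections import Counter

# Assumed threshold: alert when the pass rate drops below 95%.
ALERT_THRESHOLD = 0.95

def summarize(results):
    """results: list of (test_name, status) tuples, status 'pass' or 'fail'.

    Returns a small report dict -- the kind of summary a dashboard
    or alerting hook could consume.
    """
    counts = Counter(status for _, status in results)
    total = sum(counts.values())
    pass_rate = counts["pass"] / total if total else 0.0
    return {
        "total": total,
        "passed": counts["pass"],
        "failed": counts["fail"],
        "pass_rate": round(pass_rate, 3),
        "alert": pass_rate < ALERT_THRESHOLD,
    }

# Hypothetical run: one failure out of four tests trips the alert.
report = summarize([
    ("login_smoke", "pass"),
    ("checkout_flow", "pass"),
    ("search_filter", "fail"),
    ("push_notification", "pass"),
])
```

In practice the same summary would feed a dashboard for trend analysis, with the `alert` flag wired to a notification channel so regressions surface without anyone reading raw logs.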

In closing, Appium and other open source libraries are a great base upon which to build an automation framework. However, today’s test automation experts and engineering leaders need to understand that this is just the first step. A robust and stable automation framework requires a considerable engineering investment in time, planning, and resources. Downloading the open source library is just the beginning of a long journey.

Stay tuned for part two of our guide to automation with Applause, and keep up with the latest from SauceCon in the meantime.


Published On: March 1, 2018
Reading Time: 3 min
