

Identifying the right testing devices for a mobile app is one of the most difficult challenges a mobile test manager has to solve. In the article “Testing on the Right Devices”, I explained an approach to minimizing the number of test devices and provided some practical steps for selecting the test devices for a product. However, that is only half the job: without grouping and prioritizing the test devices, it is impossible to maintain professional mobile device management (MDM).

Create a List of Test Devices

To begin, the target group must be identified. Once the target group is known, a mobile test manager can gather data about the users and create personas. Based on their usage habits, the test manager can identify the specific mobile devices on which the mobile app must be tested.

Depending on the mobile app and its user base, it is possible that a mobile test manager has identified more than 30 devices that need to be covered during the development and testing phase. The mobile test manager should list all devices from highest to lowest priority and separate them into groups.
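As a minimal sketch (the device names and usage figures below are illustrative, not from the article), such a list can be kept as structured data and sorted by usage share before it is split into groups:

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str            # device model name
        usage_share: float   # share of the user base, taken from app analytics
        os_version: str

    # Illustrative inventory; real entries would come from the persona research.
    inventory = [
        Device("Model X", 0.18, "8.1"),
        Device("Model Y", 0.07, "7.0"),
        Device("Model Z", 0.02, "6.0"),
    ]

    # Highest priority first: rank by how many users actually own each device.
    inventory.sort(key=lambda d: d.usage_share, reverse=True)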

Create Device Groups and Prioritize Them

In the following example, the first group has the highest priority, ‘A.’ Devices in this group are the most used among the user base and represent the latest devices on the market. They have powerful hardware, a large screen with high resolution and density, and the latest operating system and browser versions installed.

Group 1, Priority A:

  • High-End devices
  • Quad Core CPU
  • 3 GB RAM or more
  • Display size >5”
  • Retina or Full-HD display
  • Latest OS available for the device

The second group has a medium priority, ‘B.’ Devices in this group are mid-range devices. They have average hardware, such as a slower CPU and a somewhat smaller, lower-resolution screen than the Group A devices, and the installed operating system version should be no more than one year old.

Group 2, Priority B:

  • Mid-range devices
  • Dual Core CPU
  • 1 GB RAM
  • Display size <5”
  • No Retina or Full-HD display
  • OS less than one year old

The third group has a low priority, ‘C.’ These devices have a slower CPU and a small screen with low resolution and density, and run a software version that is more than one year old. A grouping sketch that combines the criteria of all three lists follows below.

Group 3, Priority C:

  • Slow devices
  • Single Core CPU
  • <1 GB RAM
  • Display size <4”
  • Low screen resolution
  • OS older than one year
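The cut-offs from the three lists translate directly into a grouping rule. The following sketch encodes them; the field names and the exact comparison logic are assumptions for illustration, and the real criteria should be tuned to the product:

    from dataclasses import dataclass

    @dataclass
    class DeviceSpec:
        cpu_cores: int
        ram_gb: float
        display_inches: float
        high_res: bool       # Retina or Full-HD display
        latest_os: bool      # latest OS available for the device is installed
        os_age_years: float  # age of the installed OS version

    def priority_group(d: DeviceSpec) -> str:
        # Group 1, Priority A: latest high-end devices.
        if (d.cpu_cores >= 4 and d.ram_gb >= 3 and d.display_inches > 5
                and d.high_res and d.latest_os):
            return "A"
        # Group 2, Priority B: mid-range devices with an OS under a year old.
        if d.cpu_cores >= 2 and d.ram_gb >= 1 and d.os_age_years <= 1:
            return "B"
        # Group 3, Priority C: everything slower and older.
        return "C"

    print(priority_group(DeviceSpec(8, 4.0, 5.5, True, True, 0.0)))  # -> A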

Define Requirements for Each Group

Once the groups are in place, the mobile test manager needs to define requirements for each group. A requirement can be, for example, that functionality, design, and usability are covered 100%. In this case, Group A devices must fully meet all three requirements.

Devices in Group B still need to support the functionality and usability 100%. However, the design doesn’t need to be perfect due to smaller screen sizes.

For Group C, the design and usability don’t need to be perfect due to the old device hardware and software. However, the functionality must still be at 100%.
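One way to write these expectations down is a simple coverage matrix. In the sketch below, 1.0 stands for the 100% requirement described above; the lower design and usability targets for Groups B and C are placeholder values that each team has to define for itself:

    # Required coverage per priority group: functionality, design, usability.
    # 1.0 = must be covered 100%; values below 1.0 are assumed placeholders.
    REQUIREMENTS = {
        "A": {"functionality": 1.0, "design": 1.0, "usability": 1.0},
        "B": {"functionality": 1.0, "design": 0.8, "usability": 1.0},
        "C": {"functionality": 1.0, "design": 0.6, "usability": 0.6},
    }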

Keep the Groups and Devices Up-to-Date

Once the groups are established within a company’s mobile development lifecycle, the mobile test manager must ensure they remain up-to-date. This means constantly observing the mobile market for newly available devices and monitoring the device usage of the target customers.

If a new operating system version is released by a manufacturer, the mobile test manager must check whether the new version is actually being used by the customers and then update the device groups accordingly.
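A lightweight way to support this check is a periodic script against analytics data. In the sketch below, the adoption figures and the 10% threshold are assumptions, not numbers from the article:

    # Hypothetical OS adoption data from app analytics: version -> user share.
    os_adoption = {"9.0": 0.14, "8.1": 0.52, "8.0": 0.21, "7.1": 0.13}

    REGROUP_THRESHOLD = 0.10  # assumed cut-off; tune it to the user base

    def versions_to_review(adoption):
        """Return OS versions adopted widely enough to re-check the groups."""
        return [v for v, share in adoption.items() if share >= REGROUP_THRESHOLD]

    print(versions_to_review(os_adoption))  # ['9.0', '8.1', '8.0']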

What appears clear and simple in theory proves technically challenging in practice. Keeping track of an ever-growing number of mobile devices remains a crucial but tricky part of mobile testing. Depending on a company’s mobile development organization, user base, and apps in development, mobile device management (MDM) can be a full-time job. Many companies cannot realistically support this through their internal QA lab or testing teams, making crowdtesting a logical solution.

Published On: April 17, 2018
