What Is a Test Case? Examples, Types, Format and Tips

Most digital-first business leaders know the value of software testing. Some value high-quality software more than others and might demand more test coverage to ultimately satisfy customers. So, how do they achieve that goal?

They test more, and test more efficiently. That means writing test cases that cover a broad spectrum of software functionality. It also means writing test cases clearly and efficiently, as a poor test can prove more damaging than helpful.

In this guide, we will talk all about test case format, types and management. But, before we can begin, it’s important to explain what a test case is, as well as what a test case isn’t.


What is a test case?

Test cases define how to test a system, software or an application. A test case is a singular set of actions or instructions for a tester to perform that validates a specific aspect of a product or application functionality. If the test fails, the result might be a software defect that the organization can triage.

A tester or QA professional typically writes test cases, which are run after the completion of a feature or the group of features that make up the release. Test cases also confirm whether the product meets its software requirements.

A group of test cases is organized in a test suite, which tests a logical segment of the application, such as a specific feature.

Test case vs. similar terms

A test case is a basic concept in software testing, but there are similar terms that might cause confusion for beginners or individuals less familiar with quality assurance. Let’s explain what a test case is, relative to other technical or similarly named terms.

Test case vs. test scenario. As the name implies, a test scenario describes a situation or functionality that requires testing. For example, a test scenario might be, “Verify login functionality.” Test scenarios typically have their own ID numbers for tracking. QA teams often derive test cases (low-level actions) from test scenarios (high-level actions), and test scenarios typically come from software and business requirements documentation.

Test case vs. test script. These terms are essentially interchangeable. Both a test case and test script describe a series of actions that test an element of software functionality. But, there is a caveat. A test script is often used in the context of test automation, in which a machine does the testing. Thus, in an automation context, a developer must write a test script to be machine-readable, while a test case would be interpreted by a human for manual testing.
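To make the distinction concrete, here is a minimal, self-contained sketch in Python that contrasts the two. The authenticate() function is a hypothetical stand-in for real application code, included only so the test script can run.

```python
# Hypothetical stand-in for the system under test, so this sketch can run.
VALID_USERS = {"qa_user@example.com": "correct-horse-battery"}

def authenticate(username: str, password: str) -> bool:
    return VALID_USERS.get(username) == password

# Test case (human-readable, for manual testing):
#   Step 1: Navigate to the login page.
#   Step 2: Enter a valid username and password.
#   Step 3: Click "Log in."
#   Expected result: the user is logged in.

# Test script (machine-readable, for automation; run with pytest):
def test_valid_login():
    assert authenticate("qa_user@example.com", "correct-horse-battery")

def test_invalid_login_is_rejected():
    assert not authenticate("qa_user@example.com", "wrong-password")
```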

Test case vs. test plan. A test case covers a particular testing situation or a specific part of product functionality. A test plan is a much more comprehensive document, covering all aspects of the impending software testing. The purpose of the test plan is to align expectations for the entire organization on what will occur during testing, including project scope, objectives, start and end dates, roles and responsibilities, deliverables and defect mitigation.

Test case vs. use case. A use case describes how a system will perform a task under certain conditions. Software or business requirements documentation outline the use cases, which detail how the end user will interact with the system and the output they should receive. Use cases describe how the product should work, while test cases describe how the product should be tested. Test cases are derived from use cases to ensure the product is tested thoroughly.

Types of test cases

To validate and verify system functionality, the organization must take a multi-faceted approach that evaluates the product’s front and back ends. There are different ways to categorize the various types of test cases. One way to start is with these two categories: formal and informal.

Formal test cases. For these test cases, the tester writes a test in which all the inputs, such as preconditions and test data, are known and documented. Because the input is predefined, a formal test also has an expected output, which the test attempts to validate.

Informal test cases. Conversely, informal test cases do not have known inputs or outputs. Testers execute these types of test cases to discover and record the outcomes, which can reveal interesting findings about digital quality.

Most types of test cases are formal — planned in advance according to software requirements. Let’s explore some more test case types and examples:

  • functionality

  • UI

  • integration

  • performance

  • security

  • usability

  • database

  • user acceptance

  • exploratory

Functionality test cases. These tests determine whether the feature under test performs its function within the system, or fails to do so. The QA team writes these test cases based on requirements and runs them when the dev team finishes the function. Many different types of functional tests can validate app functionality, including unit tests, which check the smallest isolated segments of functionality. Functional test cases should include:

  • a description and/or name of the function under test

  • preconditions

  • steps for testing

  • an expected result

Functionality test case example: Perform a successful login and validate that the user is logged in.
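As a sketch of how that example might look when automated, the following test uses Python’s requests library. The URL, form fields and cookie name are hypothetical; the precondition, step and expected result mirror the checklist above.

```python
import requests

BASE_URL = "https://app.example.com"  # hypothetical environment under test

def test_successful_login():
    # Precondition: the test account already exists in this environment.
    credentials = {"username": "qa_user@example.com", "password": "correct-password"}

    # Step: submit valid credentials to the login endpoint.
    session = requests.Session()
    response = session.post(f"{BASE_URL}/login", data=credentials, timeout=10)

    # Expected result: the request succeeds and a session is established.
    assert response.status_code == 200
    assert "session_id" in session.cookies  # hypothetical cookie name
```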

UI test cases. These tests confirm the user interface (what the end user interacts with) functions as expected. Typically, UI tests focus on an app or web page’s visual elements to confirm they function and perform according to requirements. UI tests often examine display elements such as menus, sub-menus, buttons, tables and columns to make sure they are readable and consistent.

UIs continue to evolve. For this reason, UI tests can also mean validating a voice or video interface. UI tests should also include accessibility concerns, such as whether a screen reader can identify a button on a page.

UI test case example: Navigate to the home page and validate that the hamburger menu displays correctly for desktop and mobile web.
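A hedged sketch of that UI check with Selenium WebDriver might look like the following; the URL and CSS selector are hypothetical, and the window sizes approximate desktop and mobile web viewports.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def hamburger_menu_displayed(width: int, height: int) -> bool:
    driver = webdriver.Chrome()
    try:
        driver.set_window_size(width, height)
        driver.get("https://app.example.com")  # hypothetical home page
        menu = driver.find_element(By.CSS_SELECTOR, ".hamburger-menu")
        return menu.is_displayed()
    finally:
        driver.quit()

def test_menu_displays_on_desktop():
    assert hamburger_menu_displayed(1920, 1080)

def test_menu_displays_on_mobile_web():
    assert hamburger_menu_displayed(375, 812)
```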

Integration test cases. These types of test cases assess how the combined functionality works when merged into the application. While it is important to test individual units of software, it is equally important to make sure disparate systems can communicate with each other effectively. The tester must understand the application flows well to write effective integration tests.

API testing is one aspect of integration testing. Applications communicate with each other through APIs, especially as products become more interconnected in today’s mobile-centric world. API testing is a vital exercise to cover with integration test cases.

Integration test case example: Log in via a seller’s marketplace and validate that the marketplace then recognizes the user as logged in — in other words, that the login and marketplace modules communicate with each other.
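At the API level, that integration test case might be sketched as below; all endpoints, field names and the bearer-token scheme are assumptions for illustration.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API host

def test_marketplace_recognizes_logged_in_user():
    # Module 1: the login service authenticates the user and issues a token.
    login = requests.post(
        f"{BASE_URL}/login",
        json={"username": "seller@example.com", "password": "correct-password"},
        timeout=10,
    )
    assert login.status_code == 200
    token = login.json()["token"]  # hypothetical response field

    # Module 2: the marketplace accepts the token and treats the user as logged in.
    profile = requests.get(
        f"{BASE_URL}/marketplace/me",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    assert profile.status_code == 200
```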

Performance test cases. Functional tests check whether the application works. Non-functional tests, such as performance testing, check how the application performs under different types of workloads. A performance test must be specific with each step and expected result documented, as well as input data clearly defined, so that the tester can accurately assess how the system performs in the given conditions.

There are a variety of performance testing types, including load, stress, spike and scalability testing. Each type of performance testing, and each individual test, reveals different information about how the system responds to varying user loads.

Performance test case example: Measure the largest number of users a system can handle before it crashes.
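One way to script such a load test is with an open source tool like Locust, sketched below; the endpoint paths are hypothetical. Running it with progressively larger simulated user counts reveals where response times degrade or the system fails.

```python
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    wait_time = between(1, 3)  # seconds of "think time" between requests

    @task
    def browse_home(self):
        self.client.get("/")  # hypothetical path

    @task
    def search_catalog(self):
        self.client.get("/search?q=shoes")  # hypothetical path
```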

Security test cases. These tests identify vulnerabilities within a system or product. Another type of non-functional testing, security tests aim to find ways to better protect software assets, as well as identify how the system holds up against common types of attacks, and define the risk associated with the product.

Some security tests might include vulnerability scanning, configuration scanning and penetration testing, also called intrusive testing. Ultimately, the point of security testing is to yield actionable feedback that the organization can use to remediate vulnerabilities.

Security test case example: Validate that you cannot access company documents without a successful login.
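A minimal sketch of that check, assuming a hypothetical documents URL, verifies that an unauthenticated request is refused rather than served. Real security testing goes far beyond a single access check.

```python
import requests

def test_documents_require_login():
    response = requests.get(
        "https://app.example.com/company/documents",  # hypothetical protected URL
        allow_redirects=False,
        timeout=10,
    )
    # Expected result: the anonymous request is denied or redirected to login,
    # not answered with the document itself.
    assert response.status_code in (301, 302, 401, 403)
```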

Usability test cases. Rather than test the application functionality or performance, usability tests examine what prospective end users — not testers — think of a product. UX researchers prepare tests for participants outside the organization to gauge how easy or difficult the product is to use.

Organizations can conduct usability testing in a variety of ways, including moderated or unmoderated and remote or in-person. The goal is to take advantage of an end user’s perspective to identify points in the application that would cause them to stop using it. Usability tests can be formal or informal, depending on the goal and method of UX research.

Usability test case example: Task the participant with a money transfer between their checking and savings accounts, then gauge whether they can successfully complete the task and whether they experience any difficulty with the process.

Database test cases. Just because an app’s functionality, user interface and APIs are all working doesn’t mean the data is being stored properly. Database tests validate whether the application data is stored in accordance with requirements and regulations. Like functionality tests, database tests can vary in scope, from validation of a small database object to a complex action involving multiple parts of the application.

Some criteria that database tests might evaluate include whether the data is stored consistently, whether unauthorized people can access it, and how it is stored locally on a device. Consistent and secure data should be a priority for every business, regardless of the industry’s compliance standards — database tests help achieve that.

Database test case example: Validate that new customer PII data is stored in an encrypted format.
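The following self-contained sketch shows the shape of that check using an in-memory SQLite database. The store_customer() and _encrypt() helpers are hypothetical stand-ins for real application code and a real encryption scheme.

```python
import sqlite3

def _encrypt(value: str) -> str:
    """Stand-in for a real encryption routine (not for production use)."""
    return value[::-1].encode().hex()

def store_customer(conn: sqlite3.Connection, email: str) -> None:
    """Stand-in for the application layer that persists customer PII."""
    conn.execute("INSERT INTO customers (email) VALUES (?)", (_encrypt(email),))

def test_customer_pii_is_not_stored_in_plaintext():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (email TEXT)")

    store_customer(conn, "jane.doe@example.com")

    # Read the raw stored value, bypassing the application layer.
    (stored,) = conn.execute("SELECT email FROM customers").fetchone()
    assert stored != "jane.doe@example.com"
    assert "jane.doe" not in stored
```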

User acceptance test cases. These types of test cases validate the product from the end user’s perspective. An end user or client conducts user acceptance tests in a testing environment to validate the end-to-end flow of the product.

User acceptance tests can come in handy when business requirements change during the course of development. Stakeholders do not always effectively communicate these changes to the dev team. Through UAT test cases, the organization can document entry and exit criteria that cover gaps in previous tests.

User acceptance test case example: Validate that a user can register for a new account and that they receive an email confirmation.

Exploratory test cases. These informal test cases occur when the tester evaluates the system on an ad-hoc basis to attempt to discover defects missed by structured testing. While exploratory tests aren’t defined by a prescribed set of actions, the approach still requires some structure, particularly around time-boxing and results documentation, to ensure effective feedback.

Exploratory tests can help validate requirements by checking the system in ways not covered in scripted tests. Exploratory testing enables the QA organization to be adaptable and learn from gaps in test coverage.

Exploratory test case example: Check how using the browser’s Back button affects application functionality and whether it requires another login.

Some platforms, such as low-code development platforms, might also require their own specific tests. Keep in mind how the product will be developed, as well as any unique details that might necessitate further testing.

Test case results

While the objectives of test cases vary, most formal ones have predictable outcomes. In fact, the typical test case format should detail the expected outcome and actual outcome, which the test itself validates. Most test case results fall into these categories:

  • pass

  • fail

  • not executed

  • blocked

Passing and failing tests indicate that the system either accomplishes what it is supposed to or fails in that attempt. These results are not to be confused with tests designed to be positive or negative, both of which can either pass or fail. Positive tests verify that users can go through all the steps and reach the expected outcome when the input is valid, such as a successful money transfer between accounts when the balance is above $0. Negative tests verify that the system handles invalid input correctly, such as refusing login when the password is wrong. Both types of tests pass or fail depending on whether the actual outcome matches the expected one. The sketch below shows one of each.
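This self-contained sketch pairs one positive and one negative test around the money-transfer example; transfer() is a hypothetical stand-in for real application code so the tests can run.

```python
import pytest

def transfer(balance: float, amount: float) -> float:
    """Stand-in for the system under test: returns the remaining balance."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

def test_transfer_with_sufficient_balance():  # positive test: valid input
    assert transfer(balance=100.0, amount=40.0) == 60.0

def test_transfer_rejects_overdraft():  # negative test: invalid input
    with pytest.raises(ValueError):
        transfer(balance=10.0, amount=40.0)
```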

Test results that get marked as not executed are as the name suggests — tests that have not yet run, or will not run as part of this round of testing. Blocked tests result from an external circumstance or precondition inhibiting the test from running. For example, a system failure that prevents functionality from being available will cause a blocked test, as will an improperly configured test environment.

Test case format

Test case documentation typically includes all the pertinent information to run and collect data from the test. While the specific test case format might differ between organizations, most include the following details (a structured sketch follows the list):

  • Module name. This is the module or feature under test.

  • Test ID and/or name. This is a unique identifier that should follow a standard naming convention.

  • Tester name. The person conducting the test.

  • Test data. This describes the dataset(s) to use for the test.

  • Assumptions or preconditions. Describe the various steps that must be accomplished prior to testing, or what we can assume situationally about the test, such as “after a successful login.”

  • Test priority. Define whether the test is low, medium or high priority.

  • Test scenarios. As described above, this is the high-level action from which the test case derives.

  • Testing environment. Identify the name and/or characteristics of the environment for testing.

  • Testing steps. Detail the steps for the tester to follow in the desired order.

  • Expected results. This is the output you expect to receive from the system.

  • Actual results. This is the output you actually receive from the system.

  • Pass/fail determination. If the actual results match the expected results, the test passes. If not, the test fails.
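As an illustration, the fields above might be captured in a structure like the following Python dictionary; every value is hypothetical, and real test management tools define their own schemas.

```python
# Hypothetical test case record mirroring the format fields above.
test_case = {
    "module": "Authentication",
    "test_id": "TC-AUTH-001",
    "tester": "J. Doe",
    "test_data": {"username": "qa_user@example.com", "password": "correct-password"},
    "preconditions": ["Test account exists", "User is logged out"],
    "priority": "high",
    "scenario": "Verify login functionality",
    "environment": "staging",
    "steps": [
        "Navigate to the login page",
        "Enter a valid username and password",
        "Click 'Log in'",
    ],
    "expected_result": "User lands on the dashboard, logged in",
    "actual_result": None,      # filled in at execution time
    "status": "not executed",   # pass / fail / not executed / blocked
}
```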

By following the test case format above, the organization can adhere to a standard way of writing tests, which comes in handy during maintenance. The organization must regularly review, maintain and approve test cases to ensure they adequately cover new and old functionality. Thoroughly detailed test cases reduce the need for time-consuming exploratory testing to fill coverage gaps.

Writing test cases efficiently

Well-written test cases have obvious benefits: better quality products, happier customers, higher profits and easier test maintenance. But effort and organization go into writing test cases that help achieve these goals.

Generally, the tester should write test cases early in the SDLC, such as during the requirements gathering phase. Testers should refer to requirements and use case documentation as well as the overall test plan when they write test cases. A prototype can also inform the tester about how the feature or functionality will look when completed.

Once the tester has all of this information, they can begin to write the various types of test cases mentioned above. When writing test cases, the tester should consider application flows — how the user arrives at application functionality is an important element of their journey, and must be validated appropriately. For example, account settings changes must work correctly on a mobile app, which might be the primary flow, but also must work on a web browser, as well as any other places where users can interact with or change settings.

Write test cases in a clear and concise way to ensure accuracy no matter who reads and executes them. While some details are important, keep test cases economical and written at a high level so they remain easy to execute and cheap to maintain as the application changes. Well-written test cases should also be repeatable and reusable; few tests run only once, and reusable tests save time when developing additional functionality. Make each one traceable, so the documentation and results can easily inform the team.

Test case management

One way to make sure test cases are easy to locate and understand is to give them a thorough review. Test cases require consistency in naming conventions and descriptions. A sanity check can also reveal whether the writer’s “simple” description of the test steps actually makes sense to another reader and reflects real-world conditions.

As the scope of a product increases, so does the footprint of its test cases. Simply put, the more you develop, the more you need to test, which can make for challenges when it comes to scaling test suites. Not only do test cases have to keep up with new functionality, but the need for regression testing means older test cases need updates as well.

Test management tools or products can help organizations track and update tests as needed. There are many options for test management tools. Ultimately, the best option is one that fits as seamlessly as possible with your workflows, enabling the team to view tests, comment on them and access audit trails.

Reporting is another important element of test case management. Test case reports should give the team actionable insight into how testing is proceeding, what coverage you have, and where the team can improve in the future.

While it can be daunting to manage test suites, it is ultimately a necessary task to maintain digital quality for your products. If the work is difficult to handle internally, seek out tools or services to help you keep up.

Applause test case creation and management

As the world leader in testing and digital quality, Applause is ready to optimize and maintain your testing efforts.

Our holistic platform enables you to approach testing with speed, scale and flexibility, and that includes an enterprise-grade test case management solution. Our dedicated experts can design your test cases to match your testing needs and help maintain them over time as products evolve.

With a vetted global community of more than one million digital experts across various industries, Applause testers can complete large test suites in a fraction of the time it takes internal teams, avoiding costly bottlenecks. Our experts write test cases with traceability and visibility in mind, so you can quickly grasp defects and common points of failure.

With Applause augmenting your internal testing efforts, your organization can focus on high-priority quality initiatives and ensure your customers have top-notch digital experiences.

Contact us to see why leading global brands trust Applause to create and maintain their test suites.
