Designing and Testing APIs to Work Over the Long Run
Learning best practices can make a big difference in cost, longevity and customer satisfaction
Building an API isn’t terribly difficult: there are plenty of frameworks and technologies to expose your software to the outside world. But simply building an API doesn’t ensure it will thrive over time. While many organizations see API development as a one-off project, McKinsey encourages companies to think of each API as its own product that will evolve based on user feedback. Let’s examine some best practices for API design, and the critical API testing required to ensure longevity.
1. Let real-world uses drive API design choices
Consider how the API will be used in the real world and let that drive design choices.
There are two sides to all software: the side that creates it, and the side that uses it. Product owners and developers may think they know how a program should work, but users almost always have their own ideas. Almost any developer who has watched a usability session has been humbled watching someone struggle to use the application they created.
This is equally true for an API and its consumers. Think about what details consumers need and how they’ll be used. Also, consider the expected environment. Is the API for use with an internal application across a local area network (LAN) or a mobile app with limited bandwidth? With a LAN, what happens if the company opens a new international office with a slow connection?
What is the response time? Does that response time suit the use case? An automated job running at night during off hours has different requirements than a process where a user is waiting for an immediate answer. How many consumers will your application need to serve?
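One way to make those response-time questions concrete is to attach an explicit latency budget to each consumer type and check calls against it. The sketch below is illustrative only: the budgets, the `fetch_report` stub, and the `check_latency` helper are all hypothetical stand-ins, not part of any real framework.

```python
import time

# Hypothetical latency budgets (in seconds) for two very different consumers.
BUDGETS = {"interactive_user": 0.5, "nightly_batch": 30.0}

def fetch_report():
    """Stand-in for a real API call; replace with an actual request."""
    time.sleep(0.05)  # simulate ~50 ms of server work
    return {"status": "ok"}

def check_latency(call, budget_seconds):
    """Time a call and report whether it stayed within its budget."""
    start = time.perf_counter()
    result = call()
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= budget_seconds

result, elapsed, within_budget = check_latency(
    fetch_report, BUDGETS["interactive_user"]
)
print(within_budget)  # True: 50 ms fits a 500 ms interactive budget
```

A nightly batch job could pass with a 30-second budget where an interactive user would not; writing the budget down makes that difference testable.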
Don’t forget security: is it a public or private API? Does every user need their own account? Can you revoke authorization? Does data require special handling, like payment card industry (PCI) or HIPAA healthcare industry compliance?
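Revocation in particular is worth designing in from the start. This minimal sketch assumes a hypothetical in-memory token store with a revocation flag; a real API would back this with a database or a token-introspection endpoint.

```python
# Hypothetical token store: token -> account record with a revocation flag.
issued_tokens = {"token-abc": {"user": "alice", "revoked": False}}

def is_authorized(token):
    """A token grants access only if it exists and has not been revoked."""
    record = issued_tokens.get(token)
    return record is not None and not record["revoked"]

def revoke(token):
    """Withdraw access without deleting the audit trail for the token."""
    if token in issued_tokens:
        issued_tokens[token]["revoked"] = True

print(is_authorized("token-abc"))  # True: token is valid
revoke("token-abc")
print(is_authorized("token-abc"))  # False: access withdrawn without a redeploy
```

Because each consumer has its own credential, one compromised or misbehaving client can be cut off without affecting everyone else.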
2. Test during API design
Bring testers into the process while designing the API, not just at the end of the software development life cycle (SDLC). Testers will help you find real-world use cases and validate that the API provides real value, seamlessly connecting the right systems by exposing the right data in the right format.
Whether you’re trying to let users log in with their favorite social media platform, send package information to a shipping service, or aggregate financial data, don’t wait until the end to start API testing. Redesigning and rebuilding written code costs more than rethinking an idea or revising an API design document. API testing early in the development cycle can prevent costly mistakes and rewrites, and reduce the chance of leaving API consumers confused or dissatisfied.
In many cases, automated tests can verify whether an API works properly, but manual API testing plays a part in evaluating quality as well. Ensure the interface works as intended for the developers who will consume your API. Beyond evaluating functionality, testers can help you assess whether the API is intuitive to use, whether the documentation is clear, and whether behavior matches that documentation. Incorrect or incomplete documentation makes an API extremely difficult to use.
Even if the documentation is clear and correct, is the API usable? Is it easy to test? An API that is difficult to test is an early red flag for its quality.
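One simple way to keep behavior and documentation in sync is an automated contract check that compares a response against the fields and types the documentation promises. The endpoint, schema, and sample payload below are hypothetical; the point is the pattern, not the names.

```python
# Fields and types the (hypothetical) documentation promises for an order.
DOCUMENTED_SCHEMA = {"order_id": int, "status": str, "total_cents": int}

def validate_response(payload, schema):
    """Return a list of mismatches between a response and the documented schema."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for field: {field}")
    return errors

sample = {"order_id": 42, "status": "shipped", "total_cents": 1999}
print(validate_response(sample, DOCUMENTED_SCHEMA))  # []: response matches the docs
```

Run against every build, a check like this catches documentation drift long before a confused consumer does.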
See how to bring testers into the process earlier with the Essential Guide to Shift Your Testing Left.
3. Use versioning for changes
Use versioning to add new features or change the API without breaking it for existing users.
It is more costly to make changes and fixes later in the development process, but even more costly to make changes once the API is out in the real world — especially behavior changes that would break functions for existing consumers. Compatibility is important to test as release cycles shrink.
Thoughtfully consider whether a new version is necessary. Certain changes are less problematic than others:
Adding new data to an existing response might be OK, but it depends on the data format (Is there a rigid schema?).
Removing existing data is probably not safe. Similarly, introducing a new endpoint is probably OK, but removing an existing endpoint, probably not (consider adding instrumentation to see if the endpoint is being used).
Changing the behavior of existing data or endpoints (say, altering how a country-specific multiplier is applied) will almost certainly cause problems.
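The distinction between additive and breaking changes can be sketched with a toy consumer. The response shape and field names here are hypothetical, but the pattern is general: a consumer that reads only the fields it knows about tolerates new fields, while removing a field it depends on breaks it outright.

```python
def consumer_reads(response):
    """A consumer that reads only the fields it knows about."""
    return response["name"], response["price"]

v1 = {"name": "widget", "price": 100}
v1_plus = {"name": "widget", "price": 100, "currency": "USD"}  # field added

# Additive change: the consumer never notices the new field.
print(consumer_reads(v1) == consumer_reads(v1_plus))  # True

v_breaking = {"name": "widget"}  # "price" removed
try:
    consumer_reads(v_breaking)
except KeyError as exc:
    print(f"breaking change, missing {exc}")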
Explicit versions are not always a feature to celebrate; they can be a necessary burden, raising questions of their own:
How long will you support versions and how many versions will you support?
How will you test the different versions?
Is the documentation versioned?
The versions described above would likely be “explicit” to the consumer: specifically consuming version 1.0, 1.1, 2.0, and so on. More “transparent” versions can also be useful for quality and testing. Consider the “canary” concept, where a gradually increasing set of clients is directed to the latest changes to ensure the changes are acceptable (rather than fielding angry calls from everyone at once). Or, consider A/B API testing, where you deploy a performance tweak to some clients to measure and validate the improvements.
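A canary rollout is often implemented by bucketing clients deterministically, so the same client always sees the same version as the percentage ramps up. This is a minimal sketch of one common approach (hashing a client identifier); the function name and routing labels are made up for illustration.

```python
import hashlib

def canary_bucket(client_id, rollout_percent):
    """Deterministically route a fixed fraction of clients to the new version."""
    digest = hashlib.sha256(client_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # stable value in 0..99 for this client
    return "canary" if bucket < rollout_percent else "stable"

# The same client always lands in the same bucket, so a 10% canary
# stays consistent across requests instead of flickering between versions.
print(canary_bucket("client-1", 10) == canary_bucket("client-1", 10))  # True
print(canary_bucket("client-1", 0))    # stable: nobody routed at 0%
print(canary_bucket("client-1", 100))  # canary: everybody routed at 100%
```

Raising `rollout_percent` in small steps (1%, 10%, 50%, 100%) lets you watch error rates and latency at each stage before the change reaches every consumer.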
To boost your product quality by harnessing real-world API tester expertise, contact Applause today to get started.