
6 APIs you should know

With software systems evolving from monoliths to distributed systems, the communication between distributed components has naturally become a prime interest for developers. The communication endpoints of system components, referred to as Application Programming Interfaces (APIs), are often public and highly relevant for users of a certain service. But the tricky question is: What makes an API successful and popular?

There are probably two basic answers to this question. First, some APIs may simply offer functionality that is in high demand by many users. Second, popular APIs may also just offer a solid design, as described in our API core guidance. Design principles that make a service easy to use, simple to understand, flexible, and extensible are important non-functional features that can drive the popularity of APIs.

The following six examples benefit from both factors: they are APIs that interest many users thanks to their functionality and that follow exemplary API design principles. We describe the basic functionality, special features, and a typical application scenario for each service - and give examples of where each service is used in practice.


Amazon Simple Storage Service

General explanation

The Amazon Simple Storage Service (AWS S3) allows users to store any kind of binary large object - usually referred to as a blob, essentially any type of file - with high availability in the cloud. The AWS S3 API can therefore be used for numerous use cases in cloud apps where blobs must be stored: data blobs, web pages, documentation, backups, archives, and many more.

Files can be uploaded to and downloaded from the AWS S3 service over the Internet or used inside the Amazon AWS cloud. In this way, AWS S3 replaces traditional, central file servers in the cloud. Similar to a folder structure, blobs can be organised in buckets (directories). In addition to storing private blobs, the AWS S3 API can also publish blobs on the web. Hence, AWS S3 can be used as a regular web server, serving static files.

AWS S3 offers a REST API with functions to create, read, update, delete, and list objects and buckets. Furthermore, meta information, such as access control lists for objects and buckets, can be managed.
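As a rough sketch, these CRUD operations map onto HTTP verbs and bucket/object paths as follows. The helper "build_request" is hypothetical and for illustration only; real clients, such as the AWS SDKs, additionally sign every request (AWS Signature Version 4) and handle regions and error responses:

```python
# Hypothetical helper (not part of any AWS SDK) showing how CRUD
# operations map onto HTTP verbs and bucket/object paths in the S3
# REST API. Real clients also sign each request (AWS Signature v4).
def build_request(operation, bucket, key=None):
    verbs = {"create": "PUT", "read": "GET", "update": "PUT",
             "delete": "DELETE", "list": "GET"}
    path = "/" + bucket if key is None else "/" + bucket + "/" + key
    return verbs[operation], "https://s3.amazonaws.com" + path

# Reading an object translates to a GET on its bucket/key path:
print(build_request("read", "my-bucket", "backups/2021-01.tar.gz"))
# → ('GET', 'https://s3.amazonaws.com/my-bucket/backups/2021-01.tar.gz')
```

The same path scheme is what makes a bucket usable as a static web server: a public object is simply reachable via its GET URL.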

Special features

The documentation for AWS S3 is informal, uses approachable descriptions, and is enriched with many short examples. Additionally, there are many programming libraries (software development kits) for various programming languages that can be used to access the S3 API. What is noteworthy is that S3 is available in different service qualities (e.g. differentiated by access time), at different prices. This allows users to archive S3 buckets that do not need to be restored in short order in Amazon’s cheaper AWS Glacier.

Typical usage scenario

S3 replaces a normal file system in the cloud. It is useful for storing pictures, documents, and other objects that you do not want to store in a database. It is cheap, general-purpose storage.

Furthermore, S3 can also simply be used as a web server with static content. Another use case is to store backup files in S3. For cheaper long-term storage, some backup files can be moved periodically to AWS Glacier.

Popular services based on S3

AWS S3 is the basis for popular products like the cloud storage Dropbox. Furthermore, the open source project MinIO implements the S3 API and can be deployed on premise. This allows cloud storage to run highly available on private and on-premise cloud resources, helping to ensure data sovereignty. Both MinIO and AWS offer a command line client that supports the S3 API and can be used in automated pipelines like CI/CD processes.



GraphQL

General explanation

GraphQL is a simple, HTTP-based API technology with a flexible, powerful method for querying data using a predefined data schema that defines all available entities and their meta information. GraphQL furthermore features a simple query language that gives clients access to exactly the subset of data they require and thus minimises the communication overhead. GraphQL can be used to store data in a backend and retrieve it in a very flexible manner for various clients. Furthermore, GraphQL is useful as a proxy that lets clients integrate APIs in a uniform and common way.
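The core idea, namely that the client names exactly the fields it wants and receives only that subset, can be illustrated with a small, self-contained sketch. This is a toy resolver, not a real GraphQL engine, and the station data is invented:

```python
# Toy illustration of GraphQL's field selection: the client describes
# the desired shape of the result, and the server returns only that
# subset of the underlying data (no real GraphQL parsing involved).
def select(data, selection):
    out = {}
    for field, sub in selection.items():
        value = data[field]
        # A nested selection recurses; None means "take the scalar".
        out[field] = select(value, sub) if sub else value
    return out

station = {"name": "Berlin Hbf", "id": 8011160,
           "location": {"lat": 52.525, "lon": 13.369}}
# Corresponds roughly to the query: { name location { lat } }
print(select(station, {"name": None, "location": {"lat": None}}))
# → {'name': 'Berlin Hbf', 'location': {'lat': 52.525}}
```

Note that the unrequested fields ("id", "lon") never leave the server, which is exactly how GraphQL keeps the communication overhead low.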

Special features

Deutsche Bahn Systel’s BahnQL API is an interesting example to highlight the features of GraphQL. The API integrates various information sources, which would otherwise need to be queried separately, into one query endpoint using GraphQL. If you have installed the GraphiQL app, you can explore the API via its URI. A simple query looks like this:

  {
    search(searchTerm: "Berlin") {
      stations {
        name
      }
    }
  }
This example demonstrates how to integrate a set of APIs with a hub and spokes architecture using GraphQL. The GraphQL schema integrates the data models of all information sources while the GraphQL server adapts all technical interfaces provided by the various sources.

Typical Scenario

To release data via GraphQL you will need to build a GraphQL server (proxy) to access your data source and translate queries and mutations from GraphQL to the query language of your data source, e.g. SQL. To build a GraphQL server, several easily extendable GraphQL frameworks are available.

Another option is using a GraphQL code generator. This generates a type-safe code skeleton from a GraphQL schema that is easily extendable to access various data sources.

Popular services based on GraphQL

Good examples of how to use GraphQL in practice are the source code management platforms GitHub and GitLab. They offer most of their functionality alternatively as well-documented GraphQL endpoints.



WebHooks

General explanation

One megatrend of the Internet is its evolution into a web of independent, self-contained services or systems. WebHooks offer a common way to implement callbacks between these services and systems in a uniform but flexible way. The speciality of WebHooks is that they allow multiple services to be coupled at deployment time, removing the need for services to be already known at implementation time.

Special features

WebHooks are loosely coupled, using events and APIs to communicate. In a traditional implementation using multicast messaging, a service "A" would publish an event on a local event bus. All interested parties would then have to subscribe to the event type at this event bus.

WebHooks, in contrast, provide peer-to-peer notifications about events from service "A" (the subject) via remote callbacks. These may even use the Internet as a public distribution channel. WebHooks then connect to a trigger function of service "B" (the observer) that is usually provided by a REST API of "B". This means that service "A" provides a public API (or a user interface) to manage the WebHooks of "A".
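The registration and notification flow described above can be sketched as follows. This is a toy subject service with invented names; the actual HTTP POST to each observer's trigger URL is stubbed out:

```python
import json

# Toy subject service "A": observers register callback URLs at
# deployment time via A's management API; publish() shows which
# notifications would be sent (the real HTTP POST is stubbed out).
class Subject:
    def __init__(self):
        self.hooks = []

    def register(self, url):
        # Exposed through A's public API or user interface.
        self.hooks.append(url)

    def publish(self, event):
        body = json.dumps(event)
        # Real code would POST `body` to every registered URL.
        return [(url, body) for url in self.hooks]

a = Subject()
a.register("https://b.example.com/trigger")  # service "B" subscribes
print(a.publish({"type": "push", "repo": "demo"}))
```

The key point is that "A" is compiled and deployed without knowing "B"; the coupling happens only through the registered URL.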

Because WebHooks are often used to connect a broad range of services that are unknown during implementation, some added flexibility is required. Specifically, the payload sent to the trigger function of "B" should be configurable, e.g. by supporting a template for the payload to be sent, using Go templates or a similar solution. Another, but sometimes more complicated, option is to allow service “B” to extract trigger fields from the payload, e.g. by binding values from the payload using path expressions and string functions. Either solution is critical to ensure that a receiving service “B” can easily ingest relevant values from the payload sent by “A”.
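The payload-templating option can be sketched with Python's standard string.Template standing in for Go templates. The event fields and the template are invented for illustration, and the HTTP call to "B" is omitted:

```python
import json
from string import Template

# Sketch of configurable WebHook payloads: the user supplies a
# template, and service "A" fills it with event fields before sending
# it to B's trigger URL (the POST itself is omitted here).
payload_template = Template('{"text": "Commit $commit_id pushed by $author"}')

event = {"commit_id": "a1b2c3d", "author": "alice"}
payload = payload_template.substitute(event)

print(json.loads(payload)["text"])
# → Commit a1b2c3d pushed by alice
```

Because the template is configuration rather than code, the same event source can feed very different receivers, e.g. a chat system and a ticket system, without changing service "A".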

Typical Scenario

A good example of how WebHooks can be used and supported are Git repository services such as GitHub. These services are frequently used to trigger external procedures, like CI/CD processes, or to send messages to project members when the source code of an application has been changed by a developer.

Another example is the application If This Then That (IFTTT), which allows applets to be triggered by devices or services using a WebHook. An applet is essentially a conditional statement that can execute all kinds of actions, like sending emails, controlling devices, calling REST services, and many more. The IFTTT platform offers consumers predefined triggers and applets that do not require programming knowledge but can still be used to create sophisticated, tailored functions.

Popular services based on WebHooks

WebHooks are supported by Source Code Management Systems such as GitHub and GitLab as well as by Trouble Ticket Systems like Jira to inform other services about changes.

Messaging systems like Mattermost and Slack can be integrated using WebHooks, e.g. to send out a message to staff members if a critical event occurs.

IFTTT systems often serve as automation hubs to control systems if relevant events on input sources occur. This is for example the case for smart home and house automation systems.


The Raft Consensus Algorithm

General explanation

Services in distributed systems are mostly implemented as stateless services, ensuring that they can simply be replicated at scale and be fault tolerant. This design implies that any data used by a service is usually not stored inside the service itself. Instead, a stateless service typically uses a separate database to store state. That database could be in memory for performance reasons, use SQL, be a NoSQL database, or simply come in the shape of a key-value store.

In distributed systems it is a big challenge to ensure a consistent state of data - particularly if a service has multiple instances that manage a common state or data. To address this issue, the Raft Consensus Algorithm enables the election of a leader node that controls the reliable writing of values and distributes new values to all peer nodes to guarantee consistency. Thus, the Raft Consensus Algorithm ensures that all nodes agree upon the same state transitions and values.

Special features

Until a few years ago, highly available databases primarily used Leslie Lamport’s Paxos protocol. However, since this algorithm is not easy to understand, difficult to prove right, and ultimately hard to control, Diego Ongaro developed and implemented the more understandable Raft Consensus Algorithm in 2013. Thanks to its far more accessible logic and, thus, usability, the algorithm is now frequently used in distributed key/value stores like etcd and Consul. To facilitate an even better understanding and monitoring of the algorithm, various visualisations and simulations are also available, e.g. via etcd.

Typical Scenario

A consensus algorithm is needed to publish and store consistent state information across the nodes of a distributed service. The common state of a distributed service could be session information or sequential identifiers, such as ticket numbers. One solution is to use a distributed in-memory database or key/value store and, additionally, implement a leader election pattern to ensure consistency between nodes. However, you can also use a Raft library that integrates the Raft consensus server API and the consensus algorithm directly inside the distributed service.
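The heart of Raft's leader election, namely that a candidate wins a term once a strict majority of nodes grants it their single vote for that term, can be sketched as follows. Timeouts, log comparison, and actual message passing are deliberately omitted:

```python
# Toy sketch of Raft's leader election rule: a candidate becomes the
# leader for a term once it holds votes from a strict majority of
# nodes, and each node grants at most one vote per term.
def request_votes(candidate, nodes, term, voted_for):
    votes = 0
    for node in nodes:
        # A node votes if it has not voted in this term yet (or has
        # already voted for this very candidate).
        if voted_for.get((node, term)) in (None, candidate):
            voted_for[(node, term)] = candidate
            votes += 1
    return votes > len(nodes) // 2  # strict majority wins

nodes = ["n1", "n2", "n3", "n4", "n5"]
voted_for = {("n4", 7): "other", ("n5", 7): "other"}
print(request_votes("n1", nodes, 7, voted_for))  # 3 of 5 votes → True
```

The one-vote-per-term rule is what makes a split decision impossible: two candidates can never both collect a strict majority in the same term.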

Popular services based on the Raft Consensus Algorithm

The key/value store etcd, used by Kubernetes, introduced the Raft Consensus Algorithm first. Other key/value stores like Consul added Raft as an alternative to the still somewhat common Paxos algorithm. In-memory databases like Redis added Raft for strong consistency as an extension of their high availability solution.


Secret Management / Vault

General explanation

If you develop a service, you will most likely want to protect it and, therefore, need to take care of who is allowed to access its functions and data. Authentication and authorisation as well as keys/certificates are the usual trilogy of functions to achieve this end. Authentication functions validate the identity of users and services; authorisation functions restrict the access rights for each user/service depending on their role; and keys/certificates encrypt either the communication between client and server or the data itself. For all these security functions, secrets like passwords, keys and certificates must be managed in a very safe manner.

To reduce security risks and minimise the attack surface, secrets should not be static. Very often dynamic, i.e. regularly changing, secrets are required. Furthermore, the security functions should be integrated with the security engines of other vendors, e.g. by using a hub and spokes pattern. Vault is one security solution that ensures this.

Special features

Vault manages secrets and security policies and offers data encryption as a service. Secrets are managed in a key/value store using the typical CRUD operations (create, read, update and delete) of the HTTP protocol. The hierarchical structure (paths) in which the secrets are organised acts like a virtual filesystem. The root of the path addresses the specific security engine to be accessed. This open, generic approach provides uniform integration with the security engines of various (cloud) providers. Additionally, the command line interface (CLI) offers a command “path-help”, which explains the special meaning of a security engine for a given path.
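The path-based key/value interface can be sketched with a toy in-memory store that mimics the shape of Vault's KV API. There is no authentication, versioning, or real HTTP here, and the paths and data are invented:

```python
# Toy in-memory store mimicking the shape of Vault's KV interface:
# secrets live under hierarchical paths and are managed with the
# usual CRUD verbs (no auth, versioning, or real HTTP).
class KVStore:
    def __init__(self):
        self._secrets = {}

    def create(self, path, data):    # write, like PUT/POST secret/<path>
        self._secrets[path] = dict(data)

    update = create                  # an update is simply another write

    def read(self, path):            # like GET secret/<path>
        return self._secrets.get(path)

    def delete(self, path):          # like DELETE secret/<path>
        self._secrets.pop(path, None)

store = KVStore()
store.create("secret/db/app1", {"username": "svc", "password": "s3cr3t"})
print(store.read("secret/db/app1")["username"])  # → svc
```

The path prefix ("secret/" here) plays the role of the engine root described above: the same CRUD verbs work against whichever engine the root of the path selects.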

Another very interesting aspect is the uniform mapping of the API functions to the CLI. Most API functions have a one-to-one mapping to a CLI command. The option “-output-curl-string” for CLI commands outputs the corresponding “curl” command. This is very useful, as you can directly see how to access that function using an HTTP URL.

Typical Scenario

Vault is especially useful if two services need authenticated communication, e.g. a business service that accesses a database. In general, it is critical to protect secrets such as usernames, passwords, or tokens that must be configured within a client and server. Vault provides dynamic secrets that are changed regularly. This functionality is usable via its API.

Additionally, Vault offers secret injection solutions to enable (legacy) applications to easily use its secret management. The application can use password/configuration files or OS environment variables, which are injected with the current secret information in the required format and representation.

Popular services based on Vault

Secret management is a basic service required in enterprise infrastructure solutions as well as by public services.


Kubernetes Resource Definitions

General explanation

Kubernetes (K8s) is an open-source system to automate the deployment, scaling, and management of containerised applications. It allows users to run and manage containers and other resources on a computing infrastructure hosted by a provider or on premise. The Kubernetes API offers a unique approach to manage states and resources in a system. Thanks to the option of registering new resources via plugins, this kind of API is also extensible.

Special features

The core of Kubernetes is the Kubernetes API server. This service manages all kinds of Kubernetes resources, like deployments, pods, containers, persistent volumes, and many more. The first genius idea of this approach is that only a limited set of operations is defined to manage the resource entries: create, update, read and delete, as well as their sub-forms:

  • replace and patch for update,
  • get, list, list all namespaces and watch for read, and
  • rollback, read / write scale and read / write state as additional operations.

Each kind of resource is specified by a resource definition. Additional resource types for extensions can be added by specifying a custom resource definition. A resource definition specifies all properties of a resource and consists of a common meta part and a resource specific part of properties.

The second genius idea of Kubernetes is the reconciliation process. When you set the properties of a concrete resource using the API, you also define the new desired state of the resource. A resource-specific controller observes the actual and the desired state, calculates the difference, and then executes actions to achieve the desired state for the resource. This approach is a pattern applicable to various domains and is supported by tools like kubebuilder.
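The reconciliation loop can be sketched for a single, invented property, here a replica count; a real controller watches the API server for changes and runs such logic continuously:

```python
# Toy reconciliation step: compare the desired state of a resource
# with its observed state and derive the actions needed to converge.
# A real Kubernetes controller watches the API server and repeats
# this until actual == desired.
def reconcile(desired, actual):
    actions = []
    diff = desired["replicas"] - actual["replicas"]
    if diff > 0:
        actions.append(("scale_up", diff))
    elif diff < 0:
        actions.append(("scale_down", -diff))
    return actions  # empty list: nothing to do, states match

print(reconcile({"replicas": 3}, {"replicas": 1}))  # → [('scale_up', 2)]
```

Because the controller only ever computes "what is missing", the same loop self-heals after failures: if a replica crashes, the observed state drops and the next reconciliation simply scales back up.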

Typical Scenario

The original idea was that Kubernetes should provide its functionalities using its own Kubernetes API server and reconciliation process. Accordingly, the Kubernetes community frequently uses these to enhance the functionalities of Kubernetes itself, e.g. by adding new security functions or CI/CD and deployment processes.

Another example is to use the Kubernetes API server to implement other types of applications, such as traffic light management systems. If a bus approaches an intersection, the traffic light resource of the intersection is updated to request green lights for the bus lane. Based on the actual state of the traffic light, the controller initiates the sequence of lights so the bus can pass the intersection with no delay. You can model the intersection states, including the traffic lights, with a custom resource definition and implement the logic for the traffic lights with a Kubernetes controller.

Popular services based on Kubernetes Resource Definitions

All popular extensions of Kubernetes use custom resource definitions, e.g. Functions-as-a-Service tools like Knative, service meshes like Linkerd, Contour, and Istio, and CI/CD tools like Tekton. Tools supporting the use of custom resource definitions are controller builders like kubebuilder, operator toolkits like KUDO, and the Operator SDK.

But there are also broader approaches that use the Kubernetes API server as a framework (e.g. Crossplane) and establish a new architecture style, the Open Application Model (OAM). The core idea of these approaches is to model application components with resources and to automate tasks as well as the reconciling controller logic. This represents a major shift in the programming paradigm, moving from imperative or functional programming styles (relying on imperative commands or mathematical functions) to state-transition-based programming (relying on resources having different states and reconciliation processes calculating transitions between them).

Support Centre for Data Sharing