What is an API Waterfall? Explained Simply
In the intricate tapestry of modern software architecture, where applications are no longer monolithic giants but rather constellations of interconnected services, the term "API" has become ubiquitous. It stands as the fundamental language through which disparate systems communicate, share data, and orchestrate complex operations. As software ecosystems evolve, becoming more distributed and specialized through microservices, the interactions between these components become increasingly elaborate. This growing complexity often gives rise to a phenomenon that, while not a formally defined technical term, is vividly described as an "API Waterfall."
To truly grasp what an API Waterfall entails, one must first appreciate the dynamic nature of Application Programming Interfaces (APIs) and the crucial role they play in knitting together the digital fabric of our world. An API Waterfall, at its core, refers to a sequence of interdependent API calls, where the successful completion and output of one call directly influence or enable the initiation of subsequent calls. Imagine a cascade, where each drop of water contributes to the flow that follows, eventually forming a powerful stream. In the context of APIs, this cascade represents the flow of data, logic, and control across multiple services, each relying on the preceding one to move forward. This article will delve deep into this concept, exploring its origins in modern architectures, its inherent benefits and profound challenges, and the strategic solutions, including the indispensable role of an API gateway, that are essential for managing these cascading interactions effectively. We will unravel how a seemingly simple chain of requests can become a complex web, and how understanding and mastering the API Waterfall is key to building robust, scalable, and high-performing applications.
The Foundation: Understanding APIs and Their Ubiquity
Before we plunge into the depths of an API Waterfall, it is imperative to solidify our understanding of what an API truly is and why it forms the bedrock of virtually every digital experience we encounter today. An API, or Application Programming Interface, is essentially a set of definitions and protocols that allows different software applications to communicate with each other. Think of it as a meticulously designed menu in a restaurant: it lists the dishes you can order (the functions available), describes what ingredients are in them (the data inputs required), and specifies what you can expect in return (the data output). You don't need to know how the chef prepares the meal; you just need to know how to order from the menu.
In the digital realm, APIs abstract away the internal complexities of a system, presenting a clean, standardized interface for interaction. For instance, when you use a weather application on your phone, it doesn't have its own weather station; it makes an API call to a weather service. When you log into a third-party website using your Google or Facebook account, that website is utilizing an API to authenticate your identity without needing your specific credentials. This principle of abstraction and interoperability is what makes modern software development incredibly efficient and innovative.
APIs come in various architectural styles, with REST (Representational State Transfer) being the most prevalent in today's web landscape due to its simplicity, statelessness, and scalability. Other styles include SOAP (Simple Object Access Protocol), which is more rigid and protocol-heavy, and GraphQL, which offers clients more flexibility in requesting precisely the data they need. Regardless of the underlying style, the core purpose remains the same: to facilitate seamless, programmatic interaction between disparate software components.
The increasing complexity of modern applications, often built on microservices architectures, means that a single user action might trigger a multitude of behind-the-scenes API calls. Each microservice, designed to perform a specific, fine-grained function, exposes its own API. When these services need to collaborate to fulfill a broader request, they do so by invoking each other's APIs. This proliferation of API interactions, while offering tremendous benefits in terms of modularity and independent scalability, inherently lays the groundwork for the cascading sequences that we refer to as an API Waterfall. Understanding this foundational role of APIs is the first step toward appreciating the dynamics, challenges, and solutions associated with their sequential execution.
Unpacking the "Waterfall" Metaphor in APIs: Two Core Interpretations
The term "API Waterfall" is evocative, painting a picture of something flowing downwards in a series of steps. While not a formal computer science term, its utility lies in describing two related, yet distinct, phenomena within the realm of API interactions. Both interpretations offer valuable insights into how APIs operate in complex systems.
Interpretation 1: Sequential and Dependent API Calls (Primary Focus)
This is the most common and practical interpretation of an "API Waterfall" and will be our primary focus. It refers to a series of API calls where the output or successful completion of one API call is a prerequisite for the initiation or correct functioning of the next. The data generated by an upstream API call acts as the input or a crucial condition for a downstream API call. This creates an explicit chain of dependencies, much like a physical waterfall where the water from one tier flows down to the next.
Concrete Scenario: E-commerce Order Processing
To illustrate this vividly, let's consider a typical e-commerce transaction: a customer places an order on an online store. This seemingly simple action triggers a complex API waterfall behind the scenes:
- User Authentication API Call: The first step typically involves verifying the user's identity. Before anything else can happen, the system needs to confirm that the user is logged in and authorized. An API call is made to an authentication service, providing credentials (e.g., a session token). The output is a confirmation of the user's identity and permissions.
- Product Lookup API Call: Once authenticated, the system needs to retrieve detailed information about the products in the customer's cart. This involves an API call to a product catalog service, passing the product IDs obtained from the user's cart. The output includes product names, prices, descriptions, and images.
- Inventory Check API Call: With product details in hand, the system must now ensure that the desired quantities are actually available. An API call is made to an inventory management service, passing the product IDs and quantities. The output indicates availability and reserves the items temporarily. If this call fails (e.g., out of stock), the entire process might halt or trigger an alternative flow.
- Payment Processing API Call: Assuming inventory is confirmed, the next critical step is to process the payment. This involves an API call to a payment gateway (e.g., Stripe, PayPal), passing the order total, customer payment details, and a unique transaction ID. The output is a payment confirmation or a decline.
- Order Confirmation API Call: Upon successful payment, an API call is made to an order fulfillment service to officially create the order record, associating it with the user, products, and payment. This call confirms the transaction within the internal system. The output is an order ID and status.
- Shipping Notification API Call: Finally, with the order confirmed, an API call is made to a shipping service or a notification service to initiate the shipping process and potentially send an email/SMS to the customer. This uses the order details generated in the previous step.
Each of these steps is an API call, and each subsequent step relies on the success and data output of the preceding one. The failure at any point in this chain can ripple through, preventing later steps from executing or requiring complex rollback mechanisms. This dependency chain is the essence of an API Waterfall.
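The chain above can be sketched in Python. This is a minimal illustration, not a real implementation: every function below is a hypothetical stub standing in for an HTTP call to a separate service, and the IDs, prices, and return shapes are invented for the example.

```python
def authenticate(session_token):
    # Stub: an auth service would validate the token and return the user.
    return {"user_id": "u-123"}

def lookup_products(product_ids):
    # Stub: a catalog service would return details for each product ID.
    return [{"id": pid, "price": 25.00} for pid in product_ids]

def check_inventory(product_ids):
    # Stub: an inventory service confirms stock and reserves the items.
    return True

def process_payment(user_id, total):
    # Stub: a payment gateway charges the customer.
    return {"status": "approved", "txn_id": "t-789"}

def create_order(user_id, products, payment):
    # Stub: an order service records the confirmed order.
    return {"order_id": "o-456", "status": "confirmed"}

def place_order(session_token, product_ids):
    """Each step consumes the output of the previous one -- the waterfall."""
    user = authenticate(session_token)                  # step 1
    products = lookup_products(product_ids)             # step 2, needs auth
    if not check_inventory(product_ids):                # step 3, needs products
        raise RuntimeError("out of stock")
    total = sum(p["price"] for p in products)
    payment = process_payment(user["user_id"], total)   # step 4, needs the total
    if payment["status"] != "approved":
        raise RuntimeError("payment declined")
    return create_order(user["user_id"], products, payment)  # step 5

order = place_order("token-abc", ["p-1", "p-2"])
print(order["order_id"])  # o-456
```

Note how a failure at any step (the two raise statements) stops every step after it, which is exactly the error-propagation property discussed below.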
Why This Dependency Arises:
The sequential nature of these calls is not arbitrary; it's driven by fundamental business logic and data flow requirements:

- Logical Precedence: Certain actions simply cannot occur before others (e.g., cannot process payment before knowing the total).
- Data Transformation: The output of one service often becomes the input, in a transformed or enriched state, for another.
- State Management: Each step updates the system's state, which subsequent steps then react to.
Implications of This Dependency:
The API waterfall, while logically necessary, introduces several critical implications:

- Cumulative Latency: The total time for the entire operation is the sum of the individual API call times, plus network overhead, plus any processing time between calls. A slow call at the beginning can significantly delay the entire sequence.
- Error Propagation: A failure in an early API call can prevent all subsequent calls from executing, potentially leading to a cascading failure across the system.
- Resource Consumption: Each call consumes network resources, CPU cycles, and memory on the respective services. Managing this consumption across a waterfall is crucial.
Interpretation 2: Performance Visualization (Secondary Focus)
The second interpretation draws a direct parallel to "waterfall charts" commonly found in web browser developer tools or network monitoring solutions. These charts visually represent the timing of network requests, showing when each request started, how long it took, and crucially, any dependencies between them.
In this context, an "API Waterfall" would refer to such a visual representation of a series of API calls. For example, when a web page loads, it often makes multiple parallel and sequential API calls to fetch data, images, and scripts. A waterfall chart clearly shows:

- The initial request for the HTML document.
- Subsequent requests for CSS, JavaScript, and images, often occurring in parallel.
- API calls to backend services to fetch dynamic content, showing their initiation relative to other resources.
- Any blocking requests, where one resource (e.g., a JavaScript file) must fully load before another can begin.
Relevance to API Performance Monitoring:
While this interpretation is more about observing the waterfall than being the waterfall, it is highly relevant. A performance waterfall chart for a complex application making many API calls can immediately highlight:

- Bottlenecks: Which individual API call is taking the longest?
- Hidden Dependencies: Are there unexpected sequential delays between calls?
- Parallelization Opportunities: Can some calls be made concurrently instead of sequentially?
- Overall Latency: The total time taken from the first to the last request provides a crucial user experience metric.
Connecting this back to the primary interpretation, a slow API call early in a dependent sequence (as described in the e-commerce example) would appear as a long bar on a performance waterfall chart, demonstrating its detrimental impact on all subsequent operations and the overall transaction time. Therefore, understanding both aspects—the logical dependency and the performance visualization—is key to holistically addressing the challenges and optimizing the efficiency of API interactions.
The Genesis of API Waterfalls in Modern Architectures
The rise of the API Waterfall phenomenon is not accidental; it is a direct consequence of the prevailing architectural paradigms that dominate modern software development. As applications evolve from monolithic giants into distributed, modular systems, the inherent need for services to collaborate gives rise to these intricate cascades of API calls. Understanding these architectural drivers is crucial for appreciating why API waterfalls are an inevitable, rather than an anomalous, part of today's digital landscape.
Microservices Architecture
Perhaps the most significant contributor to the proliferation of API waterfalls is the widespread adoption of microservices architecture. In this paradigm, a large application is broken down into a collection of small, independent services, each running in its own process and communicating with others through well-defined, lightweight mechanisms—predominantly APIs. Each microservice is responsible for a specific business capability, such as user authentication, product catalog management, order processing, or payment handling.
While microservices offer unparalleled benefits in terms of development agility, independent deployment, and scalability, they inherently introduce a need for coordination. A single user request, which in a monolith might have been handled by a single function call, now often necessitates calls to multiple microservices. For instance, creating a user account might involve:

1. An API call to the User Service to create the basic user record.
2. An API call to the Profile Service to store additional user details.
3. An API call to the Notification Service to send a welcome email.
4. An API call to the Analytics Service to log the new user signup.
If the Profile Service needs the user_id generated by the User Service, or the Notification Service needs the user's email from the Profile Service, this creates a clear sequential dependency – an API waterfall. The granularity of microservices directly leads to a higher volume of inter-service communication via APIs, increasing the likelihood and complexity of these cascading interactions.
Distributed Systems
Modern applications are increasingly distributed, spanning multiple servers, data centers, or even cloud regions. This geographical distribution, while offering benefits like fault tolerance and reduced latency for geographically dispersed users, introduces significant network overhead. When API calls need to traverse networks, potentially crossing firewalls, load balancers, and vast physical distances, each hop adds latency.
In a distributed system, an API waterfall means that not only are there logical dependencies, but also physical ones tied to network performance. A sequence of three API calls, each requiring a round trip across a network, will inevitably take longer than three internal function calls within a single process. Moreover, the reliability of each network hop becomes a factor. A transient network glitch between two services in a waterfall can disrupt the entire chain, even if the services themselves are healthy. The very nature of distributed computing amplifies the challenges associated with managing sequential API calls.
Data Orchestration and Aggregation
Many contemporary applications serve as sophisticated data aggregators, pulling information from diverse sources to present a unified view or enable complex business logic. This often involves orchestrating data from internal databases, external third-party services, legacy systems, and various microservices.
Consider a financial dashboard that displays a customer's total assets. This might require:

1. An API call to a banking service for checking account balances.
2. An API call to an investment service for portfolio values.
3. An API call to a credit score provider.
4. An API call to an internal service that aggregates these figures and applies business rules.
Each of these external and internal data sources typically exposes an API. The process of gathering, transforming, and combining this data often forms an API waterfall, where the availability and structure of data from one source dictate how the next source can be queried or how the aggregation logic can be applied. The need to create a cohesive data narrative from fragmented sources inherently generates these sequential API dependencies.
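A hedged sketch of that aggregation flow: each fetch function below is a hypothetical stub standing in for a call to a separate provider's API, and the figures are invented.

```python
def fetch_bank_balance(customer_id):
    return 1200.00   # stub for the banking service API

def fetch_portfolio_value(customer_id):
    return 8300.00   # stub for the investment service API

def fetch_credit_score(customer_id):
    return 710       # stub for the credit bureau API

def total_assets(customer_id):
    # The final aggregation step depends on every upstream call having
    # succeeded and returned data in the expected shape.
    balance = fetch_bank_balance(customer_id)
    portfolio = fetch_portfolio_value(customer_id)
    score = fetch_credit_score(customer_id)  # feeds business rules, not the sum
    return {"total": balance + portfolio, "credit_score": score}

print(total_assets("c-42"))  # {'total': 9500.0, 'credit_score': 710}
```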
Event-Driven Architectures
While often associated with asynchronous communication, event-driven architectures can also contribute to the creation of API waterfalls, especially at the initiation point or within specific event handlers. In an event-driven system, services communicate by publishing and subscribing to events. An initial user action or external trigger might generate an event that then kicks off a series of operations.
For example, a "New User Registered" event might be published. A "Profile Creation Service" subscribes to this event, and upon receiving it, makes an API call to an external identity provider to enrich user data. Subsequently, a "Welcome Email Service" also subscribes to the original event (or a "Profile Created" event) and makes an API call to an email gateway to send a personalized welcome message. While the event-driven nature allows for decoupling and parallel processing of some tasks, individual services reacting to an event may still perform internal API waterfalls to fulfill their specific responsibilities. Moreover, the initial event itself might be triggered by an API call to the system's entry point, effectively starting a cascade.
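The event flow described here can be sketched with a toy in-memory publish/subscribe dispatcher. Everything below is illustrative: the event names, handlers, and payloads are assumptions, and a real system would use a broker such as Kafka or RabbitMQ.

```python
from collections import defaultdict

subscribers = defaultdict(list)
sent_emails = []

def subscribe(event_name, handler):
    subscribers[event_name].append(handler)

def publish(event_name, payload):
    # Deliver the event to every registered handler.
    for handler in subscribers[event_name]:
        handler(payload)

def create_profile(event):
    # This handler might itself call an external identity provider (an
    # internal mini-waterfall), then publish a follow-up event.
    profile = {"user_id": event["user_id"], "enriched": True}
    publish("profile.created", profile)

def send_welcome_email(event):
    # Stub for a call to an email gateway API.
    sent_emails.append(event["user_id"])

subscribe("user.registered", create_profile)
subscribe("profile.created", send_welcome_email)

publish("user.registered", {"user_id": "u-1"})
print(sent_emails)  # ['u-1']
```

Even though the services are decoupled through events, the welcome email still cannot go out until the profile has been created: the dependency chain survives, just expressed through event topics instead of direct calls.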
In essence, the modern paradigm of building software—characterized by modularity, distribution, and specialized services—inherently encourages the creation of intricate communication patterns. The API waterfall is a natural outcome of this evolution, a testament to the power of composable services, but also a complex challenge requiring careful consideration and robust management strategies.
The Benefits and Opportunities of API Waterfalls
While the term "API Waterfall" often brings to mind potential challenges, it's crucial to recognize that the architectural patterns leading to these sequential API calls are not inherently flawed. In fact, they underpin some of the most significant advantages of modern software development. The benefits derived from these cascaded interactions often outweigh the complexities, provided they are managed effectively.
Modularity and Decoupling
One of the foremost advantages of systems that exhibit API waterfalls is the high degree of modularity and decoupling they achieve. Each step in an API waterfall typically corresponds to a distinct service or component, responsible for a specific, well-defined function.

- Independent Development: Teams can develop, test, and deploy individual services without significantly impacting others. This accelerates development cycles and fosters parallel workstreams.
- Reduced Dependencies at Code Level: Services communicate via their public API contracts, not through shared internal codebases. This minimizes tight coupling, where changes in one part of the code could break another.
- Clear Boundaries: The explicit API calls create clear boundaries between different parts of the system, making it easier to understand responsibilities and system architecture. This is particularly beneficial in large organizations with multiple teams.
For instance, in our e-commerce example, the Payment Service doesn't need to know how product details are retrieved or how inventory is managed. It just needs to receive an order total and process it. This modularity allows for a clearer division of labor and a more manageable codebase.
Specialization
With modularity comes specialization. Each service within an API waterfall can be designed to be highly proficient at one particular task.

- Focused Expertise: Development teams can become experts in specific domains (e.g., authentication, inventory management, payment processing), leading to higher quality and more optimized solutions for their respective services.
- Optimized Technologies: Services can choose the best technology stack, database, and infrastructure for their specific function, rather than being constrained by the one-size-fits-all approach of a monolith. A search service might use Elasticsearch, while a user profile service might use a relational database, and an AI inference service might leverage specialized GPU clusters.
- Simpler Codebases: Smaller, specialized services have smaller codebases, which are easier to understand, maintain, and debug compared to a monolithic application.
Scalability
The independent nature of services in an API waterfall contributes significantly to scalability.

- Independent Scaling: If one part of the application experiences higher load (e.g., the Product Catalog service during a flash sale), only that specific service needs to be scaled up, rather than the entire application. This optimizes resource utilization.
- Elasticity: Services can be scaled up or down elastically based on demand, which is a hallmark of cloud-native architectures. This dynamic resource allocation can lead to cost savings and improved performance under varying loads.
- Fault Isolation: While a failure in one service can impact downstream services in a waterfall, the failure is often contained within that service and does not necessarily bring down the entire application. Proper error handling and resilience patterns (like circuit breakers) can further mitigate the impact.
Flexibility and Agility
API waterfalls, born from modular architectures, inherently foster greater flexibility and agility in development and deployment.

- Easier Updates and Replacements: Individual services can be updated, modified, or even entirely replaced without affecting other parts of the system, as long as their API contract remains consistent. This allows for continuous innovation and rapid iteration.
- Experimentation: New features or technologies can be experimented with in isolated services, minimizing risk to the broader application.
- Faster Time-to-Market: The ability to develop and deploy services independently means that new features can be rolled out much more quickly. If a new payment provider needs to be integrated, only the Payment Service might need modification, not the entire e-commerce platform.
Reusability
Each API in the waterfall chain represents a distinct business capability that can often be reused in multiple contexts.

- Shared Services: An Authentication Service API can be used by the e-commerce platform, a mobile app, a customer support portal, and even internal administrative tools.
- New Product Development: By composing existing APIs in new ways, developers can rapidly build new applications or features, significantly reducing development effort and accelerating innovation. The modular nature encourages the creation of robust, well-documented APIs that serve as building blocks.
In summary, while the cascaded nature of API calls presents operational challenges, the underlying architectural principles that lead to them are profoundly beneficial. They enable organizations to build highly modular, specialized, scalable, flexible, and reusable systems. The goal, therefore, is not to eliminate API waterfalls, but to design, implement, and manage them intelligently, leveraging their inherent strengths while mitigating their potential drawbacks. The next section will delve into these challenges in detail.
Navigating the Challenges of API Waterfalls
Despite the significant architectural benefits that give rise to API waterfalls, their sequential and interdependent nature introduces a unique set of challenges. These complexities can impact performance, reliability, security, and the overall developer experience. Effectively addressing these issues is paramount for building stable and efficient distributed systems.
Performance and Latency
Perhaps the most immediately apparent challenge in an API waterfall is cumulative performance degradation and increased latency. Each individual API call, regardless of how optimized it is, incurs overhead:

- Network Latency: Data must travel across a network, which involves serialization, deserialization, and physical transmission time. Even within a data center, this adds milliseconds. In distributed systems spanning geographical regions, this can become hundreds of milliseconds per call.
- Processing Time: Each service needs time to process the request, interact with its own database, execute business logic, and prepare a response.
- Concurrency Limits: Services might have internal queues or rate limits, introducing waiting times if too many requests arrive simultaneously.
- Cumulative Effect: The total execution time for an entire waterfall is, at best, the sum of the latencies of its sequential components. A waterfall with five sequential calls, each taking 100ms, will take at least 500ms, not including any inter-service processing time or further network hops. A single slow API call early in the chain can act as a bottleneck, dramatically delaying the entire user experience. This "tail latency" can be particularly frustrating for users and difficult to diagnose.
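The cumulative effect is simple arithmetic, which a tiny sketch makes concrete (the latency figures are illustrative):

```python
# Five sequential calls of 100ms each: latencies add up in a waterfall,
# whereas an ideal fully parallel execution is bounded by the slowest call.
call_latencies_ms = [100, 100, 100, 100, 100]

sequential_total = sum(call_latencies_ms)   # waterfall: sum of all calls
parallel_total = max(call_latencies_ms)     # ideal parallel: slowest call wins

print(sequential_total)  # 500
print(parallel_total)    # 100
```

This gap between the sum and the max is exactly what the parallelization strategies later in this article try to recover.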
Error Handling and Resilience
Managing errors in a single API call is relatively straightforward; managing them across a waterfall of interdependent calls is significantly more complex.

- Cascading Failures: A failure in an upstream API call can prevent all subsequent downstream calls from executing, leading to a cascading failure throughout the entire chain. If the authentication service fails, no product lookups or payments can occur.
- Partial Failures: What happens if a payment API call succeeds but the subsequent order confirmation API call fails? The system is in an inconsistent state (customer paid, but order not recorded). This requires sophisticated transactional consistency mechanisms, often involving two-phase commits, sagas, or idempotent operations, which add considerable complexity.
- Retry Mechanisms: Should a failed call be retried? How many times? With what delay? Simple retries can exacerbate the problem by overwhelming an already struggling service. Intelligent retry strategies (e.g., exponential backoff) are necessary.
- Circuit Breakers and Fallbacks: To prevent a failing service from dragging down healthy services, circuit breaker patterns are essential. If a service consistently fails, the circuit breaker "opens," preventing further calls to it for a period, potentially serving a fallback response or triggering an alternative workflow.
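Two of these resilience patterns, exponential backoff and a circuit breaker, can be sketched in a few lines of Python. This is a simplified illustration: the thresholds and delays are arbitrary, and production implementations typically add jitter and time-based recovery.

```python
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.01):
    """Retry `call`, doubling the delay after each failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 10ms, 20ms, 40ms, ...

class CircuitBreaker:
    """Opens after `threshold` consecutive failures, then fails fast."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            self.failures = 0   # a success resets the count
            return result
        except Exception:
            self.failures += 1
            raise

# Usage: a flaky stub that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky_service))  # ok
```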
Complexity and Observability
As the number of services and API calls in a waterfall grows, the system becomes exponentially more complex to understand, monitor, and debug.

- Distributed Tracing: Pinpointing the exact source of a performance issue or an error in a multi-service transaction is incredibly challenging. Traditional logging is insufficient. Distributed tracing tools are required to follow a single request as it propagates through all services in the waterfall, correlating logs and metrics across different systems.
- Monitoring Challenges: Monitoring individual service health is necessary, but it doesn't tell the whole story. What's needed is end-to-end monitoring of the entire transaction or waterfall to identify bottlenecks and failures across the chain.
- Debugging Nightmare: Reproducing and debugging issues that span multiple services, potentially across different teams and environments, can be a major time sink. The "blame game" between teams ("it's your API, not ours!") is a common byproduct of poor observability.
Data Consistency
Ensuring data consistency across multiple, sequentially executed API operations in a distributed system is one of the hardest problems in software architecture.

- Eventual Consistency: While many distributed systems lean towards eventual consistency (data will eventually become consistent, but might be inconsistent for a period), some operations in an API waterfall (like financial transactions) demand strong consistency.
- Rollbacks and Compensation: If a transaction involving multiple API calls fails midway, how are the completed steps undone or compensated for? This requires careful design of idempotent APIs and compensation logic to reverse previous actions. For example, if payment succeeds but order creation fails, the payment needs to be refunded.
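Compensation logic can be sketched as a stripped-down saga: if a later step fails, the already-completed steps are undone in reverse. The service functions below are hypothetical stubs, with a flag to force the failure case.

```python
refunds = []

def charge_payment(order):
    # Stub: a payment gateway charges the customer.
    return {"txn_id": "t-1", "amount": order["total"]}

def refund_payment(txn):
    # Stub: the compensating action for charge_payment.
    refunds.append(txn["txn_id"])

def create_order_record(order, txn, fail=False):
    if fail:
        raise RuntimeError("order service unavailable")
    return {"order_id": "o-1"}

def place_order_with_compensation(order, simulate_failure=False):
    txn = charge_payment(order)          # step 1 completes
    try:
        return create_order_record(order, txn, fail=simulate_failure)  # step 2
    except Exception:
        refund_payment(txn)              # compensate the completed step 1
        raise

try:
    place_order_with_compensation({"total": 50.0}, simulate_failure=True)
except RuntimeError:
    pass

print(refunds)  # ['t-1']
```

For this to be safe in practice, the compensating calls themselves must be idempotent, since a retry may replay them.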
Security
Securing an API waterfall involves securing each individual API call, which can significantly increase the surface area for potential vulnerabilities.

- Authentication and Authorization: How are user identity and authorization propagated securely across multiple service boundaries? Each service in the waterfall needs to verify that the incoming request is legitimate and authorized to perform the action. This often involves token propagation (e.g., JWTs) and granular access control.
- Data Encryption in Transit: Ensuring that data is encrypted as it moves between services (mTLS, HTTPS) is critical, especially when sensitive information is involved.
- API Gateway as a Choke Point: While a challenge in itself, an API gateway (which we'll discuss later) can centralize much of this security, but it also becomes a critical security choke point.
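Token propagation can be illustrated with a small sketch in which each downstream stub re-checks the forwarded Authorization header before acting. The header shape follows the standard Bearer convention; the services, paths, and token value are invented for the example.

```python
def downstream_call(path, headers):
    # Stub: each downstream service independently verifies the credential
    # instead of blindly trusting its caller.
    if not headers.get("Authorization", "").startswith("Bearer "):
        raise PermissionError("missing or malformed bearer token")
    return {"path": path, "authorized": True}

def handle_request(incoming_headers):
    # Forward the caller's original credential rather than minting a new one,
    # so authorization decisions stay tied to the end user's identity.
    forwarded = {"Authorization": incoming_headers["Authorization"]}
    inventory = downstream_call("/inventory", forwarded)
    payment = downstream_call("/payments", forwarded)
    return inventory["authorized"] and payment["authorized"]

print(handle_request({"Authorization": "Bearer example-token"}))  # True
```

Real deployments would validate the token's signature and expiry at each hop (or delegate that to a gateway or sidecar), which this sketch deliberately omits.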
Rate Limiting and Throttling
When one service makes multiple downstream calls, it's not just the originating service that needs to manage its outgoing rate; the downstream services also need to protect themselves from being overwhelmed.

- Downstream Service Protection: A single request to an upstream API could fan out into many requests to a downstream service in a waterfall. Without proper rate limiting at each stage, or at a central gateway, this can lead to denial-of-service for individual services.
- Fair Usage: Ensuring fair usage of shared resources across different consumers of the API waterfall, especially when third-party APIs are involved, is crucial.
The challenges of API waterfalls are multifaceted and require a holistic approach encompassing robust design principles, sophisticated tooling, and a strong operational mindset. Ignoring these challenges can lead to brittle systems, poor user experiences, and significant operational overhead. The next sections will explore strategies and solutions to effectively mitigate these complexities.
Strategies for Optimizing and Managing API Waterfalls
Effectively managing and optimizing API waterfalls is not about eliminating the sequential nature of interdependent API calls – as that is often a necessary outcome of business logic and modular architectures. Instead, it’s about strategically designing systems to mitigate the inherent challenges of latency, error propagation, and complexity. This requires a combination of architectural patterns, robust engineering practices, and intelligent tooling.
Asynchronous Processing
One of the most powerful strategies for breaking up tightly coupled API waterfalls, especially for operations that don't require an immediate response, is asynchronous processing.

- Decoupling with Message Queues: Instead of making a direct, blocking API call to a downstream service, an upstream service can publish a message to a message queue (e.g., Kafka, RabbitMQ, SQS). The downstream service then subscribes to this queue and processes the message independently, at its own pace.
- Reduced Latency for the User: The initial API call from the user can return quickly (e.g., "Order Received, processing in background"), improving the perceived performance, while the subsequent steps of the waterfall occur asynchronously.
- Increased Resilience: If a downstream service is temporarily unavailable, the message remains in the queue and can be processed once the service recovers, preventing cascading failures and reducing the need for complex retry logic in the upstream service.
- Examples: Sending notifications (email, SMS), updating analytical dashboards, performing long-running background computations (e.g., generating reports, processing large files).
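A minimal sketch of this decoupling, using Python's standard-library queue and a worker thread as a stand-in for a real message broker; the order flow and message shape are invented for the example.

```python
import queue
import threading

notifications = queue.Queue()
sent = []

def notification_worker():
    # The downstream consumer processes messages at its own pace.
    while True:
        msg = notifications.get()
        if msg is None:          # sentinel value used to stop the worker
            break
        sent.append(f"email to {msg['email']}")

def place_order(email):
    # ... synchronous steps (payment, order record) would happen here ...
    notifications.put({"email": email})   # enqueue instead of a blocking call
    return "Order received, confirmation will be emailed."

worker = threading.Thread(target=notification_worker)
worker.start()

print(place_order("alice@example.com"))   # returns immediately
notifications.put(None)                   # shut the worker down
worker.join()
print(sent)  # ['email to alice@example.com']
```

The caller gets its response before the email is sent; if the worker were down, the message would simply wait in the queue until it came back.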
Batching and Aggregation
For scenarios where multiple independent API calls are made to the same service or related services within a short timeframe, batching and aggregation can significantly reduce network overhead and improve efficiency.

- Single Request, Multiple Operations: Instead of making separate API calls for GET /products/1, GET /products/2, GET /products/3, a single API call like GET /products?ids=1,2,3 or a POST /batch_products_details can fetch all necessary data in one go.
- Reduced Network Trips: This minimizes the number of round trips, reducing cumulative network latency and the load on the network infrastructure.
- Optimized Backend Processing: Backend services can often process batched requests more efficiently (e.g., a single database query for multiple IDs rather than separate queries).
- Considerations: Batching works best for operations that can be processed independently or where the order of operations within the batch does not matter. It also requires the downstream services to support batch endpoints.
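The difference batching makes in round trips can be sketched with stub endpoints; the endpoint shapes and catalog data are invented for the example.

```python
CATALOG = {1: "Widget", 2: "Gadget", 3: "Doohickey"}
network_trips = {"count": 0}

def get_product(product_id):
    network_trips["count"] += 1          # one round trip per call
    return CATALOG[product_id]

def get_products_batch(product_ids):
    network_trips["count"] += 1          # one round trip for the whole batch
    return {pid: CATALOG[pid] for pid in product_ids}

# Unbatched: three round trips for three products.
for pid in (1, 2, 3):
    get_product(pid)
print(network_trips["count"])  # 3

# Batched: the same data in a single round trip.
network_trips["count"] = 0
get_products_batch([1, 2, 3])
print(network_trips["count"])  # 1
```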
Caching
Caching is a fundamental optimization technique that stores the results of expensive or frequently accessed API calls, allowing subsequent requests for the same data to be served much faster from the cache rather than hitting the backend service.
- Reduced Load: Caching significantly reduces the load on backend services and databases, as they are not repeatedly processing the same requests.
- Improved Latency: Retrieving data from a local cache (e.g., Redis, Memcached) is typically orders of magnitude faster than making a network call to a backend service.
- Strategic Placement: Caches can be implemented at various layers: client-side (browser cache), API gateway (edge cache), service-level (application cache), or distributed cache systems.
- Cache Invalidation: The primary challenge with caching is ensuring data freshness. Implementing effective cache invalidation strategies (e.g., time-to-live, event-driven invalidation) is crucial to prevent serving stale data.
Caching is particularly effective for static or slowly changing data (e.g., product details, user profiles).
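A time-to-live cache can be sketched in a few lines. Real deployments would typically use Redis or Memcached; here a small in-memory class illustrates the mechanism, with `fetch_user_profile` as a hypothetical backend call counted to show hits avoiding backend work.

```python
import time

backend_calls = 0

def fetch_user_profile(user_id: int) -> dict:
    """Stand-in for an expensive backend API call."""
    global backend_calls
    backend_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

class TTLCache:
    """Minimal in-memory cache with per-entry time-to-live expiry."""
    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]  # cache hit: no backend call
        value = fetch(key)   # miss or stale: refetch and store
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=60.0)
first = cache.get_or_fetch(7, fetch_user_profile)   # miss -> hits backend
second = cache.get_or_fetch(7, fetch_user_profile)  # hit -> served from cache
```

Only the first lookup reaches the backend; the TTL bounds how stale a cached entry can become, which is the simplest invalidation strategy mentioned above.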
Parallelization
Not all steps in an API waterfall are strictly sequential. Identifying and executing independent API calls concurrently — known as parallelization — can dramatically reduce overall latency.
- Identify Independent Paths: Analyze the dependency graph of an API waterfall. If Call B and Call C both depend on Call A, but are independent of each other, they can be initiated in parallel once Call A completes.
- Concurrency Constructs: Use asynchronous programming patterns (e.g., async/await in JavaScript/Python, Goroutines in Go, Futures/CompletableFutures in Java) to manage concurrent API calls.
- Fan-out/Fan-in Patterns: An initial API call might "fan out" to multiple independent services in parallel, and then "fan in" to aggregate their results before proceeding.
- Examples: Fetching product reviews, related items, and user recommendations for a product page can often happen in parallel.
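The product-page example can be sketched with `asyncio`. The three fetchers below are hypothetical independent calls that share an input (the product ID) but not each other's outputs, so they can be started together and awaited as a group.

```python
import asyncio

async def fetch_reviews(product_id: int) -> list:
    await asyncio.sleep(0.01)  # stands in for network latency
    return [f"review for {product_id}"]

async def fetch_related(product_id: int) -> list:
    await asyncio.sleep(0.01)
    return [product_id + 1]

async def fetch_recommendations(product_id: int) -> list:
    await asyncio.sleep(0.01)
    return [product_id + 2]

async def product_page(product_id: int) -> dict:
    # Fan out: start all three calls at once; fan in: await them together.
    reviews, related, recs = await asyncio.gather(
        fetch_reviews(product_id),
        fetch_related(product_id),
        fetch_recommendations(product_id),
    )
    return {"reviews": reviews, "related": related, "recommendations": recs}

page = asyncio.run(product_page(10))
```

With three 10 ms calls, the concurrent version completes in roughly the time of the slowest call rather than the sum of all three.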
Robust Error Handling
To build resilient API waterfalls, comprehensive and intelligent error handling is indispensable.
- Retry Mechanisms with Exponential Backoff and Jitter: Instead of immediate retries that can overwhelm a struggling service, implement exponential backoff (increasing delay between retries) and jitter (randomized delay within a range) to smooth out load spikes.
- Circuit Breakers: Implement circuit breaker patterns (e.g., Hystrix, Resilience4j) that monitor the health of downstream services. If a service starts failing consistently, the circuit breaker trips, preventing further calls to that service for a set period and serving a fallback response or throwing an immediate error. This prevents a failing service from consuming resources and causing cascading failures.
- Bulkheads: Isolate resources (e.g., thread pools, connection pools) for different services to prevent a failure in one service from exhausting resources needed by others, much like watertight compartments in a ship.
- Idempotent Operations: Design APIs to be idempotent, meaning that making the same request multiple times has the same effect as making it once. This simplifies retry logic, as retrying a failed operation won't cause unintended side effects (e.g., charging a customer twice).
- Compensation Logic (Sagas): For complex distributed transactions, implement saga patterns where a series of local transactions are coordinated. If any local transaction fails, compensation transactions are executed to undo the effects of previous successful transactions, ensuring eventual consistency.
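Retry with exponential backoff and jitter is compact enough to sketch directly. `flaky_call` is a hypothetical downstream API that fails twice before succeeding; the delays are shortened to milliseconds for illustration, where a real client would use seconds.

```python
import random
import time

attempts = 0

def flaky_call() -> str:
    """Hypothetical downstream call that fails on its first two attempts."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionError("downstream unavailable")
    return "ok"

def call_with_retry(fn, max_retries: int = 5, base_delay: float = 0.01):
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # retries exhausted: surface the failure
            # Exponential backoff (base * 2^attempt) plus randomized jitter,
            # so many clients don't retry in lockstep and spike the load.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

result = call_with_retry(flaky_call)
```

Note that this only makes sense when the retried operation is idempotent, as the surrounding text explains; otherwise each retry risks a duplicated side effect.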
Performance Monitoring and Distributed Tracing
Effective management of API waterfalls relies heavily on visibility into the system's behavior.
- End-to-End Monitoring: Beyond monitoring individual service metrics, implement end-to-end transaction monitoring to track the journey of a request across all services in a waterfall.
- Distributed Tracing Tools: Utilize tools like Jaeger, Zipkin, or OpenTelemetry to capture and visualize traces of requests as they propagate through multiple services. This allows developers to see the exact path, latency, and errors for each hop in the waterfall, making root cause analysis significantly faster.
- Log Correlation: Ensure that logs from different services can be correlated using a common transaction ID, making it easier to follow a request's flow.
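Log correlation is easy to demonstrate in miniature. In this sketch the two "services" are just functions and the correlation ID is passed as an argument; in a real system it would typically travel in a request header (often something like `X-Correlation-ID`, though the header name is an assumption here, not a standard).

```python
import logging
import uuid
from io import StringIO

# Capture log output in memory so the correlation can be inspected.
log_stream = StringIO()
handler = logging.StreamHandler(log_stream)
handler.setFormatter(logging.Formatter("%(correlation_id)s %(message)s"))
logger = logging.getLogger("waterfall")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

def order_service(correlation_id: str) -> None:
    logger.info("order created", extra={"correlation_id": correlation_id})
    payment_service(correlation_id)  # the ID travels with the downstream call

def payment_service(correlation_id: str) -> None:
    logger.info("payment captured", extra={"correlation_id": correlation_id})

cid = uuid.uuid4().hex
order_service(cid)
log_lines = log_stream.getvalue().strip().splitlines()
```

Because every line for one request begins with the same ID, grepping for that ID reconstructs the request's path through the waterfall, even when the services write to separate log stores.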
Service Mesh
For highly complex microservices environments with numerous API waterfalls, a service mesh (e.g., Istio, Linkerd) can provide a powerful infrastructure layer for managing inter-service communication.
- Traffic Management: A service mesh provides sophisticated traffic control capabilities, including load balancing, dynamic routing, and fault injection, without requiring changes to application code.
- Resilience Patterns: It can automatically inject resilience patterns like circuit breaking, retries, and timeouts into inter-service calls.
- Observability: Service meshes offer out-of-the-box distributed tracing, metrics collection, and logging for all service-to-service communication, giving unparalleled visibility into API waterfalls.
- Security: They can enforce mTLS (mutual TLS) between services, ensuring secure, encrypted communication for every hop in the waterfall.
By combining these strategies, organizations can transform the potentially problematic API waterfall into a highly efficient, resilient, and observable set of interconnected services. While these solutions add their own layer of complexity, they are essential investments in building robust distributed systems.
The Indispensable Role of an API Gateway
In the complex landscape of API waterfalls, where requests fan out to multiple backend services, managing this distributed orchestration can become a formidable challenge. This is precisely where an API Gateway steps in as an indispensable component, acting as a single, intelligent entry point for all client requests. It effectively stands as the first line of defense and the primary orchestrator, abstracting the intricate backend architecture from external consumers and simplifying the management of the API waterfall.
What is an API Gateway?
At its most fundamental, an API Gateway is a server that acts as an API front-end, or a "single point of entry," taking all API requests, determining which services are needed, and routing them to the appropriate backend. It is essentially a proxy that sits between clients and a collection of backend services. Its core purpose is to simplify how clients interact with microservices, offering a unified, consistent, and secure API for public consumption, while handling the complexities of the internal service landscape.
How an API Gateway Addresses API Waterfall Challenges
An API Gateway is uniquely positioned to address many of the challenges inherent in API waterfalls, offering a centralized mechanism for applying policies and orchestrating requests.
- Request Routing and Orchestration:
- Unified Access: Instead of clients needing to know the individual URLs and locations of multiple backend services in a waterfall, they only interact with the gateway. The gateway then intelligently routes requests to the correct backend services based on predefined rules.
- API Composition/Orchestration: Advanced API gateways can actively manage and orchestrate the API waterfall internally. A single client request to the gateway can trigger multiple backend API calls (e.g., in parallel or sequentially), combine their responses, and return a single, aggregated response to the client. This effectively moves the "waterfall logic" from the client or an intermediary service into the gateway itself, simplifying client-side development and reducing network latency by minimizing round trips to the client.
- Authentication and Authorization:
- Centralized Security: Instead of each backend service in the waterfall needing to implement its own authentication and authorization logic, the API gateway can centralize this function. It can authenticate incoming requests, validate tokens (e.g., JWTs), and apply authorization policies before forwarding requests to backend services. This ensures consistent security, reduces boilerplate code in microservices, and simplifies security management across the entire API waterfall.
- Reduced Attack Surface: By acting as a single point of entry, the gateway presents a smaller, more controlled attack surface to the outside world, shielding internal services from direct exposure.
- Rate Limiting and Throttling:
- Protection for Backend Services: The API gateway can enforce rate limits and throttling policies on incoming requests. This prevents individual backend services from being overwhelmed by a flood of requests, especially in scenarios where a single client request triggers a complex API waterfall involving many downstream calls. The gateway acts as a traffic cop, protecting the entire system.
- Caching:
- Edge Caching: An API gateway is an ideal place to implement caching for frequently accessed or computationally expensive API responses. By serving cached data directly from the gateway, it reduces the load on backend services, improves response times, and mitigates the cumulative latency inherent in API waterfalls.
- Request/Response Transformation:
- API Versioning and Evolution: The gateway can transform request and response payloads, allowing backend services to evolve independently of the client API contract. For instance, it can translate between different API versions or adapt data formats to suit client needs without modifying backend services. This simplifies the management of breaking changes in a waterfall where multiple services might need to adapt.
- Traffic Management and Resilience:
- Load Balancing: The API gateway can distribute incoming requests across multiple instances of backend services, ensuring optimal resource utilization and preventing single points of failure.
- Circuit Breaking: Some advanced API gateways incorporate circuit breaker patterns, preventing calls to unhealthy backend services and rerouting traffic or returning fallback responses, thus enhancing the resilience of the API waterfall.
- A/B Testing and Canary Deployments: Gateways can intelligently route a percentage of traffic to new versions of services, facilitating safe rollouts and testing.
- Monitoring and Logging:
- Centralized Observability: The API gateway provides a centralized point for collecting logs, metrics, and trace data for all incoming API calls. This greatly simplifies monitoring, debugging, and performance analysis across the entire API waterfall. It acts as a single pane of glass for understanding how external requests are handled internally.
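The composition role described above can be sketched as a toy gateway handler: one client request fans out to two backend "services" and returns a single aggregated response. Real gateways express this through routing and composition configuration rather than application code; the service functions and route path here are hypothetical.

```python
# Hypothetical backend services behind the gateway.
def user_service(user_id: int) -> dict:
    return {"id": user_id, "name": f"user-{user_id}"}

def orders_service(user_id: int) -> list:
    return [{"order_id": 1, "user_id": user_id}]

def gateway_handle(path: str, user_id: int) -> dict:
    """The gateway owns the waterfall logic: the client makes one call,
    and the gateway composes the backend responses into one payload."""
    if path == "/profile-with-orders":
        profile = user_service(user_id)
        orders = orders_service(user_id)
        return {"profile": profile, "orders": orders}
    raise ValueError(f"no route for {path}")

response = gateway_handle("/profile-with-orders", 5)
```

The client never learns where the user and order services live, which is the abstraction benefit the section describes: the waterfall still exists, but it lives behind a single, managed entry point.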
Introducing APIPark
For organizations grappling with the complexities of managing numerous interconnected APIs, especially those involving AI models, platforms like APIPark offer a comprehensive solution. As an open-source AI gateway and API management platform, APIPark excels at streamlining API integration, ensuring unified invocation formats, and providing robust lifecycle management. Its ability to handle high TPS and offer detailed logging and data analysis makes it particularly valuable in scenarios where API waterfalls are prevalent and require meticulous oversight and optimization.
APIPark serves as an excellent example of a modern API gateway designed to tackle the intricacies of today's distributed applications. It helps centralize authentication, enforce access policies, and manage traffic flow, all critical functions when dealing with API waterfalls. Furthermore, its specialized capabilities for AI models, such as prompt encapsulation into REST API and quick integration of 100+ AI models with unified management, demonstrate how a sophisticated gateway can abstract complex underlying service interactions. By centralizing management and enhancing performance, APIPark helps to mitigate many of the challenges associated with sequential API dependencies, offering detailed API call logging and powerful data analysis that are indispensable for troubleshooting and optimizing the performance of complex API waterfalls.
In essence, an API gateway transforms a potentially chaotic and unmanageable API waterfall into a structured, secure, and performant flow. It acts as the intelligent conductor of the orchestra of microservices, ensuring that each instrument plays its part harmoniously and efficiently, ultimately delivering a seamless experience to the client.
Practical Examples and Use Cases
Understanding the theoretical aspects of an API waterfall is essential, but seeing it in action through practical examples solidifies the concept. API waterfalls are not abstract architectural anomalies; they are the backbone of many everyday digital experiences. Let's explore several common use cases that vividly demonstrate the sequential and interdependent nature of API calls.
E-commerce Checkout Flow (Expanded Example)
We touched on this earlier, but let's elaborate on the intricate details that make it a quintessential API waterfall. When a user clicks "Place Order" on an e-commerce website:
- Authentication & Authorization API: The very first step is often handled by an API gateway which verifies the user's session token or credentials. This API call confirms the user's identity and checks their permissions. Without a valid user, no further action can proceed.
- Shopping Cart Service API: The gateway then routes to the Shopping Cart Service, which exposes an API to retrieve the current contents of the user's cart, including product IDs and quantities. This data is critical for the next steps.
- Product Catalog Service API: For each product ID retrieved from the cart, an API call is made to the Product Catalog Service to fetch the latest product details – current price, description, images, and potentially any applicable discounts. This might be a batch API call for efficiency, but it's still dependent on the cart contents.
- Inventory Service API: With confirmed product IDs and quantities, an API call to the Inventory Service checks if all items are in stock and reserves them temporarily to prevent overselling. If any item is out of stock, this call fails, and the entire waterfall might stop, returning an error to the user.
- Pricing & Promotions Service API: Before payment, an API call to a dedicated Pricing & Promotions Service calculates the final total, applying any discount codes, loyalty points, or special offers based on the product list and user profile. This result is crucial for the payment step.
- Payment Processing Gateway API: With the final order total, an API call is made to an external Payment Gateway (e.g., Stripe, PayPal, or an internal payment service) to process the customer's credit card or chosen payment method. This is a critical point; success here means funds are captured.
- Order Fulfillment Service API: Upon successful payment, an internal API call is made to the Order Fulfillment Service to officially create the order in the system, assigning an order ID, associating products, user details, and payment information. This service often triggers further internal processes.
- Shipping Service API: The Order Fulfillment Service, upon successful order creation, then makes an API call to the Shipping Service to initiate the logistics, potentially generating a shipping label or scheduling a pickup. This requires the order details.
- Notification Service API: Finally, an API call is made to a Notification Service to send a confirmation email or SMS to the customer, using the order ID, product details, and shipping information generated throughout the waterfall.
This entire sequence is a tightly coupled API waterfall, where each step’s success and output are vital for the next. Failure at any point requires careful rollback or compensation to maintain data integrity.
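The chain above can be compressed into a sketch that makes the dependencies explicit: each step consumes the previous step's output, and an exception at any step halts the rest of the waterfall. All service functions are hypothetical stand-ins for real API calls.

```python
def authenticate(token: str) -> int:
    """Step 1: verify the session token; nothing proceeds without it."""
    if token != "valid-token":
        raise PermissionError("authentication failed")
    return 101  # hypothetical user_id

def get_cart(user_id: int) -> list:
    """Step 2: depends on the authenticated user_id."""
    return [{"product_id": 1, "qty": 2}]

def check_inventory(cart: list) -> list:
    """Step 3: depends on the cart contents; fails if anything is out of stock."""
    for item in cart:
        if item["qty"] > 5:  # pretend only 5 units are in stock
            raise RuntimeError(f"out of stock: {item['product_id']}")
    return cart

def charge_payment(cart: list) -> str:
    """Step 4: depends on the reserved items."""
    return "payment-ref-123"

def place_order(token: str) -> dict:
    user_id = authenticate(token)           # failure here stops everything
    cart = get_cart(user_id)
    reserved = check_inventory(cart)
    payment_ref = charge_payment(reserved)
    return {"user_id": user_id, "payment_ref": payment_ref}

order = place_order("valid-token")
```

Reading `place_order` top to bottom is reading the waterfall itself: the return value of each call is the input that enables the next one.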
Dashboard Aggregation
Many enterprise dashboards or single-pane-of-glass applications need to display aggregated information from various internal and external systems. This often forms an API waterfall:
- User Dashboard Request: A user logs in and requests to see their personalized dashboard.
- CRM Service API: An API call retrieves customer relationship management data (e.g., recent interactions, open tickets).
- ERP Service API: Another API call fetches enterprise resource planning data (e.g., outstanding invoices, order statuses).
- Marketing Analytics Service API: An external API call (e.g., to Google Analytics or an internal marketing platform) gets recent campaign performance.
- Internal Metrics Service API: An API call retrieves internal application performance metrics relevant to the user's role.
- Data Aggregation Service API: Once all these individual data sets are retrieved, an internal API call to a dedicated aggregation service combines, transforms, and potentially enriches the data before presenting it to the UI. This service relies on the output of all preceding calls.
While some of these calls can be parallelized (CRM, ERP, Marketing, Metrics might be independent), the final aggregation step is inherently dependent on all of them, making it a form of API waterfall.
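The mix of parallel and sequential steps can be sketched directly: the four source fetches run concurrently, while the aggregation step depends on all of them, which is the fan-in that makes this a waterfall. The service names and payloads are hypothetical.

```python
import asyncio

async def fetch_crm() -> dict:
    await asyncio.sleep(0.01)  # stands in for a network call
    return {"open_tickets": 3}

async def fetch_erp() -> dict:
    await asyncio.sleep(0.01)
    return {"outstanding_invoices": 2}

async def fetch_marketing() -> dict:
    await asyncio.sleep(0.01)
    return {"campaign_clicks": 120}

async def fetch_metrics() -> dict:
    await asyncio.sleep(0.01)
    return {"p99_latency_ms": 250}

def aggregate(*sources: dict) -> dict:
    """The dependent final step: cannot run until every source has returned."""
    merged = {}
    for source in sources:
        merged.update(source)
    return merged

async def build_dashboard() -> dict:
    results = await asyncio.gather(
        fetch_crm(), fetch_erp(), fetch_marketing(), fetch_metrics()
    )
    return aggregate(*results)

dashboard = asyncio.run(build_dashboard())
```

The independent fetches cost roughly one network round trip in wall-clock time, but the aggregation still gates the response, so the overall flow remains sequential at its final step.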
IoT Data Processing Pipeline
Internet of Things (IoT) devices generate vast amounts of data, which often flows through a sophisticated API waterfall for processing and storage:
- Device Telemetry API: IoT devices make API calls to an Ingestion Service, sending sensor data (temperature, location, status).
- Data Validation & Transformation API: The Ingestion Service forwards the raw data via an API call to a Validation and Transformation Service. This service checks data integrity, cleanses it, and converts it into a standardized format.
- Rule Engine Service API: The transformed data is then sent via an API call to a Rule Engine Service, which applies business logic (e.g., "if temperature > 50°C, trigger alert"). This service's output might be an alert or a routing decision.
- Storage Service API: The processed and validated data is then sent via an API call to a Storage Service (e.g., a time-series database or data lake) for long-term retention.
- Analytics Service API: Concurrently or subsequently, an API call might be made to an Analytics Service to perform real-time analysis or feed into machine learning models.
- Notification Service API: If the Rule Engine triggered an alert, an API call to a Notification Service sends an alert to operators.
This complex pipeline, though potentially asynchronous in parts, starts with the device API and flows through a series of dependent API calls to achieve its goals, forming an IoT data processing waterfall.
AI Model Inference Chains
With the increasing adoption of Artificial Intelligence, especially in microservices environments, API waterfalls are common for orchestrating complex AI tasks. This is where products like APIPark shine with their AI gateway capabilities.
- Client Request to AI Service API: A client sends a request for an AI task, e.g., "analyze sentiment of text."
- Pre-processing Service API: An API call is made to a Pre-processing Service to clean, tokenize, and format the input text to be suitable for the AI model.
- Core AI Model Inference API: The pre-processed input is then sent via an API call to the actual AI model inference service (e.g., a large language model, a sentiment analysis model). This is often the most computationally intensive step.
- Post-processing Service API: The raw output from the AI model (e.g., a numerical sentiment score) is then sent via an API call to a Post-processing Service to convert it into a human-readable format or integrate it with other data.
- Data Storage/Logging API: Finally, the result and metadata might be logged or stored via an API call for auditing and future analysis.
Here, each step relies on the output of the previous one, forming an AI inference waterfall. An AI gateway like APIPark is critical here, as it can unify the invocation format for various AI models, manage prompt encapsulation, and handle the entire lifecycle of these AI-driven APIs, simplifying their integration and reducing the complexity of the waterfall for consumers.
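The pre-process → infer → post-process chain can be sketched as plain function composition. The "model" here is a trivial word-list scorer standing in for a real inference API; every name in this sketch is hypothetical.

```python
def preprocess(text: str) -> list:
    """Step 1: clean and tokenize the raw input for the model."""
    return text.lower().strip().split()

def sentiment_model(tokens: list) -> float:
    """Step 2: stand-in for the inference API; returns a score in [-1, 1]."""
    positive = {"great", "love", "excellent"}
    negative = {"bad", "awful", "hate"}
    score = sum((t in positive) - (t in negative) for t in tokens)
    return max(-1.0, min(1.0, score / max(len(tokens), 1)))

def postprocess(score: float) -> dict:
    """Step 3: convert the raw numeric output into a human-readable result."""
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

def analyze_sentiment(text: str) -> dict:
    tokens = preprocess(text)         # each step consumes the previous output
    score = sentiment_model(tokens)
    return postprocess(score)

result = analyze_sentiment("I love this great product")
```

An AI gateway's value is that consumers only see the `analyze_sentiment` boundary; the internal chain, and any model it delegates to, can change without breaking callers.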
These examples underscore that API waterfalls are a fundamental pattern in modern, distributed software. While they present challenges, they also enable modularity and powerful functionality that would be impossible with simpler architectures. The key is to acknowledge their existence and proactively implement strategies and tools for their effective management.
The Future Landscape of API Waterfalls
The API waterfall, as a consequence of modular and distributed architectures, is not a fleeting trend but an enduring pattern in software development. As technology continues to evolve, the ways we construct and interact with these cascading API calls will also transform, driven by emerging paradigms and an increasing demand for resilience, performance, and simplicity.
Emergence of Serverless Functions and Event-Driven Architectures
Serverless computing, exemplified by functions-as-a-service (FaaS) platforms like AWS Lambda, Azure Functions, and Google Cloud Functions, has already begun to reshape how developers think about and implement API waterfalls.
- Granular Services: Serverless functions naturally encourage extreme granularity, often mapping to a single operation. This can lead to an increase in the number of individual API calls required for a complete business transaction.
- Event-Driven Triggering: Serverless functions are often triggered by events (e.g., a new item in a queue, a file upload, an HTTP request). This event-driven nature can inherently abstract away some direct sequential API calls. Instead of Function A directly calling Function B, Function A might publish an event, and Function B reacts to that event. This promotes asynchronous processing as a default, effectively breaking strong, synchronous API waterfall dependencies into more resilient, eventually consistent flows.
- Orchestration Tools: Even within serverless environments, complex workflows still require orchestration. Tools like AWS Step Functions or Azure Durable Functions provide explicit ways to define and manage sequential, parallel, and conditional steps between serverless functions — essentially serverless API waterfalls managed by a state machine. These tools offer built-in error handling and state management, simplifying the challenges of complex waterfalls.
While serverless might introduce more "APIs" (each function having its invocation endpoint), it simultaneously provides patterns and tools to manage the interdependencies more robustly, often shifting from direct synchronous calls to message-passing.
Continued Reliance on APIs
Despite new paradigms, the fundamental reliance on APIs for communication between disparate systems will only intensify. As applications become more interconnected, integrating with a broader ecosystem of third-party services, data providers, and AI models, the role of APIs as the universal language of digital interaction will remain paramount. The sheer volume and variety of API integrations will ensure that API waterfalls, in one form or another, continue to be a central architectural concern.
- Generative AI Integration: The proliferation of generative AI models will drive new types of API waterfalls. A user prompt to an application might trigger an API call to an orchestrator, which then makes calls to multiple specialized AI models (e.g., one for text generation, another for image generation, a third for sentiment analysis), aggregates their results, and processes them before sending a final response. The complexity of these AI-driven cascades will be a new frontier for API waterfall management.
Advanced API Management Tools and Gateways
The sophistication of API management platforms and API gateways will continue to evolve to meet the demands of increasingly complex API waterfalls.
- AI-Powered Gateways: Future API gateways might incorporate AI to dynamically optimize routing, predict traffic patterns, proactively identify performance bottlenecks in waterfalls, and even suggest optimizations. This is already being seen in platforms like APIPark, which specifically focuses on AI gateway and API management.
- Built-in Orchestration: More API gateways will offer advanced capabilities for API composition and orchestration directly within the gateway itself, allowing developers to define complex waterfalls using visual tools or declarative configurations, significantly reducing development effort and improving performance by minimizing round trips.
- Enhanced Observability: Expect deeper integration with distributed tracing, enhanced analytics, and AI-driven anomaly detection within API gateways and management platforms, providing even greater visibility into the health and performance of API waterfalls.
- Hybrid and Multi-cloud Management: As organizations operate across hybrid and multi-cloud environments, future API gateways will offer seamless management and traffic routing across these diverse infrastructures, managing API waterfalls that span geographical and technological boundaries.
Focus on Resilience, Observability, and Performance
As applications grow in scale and criticality, the industry will place an even greater emphasis on the core challenges of API waterfalls:
- Hyper-Resilience: Beyond basic circuit breakers, future systems will employ advanced self-healing capabilities, chaos engineering practices, and sophisticated fallback mechanisms to ensure that API waterfalls can gracefully degrade or reroute even under extreme stress.
- Proactive Observability: The shift will be from reactive troubleshooting to proactive identification of potential issues within API waterfalls. Predictive analytics, AI-driven monitoring, and real-time anomaly detection will become standard.
- Micro-optimizations: As general performance gains from hardware become less dramatic, the focus will shift to micro-optimizations within API waterfalls, such as optimizing network protocols (e.g., HTTP/3), minimizing data serialization overhead, and extremely efficient resource allocation.
In conclusion, the API waterfall is an intrinsic part of modern distributed architectures. Its future will be shaped by continuous innovation in how we design, manage, and observe the complex interactions between services. While new technologies will offer more elegant solutions for orchestration and resilience, the fundamental challenge of coordinating interdependent API calls will remain. Mastering this challenge will continue to be a hallmark of robust and scalable software engineering.
Conclusion
The journey through the intricacies of the "API Waterfall" reveals a concept that, while not formally defined, is profoundly descriptive of how modern distributed systems function. At its heart, an API waterfall represents a sequence of interdependent API calls, where the success and output of one call directly pave the way for the next. This cascading interaction is not an anomaly but a natural and often necessary outcome of architectures designed for modularity, specialization, and scalability, such as microservices.
We have explored the two primary interpretations of this metaphor: the logical chain of sequential and dependent API calls driven by business logic and data flow, and the visual representation of network request timing in performance charts. Both interpretations highlight the critical dynamics at play when disparate services communicate to fulfill a single overarching task.
The architectural paradigms of microservices, distributed systems, data orchestration, and event-driven approaches are the genesis of these waterfalls, enabling unprecedented levels of flexibility and reusability. However, these benefits come hand-in-hand with significant challenges. Cumulative latency, complex error handling, the nightmare of debugging across multiple services, maintaining data consistency, and securing each hop in the waterfall are all formidable obstacles that require meticulous planning and robust solutions.
To navigate these challenges successfully, a multi-faceted approach is essential. Strategies such as asynchronous processing, intelligent batching, aggressive caching, parallelizing independent calls, and implementing robust error handling mechanisms like circuit breakers and retry policies are crucial for building resilient and performant systems. Moreover, comprehensive performance monitoring and distributed tracing tools are indispensable for gaining visibility into the hidden complexities of these cascading interactions.
Central to any effective strategy for managing API waterfalls is the API Gateway. Acting as the intelligent front door to a constellation of backend services, an API gateway centralizes crucial functionalities like request routing, authentication, rate limiting, caching, and even API composition. It abstracts away the internal complexity, simplifying client interactions and providing a critical control point for applying cross-cutting concerns. Platforms like APIPark exemplify how a sophisticated gateway can streamline the management of complex API integrations, especially those involving AI models, by providing unified invocation, robust lifecycle management, and invaluable observability features like detailed API call logging and powerful data analysis.
Looking ahead, the API waterfall will continue to be a defining characteristic of our interconnected digital world. While serverless functions and event-driven architectures offer new ways to decouple and manage dependencies, and advanced API management tools will further automate and simplify orchestration, the fundamental need to coordinate interdependent API calls will remain. The future demands an even greater focus on resilience, proactive observability, and performance optimization to ensure that these cascades of data and logic flow smoothly and efficiently, ultimately powering the next generation of robust and scalable applications.
Mastering the API waterfall is not about fighting against its existence, but about intelligently designing and tooling our systems to harness its power while mitigating its inherent complexities. It is a testament to the ever-evolving challenge and triumph of modern software engineering.
Frequently Asked Questions (FAQs)
Q1: What exactly is an API Waterfall, and how does it differ from a simple API call?
A1: An API Waterfall is a descriptive term for a sequence of interdependent API calls, where the output or successful completion of one API call is required before the next API call can begin. It's not a single API call, but rather a chain reaction of multiple calls working together to fulfill a larger operation. For example, in an e-commerce checkout, an API call for "authenticate user" must succeed before an API call for "check product inventory" can proceed, which then must succeed before an "process payment" API call. A simple API call, in contrast, is an isolated request to a single API endpoint that doesn't necessarily depend on a prior specific API response for its initiation.
Q2: Why are API Waterfalls common in modern software architectures?
A2: API Waterfalls are prevalent due to the widespread adoption of architectural patterns like microservices and distributed systems. In microservices, a large application is broken into many small, specialized services, each communicating via APIs. A single user request often needs to orchestrate multiple of these services in sequence to complete its task. Distributed systems, with components spread across networks, also naturally involve multiple network hops and API interactions. These architectures promote modularity and scalability but inherently lead to complex, cascading API dependencies.
Q3: What are the main challenges associated with managing API Waterfalls?
A3: The primary challenges include:
1. Performance and Latency: The cumulative time of sequential API calls, plus network overhead, can lead to significant delays.
2. Error Handling: A failure early in the waterfall can cause a cascading failure across the entire system, requiring complex retry and rollback mechanisms.
3. Complexity and Observability: Tracing a single request through many services to debug issues or monitor performance becomes very difficult without specialized tools like distributed tracing.
4. Data Consistency: Ensuring data integrity across multiple operations in a distributed, sequential flow is complex.
5. Security: Managing authentication and authorization across every hop in the waterfall increases the security surface area.
Q4: How does an API Gateway help in managing API Waterfalls?
A4: An API Gateway is a crucial component that acts as a single entry point for all API calls. It significantly helps by:
* Centralized Routing & Orchestration: Directing client requests to the correct backend services and, in some cases, even orchestrating multiple backend API calls internally before returning a single, aggregated response to the client.
* Centralized Security: Handling authentication, authorization, and rate limiting at the edge, protecting backend services.
* Caching: Reducing load on backend services and improving latency by serving cached responses.
* Traffic Management: Providing load balancing, circuit breaking, and traffic shaping for the entire waterfall.
* Unified Monitoring: Offering a central point for collecting logs and metrics for all API calls.
For instance, platforms like APIPark specialize in offering these gateway capabilities, particularly for complex AI API integrations, streamlining management and enhancing performance.
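Gateway-side aggregation, mentioned under routing and orchestration, can be sketched as a single handler that fans out to two backend calls and returns one combined response. The backend functions below are hypothetical stubs for real service calls:

```python
# A minimal gateway-aggregation sketch: the client makes one call, and the
# gateway orchestrates two hypothetical backend services behind the scenes.

def fetch_user_profile(user_id: str) -> dict:
    return {"id": user_id, "name": "Alice"}      # stub for a profile service

def fetch_order_history(user_id: str) -> list:
    return [{"order": "A-1"}, {"order": "A-2"}]  # stub for an orders service

def gateway_dashboard_endpoint(user_id: str) -> dict:
    # The gateway calls both backends and aggregates their responses,
    # so the client pays for one round trip instead of two.
    return {
        "profile": fetch_user_profile(user_id),
        "orders": fetch_order_history(user_id),
    }

response = gateway_dashboard_endpoint("u-123")
```

Moving this fan-out from the client to the gateway shortens the client-visible waterfall and keeps backend topology hidden behind a single endpoint.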
Q5: What strategies can be employed to optimize the performance and resilience of API Waterfalls?
A5: Key optimization strategies include:
* Asynchronous Processing: Using message queues to decouple services for non-critical tasks, improving responsiveness and resilience.
* Batching and Aggregation: Combining multiple independent API calls into a single request to reduce network overhead.
* Caching: Storing frequently accessed API responses to avoid repeated calls to backend services.
* Parallelization: Executing independent API calls concurrently within the waterfall to reduce overall latency.
* Robust Error Handling: Implementing retry mechanisms with exponential backoff, circuit breakers, and compensation logic to handle failures gracefully.
* Distributed Tracing and Monitoring: Using tools to visualize and analyze the entire flow of a request across all services for effective debugging and performance tuning.
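As a concrete illustration of the parallelization strategy, two independent calls that would take roughly 0.2 s back to back can complete in about the time of the slower one when run concurrently. The sketch below uses Python's `concurrent.futures`; the `fetch_*` functions are hypothetical stubs simulating network latency:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent API calls, each simulated with a 0.1 s network delay.
def fetch_shipping_options(sku: str) -> dict:
    time.sleep(0.1)
    return {"sku": sku, "options": ["standard", "express"]}

def fetch_product_reviews(sku: str) -> dict:
    time.sleep(0.1)
    return {"sku": sku, "review_count": 12}

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    # Neither call depends on the other's output, so both can run concurrently.
    shipping_future = pool.submit(fetch_shipping_options, "SKU-42")
    reviews_future = pool.submit(fetch_product_reviews, "SKU-42")
    shipping = shipping_future.result()
    reviews = reviews_future.result()
elapsed = time.perf_counter() - start
# elapsed is close to 0.1 s, not the ~0.2 s a strictly sequential waterfall would need
```

The key design step is identifying which calls in a waterfall are truly independent; only those can be safely lifted out of the sequential chain.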
🚀 You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is built with Golang, offering strong performance and low development and maintenance costs. You can deploy APIPark with a single command line:
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, the successful-deployment screen appears within 5 to 10 minutes. You can then log in to APIPark using your account.

Step 2: Call the OpenAI API.

