Control Log Verbosity with Tracing Subscriber Dynamic Level
APIPark is a high-performance AI gateway that allows you to securely access the most comprehensive LLM APIs globally on the APIPark platform, including OpenAI, Anthropic, Mistral, Llama2, Google Gemini, and more. Try APIPark now!
Navigating the Labyrinth of Logs: Achieving Surgical Precision with Tracing Subscriber Dynamic Level
In the sprawling landscapes of modern software, where distributed systems, microservices, and complex API architectures reign supreme, the sheer volume of operational data generated can be overwhelming. Among the most fundamental forms of this data are logs: the unseen narratives of a system's inner workings. Yet, traditional logging approaches often present a dichotomy: either you drown in a flood of verbose output, obscuring critical insights, or you operate with too little detail, leaving you blind when an elusive issue strikes. This challenge is particularly acute in high-performance environments such as API gateways and Open Platforms, where millions of requests per second demand both efficiency and deep, on-demand observability.
The Rust ecosystem offers a sophisticated answer to this dilemma with its tracing crate and the powerful tracing-subscriber. This combination provides a structured, contextual approach to instrumentation, moving beyond simple log lines to capture rich, hierarchical data about system execution. At the heart of tracing-subscriber's capabilities lies the often-underestimated power of dynamic log level control. This article delves deep into how dynamically adjusting log verbosity can transform your debugging and monitoring strategies, providing surgical precision in diagnosing issues without the performance overhead or operational friction traditionally associated with verbose logging. We will explore the "why" and "how" of this critical technique, illustrating its profound impact on the resilience and efficiency of complex software systems, particularly those that serve as the backbone for intricate API interactions and public-facing Open Platforms.
The Unseen Language of Software: From Simple Logs to Structured Traces
At its core, logging is the practice of recording messages about the operation of a software system. For decades, developers have relied on println! statements or basic logging libraries to output textual messages to a console or file. While these methods serve adequately for simple applications, they quickly become insufficient as system complexity grows. Traditional log messages are often isolated, lacking the crucial context of the operations that generated them. When an error occurs in a multi-threaded application or a distributed system, a simple log line like "Error processing request" tells you very little about which request, where in the code path, or what other related events preceded it. This lack of context forces developers into a cumbersome, trial-and-error debugging process, often involving redeploying code with additional log statements.
The tracing crate in Rust represents a significant paradigm shift in observability, moving beyond "just logging" to a more holistic concept of "instrumentation." Instead of disparate log messages, tracing allows you to define spans and events. Spans represent a period of time during which an operation occurs, forming a hierarchical structure. For instance, a span might encompass the entire processing of an incoming API request within an API gateway, while nested spans could represent database queries, external service calls, or complex business logic steps. Events, on the other hand, are discrete points within a span, akin to traditional log messages but inherently enriched with the contextual data of the span they occurred within. This means an "Error processing request" event automatically carries information about the request ID, user ID, originating service, and any other fields defined for the encompassing span, providing an unparalleled level of detail and correlation.
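As a minimal sketch of the span/event distinction described above (the function names and span fields here are illustrative, not taken from any particular codebase):

```rust
use tracing::{info, info_span, instrument};

// A span carrying request context; events recorded inside it are
// associated with these fields by whatever subscriber is installed.
fn handle_request(request_id: u64, user_id: &str) {
    let span = info_span!("process_request", request_id, user_id);
    let _guard = span.enter();

    // No need to repeat request_id or user_id here; the enclosing
    // span already carries them as context.
    info!("validating credentials");

    query_database(request_id);
}

// #[instrument] wraps the function body in its own child span,
// capturing the arguments as fields automatically.
#[instrument]
fn query_database(request_id: u64) {
    info!(rows = 42, "query finished"); // an event with its own extra field
}
```

Because `query_database`'s span is nested inside `process_request`, the "query finished" event can be correlated with the request that triggered it without any manual bookkeeping.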
This structured, contextual approach transforms raw operational data into a navigable narrative. By default, tracing doesn't dictate where these traces go; it only defines what information is captured. This decoupling is achieved through the Subscriber trait. A tracing-subscriber is an implementation of this trait that decides how to process, filter, format, and export the spans and events. This architecture allows developers to swap out logging backends (e.g., console output, JSON files, distributed tracing systems like OpenTelemetry) without modifying the application's core instrumentation logic. It's this powerful abstraction that paves the way for advanced features like dynamic log level control, which becomes indispensable for applications that must operate robustly and transparently, such as an Open Platform providing various API services to a multitude of users and integrations.
The Tyranny of Verbosity: Why Static Log Levels Fall Short
The concept of log levels (trace, debug, info, warn, and error) is a familiar tool for managing log volume. The idea is simple: less critical messages (like trace and debug) are typically enabled during development, while only more severe messages (info, warn, error) are active in production. This static approach, however, rapidly encounters limitations in the face of modern software's dynamic nature and scale.
Consider the classic dilemma of development versus production environments. During development, developers often need trace-level verbosity to step through intricate logic, inspect variable states, and understand the flow of execution. This is crucial for rapid iteration and debugging. However, deploying an application with trace logging enabled in production is almost universally a catastrophic mistake. The sheer volume of data generated can quickly overwhelm I/O subsystems, consume vast amounts of CPU for formatting and writing, saturate network links if logs are shipped to a central system, and incur exorbitant storage costs. Furthermore, the signal-to-noise ratio plummets, making it nearly impossible to spot critical warnings or errors amidst a deluge of granular details. This performance overhead can degrade the user experience, increase operational expenses, and even destabilize the application itself, especially for high-throughput services like an API gateway that processes millions of requests.
Beyond performance, security implications are a major concern. trace or debug level logs often contain sensitive information that should never be exposed in a production environment, such as user credentials, PII (Personally Identifiable Information), or internal system states. A static log configuration that inadvertently enables these verbose levels in production creates a significant vulnerability, risking data breaches and compliance failures. The difficulty lies in balancing the need for deep introspection with the imperative for security and performance.
Perhaps the most frustrating aspect of static log levels is the "redeploy to debug" dilemma. Imagine a critical bug surfaces in production, manifesting only under specific, rare conditions. Your default production log level (e.g., info) provides insufficient detail to diagnose the problem. The traditional solution involves modifying the log configuration to a more verbose level (e.g., debug for a specific module), rebuilding the application, and redeploying it. This process is not only time-consuming and disruptive, potentially causing further downtime, but it also carries the risk of introducing new issues during the redeployment. In an API gateway or an Open Platform that must maintain high availability and process continuous traffic, such a disruptive debugging cycle is simply unacceptable. It increases the Mean Time To Resolution (MTTR) dramatically, leading to frustrated users and significant financial impact. The inability to dynamically adjust logging detail creates a costly blind spot, turning minor glitches into major operational crises.
Unleashing Control: Dynamic Log Level Management with tracing-subscriber
The solution to the tyranny of verbosity and the "redeploy to debug" nightmare lies in the ability to dynamically control log levels at runtime, without requiring an application restart. This capability allows operators to surgically increase the verbosity of logging for specific components or modules when a problem is identified, gather the necessary diagnostic information, and then revert to a less verbose state, all while the application continues to serve requests. tracing-subscriber provides elegant mechanisms to achieve this, making it an indispensable tool for robust, observable systems.
At the heart of tracing-subscriber's dynamic filtering capabilities is the reload layer (tracing_subscriber::reload::Layer). Before diving into it, it's worth briefly mentioning EnvFilter. EnvFilter is a powerful and flexible filter that parses directives from environment variables (e.g., RUST_LOG="info,my_module=debug") to control log levels for different targets. While EnvFilter is highly configurable, its settings are typically read once at application startup. To change the filter, you would still need to restart the application with new environment variables. The reload layer transcends this limitation.
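For reference, EnvFilter directives compose per-target levels; a few illustrative invocations (the module and crate names here are hypothetical):

```shell
# Everything at info, but one module at debug:
RUST_LOG="info,my_module=debug" ./my-app

# Warnings only, except full tracing for one routing submodule,
# and silence a chatty dependency entirely:
RUST_LOG="warn,my_app::gateway::routing=trace,hyper=off" ./my-app
```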
The reload layer works by wrapping an underlying filter (like EnvFilter). It exposes a reload handle (reload::Handle) that can be used to atomically swap in a new inner filter at any point during the application's runtime. Internally, this relies on concurrency primitives like Arc (Atomically Reference Counted) and RwLock (Reader-Writer Lock): filtering decisions take a cheap read lock on the hot path, while the rare filter update briefly takes a write lock, preserving thread safety with minimal performance impact.
Here's a conceptual breakdown of how to implement dynamic log level control using ReloadFilter in Rust with tracing-subscriber:
- Initial Filter Setup: You start by creating an initial filter, typically an EnvFilter, with your desired default production logging levels (e.g., info for most modules).

```rust
use tracing_subscriber::{filter::LevelFilter, fmt, reload::Layer as ReloadLayer, EnvFilter};

// ... inside your main function or setup
let initial_filter = EnvFilter::builder()
    .with_default_directive(LevelFilter::INFO.into())
    .from_env_lossy(); // uses RUST_LOG if set, otherwise falls back to INFO
```

- Wrap in a ReloadLayer: This initial filter is then wrapped in a ReloadLayer (which acts as a filter layer for a subscriber).

```rust
let (filter_layer, reload_handle) = ReloadLayer::new(initial_filter);
```

The ReloadLayer::new function returns two things: the layer itself, which you'll add to your tracing-subscriber setup, and a reload handle. This reload_handle is the key to dynamic control.

- Construct the Subscriber: You then build your complete tracing-subscriber, incorporating the filter_layer along with other layers for formatting, output, etc.

```rust
use tracing_subscriber::prelude::*; // brings `.with()` into scope

// Example: combining with a formatter for console output
let subscriber = tracing_subscriber::registry()
    .with(filter_layer)  // decides which spans and events survive
    .with(fmt::layer()); // formats whatever passes the filter

tracing::subscriber::set_global_default(subscriber)
    .expect("setting default subscriber failed");
```

Note that the registry collects everything; the filter_layer then discards spans and events below the active threshold before the formatting layer ever sees them.

- Expose the ReloadHandle: The reload_handle needs to be accessible from whatever mechanism you choose to trigger filter updates. This could be:
  - An HTTP API endpoint: For an API gateway or an Open Platform, this is a common and powerful approach. You might create a /admin/log_level endpoint that accepts a POST request with the new filter directives. The handler for this endpoint would then use the reload_handle.
  - A configuration file watcher: The application could monitor a specific configuration file for changes. When the file is updated, it parses the new filter directives and uses the reload_handle.
  - A message queue listener: For highly distributed systems, a central control plane might publish log level change commands to a message queue, which your application consumes.
- Triggering a Reload: When an update is triggered (e.g., via the admin API endpoint), you parse the new filter configuration (e.g., a string like "my_service=debug,other_module=trace") into a new EnvFilter and then call reload_handle.reload(new_filter).

```rust
use axum::{extract::Extension, http::StatusCode, response::IntoResponse, Json};
use tracing_subscriber::{reload, EnvFilter, Registry};

// Example: inside an HTTP handler. The handle's second type parameter
// is the subscriber the reload layer was created for.
async fn update_log_level(
    Extension(reload_handle): Extension<reload::Handle<EnvFilter, Registry>>,
    Json(new_filter_str): Json<String>,
) -> impl IntoResponse {
    match EnvFilter::try_new(&new_filter_str) {
        Ok(new_filter) => {
            reload_handle
                .reload(new_filter)
                .expect("failed to reload filter");
            (StatusCode::OK, format!("Log level reloaded to: {}", new_filter_str))
        }
        Err(e) => (StatusCode::BAD_REQUEST, format!("Invalid filter string: {}", e)),
    }
}
```
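Putting the steps above together, here is a condensed, self-contained sketch (assuming the tracing crate and tracing-subscriber with its env-filter feature; this demonstrates the reload mechanism without the HTTP plumbing):

```rust
use tracing::info;
use tracing_subscriber::{filter::LevelFilter, fmt, prelude::*, reload, EnvFilter};

fn main() {
    // Default to `info` unless RUST_LOG overrides it.
    let initial = EnvFilter::builder()
        .with_default_directive(LevelFilter::INFO.into())
        .from_env_lossy();

    // Wrap the filter so it can be swapped at runtime.
    let (filter, reload_handle) = reload::Layer::new(initial);

    tracing_subscriber::registry()
        .with(filter)       // dynamic filter, first in the stack
        .with(fmt::layer()) // console formatter
        .init();

    tracing::debug!("not emitted at the default info level");

    // Later, e.g. from an admin endpoint: raise verbosity on the fly.
    reload_handle
        .reload(EnvFilter::new("debug"))
        .expect("failed to swap filter");

    tracing::debug!("emitted now that debug is enabled");
    info!("info is visible in both configurations");
}
```

The same `reload_handle` can be cloned and handed to whichever trigger mechanism (HTTP endpoint, file watcher, message queue consumer) fits the deployment.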
The benefits of this dynamic control are immense, especially for systems like API gateways and Open Platforms:
- Surgical Debugging: When a specific customer reports an issue on an Open Platform, you can temporarily increase logging for their API calls or for the specific service module they interact with, without flooding logs for other users or services.
- Reduced MTTR: Faster problem diagnosis translates to quicker resolutions, improving system reliability and customer satisfaction.
- Optimized Performance: Maintain lean production logs by default, only incurring the overhead of verbose logging when actively debugging.
- Enhanced Security: Avoid the need to deploy highly verbose configurations that could accidentally expose sensitive data; verbosity is transient and controlled.
- A/B Testing Observability: When rolling out new features or API versions, dynamic logging allows you to temporarily increase verbosity for traffic routed to the new version, providing deep insights into its behavior without affecting the stability of the older version.
In such sophisticated environments, observability is paramount. Platforms like APIPark, an AI gateway and API management platform, inherently deal with vast amounts of diverse API traffic. APIPark offers detailed API call logging as a key feature, recording every detail for troubleshooting and system stability. Integrating dynamic log verbosity control would further enhance this capability, allowing administrators of an APIPark deployment to precisely adjust the level of detail for specific API calls, AI model integrations, or tenant-specific traffic patterns on the fly. This level of granularity is crucial for a high-performance, multi-tenant Open Platform that needs to provide robust service and rapid issue resolution.
Practical Implementations and Advanced Techniques
Implementing dynamic log level control is just the beginning; integrating it seamlessly into a production-grade system requires considering advanced techniques and practical best practices. The tracing ecosystem is designed for composability, allowing you to layer different functionalities together to build a robust observability pipeline.
Composing Layers for Sophisticated Observability: A typical tracing setup in a real-world application involves multiple layers, each serving a specific purpose. For instance:

- filter_layer (our ReloadLayer wrapping an EnvFilter) for dynamic log level control. This layer should generally be placed early in the subscriber stack, as it determines which spans and events are processed further, thus minimizing downstream overhead.
- fmt::Layer for formatting events and spans into human-readable text for console output, or a non-blocking rolling-file writer from the tracing-appender crate for rotating log files.
- An OpenTelemetry layer (e.g., tracing-opentelemetry's OpenTelemetryLayer) for exporting traces and spans to a distributed tracing system like Jaeger or Zipkin via the OpenTelemetry Protocol (OTLP). This is critical for understanding the flow of requests across multiple services in a microservice architecture.
- Custom layers for specific business logic, such as redacting sensitive data before it hits any logger, or emitting metrics based on tracing events.
The order of these layers matters. Filters typically come first to reduce the volume of data processed by subsequent, potentially more expensive layers like formatting or network exporting.
Integration with Web Frameworks: In web applications, particularly those built with frameworks like Axum, Actix, or Warp in Rust, the ReloadHandle needs to be accessible within HTTP handlers or middleware. This is typically achieved by injecting the handle into the application state or by using a dependency injection mechanism. For example, in Axum, you might store the ReloadHandle within the AppState and then use axum::extract::Extension to retrieve it in your routes:
```rust
use axum::{
    extract::Extension,
    http::StatusCode,
    response::IntoResponse,
    routing::post,
    Json, Router,
};
use tracing_subscriber::{reload, EnvFilter, Registry};

// The reload handle is generic over both the filter and the subscriber
// it was created for; a type alias keeps the signatures readable.
type LogReloadHandle = reload::Handle<EnvFilter, Registry>;

#[derive(Clone)]
struct AppState {
    reload_handle: LogReloadHandle,
    // ... other application state
}

#[tokio::main]
async fn main() {
    let (filter_layer, reload_handle) = reload::Layer::new(EnvFilter::new("info"));
    // ... set up the other tracing layers and install the subscriber

    // Option 1: store the handle in shared application state (handlers
    // would then retrieve it with axum's `State` extractor).
    let app_state = AppState { reload_handle: reload_handle.clone() /* , ... */ };
    let app = Router::new()
        .route("/admin/log_level", post(update_log_level_route))
        .with_state(app_state);

    // Option 2: pass the handle as an Extension instead of global state,
    // making it available to routes via the `Extension` extractor.
    let app = Router::new()
        .route("/admin/log_level", post(update_log_level_route))
        .layer(Extension(reload_handle));
    // ... bind a listener and serve `app`
}

async fn update_log_level_route(
    Extension(reload_handle): Extension<LogReloadHandle>,
    Json(new_filter_str): Json<String>,
) -> impl IntoResponse {
    // ... update the filter as shown previously
    StatusCode::OK
}
```
This pattern allows you to build an administrative API route that only authorized users or systems can call to change the logging behavior of your running application.
Middleware for Per-Request Dynamic Filtering: For very specific debugging scenarios, you might even consider a middleware that can temporarily elevate log levels for individual requests based on special headers (e.g., X-Debug-Trace: true) or query parameters. This temporary filter would only apply within the scope of that specific request's span, providing highly granular control without affecting the global log level. Implementing this would involve creating a custom tracing Layer that can inspect request details and apply a temporary, per-span filter, but this adds significant complexity.
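The custom per-request Layer itself is beyond this article's scope, but the decision at its core can be sketched as a pure function (the X-Debug-Trace header name comes from the text above; the trusted-caller guard is an assumed safeguard so external clients cannot elevate their own verbosity, not a library API):

```rust
/// Decide whether a request has opted into elevated tracing.
/// `headers` is a simplified (name, value) list; a real middleware
/// would read the framework's header map instead.
fn wants_debug_trace(headers: &[(String, String)], caller_is_trusted: bool) -> bool {
    caller_is_trusted
        && headers.iter().any(|(name, value)| {
            name.eq_ignore_ascii_case("x-debug-trace") && value == "true"
        })
}
```

A middleware could call this before creating the request's root span and, when it returns true, record extra fields or route the span through a more verbose per-layer filter.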
Performance Considerations: While dynamic filtering is powerful, it's not entirely without cost.

- Filter evaluation overhead: Each span and event must be evaluated against the active filter. EnvFilter is highly optimized, but complex filter directives (many targets, complex regular expressions) can introduce minor overhead. This overhead is generally negligible compared to the I/O operations of logging, but it's worth noting.
- Reload overhead: The RwLock operations for updating the filter are extremely fast, but they do have a small cost. Given that log level changes are infrequent, this is practically irrelevant.
- String parsing for EnvFilter: Creating a new EnvFilter from a string involves parsing. To minimize the cost of frequent reloads (though they should be rare), pre-compiling common filter configurations or having a simple enum to switch between predefined filters can be beneficial.
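As one way to realize the predefined-filter suggestion, a hypothetical preset table might map operator-friendly names to pre-written directive strings, so the admin API accepts a preset name rather than arbitrary filter syntax (all names and directives here are illustrative):

```rust
/// Named verbosity profiles an operator can select by name.
#[derive(Debug, Clone, Copy, PartialEq)]
enum LogPreset {
    Quiet,
    Normal,
    DebugGateway,
    TraceAll,
}

impl LogPreset {
    /// The directive string to hand to `EnvFilter::try_new` on reload.
    fn directives(self) -> &'static str {
        match self {
            LogPreset::Quiet => "warn",
            LogPreset::Normal => "info",
            LogPreset::DebugGateway => "info,gateway=debug",
            LogPreset::TraceAll => "trace",
        }
    }

    /// Parse a preset name received from an admin API request.
    fn from_name(name: &str) -> Option<Self> {
        match name {
            "quiet" => Some(LogPreset::Quiet),
            "normal" => Some(LogPreset::Normal),
            "debug-gateway" => Some(LogPreset::DebugGateway),
            "trace-all" => Some(LogPreset::TraceAll),
            _ => None,
        }
    }
}
```

Besides skipping parse errors, this also narrows what an operator can enable, which doubles as a small security win.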
Security Best Practices: The endpoint or mechanism used to change log levels dynamically becomes a critical control point.

- Authentication and Authorization: This endpoint must be secured. Only authorized administrators or automated systems should be able to invoke it. Use robust authentication (e.g., API keys, OAuth, mTLS) and authorization (role-based access control) to prevent unauthorized parties from increasing log verbosity, which could lead to sensitive data exposure or performance degradation through denial of service via excessive logging.
- Rate Limiting: Implement rate limiting on log level change requests to prevent abuse.
- Audit Trails: Log who changed the log level, when, and to what value. This is crucial for security audits and accountability.
- Data Masking/Redaction: Even with dynamic control, sensitive data should ideally be masked or redacted at the source (i.e., when creating the span/event) rather than relying solely on log levels to hide it. This provides an additional layer of defense against accidental exposure.
- Default to Least Privilege: The default production configuration should always be the least verbose necessary (info or warn), ensuring that any temporary elevation of verbosity is a conscious, controlled action.
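To make the authorization and audit-trail points concrete, here is a hypothetical, framework-agnostic sketch of the gate an admin endpoint might apply before actually reloading the filter (the role name, struct fields, and function are all illustrative, not from any library):

```rust
/// Audit record to persist whenever a log-level change is authorized.
#[derive(Debug)]
struct LevelChangeAudit {
    actor: String,
    old_filter: String,
    new_filter: String,
}

/// Allow the change only if the caller holds the admin role, returning
/// the audit record to store. The actual reload (via the reload handle)
/// would happen at the call site when this returns Ok.
fn authorize_level_change(
    actor: &str,
    roles: &[&str],
    old_filter: &str,
    new_filter: &str,
) -> Result<LevelChangeAudit, String> {
    if !roles.contains(&"log-admin") {
        return Err(format!("{} is not permitted to change log levels", actor));
    }
    Ok(LevelChangeAudit {
        actor: actor.to_string(),
        old_filter: old_filter.to_string(),
        new_filter: new_filter.to_string(),
    })
}
```

Recording the old filter alongside the new one makes it trivial to revert, and to answer "who turned on trace logging last Tuesday?" during an audit.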
By carefully considering these aspects, organizations can harness the full power of dynamic log level control provided by tracing-subscriber, transforming it from a mere debugging tool into a foundational element of a robust, secure, and highly observable system. This approach is particularly vital for organizations operating an Open Platform that expose numerous APIs, where both operational efficiency and stringent security are paramount.
API Gateways and Open Platforms: The Ultimate Beneficiaries
The architectural patterns of API gateways and Open Platforms inherently present challenges that dynamic log level control is perfectly poised to address. These systems sit at the critical juncture of client applications and backend services, processing a colossal volume of diverse API requests, often from myriad external consumers. Their robust operation is non-negotiable, and their observability must be exceptional.
An API gateway acts as a single entry point for a multitude of backend microservices. It handles routing, authentication, authorization, rate limiting, and often caching and transformation. Given its central role, any issue within the gateway or with a downstream service can have widespread impact. Imagine a scenario where a specific backend service is intermittently failing for a subset of API calls, or a new client integration is causing unexpected errors. With static logging, diagnosing such issues would typically require a full redeployment of the gateway with verbose logs, affecting all traffic and potentially exacerbating the problem. Dynamic log level control, however, allows for surgical intervention. An operator can, for example, increase the log level to debug or trace only for requests originating from the problematic client, or for requests targeting the specific failing backend service, or even for a particular API route that's exhibiting anomalies. This granular control means that the diagnostic data for the specific issue becomes visible without drowning the logs in irrelevant information from millions of healthy requests, maintaining the gateway's performance and stability for the vast majority of users.
For an Open Platform, the benefits are even more pronounced. An Open Platform exposes a rich set of APIs to third-party developers, partners, and internal teams, effectively turning your system into a public service. These platforms typically support multi-tenancy, where different organizations or users have independent access to services and data. When a partner reports an issue with an API call, the ability to temporarily enable debug logging for that specific partner's client_id or tenant_id without impacting any other tenant is invaluable. This allows the platform's support team to quickly gather the necessary diagnostic information, pinpoint the problem, and provide a resolution, significantly improving customer satisfaction and reducing MTTR. Furthermore, compliance and audit trails often require specific API interactions to be logged at a higher detail level for a period. Dynamic control facilitates this by enabling detailed logging only when required, and for the specific transactions that need it, avoiding continuous, excessive logging that would otherwise be costly and cumbersome.
Consider the complexity of rolling out new API versions on an Open Platform. When a new version is introduced, it's often rolled out gradually, with a small percentage of traffic directed to it. Dynamic logging allows operators to increase the verbosity for this new version's traffic specifically, enabling deep scrutiny of its behavior, performance, and error rates in a live environment. If any issues are detected, the log levels can be quickly reverted, or the traffic routing adjusted, all without a full redeployment cycle. This flexibility accelerates feature delivery, reduces risk, and ensures the stability of the platform.
A robust API gateway and Open Platform like APIPark thrives on such fine-grained observability. APIPark, as an open-source AI gateway and API management platform, already provides comprehensive logging capabilities to record every detail of each API call. Enhancing this with dynamic log level control would empower businesses using APIPark to conduct even more precise troubleshooting. Imagine identifying a specific AI model integration that is misbehaving; with dynamic controls, an APIPark administrator could temporarily enable trace level logging only for calls routed to that particular AI model, gathering deep insights into its inputs, outputs, and internal processing without flooding the logs from all other AI models or REST services. This capability transforms reactive debugging into proactive problem identification and resolution, cementing APIPark's value proposition for efficiency, security, and data optimization in AI and API management.
Beyond Basic Logging: The Broader Observability Landscape
While dynamic log level control in tracing significantly enhances traditional logging, it's crucial to understand that tracing is designed to be a foundational piece of a much broader observability strategy. Modern systems rely on three pillars of observability: logs, metrics, and traces. tracing brilliantly bridges the gap between logging and distributed tracing, and also provides hooks for metrics.
tracing's core concept of spans, representing operations with parent-child relationships, naturally lends itself to distributed tracing. By integrating with libraries like tracing-opentelemetry, the contextual information from tracing spans can be automatically propagated across service boundaries, creating end-to-end traces that visualize the entire journey of a request through a complex microservice architecture. This allows developers to quickly identify bottlenecks, latency issues, and error sources within a distributed system, an indispensable capability for an Open Platform where requests might traverse multiple services, databases, and third-party integrations. Dynamic log levels complement this by allowing you to zoom into a specific segment of a distributed trace with verbose logging, providing the granular details needed to understand why a particular step in the trace might be failing or performing poorly.
Furthermore, tracing can integrate with tokio-console, a powerful diagnostic tool for tokio-based applications. tokio-console provides real-time introspection into the state of asynchronous tasks, showing task spawns, awaits, and instrumented tracing events. This offers an interactive, visual debugger for asynchronous Rust code, which is increasingly common in high-performance network applications like API gateways. Dynamic log levels can be used in conjunction with tokio-console to fine-tune the events being sent, ensuring that the console displays the most relevant information without being overwhelmed by excessive detail.
The holistic view offered by combining logs, metrics, and traces, all potentially originating from a unified tracing instrumentation, empowers operations teams and developers to achieve unprecedented clarity into system behavior. Metrics provide the "what" (e.g., CPU utilization, request rates, error counts), traces provide the "where" and "how" (the flow through services), and detailed logs, dynamically controlled, provide the "why" (the granular events and state changes within a specific operation). This synergy is critical for maintaining high availability, optimizing performance, and rapidly addressing issues in complex, large-scale systems. The future of tracing and its vibrant ecosystem promises even tighter integrations with various observability backends and tools, solidifying its role as a cornerstone for building resilient and transparent software.
Conclusion
In the relentless pursuit of robust and efficient software, the ability to understand our systems' internal dialogue is paramount. Traditional logging, while foundational, often forces an untenable trade-off between clarity and performance. The tracing crate in Rust, with its structured, contextual approach to instrumentation, and specifically tracing-subscriber's dynamic log level control, offers a sophisticated liberation from this dilemma. By enabling the on-demand adjustment of log verbosity, without the need for application restarts, developers and operators gain surgical precision in debugging and monitoring.
This power is particularly transformative for critical infrastructure like API gateways and Open Platforms. These systems, characterized by high throughput, diverse traffic, and the need for continuous availability, can leverage dynamic log levels to pinpoint issues affecting specific clients, APIs, or internal components, all while maintaining optimal performance for the vast majority of operations. The "redeploy to debug" era is over, replaced by a nimble, responsive approach that dramatically reduces Mean Time To Resolution, enhances security postures, and optimizes operational costs.
tracing-subscriber is not merely a logging utility; it is an enabling technology for superior observability. By embracing its dynamic filtering capabilities, organizations can build more resilient, transparent, and manageable systems, ensuring that even in the most complex and high-stakes environments, the unseen language of software is always ready to reveal its secrets, precisely when and where it matters most.
Frequently Asked Questions (FAQs)
- What is the primary benefit of dynamic log level control with tracing-subscriber? The primary benefit is the ability to change the verbosity of logging at runtime without restarting the application. This allows for surgical debugging in production environments, where specific issues can be investigated with increased log detail for a limited scope (e.g., a particular API request or module) without impacting overall system performance or exposing sensitive information globally.
- How does tracing differ from traditional logging libraries like log or println!? tracing provides a structured, contextual, and hierarchical approach to observability, moving beyond simple log lines. It introduces spans (representing durations of operations) and events (discrete points within spans). Events are automatically enriched with the context of their encompassing span, providing much richer and correlated diagnostic data. It also decouples instrumentation from output via a Subscriber trait, allowing flexible backend integration.
- Is there a performance cost associated with dynamic log filtering? While tracing's filtering mechanisms are highly optimized, there is a minor performance cost associated with evaluating filters for each span and event. However, this cost is generally negligible compared to the I/O and CPU overheads incurred by actually writing out verbose log messages. Dynamic filtering helps minimize overall performance impact by allowing you to keep logs lean by default and only increase verbosity temporarily and selectively when needed.
- What are the security considerations when implementing dynamic log level control? The mechanism used to change log levels (e.g., an administrative API endpoint) must be robustly secured with strong authentication and authorization to prevent unauthorized parties from increasing verbosity, which could lead to sensitive data exposure or performance degradation through excessive logging. Additionally, audit trails should be maintained to record who changed the log levels and when.
- Can tracing integrate with other observability tools like OpenTelemetry? Yes, tracing is designed for seamless integration with other observability tools. Through libraries like tracing-opentelemetry, tracing spans and events can be transformed into OpenTelemetry-compatible traces and exported to distributed tracing systems (e.g., Jaeger, Zipkin). This allows tracing to form a unified foundation for logs, metrics, and traces, providing a comprehensive view of system behavior across a distributed architecture.
You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
```shell
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh
```

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.

