Implementing Kong API Gateway: A Step-by-Step Guide
In the rapidly evolving landscape of modern software architecture, particularly with the widespread adoption of microservices, managing the proliferation of Application Programming Interfaces (APIs) has become an intricate challenge. Organizations are increasingly relying on a robust API gateway to serve as the central nervous system for their digital ecosystems, orchestrating requests, enforcing policies, and ensuring the seamless interaction between diverse services and clients. Without a well-implemented gateway, the complexity of a distributed system can quickly spiral into unmanageable chaos, hindering innovation and compromising security.
This comprehensive guide delves into the practicalities of implementing Kong API Gateway, a powerful, open-source, and cloud-native solution designed to streamline the management, security, and scalability of your APIs. Kong stands out as a critical component in many modern infrastructures, acting as a lightweight, fast, and flexible proxy that sits in front of your microservices, APIs, and legacy applications. It intercepts requests and applies a suite of plugins to enforce various policies—from authentication and rate limiting to logging and traffic shaping—before routing them to the appropriate backend service. The journey we embark on will cover everything from understanding the fundamental role of an API gateway to the meticulous steps of its deployment, configuration, and advanced management, ensuring you gain the knowledge and confidence to successfully integrate Kong into your own environment. We aim to equip developers, architects, and operations professionals with a deep, practical understanding, moving beyond theoretical concepts to hands-on implementation, culminating in a resilient and performant API infrastructure that can support the demands of modern applications.
1. Understanding API Gateways and Kong
Before diving into the specifics of implementation, it's crucial to establish a firm understanding of what an API gateway is, why it has become indispensable, and what makes Kong a particularly compelling choice in this domain. This foundational knowledge will inform every decision made during the planning and deployment phases.
1.1 What is an API Gateway?
At its core, an API gateway is a single entry point for all clients interacting with a set of backend services. Imagine a bustling city with countless shops, restaurants, and offices. Instead of clients needing to know the exact address and entrance for each specific establishment, an API gateway acts like a grand central station or a reception desk. All clients go to this central point, state their request, and the gateway then intelligently routes them to the correct backend service, applying various rules and safeguards along the way. This pattern is particularly vital in microservices architectures, where a single application might be composed of dozens or even hundreds of smaller, independent services.
The primary functions of an API gateway extend far beyond simple request routing:
- Request Routing: Directing incoming API requests to the appropriate backend service based on defined rules (e.g., path, host, headers). This is fundamental to decoupling clients from service locations.
- Authentication and Authorization: Verifying the identity of clients and ensuring they have the necessary permissions to access specific resources. This centralizes security logic, preventing individual services from having to implement their own authentication mechanisms.
- Rate Limiting: Protecting backend services from being overwhelmed by too many requests by restricting the number of calls a client can make within a certain timeframe. This helps maintain service stability and prevents abuse.
- Load Balancing: Distributing incoming request traffic across multiple instances of a backend service to ensure optimal resource utilization and high availability.
- Caching: Storing responses from backend services to reduce latency and load on those services for frequently accessed data.
- Request/Response Transformation: Modifying incoming requests or outgoing responses to ensure compatibility between clients and services, or to simplify the client-side API contract.
- Logging and Monitoring: Capturing detailed information about API calls for auditing, analytics, and troubleshooting, providing critical insights into system performance and usage.
- Circuit Breaking: Automatically detecting failing services and preventing further requests from being routed to them, thereby isolating failures and preventing cascading outages.
- Protocol Translation: Converting requests between different communication protocols (e.g., HTTP to gRPC, or SOAP to REST), enabling integration with diverse systems.
The strategic placement of an API gateway solves several critical problems inherent in distributed systems. It reduces client-side complexity by providing a single, consistent API for consumers, abstracting away the underlying microservices architecture. It enhances security by centralizing authentication and access control policies. It improves performance and resilience through load balancing, caching, and circuit breaking. Moreover, it simplifies operational management by providing a central point for monitoring and analytics of all API traffic. Without an API gateway, clients would need to be aware of the specific endpoints and intricacies of each individual microservice, leading to tighter coupling, increased development effort, and a significant security burden spread across multiple services.
1.2 Why Kong API Gateway?
Kong is a leading open-source API gateway and microservices management layer that has gained immense popularity due to its robust feature set, high performance, and flexible architecture. Built on top of NGINX and powered by LuaJIT, Kong is designed for maximum scalability and low latency, making it an ideal choice for high-traffic environments and complex microservices deployments. Its core philosophy revolves around extensibility through a rich plugin architecture, allowing users to customize and extend its functionality to meet specific business needs.
Here are some of the key reasons why Kong stands out as a preferred choice for many organizations:
- Open-Source and Community-Driven: Being open-source under the Apache 2.0 license, Kong benefits from a vibrant and active community that contributes to its development, provides support, and creates a vast ecosystem of plugins. This transparency and collaborative environment foster innovation and allow for extensive customization.
- High Performance and Scalability: Leveraging NGINX's battle-tested performance and LuaJIT's speed, Kong can handle hundreds of thousands of requests per second with minimal latency. Its stateless design means it can be horizontally scaled by simply adding more Kong nodes, making it highly suitable for demanding enterprise environments.
- Extensive Plugin Ecosystem: Kong's greatest strength lies in its plugin architecture. It offers a wide array of ready-to-use plugins for common functionalities like authentication (API Key, JWT, OAuth 2.0, LDAP), traffic control (Rate Limiting, Circuit Breaker, Request Size Limiting), security (IP Restriction, CORS, WAF integration), transformations, logging, and monitoring. This modularity allows users to pick and choose the exact functionalities they need without bloating the core gateway.
- Cloud-Native and Container-Friendly: Kong is designed from the ground up to thrive in cloud-native environments. It integrates seamlessly with Docker and Kubernetes through its Kong Ingress Controller, making deployment and management in containerized setups incredibly efficient. This makes it a perfect fit for organizations adopting modern DevOps practices.
- Flexible Deployment Options: Kong can be deployed in various ways: on bare metal, virtual machines, Docker containers, or orchestrators like Kubernetes. This flexibility ensures it can adapt to diverse infrastructure requirements.
- Declarative Configuration: Kong's configuration can be managed declaratively through its Admin API or tools like decK (Declarative Configuration for Kong), allowing configurations to be version-controlled and applied consistently across environments, aligning with GitOps principles.
- Robust Management Interfaces: Kong offers a powerful Admin API for programmatic control and a user-friendly Kong Manager GUI (available in Kong Gateway Enterprise and through community efforts) for visual management of services, routes, consumers, and plugins.
- Hybrid and Multi-Cloud Support: With its distributed architecture, Kong can be deployed across multiple data centers and cloud providers, facilitating hybrid and multi-cloud strategies and ensuring resilience.
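As a taste of the declarative workflow, a minimal decK-style configuration file might look like the following; the service name and upstream URL are hypothetical, and the format version should match your Kong release:

```yaml
_format_version: "3.0"   # use the format version matching your Kong/decK release
services:
  - name: users-service
    url: http://users.internal:3000
    routes:
      - name: users-route
        paths:
          - /users
```

Checked into version control, a file like this can be diffed and applied with decK (deck diff / deck sync; newer decK versions use deck gateway diff and deck gateway sync), giving gateway changes a reviewable history.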
While other API gateway solutions exist, such as Apigee, MuleSoft, Tyk, and AWS API Gateway, Kong's blend of open-source flexibility, raw performance, and extensive plugin ecosystem often makes it a top contender, particularly for organizations seeking a powerful and customizable solution without vendor lock-in. Its ability to manage both traditional REST APIs and newer gRPC services, coupled with its native support for container orchestration, positions it as a future-proof choice for evolving API strategies.
2. Planning Your Kong API Gateway Deployment
A successful Kong API gateway implementation begins long before any code is deployed. A thorough planning phase is critical to ensure that the gateway aligns with your architectural goals, security requirements, and operational capabilities. This section outlines the key considerations and decisions you'll need to make during the planning stage.
2.1 Define Your Requirements
Before even thinking about installation, articulate a clear vision for what your Kong API gateway needs to achieve. This involves a deep dive into the specific APIs you intend to expose, the services you'll be proxying, and the overarching demands of your application ecosystem.
- API Inventory and Scope:
- Which APIs will Kong manage? Start by listing all internal and external APIs that will sit behind Kong. This could include microservices, legacy monolith endpoints, or even third-party APIs you need to proxy for internal use.
- What are their current protocols and formats? Are they purely RESTful HTTP/1.1 APIs, or do you also have gRPC, WebSockets, or SOAP services that need to be managed? Kong supports a variety of protocols.
- What are the expected traffic patterns? Estimate the number of requests per second (RPS), peak loads, and average daily traffic. This will directly influence your scaling and infrastructure choices.
- Are there public-facing APIs or only internal ones? The exposure level significantly impacts security and access control policies.
- Security Needs: Security is paramount for any API gateway.
- Authentication Mechanisms: What methods will you use to identify and verify clients? Common choices include:
- API Keys: Simple token-based authentication.
- JWT (JSON Web Tokens): For stateless, token-based authentication, often used with OAuth 2.0 flows.
- OAuth 2.0: For delegated authorization, allowing third-party applications to access user data without exposing user credentials.
- OpenID Connect: An identity layer on top of OAuth 2.0.
- LDAP/Active Directory: For enterprise environments with existing identity providers.
- Basic Authentication: For simple, username/password-based access.
- Authorization and Access Control: Beyond authentication, how will you control what authenticated clients can access? Will you implement role-based access control (RBAC), attribute-based access control (ABAC), or a combination? Kong plugins can enforce fine-grained permissions.
- TLS/SSL: Will all traffic be encrypted? (Highly recommended, typically via HTTPS). Who will manage TLS certificates (Kong, a load balancer in front of Kong, or both)?
- IP Restriction: Do you need to restrict API access to specific IP addresses or ranges?
- Web Application Firewall (WAF): Do you need to integrate with a WAF for deeper threat protection against common web vulnerabilities?
- Traffic Management and Resiliency:
- Rate Limiting: How many requests can a specific client make per second/minute/hour? Will this be global, per-consumer, or per-API? This protects backend services from overload.
- Circuit Breaking: How will you handle failing backend services? Kong can automatically stop routing requests to unhealthy services for a specified period to allow them to recover.
- Load Balancing Strategy: How will requests be distributed among multiple instances of a backend service? (e.g., round-robin, least connections, consistent hashing).
- Retries: Should Kong retry failed requests to backend services? If so, under what conditions?
- Timeouts: What are the appropriate connection and request timeouts for upstream services?
- Observability Requirements: Visibility into your API traffic is crucial for debugging, performance analysis, and security auditing.
- Logging: What information needs to be logged for each API request and response? Where should these logs be sent (e.g., Elasticsearch, Splunk, custom log aggregators)?
- Monitoring: What metrics do you need to collect from Kong (e.g., request count, latency, error rates, CPU/memory usage)? How will you visualize these metrics (e.g., Prometheus, Grafana, Datadog)?
- Tracing: Do you need distributed tracing (e.g., Zipkin, Jaeger) to track requests across multiple microservices behind Kong?
- Deployment Environment:
- Infrastructure: Where will Kong run? On-premises virtual machines, cloud instances (AWS EC2, Azure VMs, Google Cloud Compute Engine), or container orchestration platforms like Kubernetes?
- Network Topology: How will Kong integrate into your existing network? What firewall rules are necessary? Will it be publicly accessible, or only within a private network?
- DNS: How will clients resolve the Kong gateway's address? What DNS records are required?
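To make the traffic-pattern question concrete, a quick back-of-envelope sizing can be done in the shell; the figures below are hypothetical placeholders, not recommendations:

```shell
# Estimate average and peak RPS from expected daily volume (hypothetical figures).
DAILY_REQUESTS=8640000                 # ~8.64M requests/day (assumed)
AVG_RPS=$((DAILY_REQUESTS / 86400))    # 86,400 seconds in a day
PEAK_RPS=$((AVG_RPS * 10))             # assume peak traffic is ~10x average
echo "average: ${AVG_RPS} RPS, planned peak: ${PEAK_RPS} RPS"
# prints: average: 100 RPS, planned peak: 1000 RPS
```

Size your Kong node count, database capacity, and load balancer for the planned peak, not the average.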
2.2 Choose Your Database
Kong requires a database to store its configuration (services, routes, plugins, consumers, credentials). You have two primary choices: PostgreSQL and Cassandra. The selection largely depends on your existing infrastructure, operational expertise, and scalability needs.
- PostgreSQL:
- Strengths: ACID compliance, strong consistency, widely understood relational database. Excellent for scenarios where strong consistency and data integrity are paramount. Simpler to set up and manage for many teams, especially if PostgreSQL is already in use.
- Use Cases: Suitable for most small to medium-sized deployments, and even large deployments if properly clustered and managed. Ideal if your team has expertise in relational databases.
- Considerations: Scaling for very high read/write loads can be more complex than Cassandra, often requiring read replicas and sharding.
- Cassandra:
- Strengths: Distributed NoSQL database known for high availability, linear scalability, and high write throughput. Excellent for very large-scale deployments that require extremely high performance and always-on availability.
- Use Cases: Best for large-scale, globally distributed deployments with massive traffic volumes, where eventual consistency is acceptable. Ideal if your team has existing Cassandra expertise.
- Considerations: More complex to set up and manage than PostgreSQL. Eventual consistency might be a concern for certain configuration changes (though Kong mitigates this well internally). Requires more operational overhead.
General Recommendation: For most deployments, PostgreSQL is the recommended choice due to its ease of management, strong consistency, and widespread familiarity. It's often easier to get started and maintain for teams without specific Cassandra expertise. If you anticipate extreme scale (tens of thousands of RPS and geographically distributed clusters) from day one, and have the operational capacity, Cassandra might be considered.
2.3 Infrastructure Considerations
The physical or virtual infrastructure housing your Kong API gateway is a critical aspect of your planning.
- Deployment Method:
- Docker: Ideal for local development, testing, and smaller production deployments. Provides isolation and portability.
- Kubernetes: The go-to choice for cloud-native, scalable, and highly available production deployments. Kong offers a dedicated Ingress Controller for seamless integration.
- Virtual Machines (VMs) / Bare Metal: For traditional server environments, offering fine-grained control over the operating system and resources.
- Networking:
- Public IP Addresses: Will Kong require public IP addresses, or will it be behind an existing load balancer (e.g., AWS ELB/ALB, NGINX Plus, F5) that handles public exposure and TLS termination?
- Internal Network: Ensure proper network connectivity between Kong nodes and your backend services.
- Firewall Rules: Carefully configure firewalls to allow incoming traffic to Kong's proxy ports (typically 80/443) and to restrict access to Kong's Admin API (typically 8001/8444) to authorized internal networks only.
- High Availability (HA) and Scalability:
- Multiple Kong Nodes: Deploy at least two Kong instances in an active-active configuration for redundancy.
- Load Balancer in Front: Place a dedicated load balancer (hardware or software) in front of your Kong nodes to distribute traffic and provide a single entry point. This also handles health checks for Kong instances.
- Database HA: Ensure your chosen database (PostgreSQL or Cassandra) is also highly available, either through clustering, replication, or managed cloud services.
- Auto-scaling: In cloud environments or Kubernetes, consider auto-scaling Kong instances based on traffic load to handle spikes gracefully.
2.4 Designing Your API Architecture
How you structure your APIs and expose them through Kong is crucial for long-term maintainability and developer experience.
- Service Granularity:
- One-to-one mapping: Each microservice gets its own Kong Service. This is common.
- Backends for Frontends (BFF): Design specific APIs for different client types (e.g., mobile app, web app) that aggregate data from multiple backend services. Kong can sit in front of these aggregation layers.
- API Naming Conventions: Establish clear, consistent naming conventions for your Kong Services and Routes. This improves discoverability and manageability.
- Versioning Strategies:
- URL Versioning: /v1/users, /v2/users (simplest, but can lead to URL proliferation).
- Header Versioning: Accept: application/vnd.myapi.v1+json (cleaner URLs, but clients must support custom headers).
- Query Parameter Versioning: ?api-version=1 (least recommended due to caching issues and non-RESTful nature).
Kong can manage multiple versions of an API simultaneously using different routes.
- Internal vs. External APIs: Clearly differentiate between APIs intended for internal consumption within your organization and those exposed to external partners or public internet. Apply stricter security and rate-limiting policies to external APIs.
- Developer Portal: Consider how developers will discover and consume your APIs. While Kong itself is an API gateway, pairing it with an API developer portal (such as APIPark) can significantly enhance the developer experience by providing documentation, sandboxing, and self-service registration.
By meticulously planning each of these areas, you lay a solid foundation for a successful and resilient Kong API gateway implementation that effectively supports your organization's API strategy.
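For instance, URL versioning can be modeled as two Routes, optionally pointing at different Services. A hedged decK-style sketch, with hypothetical names and hosts:

```yaml
services:
  - name: users-v1
    url: http://users-v1.internal:3000
    routes:
      - name: users-v1-route
        paths:
          - /v1/users
  - name: users-v2
    url: http://users-v2.internal:3000
    routes:
      - name: users-v2-route
        paths:
          - /v2/users
```

Because each version is a separate Route, you can apply different plugins (e.g., stricter rate limits on the deprecated v1) without touching the backends.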
3. Step-by-Step Installation of Kong
With a solid plan in hand, we can now proceed to the practical steps of installing Kong. This section will walk you through setting up the necessary prerequisites, configuring the database, and deploying Kong using various popular methods.
3.1 Prerequisites
Before installing Kong, ensure your environment meets the following basic requirements:
- Operating System: Kong runs on Linux (Ubuntu, Debian, CentOS, RHEL, Alpine, etc.), macOS, and Windows (via Docker or WSL2). For production, a stable Linux distribution is highly recommended.
- Database: A running instance of either PostgreSQL (version 9.5+) or Cassandra (version 3.x, 4.x). We will focus on PostgreSQL for our example due to its commonality and ease of setup.
- System Resources: While Kong is lightweight, ensure your server or container has sufficient CPU, memory, and disk space. A minimum of 2 vCPU and 4GB RAM is a reasonable starting point for a development or small production instance, scaling up significantly for higher traffic.
- Network Access: Ensure the Kong server can communicate with its database and with your backend services. Also, verify that the ports for Kong's proxy (80, 443) and Admin API (8001, 8444) are open and accessible as needed.
- Docker (Optional but Recommended): If you plan to deploy Kong using containers, Docker Engine must be installed and running on your host machine.
- Kubernetes (Optional): If deploying on Kubernetes, you'll need a running Kubernetes cluster and kubectl configured to interact with it, along with Helm for easier deployment.
3.2 Database Setup (Example: PostgreSQL)
We'll illustrate the setup process using PostgreSQL, which is often the easiest starting point for many teams.
Step 1: Install PostgreSQL
If you don't have PostgreSQL installed, you can do so on a Linux system (e.g., Ubuntu) using the following commands:
sudo apt update
sudo apt install postgresql postgresql-contrib -y
For CentOS/RHEL:
sudo dnf install postgresql-server -y # or yum for older versions
sudo postgresql-setup --initdb
sudo systemctl enable postgresql
sudo systemctl start postgresql
Step 2: Create a Kong Database and User
Connect to your PostgreSQL instance as the default postgres user and create a dedicated database and user for Kong. This enhances security and isolation.
sudo -i -u postgres psql
Inside the psql prompt:
CREATE USER kong WITH PASSWORD 'your_secure_password';
CREATE DATABASE kong OWNER kong;
GRANT ALL PRIVILEGES ON DATABASE kong TO kong;
\q
Replace your_secure_password with a strong, unique password.
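One simple way to produce such a password — a sketch using standard Unix tools — is:

```shell
# Generate a random 32-character alphanumeric password for the kong DB user.
KONG_PG_PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)"
echo "$KONG_PG_PASSWORD"
```

Store the generated value in a secrets manager rather than leaving it in shell history or plain-text config.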
Step 3: Configure PostgreSQL (Optional but Recommended)
For production deployments, you might want to adjust PostgreSQL's configuration (postgresql.conf) for performance and security, such as listen_addresses, max_connections, and shared_buffers. Ensure listen_addresses allows connections from your Kong instances. If Kong and PostgreSQL are on separate machines, you'll also need to configure pg_hba.conf to allow remote connections from the Kong server's IP address.
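As an illustration — assuming the Kong nodes sit on a 10.0.1.0/24 subnet (substitute your own network) — the relevant fragments might look like:

```
# postgresql.conf: accept connections beyond localhost
listen_addresses = '*'          # or a specific interface address

# pg_hba.conf: allow the kong user to reach the kong database remotely
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    kong      kong  10.0.1.0/24   scram-sha-256   # use md5 on older PostgreSQL
```

Reload PostgreSQL after editing these files for the changes to take effect.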
3.3 Kong Installation Methods
Kong offers several deployment methods to suit various environments. We'll cover the most common ones.
3.3.1 Method 1: Docker (Quick Start for Development/Testing)
Docker is the easiest way to get Kong up and running quickly for development, testing, or even small-scale production.
Step 1: Initialize the Kong Database
Kong needs to run database migrations to set up its schema. You can do this by running a temporary Kong Docker container. Ensure your database connection details are correct.
docker network create kong-net # Create a network for Kong and Postgres
docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e POSTGRES_USER=kong \
-e POSTGRES_DB=kong \
-e POSTGRES_PASSWORD=your_secure_password \
postgres:9.6 # Use a compatible PostgreSQL version
# Wait a few seconds for the PostgreSQL container to fully start
docker run --rm \
--network=kong-net \
-e KONG_DATABASE=postgres \
-e KONG_PG_HOST=kong-database \
-e KONG_PG_USER=kong \
-e KONG_PG_PASSWORD=your_secure_password \
kong:2.8.1-alpine kong migrations bootstrap
Replace your_secure_password with the password you set for the kong user. Adjust kong:2.8.1-alpine to your desired Kong version.
Step 2: Start the Kong Gateway
Once the database is initialized, you can start the main Kong Gateway container.
docker run -d --name kong \
--network=kong-net \
-e KONG_DATABASE=postgres \
-e KONG_PG_HOST=kong-database \
-e KONG_PG_USER=kong \
-e KONG_PG_PASSWORD=your_secure_password \
-e KONG_PROXY_ACCESS_LOG=/dev/stdout \
-e KONG_ADMIN_ACCESS_LOG=/dev/stdout \
-e KONG_PROXY_ERROR_LOG=/dev/stderr \
-e KONG_ADMIN_ERROR_LOG=/dev/stderr \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:2.8.1-alpine
- 8000: Default HTTP proxy port (where your client requests will hit).
- 8443: Default HTTPS proxy port.
- 8001: Default HTTP Admin API port (for configuring Kong).
- 8444: Default HTTPS Admin API port.
3.3.2 Method 2: Kubernetes (Recommended for Production)
Deploying Kong on Kubernetes using its official Helm chart is the most robust and scalable method for production environments.
Step 1: Add the Kong Helm Repository
helm repo add kong https://charts.konghq.com
helm repo update
Step 2: Create a values.yaml File for Configuration
A values.yaml file allows you to customize your Kong deployment. Here's a basic example for deploying with PostgreSQL:
# values.yaml
# Enable a bundled PostgreSQL instance for simplicity, or connect to an external one.
# For production, it's generally recommended to use an external, managed database.
postgresql:
  enabled: true

env:
  database: "postgres"
  pg_host: "kong-postgresql" # Name of the bundled Postgres service
  pg_user: "kong"
  pg_password: "your_secure_password" # Use Kubernetes secrets for production

# Uncomment and configure if you use an external PostgreSQL instead
# env:
#   database: "postgres"
#   pg_host: "your.external.postgres.host"
#   pg_port: "5432"
#   pg_user: "kong"
#   pg_password: "your_secure_password"
#   pg_database: "kong"

proxy:
  enabled: true
  type: LoadBalancer # Or NodePort/ClusterIP if an external LB is used
  tls:
    enabled: true
  http:
    enabled: true

admin:
  enabled: true
  type: ClusterIP # Admin API should typically not be exposed publicly
  http:
    enabled: true
  # Optionally secure the Admin API with an ingress
  # ingress:
  #   enabled: true
  #   hostname: kong-admin.example.com
  #   path: /
  #   annotations:
  #     kubernetes.io/ingress.class: nginx
  #   tls:
  #     - hosts:
  #         - kong-admin.example.com
  #       secretName: kong-admin-tls-secret

# Kong Manager (GUI) - typically for Enterprise, but community options exist
# manager:
#   enabled: false

# Kong Ingress Controller for Kubernetes Ingress resources
ingressController:
  enabled: true

# Optionally deploy with RBAC and CRDs
# rbac:
#   create: true
# clusterRbac:
#   create: true
# enableCRDs: true
For production, NEVER hardcode sensitive information like passwords in values.yaml. Use Kubernetes Secrets and reference them.
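As one hedged sketch of that approach — assuming your chart version supports valueFrom-style env entries (verify against its documentation) and using a hypothetical Secret name — the password could be supplied like this:

```yaml
# Create the Secret out-of-band, e.g.:
#   kubectl create secret generic kong-pg-secret -n kong \
#     --from-literal=pg_password='your_secure_password'
env:
  database: "postgres"
  pg_host: "kong-postgresql"
  pg_user: "kong"
  pg_password:
    valueFrom:
      secretKeyRef:
        name: kong-pg-secret   # hypothetical Secret name
        key: pg_password
```

This keeps the credential out of values.yaml and out of your Git history.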
Step 3: Install Kong with Helm
helm install kong kong/kong -f values.yaml --namespace kong --create-namespace
This command will deploy Kong, its database (if bundled), and the Kong Ingress Controller into the kong namespace.
Step 4: Verify Kubernetes Deployment
Check the status of your pods:
kubectl get pods -n kong
kubectl get svc -n kong
You should see Kong proxy and admin pods running, along with a service of type LoadBalancer (if configured) for the proxy, which will eventually get an external IP address.
3.3.3 Method 3: Native Installation (e.g., Ubuntu/Debian)
For specific control over the environment or if Docker/Kubernetes isn't preferred, you can install Kong directly on the operating system.
Step 1: Add the Kong Repository
For Debian-based systems (Ubuntu, Debian):
sudo apt update
sudo apt install -y software-properties-common lsb-release curl
curl -fsSL https://konghq.com/install-debian-package/ > /tmp/install-kong-debian.sh
sudo bash /tmp/install-kong-debian.sh
sudo apt update
Step 2: Install Kong Gateway
sudo apt install -y kong-gateway
Step 3: Configure Kong
Edit the Kong configuration file, typically located at /etc/kong/kong.conf. You'll need to update the database connection details to point to your PostgreSQL instance.
# /etc/kong/kong.conf
database = postgres
pg_host = 127.0.0.1 # Or the IP of your PostgreSQL server
pg_port = 5432
pg_user = kong
pg_password = your_secure_password
pg_database = kong
# Listen addresses for the proxy and admin API
proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 ssl
admin_listen = 0.0.0.0:8001, 0.0.0.0:8444 ssl
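Note that admin_listen = 0.0.0.0 binds the Admin API on all interfaces. If you will only administer Kong from the host itself (e.g., over SSH or a local decK run), a loopback-only binding is safer — a sketch to adapt to your topology:

```
# /etc/kong/kong.conf -- restrict the Admin API to loopback
admin_listen = 127.0.0.1:8001, 127.0.0.1:8444 ssl
```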
Step 4: Initialize the Kong Database
kong migrations bootstrap -c /etc/kong/kong.conf
Step 5: Start Kong
sudo kong start -c /etc/kong/kong.conf
If your package provides a systemd unit, you can instead manage Kong through systemd so it starts on boot (pick one mechanism — don't mix kong start with systemd management):
sudo systemctl enable kong
sudo systemctl start kong
3.4 Initial Verification
Regardless of your installation method, perform a quick check to ensure Kong is running and its Admin API is accessible.
Check Kong Admin API:
curl -i http://localhost:8001 # Or the IP/hostname of your Kong Admin API
You should receive a JSON response containing Kong's status, version, and configuration details. This confirms that Kong is running and the Admin API is ready to accept commands.
Check Kong Proxy:
At this point, the Kong proxy ports (8000/8443) are listening, but they won't forward any requests until you configure services and routes. Attempting to access them directly will likely result in a 404 Not Found or similar, which is expected before configuration.
curl -i http://localhost:8000
With Kong successfully installed and verified, you are now ready to begin configuring your API gateway to manage your services.
4. Configuring Kong Services and Routes
Now that Kong is up and running, the next crucial step is to configure it to act as an API gateway for your backend services. This involves understanding Kong's core configuration concepts: Services, Routes, Upstreams, Consumers, and Plugins. These elements work together to define how Kong intercepts, processes, and forwards incoming requests.
4.1 Core Concepts: Services, Routes, Upstreams, Consumers, and Plugins
To effectively configure Kong, it's essential to grasp its fundamental building blocks:
- Service: A Kong Service is an abstraction of a backend API or microservice. It defines the upstream URL of your backend service (e.g., `http://my-backend-service:8080`). Think of a Service as a single entity you want to proxy. You can have multiple Routes pointing to a single Service.
  - Example: A `users-service` pointing to `http://192.168.1.100:3000`.
- Route: A Route defines how client requests are matched and routed to a specific Service. Routes are the entry points into Kong. They specify rules based on hostnames, paths, HTTP methods, headers, and more.
  - Example: A Route for the `users-service` might match requests with host `api.example.com` and path `/users`.
- Upstream: An Upstream object represents a virtual hostname and can load balance requests across multiple target IP addresses or hostnames. While not strictly required for a single backend instance, Upstreams are vital for high availability and scalability, especially when a Service has multiple instances.
  - Example: An Upstream named `users-upstream` could manage `target1:3000` and `target2:3000` for the `users-service`.
- Consumer: A Consumer represents a developer, application, or system that consumes your APIs. It's a way to associate credentials (like API keys or JWTs) and apply specific policies (like rate limits) to individual users or applications.
  - Example: A Consumer named `mobile-app-user` could represent your mobile application.
- Plugin: Plugins are the heart of Kong's extensibility. They are modular pieces of functionality that execute during the API request/response lifecycle. Kong comes with many built-in plugins for authentication, traffic control, logging, and more. Plugins can be applied globally, to a specific Service, to a specific Route, or to a specific Consumer.
  - Example: The `rate-limiting` plugin to control request volume, or the `jwt` plugin for token-based authentication.
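These building blocks compose naturally. As an illustrative sketch in `deck`'s declarative format (names, addresses, and limits here are hypothetical), a minimal configuration tying a Service, Route, Plugin, and Consumer together might look like:

```yaml
# Hypothetical declarative configuration (deck format) showing how the
# core Kong objects relate. All names and addresses are illustrative.
_format_version: "3.0"
services:
  - name: users-service
    url: http://192.168.1.100:3000
    routes:
      - name: users-route
        hosts: ["api.example.com"]
        paths: ["/users"]
    plugins:
      - name: rate-limiting
        config:
          minute: 60
consumers:
  - username: mobile-app-user
```

We return to `deck` and declarative configuration in the automation section later in this guide.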
4.2 Adding Your First Service
Let's assume you have a simple backend API running at http://localhost:3001/hello. We'll configure Kong to proxy requests to this service.
4.2.1 Using Kong Admin API (cURL Examples)
The Kong Admin API is a RESTful interface for managing Kong's configuration. It's typically accessed on port 8001 (HTTP) or 8444 (HTTPS) from within your network.
Step 1: Add a Service
This command creates a new Service named example-service that points to our backend API.
curl -X POST http://localhost:8001/services \
--header 'Content-Type: application/json' \
--data '{
"name": "example-service",
"url": "http://localhost:3001/hello"
}'
You should receive a JSON response confirming the creation of the service.
Step 2: Add a Route for the Service
Now, we need to define how requests reach this example-service. We'll create a Route that matches requests to Kong's proxy port with the path /my-api.
curl -X POST http://localhost:8001/services/example-service/routes \
--header 'Content-Type: application/json' \
--data '{
"paths": ["/my-api"]
}'
This tells Kong: "Any request coming to me (Kong) with the path /my-api should be forwarded to the example-service (which is at http://localhost:3001/hello)."
Step 3: Test the Configuration
Now, try sending a request to Kong's proxy port using the defined route path:
curl -i http://localhost:8000/my-api
If your backend service at http://localhost:3001/hello is running and returns "Hello from backend!", you should see that response (and HTTP headers) from Kong. This confirms that Kong is successfully routing requests.
4.2.2 Using Kong Manager GUI (If Available)
If you're using Kong Gateway (Enterprise) or have installed a community-driven Kong Manager GUI, you can perform these actions through a user-friendly web interface.
- Access Kong Manager: Open your web browser and navigate to the Kong Manager URL (e.g., `http://localhost:8002` if configured).
- Add a Service:
  - Go to "Services".
  - Click "Add New Service".
  - Enter "Name": `example-service`
  - Enter "URL": `http://localhost:3001/hello`
  - Click "Create Service".
- Add a Route:
  - Click on the newly created `example-service`.
  - Go to the "Routes" tab.
  - Click "Add New Route".
  - Enter "Name": `example-route`
  - Under "Paths", add `/my-api`.
  - Click "Create Route".
- Test: Use `curl` as above to test the proxy.
4.3 Defining Routes
Routes are incredibly powerful for controlling how requests are directed. You can define various matching rules:
- Paths: The most common routing mechanism. Requests matching a specific URL path.
  - `"paths": ["/users", "/customers"]`
- Hosts: Requests targeting specific hostnames.
  - `"hosts": ["api.example.com", "dev.example.com"]`
- Methods: Requests using specific HTTP methods (GET, POST, PUT, DELETE).
  - `"methods": ["GET", "POST"]`
- Headers: Requests containing specific header values (note that the Admin API expects each header's values as an array).
  - `"headers": { "X-API-Version": ["v1"] }`
- Regex Paths: For more complex path matching. In Kong 3.x, a leading `~` marks the path as a regular expression.
  - `"paths": ["~/users/(?<id>[0-9]+)$"]` matches paths like `/users/123` and captures `123` as `id`.
You can combine these rules. For example, a Route could match api.example.com on path /products using the GET method.
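Because Kong's named-capture syntax is PCRE, it can help to sanity-check a path pattern locally before configuring the Route. A small sketch using POSIX `grep -E` with the equivalent unnamed pattern (the named group `(?<id>...)` is PCRE-only, so it is dropped here):

```shell
#!/bin/sh
# Local sanity check for the regex Route pattern above, before sending it
# to Kong. Tests the equivalent unnamed POSIX ERE pattern with grep -E.
pattern='^/users/[0-9]+$'

for path in /users/123 /users/abc /users/; do
  if printf '%s' "$path" | grep -Eq "$pattern"; then
    echo "$path matches"
  else
    echo "$path does not match"
  fi
done
```

Only `/users/123` matches; the other two paths are rejected, mirroring what the Route would do.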
4.4 Applying Plugins
Plugins are where Kong truly shines, allowing you to add functionality to your API gateway without modifying your backend services. They can be applied globally (to all traffic), to specific Services, to specific Routes, or to specific Consumers.
4.4.1 Authentication: API Key Example
Let's secure our example-service with an API Key. This means only clients providing a valid API key in their request will be allowed through.
Step 1: Enable the key-auth Plugin on the Service
curl -X POST http://localhost:8001/services/example-service/plugins \
--header 'Content-Type: application/json' \
--data '{
"name": "key-auth"
}'
Now, if you try to access http://localhost:8000/my-api without an API key, you'll get a 401 Unauthorized error.
Step 2: Create a Consumer
Consumers represent the entities (users, applications) that will consume your API.
curl -X POST http://localhost:8001/consumers \
--header 'Content-Type: application/json' \
--data '{
"username": "my-mobile-app"
}'
Step 3: Provision an API Key for the Consumer
Associate an API key with the consumer. Kong will generate one if not provided.
curl -X POST http://localhost:8001/consumers/my-mobile-app/key-auth \
--header 'Content-Type: application/json' \
--data '{}' # Kong will generate a key
The response will include the generated key value. Let's assume it's some-secret-api-key.
Step 4: Test with the API Key
Send a request including the API key, typically in the apikey header or as a query parameter (configurable).
curl -i -H "apikey: some-secret-api-key" http://localhost:8000/my-api
You should now get the successful response from your backend service.
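The Admin API returns the provisioned credential as JSON, so in scripts you will often want to pull the `key` field out programmatically. A hedged sketch, shown against a sample payload (the JSON below is a stand-in for the live response, with a hypothetical consumer ID; in practice `jq` is the more robust tool):

```shell
#!/bin/sh
# Extract the generated "key" field from a key-auth provisioning response.
# The response below is a SAMPLE payload, not live Admin API output.
response='{"created_at":1700000000,"consumer":{"id":"abc-123"},"key":"some-secret-api-key","id":"def-456"}'

# Minimal extraction with grep/cut for portability; prefer `jq -r .key`
# in real automation.
api_key=$(printf '%s' "$response" | grep -o '"key":"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$api_key"
```

The extracted value can then be passed straight into the `apikey` header of subsequent requests.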
4.4.2 Traffic Control: Rate Limiting Example
Let's add a rate limit to prevent a single consumer from overwhelming our example-service. We'll limit them to 5 requests per minute.
Step 1: Enable the rate-limiting Plugin on the Service
curl -X POST http://localhost:8001/services/example-service/plugins \
--header 'Content-Type: application/json' \
--data '{
"name": "rate-limiting",
"config": {
"minute": 5,
"policy": "local"
}
}'
- `minute: 5`: Allows 5 requests per minute.
- `policy: local`: The rate-limiting counter is maintained by each Kong node locally. Other policies (`redis`, `cluster`) exist for distributed environments.
Step 2: Test Rate Limiting
Using your some-secret-api-key, send more than 5 requests to http://localhost:8000/my-api within one minute. After the 5th request, subsequent requests will return a 429 Too Many Requests status code.
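Conceptually, the `local` policy behaves like a per-node fixed-window counter. The following toy shell sketch mimics the behavior you observe from the plugin (purely illustrative; Kong's actual implementation is internal to the gateway):

```shell
#!/bin/sh
# Toy fixed-window rate limiter: allow at most LIMIT requests per window.
# Illustrates the 200-then-429 pattern seen when testing the plugin;
# this is NOT how Kong is implemented internally.
LIMIT=5
count=0

for i in 1 2 3 4 5 6 7; do
  count=$((count + 1))
  if [ "$count" -le "$LIMIT" ]; then
    echo "request $i: 200 OK"
  else
    echo "request $i: 429 Too Many Requests"
  fi
done
```

Requests 1 through 5 pass; requests 6 and 7 are rejected, exactly as Kong would respond within a single minute window.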
4.5 Managing Consumers
Consumers are central to applying granular control and policies in Kong. You can create consumers and then associate various credentials with them:
- API Key: `curl -X POST http://localhost:8001/consumers/{username}/key-auth`
- JWT: `curl -X POST http://localhost:8001/consumers/{username}/jwt` (requires `algorithm` and `rsa_public_key` or `secret`).
- OAuth2: `curl -X POST http://localhost:8001/consumers/{username}/oauth2`
- Basic Auth: `curl -X POST http://localhost:8001/consumers/{username}/basic-auth`
Each consumer can have multiple credentials of the same type or different types. For instance, a single consumer could have multiple API keys, allowing for key rotation or different keys for different client applications under the same consumer entity.
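In declarative form, consumers and their credentials can be version-controlled together. A hypothetical `deck` fragment (field names follow decK's conventions; all key values are placeholders):

```yaml
# Hypothetical deck fragment: one consumer with two API keys (e.g. for
# key rotation across client apps) plus a JWT credential.
_format_version: "3.0"
consumers:
  - username: my-mobile-app
    keyauth_credentials:
      - key: key-for-ios-client
      - key: key-for-android-client
    jwt_secrets:
      - algorithm: HS256
        key: my-mobile-app-issuer
        secret: replace-with-a-real-secret
```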
4.6 Using Kong Manager (GUI)
While the Admin API is powerful for programmatic control and automation, Kong Manager (the web-based GUI) provides a visual way to manage your Kong Gateway.
- Login: Access Kong Manager (e.g., `http://localhost:8002` if configured with a bundled Manager).
- Dashboard Overview: See a summary of your API traffic, services, and plugins.
- Services: View, add, edit, or delete Services. For each Service, you can see its associated Routes and Plugins.
- Routes: Manage the routing rules for your Services.
- Consumers: Create and manage Consumers, assign them credentials, and view their associated plugins.
- Plugins: Configure plugins globally or for specific Services, Routes, or Consumers. The GUI simplifies the process of understanding plugin configurations.
- Upstreams/Targets: If you're using Upstreams for load balancing, Kong Manager provides an interface to manage them and their individual targets.
Kong Manager makes it easier for teams to visualize their API configurations, onboard new APIs quickly, and debug issues without needing to constantly use command-line tools. It's particularly useful for operations teams and those who prefer a graphical interface for day-to-day management tasks.
By mastering these core concepts and configuration steps, you unlock the full potential of Kong as an API gateway, enabling robust routing, strong security, and flexible traffic management for your modern API ecosystem. The modular nature of Kong's plugins allows you to build a highly customized and efficient gateway tailored to your specific architectural and business needs.
5. Advanced Kong Features and Best Practices
Once your basic Kong API gateway is operational, it's time to explore advanced features and adopt best practices to ensure high availability, scalability, security, and maintainability for your production environment. These considerations move beyond simply routing requests to building a resilient and observable API infrastructure.
5.1 High Availability and Scalability
For any production system, especially one as critical as an API gateway, high availability (HA) and the ability to scale under load are non-negotiable.
- Cluster Deployment:
- Multiple Kong Nodes: Deploy at least two (and ideally more) Kong instances. Kong nodes are largely stateless; they retrieve their configuration from the shared database. This means you can add or remove Kong nodes dynamically to adjust to traffic demands.
- Shared Database: Ensure your database (PostgreSQL or Cassandra) is also highly available, using replication (e.g., PostgreSQL streaming replication) or clustering (e.g., Cassandra ring, PostgreSQL high-availability solutions like Patroni). A single point of failure in the database will bring down your entire Kong cluster.
- Load Balancing Kong Instances:
- Place an external load balancer (e.g., AWS ELB/ALB, Google Cloud Load Balancer, Azure Load Balancer, NGINX Plus, HAProxy, F5 Big-IP) in front of your Kong nodes. This load balancer distributes incoming client traffic across the healthy Kong instances and provides a single public endpoint.
- Configure health checks on the external load balancer to monitor the operational status of individual Kong nodes. If a node fails, the load balancer should automatically stop routing traffic to it.
- Auto-scaling in Kubernetes:
- If deploying Kong on Kubernetes with the Kong Ingress Controller, leverage Horizontal Pod Autoscalers (HPAs) to automatically scale the number of Kong pods based on metrics like CPU utilization or custom metrics (e.g., requests per second). This allows Kong to dynamically adapt to varying traffic loads.
- Ensure your Kubernetes cluster has sufficient node capacity to accommodate scaling Kong pods.
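As a hedged illustration, an HPA targeting Kong's proxy Deployment might look like the following (the Deployment name and namespace depend on your Helm release; verify them in your cluster before applying):

```yaml
# Illustrative HorizontalPodAutoscaler for Kong proxy pods.
# The scaleTargetRef name is an assumption based on a typical Helm
# release; adjust to match your actual Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-proxy-hpa
  namespace: kong
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-kong
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```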
5.2 Observability
Understanding the behavior and performance of your API gateway and the APIs it manages is paramount. Comprehensive observability allows for quick troubleshooting, performance optimization, and security auditing.
- Logging Integrations:
- Kong generates detailed access and error logs. Instead of storing them locally, integrate Kong with a centralized logging solution.
- ELK Stack (Elasticsearch, Logstash, Kibana): A popular choice for collecting, parsing, storing, and visualizing logs.
- Splunk: Another powerful log management and security information and event management (SIEM) platform.
- Cloud Logging Services: (e.g., AWS CloudWatch Logs, Google Cloud Logging, Azure Monitor Logs) for seamless integration in cloud environments.
- Utilize Kong's logging plugins (e.g., `file-log`, `syslog`, `http-log`, `tcp-log`, `udp-log`) to forward logs in the desired format to your chosen aggregation system. These plugins provide flexibility in log format and destination, ensuring that critical information about requests, responses, and errors is captured.
- Monitoring Kong Metrics:
- Collect operational metrics from Kong instances to understand their health and performance.
- Prometheus and Grafana: A de facto standard for open-source monitoring. Kong provides a `prometheus` plugin that exposes metrics in a Prometheus-compatible format (e.g., request counts, latency, status codes, Kong process metrics). Grafana can then visualize these metrics through customizable dashboards.
- Datadog, New Relic, AppDynamics: Commercial APM tools that offer comprehensive monitoring for Kong and integrate with its metrics endpoints.
- Monitor key metrics such as CPU usage, memory consumption, open file descriptors, network I/O, and the number of active connections on Kong nodes. Additionally, track API-specific metrics like latency (p95, p99), error rates (4xx, 5xx), and total requests for each service/route.
- Tracing with Zipkin/Jaeger:
- For microservices architectures, distributed tracing is essential to follow a single request across multiple services.
- Kong offers tracing plugins (e.g., `zipkin`, and `opentelemetry` in recent versions) that can inject tracing headers into requests as they pass through the gateway; Jaeger can ingest spans in these formats. This allows downstream services to continue the trace, providing an end-to-end view of request latency and flow.
- This capability is crucial for identifying bottlenecks and debugging complex interactions within your service mesh.
While Kong provides robust logging and monitoring capabilities through its plugins, platforms like APIPark can further enhance your observability strategy. APIPark, for instance, offers detailed API call logging and powerful data analysis features that complement any API gateway implementation, allowing businesses to trace and troubleshoot issues efficiently and gain insights into long-term performance trends. This unified view can be invaluable for maintaining system stability and data security across diverse API landscapes.
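To make the Prometheus option concrete, a minimal scrape configuration for Kong's metrics endpoint might look like the following. The port assumes the Status API is enabled on 8100 (`status_listen = 0.0.0.0:8100`), which is a common but not universal choice; verify your own listener configuration:

```yaml
# Illustrative Prometheus scrape job for Kong metrics. Target hostnames
# and the Status API port (8100) are assumptions; adjust to your setup.
scrape_configs:
  - job_name: kong
    metrics_path: /metrics
    static_configs:
      - targets: ["kong-node-1:8100", "kong-node-2:8100"]
```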
5.3 Security Enhancements
The API gateway is your first line of defense; securing it is paramount.
- Mutual TLS (mTLS):
- Beyond regular HTTPS (one-way TLS), mTLS requires both the client and the server to present and verify certificates. This provides strong mutual authentication.
- Kong can be configured to enforce mTLS for specific routes or services, ensuring that only trusted clients with valid certificates can communicate with your APIs.
- Web Application Firewall (WAF) Integration:
- For protection against common web vulnerabilities (e.g., SQL injection, XSS), integrate Kong with a WAF solution. This can be an external WAF appliance, a cloud WAF service, or a Kong plugin (if available for your specific WAF).
- DDoS Protection:
- While Kong's rate limiting helps, for large-scale DDoS attacks, it's best to place Kong behind a dedicated DDoS protection service (e.g., Cloudflare, Akamai, AWS Shield). These services can absorb and filter malicious traffic before it reaches your infrastructure.
- Securing the Kong Admin API:
- The Admin API is highly privileged; never expose it publicly.
- Restrict access to specific IP addresses or internal networks using firewall rules.
- Enable authentication on the Admin API (e.g., the `key-auth` or `basic-auth` plugins) for additional layers of security, even for internal access.
- Always use HTTPS for the Admin API.
- API Schema Validation:
- Use plugins or external tools to validate incoming requests against your API schema (e.g., OpenAPI/Swagger definitions). This ensures that requests conform to expected formats and prevents malformed data from reaching backend services.
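One concrete step for locking down the Admin API is binding it to loopback only in `kong.conf`, so it is reachable solely from the node itself (e.g., via SSH tunnel or an authenticated internal proxy). A sketch of the relevant property:

```ini
# kong.conf fragment: keep the Admin API off public interfaces.
# Remote administration then goes through an SSH tunnel or an
# authenticated internal reverse proxy, never directly.
admin_listen = 127.0.0.1:8001, 127.0.0.1:8444 ssl
```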
5.4 Automated Deployment and CI/CD
Manual configuration is error-prone and doesn't scale. Automate your Kong deployments and configurations.
- Declarative Configuration with `deck`:
  - `deck` (Declarative Configuration for Kong) is a powerful CLI tool that allows you to manage Kong's configuration declaratively using YAML or JSON files.
  - You define your Services, Routes, Plugins, and Consumers in a `.yaml` file, and `deck` can `sync`, `diff`, or `dump` your Kong configuration.
  - This enables version control of your Kong setup, making it easy to track changes, revert to previous versions, and apply configurations consistently across different environments (dev, staging, production).
- GitOps Approach:
  - Store your `deck` configuration files in a Git repository.
  - Use CI/CD pipelines to validate and apply these configurations to your Kong instances whenever changes are pushed to the repository. This ensures that your Kong deployment state always matches your Git repository state.
- Kong Ingress Controller for Kubernetes:
  - When deploying Kong on Kubernetes, the Kong Ingress Controller allows you to manage Kong configurations directly using Kubernetes Ingress resources and Custom Resource Definitions (CRDs).
  - Define your services, routes, and plugins as Kubernetes YAML manifests. Kubernetes then reconciles these definitions with Kong, making configuration management cloud-native and fully integrated with your Kubernetes deployment workflows.
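With the Ingress Controller, the rate-limiting example from earlier can be expressed as Kubernetes resources. A hedged sketch (resource kinds and the `konghq.com/plugins` annotation follow Kong Ingress Controller conventions; names, namespace, and port are illustrative):

```yaml
# Illustrative Kong Ingress Controller resources: a KongPlugin CRD plus
# an Ingress that attaches it via annotation. Names are hypothetical.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-service
  annotations:
    konghq.com/plugins: rate-limit-5-per-minute
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /my-api
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 3001
```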
5.5 Custom Plugins
While Kong offers a rich set of built-in plugins, there might be scenarios where you need highly specialized functionality not covered by existing options.
- When to Develop Custom Plugins:
- Specific business logic that needs to be applied to every request (e.g., custom header manipulation based on complex rules).
- Integration with proprietary internal systems (e.g., a unique authentication system, a custom logging endpoint).
- Advanced traffic routing or transformation that existing plugins don't support.
- Development Process:
- Kong plugins are primarily written in Lua. You'll need to understand Lua and Kong's plugin development kit (PDK).
- For performance-critical or complex logic, you can also develop plugins in Go using Kong's Go Plugin Server.
- Ensure your custom plugins are thoroughly tested and follow Kong's best practices for performance and stability.
5.6 Version Control and Environment Management
Treat your Kong configuration as code.
- Configuration as Code (CaC): Use `deck` or Kubernetes manifests to store your entire Kong configuration in a version control system (like Git). This provides an audit trail, enables collaborative development, and simplifies disaster recovery.
- Environment-Specific Configurations: Maintain separate configuration files or use templating (e.g., Helm charts with `values.yaml` overrides) for different environments (development, staging, production). This allows for distinct settings (e.g., database connections, rate limits, plugin parameters) tailored to each environment.
- Rollback Strategy: With CaC and CI/CD, rolling back a faulty configuration becomes as simple as reverting a Git commit and redeploying.
By embracing these advanced features and best practices, you can transform your Kong API gateway from a simple proxy into a highly reliable, secure, scalable, and observable central control point for your entire API ecosystem, ready to meet the demands of enterprise-grade applications.
6. Comparing Kong Installation Methods
To summarize the various installation approaches and help you choose the most suitable one for your specific needs, here's a comparison table:
| Feature | Docker/Containerized | Kubernetes (Helm) | Native (e.g., apt/yum) |
|---|---|---|---|
| Ease of Setup | Very High (quick `docker run` commands) | Moderate-High (requires K8s cluster, Helm charts) | Moderate (OS package management, manual config) |
| Scalability | Moderate (manual orchestration for HA/scaling) | Very High (built-in K8s scaling & orchestration) | Moderate (manual setup of HA/load balancing) |
| High Availability | Manual/External Orchestration (e.g., Docker Swarm) | Built-in (managed by K8s for pods, services) | Manual (requires external load balancer, clustering) |
| Resource Usage | Flexible (container limits) | Managed by K8s (resource requests/limits) | Direct OS impact (shared OS resources) |
| Production Readiness | Yes (with proper orchestration) | Very High (cloud-native standard) | Yes (with robust OS management) |
| Upgrade Complexity | Moderate (pull new image, restart containers) | Low (Helm upgrade, declarative updates) | High (package upgrades, configuration migration) |
| Ideal For | Local development, testing, small-scale production, teams familiar with Docker Compose | Large-scale, cloud-native deployments, microservices architectures, teams leveraging Kubernetes | Specific OS environments, bare metal deployments, environments without containerization preference |
| Database Management | Can use containerized DB or external DB | Typically uses external or K8s-native DB solution | External DB (or OS-installed DB) |
| Configuration Mgmt. | `deck`, manual Admin API calls | Kubernetes Ingress/CRDs, `deck` | `deck`, manual Admin API calls, direct file edits |
This table provides a quick reference to guide your decision-making process, highlighting the trade-offs between speed of deployment, operational complexity, and native platform integration for each method.
Conclusion
The journey of implementing Kong API Gateway is a comprehensive one, extending from strategic planning to advanced operational management. As we've explored, in an era dominated by distributed systems and microservices, a robust API gateway is no longer a luxury but a fundamental necessity for any organization striving for agility, security, and scalability in its digital offerings. Kong, with its open-source flexibility, high-performance architecture, and rich plugin ecosystem, stands as a premier choice for this critical role, acting as the intelligent traffic controller and policy enforcer for your entire API landscape.
We began by dissecting the core purpose of an API gateway, understanding how it centralizes crucial functions like routing, authentication, rate limiting, and monitoring, thereby simplifying client interactions and abstracting the complexities of backend services. Kong's cloud-native design, extensibility through plugins, and superior performance were highlighted as key differentiators, making it suitable for a vast array of use cases from development to enterprise-scale production.
The planning phase underscored the importance of a well-defined strategy, covering everything from understanding API inventory and security requirements to choosing the right database and designing a resilient architecture. This meticulous preparation is the bedrock upon which a successful Kong implementation is built, ensuring that the gateway aligns perfectly with your business and technical objectives.
Our step-by-step guide walked through the practicalities of installation, demonstrating how to set up the database and deploy Kong using popular methods such as Docker, Kubernetes with Helm, and native OS packages. This hands-on approach aimed to demystify the initial setup, providing clear instructions for getting Kong operational in various environments. Following installation, we dived deep into configuration, illustrating how to define Services and Routes, the essential components that direct traffic, and how to leverage Kong's powerful plugin system for functionalities like API key authentication and rate limiting. The role of Consumers in applying granular policies was also emphasized, empowering fine-grained control over API access.
Finally, we ventured into advanced features and best practices, crucial for hardening Kong for production. Considerations such as high availability, scalability, comprehensive observability through logging, monitoring, and tracing, and advanced security measures like mTLS and WAF integration were discussed. The importance of automated deployment with tools like deck and a GitOps approach, alongside the strategic use of custom plugins and robust environment management, were presented as indispensable for maintaining a stable and efficient API gateway over time. Mentioning complementary solutions like APIPark during the discussion on observability further highlighted the broader ecosystem available to enhance API management.
By meticulously following this guide, you are now equipped with the knowledge to not only implement Kong API Gateway effectively but also to manage it with the foresight and expertise required for modern, dynamic API ecosystems. The journey of API management is continuous, demanding ongoing refinement and adaptation. Kong provides the flexible and powerful foundation upon which you can build, secure, and scale your APIs with confidence, driving innovation and enhancing the developer experience for years to come. Start experimenting, iterating, and unlocking the full potential of your API infrastructure with Kong.
Frequently Asked Questions (FAQ)
1. What is the primary purpose of an API Gateway? The primary purpose of an API gateway is to act as a single entry point for all client requests to a set of backend APIs or microservices. It centralizes functionalities like request routing, authentication, authorization, rate limiting, caching, and logging, simplifying client interactions, enhancing security, and improving the overall management and scalability of APIs in a distributed system.
2. How does Kong API Gateway differ from a traditional reverse proxy? While Kong API Gateway is built on NGINX, which is a reverse proxy, it offers significantly more advanced functionalities. A traditional reverse proxy primarily forwards requests based on simple rules. An API gateway like Kong, however, is API-aware; it understands the structure and context of API requests and responses. It applies complex policies via a plugin architecture for authentication, rate limiting, traffic transformation, logging, and more, providing a robust management layer specifically designed for APIs and microservices.
3. What are the key considerations when choosing a database for Kong? When choosing between PostgreSQL and Cassandra for Kong's database, key considerations include your existing operational expertise, scalability requirements, and consistency needs. PostgreSQL is generally recommended for most deployments due to its strong consistency, ACID compliance, and easier management, especially if your team is already familiar with relational databases. Cassandra is better suited for extremely large-scale, globally distributed deployments requiring high write throughput and eventual consistency, but it comes with higher operational complexity.
4. Can Kong integrate with existing authentication systems? Yes, Kong is highly extensible and can integrate with various existing authentication systems through its rich plugin ecosystem. It provides built-in plugins for common authentication methods like API Keys, JWT (JSON Web Tokens), OAuth 2.0, OpenID Connect, LDAP, and Basic Authentication. For custom or proprietary authentication systems, you can develop your own Kong plugins using Lua or Go to seamlessly integrate them into your API gateway's security framework.
5. Is Kong suitable for small projects or only large enterprises? Kong is incredibly versatile and suitable for projects of all sizes, from small development environments to large enterprise-grade deployments. Its lightweight nature and ease of setup (especially with Docker) make it excellent for individual developers or small teams to quickly get started. For larger enterprises, Kong's high performance, scalability, advanced features, and extensive plugin architecture provide the robustness and flexibility needed to manage complex API landscapes and high traffic volumes. The ability to deploy on Kubernetes further enhances its suitability for cloud-native enterprise environments.
🚀 You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.
Step 2: Call the OpenAI API.