How to Set Up a Redis Cluster with Docker Compose & GitHub
How to Setup a Robust Redis Cluster with Docker Compose & GitHub for High Availability and Scalability
In the dynamic landscape of modern application development, data storage solutions must offer not only blazing-fast performance but also unwavering reliability and the ability to scale seamlessly. Redis, renowned for its speed as an in-memory data structure store, often becomes a cornerstone for caching, session management, real-time analytics, and message brokering. However, a standalone Redis instance presents a single point of failure and inherent scalability limitations. This is where Redis Cluster emerges as a powerful solution, transforming Redis into a distributed, highly available, and horizontally scalable system.
Setting up a production-grade Redis Cluster can be a daunting task, involving meticulous configuration, network orchestration, and node management. Yet, with the advent of containerization technologies like Docker and orchestration tools such as Docker Compose, the complexity of local development environments and even staging deployments for Redis Cluster can be dramatically simplified. Coupled with GitHub for version control and potential CI/CD automation, developers gain an unparalleled ability to manage, iterate on, and deploy their Redis infrastructure with confidence and efficiency.
This comprehensive guide will delve deep into the intricacies of setting up a robust Redis Cluster using Docker Compose for local development and demonstration purposes, while simultaneously leveraging GitHub to manage your configuration, ensure version consistency, and lay the groundwork for automated deployments. We will traverse the journey from understanding the foundational concepts of Redis Cluster to crafting a detailed Docker Compose configuration, initiating the cluster, and finally, integrating these practices into a GitHub-managed workflow. By the end of this extensive exploration, you will possess a profound understanding and practical skills to deploy and manage a highly available Redis Cluster, a critical component for any performance-sensitive and resilient application.
Chapter 1: Unraveling the Power of Redis Cluster – Fundamentals of Distributed Data
Before we embark on the practical setup, a thorough understanding of Redis Cluster's architecture and underlying principles is paramount. Redis Cluster is designed to provide high availability and scalability through automatic sharding across multiple Redis nodes. It allows your dataset to be automatically partitioned among several instances, enabling horizontal scaling for both data storage and read/write operations.
1.1 What is Redis Cluster and Why is it Essential?
At its core, Redis Cluster is a distributed implementation of Redis where data is automatically split across multiple Redis nodes. Unlike a single Redis instance which holds all the data and becomes a potential bottleneck and single point of failure, a Redis Cluster distributes data, requests, and responsibilities across a network of interconnected instances. This architecture offers several compelling advantages:
- High Availability: In a Redis Cluster, each master node can have one or more replica nodes. If a master node fails, one of its replicas is automatically promoted to become the new master, ensuring continuous operation without manual intervention and with minimal disruption (because replication is asynchronous, a few of the most recent writes may be lost during failover). This automatic failover mechanism is crucial for mission-critical applications where downtime is simply not an option.
- Scalability: As your data grows and traffic intensifies, a standalone Redis instance will eventually hit its limits in terms of memory, CPU, or network bandwidth. Redis Cluster overcomes this by sharding your data across multiple master nodes. This means you can add more master nodes to expand your storage capacity and increase your throughput by distributing the load across more machines. Each node only manages a portion of the entire dataset, making it incredibly efficient.
- Performance: By distributing the dataset and operations, Redis Cluster can handle a significantly higher number of requests per second compared to a single instance. Read and write operations are directed to the specific node responsible for the data, reducing contention and maximizing parallelism.
- Simplicity (Relative): While configuring a distributed system always introduces some complexity, Redis Cluster aims to provide a relatively simple programming model for clients. Clients interact with the cluster as if it were a single instance, with the cluster itself handling the routing of commands to the appropriate node.
1.2 Key Architectural Concepts of Redis Cluster
To truly grasp the mechanics of Redis Cluster, we need to familiarize ourselves with its fundamental components and operational paradigms:
- Nodes: A Redis Cluster is composed of multiple Redis instances, each referred to as a "node." Each node participates in the cluster by storing a subset of the data, communicating with other nodes, and performing health checks.
- Master and Replica Nodes: Nodes can operate in two roles:
- Master Nodes: These nodes hold a portion of the dataset and are responsible for processing read and write operations for the keys they own.
- Replica Nodes (or Slave Nodes): These nodes are exact copies of master nodes. Their primary purpose is to provide high availability. If a master node fails, one of its replicas can be promoted to take over its role. Replicas can also serve read requests, offloading some load from the masters.
- Hash Slots: The entire Redis keyspace is divided into 16384 hash slots. Each key maps to one of these slots via the CRC16 hash function (HASH_SLOT = CRC16(key) mod 16384). Master nodes are each assigned a subset of these hash slots. When a client wants to read or write a key, it first determines the hash slot for that key and then directs the command to the master node responsible for that slot. This sharding mechanism is entirely automatic and transparent to the application.
- Gossip Protocol: Redis Cluster nodes constantly communicate with each other using a gossip protocol. They exchange information about their own state, the state of other nodes, assigned hash slots, and replica configurations. This distributed, asynchronous communication ensures that all nodes eventually converge on a consistent view of the cluster's topology and health.
- Cluster Bus: Nodes communicate using a dedicated TCP port, typically the Redis data port plus 10000 (e.g., if Redis runs on 6379, the cluster bus runs on 16379). This separate channel is used for the gossip protocol, failover coordination, and configuration updates.
- Failover and Elections: When a master node becomes unreachable (due to network partition, crash, etc.), other nodes in the cluster detect its failure through the gossip protocol. If a sufficient number of master nodes agree that the failed master is truly down (a majority vote), one of its replicas is elected and promoted to become the new master. This process is fully automatic and ensures the cluster remains operational.
- Persistence: Like standalone Redis, cluster nodes can be configured with persistence (RDB snapshots or AOF logs) to ensure data durability even in the event of a total cluster shutdown or catastrophic failure. This is critical to prevent data loss.
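The cluster bus rule mentioned above is simple arithmetic (data port plus 10000), which a small shell loop can illustrate; the ports below are just the ones this guide uses:

```shell
# The cluster bus listens on the data port + 10000 for each node.
for port in 6379 6380 6381; do
  echo "data port ${port} -> cluster bus port $((port + 10000))"
done
```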
1.3 Redis Cluster vs. Redis Sentinel: A Brief Comparison
While both Redis Cluster and Redis Sentinel aim to provide high availability, they address different scales and types of problems. Understanding their distinctions is important for choosing the right solution:
| Feature | Redis Sentinel | Redis Cluster |
|---|---|---|
| Primary Goal | High Availability for a single master-replica set. | High Availability & Horizontal Scalability (sharding). |
| Data Partition | No data partitioning; single logical dataset. | Data is partitioned across multiple master nodes (sharding). |
| Scalability | Read scalability via replicas; no write scalability. | Both read and write scalability via multiple master nodes. |
| Architecture | One master, multiple replicas, and multiple Sentinels to monitor. | Multiple master nodes, each with zero or more replicas. All nodes communicate. |
| Complexity | Simpler to set up for basic HA. | More complex setup due to distributed nature and sharding logic. |
| Use Cases | Caching, session store for smaller datasets, high availability without sharding. | Large datasets, high throughput requirements, applications needing true horizontal scaling. |
| Failover | Sentinels monitor and elect a new master. | Nodes themselves detect failures and elect a new master (Raft-like consensus). |
For applications requiring data sharding and massive horizontal scalability, Redis Cluster is the definitive choice. For simpler high-availability needs without data distribution, Redis Sentinel might be sufficient. This guide, however, focuses on the more robust and scalable Redis Cluster.
Chapter 2: Docker and Docker Compose – Streamlining Your Development Environment
Setting up a multi-node distributed system like Redis Cluster directly on your host machine can be cumbersome, leading to port conflicts, dependency hell, and inconsistencies across development environments. Docker and Docker Compose elegantly solve these problems by encapsulating applications and their dependencies into portable, isolated containers.
2.1 The Docker Advantage: Containers, Images, and Isolation
Docker has revolutionized software deployment by introducing the concept of containers. A container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
- Containers vs. Virtual Machines: Unlike traditional virtual machines (VMs) which virtualize the entire hardware stack, containers share the host OS kernel. This makes them incredibly lightweight, start almost instantly, and consume significantly fewer resources.
- Docker Images: A Docker image is a read-only template with instructions for creating a Docker container. You can build your own images or use pre-built images from Docker Hub, like the official redis image.
- Isolation and Portability: Each container runs in isolation from other containers and the host system, ensuring consistent behavior across different environments (development, staging, production). This "build once, run anywhere" philosophy is a cornerstone of modern DevOps.
For our Redis Cluster, Docker means we can spin up multiple Redis instances, each in its own isolated container, with specific configurations and network settings, all without polluting our host machine.
2.2 Docker Compose: Orchestrating Multi-Container Applications with Ease
While Docker is excellent for managing individual containers, real-world applications often consist of multiple interconnected services (e.g., a web server, a database, a cache like Redis). Manually linking and managing these containers can quickly become unwieldy. Docker Compose steps in as the hero here.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file (typically docker-compose.yml) to configure your application's services. Then, with a single command, you create and start all the services from your configuration.
2.2.1 Why Docker Compose is Perfect for Redis Cluster Setup
- Simplified Configuration: Define all your Redis Cluster nodes, their configurations, network settings, and volumes in a single, human-readable docker-compose.yml file. This centralizes your infrastructure definition.
- Environment Consistency: Everyone on your team can use the exact same docker-compose.yml file, guaranteeing that their Redis Cluster setup is identical. This eliminates "it works on my machine" issues.
- Easy Lifecycle Management: Start the entire cluster with docker-compose up, stop it with docker-compose down, and rebuild it with docker-compose build. This makes iterating on your cluster configuration incredibly efficient.
- Network Isolation: Docker Compose automatically creates a dedicated network for your services, allowing them to communicate with each other using their service names (DNS resolution). This simplifies inter-node communication for Redis Cluster.
- Volume Management: Easily define persistent volumes for your Redis data, ensuring that your data isn't lost when containers are stopped or removed.
2.2.2 Core Concepts of Docker Compose
Understanding these concepts will be vital for building our docker-compose.yml:
- Services: Each containerized application component is defined as a service. In our case, each Redis node will be a separate service.
- Images: Specifies the Docker image to use for a service (e.g., redis:7-alpine).
- Ports: Maps host ports to container ports, allowing external access to your services.
- Volumes: Mounts host paths or named volumes into containers for data persistence or configuration files.
- Networks: Defines custom networks for services to communicate over. Services on the same network can reach each other by their service name.
- Environment Variables: Sets environment variables within containers.
- Commands: Overrides the default command for a container, crucial for starting Redis in cluster mode.
By combining the isolation and portability of Docker with the orchestration capabilities of Docker Compose, we can create a sophisticated yet manageable Redis Cluster setup that is ideal for local development, testing, and even for showcasing cluster behavior. This foundational understanding sets the stage for designing and implementing our Redis Cluster architecture.
Chapter 3: Designing Your Redis Cluster Architecture with Docker Compose
Before diving into the actual configuration, a thoughtful design phase is essential. We need to decide on the number of nodes, their roles, network strategy, and data persistence mechanisms. This chapter outlines the architectural considerations for our Docker Compose-based Redis Cluster.
3.1 Choosing the Right Number of Nodes for Your Cluster
A Redis Cluster requires a minimum of three master nodes to function correctly and guarantee automatic failover. This is because the cluster needs a majority of master nodes to agree on a failure before a failover is initiated. For robust high availability, each master node should have at least one replica.
A common and recommended setup for a production-ready, highly available Redis Cluster is to have at least three master nodes, each with one replica. This configuration gives us a total of six Redis instances: three masters and three replicas.
- 3 Master Nodes (M1, M2, M3): Each master will own approximately one-third of the 16384 hash slots.
- 3 Replica Nodes (R1, R2, R3): R1 will replicate M1, R2 will replicate M2, and R3 will replicate M3.
This 3-master, 3-replica setup provides:
- High Availability: If any single master fails, its replica can be promoted. If a master and its replica both fail (an unlikely but possible scenario), the remaining cluster can still function, albeit with a portion of the data unavailable, until the issue is resolved.
- Fault Tolerance: The cluster remains available as long as a majority of master nodes can still communicate and every failed master has an operational replica. With 3 masters, the cluster can survive one master failure: the replica takes over and the cluster stays fully operational. If two of the three masters fail simultaneously, the cluster loses its quorum and stops accepting writes. Therefore, having a replica for each master is paramount.
For our Docker Compose setup, we will aim for this 3-master, 3-replica configuration to simulate a robust production environment effectively.
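For reference, the even slot split that redis-cli --cluster create typically proposes for three masters (0-5460 / 5461-10922 / 10923-16383) can be approximated with a little shell arithmetic. Treat this as an illustration of the split, not the tool's exact internal algorithm:

```shell
# Approximate the even split of 16384 hash slots across 3 masters by
# rounding cumulative boundaries (i * 16384 / 3).
masters=3
start=0
for i in 1 2 3; do
  end=$(awk -v i="$i" -v m="$masters" 'BEGIN { printf "%d", (i * 16384 / m) + 0.5 }')
  echo "master ${i}: slots ${start}-$((end - 1))"
  start=$end
done
```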
3.2 Network Strategy: Ensuring Inter-Node Communication
In a Dockerized environment, proper networking is crucial for the cluster nodes to discover and communicate with each other. Docker Compose automatically creates a default network for all services defined in the docker-compose.yml file. This network allows containers to resolve each other by their service names.
- Custom Bridge Network: It's good practice to explicitly define a custom bridge network within your docker-compose.yml. This provides better isolation and lets you name your network logically. All Redis nodes will join this network.
- Internal DNS Resolution: Within the Docker network, containers can communicate using their service names. For instance, redis-master-1 can connect to redis-master-2 simply by using redis-master-2 as the hostname. This simplifies the Redis Cluster configuration, as we don't need to hardcode IP addresses.
- Host Port Mapping: While inter-node communication happens over the Docker internal network, you'll likely want to expose at least one node's port to your host machine for client connections (e.g., using redis-cli). We will map the standard Redis port (6379) of each node to a unique port on the host (6379, 6380, 6381, etc.). Remember that Redis Cluster also uses a "cluster bus" port (the data port + 10000) for internal communication; it does not need to be exposed to the host for local client access but must be reachable between containers.
3.3 Data Persistence: Safeguarding Your Redis Data
For any non-ephemeral data, persistence is a critical consideration. If your Redis containers are stopped and removed without proper persistence, all your data will be lost. Docker offers volumes as a robust mechanism for data durability.
- Named Volumes: Docker named volumes are the preferred way to persist data generated by Docker containers. They are managed by Docker and are more portable and easier to back up than bind mounts. For our Redis Cluster, we will create a separate named volume for each Redis node to store its RDB snapshots and AOF logs, as well as its cluster configuration file (nodes.conf).
- Mount Points: Each volume will be mounted at the /data directory within its respective Redis container, which is Redis's default working directory and where it stores its persistence files.
3.4 Redis Configuration Parameters for Cluster Mode
To enable Redis Cluster mode, each Redis instance needs specific configuration settings. While the official Redis Docker image allows passing these as command-line arguments, using dedicated configuration files (redis.conf) offers better readability and maintainability, especially for more complex setups.
Key configuration parameters for cluster mode:
- cluster-enabled yes: The most critical setting, explicitly telling Redis to run in cluster mode.
- cluster-config-file nodes.conf: The name of the file where this node's cluster configuration (cluster ID, other nodes' addresses, slot assignments, etc.) is stored. This file is automatically generated and updated by Redis, and it must persist across restarts.
- cluster-node-timeout 5000: The maximum time, in milliseconds, a node can be unreachable before the cluster considers it failed.
- appendonly yes: Enables AOF (Append Only File) persistence, which provides better data durability than RDB snapshots alone.
- port <port>: The standard Redis client port (e.g., 6379). Because each container has its own network namespace, every node can use 6379 internally; only the host-side port mappings must be unique.
- bind 0.0.0.0: Ensures Redis listens on all available network interfaces within the container, allowing communication from other containers on the Docker network.
- protected-mode no: For development environments, this allows connections without authentication or explicit bind addresses. For production, protected-mode yes combined with specific bind IPs and requirepass is recommended.
- loglevel notice: Sets the logging verbosity.
By carefully considering these design aspects, we lay a solid foundation for constructing our docker-compose.yml file, ensuring a functional, robust, and understandable Redis Cluster setup.
Chapter 4: Step-by-Step Redis Cluster Setup with Docker Compose
With our design principles in place, it's time to roll up our sleeves and build the Redis Cluster using Docker Compose. This chapter provides a detailed, step-by-step guide from prerequisites to cluster initialization and verification.
4.1 Prerequisites: What You'll Need
Before you begin, ensure you have the following installed on your system:
- Docker Desktop / Docker Engine: Make sure Docker is installed and running. You can download Docker Desktop for macOS and Windows, or install Docker Engine on Linux.
- Git: For version control and later integration with GitHub.
- Basic understanding of Terminal/Command Prompt: You'll be executing commands.
4.2 Project Structure: Organizing Your Files
A well-organized project structure enhances clarity and maintainability. Let's create a directory for our project:
mkdir redis-cluster-docker
cd redis-cluster-docker
mkdir config
Your project directory will look like this:
redis-cluster-docker/
├── config/
└── docker-compose.yml (will be created)
Inside the config/ directory, we'll place our Redis configuration file template.
4.3 Crafting the Redis Configuration File (config/redis.conf)
While you can pass Redis configuration via command-line arguments in docker-compose.yml, using a separate redis.conf file is cleaner and allows for more detailed settings. Create a file named redis.conf inside the config/ directory with the following content:
# General Redis Configuration
port 6379
bind 0.0.0.0
protected-mode no
loglevel notice
daemonize no # Must be 'no' for Docker containers
# Persistence (Recommended for Cluster)
appendonly yes
appendfsync everysec
# Cluster Specific Configuration
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-require-full-coverage no # Important for local dev/testing - allows cluster to work even if not all slots are covered.
# For production, consider 'yes' to ensure data integrity, but makes cluster less tolerant to partial failures.
Explanation of key redis.conf settings:
- port 6379: Each Redis instance listens on this port inside its container. We'll map different host ports to this internal port.
- bind 0.0.0.0: Allows Redis to accept connections on any interface inside the container, essential for inter-container communication on the Docker network.
- protected-mode no: Disables protected mode, allowing connections without specific binding. For production, this should be yes, with bind restricted to specific IPs and requirepass set for security.
- daemonize no: Redis must run in the foreground inside a Docker container, not as a daemon.
- appendonly yes: Enables AOF persistence, which logs every write operation to ensure data durability.
- cluster-enabled yes: Activates Redis Cluster mode for this instance.
- cluster-config-file nodes.conf: The file where the cluster state is stored. This file is critical and must be persistent.
- cluster-node-timeout 5000: The timeout in milliseconds for a node to be considered unreachable.
- cluster-require-full-coverage no: Crucial for development and learning scenarios. If set to yes, the cluster stops accepting writes whenever some of the 16384 hash slots are uncovered (e.g., if a master node fails and its replica isn't available or hasn't taken over yet, or if a master and all its replicas fail). Setting it to no lets the cluster continue operating with partial data availability in such scenarios, which is useful for testing without strict fault tolerance requirements. For a production environment, yes is generally preferred to prevent data inconsistencies, but it makes the cluster more sensitive to failures.
4.4 Constructing Your docker-compose.yml
Now, let's create the docker-compose.yml file in your redis-cluster-docker root directory. This file will define our six Redis nodes (3 masters, 3 replicas) and the shared network.
version: '3.8'

services:
  redis-master-1:
    image: redis:7-alpine
    container_name: redis-master-1
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_1:/data
    ports:
      - "6379:6379"
      - "16379:16379" # Cluster bus port
    networks:
      - redis-cluster-network
    # Examples if you wanted to pass environment variables:
    # environment:
    #   REDIS_PASSWORD: "your_strong_password"
    #   CLUSTER_MODE: "yes"

  redis-master-2:
    image: redis:7-alpine
    container_name: redis-master-2
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_2:/data
    ports:
      - "6380:6379"
      - "16380:16379" # Cluster bus port
    networks:
      - redis-cluster-network

  redis-master-3:
    image: redis:7-alpine
    container_name: redis-master-3
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_3:/data
    ports:
      - "6381:6379"
      - "16381:16379" # Cluster bus port
    networks:
      - redis-cluster-network

  redis-replica-1:
    image: redis:7-alpine
    container_name: redis-replica-1
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_4:/data
    ports:
      - "6382:6379"
      - "16382:16379" # Cluster bus port
    networks:
      - redis-cluster-network
    depends_on:
      - redis-master-1
      - redis-master-2
      - redis-master-3

  redis-replica-2:
    image: redis:7-alpine
    container_name: redis-replica-2
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_5:/data
    ports:
      - "6383:6379"
      - "16383:16379" # Cluster bus port
    networks:
      - redis-cluster-network
    depends_on:
      - redis-master-1
      - redis-master-2
      - redis-master-3

  redis-replica-3:
    image: redis:7-alpine
    container_name: redis-replica-3
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./config/redis.conf:/usr/local/etc/redis/redis.conf
      - redis_data_6:/data
    ports:
      - "6384:6379"
      - "16384:16379" # Cluster bus port
    networks:
      - redis-cluster-network
    depends_on:
      - redis-master-1
      - redis-master-2
      - redis-master-3

networks:
  redis-cluster-network:
    driver: bridge

volumes:
  redis_data_1:
  redis_data_2:
  redis_data_3:
  redis_data_4:
  redis_data_5:
  redis_data_6:
Understanding the docker-compose.yml structure:
- version: '3.8': Specifies the Docker Compose file format version.
- services:: Defines our individual Redis nodes.
- redis-master-1 (and the others):
  - image: redis:7-alpine: Uses the official Redis 7 image, based on Alpine Linux for a smaller footprint.
  - container_name: Assigns a readable name to the container.
  - command: redis-server /usr/local/etc/redis/redis.conf: Overrides the default command to start Redis with our custom configuration file.
  - volumes::
    - ./config/redis.conf:/usr/local/etc/redis/redis.conf: Binds our host redis.conf file into the container at the expected path.
    - redis_data_1:/data: Mounts the named volume redis_data_1 at the container's /data directory for persistence.
  - ports::
    - "6379:6379": Maps host port 6379 to container port 6379 for redis-master-1. Subsequent nodes map to 6380, 6381, etc., on the host, so you can connect to individual nodes from your host.
    - "16379:16379": Maps the cluster bus port. Exposing it to the host isn't strictly necessary for client connections, but it can be useful for debugging or specific scenarios. Crucially, every host port must be unique.
  - networks:: Connects the service to our custom redis-cluster-network.
  - depends_on (for replicas): Ensures that replica containers are started after the master containers. This is a best-effort ordering: it does not wait for the masters to be "ready," only for them to be started.
- networks: (top level): Defines our custom bridge network redis-cluster-network.
- volumes: (top level): Declares the named volumes that Docker will manage for data persistence. Each node gets its own volume.
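Since every host-side port in the file must be unique, a quick sanity check can save a confusing startup failure. The helper below is illustrative (it is ours, not a Docker Compose feature): it reads a compose file on stdin and flags duplicate host ports in "HOST:CONTAINER" mappings:

```shell
# Fail if any host port appears more than once in "HOST:CONTAINER" mappings.
check_unique_host_ports() {
  # Extract the host side of every quoted "NNNN:NNNN" mapping on stdin.
  ports=$(grep -oE '"[0-9]+:[0-9]+"' | cut -d'"' -f2 | cut -d: -f1)
  dupes=$(printf '%s\n' "$ports" | sort | uniq -d)
  if [ -n "$dupes" ]; then
    echo "duplicate host ports: $dupes" >&2
    return 1
  fi
  echo "all host ports unique"
}

# Typical use against the file from this guide:
#   check_unique_host_ports < docker-compose.yml
```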
4.5 Bringing Up the Containers
Save both redis.conf and docker-compose.yml. Now, navigate to your redis-cluster-docker directory in the terminal and run:
docker compose up -d
- docker compose up: Starts the services defined in docker-compose.yml.
- -d: Runs the containers in detached mode (in the background).
You should see Docker pull the redis:7-alpine image (if not already present) and then create and start six Redis containers. Verify that all containers are running:
docker ps -a
You should see redis-master-1 through redis-replica-3 listed with their respective exposed ports.
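If you'd rather check this programmatically, a small illustrative helper (ours, not a Docker subcommand) can count the redis- containers in `docker ps` output:

```shell
# Count containers whose NAMES column (last field) starts with "redis-".
# Reads `docker ps` output on stdin; skips the header row.
count_redis_containers() {
  awk 'NR > 1 && $NF ~ /^redis-/ { n++ } END { print n + 0 }'
}

# Typical use (expect 6 for the setup in this guide):
#   docker ps | count_redis_containers
```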
4.6 Initializing the Redis Cluster
At this point, you have six independent Redis instances running in Docker containers, each configured to enable cluster mode, but they are not yet part of a functional cluster. We need to explicitly tell them to form a cluster. Redis provides the redis-cli --cluster create command for this.
We'll run this command from within one of our Redis containers or from a temporary redis-cli container, connecting to all the nodes. The command requires the IP addresses (or hostnames in a Docker network) and ports of all master nodes.
Since our containers are on a Docker network, they can resolve each other by their service names. We can connect from one of the containers to simplify things. Let's use redis-master-1 to initiate the cluster.
docker exec -it redis-master-1 redis-cli --cluster create \
redis-master-1:6379 \
redis-master-2:6379 \
redis-master-3:6379 \
redis-replica-1:6379 \
redis-replica-2:6379 \
redis-replica-3:6379 \
--cluster-replicas 1
Let's break down this command:
- docker exec -it redis-master-1: Executes a command inside the redis-master-1 container in interactive mode.
- redis-cli --cluster create: The command to create a new Redis Cluster.
- redis-master-1:6379 ... redis-replica-3:6379: A space-separated list of all Redis node addresses (service name:port) that will participate in the cluster. Note that redis-cli --cluster create takes all nodes, masters and replicas alike, as arguments, and then uses the --cluster-replicas option to assign replicas.
- --cluster-replicas 1: This crucial option tells redis-cli to assign one replica to each master. The utility automatically decides which nodes become masters and which become replicas, distributes the hash slots, and links replicas to their respective masters.
When you run this command, redis-cli will propose a configuration, showing which masters will own which slots and which replicas will be assigned to which masters. It will then ask you to confirm: Can I set the above configuration now? (type 'yes' to accept):
Type yes and press Enter.
The output will indicate that the cluster has been successfully created. You'll see messages like:
>>> Assign a common epoch to all the nodes
>>> Assign hash slots to master 0
>>> Assign replicas to masters
>>> All 16384 slots covered.
4.7 Verifying the Cluster State
After the cluster creation, it's vital to verify its health and configuration. You can do this using redis-cli again:
docker exec -it redis-master-1 redis-cli -c -p 6379 cluster info
The -c flag tells redis-cli to enable cluster mode, allowing it to automatically redirect commands to the correct node. The output of cluster info should show:
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:XXXXX
cluster_stats_messages_received:XXXXX
Key indicators: cluster_state:ok, cluster_slots_assigned:16384, cluster_slots_ok:16384, and cluster_known_nodes:6. This confirms your cluster is up, healthy, and all slots are covered.
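Those key indicators can be checked mechanically, which is handy in scripts. Below is a small sketch of a health-check helper (the function name is ours, not a redis-cli feature) that reads `cluster info` output on stdin:

```shell
# Succeed only if cluster_state is ok and all 16384 slots are assigned and healthy.
cluster_healthy() {
  info=$(cat)
  printf '%s\n' "$info" | grep -q '^cluster_state:ok' &&
  printf '%s\n' "$info" | grep -q '^cluster_slots_assigned:16384' &&
  printf '%s\n' "$info" | grep -q '^cluster_slots_ok:16384'
}

# Typical use (tr strips the CRLF line endings redis-cli emits):
#   docker exec redis-master-1 redis-cli -p 6379 cluster info \
#     | tr -d '\r' | cluster_healthy && echo "healthy" || echo "degraded"
```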
You can also view the detailed node configuration:
docker exec -it redis-master-1 redis-cli -c -p 6379 cluster nodes
This will list all six nodes, their IDs, IP addresses, ports, roles (master/slave), hash slot assignments, and links between masters and replicas. For example:
<master-id> redis-master-1:6379@16379 master - 0 1678822558000 1 connected 0-5460
<replica-id> redis-replica-1:6379@16379 slave <master-id> 0 1678822558000 2 connected
...
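If you want to inspect this output programmatically, the line format is stable: node ID, address, flags, master ID (or `-`), ping/pong timestamps, config epoch, link state, then any slot ranges. A small Python sketch of a parser (field positions follow the documented `CLUSTER NODES` layout; the function name is my own):

```python
def parse_cluster_nodes(raw: str) -> list:
    """Parse `redis-cli cluster nodes` output into one record per node."""
    nodes = []
    for line in raw.splitlines():
        fields = line.split()
        if len(fields) < 8:
            continue  # skip blank or partial lines
        nodes.append({
            "id": fields[0],
            "addr": fields[1],  # host:port@cluster-bus-port
            "role": "master" if "master" in fields[2].split(",") else "slave",
            "master_id": None if fields[3] == "-" else fields[3],
            "slots": fields[8:],  # e.g. ['0-5460'] for masters, [] for replicas
        })
    return nodes


sample = (
    "aaa111 redis-master-1:6379@16379 master - 0 1678822558000 1 connected 0-5460\n"
    "bbb222 redis-replica-1:6379@16379 slave aaa111 0 1678822558000 2 connected\n"
)
for n in parse_cluster_nodes(sample):
    print(n["addr"], n["role"], n["slots"])
```

A parser like this is handy in failover tests: capture the node list before and after stopping a master and diff the roles.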
4.8 Testing Data Storage and Failover
Let's test if the cluster is working by setting and getting keys:
docker exec -it redis-master-1 redis-cli -c -p 6379
Now you are in the redis-cli interactive shell in cluster mode.
127.0.0.1:6379> set mykey1 "hello cluster"
-> Redirected to slot 15495 residing at redis-master-3:6379
OK
127.0.0.1:6379> set mykey2 "another value"
-> Redirected to slot 4443 residing at redis-master-1:6379
OK
127.0.0.1:6379> get mykey1
-> Redirected to slot 15495 residing at redis-master-3:6379
"hello cluster"
127.0.0.1:6379> get mykey2
-> Redirected to slot 4443 residing at redis-master-1:6379
"another value"
Notice how redis-cli automatically redirects your commands to the correct master node based on the key's hash slot.
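The redirection target is deterministic: per the Redis Cluster specification, the key is hashed with CRC16 (XModem variant) and taken modulo 16384, honouring `{...}` hash-tag syntax. A minimal Python sketch of that mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, init 0), the checksum Redis Cluster uses for keys."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def key_hash_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honouring {hash tag} syntax."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only its contents
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384


print(key_hash_slot("foo"))  # 12182, matching CLUSTER KEYSLOT foo
# Keys sharing a hash tag land in the same slot, so they can be used in multi-key operations:
print(key_hash_slot("{user1000}.following") == key_hash_slot("{user1000}.followers"))  # True
```

This is also why multi-key commands in a cluster require all keys to map to the same slot; hash tags give you explicit control over that.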
Testing Failover (Optional but Recommended):
To simulate a master failure:
1. Identify one of your master nodes (e.g., `redis-master-1`) using `cluster nodes`.
2. Stop that container: `docker stop redis-master-1`
3. Wait a few seconds for the cluster to detect the failure and promote a replica.
4. Check the cluster state: `docker exec -it redis-master-2 redis-cli -c -p 6379 cluster nodes`. You should see that the former replica of `redis-master-1` has been promoted to master, and the failed `redis-master-1` is marked as `fail`.
5. Try to get the keys you previously set. Data should still be accessible.
6. Start the failed master again: `docker start redis-master-1`. It will rejoin the cluster as a replica of the newly promoted master.
4.9 Cleaning Up Your Cluster
When you're done with your Redis Cluster, you can stop and remove all containers, networks, and volumes defined in docker-compose.yml:
docker compose down --volumes
- `docker compose down`: Stops and removes containers and networks.
- `--volumes`: Crucially, this option also removes the named volumes, which contain your Redis data and cluster configuration. Be cautious with this in environments where you need to preserve data; for development, a clean slate is often exactly what you want.
By following these detailed steps, you have successfully set up a functional, highly available, and scalable Redis Cluster using Docker Compose, ready for local development and rigorous testing. This controlled environment is perfect for experimenting with Redis Cluster features without impacting your host system.
Chapter 5: Integrating with GitHub for Version Control and Automation Foundation
Once your Redis Cluster configuration is working locally with Docker Compose, the next logical step is to integrate it with GitHub. GitHub provides robust version control, collaboration features, and a platform for continuous integration and continuous deployment (CI/CD), ensuring your infrastructure as code is managed professionally.
5.1 Setting Up Your GitHub Repository
Version control is paramount for managing infrastructure configurations. It allows you to track changes, revert to previous versions, and collaborate effectively.
- Initialize a Git Repository: Navigate to your `redis-cluster-docker` project directory in your terminal.
  ```bash
  git init
  ```
- Add Files to Staging: Add your `docker-compose.yml` and `config/redis.conf` files to the Git staging area.
  ```bash
  git add .
  ```
- Make Your First Commit: Commit these files to your local repository with a descriptive message.
  ```bash
  git commit -m "Initial commit: Redis Cluster setup with Docker Compose"
  ```
- Create a GitHub Repository: Go to GitHub.com, log in, and create a new repository (e.g., `redis-cluster-docker-setup`). Choose whether it's public or private.
- Link Local to Remote: Follow the instructions on GitHub to link your local repository to the newly created remote repository.
  ```bash
  git branch -M main
  git remote add origin https://github.com/your-username/redis-cluster-docker-setup.git
  git push -u origin main
  ```

Now your Redis Cluster setup files are safely stored on GitHub, allowing you to track changes, collaborate with teammates, and easily replicate the setup on other machines.
5.2 Version Control Best Practices for Infrastructure as Code
Managing infrastructure configurations effectively requires adherence to certain best practices:
- Meaningful Commit Messages: Every commit should have a clear, concise message describing the changes made and why. This helps in understanding the history of your infrastructure.
- Feature Branches: For any significant changes or new features (e.g., adding more nodes, changing persistence settings), create a separate feature branch. This keeps your `main` or `master` branch stable.
  ```bash
  git checkout -b add-monitoring-ports
  # Make changes to docker-compose.yml
  git add .
  git commit -m "Added Prometheus exporter ports to Redis nodes"
  git push origin add-monitoring-ports
  ```
- Pull Requests (PRs): Before merging changes into the `main` branch, open a pull request. This facilitates code review by teammates, ensuring quality and catching potential issues before they impact the stable environment.
- Tagging Releases: Once a stable version of your infrastructure configuration is deployed to a specific environment (e.g., staging), tag that commit in Git. This makes it easy to revert to or redeploy known good states.
  ```bash
  git tag -a v1.0.0 -m "Redis Cluster v1.0.0 - Initial stable setup"
  git push origin v1.0.0
  ```
- Ignore Sensitive Files: Use a `.gitignore` file to prevent sensitive information (like `.env` files with passwords) or generated files (like Docker's `nodes.conf` if not persistent by design) from being committed to the repository. For our Redis Cluster, `nodes.conf` is generated by Redis and is part of the persistent data, so it resides in the Docker volume, not directly in our Git repo.
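A starting point for such a `.gitignore` might look like the following (these entries are typical suggestions, not requirements of this setup; adjust them for your project):

```gitignore
# Secrets and local environment overrides
.env
*.env.local

# Editor/OS noise
.DS_Store
.idea/
.vscode/

# Local Redis artifacts, relevant only if you bind-mount data
# directories instead of using named volumes
data/
*.rdb
*.aof
```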
5.3 CI/CD with GitHub Actions: A Conceptual Foundation
While docker compose up is perfect for local development, for more automated and robust deployments (even to remote servers or cloud environments), Continuous Integration/Continuous Deployment (CI/CD) pipelines are invaluable. GitHub Actions is GitHub's built-in CI/CD platform that allows you to automate workflows directly from your repository.
Why Automate Deployment/Testing?
- Consistency: Ensures that your Redis Cluster is always deployed in the exact same way, eliminating manual errors.
- Speed: Automates repetitive tasks, dramatically reducing deployment time.
- Reliability: Automated tests can verify the cluster's health and functionality after deployment.
- Scalability: Easier to deploy to multiple environments (dev, staging, production) with minimal effort.
Basic Workflow Example for Redis Cluster (Conceptual):
You could define a GitHub Actions workflow (.github/workflows/deploy-redis-cluster.yml) that, upon a push to the main branch, performs the following steps:
- Checkout Code: Retrieves your `docker-compose.yml` and `config/redis.conf`.
- Setup Docker: Ensures Docker is available in the runner environment.
- Spin Up Cluster: Runs `docker compose up -d` to bring up the Redis containers.
- Wait for Services: Implements a short delay or health check to ensure the Redis containers are fully started.
- Initialize Cluster: Executes the `docker exec ... redis-cli --cluster create` command to form the cluster.
- Run Health Checks/Tests: Uses `redis-cli cluster info` and `redis-cli cluster nodes` to verify the cluster state. You could also write simple client scripts to `SET` and `GET` data to ensure functionality.
- Clean Up (Optional): Runs `docker compose down` if this is purely for testing the setup process. For deployment, this step would be omitted.
Example Snippet for a GitHub Actions Workflow (conceptual deploy-redis-cluster.yml):
name: Deploy Redis Cluster (Local Test)
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
deploy-test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Compose
uses: docker/setup-buildx-action@v2 # Not strictly needed for simple compose, but good for robust setup
- name: Start Redis Cluster containers
run: docker compose up -d
- name: Wait for Redis containers to be ready
run: sleep 10 # Crude wait, consider actual health checks for production
- name: Initialize Redis Cluster
run: |
docker exec redis-master-1 redis-cli --cluster create \
redis-master-1:6379 \
redis-master-2:6379 \
redis-master-3:6379 \
redis-replica-1:6379 \
redis-replica-2:6379 \
redis-replica-3:6379 \
--cluster-replicas 1 \
--cluster-yes # Auto-confirm
env:
# Pass any necessary environment variables for the cli command, though not strictly needed here
DOCKER_BUILDKIT: 1
- name: Verify Redis Cluster state
run: |
docker exec redis-master-1 redis-cli -c -p 6379 cluster info
docker exec redis-master-1 redis-cli -c -p 6379 cluster nodes
- name: Test Redis Cluster data operations
run: |
docker exec redis-master-1 redis-cli -c -p 6379 SET mykey "Hello from CI"
docker exec redis-master-1 redis-cli -c -p 6379 GET mykey
- name: Clean up containers (optional for test)
if: always() # Run even if previous steps fail
run: docker compose down --volumes
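The `sleep 10` step in the workflow is admittedly crude. A more robust approach is to declare a healthcheck on each Redis service in `docker-compose.yml`; the `PING`-based test below is a common convention, not part of the original setup, so treat it as a sketch:

```yaml
services:
  redis-master-1:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "-p", "6379", "ping"]
      interval: 2s
      timeout: 3s
      retries: 15
```

With healthchecks in place, the workflow can poll `docker inspect --format '{{.State.Health.Status}}' redis-master-1` in a loop until it reports `healthy`, rather than sleeping a fixed amount of time.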
This conceptual workflow demonstrates how GitHub Actions can automate the setup, verification, and teardown of your Redis Cluster configuration. For actual deployments to production, you'd integrate with cloud providers (AWS, Azure, GCP) or Kubernetes, often requiring more sophisticated actions for authentication, resource provisioning, and managing secrets.
5.4 Simple Mention of APIPark
While this article focuses on the infrastructure setup of Redis, robust API management is crucial for applications leveraging such data stores. For complex scenarios involving AI models or numerous REST services that need to interact with data layers like Redis, platforms like APIPark provide an excellent open-source solution. It simplifies the integration and management of diverse APIs, offering features like unified API formats and end-to-end lifecycle management, which can be invaluable when building microservices that depend on a highly available data backbone like Redis Cluster. Imagine an application layer consuming data from this Redis cluster via a set of well-defined APIs; APIPark can manage these APIs, enforce security policies, and provide analytics on their usage, streamlining the interaction between your applications and your powerful Redis backend.
By using GitHub for version control and laying the foundation for CI/CD with GitHub Actions, you establish a professional, automated, and collaborative workflow for managing your Redis Cluster infrastructure, preparing it for both local development and more advanced deployment scenarios.
Chapter 6: Advanced Topics, Monitoring, Security, and Production Considerations
Having successfully set up and verified your Redis Cluster with Docker Compose and integrated it with GitHub, it's crucial to consider advanced topics that move beyond a basic development environment. These considerations are vital for building a production-ready, secure, and performant distributed system.
6.1 Monitoring Your Redis Cluster: Staying Informed
A production Redis Cluster demands vigilant monitoring to ensure optimal performance, detect issues proactively, and prevent potential outages. Without adequate monitoring, you are operating in the dark.
- Redis `INFO` Command: The `INFO` command is your first line of defense. It provides a wealth of information about the Redis server's state, memory usage, CPU, connections, persistence, and, crucially, cluster-specific metrics. You can run `redis-cli -c cluster info` and `redis-cli -c info` on individual nodes to gather insights.
- Prometheus and Grafana: This is a popular and powerful open-source monitoring stack.
  - Prometheus: A time-series database and monitoring system. You can use the `redis_exporter` (a separate Docker container) to scrape metrics from each Redis node. Each exporter exposes a `/metrics` endpoint that Prometheus can pull from.
  - Grafana: A data visualization tool. You can connect Grafana to Prometheus and create dashboards to visualize key Redis metrics (e.g., memory usage, connected clients, hit/miss ratio, replication lag, cluster state). This provides a real-time, consolidated view of your cluster's health.
- Logging: Ensure your Redis nodes are configured with appropriate logging levels (`loglevel notice`, or `verbose` for troubleshooting). Integrate these logs with a centralized logging system (e.g., the ELK Stack of Elasticsearch, Logstash, and Kibana, or Splunk) to aggregate, search, and analyze logs from all cluster nodes. This is invaluable for debugging issues and understanding historical events.
- Alerting: Beyond visualization, set up alerts in Prometheus/Grafana or your logging system. These alerts should trigger notifications (email, Slack, PagerDuty) when critical thresholds are crossed (e.g., high memory usage, master node down, replication lag exceeding limits).
For a Docker Compose setup, you could extend your docker-compose.yml to include redis_exporter services for each Redis node, along with Prometheus and Grafana, creating a fully integrated monitoring stack for your local cluster.
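Such an extension could look roughly like the snippet below. The `oliver006/redis_exporter` image is the commonly used exporter, and 9121 is its default metrics port; the network name `redis-cluster-net` is an illustrative placeholder for whatever network your compose file defines, and you would repeat the exporter service once per Redis node:

```yaml
services:
  redis-exporter-1:
    image: oliver006/redis_exporter:latest
    command: ["--redis.addr=redis://redis-master-1:6379"]
    ports:
      - "9121:9121"   # default redis_exporter metrics port
    networks:
      - redis-cluster-net

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    networks:
      - redis-cluster-net
```

Prometheus then scrapes each exporter's `/metrics` endpoint according to the scrape targets listed in `prometheus.yml`, and Grafana visualizes the resulting time series.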
6.2 Security Considerations: Protecting Your Data
Running a Redis Cluster, especially in production, requires a robust security posture. Our Docker Compose setup, while convenient for development, has some inherent vulnerabilities that must be addressed for real-world deployments.
- Network Isolation: In our Docker Compose setup, all services are on a private Docker bridge network. This is good, but for production, ensure your Redis Cluster is behind a firewall and isolated in its own private subnet. Only application servers or other trusted services should have network access. Do not expose Redis ports directly to the public internet.
- Authentication (`requirepass`): Redis by default has no authentication. This is a major security risk. You should always configure a strong password using the `requirepass` directive in your `redis.conf`, and `masterauth` for replicas:
  ```conf
  requirepass your_strong_password
  masterauth your_strong_password
  ```
  Clients will then need to authenticate using `AUTH your_strong_password`. Store passwords securely (e.g., Docker secrets, environment variables in a secure CI/CD pipeline).
- TLS/SSL Encryption: For data in transit, especially if your application servers are not on the same highly secured network segment as Redis, use TLS/SSL encryption. Redis 6.0 and later versions support TLS. This encrypts communication between clients and Redis, and potentially between Redis cluster nodes.
- Rename or Disable Dangerous Commands: Commands like `FLUSHALL`, `FLUSHDB`, `CONFIG`, and `DEBUG` can be dangerous in production. Consider renaming them to obscure names, or disabling them entirely by renaming them to an empty string, using the `rename-command` directive in `redis.conf`.
- Least Privilege: Ensure that applications connecting to Redis only have the minimum necessary permissions. If using Redis ACLs (Access Control Lists, introduced in Redis 6), define users with specific permissions rather than using a single, all-powerful password.
- Regular Updates: Keep your Redis instances and Docker images updated to the latest stable versions to benefit from security patches and bug fixes.
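Pulling the directives above together, a hardened `redis.conf` fragment might look like the sketch below. The password and the renamed command name are placeholders you must replace, and renaming commands like `CONFIG` can interfere with tooling, so verify that `redis-cli --cluster` operations still work afterwards:

```conf
# Require clients to authenticate; replicas use masterauth to reach their master
requirepass your_strong_password
masterauth your_strong_password

# Disable the most dangerous commands by renaming them to the empty string,
# and hide CONFIG behind an obscure name (placeholder shown)
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG admin-config-8f2a

protected-mode yes
```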
6.3 Performance Tuning: Maximizing Redis's Potential
While Redis is inherently fast, proper tuning can further enhance its performance, especially under heavy load.
- Memory Management:
  - `maxmemory`: Set a `maxmemory` limit to prevent Redis from consuming all available RAM, which could lead to system instability.
  - `maxmemory-policy`: Choose an appropriate eviction policy (e.g., `allkeys-lru`, `volatile-lru`, `noeviction`) based on your application's needs. `allkeys-lru` (Least Recently Used) is a common choice for caches.
  - Memory Fragmentation: Monitor `mem_fragmentation_ratio` in `INFO memory`. High fragmentation can indicate wasted memory. Restarting Redis can reclaim memory, but for a cluster, this needs careful orchestration.
- CPU Usage: Redis is single-threaded for command processing. While newer versions use threads for I/O, the main event loop is single-threaded. Ensure the CPU core where the Redis process runs is not oversubscribed. For multi-core machines, ensure each Redis instance (master and replica) has a dedicated CPU core if possible, or enough processing power.
- Network Optimization:
- Latency: Minimize network latency between your application and Redis nodes. Co-locate them if possible.
- Bandwidth: Ensure sufficient network bandwidth, especially with high-throughput applications or during replication.
- Persistence Strategy:
- RDB vs. AOF: AOF generally provides better durability (less data loss on crash) but can be slower and generate larger files than RDB. RDB is good for point-in-time backups. Most production setups use AOF, often combined with daily RDB snapshots.
- `appendfsync`: For AOF, `everysec` is a good balance between durability and performance. `always` is safest but slowest; `no` is fastest but risks losing more data.
- Client Connection Pooling: Configure your application clients to use connection pooling to avoid the overhead of establishing new connections for every command.
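As a concrete example, the memory and persistence settings discussed above could be expressed in `redis.conf` as follows. The 1gb cap and the snapshot schedule are arbitrary illustrations; size them to your workload:

```conf
# Cap memory and evict least-recently-used keys across the whole keyspace
maxmemory 1gb
maxmemory-policy allkeys-lru

# AOF persistence with a durability/performance balance
appendonly yes
appendfsync everysec

# Keep periodic RDB snapshots as point-in-time backups
save 900 1
save 300 10
```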
6.4 Scaling Your Cluster: Adding and Removing Nodes
One of the primary benefits of Redis Cluster is its ability to scale horizontally. While our Docker Compose setup is fixed for 6 nodes, understanding how to scale is crucial for production.
- Adding Master Nodes:
- Provision new Redis instances (Docker containers, VMs, etc.), configured for cluster mode but not yet part of a cluster.
- Use `redis-cli --cluster add-node <new_node_ip>:<new_node_port> <existing_node_ip>:<existing_node_port>` to add the new node as a master.
- Migrate hash slots from existing masters to the new master using `redis-cli --cluster reshard <existing_node_ip>:<existing_node_port>`. This balances the data distribution.
- Adding Replica Nodes:
- Provision new Redis instances, configured for cluster mode.
- Use `redis-cli --cluster add-node <new_replica_ip>:<new_replica_port> <existing_master_ip>:<existing_master_port> --cluster-slave --cluster-master-id <master_node_id>` to add the new node as a replica for a specific master.
- Removing Nodes:
- Migrate slots off the master node you wish to remove (if it's a master) using `redis-cli --cluster reshard`.
- Remove the node from the cluster using `redis-cli --cluster del-node <node_ip>:<node_port> <node_id_to_remove>`.
- Decommission the instance.
These operations are complex and require careful planning and execution, especially in production, to avoid data loss or cluster instability.
6.5 Production Deployment Considerations: Beyond Docker Compose
While Docker Compose is excellent for local development and testing, it's typically not the tool of choice for managing production Redis Clusters on its own.
- Kubernetes: For containerized production deployments, Kubernetes (K8s) is the industry standard. It offers robust orchestration, self-healing, scaling, and secret management capabilities. Deploying Redis Cluster on Kubernetes usually involves using StatefulSets, persistent volumes, and custom operators (like the Redis Operator) to manage the cluster's lifecycle. This provides far more resilience and automation than a standalone Docker Compose setup.
- Cloud Provider Services: Major cloud providers (AWS ElastiCache, Azure Cache for Redis, Google Cloud Memorystore) offer managed Redis services, including cluster mode. These services handle the underlying infrastructure, patching, scaling, and failover, significantly reducing operational overhead. They are often the easiest and most reliable way to run Redis Cluster in production.
- Bare Metal / VMs: Deploying directly on virtual machines or bare metal gives you maximum control but also the most operational responsibility. Automation tools like Ansible, Chef, or Puppet would be used for provisioning and configuration.
Our Docker Compose setup serves as an invaluable learning tool and a consistent development environment. It allows you to rapidly prototype, test, and understand the intricacies of Redis Cluster without the complexities of a full production orchestration system. The principles learned here are directly transferable to more advanced deployment strategies.
By thoughtfully addressing these advanced topics, from meticulous monitoring and stringent security to performance optimization and scalable deployment strategies, you elevate your Redis Cluster from a mere local setup to a robust, production-ready data solution, capable of meeting the demands of high-performance, resilient applications.
Conclusion
Throughout this extensive guide, we have embarked on a comprehensive journey to demystify the process of setting up a robust Redis Cluster using the powerful combination of Docker Compose and GitHub. We began by solidifying our understanding of Redis Cluster's fundamental architecture, recognizing its indispensable role in achieving high availability and horizontal scalability for modern applications. The crucial distinctions between master and replica nodes, hash slots, and the automatic failover mechanisms were thoroughly explored, laying the theoretical groundwork for our practical implementation.
Next, we embraced the elegance of Docker and Docker Compose, illustrating how these containerization and orchestration tools dramatically simplify the complexities inherent in deploying multi-node distributed systems like Redis Cluster. By leveraging Docker Compose, we demonstrated how to define, start, and manage six interconnected Redis instancesβthree masters and three replicasβeach running in its isolated container, communicating seamlessly over a custom Docker network, and preserving data through dedicated named volumes.
The heart of our practical guide lay in the step-by-step configuration. We meticulously crafted a generic redis.conf tailored for cluster mode and constructed a detailed docker-compose.yml file, carefully mapping ports, volumes, and network settings for each node. The subsequent cluster initialization using redis-cli --cluster create and rigorous verification processes ensured that our distributed Redis environment was fully functional and ready for action.
Beyond the initial setup, we elevated our approach by integrating with GitHub, transforming our local configuration into version-controlled infrastructure as code. We discussed best practices for repository management, branching strategies, and the pivotal role of pull requests in collaborative development. Furthermore, we touched upon the conceptual framework of GitHub Actions, envisioning how CI/CD pipelines can automate the deployment, testing, and validation of our Redis Cluster, fostering consistency and efficiency across environments. We also saw how a product like APIPark could fit into a broader ecosystem of API management for applications interacting with such a powerful data store.
Finally, we ventured into advanced topics critical for production readiness: from establishing comprehensive monitoring strategies with Prometheus and Grafana, implementing stringent security measures (authentication, network isolation, TLS), to optimizing performance through memory management and persistence fine-tuning. We also conceptually covered the scaling of a Redis Cluster and differentiated between Docker Compose's role in local development versus more robust production orchestration solutions like Kubernetes or managed cloud services.
By mastering the techniques and concepts presented in this guide, you are now equipped not only to deploy a functional Redis Cluster in a Dockerized environment but also to understand the underlying principles that govern its distributed nature. This foundational knowledge, coupled with effective version control and an eye towards automation, empowers you to build more resilient, scalable, and manageable applications, confident in your ability to leverage Redis Cluster as a cornerstone of your data infrastructure. The journey from a single Redis instance to a distributed, high-performance cluster is a significant leap, and with Docker Compose and GitHub, it is now more accessible and manageable than ever before.
Frequently Asked Questions (FAQs)
1. What is the minimum number of nodes required for a Redis Cluster, and why? A Redis Cluster requires a minimum of three master nodes to guarantee automatic failover and maintain quorum. This is because the cluster needs a majority of master nodes (N/2 + 1, where N is the total number of masters) to agree on a master's failure before initiating a replica promotion. For a cluster with three masters, at least two must be active for a failover to occur. While you can technically create a cluster with fewer masters, it will lack full fault tolerance. To achieve true high availability, it is strongly recommended to have at least three master nodes, each with at least one replica, totaling a minimum of six nodes.
2. Why use Docker Compose for Redis Cluster instead of just standalone Docker containers or a direct installation? Docker Compose simplifies the management of multi-container applications. For a Redis Cluster, which involves multiple interconnected Redis instances, Docker Compose allows you to define all services, networks, and volumes in a single docker-compose.yml file. This centralizes configuration, ensures environment consistency across development teams, makes it easy to start/stop the entire cluster with one command, and handles internal networking seamlessly. While direct installation is possible, it's prone to configuration errors and lacks the portability and isolation benefits of Docker. For production, Docker Compose is typically used for local development, while Kubernetes or managed cloud services are preferred for robust orchestration.
3. How do I ensure data persistence for my Redis Cluster when using Docker Compose? Data persistence is achieved by mapping Docker volumes to the /data directory inside each Redis container. In the provided docker-compose.yml, separate named volumes (e.g., redis_data_1, redis_data_2) are declared and mounted for each Redis node. These volumes store Redis's RDB snapshots, AOF logs, and the crucial nodes.conf cluster configuration file. This ensures that your data and cluster state are preserved even if containers are stopped, removed, or restarted. Without volumes, all data would be lost upon container removal.
4. Can I expose a Redis Cluster running in Docker Compose to external applications on my host or other machines? Yes, you can expose the Redis Cluster to applications outside the Docker network by mapping container ports to host ports in your docker-compose.yml file (e.g., "6379:6379"). Applications can then connect to the Redis nodes using the host's IP address and the mapped host ports. When connecting in cluster mode, your Redis client needs to be cluster-aware and should connect to one of the exposed master nodes, which will then redirect it to the appropriate node for the specific key. For security in non-development environments, ensure these ports are not exposed to the public internet and are properly firewalled.
5. What is the difference between Redis Cluster and Redis Sentinel, and when should I use one over the other? Both Redis Cluster and Redis Sentinel provide high availability for Redis, but they solve different problems:
- Redis Sentinel focuses purely on high availability for a single master-replica Redis setup. It monitors the master, performs automatic failover if the master fails, and provides service discovery. It does not provide horizontal scalability or data sharding. Use Sentinel when your dataset fits into a single Redis instance, but you need automated failover.
- Redis Cluster provides both high availability and horizontal scalability by automatically sharding your data across multiple master nodes. If a master fails, its replica is promoted. If you need to store large datasets that exceed a single server's memory or require higher throughput than a single instance can provide, Redis Cluster is the appropriate choice.
You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.
