Mastering the Dynamic Client to Watch All Kinds of CRDs
In the sprawling, ever-evolving landscape of cloud-native computing, Kubernetes stands as the undisputed orchestrator, a powerful foundation upon which modern applications are built. Its declarative API-driven paradigm empowers developers to define the desired state of their systems, letting Kubernetes tirelessly work to achieve and maintain that state. A cornerstone of Kubernetes' extensibility, and arguably one of its most transformative features, is the Custom Resource Definition (CRD). CRDs allow users to extend Kubernetes' native capabilities, introducing custom resource types that behave just like built-in ones – Pods, Deployments, Services – but are tailored to specific application domains or operational patterns.
However, merely defining a custom resource is only half the battle. To truly harness the power of CRDs, one must be able to interact with them programmatically, to watch their states, and to react to changes. This is where the Kubernetes client-go library comes into play, providing the essential tools for Go applications to communicate with the Kubernetes API server. While client-go offers strongly-typed clients for known, compiled-in resource types, the dynamic nature of CRDs — where resource schemas and kinds might not be known until runtime, or where a single controller might need to manage a multitude of custom resources — necessitates a more flexible approach. This is the realm of the dynamic.Interface, the dynamic client. This comprehensive guide will delve deep into mastering the dynamic client to effectively watch and manage "all kinds" of resources defined via CRDs, empowering you to build truly generic and robust Kubernetes operators and controllers. We will explore the intricacies of Kubernetes API interactions, the power of informers, and the specific mechanisms that allow a single piece of code to observe and react to an infinitely extensible universe of custom resources.
The Kubernetes Extension Ecosystem: CRDs Unveiled
Kubernetes, at its core, is a control plane designed to manage containerized workloads and services across a cluster of machines. While it provides a rich set of built-in resources like Deployments, Pods, and Services, the complexity and diversity of modern applications often necessitate capabilities beyond these primitives. This is where Custom Resource Definitions (CRDs) become invaluable, offering a powerful mechanism to extend the Kubernetes API and define entirely new resource types.
A CRD acts as a blueprint, telling the Kubernetes API server about a new kind of object that it should recognize and manage. When you create a CRD, you're essentially registering a new API endpoint within Kubernetes. For instance, if you're building an operator for a specific database, you might define a Database CRD. This Database CRD would then allow users to declare instances of their database (e.g., my-prod-db, dev-test-db) as Kubernetes objects. These custom objects, known as Custom Resources (CRs), can be created, updated, and deleted using standard kubectl commands or through Kubernetes API clients, just like any native resource. This seamless integration means that users interact with your custom application or infrastructure components using the same declarative paradigm they've come to expect from Kubernetes.
The structure of a CRD is defined in a YAML manifest, specifying crucial details such as:
- `apiVersion` and `kind`: Standard Kubernetes object identification, typically `apiextensions.k8s.io/v1` and `CustomResourceDefinition`.
- `metadata.name`: The plural name of your custom resource joined with its API group (e.g., `databases.stable.example.com`).
- `spec.group`: The API group for your custom resource (e.g., `stable.example.com`). This helps organize and version your custom APIs.
- `spec.versions`: A list of versions for your custom resource (e.g., `v1alpha1`, `v1`). Each version can have its own schema. This is critical for evolving your CRD without breaking backward compatibility.
- `spec.scope`: Whether the custom resource is `Namespaced` (like Pods) or `Cluster` (like Nodes).
- `spec.names`: Defines how your custom resource will be referenced:
  - `plural`: The plural name used in URLs (e.g., `databases`).
  - `singular`: The singular name (e.g., `database`).
  - `kind`: The Kind of the object (e.g., `Database`). This is what users put in their YAML files under `kind: Database`.
  - `shortNames`: Optional, shorter aliases for `kubectl` commands (e.g., `db`).
- `spec.versions[].schema.openAPIV3Schema`: The most critical part, defining the schema for your custom resource using the OpenAPI v3 specification. It dictates what fields your custom resource can have, their types, and validation rules. The API server validates every object against this schema, ensuring data integrity. For example, a `Database` CRD might define fields for `spec.engine` (e.g., "PostgreSQL", "MySQL"), `spec.version`, `spec.size`, and `status.phase` (e.g., "Provisioning", "Ready", "Failed").
The power of CRDs extends beyond merely defining new resource types; they are the bedrock upon which the entire Operator pattern is built. An Operator is a method of packaging, deploying, and managing a Kubernetes-native application. It leverages CRDs to extend the Kubernetes API for specific applications and uses a custom controller to continuously observe the state of these CRs and make appropriate changes to the underlying infrastructure or application components to bring the actual state closer to the desired state. For instance, a PostgreSQL Operator might watch PostgreSQL CRs. When a new PostgreSQL CR is created, the operator might provision a new Pod running PostgreSQL, create persistent storage, configure network access, and update the CR's status to "Ready". This deep integration of application-specific logic directly into the Kubernetes control plane is what makes CRDs and Operators so incredibly powerful for automating complex deployments and operations.
The API server itself plays a pivotal role in handling CRDs. When a CRD is submitted, the API server registers the new resource type. From that point on, it becomes responsible for storing, validating (against the OpenAPI schema), and serving instances of that custom resource. This means all the built-in functionalities of the Kubernetes API server—authentication, authorization (RBAC), admission control, and event generation—apply equally to custom resources, ensuring a consistent and secure management experience across the cluster. This consistency is a major advantage, as it allows developers and operators to leverage their existing Kubernetes knowledge and tooling when interacting with custom applications.
Navigating Kubernetes APIs: The client-go Perspective
To programmatically interact with a Kubernetes cluster from a Go application, the client-go library is the de facto standard. It provides a comprehensive set of tools for developing controllers, operators, and other Kubernetes-aware applications. At its heart, client-go offers different interfaces for interacting with the Kubernetes API server, each suited for particular scenarios. Understanding these distinctions is crucial for building efficient and maintainable code.
Primarily, client-go provides two main categories of clients:
- Typed Clients (or Standard Clients): These clients are generated directly from the Kubernetes API definitions for built-in resources (like Pods, Deployments, Services) and for CRDs where the Go types are known at compile time. When you use a typed client, you interact with Go structs that directly mirror the Kubernetes resource definitions — for example, `corev1.Pod` for Pods or `appsv1.Deployment` for Deployments. If you've defined a CRD and generated Go types for it (e.g., using `controller-gen`), you'd have a clientset for your custom resources, like `examplecomv1.Database`.

  Advantages of typed clients:
  - Type safety: You get compile-time checks, ensuring that you're accessing valid fields and providing correct data types. This significantly reduces runtime errors and makes code easier to debug.
  - Code completion and IDE support: Editors and IDEs can provide excellent autocompletion and type hints, improving developer productivity.
  - Readability: Code is generally more readable because you're working with structured Go objects rather than generic maps.

  When to use typed clients:
  - When you know the exact Go type of the resource you need to interact with at compile time.
  - When developing a controller for a specific CRD where you have generated Go types.
  - When interacting with built-in Kubernetes resources.

- Dynamic Client (`dynamic.Interface`): In contrast to typed clients, the dynamic client operates on `unstructured.Unstructured` objects. These are essentially `map[string]interface{}`, allowing you to handle any Kubernetes resource regardless of whether its Go type is known or even exists at compile time. This flexibility is what makes the dynamic client indispensable for situations where you need to interact with "all kinds" of CRDs or where the specific resource types are determined dynamically at runtime.

  Advantages of dynamic clients:
  - Flexibility and genericity: Can interact with any Kubernetes resource, built-in or custom, without prior knowledge of its specific Go type. This is perfect for building generic tools, operators that manage multiple different CRDs, or tools that discover and interact with new CRDs as they are deployed.
  - Reduced code generation: You don't need to generate specific Go types for every CRD you want to interact with. This simplifies your build process, especially in environments with many CRDs.
  - Runtime discovery: Enables building tools that can discover CRDs available in a cluster and interact with them on the fly.

  When to use dynamic clients:
  - When building a generic controller or tool that needs to operate on arbitrary or unknown CRDs.
  - When a single controller needs to manage various custom resources, potentially from different API groups and versions.
  - When interacting with a CRD for which you have not generated Go types, perhaps because it's a third-party CRD or part of an experimental feature.
  - When the exact resource type is only determined at runtime, for example, based on user configuration or discovery.
To illustrate, consider a scenario where you want to retrieve a Pod. With a typed client, you'd use clientset.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{}), and the result would be a *corev1.Pod object. With a dynamic client, you'd first specify the GroupVersionResource (GVR) for Pods (corev1.SchemeGroupVersion.WithResource("pods")), then call dynamicClient.Resource(gvr).Namespace("default").Get(context.TODO(), "my-pod", metav1.GetOptions{}). The result would be an *unstructured.Unstructured object, which you would then need to navigate using map accessors. While this adds a layer of manual parsing, it grants immense power to operate universally across the Kubernetes API space.
For enterprises and projects that aim to standardize API interactions, whether with Kubernetes internal APIs or services exposed by custom resources, an API gateway becomes a critical component. An API gateway acts as a single entry point for all client requests, routing them to the appropriate backend services. This is particularly relevant when custom resources define services or AI model endpoints that need to be consumed by external applications. An API gateway can handle tasks like authentication, authorization, rate limiting, and request/response transformation, abstracting away the underlying complexity of the Kubernetes cluster. For instance, if your CRD manages a complex AI inference service, an API gateway can provide a consistent and secure external API endpoint, simplifying its consumption.
The Dynamic Client Deep Dive: Interacting with Arbitrary Resources
The dynamic.Interface is the workhorse for interacting with resources whose types are not known at compile time, or for building highly generic controllers. To truly master the dynamic client, one must understand its instantiation, the crucial concept of GroupVersionResource (GVR), and how to perform fundamental CRUD operations using unstructured.Unstructured objects.
Instantiating the Dynamic Client
Like any client-go client, the dynamic client needs a rest.Config to connect to the Kubernetes API server. This configuration typically includes the host, API path, authentication credentials, and TLS settings.
```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build config from kubeconfig file
	kubeconfigPath := "/path/to/your/kubeconfig" // Or use clientcmd.RecommendedHomeFile for the default
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatalf("Error building kubeconfig: %v", err)
	}

	// Create a dynamic client
	dynamicClient, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatalf("Error creating dynamic client: %v", err)
	}

	fmt.Println("Dynamic client successfully instantiated.")
	// Now you can use dynamicClient to interact with resources.
	_ = dynamicClient
}
```
This code snippet demonstrates the basic setup. clientcmd.BuildConfigFromFlags loads the Kubernetes configuration from the given kubeconfig path; its first argument is an optional API server URL override, left empty here. dynamic.NewForConfig then uses this configuration to create the dynamic.Interface. (When running inside a Pod, rest.InClusterConfig() is the usual alternative for building the rest.Config.)
The Crucial Role of GroupVersionResource (GVR)
Unlike typed clients where the resource type is implicitly known from the Go struct you're working with (e.g., clientset.AppsV1().Deployments), the dynamic client requires explicit identification of the resource you intend to interact with. This is achieved through schema.GroupVersionResource, often simply referred to as GVR.
A GVR uniquely identifies a collection of resources within the Kubernetes API space:
- `Group`: The API group to which the resource belongs (e.g., `apps` for Deployments, `stable.example.com` for our `Database` CRD).
- `Version`: The API version of the resource within that group (e.g., `v1` for Deployments, `v1alpha1` for our `Database` CRD).
- `Resource`: The plural name of the resource (e.g., `deployments`, `databases`). Note that this is the plural, lowercase name as it appears in the API paths, not the `Kind` field from the YAML manifest.
For example, to interact with Pods, the GVR would be schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}. For a Deployment, it would be schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}. For our hypothetical Database CRD from stable.example.com/v1alpha1, it would be schema.GroupVersionResource{Group: "stable.example.com", Version: "v1alpha1", Resource: "databases"}.
CRUD Operations with dynamic.Interface
Once you have the dynamic.Interface and a target GVR, you can perform standard CRUD (Create, Read, Update, Delete) operations. The results of these operations are always *unstructured.Unstructured objects.
Creating a Resource
To create a resource, you first construct an unstructured.Unstructured object representing the desired state. This object is essentially a map[string]interface{} that mirrors the YAML structure of a Kubernetes object.
```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Instantiate the dynamic client, as shown above.
	kubeconfigPath := clientcmd.RecommendedHomeFile // Use default kubeconfig path
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatalf("Error building kubeconfig: %v", err)
	}
	dynamicClient, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatalf("Error creating dynamic client: %v", err)
	}

	// 1. Define the GVR for the Custom Resource (e.g., a hypothetical "Example" CRD).
	// Make sure this CRD exists in your cluster!
	exampleGVR := schema.GroupVersionResource{
		Group:    "example.com",
		Version:  "v1alpha1",
		Resource: "examples", // Plural name for the resource
	}

	// 2. Define the desired Custom Resource as an Unstructured object.
	exampleCR := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": "example.com/v1alpha1",
			"kind":       "Example",
			"metadata": map[string]interface{}{
				"name":      "my-dynamic-example",
				"namespace": "default",
			},
			"spec": map[string]interface{}{
				"message":  "Hello from dynamic client!",
				"replicas": 3,
			},
		},
	}

	// 3. Create the resource.
	fmt.Printf("Creating %s/%s...\n", exampleCR.GetNamespace(), exampleCR.GetName())
	createdCR, err := dynamicClient.Resource(exampleGVR).Namespace(exampleCR.GetNamespace()).Create(context.TODO(), exampleCR, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("Failed to create example CR: %v", err)
	}
	fmt.Printf("Created example CR: %s\n", createdCR.GetName())

	// Example: Getting the created resource.
	fmt.Printf("Getting %s/%s...\n", createdCR.GetNamespace(), createdCR.GetName())
	getCR, err := dynamicClient.Resource(exampleGVR).Namespace(createdCR.GetNamespace()).Get(context.TODO(), createdCR.GetName(), metav1.GetOptions{})
	if err != nil {
		log.Fatalf("Failed to get example CR: %v", err)
	}
	fmt.Printf("Retrieved example CR: %s, message: %s\n", getCR.GetName(), getCR.Object["spec"].(map[string]interface{})["message"])

	// ... (the rest of the CRUD operations follow a similar pattern)
}
```
Note: For the above code to run, you need a CRD named examples.example.com in your cluster. Here's a minimal example CRD definition:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.example.com
spec:
  group: example.com
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                message:
                  type: string
                replicas:
                  type: integer
                  format: int32
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
    shortNames:
      - ex
```
Apply this CRD with kubectl apply -f your-crd.yaml before running the Go program.
Getting and Listing Resources
Retrieving a single resource or a list of resources also involves specifying the GVR and optionally the namespace.
```go
// Get a single resource by name
resourceName := "my-dynamic-example"
getCR, err := dynamicClient.Resource(exampleGVR).Namespace("default").Get(context.TODO(), resourceName, metav1.GetOptions{})
if err != nil {
	log.Fatalf("Failed to get CR %s: %v", resourceName, err)
}
fmt.Printf("Got CR: %s, message: %s\n", getCR.GetName(), getCR.Object["spec"].(map[string]interface{})["message"])

// List resources in a namespace
list, err := dynamicClient.Resource(exampleGVR).Namespace("default").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	log.Fatalf("Failed to list CRs: %v", err)
}
fmt.Printf("Found %d example CRs in default namespace:\n", len(list.Items))
for _, item := range list.Items {
	fmt.Printf(" - %s\n", item.GetName())
}
```
Updating a Resource
Updating a resource requires fetching the existing resource, modifying its unstructured.Unstructured object, and then calling the Update method. Because the fetched object carries a resourceVersion, the API server will reject the update with a conflict error if the resource changed in the meantime; in that case, re-fetch and retry.
```go
// Update a resource
updatedCR := getCR.DeepCopy() // Always work on a deep copy!

// Modify a field in the spec
spec := updatedCR.Object["spec"].(map[string]interface{})
spec["message"] = "Updated by dynamic client!"
spec["replicas"] = 5

fmt.Printf("Updating CR %s...\n", updatedCR.GetName())
_, err = dynamicClient.Resource(exampleGVR).Namespace("default").Update(context.TODO(), updatedCR, metav1.UpdateOptions{})
if err != nil {
	log.Fatalf("Failed to update CR %s: %v", updatedCR.GetName(), err)
}
fmt.Printf("CR %s updated.\n", updatedCR.GetName())
```
Deleting a Resource
Deleting a resource is straightforward, requiring the GVR, namespace, and resource name.
```go
// Delete a resource
fmt.Printf("Deleting CR %s...\n", resourceName)
err = dynamicClient.Resource(exampleGVR).Namespace("default").Delete(context.TODO(), resourceName, metav1.DeleteOptions{})
if err != nil {
	log.Fatalf("Failed to delete CR %s: %v", resourceName, err)
}
fmt.Printf("CR %s deleted.\n", resourceName)
```
The dynamic client's reliance on unstructured.Unstructured means that you'll often be performing type assertions and navigating map[string]interface{} structures. While this requires careful error handling, it provides unparalleled flexibility. This capability is paramount for building generic tools, such as an API gateway that dynamically adapts to custom resource definitions, exposing them as external APIs, or an operator that manages various types of AI models defined via different CRDs.
For organizations that need to centralize the management and exposure of these custom resources, especially when they represent backend services or AI functionalities, an API gateway becomes an essential architectural component. An API gateway provides a unified API facade, handling responsibilities such as authentication, authorization, rate limiting, and traffic routing. Products like APIPark, an open-source AI gateway and API management platform, excel in these scenarios. APIPark can significantly simplify the integration of 100+ AI models, offering a unified API format for AI invocation and the ability to encapsulate prompts into REST APIs. If your CRDs are defining new AI inference endpoints or data processing services within Kubernetes, APIPark can act as the crucial bridge, transforming internal Kubernetes APIs into externally consumable, managed APIs, complete with lifecycle management and detailed logging. This ensures secure, scalable, and manageable access to the rich ecosystem of services orchestrated by your dynamic Kubernetes controllers.
Watching Resources: The Informer Pattern
Merely performing CRUD operations on demand is often insufficient for building responsive, state-driven applications within Kubernetes. Controllers and operators need to continuously observe the state of resources and react to changes in real-time. Polling the API server repeatedly is inefficient and can quickly overload the server, especially in large clusters. This is where the Kubernetes Informer pattern comes to the rescue, providing an efficient and robust mechanism for watching resources.
An Informer is a client-side cache and event-driven system built on top of Kubernetes' watch API. Instead of constantly querying the API server, an Informer establishes a long-lived connection (a "watch") to the API server for a specific resource type. When any change occurs to that resource (creation, update, or deletion), the API server pushes an event to the Informer. The Informer then updates its local cache and dispatches the event to registered handlers.
The core components of the Informer pattern are:
- Reflector: The Reflector is responsible for synchronizing a local cache with the Kubernetes API server. It performs two main actions:
  - Listing: Initially, it performs a `List` operation to fetch all existing resources of the specified type and populate the local cache.
  - Watching: After the initial list, it establishes a `Watch` connection to the API server. It receives change events (Added, Modified, Deleted) and applies them to the local cache. If the watch connection breaks, the Reflector automatically attempts to re-establish it, potentially performing a new `List` to resynchronize the cache, ensuring eventual consistency.
- Indexer (or Store): This is the local, in-memory cache of resources. The Reflector updates the Indexer based on the watch events. The Indexer allows for fast, efficient lookups of resources without hitting the API server. It supports indexing by various fields (e.g., namespace, labels) to enable quick retrieval of specific subsets of resources. This significantly reduces the load on the API server and improves the performance of controllers.
- Controller (or Event Handler): This is your application logic that processes the events. The Informer dispatches events (`AddFunc`, `UpdateFunc`, `DeleteFunc`) to registered event handlers. Your controller implements these functions to react to changes in the resource state. For example, an `AddFunc` might trigger the provisioning of new infrastructure when a CR is created, an `UpdateFunc` might reconcile configuration changes, and a `DeleteFunc` might initiate cleanup.
- SharedInformerFactory: In a typical Kubernetes controller, you might need to watch multiple different resource types (e.g., Pods, Deployments, and your custom `Database` CRs). Creating separate Informers for each resource can be resource-intensive. The `SharedInformerFactory` is designed to optimize this by providing a single point of entry for multiple Informers. It ensures that only one Reflector and cache exist per resource type across all controllers that need to watch that resource. This sharing mechanism conserves memory and reduces redundant API calls.
How Informers Work (Simplified Flow):
1. A controller starts a `SharedInformerFactory`.
2. The `SharedInformerFactory` initiates an Informer for each registered GVR.
3. Each Informer's Reflector performs an initial `List` operation to populate its `Indexer` (cache).
4. Each Reflector then establishes a `Watch` connection to the Kubernetes API server.
5. When a change occurs (e.g., a Pod is created), the API server sends an event to the corresponding Reflector.
6. The Reflector updates its `Indexer` (cache) with the new state.
7. The Informer then queues the event for processing by the controller's registered `AddFunc`, `UpdateFunc`, or `DeleteFunc`.
8. The controller retrieves the event from the queue and processes it, triggering its reconciliation logic.
Benefits of the Informer Pattern:
- Efficiency: Significantly reduces load on the Kubernetes API server by using long-lived watch connections and client-side caching instead of constant polling.
- Real-time Updates: Provides near real-time notifications of resource changes, enabling highly responsive controllers.
- Consistency: The client-side cache provides a consistent view of the resources, reducing race conditions and improving the reliability of controller logic.
- Resilience: Automatically handles watch connection drops and re-synchronizes the cache, making controllers robust to network issues or API server restarts.
- Scalability: `SharedInformerFactory` allows multiple controllers to share the same Informers and caches, leading to efficient resource utilization in large clusters.
Understanding the Informer pattern is foundational for developing any serious Kubernetes controller. It's the mechanism that powers the reactive nature of Kubernetes, allowing controllers to constantly monitor the cluster's state and take corrective actions to maintain the desired state defined by users through their YAML manifests. Without informers, building reliable and performant Kubernetes operators would be significantly more challenging, if not impossible.
Dynamic Informers for "All Kind"
While the Informer pattern is incredibly powerful, the SharedInformerFactory we discussed typically relies on compile-time knowledge of Go types, or at least a known client-go clientset for specific API groups. This works well for built-in resources or custom resources where you've generated Go types. However, what if you need to build a generic controller that can watch any CRD that might appear in a cluster, even those not known when your controller was compiled? This is precisely where dynamic informers come into play, combining the power of the dynamic.Interface with the efficiency of the Informer pattern.
A dynamic informer factory, specifically dynamicinformer.NewFilteredDynamicSharedInformerFactory, allows you to create informers for arbitrary schema.GroupVersionResource (GVR) instances at runtime. This means you can discover new CRDs in the cluster, construct their GVRs, and then start watching them, all without needing to recompile your controller. This capability is the cornerstone of truly generic Kubernetes operators and tools.
How Dynamic Informers Work
The process of setting up and utilizing dynamic informers typically involves these steps:
1. Instantiate the Dynamic Client: As discussed earlier, you first need a `dynamic.Interface` to interact with arbitrary resources. This client will be used internally by the dynamic informer factory.

2. Create a Dynamic Shared Informer Factory:

   ```go
   import (
       // ... other imports ...
       "k8s.io/client-go/dynamic/dynamicinformer"
   )

   // ... inside your main or controller setup function ...
   // dynamicClient from the previous steps.
   resyncPeriod := 30 * time.Second
   factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynamicClient, resyncPeriod, metav1.NamespaceAll, nil)
   ```

   The `dynamicinformer.NewFilteredDynamicSharedInformerFactory` function takes:
   - `dynamic.Interface`: the dynamic client for communication.
   - `resyncPeriod`: the interval at which the informer replays every object in its local cache to the event handlers (as update events), even if nothing changed. This helps ensure eventual consistency and can recover from handler bugs or missed events; note that a resync replays the cache rather than re-listing from the API server.
   - `namespace`: the namespace to watch (`metav1.NamespaceAll` to watch all namespaces).
   - `tweakListOptions`: an optional function to modify `metav1.ListOptions` for the initial list and subsequent watch calls (e.g., to add label selectors).

3. Get an Informer for a Specific GVR: Once the factory is created, you can obtain an informer for any GVR you wish to watch.

   ```go
   // Example: Watch our "Example" CRD
   exampleGVR := schema.GroupVersionResource{
       Group:    "example.com",
       Version:  "v1alpha1",
       Resource: "examples",
   }
   exampleInformer := factory.ForResource(exampleGVR)
   ```

   The `ForResource` method returns a `GenericInformer`, which provides access to the `Informer()` (the `cache.SharedIndexInformer`) and the `Lister()` (the `cache.GenericLister`).

4. Register Event Handlers: Attach `AddFunc`, `UpdateFunc`, and `DeleteFunc` to your informer to define the actions your controller will take in response to events.

   ```go
   exampleInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
       AddFunc: func(obj interface{}) {
           unstructuredObj, ok := obj.(*unstructured.Unstructured)
           if !ok {
               log.Printf("Error: expected *unstructured.Unstructured, got %T", obj)
               return
           }
           log.Printf("Added: %s/%s (%s)", unstructuredObj.GetNamespace(), unstructuredObj.GetName(), unstructuredObj.GetKind())
           // Your custom logic for added resources
       },
       UpdateFunc: func(oldObj, newObj interface{}) {
           oldUnstructured, ok1 := oldObj.(*unstructured.Unstructured)
           newUnstructured, ok2 := newObj.(*unstructured.Unstructured)
           if !ok1 || !ok2 {
               log.Printf("Error: expected *unstructured.Unstructured, got %T and %T", oldObj, newObj)
               return
           }
           log.Printf("Updated: %s/%s (%s) - Resource Version: %s -> %s",
               newUnstructured.GetNamespace(), newUnstructured.GetName(), newUnstructured.GetKind(),
               oldUnstructured.GetResourceVersion(), newUnstructured.GetResourceVersion())
           // Your custom logic for updated resources
       },
       DeleteFunc: func(obj interface{}) {
           unstructuredObj, ok := obj.(*unstructured.Unstructured)
           if !ok {
               log.Printf("Error: expected *unstructured.Unstructured, got %T", obj)
               return
           }
           log.Printf("Deleted: %s/%s (%s)", unstructuredObj.GetNamespace(), unstructuredObj.GetName(), unstructuredObj.GetKind())
           // Your custom logic for deleted resources
       },
   })
   ```

   Notice that the event handler functions receive `interface{}` arguments, which must then be type-asserted to `*unstructured.Unstructured`. This is the fundamental characteristic of dynamic interaction.

5. Start the Factory: Once all informers are configured, start the factory. This will initiate the listing and watching processes for all registered informers.

   ```go
   stopCh := make(chan struct{}) // Channel to signal shutdown
   defer close(stopCh)

   factory.Start(stopCh)            // Starts all informers in the factory
   factory.WaitForCacheSync(stopCh) // Waits for all caches to be synced
   fmt.Println("All informers synced.")

   <-stopCh // Block until stopCh is closed (e.g., from a signal handler)
   ```

   `factory.WaitForCacheSync(stopCh)` is crucial. It ensures that all informers have completed their initial `List` operation and populated their caches before your controller starts processing events. This prevents your controller from reacting to events for resources that haven't been fully loaded into the cache yet, which could lead to inconsistent state.
Discovering CRDs to Watch Dynamically
The true power of dynamic informers for "all kind" comes when you combine them with the ability to discover CRDs themselves. You can list CustomResourceDefinition resources from the apiextensions.k8s.io/v1 group to find out which custom resources are available in the cluster.
```go
// Get a typed client for CRDs
apiextensionsClient, err := apiextensionsclientset.NewForConfig(config) // Assuming config is built
if err != nil {
	log.Fatalf("Error creating apiextensions client: %v", err)
}

crdList, err := apiextensionsClient.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
if err != nil {
	log.Fatalf("Error listing CRDs: %v", err)
}

for _, crd := range crdList.Items {
	fmt.Printf("Found CRD: %s\n", crd.Name)
	// For each CRD, iterate its versions and create dynamic informers as needed
	for _, version := range crd.Spec.Versions {
		if version.Served { // Only watch served versions
			gvr := schema.GroupVersionResource{
				Group:    crd.Spec.Group,
				Version:  version.Name,
				Resource: crd.Spec.Names.Plural,
			}
			fmt.Printf("  - Adding informer for GVR: %v\n", gvr)
			informer := factory.ForResource(gvr) // factory created earlier via dynamicinformer
			// Add event handlers specific to this GVR or a generic one
			informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
				AddFunc: func(obj interface{}) {
					unstructuredObj, ok := obj.(*unstructured.Unstructured)
					if !ok {
						log.Printf("Error: expected *unstructured.Unstructured, got %T", obj)
						return
					}
					log.Printf("Dynamically Added CR of Kind %s: %s/%s", unstructuredObj.GetKind(), unstructuredObj.GetNamespace(), unstructuredObj.GetName())
				},
				// ... UpdateFunc and DeleteFunc ...
			})
		}
	}
}
```
This pattern enables operators to dynamically adapt to the evolving API landscape of a Kubernetes cluster. A single operator can monitor a whole ecosystem of applications, each potentially introducing its own CRDs, and react accordingly. This is particularly powerful for multi-tenant environments or platforms that need to integrate with a wide variety of third-party services, where each service might introduce its own set of custom resources.
The table below summarizes the key differences between strongly typed informers and dynamic informers:
| Feature | Strongly Typed Informers (e.g., `appsv1.NewFilteredSharedInformerFactory`) | Dynamic Informers (`dynamicinformer.NewFilteredDynamicSharedInformerFactory`) |
|---|---|---|
| Compile-time Knowledge | Requires Go types and clientset generated for specific GVRs. | No compile-time knowledge of specific Go types needed; uses GVRs at runtime. |
| Data Representation | `*yourcustomresource.Type` (Go structs) | `*unstructured.Unstructured` (wrapping `map[string]interface{}`) |
| Genericity | Specific to a known resource type. | Highly generic; can watch any CRD or built-in resource dynamically. |
| Type Safety | High; compile-time checks, IDE support. | Low; runtime type assertions and map navigation. More prone to runtime errors if paths are incorrect. |
| Ease of Use for Knowns | Simpler if resource types are fixed and Go types are available. | Slightly more verbose due to unstructured handling. |
| Use Cases | Specific controller for a single or few well-defined CRDs. | Generic operators, API gateway components, platform tools that need to discover and interact with arbitrary CRDs. |
| Code Generation Burden | Requires code generation for each CRD. | Reduces or eliminates the need for CRD-specific client code generation. |
By leveraging dynamic informers, you can build extremely adaptable and powerful Kubernetes controllers. Imagine a security policy enforcer that dynamically watches all new CRDs in the cluster and applies general security policies to them, or a cost optimization tool that tracks resource consumption across all custom workloads. This flexibility is a game-changer for platform builders and SREs alike, enabling truly "meta-operators" that manage the Kubernetes API surface itself.
Building a Generic CRD Controller/Operator
The ability to dynamically watch "all kinds" of CRDs using dynamic informers sets the stage for building truly generic Kubernetes controllers or operators. A generic CRD controller aims to provide common functionalities or apply generic policies across a multitude of custom resources, often without explicit knowledge of their specific schemas at compile time. This is a powerful paradigm for platform engineers who want to automate operations for an extensible ecosystem of applications or for API gateway developers who need to expose custom services uniformly.
Architecture of a Generic Operator
A generic operator that leverages dynamic informers typically follows this architectural pattern:
- CRD Discovery Component:
  - Continuously watches `CustomResourceDefinition` resources themselves (`apiextensions.k8s.io/v1/customresourcedefinitions`).
  - When a new CRD is created, or an existing one is updated or deleted, it triggers a reconfiguration of the main reconciliation loop.
  - This component effectively "bootstraps" the dynamic watching mechanism.
- Dynamic Informer Management:
  - Based on the discovered CRDs, this module dynamically creates and manages `dynamicinformer.NewFilteredDynamicSharedInformerFactory` instances and registers informers for the relevant GVRs (GroupVersionResources).
  - It ensures that each served version of a discovered CRD gets its own informer.
  - It must handle the lifecycle of these informers: starting them, stopping them when CRDs are deleted, and gracefully restarting them upon API server reconnects.
- Generic Event Handlers:
  - Instead of handlers tailored to specific Go types, these handlers operate on `*unstructured.Unstructured` objects.
  - They typically push events (e.g., object identifiers and event types) onto a shared work queue.
  - This abstraction decouples event reception from the core reconciliation logic.
- Work Queue and Worker Pool:
  - A standard Kubernetes work queue (`workqueue.RateLimitingInterface`) stores items that need processing. Each item usually identifies a unique resource (e.g., `namespace/name` plus GVR).
  - A pool of worker goroutines consumes items from the work queue and performs the reconciliation. This allows concurrent processing of events and improves throughput.
- Generic Reconciliation Logic:
  - This is the heart of the operator. When a worker pulls an item from the queue, it fetches the latest state of the corresponding `*unstructured.Unstructured` object from the dynamic informer's cache.
  - It then applies its generic logic, which might involve:
    - Validation: checking for generic structural issues or policy violations across various CRDs.
    - Annotation/Label Management: automatically adding specific labels or annotations to any new CR.
    - Resource Propagation: creating related built-in Kubernetes resources (e.g., a ConfigMap or a Service) based on generic patterns found in the CR.
    - Status Updates: updating the `status` field of the custom resource based on the outcome of its operations.
    - Finalizers: implementing generic finalizer logic to ensure proper cleanup when a custom resource is deleted.
- Error Handling and Retries:
  - Robust error handling is crucial. Failed reconciliation attempts should be retried with backoff to avoid thrashing the API server or external services.
  - The work queue automatically handles retries for items that were processed with an error.
Example: A Generic Annotation Controller
Consider a generic operator that watches all custom resources and automatically adds a `managed-by: generic-operator` label and a `last-reconciled` timestamp annotation to them.

Steps in the Reconciliation Loop:

- Receive Event: An `Add` or `Update` event for any custom resource is received by the dynamic informer's event handler.
- Add to Work Queue: The event handler pushes the GVR and `namespace/name` of the changed resource onto the work queue.
- Fetch Object: A worker pulls the item from the queue and uses the `dynamicClient` and the relevant dynamic informer's `Lister()` to fetch the `*unstructured.Unstructured` object from the cache.
- Apply Logic: The worker checks whether the `managed-by` label is present and the `last-reconciled` annotation is up to date. If not, it:
  - deep-copies the `unstructured.Unstructured` object,
  - adds or updates the label and annotation,
  - uses the `dynamicClient` to `Update` the modified `unstructured.Unstructured` object back to the API server.
- Handle Status/Finalizers (if applicable): If the generic operator were performing more complex tasks (e.g., provisioning external resources), it would update the CR's status here and ensure finalizers are managed for graceful deletion.
- Mark Done: The worker marks the item as `Done` on the work queue.
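The check-and-update in the Apply Logic step boils down to nested-map manipulation on the object's `metadata`. A real controller would work with `*unstructured.Unstructured` and its `SetNestedStringMap` helpers (and deep-copy cached objects before mutating them); this stdlib sketch operates on the underlying `map[string]interface{}` directly, with the label and annotation names taken from the example above:

```go
package main

import "fmt"

// ensureManaged adds the managed-by label and last-reconciled annotation to an
// unstructured-style object (a plain map, as *unstructured.Unstructured wraps).
// It returns true if the object was mutated and therefore needs an Update call.
func ensureManaged(obj map[string]interface{}, now string) bool {
	meta, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		meta = map[string]interface{}{}
		obj["metadata"] = meta
	}

	changed := false

	labels, ok := meta["labels"].(map[string]interface{})
	if !ok {
		labels = map[string]interface{}{}
		meta["labels"] = labels
	}
	if labels["managed-by"] != "generic-operator" {
		labels["managed-by"] = "generic-operator"
		changed = true
	}

	annotations, ok := meta["annotations"].(map[string]interface{})
	if !ok {
		annotations = map[string]interface{}{}
		meta["annotations"] = annotations
	}
	if annotations["last-reconciled"] != now {
		annotations["last-reconciled"] = now
		changed = true
	}
	return changed
}

func main() {
	obj := map[string]interface{}{
		"apiVersion": "stable.example.com/v1",
		"kind":       "Database",
		"metadata":   map[string]interface{}{"name": "db-a", "namespace": "default"},
	}
	fmt.Println(ensureManaged(obj, "2024-01-01T00:00:00Z")) // true: object mutated, Update needed
	fmt.Println(ensureManaged(obj, "2024-01-01T00:00:00Z")) // false: already up to date, skip Update
}
```

Returning a "changed" flag and skipping the `Update` when nothing differs is important in practice: unconditional updates would bump the resource version on every resync and re-trigger the informer.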
Best Practices for Generic Operators:
- RBAC: Ensure your generic operator's ServiceAccount has appropriate RBAC permissions to `get`, `list`, and `watch` all CRDs (`apiextensions.k8s.io/v1/customresourcedefinitions`), and to `get`, `list`, `watch`, `create`, `update`, `patch`, and `delete` custom resources (using `*` for both `apiGroups` and `resources` if it truly needs to be generic, though this is a powerful and potentially risky permission that should be constrained where possible).
- Version Skew: Be mindful of API version skew between your `client-go` version and the Kubernetes cluster version. Dynamic clients are generally more resilient but still benefit from alignment.
- Performance: Watching "all kinds" can involve watching a very large number of resources. Ensure your event handlers are efficient and your work queue processing is optimized. Avoid heavy computations directly within event handlers; offload them to workers.
- Concurrency: Use `sync.WaitGroup` and `context.Context` to manage goroutine lifecycles and gracefully shut down your operator.
- Logging: Implement comprehensive logging to trace the reconciliation flow, especially when dealing with generic, unstructured data. It can be challenging to debug issues without clear logs.
- Testing: Writing unit and integration tests for generic operators can be more complex due to the dynamic nature. Focus on testing the reconciliation logic with various `unstructured.Unstructured` inputs.
Building a generic CRD controller empowers you to enforce consistent patterns, apply general policies, or provide foundational services across your entire Kubernetes environment. This level of extensibility is a hallmark of truly advanced Kubernetes platform engineering. For instance, if you have multiple teams deploying AI workloads using various CRDs, a generic operator could automatically inject sidecar containers for logging, enforce common network policies, or connect these AI services to a centralized API gateway like APIPark. APIPark, designed as an AI gateway and API management platform, could then aggregate and manage all these dynamically provisioned AI services, providing a unified external API surface, regardless of the underlying CRD definitions. This synergy between dynamic Kubernetes controllers and a robust API gateway creates a powerful and flexible ecosystem for managing complex cloud-native applications.
Advanced Considerations and Best Practices
Mastering the dynamic client to watch "all kind" in CRDs is a significant step towards building powerful, adaptable Kubernetes controllers. However, moving beyond basic functionality requires a deeper understanding of advanced considerations and adherence to best practices to ensure robustness, performance, and security.
Performance Implications of Watching Many Resources
Watching a large number of resources, especially if they are frequently updated, can have performance implications for both your controller and the Kubernetes API server:
- Memory Usage: Each informer maintains a client-side cache of all watched resources. If you are watching hundreds or thousands of different CRDs, and each CRD has a substantial number of instances, the total memory consumption of your controller can become significant. Be mindful of the size of `unstructured.Unstructured` objects.
- Network Bandwidth: While watch APIs are efficient, a very high churn rate across many resources will still generate substantial network traffic between your controller and the API server.
- CPU Utilization: Processing a continuous stream of events, especially if your reconciliation logic is computationally intensive, can consume considerable CPU resources.
- `ResyncPeriod`: While useful for eventual consistency, a short `resyncPeriod` across a large number of watched resources can create unnecessary API server load by forcing full re-lists. Consider increasing it, or setting it to 0 if your event handling is robust enough not to miss events.
- Solution (Targeted Watching): Whenever possible, use `tweakListOptions` in `NewFilteredDynamicSharedInformerFactory` to filter resources by labels or fields. This reduces the number of objects synchronized into the cache, saving memory and bandwidth. If your operator only cares about CRDs with a specific label, specify that label selector.
- Solution (Namespace Scoping): If your controller operates within specific namespaces, set the `namespace` parameter in `NewFilteredDynamicSharedInformerFactory` accordingly instead of using `metav1.NamespaceAll`.
Security: RBAC for Dynamic Clients
Using a dynamic client implies broader access permissions than a typed client, as it can potentially interact with any resource. This necessitates careful attention to Role-Based Access Control (RBAC):
- Least Privilege Principle: Always grant your controller's ServiceAccount the minimum necessary permissions. Avoid `*` (wildcard) for `apiGroups` or `resources` unless absolutely essential for a truly generic, cluster-level tool.
- Discovering CRDs: To list `CustomResourceDefinition` resources themselves, your ServiceAccount needs `get` and `list` permissions on `customresourcedefinitions` within the `apiextensions.k8s.io` API group.
- Interacting with Custom Resources: To `get`, `list`, `watch`, `create`, `update`, `patch`, or `delete` specific custom resources (e.g., `databases`), specify `apiGroups: ["stable.example.com"]` and `resources: ["databases"]`.
- Generic Access: If your operator must watch and operate on any CRD, you will need very broad permissions, such as `apiGroups: ["*"]` and `resources: ["*"]`. This is a significant security risk and should only be granted to highly privileged, foundational operators, and with extreme caution. Consider using API gateway security features to restrict external access to these potentially sensitive resources.
- Pod Security Standards/Admission Controllers: Even with RBAC, consider enforcing Pod Security Standards (the successor to Pod Security Policies) or other admission controllers to further restrict what your controller Pod can do (e.g., prevent it from running as root, limit host path access).
Versioning and Schema Evolution for CRDs
CRDs, like built-in Kubernetes APIs, evolve over time. Supporting multiple versions and managing schema changes is critical:
- Multiple `spec.versions`: Define multiple versions (`v1alpha1`, `v1`, `v2`) in your CRD's `spec.versions` field. Mark one as `storage: true` (the version stored in etcd) and the others as `served: true`.
- Conversion Webhooks: For complex schema changes between versions, implement a conversion webhook. This webhook service runs in your cluster and is called by the API server to convert objects between different versions (e.g., from `v1` to `v2` or vice versa) during storage or retrieval. This ensures clients can interact with different versions while the underlying data remains consistent.
- `unstructured.Unstructured` and Versions: When you retrieve an `unstructured.Unstructured` object, it is returned in the API version of the GVR you requested, with the API server converting from the storage version as needed. Your reconciliation logic still needs to be resilient to different schemas if it watches multiple GVR versions. Generally, it's simpler to process all objects in the latest supported version by ensuring your CRD discovery logic prioritizes it.
Testing Dynamic Controllers
Testing dynamic controllers presents unique challenges due to the unstructured.Unstructured data type and the dynamic nature of watched resources:
- Unit Tests: Focus on testing your core reconciliation logic. Instead of passing concrete Go structs, create `*unstructured.Unstructured` objects in your tests that simulate various states (e.g., objects with missing fields or unexpected types).
- Integration Tests (envtest): Use `controller-runtime`'s `envtest` package to spin up a minimal Kubernetes API server and etcd instance for testing. This allows you to deploy your CRDs, create custom resources, and observe how your dynamic controller reacts in a near-real environment.
- End-to-End Tests: Deploy your full controller to a test cluster and use `kubectl` or `client-go` to create, update, and delete CRs, then assert that your controller performs the expected actions (e.g., creates dependent resources, updates CR status).
The Role of an API Gateway
As your Kubernetes ecosystem grows with an increasing number of custom resources and operators, the need for a robust API gateway becomes paramount, especially when these custom resources manage services or expose functionalities that need to be consumed externally.
For example, if your CRDs define various AI models or custom data processing pipelines, an API gateway can act as a single, consistent, and secure entry point for these services. This is where products like APIPark offer significant value. APIPark, as an open-source AI gateway and API management platform, is specifically designed for scenarios involving complex API landscapes.
Here's how APIPark integrates naturally with a dynamic CRD environment:
- Unified API Format: Your dynamic controller might manage different CRDs for various AI models (e.g., an `ImageRecognition` CRD and a `NaturalLanguageProcessing` CRD), each with slightly different underlying APIs. APIPark can provide a unified API format, abstracting away these differences and simplifying consumption for client applications.
- Prompt Encapsulation: If your CRDs expose AI models, APIPark can encapsulate custom prompts into REST APIs, allowing non-Kubernetes-aware applications to easily trigger complex AI tasks without understanding the underlying CRD structures or dynamic client interactions.
- Lifecycle Management: APIPark assists with end-to-end API lifecycle management, from design and publication to invocation and decommission. Even if your dynamic controller manages the lifecycle of the CRs, APIPark manages the lifecycle of the exposed APIs, handling traffic forwarding, load balancing, and versioning for external consumers.
- Security and Access Control: CRDs can represent sensitive application components. APIPark provides features like subscription approval and independent API and access permissions for each tenant, ensuring that external calls to services managed by your dynamic CRDs are authenticated, authorized, and governed.
- Performance and Scalability: As your dynamic CRD-managed services scale, APIPark can provide high-performance API gateway capabilities, rivaling Nginx, and supporting cluster deployment to handle large-scale traffic. This offloads performance concerns from your custom controllers.
- Monitoring and Analytics: APIPark offers detailed API call logging and powerful data analysis tools. This provides crucial visibility into how your dynamically provisioned services are being consumed, helping with troubleshooting and performance optimization, complementing the operational insights from your Kubernetes controllers.
By embracing these advanced considerations and leveraging complementary tools like APIPark, you can build a highly resilient, performant, secure, and extensible Kubernetes ecosystem that truly maximizes the power of dynamic clients and custom resources.
Conclusion
The journey to mastering the dynamic client to watch "all kind" in Custom Resource Definitions is a deep dive into the very heart of Kubernetes extensibility. We've traversed the landscape from the fundamental concepts of CRDs, which empower users to extend Kubernetes' API surface with domain-specific objects, to the nuanced world of client-go and its powerful dynamic.Interface. This dynamic client, operating on unstructured.Unstructured objects, liberates developers from the constraints of compile-time type knowledge, opening the door to unprecedented flexibility in interacting with Kubernetes resources.
We then delved into the efficiency and robustness of the Informer pattern, the cornerstone of event-driven Kubernetes controllers. By establishing long-lived watch connections and maintaining client-side caches, informers provide near real-time updates and significantly reduce the load on the Kubernetes API server, forming the backbone of responsive and scalable operators. The true synergy emerges when these two powerful concepts merge: dynamic informers. This advanced mechanism allows controllers to dynamically discover and watch any CRD deployed in a cluster, enabling the creation of truly generic operators capable of managing an evolving and extensible universe of custom resources.
Building a generic CRD controller or operator, while complex, unlocks immense potential for platform automation, policy enforcement, and standardized management across diverse applications within a Kubernetes environment. We've explored the architectural patterns, best practices for RBAC, performance considerations, and the critical aspects of schema evolution and testing. The ability to react to any custom resource, whether it's a new database instance, a custom network policy, or a dynamically provisioned AI model, forms the foundation of a highly adaptable and future-proof cloud-native infrastructure.
In this dynamic and interconnected ecosystem, the role of an API gateway becomes increasingly vital. As your custom resources and their associated services proliferate, managing their exposure, securing access, and ensuring consistent consumption by external clients is paramount. Solutions like APIPark, an open-source AI gateway and API management platform, stand ready to bridge the gap between your powerful internal Kubernetes operations and the external world. By providing unified API formats, robust lifecycle management, advanced security features, and powerful analytics, APIPark ensures that the rich functionalities orchestrated by your dynamic Kubernetes controllers are delivered to consumers in a controlled, performant, and secure manner.
Mastering these advanced Kubernetes patterns empowers you to build not just applications, but entire platforms that are resilient, extensible, and deeply integrated into the Kubernetes control plane. The journey is challenging, but the rewards—in terms of automation, scalability, and operational efficiency—are truly transformative for any organization operating at the forefront of cloud-native innovation.
5 Frequently Asked Questions (FAQs)
1. What is the primary difference between a typed client and a dynamic client in client-go? A typed client (e.g., clientset.AppsV1().Deployments) interacts with Kubernetes resources using strongly-typed Go structs that are known at compile time, offering type safety and IDE support. A dynamic client (dynamic.Interface) interacts with resources using *unstructured.Unstructured objects (essentially map[string]interface{}), allowing it to handle any resource, including CRDs, without prior compile-time knowledge of its Go type.
2. Why are Informers essential for Kubernetes controllers, and how do they differ from direct API polling? Informers provide an efficient, event-driven mechanism to watch resources by establishing a long-lived connection to the Kubernetes API server and maintaining a client-side cache. They receive push notifications for changes, significantly reducing the load on the API server and enabling real-time reactions. Direct API polling, in contrast, repeatedly queries the server, which is inefficient, generates high API server load, and introduces latency in detecting changes.
3. How does a dynamic informer factory (dynamicinformer.NewFilteredDynamicSharedInformerFactory) enable watching "all kinds" of CRDs? A dynamic informer factory allows you to create informers for any schema.GroupVersionResource (GVR) at runtime, without needing pre-generated Go types. By first discovering available CustomResourceDefinition resources in the cluster, you can programmatically construct GVRs for them and then use the dynamic informer factory to start watching those resources. This enables a single controller to adapt to and manage an evolving set of custom resource types.
4. What are the key security considerations when using a dynamic client or building a generic CRD controller? The primary concern is RBAC (Role-Based Access Control). Dynamic clients, especially if configured to watch or manage "all kinds," require broad permissions (e.g., apiGroups: ["*"], resources: ["*"]), which increases the security risk. It's crucial to follow the principle of least privilege, granting only the absolute minimum necessary permissions and complementing RBAC with other security measures like Pod Security Standards and admission controllers. For external access, an API gateway can provide an additional layer of security and access control.
5. How can an API Gateway like APIPark complement a Kubernetes environment utilizing dynamic CRDs? An API gateway like APIPark serves as a unified, secure, and managed entry point for services exposed by custom resources. It can standardize API formats for diverse CRD-managed services, encapsulate complex operations (like AI model invocation) into simple REST APIs, provide end-to-end API lifecycle management, enforce access control, and offer high-performance traffic management and detailed analytics. This allows external consumers to interact with your dynamically orchestrated services in a consistent, secure, and scalable manner, abstracting away the underlying Kubernetes complexities.
🚀You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
```bash
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh
```

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.

