Golang Dynamic Informer: Multi-Resource Watch Guide
In the intricate landscape of cloud-native computing, where Kubernetes stands as the de facto orchestrator for containerized workloads, the ability to programmatically interact with and manage resources is paramount. As applications grow in complexity, encompassing a multitude of services and custom resource definitions (CRDs), developers frequently encounter the challenge of building robust controllers, operators, and tools that can react to changes across diverse resource types in real-time. While client-go, Kubernetes' official Go client library, provides powerful mechanisms for this, its static nature can sometimes limit flexibility, particularly when dealing with resources whose schemas might not be known at compile time or when a single component needs to observe a broad spectrum of Kubernetes objects.
This guide delves deep into the world of Golang Dynamic Informers, a sophisticated feature within client-go that empowers developers to monitor and react to changes in multiple Kubernetes resource types dynamically. We will explore how dynamic informers overcome the limitations of their static counterparts, providing an indispensable tool for building highly adaptable and resilient Kubernetes-native applications. Whether you're constructing a generic controller that supports arbitrary CRDs, an observability tool that needs to track various components, or a policy engine that enforces rules across different resource kinds, mastering dynamic informers is a crucial step towards achieving unparalleled control and responsiveness within your Kubernetes environments.
In the evolving landscape of cloud-native applications, where APIs connect services and gateways manage intricate traffic flows, dynamic resource management is a foundation for resilient, scalable platforms. This guide will not only illuminate the technical intricacies of dynamic informers but also highlight their broader relevance in creating responsive, adaptable systems capable of handling the dynamic nature of modern infrastructure, laying the groundwork for more advanced API governance and management strategies.
Understanding Kubernetes Resource Management with Go
Before we plunge into the specifics of dynamic informers, it's essential to establish a foundational understanding of how client-go facilitates interaction with the Kubernetes API server. This section will provide context, explaining the core components and their roles in building Kubernetes-aware applications in Go.
The client-go Ecosystem: A Foundation for Kubernetes Interaction
client-go is more than just a simple HTTP client; it's a comprehensive library designed to handle the complexities of interacting with the Kubernetes API. It abstracts away low-level details like authentication, API versioning, and JSON serialization, allowing developers to focus on application logic.
At its heart, client-go offers several key components:
- Clientsets: These are type-safe clients generated for each core Kubernetes resource (e.g., corev1.Pod, appsv1.Deployment). They provide methods for common CRUD (Create, Read, Update, Delete) operations on known resource types. For example, clientset.CoreV1().Pods() gives you a client to interact with Pods. While powerful for well-defined resources, clientsets require recompilation if new resource types or versions are introduced.
- Informers: Informers are the cornerstone of event-driven Kubernetes controllers. Instead of continuously polling the API server (which is inefficient and can overload the server), informers set up a watch connection. When an event (Add, Update, Delete) occurs for a specific resource type, the informer receives it and updates an in-memory cache. This cache, managed by the informer, reduces the load on the API server and allows controllers to query resource states quickly without making direct API calls. Informers also handle network disruptions and re-establish watches, ensuring reliability.
- Listers: Closely tied to informers, listers provide a convenient and efficient way to query the informer's local, in-memory cache. They offer methods like List() to retrieve all objects of a certain type or Get() to fetch a specific object by name and namespace. Because listers operate on the local cache, these operations are extremely fast and do not incur any network latency or API server load.
- Scheme: The runtime.Scheme defines how Go types map to Kubernetes API versions and resource kinds. It's crucial for serialization and deserialization of objects and for ensuring type safety.
The Power of Informers: Beyond Direct API Calls
The conventional approach of making direct API calls for every resource query or update can quickly become problematic in a dynamic environment like Kubernetes. Consider a controller that needs to manage a fleet of Pods. If it frequently lists Pods to check their status, it would constantly hit the API server, generating significant network traffic and increasing the load on the control plane. This approach also introduces latency, as the controller only becomes aware of changes after its next poll interval.
Informers elegantly solve these issues by:
- Reducing API Server Load: By maintaining an in-memory cache, informers drastically reduce the number of direct API calls. Once the initial "List" operation populates the cache, subsequent queries are served locally. Only "Watch" events are streamed from the API server, which are much more efficient than repeated "List" calls.
- Event-Driven Architecture: Informers enable a truly reactive, event-driven model. Controllers don't need to poll; they simply register handlers for Add, Update, and Delete events. When an event occurs, the handler is triggered immediately, allowing the controller to respond promptly to changes.
- Local Cache for Fast Queries: The in-memory cache, also known as the "store," provides lightning-fast access to resource data. This is invaluable for controllers that need to make rapid decisions based on the current state of the cluster.
- Resilience and Reliability: Informers handle the complexities of maintaining a robust connection to the API server. They automatically reconnect on disconnections, re-list resources to resynchronize the cache, and ensure that the watch stream remains active, making them highly resilient components.
A key component for managing multiple informers efficiently is the SharedInformerFactory. Instead of creating an independent informer for each resource type, which would lead to multiple distinct connections and caches, a SharedInformerFactory allows multiple informers to share a single underlying API server connection and a common cache synchronization mechanism. This is particularly beneficial for controllers that need to watch several resource types, as it optimizes resource usage and simplifies cache management.
Delving into Dynamic Informers
While static informers, created via informers.NewSharedInformerFactory(clientset, resyncPeriod), are excellent for core Kubernetes resources and CRDs with generated Go types, they fall short when you need to interact with resources whose GroupVersionKind (GVK) isn't known at compile time. This is where Dynamic Informers step in, offering unparalleled flexibility.
What are Dynamic Informers?
Dynamic informers provide a mechanism to interact with any Kubernetes API resource, including custom resources, without needing their specific Go types. Instead of working with typed objects like corev1.Pod, dynamic informers operate on unstructured.Unstructured objects. These are generic Go maps that represent the raw JSON structure of a Kubernetes object, allowing you to access fields dynamically using string keys.
The core components for dynamic informers are:
- dynamic.Interface: This is the dynamic client, obtained via dynamic.NewForConfig(cfg). It provides methods to interact with resources using their schema.GroupVersionResource (GVR) rather than concrete Go types. For example, to get a list of pods, you'd specify schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}.
- dynamicinformer.NewFilteredDynamicSharedInformerFactory: Similar to the static SharedInformerFactory, this factory creates and manages dynamic informers. The Filtered variant allows you to apply filters (e.g., label selectors, field selectors) at the factory level, which are propagated to all informers it creates. Crucially, instead of type arguments, you provide a GroupVersionResource to the factory to specify which resources to watch.
The ability to operate on unstructured.Unstructured objects is the defining characteristic of dynamic informers. It means your controller can be generic; it doesn't need to know the exact schema of a CRD at compile time. It can retrieve any resource, parse its content, and react accordingly.
Use Cases for Dynamic Informers
Dynamic informers unlock a plethora of advanced use cases:
- Building Generic Controllers and Operators: Imagine an operator that manages different types of "database" CRDs (e.g., MySQLInstance, PostgresCluster, RedisCache) from various vendors. With static informers, you'd need to generate Go types for each, potentially integrating multiple client-go libraries. A dynamic informer can watch all resources matching a certain label or API group prefix, processing them generically. This is ideal for building extensible and vendor-agnostic operators.
- Cross-Resource Monitoring and Policy Engines: A policy engine might need to ensure that specific annotations are present on all Deployment objects, StatefulSet objects, and a custom NetworkPolicyRule CRD. A dynamic informer can subscribe to all these different GVRs, receive their events, and apply policies uniformly, without hardcoding each resource type.
- Handling CRDs Without Codegen: Sometimes, you might consume CRDs from a third party for which you don't have generated Go types, or you might want to avoid the overhead of code generation for simple CRDs. Dynamic informers allow you to interact with these CRDs directly, treating their data as generic JSON.
- Runtime Resource Discovery: In highly dynamic environments, new CRDs might be installed or existing ones updated at runtime. A dynamic informer-based solution can potentially discover these new resources (perhaps by watching CustomResourceDefinition objects themselves) and then dynamically start watching them, adapting its behavior without requiring a full redeployment. This is crucial for building self-adapting systems.
- Observability Tools: Building dashboards or diagnostic tools that need to display information about any resource in the cluster can leverage dynamic informers to collect data broadly without being constrained by pre-defined types.
These use cases highlight the power and necessity of dynamic informers in creating flexible, powerful, and future-proof Kubernetes tooling.
Setting Up Your Go Environment
To begin our practical exploration, let's ensure your development environment is correctly set up.
Prerequisites:
- Go: Version 1.18 or higher is recommended.
- Kubernetes Cluster: A local cluster like Minikube or Kind, or a remote cluster accessible via kubectl.
- kubectl: Configured to interact with your cluster.
Project Setup:
First, create a new Go module:
mkdir golang-dynamic-informer-guide
cd golang-dynamic-informer-guide
go mod init golang-dynamic-informer-guide
Next, add the client-go dependency, aligning its version with the Kubernetes release you're targeting. For this guide, let's assume a recent Kubernetes version like 1.28.x or 1.29.x, which corresponds to client-go v0.28.x or v0.29.x respectively. Check the client-go releases on GitHub for the exact version matching your cluster.
go get k8s.io/client-go@v0.29.0 # Use your desired client-go version
This command fetches client-go and its transitive dependencies, preparing your project for development.
Implementing a Single Dynamic Informer
Let's start with a foundational example: setting up a dynamic informer to watch a single, well-known resource type, like Pods. This will illustrate the basic plumbing before we tackle multi-resource watching.
The goal here is to demonstrate how to:
1. Obtain a Kubernetes configuration.
2. Create a dynamic client.
3. Instantiate a dynamic shared informer factory.
4. Get an informer for a specific GroupVersionResource.
5. Register event handlers.
6. Start the informer and wait for its cache to sync.
package main
import (
"context"
"os"
"os/signal"
"syscall"

"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/dynamic/dynamicinformer"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2"
)
func main() {
// 1. Load Kubernetes configuration
// Try to use Kubeconfig from ~/.kube/config, fallback to in-cluster config
kubeconfig := os.Getenv("KUBECONFIG")
if kubeconfig == "" {
kubeconfig = clientcmd.RecommendedHomeFile
}
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
klog.Fatalf("Error building kubeconfig: %v", err)
}
// 2. Create a dynamic client
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
klog.Fatalf("Error creating dynamic client: %v", err)
}
// Define the GroupVersionResource for Pods
podsGVR := schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"}
// 3. Instantiate a dynamic shared informer factory
// Resync period of 0 means the cache will not periodically resync.
// The namespace argument restricts the watch to a single namespace; the
// empty string (metav1.NamespaceAll) watches all namespaces. The final
// argument is a TweakListOptionsFunc for label/field selectors (nil = none).
factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynamicClient, 0, "", nil)
// 4. Get an informer for Pods
informer := factory.ForResource(podsGVR).Informer()
// 5. Register event handlers
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
unstructuredObj, ok := obj.(*unstructured.Unstructured)
if !ok {
klog.Error("AddFunc: Expected *unstructured.Unstructured but got something else.")
return
}
klog.Infof("ADD event: Pod %s/%s created", unstructuredObj.GetNamespace(), unstructuredObj.GetName())
// You can access other fields dynamically
if labels := unstructuredObj.GetLabels(); labels != nil {
klog.Infof(" Labels: %v", labels)
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldUnstructured, ok := oldObj.(*unstructured.Unstructured)
if !ok {
klog.Error("UpdateFunc: Old object: Expected *unstructured.Unstructured.")
return
}
newUnstructured, ok := newObj.(*unstructured.Unstructured)
if !ok {
klog.Error("UpdateFunc: New object: Expected *unstructured.Unstructured.")
return
}
klog.Infof("UPDATE event: Pod %s/%s updated", newUnstructured.GetNamespace(), newUnstructured.GetName())
// Compare relevant fields if needed
// For example, if you want to know if a specific annotation changed.
},
DeleteFunc: func(obj interface{}) {
unstructuredObj, ok := obj.(*unstructured.Unstructured)
if !ok {
// If the object is deleted while being processed, it might be a cache.DeletedFinalStateUnknown.
// In this case, the object itself might be a tombstone.
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
klog.Error("DeleteFunc: Expected *unstructured.Unstructured or cache.DeletedFinalStateUnknown.")
return
}
unstructuredObj, ok = tombstone.Obj.(*unstructured.Unstructured)
if !ok {
klog.Error("DeleteFunc: Tombstone object was not *unstructured.Unstructured.")
return
}
}
klog.Infof("DELETE event: Pod %s/%s deleted", unstructuredObj.GetNamespace(), unstructuredObj.GetName())
},
})
// Create a context for graceful shutdown
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Set up signal handler for graceful shutdown
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigChan
klog.Info("Received termination signal, shutting down...")
cancel()
}()
// 6. Start the informer factory (starts all informers within it)
// This runs in a goroutine, so it doesn't block.
klog.Info("Starting informer factory...")
factory.Start(ctx.Done())
// 7. Wait for the informer's cache to be synced
// This is crucial: don't process events until the cache is populated.
klog.Info("Waiting for informer cache to sync...")
if !cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) {
klog.Fatal("Failed to sync informer cache")
}
klog.Info("Informer cache synced successfully.")
// Keep the main goroutine running until context is cancelled
<-ctx.Done()
klog.Info("Program terminated.")
}
Detailed Explanation of Each Part:
- Kubernetes Configuration Loading: The code first attempts to load the Kubernetes client configuration, prioritizing the KUBECONFIG environment variable and falling back to the default ~/.kube/config file. Note that clientcmd.BuildConfigFromFlags only falls back to in-cluster configuration (ServiceAccount token and CA cert) when both of its arguments are empty; because this code always supplies a kubeconfig path, running inside a cluster would require calling rest.InClusterConfig() explicitly.
- Dynamic Client Creation: dynamic.NewForConfig(config) creates a generic dynamic.Interface. This client does not know about specific Go types; it operates solely on GroupVersionResource (GVR) and unstructured.Unstructured objects.
- Defining podsGVR: schema.GroupVersionResource{Group: "", Version: "v1", Resource: "pods"} specifies the target resource. Group is empty for core Kubernetes resources (like Pods and Services). Resource is typically the plural lowercase name of the resource.
- Dynamic Shared Informer Factory: dynamicinformer.NewFilteredDynamicSharedInformerFactory is used.
  - The 0 for resyncPeriod means the informer will not periodically resync its cache, relying solely on watch events. This is generally preferred for performance unless there's a specific need for periodic full reconciliations.
  - The namespace argument restricts watching to a single namespace; passing metav1.NamespaceAll (the empty string) watches resources across all namespaces.
  - The last nil is for tweakListOptions, which can be used to add label or field selectors to the initial LIST call and subsequent WATCH calls, thereby filtering events at the API server level.
- Getting the Informer: factory.ForResource(podsGVR).Informer() retrieves the specific informer for Pods from the factory. If this is the first time ForResource is called for podsGVR on this factory, a new informer is instantiated and added to the factory's management.
- Registering Event Handlers: informer.AddEventHandler is where you define the logic to execute when an Add, Update, or Delete event occurs.
  - AddFunc: Called when a new object is created.
  - UpdateFunc: Called when an existing object is modified. It provides both the oldObj and newObj.
  - DeleteFunc: Called when an object is deleted. It's important to handle cache.DeletedFinalStateUnknown here, which is a "tombstone" object that sometimes wraps the actual deleted object if it was deleted from the cache before being processed.
  - All event handlers receive interface{}. It's crucial to perform a type assertion to *unstructured.Unstructured to access the resource's data.
- Graceful Shutdown: The context.WithCancel and os.Signal handling ensure that the program can be cleanly shut down, allowing the informers to stop watching and release resources.
- Starting the Factory: factory.Start(ctx.Done()) launches all the informers managed by the factory in separate goroutines. The ctx.Done() channel provides a signal to gracefully stop these goroutines when the context is cancelled.
- Waiting for Cache Sync: cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) is a critical step. Before your controller starts processing events, its local cache must be fully populated with the current state of resources. This function blocks until every informer whose HasSynced function is passed in has performed its initial "List" operation and is synchronized. Processing events before the cache is synced can lead to incorrect behavior, as the controller might miss initial resources or make decisions based on incomplete data.
To run this example, save it as main.go and execute go run . in your module directory. Then try creating, modifying, and deleting Pods in your Kubernetes cluster using kubectl. You should observe the corresponding ADD, UPDATE, and DELETE events logged by your program.
The Core Challenge: Multi-Resource Watching
Watching a single resource type with a dynamic informer is straightforward. The real power, and often the complexity, emerges when you need to monitor multiple distinct resource types simultaneously. Consider a scenario where your controller needs to react to changes in Deployments, Services, and a custom Database CRD. Each of these has a different GroupVersionResource.
The challenges in multi-resource watching include:
- Different GVRs: Each resource type requires its own GroupVersionResource definition.
- Shared Factory Coordination: While a SharedInformerFactory can manage multiple informers, how do you collect events from all of them in a unified manner?
- Event Handling Disentanglement: Each informer's ResourceEventHandler will receive events for its specific resource type. How do you process these events without blocking the informer's internal processing loop?
- Maintaining State Across Resources: A controller often needs to make decisions based on the combined state of related resources (e.g., "when a Deployment changes, check its associated Service"). This requires careful coordination and access to the synced cache of all watched resources.
To address these challenges, common design patterns involve:
- Event Queues (Workqueues): This is the most prevalent pattern. Instead of processing events directly within the informer's AddFunc, UpdateFunc, and DeleteFunc, these functions simply add the key (e.g., namespace/name) of the affected object to a workqueue. A separate set of worker goroutines then picks items from this workqueue and processes them, effectively decoupling event reception from event processing. This prevents slow processing logic from blocking the informer's event stream.
- Listers for Cross-Resource Queries: Once events are received and pushed to a workqueue, the worker goroutines can use the listers provided by the informers to fetch the current state of the involved object (and potentially related objects) from the local cache.
Advanced Multi-Resource Watching with Dynamic Informers
Now, let's explore how to implement multi-resource watching efficiently using a shared dynamic informer factory and a workqueue.
Method 1: Multiple Informers within a Shared Factory with a Workqueue
This method is the most common and robust approach. We'll set up a single dynamicinformer.NewFilteredDynamicSharedInformerFactory and then use it to create informers for several GroupVersionResources. All event handlers will push the object's key to a single workqueue, which will then be processed by a dedicated worker.
We'll watch Pods, Deployments, and Services as an example.
package main
import (
"context"
"flag"
"fmt"
"os"
"os/signal"
"syscall"

"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/dynamic/dynamicinformer"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"
)
// Controller represents a Kubernetes controller that watches multiple resources.
type Controller struct {
dynamicClient dynamic.Interface
informers map[schema.GroupVersionResource]cache.SharedIndexInformer
listers map[schema.GroupVersionResource]cache.GenericLister
workqueue workqueue.RateLimitingInterface
cachesSynced []cache.InformerSynced
cancel context.CancelFunc // For graceful shutdown
}
// NewController creates a new Controller instance.
func NewController(dynamicClient dynamic.Interface, gvrs []schema.GroupVersionResource) *Controller {
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
// "" (metav1.NamespaceAll) watches resources in every namespace
factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynamicClient, 0, "", nil)
informers := make(map[schema.GroupVersionResource]cache.SharedIndexInformer)
listers := make(map[schema.GroupVersionResource]cache.GenericLister)
var cachesSynced []cache.InformerSynced
for _, gvr := range gvrs {
gvr := gvr // capture the loop variable for the closures below (required before Go 1.22)
informer := factory.ForResource(gvr).Informer()
informers[gvr] = informer
listers[gvr] = factory.ForResource(gvr).Lister()
cachesSynced = append(cachesSynced, informer.HasSynced)
// Add event handlers to push keys to the workqueue
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
queue.Add(key)
klog.Infof("Added to queue: %s (GVR: %s)", key, gvr.String())
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(newObj)
if err == nil {
queue.Add(key)
klog.Infof("Queued update: %s (GVR: %s)", key, gvr.String())
}
},
DeleteFunc: func(obj interface{}) {
key, err := cache.MetaNamespaceKeyFunc(obj)
if err == nil {
queue.Add(key)
klog.Infof("Queued delete: %s (GVR: %s)", key, gvr.String())
}
},
})
}
ctx, cancel := context.WithCancel(context.Background())
factory.Start(ctx.Done()) // Start all informers managed by the factory
return &Controller{
dynamicClient: dynamicClient,
informers: informers,
listers: listers,
workqueue: queue,
cachesSynced: cachesSynced,
cancel: cancel,
}
}
// Run starts the controller.
func (c *Controller) Run(workers int, stopCh <-chan struct{}) error {
defer c.workqueue.ShutDown()
klog.Info("Waiting for informer caches to sync")
if !cache.WaitForCacheSync(stopCh, c.cachesSynced...) {
return fmt.Errorf("failed to wait for cache sync")
}
klog.Info("Informer caches synced successfully")
klog.Infof("Starting %d workers", workers)
for i := 0; i < workers; i++ {
go c.runWorker()
}
<-stopCh
klog.Info("Shutting down workers")
return nil
}
// runWorker is a long-running function that will continually call the
// processNextWorkItem function in order to read and process a message off the
// workqueue.
func (c *Controller) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem reads a single work item off the workqueue and
// attempts to process it.
func (c *Controller) processNextWorkItem() bool {
obj, shutdown := c.workqueue.Get()
if shutdown {
return false
}
// We call Done here so the workqueue knows we have finished processing this item.
// We also use a defer func to ensure that no matter what, we call Done.
defer c.workqueue.Done(obj)
var key string
var ok bool
if key, ok = obj.(string); !ok {
c.workqueue.Forget(obj) // We don't know how to handle this, so don't retry.
klog.Errorf("Expected string in workqueue but got %#v", obj)
return true
}
// Run the reconcile logic (your controller's core business logic)
if err := c.reconcile(key); err != nil {
// If an error occurs, handle it by retrying the item later.
// We use RateLimiter to prevent spamming the API server or logging.
c.workqueue.AddRateLimited(key)
klog.Errorf("Error reconciling %q: %v, requeued item", key, err)
} else {
// If no error occurs, we Forget this item so it's not retried.
c.workqueue.Forget(obj)
klog.Infof("Successfully reconciled %q", key)
}
return true
}
// reconcile contains the logic to process a work item.
// It retrieves the object from the lister and performs necessary actions.
func (c *Controller) reconcile(key string) error {
namespace, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
klog.Errorf("invalid resource key: %s", key)
return nil // Don't retry malformed keys
}
// This is the challenging part: how to know WHICH GVR the key belongs to?
// In a generic controller, you might infer based on context,
// or have a more sophisticated workqueue item that includes GVR.
// For simplicity in this example, we'll iterate through all listers
// and try to find the object. In a real controller, you'd likely
// have a specific GVR associated with the workqueue item.
// For instance, the key pushed to the queue could be a struct like {GVR, Namespace, Name}.
found := false
for gvr, lister := range c.listers {
var obj *unstructured.Unstructured
if namespace == "" {
// Cluster-scoped resource: the key carries no namespace
u, getErr := lister.Get(name)
if getErr != nil {
// klog.V(5).Infof("Could not get %s/%s from lister for GVR %s: %v", namespace, name, gvr.String(), getErr)
continue
}
obj = u.(*unstructured.Unstructured)
} else {
// Namespaced resource
u, getErr := lister.ByNamespace(namespace).Get(name)
if getErr != nil {
// klog.V(5).Infof("Could not get %s/%s from lister for GVR %s: %v", namespace, name, gvr.String(), getErr)
continue
}
obj = u.(*unstructured.Unstructured)
}
// If object is found
if obj != nil {
// IMPORTANT: The object retrieved here from the lister might not be the exact object that triggered the event.
// It's the LATEST state of the object in the cache.
// The object.GetObjectKind().GroupVersionKind() or obj.GroupVersionKind() can identify the type.
currentGVK := obj.GroupVersionKind()
klog.Infof("Reconciling %s/%s (GVK: %s)", obj.GetNamespace(), obj.GetName(), currentGVK.String())
// Here is where you'd implement your specific logic based on the resource type
switch currentGVK.Kind {
case "Pod":
// The Nested* helpers avoid panics when a field is absent
phase, _, _ := unstructured.NestedString(obj.Object, "status", "phase")
klog.Infof("  Pod event: Phase = %s", phase)
case "Deployment":
replicas, _, _ := unstructured.NestedInt64(obj.Object, "spec", "replicas")
klog.Infof("  Deployment event: Replicas = %d", replicas)
case "Service":
svcType, _, _ := unstructured.NestedString(obj.Object, "spec", "type")
clusterIP, _, _ := unstructured.NestedString(obj.Object, "spec", "clusterIP")
klog.Infof("  Service event: Type = %s, ClusterIP = %s", svcType, clusterIP)
default:
klog.Infof("  Unknown GVK: %s", currentGVK.String())
}
found = true
break // Object found for this key, stop iterating listers for this key
}
}
if !found {
// This means the object was deleted from the cache or never existed for any watched GVR.
// If it's a delete event, we still want to process it if we need to clean up external resources.
// In a production system, you'd likely have a specific event structure in the workqueue
// that tells you if it was a delete event or if the object simply vanished.
klog.Infof("Object %s not found in any lister, likely deleted.", key)
}
return nil
}
func main() {
klog.InitFlags(nil)
flag.Parse()
// Load Kubernetes configuration
kubeconfig := os.Getenv("KUBECONFIG")
if kubeconfig == "" {
kubeconfig = clientcmd.RecommendedHomeFile
}
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
klog.Fatalf("Error building kubeconfig: %v", err)
}
// Create dynamic client
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
klog.Fatalf("Error creating dynamic client: %v", err)
}
// Define the resources we want to watch
watchedGVRs := []schema.GroupVersionResource{
{Group: "", Version: "v1", Resource: "pods"},
{Group: "apps", Version: "v1", Resource: "deployments"},
{Group: "", Version: "v1", Resource: "services"},
// Example for a Custom Resource Definition (CRD) if it exists in your cluster
// {Group: "stable.example.com", Version: "v1", Resource: "databases"},
}
// Create the controller
controller := NewController(dynamicClient, watchedGVRs)
// Set up signal handler for graceful shutdown
stopCh := make(chan struct{})
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigChan
klog.Info("Received termination signal, stopping controller...")
close(stopCh) // Signal the controller to stop
controller.cancel() // Cancel the informer factory context
}()
// Run the controller with 2 workers
klog.Info("Starting controller...")
if err = controller.Run(2, stopCh); err != nil {
klog.Fatalf("Error running controller: %v", err)
}
klog.Info("Controller stopped.")
}
This example introduces a `Controller` struct to encapsulate the logic.

- `NewController`:
  - Initializes a `workqueue.RateLimitingInterface` (which handles backoff for failed items).
  - Creates a shared informer factory via `dynamicinformer.NewFilteredDynamicSharedInformerFactory`.
  - Iterates through the list of `schema.GroupVersionResource`s (`watchedGVRs`). For each GVR:
    - It gets an informer and a lister from the factory.
    - It appends the informer's `HasSynced` function to a slice; this is used later to wait for all caches to sync.
    - Crucially, it adds `ResourceEventHandlerFuncs` where `AddFunc`, `UpdateFunc`, and `DeleteFunc` simply extract the object's `namespace/name` key using `cache.MetaNamespaceKeyFunc` and add it to the shared workqueue. This decouples event reception from actual processing.
  - Starts the factory with `factory.Start(ctx.Done())`, which launches goroutines for all informers to begin listing and watching.
- `Controller.Run`:
  - Waits for all informers' caches to sync using `cache.WaitForCacheSync(stopCh, c.cachesSynced...)`. This is vital for ensuring your controller operates on complete data.
  - Launches `workers` goroutines, each calling `runWorker`.
  - Blocks until `stopCh` is closed, signaling a graceful shutdown.
- `Controller.runWorker`: continuously calls `processNextWorkItem` to pull items from the workqueue.
- `Controller.processNextWorkItem`:
  - Gets an item from the workqueue.
  - Calls `c.reconcile(key)` to process the item.
  - Handles retries for failed items using `workqueue.AddRateLimited(key)` and marks successfully processed items with `Forget`.
- `Controller.reconcile`: this is the core business logic.
  - It splits the `key` into `namespace` and `name`.
  - The challenge of generic `reconcile`: in this simplified example, the `reconcile` function iterates through all registered listers to try to find the object corresponding to the `key`. In a more robust, generic controller, the item pushed to the workqueue would ideally be a custom struct containing both the `key` and the GVR of the object that triggered the event, allowing `reconcile` to query the correct lister directly without iteration.
  - Once the `unstructured.Unstructured` object is retrieved from a lister, it prints details based on its `Kind`. This is where your specific controller logic for each resource type would reside. For example, if it's a `Deployment`, you might inspect its `spec.replicas`; if it's a `Service`, you might check its `spec.clusterIP`.
This setup provides a highly scalable and robust way to watch multiple resources. The workqueue handles concurrency, retries, and prevents your processing logic from blocking the informer's event delivery.
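The refinement suggested in the reconcile discussion (enqueueing the GVR together with the key) can be sketched with a small work-item type. The names below are illustrative, not part of client-go; `gvr` simply mirrors `schema.GroupVersionResource` so the sketch stays dependency-free.

```go
package main

import "fmt"

// gvr mirrors schema.GroupVersionResource so this sketch stays
// dependency-free; a real controller would use the apimachinery type.
type gvr struct {
	Group, Version, Resource string
}

// workItem is the value pushed onto the workqueue instead of a bare
// "namespace/name" string, so reconcile can pick the right lister
// directly instead of iterating over all of them.
type workItem struct {
	GVR gvr
	Key string // "namespace/name", as produced by cache.MetaNamespaceKeyFunc
}

func (w workItem) String() string {
	return fmt.Sprintf("%s/%s/%s: %s", w.GVR.Group, w.GVR.Version, w.GVR.Resource, w.Key)
}

func main() {
	item := workItem{
		GVR: gvr{Group: "apps", Version: "v1", Resource: "deployments"},
		Key: "default/nginx",
	}
	fmt.Println(item)
}
```

Because `workItem` is a comparable struct, the workqueue still deduplicates repeated events for the same object, which is the property the string key gave you.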
Table: Comparison of Static vs. Dynamic client-go Components
To further clarify the differences and highlight the situations where each approach shines, let's look at a comparative table.
| Feature/Component | Static Client-Go (`clientset`) | Dynamic Client-Go (`dynamic.Interface`) |
|---|---|---|
| API Interaction Type | Type-safe Go structs (e.g., `corev1.Pod`) | Generic `unstructured.Unstructured` (Go map/`interface{}`) |
| Resource Specification | Go type (e.g., `clientset.CoreV1().Pods()`) | `schema.GroupVersionResource` (GVR) |
| Compile-Time Knowledge | Requires generated Go types for all resources | No compile-time knowledge of resource schemas needed (runtime flexible) |
| Codegen Requirement | Yes, for CRDs (using `controller-gen`) | No, works directly with raw API structures |
| Informer Factory | `informers.NewSharedInformerFactory` (`clientset`) | `dynamicinformer.NewFilteredDynamicSharedInformerFactory` (`dynamic.Interface`) |
| Lister Type | Type-safe listers (e.g., `podLister.Pods(namespace).Get(name)`) | `cache.GenericLister` (returns `runtime.Object`, which is `*unstructured.Unstructured`) |
| Primary Use Case | Building controllers/tools for well-defined, known API resources, especially core Kubernetes types and stable CRDs | Building generic controllers, operators for unknown/dynamic CRDs, policy engines, and broad observability tools |
| Error Handling (Type) | Compiler errors for type mismatches, runtime errors for API issues | Runtime errors for incorrect field access on `unstructured.Unstructured`, plus API errors |
| Performance | Slightly better due to direct struct access | Negligible overhead, but requires manual type assertion/map access |
| Maintainability | High for stable schemas; requires codegen updates for schema changes | High for diverse/evolving schemas; requires robust runtime data handling |
This table clearly illustrates that while static client-go offers strong type safety and compile-time checks, dynamic client-go provides unparalleled flexibility and adaptability, making it the preferred choice for truly generic or evolving Kubernetes interactions.
Handling Unstructured Data and Type Conversion
Working with unstructured.Unstructured objects is fundamental to dynamic informers. These objects represent the raw JSON data of a Kubernetes resource as a nested map of string to interface{}, where interface{} can be string, int64, bool, []interface{}, or map[string]interface{}.
unstructured.Unstructured Basics:
An `unstructured.Unstructured` object has top-level methods for common metadata fields and an `Object` field for the actual spec, status, etc.

- Accessing metadata:
  - `obj.GetName()`
  - `obj.GetNamespace()`
  - `obj.GetLabels()`
  - `obj.GetAnnotations()`
  - `obj.GetUID()`
  - `obj.GetResourceVersion()`
  - `obj.GroupVersionKind()` (returns the GVK of the object)
- Accessing fields within `Object`: the `Object` field is a `map[string]interface{}`. You need to perform type assertions as you traverse deeper into the structure.

```go
// Example: accessing spec.replicas of a Deployment
if spec, ok := obj.Object["spec"].(map[string]interface{}); ok {
	if replicas, ok := spec["replicas"].(int64); ok { // Kubernetes numbers are typically int64
		klog.Infof("Deployment %s has %d replicas", obj.GetName(), replicas)
	}
}

// Example: accessing the container image from spec.template.spec.containers[0].image
if spec, ok := obj.Object["spec"].(map[string]interface{}); ok {
	if template, ok := spec["template"].(map[string]interface{}); ok {
		if podSpec, ok := template["spec"].(map[string]interface{}); ok {
			if containers, ok := podSpec["containers"].([]interface{}); ok && len(containers) > 0 {
				if firstContainer, ok := containers[0].(map[string]interface{}); ok {
					if image, ok := firstContainer["image"].(string); ok {
						klog.Infof("First container image: %s", image)
					}
				}
			}
		}
	}
}
```

This nested type assertion can become verbose. Helper libraries or custom functions can simplify this, but the underlying principle remains the same.
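For deep paths, `k8s.io/apimachinery` ships helpers such as `unstructured.NestedInt64` and `unstructured.NestedString` that collapse this boilerplate into a single call. Below is a dependency-free sketch of what such a helper does; `nestedInt64` is our own illustrative function, not the library one.

```go
package main

import "fmt"

// nestedInt64 walks a nested map[string]interface{} along the given
// field path and type-asserts the leaf as int64. This mirrors the
// behavior of unstructured.NestedInt64 in simplified form.
func nestedInt64(obj map[string]interface{}, fields ...string) (int64, bool) {
	var cur interface{} = obj
	for _, f := range fields {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return 0, false // intermediate node is not a map
		}
		cur, ok = m[f]
		if !ok {
			return 0, false // field missing
		}
	}
	v, ok := cur.(int64)
	return v, ok
}

func main() {
	// Shape of a Deployment's Object map after JSON decoding.
	deployment := map[string]interface{}{
		"spec": map[string]interface{}{"replicas": int64(3)},
	}
	if replicas, ok := nestedInt64(deployment, "spec", "replicas"); ok {
		fmt.Printf("replicas: %d\n", replicas) // prints "replicas: 3"
	}
}
```

In production code, prefer the library helpers: they also return an error describing where in the path the lookup failed, which makes debugging malformed objects much easier.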
Converting to Typed Objects (If CRD Schema is Known):
If you do have the Go type for a CRD (perhaps from client-go codegen, or you've defined it yourself), you can convert an unstructured.Unstructured object into its typed Go struct. This is useful when you want to leverage type safety for specific operations after a generic dynamic watch.
import (
	// ... other imports
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	// Assume you have a custom type MyCRD and its list MyCRDList
	// "github.com/your-org/your-repo/pkg/apis/stable.example.com/v1"
)
// Example MyCRD struct
// type MyCRD struct {
// metav1.TypeMeta `json:",inline"`
// metav1.ObjectMeta `json:"metadata,omitempty"`
// Spec MyCRDSpec `json:"spec"`
// Status MyCRDStatus `json:"status,omitempty"`
// }
// type MyCRDSpec struct { /* ... */ }
// type MyCRDStatus struct { /* ... */ }
// Helper function to convert unstructured to typed
func toTypedObject(unstructuredObj *unstructured.Unstructured, obj runtime.Object) error {
// First, serialize the unstructured object to JSON
jsonBytes, err := unstructuredObj.MarshalJSON()
if err != nil {
return fmt.Errorf("failed to marshal unstructured object to JSON: %w", err)
}
// Then, deserialize the JSON into the typed object
err = json.Unmarshal(jsonBytes, obj)
if err != nil {
return fmt.Errorf("failed to unmarshal JSON to typed object: %w", err)
}
return nil
}
// In your reconcile function:
// Assume obj is *unstructured.Unstructured for a "MyCRD" resource
myCRD := &v1.MyCRD{} // Assuming v1.MyCRD is your generated Go type
if err := toTypedObject(obj, myCRD); err != nil {
klog.Errorf("Failed to convert unstructured to MyCRD: %v", err)
return err
}
// Now you can work with myCRD.Spec and myCRD.Status type-safely
klog.Infof("Typed MyCRD instance: %s, field: %s", myCRD.Name, myCRD.Spec.SomeField)
This conversion uses MarshalJSON and json.Unmarshal, so it works for any Go type with the appropriate `json` struct tags; the round-trip itself does not require registering the type with a runtime.Scheme. As an alternative that avoids serializing to JSON, `runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, myCRD)` performs the same map-to-struct conversion directly. You only need a runtime.Scheme registration when other machinery, such as codecs or generic runtime.Object handling elsewhere in your program, depends on it.
Error Handling and Robustness
Building reliable Kubernetes controllers requires meticulous attention to error handling and robustness. Dynamic informers are powerful, but their proper integration demands careful consideration of potential failure points.
Context Cancellation (context.WithCancel):
As shown in the examples, context.WithCancel is fundamental for graceful shutdown. When ctx.Done() is closed, all goroutines started with this context (like informer factories and worker loops) should cease their operations. This prevents resource leaks and ensures a clean exit.
Informer Synchronization (WaitForCacheSync):
Never proceed with processing work items before cache.WaitForCacheSync returns true. This guarantees that your controller's local cache is fully populated and consistent with the API server's state at the time of the initial list. Without synchronization, your controller might make decisions based on stale or incomplete data, leading to incorrect reconciliations or even data corruption.
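The gate enforced by `cache.WaitForCacheSync` can be pictured as polling every informer's `HasSynced` function until all report true. Below is a simplified, dependency-free sketch of that loop; the real implementation polls on an interval with proper scheduling rather than spinning.

```go
package main

import "fmt"

// waitForCacheSync blocks until every HasSynced-style function reports
// true, or until stopCh closes. A simplified mirror of
// cache.WaitForCacheSync (which polls on an interval, not a busy loop).
func waitForCacheSync(stopCh <-chan struct{}, cacheSyncs ...func() bool) bool {
	for {
		select {
		case <-stopCh:
			return false // shutdown requested before caches synced
		default:
		}
		allSynced := true
		for _, synced := range cacheSyncs {
			if !synced() {
				allSynced = false
				break
			}
		}
		if allSynced {
			return true
		}
	}
}

func main() {
	calls := 0
	hasSynced := func() bool { calls++; return calls > 2 } // syncs on the third poll
	stopCh := make(chan struct{})
	fmt.Println(waitForCacheSync(stopCh, hasSynced))
}
```

The important property is the return value: a `false` result means shutdown was requested mid-sync, and the controller must exit rather than start its workers against a half-populated cache.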
Workqueue Error Handling, Retries, Backoff:
The workqueue.RateLimitingInterface is a crucial component for building resilient controllers.
- Retries: when `reconcile` returns an error, `c.workqueue.AddRateLimited(key)` puts the item back into the queue, but with a delay. This prevents a constant stream of failures from overwhelming your controller or the API server.
- Backoff: the default rate limiter (`workqueue.DefaultControllerRateLimiter()`) implements exponential backoff, increasing the delay between retries for a given item up to a certain maximum. This is essential for handling transient errors gracefully.
- `Forget` vs. `Done`:
  - `workqueue.Done(obj)`: always call this when you finish processing an item, regardless of success or failure. It signals the queue that the item is no longer actively being processed.
  - `workqueue.Forget(obj)`: call this when an item has been successfully processed and you don't need to retry it, or if an item is unrecoverable (e.g., a malformed key). This removes the item from the rate limiter's tracking.
- Max retries: for critical operations, you might want to implement a maximum number of retries before giving up on an item (e.g., if `c.workqueue.NumRequeues(key)` exceeds a threshold). At that point, you might log the item as permanently failed and potentially send an alert.
Logging Best Practices:
Use a structured logger like `klog/v2` (as shown in the examples) or Zap.

- Log sufficient context (e.g., namespace, name, GVK, error details) for debugging.
- Use appropriate log levels (`INFO`, `WARN`, `ERROR`, `FATAL`, and `V(level)` for verbose debugging).
- Avoid excessively noisy logs during normal operation.
Dealing with API Server Connection Issues:
client-go's informers are designed to be resilient. They automatically handle:

- Watch disconnections: if the watch connection breaks, the informer will attempt to re-establish it.
- Resyncs: periodically (or on reconnect), informers perform a full List operation to resynchronize their cache, ensuring eventual consistency.
- Token expiration: client-go's underlying HTTP client should handle renewing ServiceAccount tokens or other authentication mechanisms.
However, your controller's logic should still be prepared for eventual consistency. An object you query from the lister might be slightly out of date for a very brief period during a resync or watch reconnection. Design your reconciliation loop to be idempotent: applying the same logic multiple times should produce the same result, minimizing side effects.
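In practice, idempotency often comes down to comparing observed state against desired state before writing anything. The helper below is a hypothetical illustration of that check-then-apply pattern; repeated calls converge to the same result.

```go
package main

import "fmt"

// ensureLabel applies a desired label to an object's label map and
// reports whether anything changed. Calling it repeatedly converges:
// the second call is a no-op, which is the essence of an idempotent
// reconcile step (only issue an Update when a change was made).
func ensureLabel(labels map[string]string, key, want string) bool {
	if labels[key] == want {
		return false // already at desired state, nothing to do
	}
	labels[key] = want
	return true
}

func main() {
	labels := map[string]string{}
	fmt.Println(ensureLabel(labels, "managed-by", "my-controller")) // true: state changed
	fmt.Println(ensureLabel(labels, "managed-by", "my-controller")) // false: no-op
}
```

A reconcile loop built from steps like this can safely be re-run after a watch reconnection or a duplicate event, because re-applying an already-applied change does nothing.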
Performance Considerations and Best Practices
While dynamic informers offer immense flexibility, ensuring your controller performs efficiently and doesn't become a resource hog is crucial.
Minimizing API Server Calls:
The primary benefit of informers is reducing API server load. Ensure your `reconcile` logic primarily uses the informers' listers for fetching objects. Only make direct `dynamicClient` calls for:

- Creating, updating, or deleting objects.
- Fetching resources not watched by any informer (e.g., during discovery or very specific one-off requests).
- Getting sub-resources or status endpoints that aren't exposed through the main object.
Efficient Event Processing:
- Workqueues: as discussed, workqueues are essential. They buffer events, allowing your controller to process them at its own pace and preventing it from being overwhelmed by a burst of events.
- Number of workers: tune the number of `runWorker` goroutines based on your controller's workload and available CPU. Too few workers will create a backlog; too many can lead to contention and excessive resource usage. A good starting point is 2-5 workers; adjust based on profiling.
- Avoid long-running tasks in `reconcile`: if your reconciliation logic involves heavy computation or external API calls, consider offloading those tasks to separate goroutines or another message queue, ensuring `reconcile` returns quickly. Blocking `reconcile` functions will slow down your entire controller.
Memory Usage of the Cache:
Each informer maintains an in-memory cache of all objects it watches. If you watch a resource type that has tens of thousands of instances (e.g., Pods in a very large cluster), this cache can consume significant memory.

- Resource filtering: apply label or field selectors through the factory's `tweakListOptions` function. This tells the API server to only send you events and initial lists for objects matching your criteria, significantly reducing cache size and network traffic.

```go
// Example: only watch objects with specific labels
selector, _ := labels.Parse("app=my-app,env=production")
tweakListOptions := func(options *metav1.ListOptions) {
	options.LabelSelector = selector.String()
}
factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(
	dynamicClient, 0, metav1.NamespaceAll, tweakListOptions)
// Then get informers from this factory
```

- Namespace filtering: use `dynamicinformer.NewFilteredDynamicSharedInformerFactory(dynamicClient, 0, "my-namespace", nil)` if your controller only cares about resources in a single namespace.
Throttling and Rate Limiting:
Beyond the workqueue's rate limiter, if your controller makes external API calls (to databases, third-party services, etc.), implement additional rate limiting and circuit breakers to prevent your controller from overloading those external systems.
Scalability of the Controller:
For high availability and horizontal scaling, deploy your controller with leader election (e.g., using `k8s.io/client-go/tools/leaderelection`). This ensures that only one instance of your controller is active at any given time, preventing duplicate processing and conflicts, while the others stand by as hot spares.
Integration with Broader Systems & APIPark Mention
The dynamic resource management capabilities provided by Golang's dynamic informers are not isolated features; they are foundational building blocks for sophisticated Kubernetes operators, custom controllers, and comprehensive cloud-native platforms. By allowing real-time, event-driven responses to changes across any Kubernetes resource, dynamic informers enable the creation of highly responsive and adaptable systems.
These underlying mechanisms, which provide granular control and observability over Kubernetes resources, are crucial for organizations that manage complex API ecosystems. Modern applications, especially those built on microservices architectures, rely heavily on APIs for internal communication and external exposure. As the number and diversity of these APIs grow, robust management and governance become paramount. The ability to monitor dynamic changes in infrastructure components can directly influence how effectively an API Gateway functions, how quickly new services are discovered, or how securely policies are applied across an Open Platform.
For organizations looking to streamline the management and exposure of these underlying services, particularly when interacting with complex API ecosystems or serving as a central gateway for various applications, robust platforms become essential. This is where solutions like APIPark shine. APIPark offers an open-source AI gateway and API management platform, designed to simplify the integration and deployment of both AI and REST services. It enables a unified approach to API governance, vital for any modern Open Platform initiative. By leveraging powerful backend mechanisms, similar to how dynamic informers provide granular control over Kubernetes resources, APIPark helps developers and enterprises manage the entire API lifecycle, from design to deployment, ensuring security and efficiency. Whether it's orchestrating Kubernetes resources with dynamic informers or centralizing API management with tools like APIPark, the goal remains the same: to build resilient, scalable, and manageable cloud-native infrastructure that meets the demands of modern software development.
Conclusion
The journey through Golang Dynamic Informers reveals a powerful and indispensable tool for any developer working within the Kubernetes ecosystem. We've traversed the foundational concepts of client-go, understood the inherent advantages of informers over direct API calls, and then plunged into the flexibility offered by dynamic informers. From watching single, arbitrary resource types to orchestrating complex multi-resource reconciliation loops with workqueues, dynamic informers empower you to build highly adaptable, generic, and robust Kubernetes controllers and operators.
We explored the practicalities of setting up your development environment, detailed the step-by-step implementation of dynamic informers, and tackled the intricacies of handling unstructured.Unstructured data. Crucially, we emphasized the importance of robust error handling, graceful shutdowns, and performance optimization techniques such as rate limiting, efficient caching, and resource filtering. The ability to react to any resource change at runtime, without needing compile-time knowledge of its schema, fundamentally shifts how we approach Kubernetes-native application development, opening doors to more extensible and future-proof architectures.
Mastering dynamic informers is not merely about understanding another client-go feature; it's about embracing a paradigm of reactive, resilient, and truly cloud-native resource management. As Kubernetes continues to evolve, with an ever-growing array of Custom Resource Definitions and dynamic workloads, the insights gained from this guide will prove invaluable in crafting intelligent, self-healing, and performant systems that can thrive in the most demanding environments.
Frequently Asked Questions (FAQs)
1. What is the fundamental difference between static and dynamic informers in client-go?
The primary difference lies in how they handle resource types. Static informers (clientset.NewSharedInformerFactory) are type-safe and work with specific Go structs generated from Kubernetes API definitions (e.g., corev1.Pod). They require compile-time knowledge of the resource's schema. Dynamic informers (dynamicinformer.NewFilteredDynamicSharedInformerFactory), conversely, operate on unstructured.Unstructured objects, which are generic map[string]interface{} representations of raw JSON. They provide runtime flexibility, allowing you to watch any resource type (including CRDs) without needing its specific Go type at compile time.
2. When should I choose to use a dynamic informer over a static informer?
Dynamic informers are ideal for several scenarios:

- Generic controllers: when building a controller that needs to manage arbitrary or unknown Custom Resource Definitions (CRDs) without prior knowledge of their Go types.
- Policy engines: for implementing policies that apply across a broad range of resource types, including those that might not be known when the engine is developed.
- Runtime resource discovery: when you need to adapt to new CRDs being installed in the cluster dynamically.
- Avoiding codegen overhead: for simple CRDs where generating Go types and recompiling your controller might be overkill.
- Broad observability tools: building tools that need to inspect or report on a wide variety of cluster resources.
If you are working with well-defined, stable Kubernetes core resources or CRDs for which you have generated Go types and prefer compile-time type safety, static informers are often a more straightforward choice.
3. How do I effectively handle events from multiple resource types with a dynamic informer?
The recommended approach for multi-resource watching is to use a shared dynamic informer factory combined with a workqueue:

1. Initialize a single `dynamicinformer.NewFilteredDynamicSharedInformerFactory`.
2. For each `schema.GroupVersionResource` you want to watch, obtain an informer from this shared factory.
3. Register `ResourceEventHandlerFuncs` for each informer. These handlers should not contain heavy processing logic; instead, they should extract a unique key (e.g., `namespace/name`) from the `unstructured.Unstructured` object and push it onto a shared `workqueue.RateLimitingInterface`.
4. Launch one or more worker goroutines that continuously pull items from the workqueue. These workers perform the actual reconciliation logic, retrieving the latest state of the object from the informer's lister and taking appropriate actions.

This decouples event reception from processing, ensuring efficiency and responsiveness.
4. What are the key performance implications of using dynamic informers?
While dynamic informers offer flexibility, it's crucial to manage their performance:

- Cache memory usage: each informer maintains an in-memory cache of all watched objects. Watching a large number of resources (e.g., tens of thousands of Pods) can lead to significant memory consumption. Use resource filtering (label/field selectors) and namespace filtering whenever possible to reduce the cache size.
- CPU usage: processing events from multiple informers, especially with `unstructured.Unstructured` objects that require type assertions and map lookups, can be CPU-intensive if not optimized. The workqueue helps manage this by distributing load across workers.
- API server load: informers significantly reduce API server load compared to polling. However, misconfigured informers (e.g., very frequent resyncs without proper filtering) can still cause unnecessary load. Prefer the default `resyncPeriod` of 0 unless specific resync behavior is needed.
- Reconciliation loop efficiency: the most significant performance factor is the efficiency of your `reconcile` function. Keep it fast and idempotent, and avoid blocking operations. Offload heavy computation or external network calls where possible.
5. Can I use dynamic informers for CRDs that don't have generated Go types? If so, how do I access their custom fields?
Yes, this is one of the primary use cases for dynamic informers. You absolutely can use them for CRDs without generated Go types. To access their custom fields, you work with the *unstructured.Unstructured object you receive in your event handlers or retrieve from the lister. The custom fields are available within the Object field of the unstructured.Unstructured struct, which is a map[string]interface{}. You then traverse this map using string keys and perform type assertions as you go deeper into the object's structure. For example, to access spec.replicas of a custom resource, you might do obj.Object["spec"].(map[string]interface{})["replicas"].(int64). While this requires manual type assertion, it provides the flexibility to interact with any arbitrary JSON structure defined by a CRD.
You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.

