How to Build a Kubernetes Controller to Watch for CRD Changes
Kubernetes has firmly established itself as the de facto standard for container orchestration, revolutionizing how applications are deployed, managed, and scaled. At its core, Kubernetes is an API-driven system, where every operation, from scheduling a pod to configuring a network policy, is performed by interacting with its robust and extensible api. This fundamental design principle is not merely an implementation detail; it is the very foundation upon which Kubernetes' power and flexibility are built. Developers and operators leverage this consistent api interface to declaratively define the desired state of their applications, allowing Kubernetes to continuously work towards realizing and maintaining that state. The system's extensibility goes beyond its vast array of built-in resource types; it actively encourages users to define their own, custom resources, thereby extending the Kubernetes api to suit virtually any operational requirement.
However, merely defining new resource types is only one half of the equation. For these custom resources to become truly functional components of a Kubernetes cluster, they require an active agent to observe their state, interpret their meaning, and take appropriate actions. This is precisely where Kubernetes controllers come into play. A controller acts as the operational brain for specific resource types, continuously monitoring for changes, comparing the observed state to the desired state (as defined in the resource's specification), and initiating a series of operations to reconcile any discrepancies. This reconciliation loop is the heart of the "control plane" philosophy that underpins Kubernetes. Without a custom controller, a custom resource (CRD) would simply be a data schema stored within Kubernetes, devoid of any operational logic or impact on the cluster's actual state.
This comprehensive guide delves into the intricate process of building a Kubernetes controller specifically designed to watch for changes in Custom Resource Definitions (CRDs). We will explore the motivations behind creating such controllers, dissect the underlying architectural principles, and provide a detailed, step-by-step walkthrough using modern tooling. Our journey will cover everything from initial project setup and CRD definition to the implementation of the controller's core reconciliation logic, deployment strategies, and best practices. By the end of this article, you will possess a profound understanding of how to extend Kubernetes with your own custom resources and empower them with intelligent, automated operational capabilities, paving the way for advanced cluster automation and application management. We will also touch upon how robust API governance principles apply even to these internal Kubernetes APIs, ensuring consistency, security, and maintainability across your custom extensions.
Understanding Kubernetes Extensibility: The Pillars of Customization
To effectively build a Kubernetes controller, it's crucial to first grasp the fundamental mechanisms Kubernetes provides for extending its capabilities. These mechanisms allow users to tailor the platform to their specific needs, enabling the orchestration of virtually any workload or infrastructure component.
The Kubernetes API Server: The Control Plane's Central Hub
At the very core of Kubernetes lies the API Server. It is the sole component that exposes the Kubernetes api to the outside world and serves as the front-end for the cluster's control plane. All communications, whether from kubectl commands, other control plane components (like the scheduler or controller manager), or custom controllers, must go through the API Server. This central api endpoint ensures consistency, authentication, authorization, and validation for every operation within the cluster.
When you interact with Kubernetes, you're essentially making api calls to the API Server. For example, kubectl get pods translates into an api call to list pod resources. The API Server is responsible for:
- Request Routing: Directing api calls to the appropriate handlers.
- Authentication: Verifying the identity of the user or service account making the request.
- Authorization: Checking if the authenticated user has permission to perform the requested action on the specified resource.
- Admission Control: Intercepting requests before they are persisted to etcd (the cluster's persistent store). Admission controllers can validate, mutate, or reject requests based on predefined policies.
- Validation: Ensuring that resource definitions conform to their schema.
- Persistence: Storing the desired state of resources in etcd.
This architectural design makes Kubernetes inherently declarative and api-driven. Every resource, from a Pod to a Deployment, is simply an entry in etcd, accessible and modifiable through the API Server. This uniformity makes it possible to extend Kubernetes in a powerful and consistent manner. The structure of these resources, including their validation rules, is often described using schemas that can be understood as simplified forms of OpenAPI definitions, ensuring that the api consumers understand the expected format of requests and responses.
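To make this concrete, here is a minimal sketch (not part of the project built later in this guide) that performs essentially the same api call as kubectl get pods, using the official client-go library. It assumes a kubeconfig at the default ~/.kube/config location; names and paths are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client config from the local kubeconfig (assumes ~/.kube/config).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl get pods -n default`: a GET against the pods API endpoint.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}
```

Whether the request comes from kubectl, a control plane component, or code like this, the API Server applies the same authentication, authorization, admission, and validation pipeline.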
Custom Resource Definitions (CRDs): Defining New Resource Types
While Kubernetes offers a rich set of built-in resource types (Pods, Deployments, Services, etc.), real-world applications often require specialized operational logic or integrate with external systems that don't neatly fit into these existing abstractions. This is where Custom Resource Definitions (CRDs) become indispensable. A CRD allows you to define a completely new, custom resource type, making it a first-class citizen within the Kubernetes api.
When you create a CRD, you're essentially telling the Kubernetes API Server: "Hey, there's a new kind of object that I want you to recognize and store." Once registered, the API Server will automatically provide a RESTful api endpoint for your custom resource, just like it does for built-in types. For instance, if you define a CRD named Database in the mycompany.com group, you can then interact with instances of Database using kubectl: kubectl get databases.mycompany.com or kubectl create -f my-database.yaml.
Key aspects of CRDs include:
- Schema Definition: The most critical part of a CRD is its schema, defined using OpenAPI v3 schema validation. This schema dictates the structure, data types, and validation rules for the custom resource's spec (the desired state) and status (the observed state). A well-defined schema is crucial for API governance, ensuring that all instances of your custom resource adhere to expected formats and constraints, preventing malformed or invalid configurations from being applied. This robust validation layer is directly powered by the OpenAPI specification, making custom resources as reliable as native ones.
- Versioning: CRDs support multiple API versions (e.g., v1alpha1, v1beta1, v1). This allows for api evolution without breaking existing clients, a key tenet of good API governance.
- Scope: CRDs can be either Namespaced (like Pods) or Cluster scoped (like Nodes).
- Subresources: You can define /status and /scale subresources, allowing separate updates to the resource's status and enabling horizontal scaling functionality for your custom objects.
- Conversion Webhooks: For complex api version migrations, conversion webhooks can be configured to automatically translate resources between different API versions.
By extending the Kubernetes api with CRDs, you empower cluster users to manage complex application configurations or external infrastructure using the familiar kubectl command and declarative YAML manifests, integrating seamlessly with existing Kubernetes workflows.
Controllers: The Control Loop Pattern
A CRD, by itself, is merely a schema for data. It defines what a custom resource looks like but doesn't prescribe how it should behave or what actions should be taken when its state changes. This is the responsibility of a Kubernetes controller.
A controller is a continuous loop that observes the actual state of a cluster, compares it to the desired state (as defined by resource objects, including your CRDs), and then takes actions to move the actual state closer to the desired state. This fundamental pattern is known as the "control loop" or "reconciliation loop."
The core components and flow of a typical controller include:
- Informers/Watchers: Controllers don't constantly poll the API Server. Instead, they use "informers," which establish a watch on specific resource types. When a change occurs (creation, update, or deletion of a resource), the informer receives an event and adds the corresponding resource key (namespace/name) to a workqueue.
- Workqueue: This is a queue that holds keys of resources that need to be processed. It acts as a buffer, ensuring that events are processed reliably and preventing the controller from being overwhelmed by a burst of changes. Duplicate events for the same resource are often coalesced.
- Reconciler: This is the heart of the controller logic. When a key is popped from the workqueue, the reconciler is invoked. Its primary responsibilities are:
- Fetch the Desired State: Retrieve the latest version of the resource object from the API Server using the key.
- Observe the Actual State: Query the cluster or external systems to determine the current state related to the custom resource.
- Compare and Reconcile: Compare the desired state (from the custom resource's spec) with the observed actual state. If there's a discrepancy, perform the necessary actions (e.g., create/update/delete child resources, call external APIs) to bring the actual state in line with the desired state.
- Update Status: After reconciliation, update the status field of the custom resource to reflect the observed actual state and any conditions or outcomes. This provides crucial feedback to users.
- Error Handling and Requeueing: If an error occurs, the reconciler should gracefully handle it and potentially requeue the item for a retry later.
- Idempotency: All actions performed by the reconciler must be idempotent, meaning applying them multiple times has the same effect as applying them once. The reconciliation loop is inherently retry-driven, so actions must be safe to repeat.
By combining CRDs with custom controllers, you can create powerful operators that automate complex application lifecycles, manage databases, integrate with external cloud services, or enforce sophisticated policies directly within your Kubernetes cluster. This level of automation is a significant step towards robust API governance for your entire infrastructure, ensuring that custom resources are managed with the same rigor and automation as native Kubernetes objects.
Why Build a Custom Controller for CRDs? Beyond Basic Automation
Building a custom Kubernetes controller is a significant undertaking, requiring a deep understanding of Kubernetes internals and programming best practices. However, the benefits it offers in terms of automation, integration, and operational consistency often far outweigh the initial development effort. A controller for CRDs moves beyond simple task automation; it enables true self-healing, self-managing systems within your Kubernetes environment.
Automating Operational Tasks and Application Lifecycles
One of the primary motivations for building a custom controller is to automate complex operational workflows that are repetitive, error-prone, or require intricate sequencing. Imagine an application that requires not just a Deployment and a Service, but also a specific database instance, a message queue, and custom ingress rules configured in an external load balancer. A human operator would need to manually provision and link all these components.
A custom controller, however, can encapsulate this entire operational knowledge. You define a single custom resource, say MyApplication, with a spec that describes the desired state of your application ecosystem. The controller then watches for MyApplication resources and automatically provisions, configures, and links all necessary Kubernetes resources (Deployments, Services, Secrets, ConfigMaps) and potentially interacts with external api endpoints to provision external infrastructure (e.g., creating an AWS RDS instance, configuring a Google Cloud Pub/Sub topic, or registering a service with an API gateway like ApiPark). This dramatically simplifies application deployment and management for end-users, who only need to interact with a single MyApplication object. This centralized management through a custom resource is a powerful form of API governance, providing a single, consistent api endpoint for managing a complex system.
Integrating with External Systems and Cloud Providers
Kubernetes is a powerful orchestrator, but many applications rely on services outside the cluster. Databases, object storage, identity providers, and specialized SaaS solutions are common examples. A custom controller can act as the bridge between your Kubernetes cluster and these external systems.
For instance, you might define a ManagedDatabase CRD. When a ManagedDatabase object is created, your controller could:
- Call the AWS RDS api to provision a new PostgreSQL instance.
- Create Kubernetes Secrets containing the database credentials.
- Inject connection details into consuming application Pods.
- Monitor the health of the external database and update the ManagedDatabase CRD's status field accordingly.
- Handle database backups, scaling, and deletion when the CRD is updated or removed.
This integration transforms external services into Kubernetes-native resources, allowing developers to manage them using familiar kubectl commands and declarative YAML, just like any other Kubernetes object. This unification under the Kubernetes api simplifies operations and provides a consistent experience, extending the principles of API governance beyond the cluster's boundaries.
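As a rough sketch of this pattern, the fragment below shows what the core of such a reconciler might look like. The dbv1.ManagedDatabase types, the ManagedDatabaseReconciler, and the provisionDatabase helper are all hypothetical stand-ins (the helper represents a cloud SDK call and is assumed to be idempotent); the fragment assumes the standard controller-runtime and apimachinery imports used later in this article.

```go
// Sketch only: dbv1.ManagedDatabase, ManagedDatabaseReconciler, and provisionDatabase
// are hypothetical. provisionDatabase stands in for a cloud-provider SDK call (e.g., RDS)
// and is assumed to be idempotent: re-invoking it for an existing instance just returns it.
func (r *ManagedDatabaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	db := &dbv1.ManagedDatabase{}
	if err := r.Client.Get(ctx, req.NamespacedName, db); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// 1. Ensure the external database exists (external api call).
	endpoint, password, err := r.provisionDatabase(ctx, db.Spec)
	if err != nil {
		return ctrl.Result{}, err // re-queued with exponential backoff
	}

	// 2. Expose the connection details to workloads as a Kubernetes Secret.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: db.Name + "-credentials", Namespace: db.Namespace},
		StringData: map[string]string{"endpoint": endpoint, "password": password},
	}
	if err := ctrl.SetControllerReference(db, secret, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	if err := r.Client.Create(ctx, secret); err != nil && !apierrors.IsAlreadyExists(err) {
		return ctrl.Result{}, err
	}

	// 3. Report the observed state back on the custom resource.
	db.Status.Endpoint = endpoint
	db.Status.Phase = "Ready"
	if err := r.Client.Status().Update(ctx, db); err != nil {
		return ctrl.Result{}, err
	}

	// Re-check the external system periodically.
	return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
}
```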
Implementing Custom Logic and Business Rules
Beyond infrastructure provisioning, custom controllers can embed sophisticated application-specific logic or business rules directly into the Kubernetes control plane. Consider a scenario where you want to automatically provision resources based on a tenant's usage tier or enforce specific deployment patterns for sensitive workloads.
A controller can:
- Enforce Policies: Validate incoming custom resources against specific business rules (e.g., "only allow deployments of this application to specific namespaces" or "ensure all instances have specific labels"). This can be further enhanced by Admission Webhooks, which we'll discuss later. This directly relates to API governance, where policies are programmatically enforced at the api level.
- Generate Dependent Resources: Based on a high-level custom resource, generate multiple lower-level Kubernetes resources. For example, a Website CRD might generate a Deployment, Service, Ingress, and a Certificate resource.
- Orchestrate Complex Workflows: Manage multi-step processes, such as blue-green deployments, canary releases, or complex data migrations, all triggered and managed through a simple custom resource.
- Handle Complex Status Reporting: Aggregate health checks and status information from various child resources or external systems into a coherent status field on the custom resource, providing a single source of truth for the application's overall health.
By centralizing this logic within a controller, you ensure that these rules are consistently applied across all deployments, reducing human error and enforcing best practices. This systematic approach to managing custom resources contributes significantly to robust API governance within your Kubernetes ecosystem.
Ensuring Desired State Adherence and Self-Healing Capabilities
The control loop pattern inherent in controllers naturally lends itself to maintaining desired states and providing self-healing capabilities. Unlike imperative scripts that run once and then exit, a controller is always running, continuously observing the cluster.
If a human operator accidentally deletes a required child resource (e.g., a Deployment managed by your controller), the controller will detect this discrepancy during its next reconciliation cycle. It will observe that the actual state no longer matches the desired state defined in the custom resource and will automatically recreate the missing Deployment. Similarly, if an external api call fails, the controller can retry the operation, ensuring eventual consistency.
This continuous reconciliation ensures that your applications and infrastructure always conform to their defined specifications, improving system resilience and reducing the need for manual intervention. This proactive enforcement of the desired state is arguably the most powerful aspect of controllers and a cornerstone of effective API governance for dynamic, cloud-native environments.
Prerequisites and Essential Tools for Controller Development
Embarking on the journey of building a Kubernetes controller requires a specific set of tools and a foundational understanding of certain technologies. Adhering to these prerequisites will significantly streamline the development process.
Go Language: The Idiomatic Choice
While it's theoretically possible to write Kubernetes controllers in any language that can interact with the Kubernetes api (via client-go or client-libraries in other languages), Go (Golang) is overwhelmingly the most common and idiomatic choice. The vast majority of Kubernetes itself and its core components are written in Go. This offers several advantages:
- Rich Ecosystem: The client-go library (Kubernetes Go client) is the official and most comprehensive way to interact with the Kubernetes api. It provides type-safe access to all Kubernetes resources, informers, workqueues, and other necessary primitives.
- Community Support: Most controller development frameworks, examples, and best practices are in Go.
- Performance: Go's concurrency model (goroutines and channels) is well-suited for the asynchronous, event-driven nature of controllers.
- Static Linking: Go applications compile into single, statically linked binaries, making deployment simple and efficient, especially in containerized environments.
For this guide, we will assume familiarity with Go programming concepts, including structs, interfaces, goroutines, and error handling.
Kubernetes Development Environment
You'll need a working Kubernetes environment to develop and test your controller.
- kubectl: The command-line tool for interacting with Kubernetes clusters. Ensure it's installed and configured to point to your development cluster.
- Docker Desktop/Docker Engine: For building and pushing container images of your controller.
- A Kubernetes Cluster:
  - Minikube: A lightweight Kubernetes implementation that runs a single-node cluster inside a VM on your local machine. Excellent for local development and testing.
  - Kind (Kubernetes in Docker): Runs local Kubernetes clusters using Docker containers as "nodes." It's faster to start than Minikube and often preferred for CI/CD pipelines and local development.
  - Remote Cluster: If you have access to a remote development cluster (e.g., GKE, EKS, AKS, or a self-managed cluster), you can use that as well, ensuring you have appropriate kubeconfig access.
Controller Development Frameworks: Kubebuilder and Operator SDK
Building a controller from scratch using just client-go is possible but incredibly complex and time-consuming. It involves managing low-level details like cache synchronization, event handling, and workqueue management. Fortunately, powerful frameworks exist to abstract away much of this boilerplate:
- controller-runtime: This is the foundational library developed by the Kubernetes community. It provides the core building blocks for controllers, including clients, caches, informers, workqueues, and reconcilers. Many other tools build on top of controller-runtime.
- Kubebuilder: A project that uses controller-runtime and provides a set of tools and CLI commands to scaffold new controller projects, generate CRDs, and implement webhooks. It automates much of the initial setup and code generation, allowing developers to focus on the core reconciliation logic. It's highly opinionated and follows best practices.
- Operator SDK: Originally developed by Red Hat/CoreOS, Operator SDK also leverages controller-runtime and offers similar scaffolding capabilities. It has a broader focus on the "Operator Pattern" and can generate projects in Go, Ansible, or Helm. While it has some unique features, its Go-based workflow is very similar to Kubebuilder.
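To show how thin the layer is that these tools generate, here is a minimal sketch of a controller wired up directly with controller-runtime (recent versions, which pass a context to the reconcile function), watching ConfigMaps as a stand-in resource. It is illustrative only and omits the error handling and leader election you would want in practice.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func main() {
	// The Manager wires up the shared cache, client, and informers.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// A controller built directly on controller-runtime: watch ConfigMaps and run a
	// reconcile function for each change. Kubebuilder generates this wiring for you.
	err = ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
			cm := &corev1.ConfigMap{}
			if err := mgr.GetClient().Get(ctx, req.NamespacedName, cm); err != nil {
				return reconcile.Result{}, client.IgnoreNotFound(err)
			}
			log.FromContext(ctx).Info("observed ConfigMap", "name", cm.Name)
			return reconcile.Result{}, nil
		}))
	if err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```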
For this guide, we will primarily use Kubebuilder due to its streamlined Go-centric workflow and strong community adoption for pure Go controller development. Kubebuilder simplifies:
- Project Initialization: Setting up the basic directory structure, Go modules, and boilerplate code.
- CRD Generation: Automatically generating Go types for your custom resources from annotated structs.
- Controller Scaffolding: Generating the basic Reconcile function and SetupWithManager method.
- Deployment Manifests: Creating YAML files for RBAC, Service Accounts, and Deployments for your controller.
To get started, install Kubebuilder by following the official documentation, typically involving downloading the binary and placing it in your PATH.
# Example for Linux, check official docs for latest version and OS specific instructions
os=$(go env GOOS)
arch=$(go env GOARCH)
kubebuilder_version="1.0.0" # Check latest stable version
curl -sL "https://go.kubebuilder.io/dl/${kubebuilder_version}/${os}/${arch}" | tar -xz -C /tmp/
sudo mv /tmp/kubebuilder_${kubebuilder_version}_${os}_${arch} /usr/local/kubebuilder
export PATH=$PATH:/usr/local/kubebuilder/bin
Ensure go is installed (version 1.16+ is recommended for module support).
Basic Understanding of Kubernetes Concepts
Before diving into code, it's essential to have a solid grasp of core Kubernetes concepts:
- Resources and Objects: The declarative nature of Kubernetes through resource objects like Pods, Deployments, Services.
- Namespaces: Logical isolation for resources.
- Labels and Selectors: Grouping and querying resources.
- Annotations: Adding non-identifying metadata.
- RBAC (Role-Based Access Control): How permissions are managed in Kubernetes. Your controller will need appropriate RBAC roles to interact with the API Server.
- Service Accounts: Identities for processes running inside Pods. Your controller Pod will run under a Service Account.
With these prerequisites in place, you are well-equipped to begin developing your custom Kubernetes controller.
Controller Architecture Deep Dive: The Inner Workings
Before we jump into the actual coding with Kubebuilder, let's take a closer look at the foundational architecture of a Kubernetes controller. Understanding these components will provide clarity on why certain code patterns are used and how the control loop achieves its robust, self-healing capabilities. This is where the intricacies of the Kubernetes api and its clients become most apparent.
Clients: Interacting with the Kubernetes API
At the heart of any controller is its ability to interact with the Kubernetes API Server. This is achieved through client libraries. In Go, the primary library for this is client-go. However, controller-runtime provides a higher-level, more convenient client abstraction:
- client.Client: This interface, provided by controller-runtime, offers a unified way to perform CRUD (Create, Read, Update, Delete) operations on Kubernetes resources. It abstracts away the complexities of caching and API versioning, allowing you to interact with both native and custom resources using a single, consistent api. It primarily operates on the apimachinery runtime.Object interface.
- DynamicClient (from client-go): For scenarios where you don't have static Go types for resources at compile time (e.g., generic controllers that manage arbitrary CRDs), DynamicClient allows you to interact with resources using unstructured maps.
- RESTClient (from client-go): A low-level client for making raw HTTP requests to the Kubernetes API Server. It's generally not used directly in controllers unless for very specific, non-standard api interactions.
Controllers primarily use client.Client for fetching the custom resource itself and any child resources it needs to manage.
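The fragment below sketches typical client.Client calls inside a reconciler. It assumes the standard imports (appsv1, corev1, metav1, apierrors for k8s.io/apimachinery/pkg/api/errors, the controller-runtime client package, and ctrl); the "demo" label and ConfigMap name are illustrative.

```go
// Fragment: typical client.Client operations inside a reconciler. Reads (Get/List) are
// served from the manager's informer-backed cache; writes (Create/Update/Delete) go to
// the API Server.
func example(ctx context.Context, c client.Client, req ctrl.Request) error {
	// Read a single object by namespace/name.
	dep := &appsv1.Deployment{}
	if err := c.Get(ctx, req.NamespacedName, dep); err != nil {
		return client.IgnoreNotFound(err)
	}

	// List objects, filtered by namespace and labels ("app: demo" is illustrative).
	var pods corev1.PodList
	if err := c.List(ctx, &pods, client.InNamespace(req.Namespace), client.MatchingLabels{"app": "demo"}); err != nil {
		return err
	}

	// Create works on the same typed objects; IsAlreadyExists keeps the call idempotent.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config", Namespace: req.Namespace},
		Data:       map[string]string{"podCount": fmt.Sprintf("%d", len(pods.Items))},
	}
	if err := c.Create(ctx, cm); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
```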
Informers: The Eyes and Ears of the Controller
Controllers need to know when a resource changes. Constantly polling the API Server for updates is inefficient and can overload the server. This is where Informers (specifically SharedInformers in client-go) come in.
An informer is a mechanism that efficiently watches for changes to a particular resource type in the Kubernetes API Server. It does this by:
- Listing: Performing an initial list operation to get all existing resources of that type.
- Watching: Establishing a long-lived, streaming HTTP watch connection to the API Server to receive incremental events (Add, Update, Delete) for that resource type.
- Caching: Maintaining an in-memory cache of the resources it's watching. This cache allows controllers to read resource data without constantly hitting the API Server, significantly reducing API load and improving performance. Read operations performed through client.Client are typically served from this cache.
controller-runtime simplifies informer management, automatically setting them up for the resources your controller needs to watch. When an event is received, the informer doesn't immediately trigger the reconciliation; instead, it adds the resource's key to a workqueue.
Workqueues: Decoupling Events from Processing
The Workqueue (specifically workqueue.RateLimitingInterface from client-go/util/workqueue) is a crucial component that decouples the event handling (from informers) from the actual processing logic (in the reconciler). Its primary roles are:
- Buffering Events: It stores keys (typically namespace/name strings) of resources that need to be processed or re-processed.
- Debouncing/Coalescing: If multiple events for the same resource occur in quick succession, the workqueue can often coalesce them, ensuring the resource is only processed once for that batch of changes.
- Rate Limiting: It can prevent a controller from getting overwhelmed by a flood of events or from rapidly retrying failed operations. If a reconciliation fails, the item can be re-added to the queue with a delay (exponential backoff is common).
- Guaranteed Delivery: Items are not removed from the queue until successfully processed, or after a certain number of retries.
When an informer detects a change, it adds the resource's key to the workqueue. The controller's workers then continuously pull items from this queue for processing.
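The sketch below uses the classic (untyped) workqueue API from client-go directly, purely to illustrate the add/get/retry/forget lifecycle that controller-runtime manages for you; the key value is illustrative and the demo shuts itself down after draining the queue.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// A rate-limited workqueue like the one controller-runtime manages internally.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	// An informer event handler would add resource keys ("namespace/name") here.
	queue.Add("default/my-app")
	queue.Add("default/my-app") // duplicate keys are coalesced while queued
	queue.ShutDown()            // let the demo worker drain the queue and then exit

	// A worker loop pops keys and hands them to the reconciler.
	for {
		item, shutdown := queue.Get()
		if shutdown {
			return
		}
		key := item.(string)
		if err := reconcileKey(key); err != nil {
			queue.AddRateLimited(key) // retry later with exponential backoff
		} else {
			queue.Forget(key) // clear retry history on success
		}
		queue.Done(item)
	}
}

func reconcileKey(key string) error {
	fmt.Println("reconciling", key)
	return nil
}
```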
The Reconciler: The Brain of the Controller
The Reconciler is the core logic unit of your controller. It implements the Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) method from the controller-runtime reconcile.Reconciler interface. This method is invoked for each item popped from the workqueue.
Inside the Reconcile method, the controller performs its primary duties:
- Fetch the Custom Resource (Desired State): The first step is typically to fetch the custom resource instance (e.g., your MyApplication CRD) that triggered the reconciliation using the client.Client. If the resource no longer exists (e.g., it was deleted), the reconciler needs to handle cleanup.
- Observe the Actual State: This involves querying other Kubernetes resources (Pods, Deployments, Services) or external systems to determine the current state related to the custom resource.
- Compare and Act: This is the heart of the control loop. The reconciler compares the desired state (from the custom resource's spec) with the observed actual state.
  - Creation: If the actual state doesn't exist but the desired state does, the controller creates necessary child resources or calls external APIs.
  - Update: If discrepancies are found (e.g., a field in the spec changed, or a child resource is misconfigured), the controller updates the actual state to match the desired state.
  - Deletion: If the custom resource itself is marked for deletion, the controller performs necessary cleanup (e.g., deleting child resources, de-provisioning external services). This often involves Finalizers, which we'll discuss later.
- Update Status: Crucially, after performing its actions, the reconciler updates the status field of the custom resource. This provides real-time feedback to users about the controller's progress, any errors, or the observed state of the managed resources. This is vital for transparent API governance and user observability.
- Return reconcile.Result:
  - reconcile.Result{}, nil: Indicates successful reconciliation, no further re-queue.
  - reconcile.Result{Requeue: true}, nil: Re-queue the item immediately, often used when an action was taken and further reconciliation steps might be needed, or to ensure eventual consistency.
  - reconcile.Result{RequeueAfter: duration}, nil: Re-queue after a specific duration, useful for operations that need periodic checks.
  - reconcile.Result{}, err: Reconciliation failed due to an error. The item will be re-queued with exponential backoff.
The reconciler's logic must be idempotent. Since reconciliation can be triggered multiple times for the same resource (due to retries, concurrent updates, or informer events), applying the logic repeatedly should always yield the same correct state without adverse side effects.
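One common way to keep the create/update path idempotent is controller-runtime's controllerutil.CreateOrUpdate helper, sketched below. The Deployment fields, image tag, and labels are illustrative, and the fragment assumes the standard appsv1/corev1/metav1, runtime, client, and controllerutil imports.

```go
// Sketch: controllerutil.CreateOrUpdate keeps the create/update path idempotent.
// The mutate closure re-applies the desired fields on every reconcile; the helper
// then issues a Create, an Update, or nothing, depending on what it finds.
func ensureDeployment(ctx context.Context, c client.Client, owner metav1.Object, scheme *runtime.Scheme) error {
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: owner.GetName(), Namespace: owner.GetNamespace()},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, c, dep, func() error {
		replicas := int32(2)
		labels := map[string]string{"app": owner.GetName()}
		dep.Spec.Replicas = &replicas
		dep.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
		dep.Spec.Template.ObjectMeta.Labels = labels
		dep.Spec.Template.Spec.Containers = []corev1.Container{{
			Name:  "app",
			Image: "nginx:1.21.6",
		}}
		// Owning the Deployment enables garbage collection and owner-based requeues.
		return controllerutil.SetControllerReference(owner, dep, scheme)
	})
	return err
}
```

Because the same desired fields are re-applied on every invocation, running this logic once or fifty times converges on the same cluster state.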
Manager: Orchestrating Controllers and Webhooks
The Manager (from controller-runtime) is the central orchestrator that sets up and runs all your controllers and webhooks within a single process. It handles common concerns like:
- Shared Client: Provides a single, shared client.Client instance for all controllers to use, configured with caches.
- Shared Informers: Manages a single set of shared informers for all controllers, reducing API Server load.
- Leader Election: Ensures that in a multi-replica deployment of your controller, only one instance is actively reconciling at any given time, preventing race conditions.
- Health Checks and Metrics: Provides endpoints for Prometheus metrics and liveness/readiness probes.
- Graceful Shutdown: Handles context cancellation and ensures that all goroutines started by the manager are shut down cleanly.
When you initialize a Kubebuilder project, the main.go file sets up and starts the Manager, which then runs all the controllers you've defined. This centralized management simplifies the deployment and operation of multiple controllers within a single binary.
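For reference, a trimmed sketch of what such a main.go roughly looks like is shown below; exact ctrl.Options fields vary between controller-runtime versions, and the lock name and probe address are illustrative.

```go
package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	// Register the API types the manager's client and cache should know about.
	// A real project also registers its custom resource types here (AddToScheme).
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		HealthProbeBindAddress: ":8081",              // liveness/readiness endpoints
		LeaderElection:         true,                 // only one replica reconciles at a time
		LeaderElectionID:       "my-controller-lock", // illustrative lock name
	})
	if err != nil {
		os.Exit(1)
	}

	// Each controller is registered here, e.g.:
	//   (&MyReconciler{Client: mgr.GetClient(), Scheme: mgr.GetScheme()}).SetupWithManager(mgr)

	_ = mgr.AddHealthzCheck("healthz", healthz.Ping)
	_ = mgr.AddReadyzCheck("readyz", healthz.Ping)

	// Start blocks until the signal-handler context is cancelled, then shuts down cleanly.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```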
This detailed understanding of controller architecture forms the bedrock for writing robust and efficient Kubernetes controllers. It highlights how the Kubernetes api, informers, workqueues, and reconcilers collaboratively implement the powerful control loop pattern, enabling declarative automation and extending the platform's capabilities with consistent API governance.
Step-by-Step Guide with Kubebuilder: Building Our CRD Controller
Now, let's put theory into practice. We'll use Kubebuilder to scaffold a project, define a custom resource, implement its controller, and deploy it to a Kubernetes cluster. Our example will be a simple AppDeployment controller that watches for AppDeployment CRDs and creates a corresponding Kubernetes Deployment and Service.
Step 0: Initial Setup and Kubebuilder Installation
Before starting, ensure Go (1.16+), Docker, kubectl, and Kubebuilder are installed and configured correctly.
# Check Go version
go version
# Check Kubebuilder version
kubebuilder version
# Ensure your kubectl context is set to a development cluster (e.g., Minikube, Kind)
kubectl config current-context
Step 1: Project Initialization
First, create a new directory for your controller project and initialize it with Kubebuilder. This command sets up the basic Go module structure, adds boilerplate files, and configures controller-runtime.
mkdir app-deployment-controller
cd app-deployment-controller
# Initialize the project
# --domain: The domain for your CRD group (e.g., yourcompany.com)
# --repo: The Go module path for your project
kubebuilder init --domain example.com --repo github.com/yourusername/app-deployment-controller
This command will generate a standard project structure:
.
βββ Dockerfile
βββ PROJECT
βββ Makefile
βββ README.md
βββ api
β βββ v1
β βββ appdeployment_types.go # Placeholder for CRD types
β βββ groupversion_info.go
βββ config
β βββ crd
β βββ default
β βββ manager
β βββ rbac
β βββ samples
βββ controllers
β βββ appdeployment_controller.go # Placeholder for controller logic
βββ go.mod
βββ go.sum
βββ main.go
Key files:
- main.go: The entry point, sets up the manager and starts controllers.
- api/v1/: Contains the Go types for your CRDs.
- controllers/: Contains your controller logic.
- config/: Holds YAML manifests for CRDs, RBAC, deployment, and samples.
- Makefile: Provides convenient commands for building, deploying, and testing.
Step 2: Defining the Custom Resource (CRD)
Now, let's define our AppDeployment CRD. This command generates the Go types in api/v1/appdeployment_types.go and scaffolds the controller in controllers/appdeployment_controller.go.
kubebuilder create api --group apps --version v1 --kind AppDeployment
Kubebuilder will prompt you: "Create resource [y/n]?" and "Create controller [y/n]?". Answer y for both.
This command performs several actions:
1. Creates api/v1/appdeployment_types.go with AppDeploymentSpec and AppDeploymentStatus structs.
2. Creates controllers/appdeployment_controller.go with a basic Reconcile method and SetupWithManager method.
3. Updates main.go to register the new AppDeployment type.
4. Generates initial CRD YAML in config/crd/bases/apps.example.com_appdeployments.yaml.
Editing api/v1/appdeployment_types.go
Open api/v1/appdeployment_types.go. This file defines the Go structs that represent your custom resource. You'll primarily work with AppDeploymentSpec and AppDeploymentStatus.
AppDeploymentSpec: This struct defines the desired state of your application. Let's add fields for the image, replicas, and port.
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags the same format as the
// ones already present.
// AppDeploymentSpec defines the desired state of AppDeployment
type AppDeploymentSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make generate" to regenerate code after modifying this file
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=10
// Replicas is the number of desired pods.
// This is a required field.
Replicas int32 `json:"replicas"`
// Image is the container image to deploy.
// This is a required field.
Image string `json:"image"`
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=65535
// Port is the container port to expose.
// This is a required field.
Port int32 `json:"port"`
// +optional
// Labels for the Deployment and Service.
Labels map[string]string `json:"labels,omitempty"`
}
// AppDeploymentStatus defines the observed state of AppDeployment
type AppDeploymentStatus struct {
// INSERT ADDITIONAL STATUS FIELDS - observed state of cluster
// Important: Run "make generate" to regenerate code after modifying this file
// +optional
// Replicas is the actual number of pods running.
Replicas int32 `json:"replicas,omitempty"`
// +optional
// AvailableReplicas is the number of available pods.
AvailableReplicas int32 `json:"availableReplicas,omitempty"`
// +optional
// Conditions for the AppDeployment.
Conditions []metav1.Condition `json:"conditions,omitempty"`
}
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:shortName=appdep
// +kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas",description="Desired number of replicas"
// +kubebuilder:printcolumn:name="Image",type="string",JSONPath=".spec.image",description="Container image"
// +kubebuilder:printcolumn:name="Status.Replicas",type="integer",JSONPath=".status.replicas",description="Current number of replicas"
// +kubebuilder:printcolumn:name="Status.Available",type="integer",JSONPath=".status.availableReplicas",description="Current available replicas"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
// AppDeployment is the Schema for the appdeployments API
type AppDeployment struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec AppDeploymentSpec `json:"spec,omitempty"`
Status AppDeploymentStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
// AppDeploymentList contains a list of AppDeployment
type AppDeploymentList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []AppDeployment `json:"items"`
}
func init() {
SchemeBuilder.Register(&AppDeployment{}, &AppDeploymentList{})
}
Kubebuilder Markers: Notice the +kubebuilder: comments. These are "markers" that Kubebuilder uses to generate code, CRD YAML, and OpenAPI schema validation rules.
- +kubebuilder:validation:Minimum/Maximum: Adds OpenAPI schema validation for numeric fields. This ensures api requests conform to specified value ranges, a crucial aspect of API governance.
- +kubebuilder:subresource:status: Enables the /status subresource, allowing separate updates to the status field.
- +kubebuilder:printcolumn: Defines custom columns for kubectl get output, enhancing user experience and observability.
- json:"...": Standard Go JSON tags for marshaling/unmarshaling to/from JSON (which is what the Kubernetes api uses).
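As a further illustration of how markers drive the generated schema, the hypothetical field below (not part of our AppDeploymentSpec) shows two more standard Kubebuilder validation markers: an enum constraint and a server-side default, both of which end up in the CRD's OpenAPI validation.

```go
// Hypothetical field (not part of AppDeploymentSpec above), showing an enum constraint
// and a default value, both emitted into the generated CRD's OpenAPI schema.
// +kubebuilder:validation:Enum=ClusterIP;NodePort;LoadBalancer
// +kubebuilder:default=ClusterIP
// +optional
ServiceType string `json:"serviceType,omitempty"`
```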
After modifying appdeployment_types.go, you must run make generate and make manifests.
- make generate: Updates zz_generated.deepcopy.go for your custom types.
- make manifests: Regenerates the CRD YAML files in config/crd/bases/ based on your updated Go types and markers. This is where the OpenAPI schema validation for your CRD is actually generated and applied.
make generate
make manifests
Inspect config/crd/bases/apps.example.com_appdeployments.yaml to see how your OpenAPI schema validation and other definitions are translated into the CRD YAML.
Step 3: Implementing the Controller Logic
Now, let's open controllers/appdeployment_controller.go and implement the Reconcile method. This is where the core logic of our controller resides. Our controller will:
- Fetch the AppDeployment custom resource.
- Create or update a Kubernetes Deployment based on the AppDeployment's spec.
- Create or update a Kubernetes Service for the Deployment.
- Update the AppDeployment's status field with the actual replica counts.
- Handle deletion cleanup using a finalizer.
We'll need to import appsv1 for Deployments, corev1 for Services, and metav1 for common Kubernetes types.
package controllers
import (
	"context"
	"reflect"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/intstr"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/log"

	appsv1alpha1 "github.com/yourusername/app-deployment-controller/api/v1" // Make sure this import path matches your project
)
// AppDeploymentReconciler reconciles a AppDeployment object
type AppDeploymentReconciler struct {
	Client client.Client
	Scheme *runtime.Scheme
}
const appDeploymentFinalizer = "apps.example.com/finalizer"
// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch // To check pod status if needed, though Deployment controller handles most of this
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// It compares the state specified by the AppDeployment object against the actual
// cluster state, ensures the owned Deployment and Service match the spec, and
// updates the AppDeployment status with the observed replica counts.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.16.3/pkg/reconcile
func (r *AppDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
logger.Info("Reconciling AppDeployment", "AppDeployment", req.NamespacedName)
// 1. Fetch the AppDeployment instance
appDeployment := &appsv1alpha1.AppDeployment{}
err := r.Client.Get(ctx, req.NamespacedName, appDeployment)
if err != nil {
if errors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
// Owned objects are automatically garbage collected. For additional cleanup,
// refer to the cleanup section with finalizers.
logger.Info("AppDeployment resource not found. Ignoring since object must be deleted.")
return ctrl.Result{}, nil
}
// Error reading the object - requeue the request.
logger.Error(err, "Failed to get AppDeployment")
return ctrl.Result{}, err
}
// 2. Handle finalization logic
isAppDeploymentMarkedForDeletion := appDeployment.GetDeletionTimestamp() != nil
if isAppDeploymentMarkedForDeletion {
if controllerutil.ContainsFinalizer(appDeployment, appDeploymentFinalizer) {
logger.Info("Performing finalizer cleanup for AppDeployment", "AppDeployment", appDeployment.Name)
// Our cleanup logic involves ensuring any external resources provisioned by the controller
// are de-provisioned. For this simple example, we are implicitly relying on K8s GC
// for Deployment/Service, but if we had external resources, this is where we'd clean them up.
// This is a placeholder for external API calls for cleanup.
// Remove our finalizer from the list and update it.
controllerutil.RemoveFinalizer(appDeployment, appDeploymentFinalizer)
err := r.Client.Update(ctx, appDeployment)
if err != nil {
logger.Error(err, "Failed to remove AppDeployment finalizer")
return ctrl.Result{}, err
}
logger.Info("Successfully removed AppDeployment finalizer", "AppDeployment", appDeployment.Name)
}
return ctrl.Result{}, nil // Stop reconciliation as object is deleted and finalizer handled
}
// 3. Add finalizer for this CR if it doesn't exist
if !controllerutil.ContainsFinalizer(appDeployment, appDeploymentFinalizer) {
controllerutil.AddFinalizer(appDeployment, appDeploymentFinalizer)
err = r.Client.Update(ctx, appDeployment)
if err != nil {
logger.Error(err, "Failed to add AppDeployment finalizer")
return ctrl.Result{}, err
}
logger.Info("Successfully added AppDeployment finalizer", "AppDeployment", appDeployment.Name)
}
// Define Labels for owned resources
labels := map[string]string{
"app": appDeployment.Name,
"controller": "appdeployment-controller",
}
for k, v := range appDeployment.Spec.Labels {
labels[k] = v // Merge user-defined labels
}
// 4. Reconcile Deployment
desiredDeployment := r.desiredDeployment(appDeployment, labels)
foundDeployment := &appsv1.Deployment{}
err = r.Client.Get(ctx, types.NamespacedName{Name: desiredDeployment.Name, Namespace: desiredDeployment.Namespace}, foundDeployment)
if err != nil && errors.IsNotFound(err) {
logger.Info("Creating a new Deployment", "Deployment.Namespace", desiredDeployment.Namespace, "Deployment.Name", desiredDeployment.Name)
err = r.Client.Create(ctx, desiredDeployment)
if err != nil {
logger.Error(err, "Failed to create new Deployment", "Deployment.Namespace", desiredDeployment.Namespace, "Deployment.Name", desiredDeployment.Name)
return ctrl.Result{}, err
}
// Deployment created successfully - return and requeue
return ctrl.Result{Requeue: true}, nil
} else if err != nil {
logger.Error(err, "Failed to get Deployment")
return ctrl.Result{}, err
}
// Check if the deployment spec is outdated and update if necessary
if !reflect.DeepEqual(desiredDeployment.Spec.Replicas, foundDeployment.Spec.Replicas) ||
!reflect.DeepEqual(desiredDeployment.Spec.Template.Spec.Containers[0].Image, foundDeployment.Spec.Template.Spec.Containers[0].Image) ||
!reflect.DeepEqual(desiredDeployment.Spec.Template.Spec.Containers[0].Ports[0].ContainerPort, foundDeployment.Spec.Template.Spec.Containers[0].Ports[0].ContainerPort) ||
!reflect.DeepEqual(desiredDeployment.Spec.Template.ObjectMeta.Labels, foundDeployment.Spec.Template.ObjectMeta.Labels) {
logger.Info("Updating Deployment", "Deployment.Namespace", foundDeployment.Namespace, "Deployment.Name", foundDeployment.Name)
// Update the foundDeployment's spec to match the desired spec
foundDeployment.Spec = desiredDeployment.Spec
err = r.Client.Update(ctx, foundDeployment)
if err != nil {
logger.Error(err, "Failed to update Deployment", "Deployment.Namespace", foundDeployment.Namespace, "Deployment.Name", foundDeployment.Name)
return ctrl.Result{}, err
}
// Deployment updated - return and requeue
return ctrl.Result{Requeue: true}, nil
}
// 5. Reconcile Service
desiredService := r.desiredService(appDeployment, labels)
foundService := &corev1.Service{}
err = r.Client.Get(ctx, types.NamespacedName{Name: desiredService.Name, Namespace: desiredService.Namespace}, foundService)
if err != nil && errors.IsNotFound(err) {
logger.Info("Creating a new Service", "Service.Namespace", desiredService.Namespace, "Service.Name", desiredService.Name)
err = r.Client.Create(ctx, desiredService)
if err != nil {
logger.Error(err, "Failed to create new Service", "Service.Namespace", desiredService.Namespace, "Service.Name", desiredService.Name)
return ctrl.Result{}, err
}
// Service created successfully - return and requeue
return ctrl.Result{Requeue: true}, nil
} else if err != nil {
logger.Error(err, "Failed to get Service")
return ctrl.Result{}, err
}
// Check if the service spec is outdated and update if necessary
if !reflect.DeepEqual(desiredService.Spec.Ports, foundService.Spec.Ports) ||
!reflect.DeepEqual(desiredService.Spec.Selector, foundService.Spec.Selector) {
logger.Info("Updating Service", "Service.Namespace", foundService.Namespace, "Service.Name", foundService.Name)
// Update the foundService's spec to match the desired spec
foundService.Spec.Ports = desiredService.Spec.Ports
foundService.Spec.Selector = desiredService.Spec.Selector
// Preserve cluster IP and node ports if they are dynamically assigned
foundService.Spec.ClusterIP = desiredService.Spec.ClusterIP
foundService.Spec.Type = desiredService.Spec.Type
err = r.Client.Update(ctx, foundService)
if err != nil {
logger.Error(err, "Failed to update Service", "Service.Namespace", foundService.Namespace, "Service.Name", foundService.Name)
return ctrl.Result{}, err
}
// Service updated - return and requeue
return ctrl.Result{Requeue: true}, nil
}
// 6. Update AppDeployment Status
if foundDeployment.Status.Replicas != appDeployment.Status.Replicas ||
foundDeployment.Status.AvailableReplicas != appDeployment.Status.AvailableReplicas {
logger.Info("Updating AppDeployment status", "currentReplicas", foundDeployment.Status.Replicas, "availableReplicas", foundDeployment.Status.AvailableReplicas)
appDeployment.Status.Replicas = foundDeployment.Status.Replicas
appDeployment.Status.AvailableReplicas = foundDeployment.Status.AvailableReplicas
// Add conditions based on deployment status if needed
// For example, if deployment is fully available
if foundDeployment.Status.Replicas == foundDeployment.Status.AvailableReplicas && foundDeployment.Status.UpdatedReplicas == foundDeployment.Status.Replicas {
appDeployment.Status.Conditions = []metav1.Condition{
{
Type: "Available",
Status: metav1.ConditionTrue,
LastTransitionTime: metav1.Now(),
Reason: "DeploymentAvailable",
Message: "AppDeployment is fully available",
},
}
} else {
appDeployment.Status.Conditions = []metav1.Condition{
{
Type: "Available",
Status: metav1.ConditionFalse,
LastTransitionTime: metav1.Now(),
Reason: "DeploymentProgressing",
Message: "AppDeployment is still progressing",
},
}
}
err := r.Client.Status().Update(ctx, appDeployment)
if err != nil {
logger.Error(err, "Failed to update AppDeployment status")
return ctrl.Result{}, err
}
logger.Info("AppDeployment status updated", "AppDeployment", appDeployment.Name)
}
logger.Info("Finished reconciling AppDeployment", "AppDeployment", appDeployment.Name)
return ctrl.Result{}, nil
}
// desiredDeployment creates a Deployment object based on the AppDeployment spec.
func (r *AppDeploymentReconciler) desiredDeployment(appDeployment *appsv1alpha1.AppDeployment, labels map[string]string) *appsv1.Deployment {
dep := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: appDeployment.Name,
Namespace: appDeployment.Namespace,
Labels: labels,
},
Spec: appsv1.DeploymentSpec{
Replicas: &appDeployment.Spec.Replicas,
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Name: appDeployment.Name,
Image: appDeployment.Spec.Image,
Ports: []corev1.ContainerPort{{
ContainerPort: appDeployment.Spec.Port,
Name: "http",
}},
}},
},
},
},
}
// Set AppDeployment instance as the owner and controller
// This ensures that the Deployment is garbage collected when the AppDeployment is deleted
ctrl.SetControllerReference(appDeployment, dep, r.Scheme)
return dep
}
// desiredService creates a Service object based on the AppDeployment spec.
func (r *AppDeploymentReconciler) desiredService(appDeployment *appsv1alpha1.AppDeployment, labels map[string]string) *corev1.Service {
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: appDeployment.Name,
Namespace: appDeployment.Namespace,
Labels: labels,
},
Spec: corev1.ServiceSpec{
Selector: labels,
Ports: []corev1.ServicePort{{
Protocol: corev1.ProtocolTCP,
Port: 80, // Default service port
TargetPort: intstr.FromInt(int(appDeployment.Spec.Port)),
Name: "http",
}},
Type: corev1.ServiceTypeClusterIP, // Can be changed to NodePort or LoadBalancer if needed
},
}
// Set AppDeployment instance as the owner and controller
ctrl.SetControllerReference(appDeployment, svc, r.Scheme)
return svc
}
// SetupWithManager sets up the controller with the Manager.
func (r *AppDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&appsv1alpha1.AppDeployment{}). // Watch for AppDeployment resources
Owns(&appsv1.Deployment{}). // Watch for Deployments owned by AppDeployment
Owns(&corev1.Service{}). // Watch for Services owned by AppDeployment
Complete(r)
}
Key Points in the Reconcile function:
- log.FromContext(ctx): Kubebuilder integrates structured logging with zap.
- r.Client.Get(...): Fetches the AppDeployment resource that triggered this reconciliation. Handles IsNotFound errors gracefully.
- Finalizers: The appDeploymentFinalizer is added to the AppDeployment object upon creation. When an AppDeployment is deleted, Kubernetes marks it for deletion but doesn't immediately remove it if finalizers are present. The controller detects GetDeletionTimestamp() != nil and performs cleanup before removing its finalizer. Only after all finalizers are removed will Kubernetes finally delete the object. This is critical for orchestrating cleanup of external resources or child resources that aren't garbage collected by Kubernetes (e.g., if you were to delete an external database).
- desiredDeployment and desiredService helper functions: These functions create the desired Deployment and Service objects based on the AppDeployment's spec.
- ctrl.SetControllerReference(owner, owned, scheme): This is crucial! It sets the AppDeployment as the "owner" of the Deployment and Service. This enables Kubernetes' garbage collection: when the AppDeployment is deleted, Kubernetes will automatically delete the owned Deployment and Service. This also allows the controller to reconcile events for its owned resources.
- r.Client.Create(...): Creates a new Deployment or Service if they don't exist. We return Requeue: true after creation to ensure the controller re-evaluates the state (e.g., to fetch the newly created resource and update status).
- r.Client.Update(...): Updates the existing Deployment or Service if their spec is out of date. We use reflect.DeepEqual for simplicity in this example to compare spec fields. A more robust approach might involve a patch (see the sketch below).
- r.Client.Status().Update(...): Updates the status field of the AppDeployment resource. Note that Status().Update is used specifically for status updates, which typically don't trigger new reconciliation cycles by default (unless specified in SetupWithManager).
- SetupWithManager: This method configures the controller with the Kubebuilder Manager.
  - For(&appsv1alpha1.AppDeployment{}): Tells the controller to watch for changes to AppDeployment resources.
  - Owns(&appsv1.Deployment{}) and Owns(&corev1.Service{}): Tells the controller that it "owns" Deployment and Service resources. If a Deployment or Service owned by an AppDeployment changes, an event for the owner (AppDeployment) will be added to the workqueue, triggering reconciliation. This is how the controller reacts to changes in its child resources.
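As a hedged sketch of that patch-based alternative, the fragment below could replace the DeepEqual-plus-Update block in the Reconcile method above; it reuses the foundDeployment, desiredDeployment, logger, and ctx variables from that method.

```go
// Sketch: a merge-patch variant of the DeepEqual-plus-Update flow above. The patch is
// computed against a deep copy taken before mutation, so only the changed fields are
// sent, which plays nicer with concurrent writers than overwriting the whole Spec.
base := foundDeployment.DeepCopy()
foundDeployment.Spec.Replicas = desiredDeployment.Spec.Replicas
foundDeployment.Spec.Template.Spec.Containers[0].Image = desiredDeployment.Spec.Template.Spec.Containers[0].Image
if err := r.Client.Patch(ctx, foundDeployment, client.MergeFrom(base)); err != nil {
	logger.Error(err, "Failed to patch Deployment")
	return ctrl.Result{}, err
}
```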
Step 4: Running and Testing Locally
Before deploying, let's test it locally.
1. Install CRDs: Apply the CRD definition to your cluster.

```bash
make install
```

This command applies the CRD definition generated in config/crd/bases/apps.example.com_appdeployments.yaml to your cluster. You should now be able to run kubectl get crd and see appdeployments.apps.example.com.

2. Run the Controller Locally:

```bash
make run
```

This will compile and run your controller directly on your machine, connecting to the Kubernetes cluster configured by your kubeconfig. You'll see logs from your controller.

3. Create a Sample Custom Resource: Kubebuilder generates a sample in config/samples/apps_v1_appdeployment.yaml. Edit it to something meaningful:

```yaml
# config/samples/apps_v1_appdeployment.yaml
apiVersion: apps.example.com/v1
kind: AppDeployment
metadata:
  name: my-nginx-app
  namespace: default # Ensure this namespace exists or create it
spec:
  replicas: 2
  image: nginx:1.21.6 # Use a specific, stable image tag
  port: 80
  labels:
    environment: development
    project: my-app
```

Apply this sample to your cluster:

```bash
kubectl apply -f config/samples/apps_v1_appdeployment.yaml
```

4. Observe:
- Check your controller's logs in the terminal where make run is executing. You should see Reconciling AppDeployment and logs about creating the Deployment and Service.
- Verify resource creation in your cluster:

```bash
kubectl get appdeployments
kubectl get deployments
kubectl get services
```

- Check the status of your custom resource:

```bash
kubectl get appdeployments my-nginx-app -o yaml
```

You should see the status field populated with replicas and availableReplicas as the Deployment stabilizes.

5. Test Updates: Edit config/samples/apps_v1_appdeployment.yaml again, for example, change replicas: 2 to replicas: 3, or change image to nginx:latest.

```bash
kubectl apply -f config/samples/apps_v1_appdeployment.yaml
```

Observe your controller logs and kubectl get deployments to see the Deployment rolling out.

6. Test Deletion:

```bash
kubectl delete -f config/samples/apps_v1_appdeployment.yaml
```

Observe controller logs for finalizer cleanup. Verify that the Deployment and Service are also deleted:

```bash
kubectl get deployments
kubectl get services
```

You should see them disappear.
Step 5: Deployment to Kubernetes
To deploy your controller into a production-like environment, you'll containerize it and deploy it as a Kubernetes Deployment.
1. Build and Push Docker Image: First, ensure you're logged into a Docker registry (e.g., Docker Hub, GCR, ECR) where you can push images. Replace yourusername with your Docker Hub username or registry path.

```bash
# Set the image name, adjust if needed
export IMG="yourusername/app-deployment-controller:v0.0.1"
make docker-build
make docker-push
```

2. Deploy to Kubernetes: The make deploy command will use the YAML manifests in config/ to deploy your controller.

```bash
make deploy
```

This includes:
- Namespace: app-deployment-controller-system (default, can be configured).
- RBAC: Role, RoleBinding, ServiceAccount to grant permissions to your controller. This is crucial for API governance within the cluster, ensuring the controller only has the minimal permissions it needs.
- Deployment: The actual Deployment object for your controller, running your container image.

3. Verify Deployment:

```bash
kubectl -n app-deployment-controller-system get pods
kubectl -n app-deployment-controller-system logs -f <controller-pod-name>
```

You should see your controller pod running and its logs.

4. Test Deployed Controller: Now, apply your sample AppDeployment CRD again.

```bash
kubectl apply -f config/samples/apps_v1_appdeployment.yaml
```

Observe the controller logs in the deployed pod and verify that the Deployment and Service are created in the default namespace (or whichever namespace you specified in the sample YAML).

```bash
kubectl get appdeployments my-nginx-app
kubectl get deployments my-nginx-app
kubectl get services my-nginx-app
```
Congratulations! You have successfully built, tested, and deployed a Kubernetes controller that watches for changes to your custom AppDeployment resources. This robust framework allows you to extend Kubernetes' capabilities with sophisticated automation and custom operational logic.
Advanced Controller Concepts: Enhancing Robustness and Capabilities
Building a basic controller is a great start, but real-world scenarios often demand more sophisticated features to handle edge cases, ensure resource integrity, and provide a seamless user experience. Let's explore some advanced concepts that can significantly enhance your controller's robustness and capabilities.
Owner References and Garbage Collection
We briefly touched upon ctrl.SetControllerReference in our example. This mechanism is fundamental to Kubernetes' resource management. When you set an OwnerReference on a child resource (like our Deployment and Service) pointing to a parent resource (our AppDeployment), you establish a clear hierarchy.
Benefits:
- Automatic Garbage Collection: When the owner resource is deleted, Kubernetes' garbage collector automatically deletes all owned resources. This simplifies cleanup and prevents resource leaks.
- Controller Reactivity: As seen in `SetupWithManager`'s `Owns` method, changes to owned resources automatically trigger reconciliation of their owner, allowing the controller to react to external modifications or failures of its managed components.
It's crucial to correctly establish owner references for all resources created or managed by your controller. This enforces a clear API governance structure for your custom resource ecosystem.
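To make the wiring concrete, here is a minimal sketch of setting an owner reference before creating a child object. It assumes the `AppDeploymentReconciler` and `AppDeployment` types from this guide; the API import path is a placeholder you would adjust to your module.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	appsv1alpha1 "example.com/app-deployment-controller/api/v1" // placeholder: use your module's API path
)

// createOwnedDeployment marks the AppDeployment as the controller-owner of the
// Deployment before creating it. The owner reference enables garbage collection
// of the Deployment when the AppDeployment is deleted, and lets Owns() route
// Deployment events back to the owning AppDeployment.
func (r *AppDeploymentReconciler) createOwnedDeployment(ctx context.Context, app *appsv1alpha1.AppDeployment, dep *appsv1.Deployment) error {
	if err := ctrl.SetControllerReference(app, dep, r.Scheme); err != nil {
		return err
	}
	return r.Client.Create(ctx, dep)
}
```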
Finalizers: Graceful Resource Cleanup
Finalizers are special strings added to the metadata.finalizers array of a Kubernetes object. When an object with finalizers is deleted, Kubernetes doesn't immediately remove it from etcd. Instead, it marks the object for deletion (sets metadata.deletionTimestamp) and then waits for all listed finalizers to be removed.
How they work in controllers:
1. Add Finalizer: When your controller first processes a custom resource, it adds its unique finalizer string (e.g., `apps.example.com/finalizer`) to the resource's `metadata.finalizers` array.
2. Deletion Request: When a user deletes the custom resource, the API Server sets `metadata.deletionTimestamp`.
3. Controller Detects Deletion: Your controller's `Reconcile` function detects the `deletionTimestamp`.
4. Execute Cleanup: The controller then performs any necessary cleanup operations, such as:
   - De-provisioning external cloud resources (e.g., deleting an AWS RDS instance).
   - Calling external APIs for service de-registration.
   - Performing complex cleanup sequences that Kubernetes' native garbage collection cannot handle.
5. Remove Finalizer: Once all cleanup is successfully completed, the controller removes its finalizer string from the resource.
6. Final Deletion: Kubernetes then proceeds to completely delete the resource from etcd.
Finalizers are indispensable for ensuring API governance around resource lifecycle, guaranteeing that custom resources and their associated external infrastructure are cleanly de-provisioned, preventing orphaned resources and potential cost overruns. In our AppDeployment example, we added a placeholder for finalizer logic. For a production system interacting with external apis, this is where you'd put the cleanup calls.
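As a rough illustration, the add/cleanup/remove cycle might look like the sketch below. The finalizer string matches the one used in this guide; `cleanupExternalResources` is a hypothetical helper standing in for whatever external teardown you need.

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	appsv1alpha1 "example.com/app-deployment-controller/api/v1" // placeholder: use your module's API path
)

const appDeploymentFinalizer = "apps.example.com/finalizer"

// handleFinalizer adds the finalizer on live objects and, on deletion, runs
// cleanup before releasing the finalizer so Kubernetes can remove the object.
func (r *AppDeploymentReconciler) handleFinalizer(ctx context.Context, app *appsv1alpha1.AppDeployment) (ctrl.Result, error) {
	if app.GetDeletionTimestamp() == nil {
		// Object is live: make sure our finalizer is present.
		if !controllerutil.ContainsFinalizer(app, appDeploymentFinalizer) {
			controllerutil.AddFinalizer(app, appDeploymentFinalizer)
			return ctrl.Result{}, r.Client.Update(ctx, app)
		}
		return ctrl.Result{}, nil
	}

	// Object is being deleted: clean up, then release the finalizer.
	if controllerutil.ContainsFinalizer(app, appDeploymentFinalizer) {
		if err := r.cleanupExternalResources(ctx, app); err != nil { // hypothetical cleanup helper
			return ctrl.Result{}, err // returned errors are retried with backoff
		}
		controllerutil.RemoveFinalizer(app, appDeploymentFinalizer)
		return ctrl.Result{}, r.Client.Update(ctx, app)
	}
	return ctrl.Result{}, nil
}
```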
Webhooks: Admission Controllers (Mutating and Validating)
Webhooks are HTTP callbacks that allow external services to intercept requests to the Kubernetes API Server before they are persisted. Kubebuilder fully supports generating and deploying webhooks for your custom resources, enabling powerful API governance capabilities.
There are two main types of admission webhooks:
- Validating Admission Webhooks: These webhooks inspect incoming requests and can accept or reject them based on custom validation logic. They are executed after OpenAPI schema validation but before the object is persisted.
  - Use Cases: Enforcing complex business rules that cannot be expressed purely through OpenAPI schema (e.g., "only allow deployment of `AppDeployment`s with images from a trusted registry," or "ensure `replicas` is not increased by more than 50% in a single update"). This is a direct form of API governance for your custom resources.
- Mutating Admission Webhooks: These webhooks can modify (mutate) incoming requests. They are executed before validating webhooks.
  - Use Cases: Automatically injecting default values, adding labels/annotations, or adding sidecar containers to Pods generated by your custom resource (e.g., automatically adding a logging agent sidecar to every Pod created by an `AppDeployment`).
Kubebuilder can generate webhook boilerplate using:
```bash
kubebuilder create webhook --group apps --version v1 --kind AppDeployment --defaulting --programmatic-validation
```
This command generates webhook configurations and Go code for mutating (Default) and validating (ValidateCreate, ValidateUpdate, ValidateDelete) functions. These functions run synchronously with API requests, allowing for immediate feedback and policy enforcement, which is critical for robust API governance.
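For instance, a validating rule that OpenAPI schema alone cannot express might look roughly like the following. The trusted-registry prefix is a made-up example, and the exact method signature depends on your controller-runtime version (newer releases also return `admission.Warnings`).

```go
package v1

import (
	"fmt"
	"strings"
)

// ValidateCreate sketches a business rule enforced at admission time:
// only images from a (hypothetical) trusted registry are accepted.
func (r *AppDeployment) ValidateCreate() error {
	if !strings.HasPrefix(r.Spec.Image, "registry.example.com/") {
		return fmt.Errorf("image %q must come from registry.example.com", r.Spec.Image)
	}
	return nil
}
```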
Context Cancellation and Graceful Shutdown
Kubernetes controllers typically run as long-lived processes. When the controller Pod is terminated (e.g., due to an update, scaling down, or node failure), it needs to shut down gracefully. The context.Context object, passed through the reconciliation loop, is essential for this.
- `context.Context`: Used to carry deadlines, cancellation signals, and other request-scoped values across API boundaries. In controller-runtime, the `context.Context` passed to `Reconcile` is associated with the controller's lifecycle.
- Graceful Shutdown: The controller-runtime Manager handles graceful shutdown. When it receives a termination signal (e.g., `SIGTERM`), it cancels the main context. The `Reconcile` function and any goroutines you start should respect this context by checking `ctx.Done()` or using context-aware operations (e.g., `client.Client` methods are context-aware). This ensures that ongoing operations are cleanly stopped, preventing resource corruption or stalled processes (see the sketch after this list).
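A minimal, self-contained sketch of a context-aware helper: it polls some external dependency but returns as soon as the manager cancels the context during shutdown. The `check` callback is a hypothetical placeholder.

```go
package controllers

import (
	"context"
	"time"
)

// waitForExternalSystem polls until check reports ready, an error occurs, or
// the context is cancelled (e.g., the manager is shutting down on SIGTERM).
func waitForExternalSystem(ctx context.Context, check func(context.Context) (bool, error)) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // stop promptly on shutdown or timeout
		case <-ticker.C:
			ready, err := check(ctx)
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
		}
	}
}
```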
Metrics and Logging
Observability is paramount for production controllers.
- Logging: As shown, `log.FromContext(ctx)` provides structured logging. Ensure your logs are informative, context-rich (e.g., include resource name, namespace, relevant fields), and provide different levels of detail (info, debug, error).
- Metrics: controller-runtime integrates with Prometheus. Controllers can expose custom metrics (e.g., reconciliation duration, number of errors, number of resources managed), which lets you monitor controller health, performance, and operational trends. Custom collectors are registered with `metrics.Registry.MustRegister(...)`, and you can then query them via the `/metrics` endpoint typically exposed by the controller Pod (see the sketch after this list).
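As a sketch, registering a custom counter with the controller-runtime metrics registry might look like this; the metric name and label are illustrative.

```go
package controllers

import (
	"github.com/prometheus/client_golang/prometheus"
	"sigs.k8s.io/controller-runtime/pkg/metrics"
)

// reconcileErrors counts reconcile failures by kind; once registered with the
// controller-runtime registry it is served on the controller's /metrics endpoint.
var reconcileErrors = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "appdeployment_reconcile_errors_total", // illustrative metric name
		Help: "Number of reconcile errors, labelled by error kind.",
	},
	[]string{"kind"},
)

func init() {
	metrics.Registry.MustRegister(reconcileErrors)
}

// Inside Reconcile you would then increment it, for example:
//   reconcileErrors.WithLabelValues("transient").Inc()
```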
Leader Election
When deploying multiple replicas of your controller for high availability, you must ensure that only one instance is actively performing reconciliation at any given time for a specific resource type. This prevents race conditions where multiple controllers try to modify the same child resources simultaneously.
Leader election mechanisms (like those provided by controller-runtime) use a lease-locking pattern within Kubernetes itself, typically a Lease object in the namespace the controller runs in. Only the controller instance that successfully acquires the lease becomes the "leader" and performs reconciliation. If the leader fails, another replica attempts to acquire the lease and takes over. The Kubebuilder main.go scaffold includes leader election configuration, wired to a command-line flag.
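In main.go, enabling leader election comes down to two Manager options. A minimal sketch (the election ID is a placeholder, and real scaffolds expose these options via flags):

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())

	// With LeaderElection enabled, only the replica holding the Lease reconciles;
	// the others stay on standby and take over if the leader dies.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "appdeployment.apps.example.com", // placeholder: must be unique per controller
	})
	if err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```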
Testing Strategies (Unit, Integration, E2E)
Robust testing is crucial for custom controllers.
- Unit Tests: Test individual functions and reconciliation logic in isolation using Go's standard `testing` package. Mock external dependencies like `client.Client`.
- Integration Tests: Test the controller against a real (but in-memory or ephemeral) Kubernetes API Server without needing a full cluster. `controller-runtime/pkg/envtest` provides a lightweight API server and etcd for this purpose, letting you test the interaction between your controller and Kubernetes resources efficiently. Kubebuilder scaffolds `suite_test.go` for this (a minimal sketch follows this list).
- End-to-End (E2E) Tests: Deploy the controller to a real Kubernetes cluster (like Minikube or Kind) and interact with it via `kubectl` to verify its behavior in a complete environment. These are slower but provide the highest confidence.
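A stripped-down sketch of an envtest setup (Kubebuilder's generated suite_test.go is fuller and typically uses Ginkgo; the CRD path below assumes the standard project layout):

```go
package controllers

import (
	"os"
	"path/filepath"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// TestMain starts a local API server and etcd with the project's CRDs installed,
// runs the tests, and tears the environment down again.
func TestMain(m *testing.M) {
	testEnv := &envtest.Environment{
		CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
	}

	cfg, err := testEnv.Start()
	if err != nil {
		panic(err)
	}
	_ = cfg // in real tests, build a client (and usually the manager) from cfg

	code := m.Run()
	_ = testEnv.Stop()
	os.Exit(code)
}
```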
Designing Good CRD Schemas
A well-designed CRD schema is fundamental for API governance and usability.
- Clarity: Field names should be intuitive and clearly convey their purpose.
- Validation: Use OpenAPI schema validation (via Kubebuilder markers) extensively to enforce constraints on values, types, and required fields. This prevents invalid configurations from even being accepted by the API Server (see the sketch below).
- Immutability: Consider marking fields as immutable if they should not change after creation.
- Defaults: Use mutating webhooks or controller logic to inject sensible default values, simplifying the user's YAML.
- Status Field: Design the status field to provide comprehensive, actionable feedback to users about the observed state, progress, and any issues.
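For illustration, the sketch below shows how validation markers sit directly on the spec struct; the fields mirror this guide's AppDeployment, while the exact constraints are examples.

```go
package v1

// AppDeploymentSpec shows Kubebuilder markers that become OpenAPI validation
// in the generated CRD; invalid objects are rejected by the API Server itself.
type AppDeploymentSpec struct {
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=10
	Replicas int32 `json:"replicas"`

	// +kubebuilder:validation:MinLength=1
	Image string `json:"image"`

	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=65535
	Port int32 `json:"port"`

	// +optional
	Labels map[string]string `json:"labels,omitempty"`
}
```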
This comprehensive set of advanced concepts empowers you to build highly reliable, scalable, and user-friendly Kubernetes controllers that effectively extend the platform and enforce strong API governance across your custom resource landscape.
Integrating with External APIs: The Role of APIPark
While our controller primarily focuses on managing Kubernetes-native resources like Deployments and Services through custom resources, many sophisticated applications require interaction with external APIs. For instance, a custom resource might represent an application that needs to be registered with an external service directory, provision resources in a cloud provider, or communicate with a suite of AI models. Managing these external API endpoints efficiently and securely is crucial for comprehensive API governance.
This is where platforms like ApiPark become invaluable. APIPark is an open-source AI gateway and API management platform that simplifies the integration and deployment of both AI and traditional REST services. Imagine a scenario where your Kubernetes controller, after provisioning an AppDeployment, needs to:
- Register the new service's external endpoint with an API gateway.
- Expose a prompt-encapsulated AI model as a REST API for your application to consume.
- Apply unified authentication and traffic management for all external api calls made by your AppDeployment instances.
APIPark can provide the infrastructure to handle these tasks. For example, if your controller needs to interact with various AI models (like Claude, Deepseek) or other REST services, APIPark offers a unified api format for AI invocation, abstracting away model-specific complexities. Your controller could, in its reconciliation loop, make calls to the APIPark gateway to provision a new AI service or configure an API endpoint that exposes a specific LLM with pre-defined prompts. This simplifies the controller's external api interaction logic, allowing it to focus on orchestration rather than the intricacies of each external service's api design.
By using an API management platform like APIPark, you extend your API governance strategy beyond the Kubernetes cluster. It ensures that any external api interactions orchestrated by your controller are also subject to robust management, monitoring, and security policies, providing a holistic approach to managing your entire application ecosystem. Your controller could, for example, define a custom resource AIModelService that, when created, triggers the controller to interact with APIPark's OpenAPI or custom api to provision and manage access to a specific AI model gateway endpoint, unifying control over both internal and external API aspects.
Best Practices and Considerations for Production Controllers
Developing a functional Kubernetes controller is just the beginning. To ensure it's reliable, scalable, and maintainable in a production environment, adherence to a set of best practices is critical. These considerations extend the principles of API governance to the operational runtime of your controller.
1. Idempotency
As highlighted earlier, idempotency is perhaps the most fundamental principle for controller logic. The reconciliation loop can be triggered multiple times for the same resource, possibly due to:
- Retries: After transient errors.
- Concurrent Updates: Multiple users or controllers modifying related resources.
- External Changes: A child resource is modified or deleted outside the controller's direct action.
- Requeues: Intentional re-queues by the controller itself.
Every action your controller takes (creating a Deployment, updating a Service, calling an external api) must be designed such that applying it multiple times has the same effect as applying it once. This typically means:
- Check Existence First: Before creating, check if the resource already exists.
- Conditional Updates: Only update if a significant change is detected (e.g., using `reflect.DeepEqual` or strategic merge patches; see the sketch after this list).
- Resource Naming: Use stable, predictable names for child resources based on the parent custom resource's name and UID to avoid collisions.
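One idiomatic way to get this behaviour is controller-runtime's CreateOrUpdate helper, sketched below: it reads the current object, applies the mutate function, and only writes when something changed. The label set and container name are illustrative, and the API import path is a placeholder.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	appsv1alpha1 "example.com/app-deployment-controller/api/v1" // placeholder: use your module's API path
)

// reconcileDeployment creates or updates the child Deployment idempotently:
// re-running it with an unchanged spec results in no write at all.
func (r *AppDeploymentReconciler) reconcileDeployment(ctx context.Context, app *appsv1alpha1.AppDeployment) error {
	labels := map[string]string{"app": app.Name} // stable, predictable naming and labelling
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: app.Name, Namespace: app.Namespace},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, r.Client, dep, func() error {
		dep.Spec.Replicas = &app.Spec.Replicas
		dep.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
		dep.Spec.Template.ObjectMeta.Labels = labels
		dep.Spec.Template.Spec.Containers = []corev1.Container{{
			Name:  "app",
			Image: app.Spec.Image,
		}}
		// Owner reference makes the Deployment garbage-collectable with its parent.
		return ctrl.SetControllerReference(app, dep, r.Scheme)
	})
	return err
}
```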
2. Robust Error Handling and Retry Mechanisms
Controllers operate in a dynamic and potentially unreliable environment. Network issues, API Server overloads, invalid user input, and external service failures are common.
- Distinguish Errors: Differentiate between transient errors (e.g., network timeout, API quota exceeded, resource temporarily unavailable) and permanent errors (e.g., invalid configuration that cannot be corrected without user intervention).
- Exponential Backoff: For transient errors, use the workqueue's rate-limiting capabilities to re-queue the item with an exponential backoff delay. This prevents overwhelming the API Server or external services and allows the system to recover. controller-runtime's default requeue behavior handles this.
- Status Updates for Permanent Errors: For permanent errors, the controller should update the custom resource's `status` field with clear error messages and conditions. This provides immediate feedback to the user, allowing them to diagnose and correct the issue. Avoid endlessly retrying permanent errors without status updates (a sketch of this split follows this list).
- Circuit Breakers: For interactions with external APIs (especially those managed by an API gateway like APIPark), consider implementing circuit breaker patterns to prevent cascading failures if the external service is unhealthy.
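A sketch of the transient-versus-permanent split is shown below. It assumes the Status.Conditions field from this guide's AppDeployment; the condition type, reason, and requeue delay are illustrative.

```go
package controllers

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	appsv1alpha1 "example.com/app-deployment-controller/api/v1" // placeholder: use your module's API path
)

// handleReconcileError requeues transient failures with a delay and records
// permanent failures on the resource's status instead of retrying forever.
func (r *AppDeploymentReconciler) handleReconcileError(ctx context.Context, app *appsv1alpha1.AppDeployment, cause error, transient bool) (ctrl.Result, error) {
	if transient {
		// The workqueue applies rate limiting; an explicit RequeueAfter is also an option.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}

	// Permanent error: surface it to the user via a status condition.
	meta.SetStatusCondition(&app.Status.Conditions, metav1.Condition{
		Type:    "Ready",
		Status:  metav1.ConditionFalse,
		Reason:  "InvalidConfiguration",
		Message: cause.Error(),
	})
	return ctrl.Result{}, r.Client.Status().Update(ctx, app)
}
```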
3. Security (RBAC and Least Privilege)
Your controller runs as a Service Account within the cluster and makes api calls using that identity. Adhering to the principle of least privilege is paramount for API governance.
- Specific RBAC: Define very specific `Role` and `ClusterRole` resources that grant your controller only the permissions it needs on the resources it manages. Avoid wildcards (`*`) where possible.
  - The Kubebuilder markers (`+kubebuilder:rbac:`) automatically generate RBAC rules based on the resources your controller interacts with (gets, lists, watches, creates, updates, patches, deletes). Review and refine these generated rules (an example set is shown after this list).
- Service Account: Ensure your controller Pod uses a dedicated `ServiceAccount` that is bound to these specific RBAC roles.
- Secrets Management: If your controller needs to access sensitive credentials (e.g., for external APIs), use Kubernetes `Secrets` and ensure they are only mounted to the controller Pod and handled securely within the controller code. Avoid hardcoding credentials.
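As a point of reference, a marker set for this guide's controller might look roughly like the snippet below (placed above Reconcile); `make manifests` regenerates the RBAC rules from these markers.

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// Grant only what the controller actually needs: its own CRD (plus the status
// and finalizers sub-resources) and the child Deployments and Services it manages.

// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.example.com,resources=appdeployments/finalizers,verbs=update
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete

func (r *AppDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... reconciliation logic as shown earlier in this guide ...
	return ctrl.Result{}, nil
}
```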
4. Performance and Scalability
Controllers need to be efficient, especially in large clusters or when managing a high volume of custom resources.
- Efficient Reads: Leverage the informer's in-memory cache for read operations. Avoid making direct `client.Client.Get` calls against the API Server inside loops if the data can be retrieved from the cache (see the sketch after this list).
- Batched Operations: If performing multiple similar operations (e.g., creating several child resources), consider batching them if the target api supports it.
- Resource Consumption: Monitor your controller's CPU and memory usage. Optimize Go code for efficiency.
- Leader Election: As discussed, use leader election for high availability so multiple replicas do not conflict; only the leader reconciles while the others stand by.
- Horizontal Scaling: Run multiple replicas of your controller and let controller-runtime handle leader election; the extra replicas primarily provide fast failover rather than parallel processing.
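For example, a cache-backed list of the controller's own children could look like the sketch below; the label key is a hypothetical one set by the controller when it creates its children.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listOwnedDeployments lists child Deployments by label. With the manager's
// default client, Get/List for watched types is served from the informer
// cache, so this does not hit the API Server on every reconcile.
func (r *AppDeploymentReconciler) listOwnedDeployments(ctx context.Context, namespace, appName string) (*appsv1.DeploymentList, error) {
	var deps appsv1.DeploymentList
	err := r.Client.List(ctx, &deps,
		client.InNamespace(namespace),
		client.MatchingLabels{"app": appName}, // hypothetical label applied to children
	)
	return &deps, err
}
```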
5. Observability (Metrics, Logs, Traces)
A controller running silently in the cluster is an operational blind spot. Robust observability is crucial for monitoring its health, diagnosing issues, and understanding its behavior.
- Structured Logging: Use structured logging (as provided by controller-runtime's zap integration) to output machine-readable logs that can be easily parsed, filtered, and analyzed by log aggregation systems. Include relevant resource identifiers (name, namespace, UID) in every log entry.
- Prometheus Metrics: Expose Prometheus-compatible metrics to monitor:
- Reconciliation loop duration and frequency.
- Number of reconciliation errors (distinguish between transient/permanent).
- Number of resources reconciled.
- Controller-specific metrics (e.g., calls to external APIs, cache hit rates).
- Tracing (Optional): For complex controllers interacting with many services, distributed tracing can help visualize the flow of requests and pinpoint bottlenecks.
6. Designing Good CRD Schemas (Reiteration)
The schema of your custom resource is its public api. A well-designed schema is critical for usability, validation, and API governance.
- Clear `Spec` and `Status`: Clearly separate desired state (`Spec`) from observed actual state (`Status`). Users should only modify `Spec`.
- Granularity: Design CRDs to be at the right level of abstraction. Too fine-grained, and users have to manage many objects; too coarse-grained, and the controller becomes overly complex.
- Validation Markers: Leverage OpenAPI schema validation via Kubebuilder markers (e.g., `+kubebuilder:validation:Minimum`, `Pattern`, `Enum`, `Required`) to ensure valid input.
- Versioning: Plan for api versioning (e.g., `v1alpha1`, `v1beta1`, `v1`) from the outset to manage evolution gracefully.
7. Lifecycle Management of Custom Resources
Consider the entire lifecycle of your custom resources, not just creation and updates.
- Deletion Policy: Clearly define what happens when a custom resource is deleted. Are external resources de-provisioned? Are child resources cleaned up? Use finalizers for external cleanup.
- Upgrades and Downgrades: How will your controller handle upgrades to new versions of the CRD or the controller itself? Ensure backward and forward compatibility. Use conversion webhooks for api version migrations if needed.
- Maintenance: Plan for periodic maintenance tasks within the controller, such as cleaning up stale resources or refreshing tokens for external apis.
By diligently applying these best practices, you can build production-grade Kubernetes controllers that are not only powerful and automated but also secure, stable, and easily manageable, contributing to a robust API governance strategy for your entire cloud-native ecosystem.
Example Table: CRD Fields and Their Purposes
To illustrate the importance of well-defined CRD schemas and the use of OpenAPI validation through Kubebuilder markers, let's look at a table summarizing common fields and their associated markers in our AppDeployment CRD. This highlights how design choices contribute to clear API governance for custom resources.
| Field Name | Type | Purpose | Kubebuilder Markers (OpenAPI Validation) |
|---|---|---|---|
| `spec.replicas` | `int32` | Defines the desired number of replicas (Pods) for the application. This is a core scaling parameter. | `+kubebuilder:validation:Minimum=1` `+kubebuilder:validation:Maximum=10` `json:"replicas"` (required) |
| `spec.image` | `string` | Specifies the Docker image to be used for the application's container. This directly influences the application version. | `json:"image"` (required) |
| `spec.port` | `int32` | The internal container port that the application listens on. This is used to configure the Kubernetes Service's `targetPort`. | `+kubebuilder:validation:Minimum=1` `+kubebuilder:validation:Maximum=65535` `json:"port"` (required) |
| `spec.labels` | `map[string]string` | Optional user-defined labels to be applied to the managed Deployment and Service. Allows for custom tagging and selection. | `json:"labels,omitempty"` `+optional` |
| `status.replicas` | `int32` | The actual number of Pod replicas currently running for this `AppDeployment`, as observed by the controller from the underlying Kubernetes Deployment. Provides real-time feedback. | `json:"replicas,omitempty"` `+optional` |
| `status.availableReplicas` | `int32` | The number of Pod replicas that are currently available and serving traffic. This is a key indicator of the application's health and readiness. | `json:"availableReplicas,omitempty"` `+optional` |
| `status.conditions` | `[]metav1.Condition` | A standard Kubernetes pattern for reporting the overall health and progress of a resource. Each condition describes a specific aspect (e.g., "Available", "Progressing") with its status (True, False, Unknown). Crucial for API governance in communicating system state. | `json:"conditions,omitempty"` `+optional` |
| `metadata.finalizers` | `[]string` | A list of strings that prevent an object from being deleted until all finalizers are removed. Used by the controller to perform custom cleanup logic before the resource is fully removed. Vital for API governance of resource lifecycles. | Not in `AppDeploymentSpec`/`Status`; part of the embedded `metav1.ObjectMeta`. Managed via `controllerutil.AddFinalizer`/`RemoveFinalizer`. |
| `metadata.deletionTimestamp` | `*metav1.Time` | Indicates that the resource has been requested for deletion. Its presence triggers the controller's finalization logic. | Not in `AppDeploymentSpec`/`Status`; part of the embedded `metav1.ObjectMeta`. Set by the Kubernetes api; the controller checks `object.GetDeletionTimestamp() != nil`. |
| `+kubebuilder:printcolumn` | N/A | Markers used directly on the `AppDeployment` struct to define custom columns for `kubectl get appdeployments` output. Enhances user experience and quick observability, extending API governance to the CLI. | `+kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas"` `+kubebuilder:printcolumn:name="Image",type="string",JSONPath=".spec.image"` `+kubebuilder:printcolumn:name="Status.Available",type="integer",JSONPath=".status.availableReplicas"` |
This table clearly demonstrates how a thoughtful design of the CRD schema, coupled with Kubebuilder markers and OpenAPI validation, creates a robust, user-friendly, and governable custom API within Kubernetes.
Conclusion
Building a Kubernetes controller to watch for CRD changes is a powerful paradigm that fundamentally transforms how organizations extend, automate, and manage their cloud-native infrastructure. We've journeyed through the core principles, from Kubernetes' api-driven architecture and the declarative power of Custom Resource Definitions to the continuous reconciliation loop that defines a controller's operational essence. By leveraging tools like Kubebuilder, developers can abstract away much of the boilerplate, focusing their efforts on the unique business logic that their custom resources and controllers bring to life.
This guide provided a detailed, step-by-step walkthrough, demonstrating how to define a custom AppDeployment resource, implement a controller to manage its lifecycle (including creating Deployments and Services), handle graceful deletion with finalizers, and deploy the entire system to a Kubernetes cluster. We also explored advanced concepts such as webhooks for robust API governance and policy enforcement, leader election for high availability, and comprehensive testing strategies to ensure production readiness. Throughout this process, we've seen how principles of API governance, from schema validation with OpenAPI to secure RBAC configurations, are woven into the very fabric of controller development.
The ability to create custom resources and pair them with intelligent controllers unlocks an unprecedented level of automation and domain-specific knowledge encapsulation within Kubernetes. Whether you're integrating with complex external systems, implementing intricate application lifecycle management, or enforcing custom operational policies, custom controllers empower you to treat your infrastructure as code, driven by declarative APIs. This approach not only streamlines operations but also fosters innovation, allowing teams to build tailored solutions that seamlessly integrate with the Kubernetes ecosystem. As you continue to build and manage your cloud-native applications, the skills acquired in developing custom controllers will prove invaluable, positioning you at the forefront of Kubernetes extensibility and advanced cluster automation.
Frequently Asked Questions (FAQs)
1. What is the fundamental difference between a Kubernetes Controller and an Operator?
While often used interchangeably, an Operator is essentially a specialized type of Controller. A Kubernetes Controller is a control loop that watches for specific resource types and reconciles their observed state with a desired state. An Operator, however, extends this concept to manage complex applications or services (often stateful ones) using the Kubernetes API. An Operator typically encapsulates human operational knowledge about a specific application, automating tasks like deployment, scaling, backup, recovery, and upgrades for that particular application. So, while all Operators are Controllers, not all Controllers are Operators (e.g., a simple controller that just sets a label based on an annotation is a controller, but not typically called an Operator).
2. Why is Go the preferred language for writing Kubernetes Controllers?
Go is the preferred language due to several key factors:
- Official Client Library: The official Kubernetes client library (client-go) is written in Go, offering the most comprehensive and up-to-date API access.
- Performance and Concurrency: Go's efficient concurrency model (goroutines and channels) is well-suited for the asynchronous, event-driven nature of controllers, allowing them to handle many events concurrently.
- Static Linking: Go compiles into single, statically linked binaries, making deployment simple and efficient within container images.
- Community and Ecosystem: The vast majority of Kubernetes itself, and its controller development frameworks (like controller-runtime and Kubebuilder), are written in Go, leading to a rich ecosystem, extensive documentation, and strong community support.
3. How do I ensure my Controller is highly available and avoids race conditions?
To ensure high availability and prevent race conditions when running multiple replicas of your controller:
1. Leader Election: Implement leader election (which controller-runtime includes by default). This ensures that only one instance of your controller is actively performing reconciliation at any given time for a given resource type, while other replicas remain in a standby state, ready to take over if the leader fails.
2. Idempotent Logic: All of your controller's actions must be idempotent, meaning applying them multiple times has the same effect as applying them once. This protects against partial operations or out-of-order event processing if leader election fails over or multiple controllers briefly become active.
3. Owner References: Properly configure owner references for all child resources. This enables Kubernetes' native garbage collection and helps inform your controller when its owned resources are modified by other actors, triggering appropriate reconciliation.
4. What is the role of OpenAPI in CRDs, and how does it relate to API Governance?
OpenAPI (specifically the OpenAPI v3 schema) is used within Custom Resource Definitions (CRDs) to define and validate the structure and content of your custom resources. When you create a CRD, you specify its schema using an OpenAPI v3-compliant definition. This provides several benefits for API Governance:
- Strong Validation: The Kubernetes API Server uses this OpenAPI schema to validate all incoming requests for your custom resources. This ensures that users provide correctly structured data and adhere to type constraints, value ranges, and required fields, preventing malformed or invalid configurations from even entering the system.
- Documentation: OpenAPI schemas serve as self-documenting APIs, clearly defining the expected input and output formats for your custom resources.
- Tooling Integration: Tools like kubectl can leverage the OpenAPI schema to provide features like client-side validation and autocompletion for custom resources.

This robust validation and clear definition are fundamental to maintaining API governance for your custom Kubernetes extensions.
5. When should I consider using a Mutating or Validating Admission Webhook with my Controller?
You should consider using Admission Webhooks when OpenAPI schema validation alone is insufficient for your API Governance requirements:
- Validating Webhooks: Use these when you need to enforce complex business rules or cross-field validations that cannot be expressed in an OpenAPI schema. For example, ensuring that a resource's spec field adheres to a specific naming convention, validating permissions based on external systems, or enforcing policy checks before resource creation or modification.
- Mutating Webhooks: Use these when you need to automatically modify incoming resources, such as injecting default values, adding labels or annotations, or injecting sidecar containers into Pods created by your custom resource. This can simplify user input and enforce consistency across resources without requiring manual intervention.
Both types of webhooks are powerful tools for real-time API Governance and automation at the API Server level, intercepting requests before they are persisted to etcd.
You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
```bash
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh
```

In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.

Step 2: Call the OpenAI API.
