In the Kubernetes ecosystem, Custom Resource Definitions (CRDs) play a vital role in extending the platform to advanced, tailored use cases. A CRD allows developers to define and manage their own resource types alongside the built-in ones. In this guide, we will walk through building a controller that watches for changes to a custom resource, using kubebuilder and the standard Kubernetes tooling. The goal is to ensure enterprise-safe usage of AI services such as MLflow AI Gateway, LLM Gateway, and Data Format Transformation while leveraging Kubernetes CRD capabilities.
Understanding Custom Resource Definitions (CRDs)
Before diving into creating a controller, let’s clarify what CRDs are. A CRD allows users to extend Kubernetes’ API with their own custom resources. Once you define a CRD, users can interact with it using the standard Kubernetes tools and commands. For example, if you create a CRD called “MachineLearningModel,” you can use Kubernetes commands to manage it just like built-in resources such as Pods and Deployments.
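To make that concrete, here is a minimal hand-written CRD manifest for such a resource. The group `ai.mydomain.com` and the field `name` are placeholders for illustration; later in this guide kubebuilder generates equivalent manifests for you:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: machinelearningmodels.ai.mydomain.com
spec:
  group: ai.mydomain.com
  names:
    kind: MachineLearningModel
    plural: machinelearningmodels
    singular: machinelearningmodel
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
```

Once this is applied, `kubectl get machinelearningmodels` works just like `kubectl get pods`.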
Advantages of CRDs:
- Custom Functionality: Provides the ability to create complex entities that are specific to your business logic.
- Standardization: Allows you to define a stable API for your custom resources, facilitating interaction across various teams.
- Integration with Controllers: CRDs can be watched over by controllers, ensuring that desired states are maintained automatically.
Overview of Kubernetes Controllers
A controller in Kubernetes is a control loop that watches the state of your cluster, compares it to the desired state, and takes action to make the cluster state match that desired state. Controllers are fundamental to Kubernetes, maintaining the health and desirable state of resources.
Responsibilities of the Controller:
- Watch for Changes: Continuously monitor the state of CRDs.
- Respond to Events: Take action when a change occurs (for example, creating or updating a resource).
- Manage Resource States: Monitor and reconcile the actual state of a resource against its desired state to ensure stability and reliability.
Prerequisites
Before getting started, ensure you have the following set up:
- A Kubernetes cluster where you can deploy your custom resources.
- Kubernetes client tools installed (kubectl).
- Familiarity with Go programming language (as we will utilize it to write our controller).
- kubebuilder installed for scaffolding our project.
Creating the Controller
Step 1: Setting Up Project Structure
Start by creating a new directory for your controller:
mkdir mycrd-controller
cd mycrd-controller
Now, initialize a new Kubebuilder project:
kubebuilder init --domain mydomain.com --repo mycrd.controller
Step 2: Define Your Custom Resource
Use kubebuilder to create a new API:
kubebuilder create api --group ai --version v1 --kind MLModel
Note that the group name must be lowercase, since it becomes part of a DNS subdomain (ai.mydomain.com). This command generates the scaffolding for your custom resource. You’ll see two main components generated: an API type and a controller.
Step 3: Edit the CRD Definition
Navigate to the api/v1/mlmodel_types.go file and define your custom resource structure. For example:
type MLModelSpec struct {
	Name        string `json:"name"`
	Version     string `json:"version"`
	Description string `json:"description"`
}

type MLModelStatus struct {
	// Kubernetes API conventions use camelCase JSON field names.
	CreatedAt metav1.Time `json:"createdAt,omitempty"`
	UpdatedAt metav1.Time `json:"updatedAt,omitempty"`
	Status    string      `json:"status,omitempty"`
}
Step 4: Implementing the Controller Logic
Now, navigate to the controller file controllers/mlmodel_controller.go and implement the reconciliation logic to respond to changes.
Here is a code snippet to illustrate how the controller watches for changes in our custom resource:
func (r *MLModelReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := log.FromContext(ctx)

	// aiv1 is the import alias for your api/v1 package.
	var mlModel aiv1.MLModel
	if err := r.Get(ctx, req.NamespacedName, &mlModel); err != nil {
		log.Error(err, "unable to fetch MLModel")
		// Ignore not-found errors: the object may have been deleted.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	log.Info("Reconciling MLModel", "name", mlModel.Name)
	// Implement your logic here.
	// For example, trigger an AI processing pipeline when a new model is registered.
	return ctrl.Result{}, nil
}
Step 5: Setting Up Watch on CRD
The watch on your custom resource is wired up in the controller’s SetupWithManager method, which kubebuilder scaffolds in controllers/mlmodel_controller.go and registers from main.go:

func (r *MLModelReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&aiv1.MLModel{}).
		Complete(r)
}

The For clause tells the manager to watch MLModel objects and enqueue a reconcile request whenever one is created, updated, or deleted.
Step 6: Deploy the Controller to Kubernetes
Deploying your controller is as simple as running the following commands:
make install
make run
You can also deploy your controller in a more production-like setup using:
make docker-build docker-push IMG=<your-image>
make deploy IMG=<your-image>
Step 7: Testing the Controller
Once deployed, you can test the controller by creating instances of your MLModel CRD:
apiVersion: ai.mydomain.com/v1
kind: MLModel
metadata:
  name: example-mlmodel
spec:
  name: ExampleModel
  version: "1.0"
  description: "An example machine learning model."
Deploy it with kubectl apply -f mlmodel.yaml.
Step 8: Observing Changes
Once the CRD has been created, you can observe the logs of your controller to see if any changes are detected:
kubectl logs -f deployment/mycrd-controller-controller-manager -n mycrd-controller-system
Managing AI Services with MLflow AI Gateway and LLM Gateway
Integrating AI services like MLflow AI Gateway and LLM Gateway can be seamlessly accomplished using the controller functionality. You can define workflows that call APIs from these services based on changes detected in your CRD.
Example Workflow for AI Service Interaction
Here’s how you can implement a simple interaction with an AI service every time your MLModel CRD is created or updated:
// Example function to call MLflow AI Gateway (the endpoint URL is a placeholder).
// Requires the bytes, encoding/json, and net/http imports.
func callMLflowService(log logr.Logger, mlModel aiv1.MLModel) {
	log.Info("Calling MLflow service for model", "name", mlModel.Name)

	body, err := json.Marshal(mlModel.Spec)
	if err != nil {
		log.Error(err, "unable to marshal model spec")
		return
	}

	resp, err := http.Post("http://mlflow.api/register", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Error(err, "Error calling MLflow service")
		return
	}
	defer resp.Body.Close()

	log.Info("Successfully registered model with MLflow", "status", resp.Status)
}
Conclusion
Creating a controller to watch for changes in Custom Resource Definitions in Kubernetes is a powerful technique to enhance the extendability and automation capabilities of your applications. By leveraging this capability, organizations can ensure enterprise-safe usage of AI services, manage complex machine learning models efficiently, and maintain the stability of resources while serving their innovative needs in today’s data-driven environment.
Incorporating this functionality not only streamlines your operations but also fosters an agile development process, reducing time-to-market for your AI solutions. As the cloud-native paradigm continues to evolve, mastering these skills will position you as a vital contributor to your organization’s technological growth and innovation.
For further inquiries or a deeper look into Kubernetes best practices for AI and ML deployments, feel free to reach out or explore additional resources from Kubernetes documentation.
🚀 You can securely and efficiently call the OpenAI API on APIPark in just two steps:
Step 1: Deploy the APIPark AI gateway in 5 minutes.
APIPark is developed based on Golang, offering strong product performance and low development and maintenance costs. You can deploy APIPark with a single command line.
curl -sSO https://download.apipark.com/install/quick-start.sh; bash quick-start.sh
In my experience, you can see the successful deployment interface within 5 to 10 minutes. Then, you can log in to APIPark using your account.
Step 2: Call the OpenAI API.