A GitOps workflow using Kubernetes and


GitOps applies software development best practices to DevOps processes. The popular methodology uses version control, collaboration, and declarative configuration to automate updates to your infrastructure as well as the apps that run within it.

Successful GitOps workflows often use Kubernetes to orchestrate container deployment and scaling operations. But how do you actually get code into Kubernetes from your source repositories?

In this article, you’ll use Kubernetes, Argo CD, and  to set up a functioning GitOps implementation that you can use to deploy your apps and APIs. Argo CD is a declarative continuous delivery tool that brings GitOps to Kubernetes clusters, while is an API management platform that works across clouds, containers, and on-premise environments.

 

How to implement a GitOps workflow for managing a Kubernetes cluster

To follow along with this tutorial, you need GitHub and Amazon Web Services (AWS) accounts. Additionally, you should have Docker, kubectl, Helm, and the AWS CLI already installed on your system.

You’ll create a new Kubernetes cluster on Amazon Elastic Kubernetes Service (Amazon EKS), but you can skip that step if you’d prefer to use an existing cluster.

You’ll be using Argo CD to set up GitOps deployments. Argo CD is purpose-built to automate continuous delivery to Kubernetes clusters. It runs an agent inside your cluster that monitors your repositories and automatically applies changes as they’re detected. This pull-based model is simpler and more secure than push-based alternatives, where a third-party server must be granted access to your cluster.

Your workflow’s final component is . provides an API management layer that sits in front of your Kubernetes deployments. It is designed to slot into existing workflows and offers GitOps support via the Operator for Kubernetes:

 

Rough architecture diagram courtesy of James Walker

All the code for this tutorial can be found in this GitHub repo.

 

Create your API

Begin by preparing your API application. You can fork this article’s GitHub repository to get going quickly with a sample app. The container image is publicly available on Docker Hub as well.

The sample app is a simple Node.js project that uses Express to serve a single HTTP endpoint (i.e., `/time`) that provides the current server time. The repository also contains a set of Kubernetes YAML manifests that allow the app to be deployed.

Start a remote Kubernetes cluster

Once you’ve forked the repository, you’re ready to get started with Kubernetes. It automates the deployment, scaling, and operation of containers in production environments with features such as incremental rollouts, rollbacks, self-healing, and service discovery.

You can run Kubernetes on your own system using an all-in-one distribution, such as minikube, or you can provision a cluster from a managed cloud service like Amazon EKS.

Here, you create a new EKS cluster so you can deploy your API straight to the cloud, ready for production use. Note that this accrues charges to your AWS account.

 

Create IAM Roles

To begin with, log into your AWS account and head to the IAM Dashboard. You can find it using the console’s global search bar:

 

Screenshot of searching for IAM roles in the AWS console

 

Before you can use Amazon EKS, you must set up IAM roles that allow the service to create other resources in your AWS account on your behalf.

Click the Roles link that appears in the left sidebar of the IAM interface. Then press the blue Create role button to define a new role:

 

Screenshot of the Roles page in AWS IAM

 

On the next screen, keep the AWS service selected as the role’s Trusted entity type:

 

Screenshot of selecting a role’s trusted entity type in AWS IAM

 

Scroll down to the Use cases for other AWS services section and use the drop-down menu to select the EKS use case. Then change the use case type to EKS – Cluster:

 

Screenshot of selecting the use case for an AWS IAM role

 

Press the blue Next button at the bottom of the screen. Then click Next again on the following screen to reach the final Name, review, and create stage and give your role a name:

 

Screenshot of setting a new role’s name in AWS IAM

 

Complete the process by scrolling down the page and pressing Create role.

After adding the first role, repeat the steps above to create a second role. Use the same procedure but apply the following changes:

 

  1. Select EC2 as the role’s trusted entity type.
  2. Use the Permissions policy table to add the `AmazonEKSWorkerNodePolicy`, `AmazonEC2ContainerRegistryReadOnly`, `AmazonEBSCSIDriverPolicy`, and `AmazonEKS_CNI_Policy` permissions to your role.
  3. Give your role a name to complete the process.

Create your cluster

Next, use the console’s search bar to switch to the Amazon EKS dashboard and begin creating your Kubernetes cluster. Press the Add cluster button on the landing page. Then select Create from the menu:

 

Screenshot of the Amazon EKS landing page

 

On the following screen, give your cluster a name and check that a role is shown in the Cluster service role drop-down. The drop-down should be prefilled with the first role you created earlier:

 

Screenshot of creating an Amazon EKS cluster

 

You can leave the other settings at their defaults. Keep pressing the Next button to progress through the following steps and create your cluster.

 

Add nodes to your cluster

After confirming your cluster’s creation, you are taken to the cluster dashboard screen. Wait a couple of minutes. Then refresh the screen and check that the cluster’s status shows as Active:

 

Screenshot of the Amazon EKS cluster dashboard screen

 

Now you can begin adding nodes to your cluster. Switch to the Compute tab on the dashboard, scroll down to the Node groups section, and press the Add node group button:

 

Screenshot of the Node groups section of the Amazon EKS cluster dashboard

 

A node group is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances that supply compute capacity to your clusters. The nodes within a node group are created using the same EC2 instance type, but you can add multiple node groups to a single cluster.

Give your node group a name and check that the second IAM role created previously is shown in the Node IAM role drop-down:

 

Screenshot of creating a node group in Amazon EKS

 

Scroll down and press the Next button to begin configuring your node group’s compute settings. This is where you define the hardware resources available to your nodes. The defaults are sufficient for this tutorial—you have two `t3.medium` nodes, each of which provides two vCPUs and 4 GB of memory:
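If you’d rather script this setup than click through the console, the same cluster and node group can also be described declaratively. The following eksctl `ClusterConfig` is a rough sketch only, assuming the tutorial’s defaults; the cluster name, region, and node group name are placeholders, not values from this article:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: gitops-demo          # placeholder cluster name
  region: eu-west-2          # placeholder region
managedNodeGroups:
  - name: demo-nodes         # placeholder node group name
    instanceType: t3.medium  # matches the console defaults used here
    desiredCapacity: 2       # two nodes, as in the tutorial
```

You’d apply a file like this with `eksctl create cluster -f cluster.yaml`; eksctl creates the required IAM roles for you, so the manual role setup described above wouldn’t be needed in that case.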

 

Screenshot of selecting instance options for an Amazon EKS node group

 

Use the Next button to click through the next few steps and create your node group. You are taken to the group’s overview page; wait a few minutes until the Status displays as Active:

 

Screenshot of viewing a node group in Amazon EKS

 

Finally, switch back to the cluster dashboard and click the Add-ons option in its tab strip. Afterward, press the yellow Get more add-ons button:

 

Screenshot of the Amazon EKS cluster Add-ons screen

 

On the next screen, enable the Amazon EBS CSI Driver add-on:

 

Screenshot of enabling the Amazon EBS CSI Driver add-on for an Amazon EKS cluster

 

On the following screen, leave the default settings. Complete the installation of the add-on to finish the cluster configuration process. This add-on is required to allow the use of persistent storage volumes in your cluster:

 

Screenshot of the Amazon EBS CSI Driver add-on settings in Amazon EKS

 

Connect to your cluster using kubectl

Once you’ve finished the cluster configuration process, you can connect your local kubectl client to your new Amazon EKS cluster. It’s easiest to use the AWS CLI utility to automatically generate a kubeconfig entry:

 

```
$ aws eks update-kubeconfig --name <your-cluster-name>
```

 

Now you should be able to successfully run kubectl commands against your cluster, such as this one, to check the status of your nodes:

 

```
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-30-124.eu-west-2.compute.internal   Ready    <none>   2m48s   v1.27.3-eks-a55165ad
```

 

Use Argo CD for GitOps-powered CI/CD

Once you’ve connected your cluster using kubectl, you can add Argo CD to your cluster. This installs the agent that connects to your Git repositories, detects changes, and applies them to your cluster.

 

Install Argo CD

To install Argo CD, start by creating a Kubernetes namespace to hold Argo CD’s components:

 

```
$ kubectl create namespace argocd
namespace/argocd created
```

 

Next, apply Argo CD’s official YAML manifest to complete the installation in your cluster:

 

```
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Use the `kubectl get deployments` command to check that Argo CD is ready—wait until all six deployments show as available before you continue:

 

```
$ kubectl get deployments -n argocd
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
argocd-applicationset-controller   1/1     1            1           37s
argocd-dex-server                  1/1     1            1           36s
argocd-notifications-controller    1/1     1            1           36s
argocd-redis                       1/1     1            1           36s
argocd-repo-server                 1/1     1            1           36s
argocd-server                      1/1     1            1           36s
```

 

Set up the Argo CD CLI

You need the Argo CD CLI installed on your machine in order to manage your installation and create application deployments. The following sequence of commands works on Linux to download the CLI binary and deposit it into your path. You can run the binary with the `argocd` console command:

 

```
$ wget https://github.com/argoproj/argo-cd/releases/download/v2.8.0/argocd-linux-amd64
$ chmod +x argocd-linux-amd64
$ mv argocd-linux-amd64 /usr/bin/argocd
```

 

Check GitHub to find the latest version number and substitute it for `2.8.0` in the previous command.

Next, use the CLI to discover the password that the Argo CD installation process generated for the default `admin` user account:

 

```
$ argocd admin initial-password -n argocd
```

 

To preserve security, delete the Kubernetes secret that contains the password—you won’t be able to retrieve it again after running this command:

 

```
$ kubectl delete secret argocd-initial-admin-secret -n argocd
secret "argocd-initial-admin-secret" deleted
```

Connect to Argo CD

Argo CD’s API server isn’t exposed automatically. You must manually open a route to it before you can use the CLI or access the web UI.

kubectl port forwarding is the quickest way to get started for experimentation purposes. However, this method should not be used in production—you can follow the steps in the documentation to permanently expose Argo CD with a TLS-secured Ingress route.

Open a new terminal window. Then run the following command to start a new port forwarding session. It binds your local port 8080 to the Argo CD instance running in your cluster:

 

```
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
```

Switch back to your first terminal window to log into the Argo CD CLI, specifying the server to connect to:

 

```
$ argocd login localhost:8080
```

 

Because TLS isn’t being used, you need to acknowledge the self-signed certificate warning:

 

```
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?
```

Argo CD then prompts for your user credentials. Use `admin` as the username and enter the password you retrieved previously:

 

```
'admin:login' logged in successfully
Context 'localhost:8080' updated
```

 

Deploy your application

Now you’re ready to use Argo CD to deploy your app into your cluster!

Run the following command to create your app:

 

```
$ argocd app create api-demo \
  --repo https://github.com/<username>/<repo>.git \
  --path kubernetes/ \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace api-demo
application 'api-demo' created
```

 

This command instructs Argo CD to register the given repository URL as a new application. It monitors the Kubernetes manifests within the repository’s `kubernetes/` directory; when they change, Argo CD automatically updates your cluster.

The `--dest-namespace` flag defines the Kubernetes namespace that your app will be deployed to (it should match the `metadata.namespace` field set in your Kubernetes manifests). `--dest-server` tells Argo CD which Kubernetes cluster to target, while `https://kubernetes.default.svc` resolves to the cluster that Argo CD is running within.
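The same registration can also be expressed declaratively. As a sketch, the equivalent Argo CD `Application` manifest would look roughly like this (the repository URL is the same placeholder used in the command above):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-demo
  namespace: argocd        # Application objects live in Argo CD's own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/<username>/<repo>.git
    path: kubernetes/
    targetRevision: HEAD   # track the repository's default branch
  destination:
    server: https://kubernetes.default.svc
    namespace: api-demo
```

Applying a manifest like this with kubectl has the same effect as the CLI command, with the advantage that the app definition itself can be kept under version control.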

You can check your app’s status by running `argocd app list`:

 

```
$ argocd app list
NAME             CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                         PATH         TARGET
argocd/api-demo  https://kubernetes.default.svc  api-demo   default  OutOfSync  Missing  <none>      <none>      https://github.com/ilmiont/-gitops-demo.git  kubernetes/
```

 

The app shows as `Missing` and `OutOfSync`. Although the app’s been created, Argo CD hasn’t automatically synced it into the cluster.

A sync is the Argo CD operation that transitions the cluster’s state into the desired state expressed in your repository. Syncs can be requested on demand or scheduled to run automatically.
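This tutorial runs syncs on demand, but Argo CD can also sync automatically whenever the repository changes. As a sketch, the relevant fragment of an Application spec’s `syncPolicy` field looks like this:

```yaml
spec:
  syncPolicy:
    automated:
      prune: true     # delete cluster resources that were removed from the repository
      selfHeal: true  # revert manual changes made directly to the cluster
```

The CLI equivalent is `argocd app set api-demo --sync-policy automated`.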

Run your first sync to deploy your app:

 

```
$ argocd app sync api-demo
...
GROUP  KIND        NAMESPACE  NAME      STATUS   HEALTH       HOOK  MESSAGE
       Namespace   api-demo   api-demo  Running  Synced             namespace/api-demo created
       Service     api-demo   api-demo  Synced   Progressing        service/api-demo created
apps   Deployment  api-demo   api-demo  Synced   Progressing        deployment.apps/api-demo created
       Namespace              api-demo  Synced
```

 

The app should now be healthy and displaying the `Synced` status:

 

```
$ argocd app list
NAME             CLUSTER                         NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                         PATH         TARGET
argocd/api-demo  https://kubernetes.default.svc  api-demo   default  Synced  Healthy  <none>      <none>      https://github.com/ilmiont/-gitops-demo.git  kubernetes/
```

 

The sample repository defaults to running three replicas of the application using a Kubernetes deployment object. Use kubectl to check that the deployment is ready and has the expected replica count:

 

```
$ kubectl get deployment -n api-demo
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
api-demo   3/3     3            3           66s
```

 

As you can see, Argo CD has successfully deployed the application!

 

Use GitOps to apply changes

At this point, your GitOps workflow is ready to use. You can apply changes to your deployed application by committing to your repository and then running a new Argo CD sync operation. This pulls the repository’s files, compares them to what’s running in your cluster, and automatically applies any changes. The state of your infrastructure is driven by the content of your Git repository. Let’s see this in action.

Open up the `kubernetes/deployment.yml` file in the sample app’s repository. Find the `spec.replicas` field and change its value from `3` to `5` to scale the app’s deployment up:

 

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-demo
  namespace: api-demo
  labels:
    app.kubernetes.io/name: api-demo
spec:
  replicas: 5
  ...
```

 

Commit your changes and push them to GitHub:

 

```
$ git commit -am "Increase replica count to 5"
$ git push
```

 

Next, repeat the `argocd app sync` command to sync the change into your Kubernetes cluster:

 

```
$ argocd app sync api-demo
GROUP  KIND        NAMESPACE  NAME      STATUS   HEALTH       HOOK  MESSAGE
       Namespace   api-demo   api-demo  Running  Synced             namespace/api-demo unchanged
       Service     api-demo   api-demo  Synced   Healthy            service/api-demo unchanged
apps   Deployment  api-demo   api-demo  Synced   Progressing        deployment.apps/api-demo configured
       Namespace              api-demo  Synced
```

 

The deployment should now be running five replicas:

 

```
$ kubectl get deployment -n api-demo
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
api-demo   5/5     5            5           8m14s
```

 

As you can see, you’ve used GitOps to scale the app without directly interacting with Kubernetes.

 

Monitor with the Argo CD Web UI

Using your port forwarding session, you can access Argo CD’s web UI by visiting `localhost:8080` in your browser:

 

Screenshot of the Argo CD web UI

 

The Applications dashboard shows the status of all the apps that Argo CD has deployed. You can configure app options, start a sync, refresh all your apps, and change Argo CD settings. It’s a convenient way to monitor running apps without using the CLI.

 

Add API gateway to secure and monitor your API

The demo application is a simple API that provides the current time. It’s not yet publicly accessible, as the repository configures a ClusterIP service that can only be reached from within the cluster.
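For reference, a minimal ClusterIP Service for an app like this might look like the following sketch. This is illustrative only—the label selector and port numbers are assumptions, not values taken from the sample repository:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-demo
  namespace: api-demo
spec:
  type: ClusterIP            # reachable only from inside the cluster
  selector:
    app.kubernetes.io/name: api-demo  # assumed pod label
  ports:
    - port: 80               # service port (assumed)
      targetPort: 3000       # container port (assumed)
```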

You could use a Kubernetes LoadBalancer service or an Ingress to expose your API, but there are problems with these methods: directly exposing the API renders it accessible to everyone, and you have no way of monitoring usage. Additionally, building these features yourself would require a substantial development investment.

That’s where can help. It’s an API gateway that’s as simple and reliable as your GitOps workflow. Running in your cluster allows you to benefit from automatic API management without any additional work from your developers.

is available in several different flavors, including open source
