How to deploy your Tyk stack in Kubernetes on AWS


Howdy folks! Let’s look at how easy it is to deploy your Tyk stack on AWS’ Elastic Kubernetes Service (EKS) and publicly expose the Tyk gateway and dashboard as ingresses through Nginx’s ingress controller.

Note: you can limit third-party service dependencies by using Tyk Operator. Tyk Operator offers an ingress controller for your k8s cluster, which dynamically manages ApiDefinition resources for you per the ingress spec and can be a drop-in replacement for a standard Kubernetes ingress.
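To give a rough idea of what that looks like, here’s a minimal sketch of a standard Ingress resource handed to Tyk Operator, assuming the `tyk` ingress class annotation that the operator watches for; the service name, port and hostname are hypothetical, and the exact set of annotations may differ, so check the Tyk Operator docs:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ingress
  annotations:
    # hand this ingress to Tyk Operator instead of a standard controller
    kubernetes.io/ingress.class: tyk
spec:
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /httpbin
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 8000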

You can also use any other ingress controller in place of Nginx.

Sounds fun? Let’s get started!

The first thing to do is create a cluster in EKS.
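If you don’t already have one, a convenient way to create a cluster is with the eksctl CLI; this is just a sketch, and the cluster name, region and node count below are placeholders:

$ eksctl create cluster --name my-tyk-cluster --region us-east-1 --nodes 2

Once the cluster is up, we need to configure our kubectl to point to the k8s cluster in AWS: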

$ aws eks update-kubeconfig --region <region> --name <cluster-name>

Now that we are connected to our cluster, let’s test the connectivity with:

$ kubectl get nodes

We should get back some information letting us know that the `x` nodes in our cluster have been created.
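The output will look something along these lines (node names, count and versions here are illustrative):

NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-12-34.ec2.internal   Ready    <none>   3m    v1.27.9-eks
ip-192-168-56-78.ec2.internal   Ready    <none>   3m    v1.27.9-eks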

Now we’re going to install Tyk in our cluster.

To start, add the helm repo that we’ll be using to deploy Tyk:

$ helm repo add tyk-helm https://helm.tyk.io/public/helm/charts/
$ helm repo update

Great, now we’ll create our `tyk` namespace.

$ kubectl create namespace tyk

Now let’s install Tyk in our namespace using the tyk-pro helm chart. To do so, let’s first get our Redis and Mongo dependencies installed.

$ helm install redis tyk-helm/simple-redis -n tyk
$ helm install mongo tyk-helm/simple-mongodb -n tyk

Before we install tyk-pro, we need to set some custom values. To export the chart’s configurable options, run this:

$ helm show values tyk-helm/tyk-pro > values.yaml

Open the `values.yaml` that you just created using your favourite code editor. For the self-managed chart, we need to set our Tyk Dashboard license key under the `dash.license` field.
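In `values.yaml` that section would look something like this (the value below is a placeholder for your own key):

dash:
  license: "<your-tyk-dashboard-license-key>"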

Next, we need to annotate the ingresses appropriately. Navigate to the ingress field in both the gateway and dashboard definitions, enable ingress, and set the annotation to reference the Nginx ingress controller that we’re going to use. Here’s what our gateway definition looks like:

ingress:
  enabled: true
  # specify your ingress controller class name below
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: tyk-gw.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []

Note that we are modifying the host value for the ingress to match the gateway hostname in `values.yaml`, denoted as `gateway.hostName`. In our `values.yaml`, that field looks like this:
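gateway:
  hostName: tyk-gw.local

Here’s what our dashboard definition looks like: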

ingress:
  enabled: true
  # specify your ingress controller class name below
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: tyk-dashboard.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []

Awesome – now let’s install the ingress controller in our namespace. We’re going to be using an Nginx ingress controller, so we’ll need to add the nginx repo to helm.

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update

We can now install the ingress controller in our namespace.

$ helm install my-release nginx-stable/nginx-ingress -n tyk

Perfect – almost done now. Go ahead and install tyk-pro with:

$ helm install tyk-pro tyk-helm/tyk-pro -f values.yaml --debug --wait -n tyk

Once that’s completed, let’s run the following command to see our functioning ingresses:

$ kubectl get ingresses -n tyk
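For example (the resource names and the generated ELB address here are illustrative):

NAME                CLASS    HOSTS                 ADDRESS                                               PORTS   AGE
dashboard-tyk-pro   <none>   tyk-dashboard.local   a1b2c3d4e5f6-123456789.us-east-1.elb.amazonaws.com   80      2m
gateway-tyk-pro     <none>   tyk-gw.local          a1b2c3d4e5f6-123456789.us-east-1.elb.amazonaws.com   80      2m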

You’ll notice that the dashboard and gateway are running as ingresses now.

If you’d like to access this cluster locally to make sure it’s working, you’ll need to resolve the hostname that AWS generates for your services to an external IP. A simple ping to that address will give you the IP that we’ll map the dashboard and gateway hostnames to.
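For example, using the illustrative ELB address from above:

$ ping a1b2c3d4e5f6-123456789.us-east-1.elb.amazonaws.com
PING a1b2c3d4e5f6-123456789.us-east-1.elb.amazonaws.com (3.214.10.20) 56(84) bytes of data.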

To finish, simply append the IP address, followed by the hostname, to the end of your `/etc/hosts` file so that your machine knows which IP your ingress hostnames resolve to.
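With the illustrative IP from the ping above, the entries would look like this:

3.214.10.20 tyk-gw.local
3.214.10.20 tyk-dashboard.local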


Once you save the `/etc/hosts` file, you will be able to visit http://tyk-dashboard.local/ to view your Tyk install on EKS!

Nice one!

 

A version of Tyk Operator is available within the open-source repository, but it has been archived and will be unmaintained. The latest release of Tyk Operator will be available exclusively to paying customers.

