This article walks through the steps required to monitor the Apache APISIX Ingress Controller with Prometheus and shows what some of the collected metrics look like.
Whether in the days of monolithic applications or in today's cloud-native era, monitoring has always played a very important role. A good monitoring system helps engineers quickly understand the state of the services running in production, and quickly locate problems or raise warnings when anomalies occur.
The Apache APISIX Ingress Controller has added support for Prometheus metrics in recent releases. In this article, we will show how to use Prometheus to collect metrics from the Ingress Controller and then visualize them with Grafana.
Step 1: Install Ingress Controller
First, we deploy Apache APISIX, etcd, and the Ingress Controller to a local Kubernetes cluster via Helm.
helm repo add apisix https://charts.apiseven.com
helm repo update
kubectl create namespace ingress-apisix
helm install apisix apisix/apisix --namespace ingress-apisix \
  --set ingress-controller.enabled=true
After installation, wait until all services are up and running. You can check their status with the following command.
kubectl get all -n ingress-apisix
Step 2: Enable the Prometheus Plugin
In most cases, monitoring involves more than just the Ingress Controller component. If you need to monitor Apache APISIX at the same time, you can create an ApisixClusterConfig resource such as the one sketched below.
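A minimal sketch of such a resource, assuming the ApisixClusterConfig CRD shipped with recent Ingress Controller releases; the apiVersion (here apisix.apache.org/v2) may differ depending on the version you installed. It enables the prometheus plugin for the APISIX cluster named default.

apiVersion: apisix.apache.org/v2
kind: ApisixClusterConfig
metadata:
  name: default
spec:
  monitoring:
    prometheus:
      enable: true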
Step 3: Install Prometheus and Grafana
Next, we will run Prometheus through the Prometheus Operator, so the Prometheus Operator needs to be installed first.
Note: The following command will also install Grafana.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace prometheus
helm install prometheus prometheus-community/kube-prometheus-stack -n prometheus
After installation, you need to prepare the RBAC configuration for the Prometheus instance. This configuration gives Prometheus the ability to obtain Pod and Service resources from the Kubernetes API Server.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-apisix
  namespace: ingress-apisix
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-apisix
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-apisix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-apisix
subjects:
- kind: ServiceAccount
  name: ingress-apisix
  namespace: ingress-apisix
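Assuming the manifests above are saved to a file, for example prometheus-rbac.yaml (a placeholder name), they can be applied with:

kubectl apply -f prometheus-rbac.yaml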
After completing the above configuration, a PodMonitor needs to be defined; you can also use a ServiceMonitor instead, depending on your requirements. The following PodMonitor resource focuses on collecting metrics from the Ingress Controller Pod.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: ingress-apisix
  namespace: ingress-apisix
  labels:
    use-for: ingress-apisix
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-controller
  podMetricsEndpoints:
  - port: http
Note: The reason for not using a ServiceMonitor here is that the http port is not exposed at the Service level.
Finally, the Prometheus instance can be defined with the following resource.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: ingress-apisix
  namespace: ingress-apisix
spec:
  serviceAccountName: ingress-apisix
  podMonitorSelector:
    matchLabels:
      use-for: ingress-apisix
  resources:
    requests:
      memory: 400Mi
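Once the Prometheus resource is created, the Prometheus Operator starts the instance and exposes it through the prometheus-operated Service it manages. As a quick check, you can port-forward that Service (assuming the ingress-apisix namespace used above) and open the Prometheus UI at http://localhost:9090 to confirm that the Ingress Controller target is being scraped:

kubectl port-forward -n ingress-apisix svc/prometheus-operated 9090:9090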