
Container Service for Kubernetes:Deploy Ingresses in a high-reliability architecture

Last Updated:Jun 06, 2024

An Ingress is a set of rules that authorize external access to Services within a Kubernetes cluster. Ingresses provide Layer 7 load balancing. You can configure Ingresses to specify the URLs, Server Load Balancer (SLB) instances, Secure Sockets Layer (SSL) connections, and name-based virtual hosts that allow external access. The high reliability of Ingresses is important because Ingresses manage external access to Services within a cluster. This topic describes how to deploy Ingresses in a high-reliability architecture for a Container Service for Kubernetes (ACK) cluster.
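As a concrete illustration of such rules, the following is a minimal Ingress manifest sketch. The host and Service names are hypothetical; replace them with the values used in your cluster.

```yaml
# Minimal Ingress sketch: routes HTTP traffic for a name-based virtual
# host (demo.example.com, hypothetical) to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress            # hypothetical name
spec:
  rules:
  - host: demo.example.com      # name-based virtual host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service  # hypothetical backend Service
            port:
              number: 80
```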


High-reliability deployment architecture

A multi-replica deployment architecture is a common solution to provide high reliability and eliminate single points of failure (SPOFs). In ACK clusters, Ingresses are deployed across multiple nodes to ensure the high reliability of the access layer. Because Ingresses manage external access to Services within a cluster, we recommend that you use exclusive Ingress nodes to prevent applications and Ingresses from competing for resources.

[Figure: access layer composed of multiple exclusive Ingress nodes in front of the backend applications]

The access layer in the preceding figure is composed of multiple exclusive Ingress nodes. You can also increase the number of Ingress nodes based on the traffic volume to the backend applications. If your cluster size is small, you can deploy Ingresses and applications together. In this case, we recommend that you isolate resources and restrict resource consumption.
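If you deploy Ingresses and applications together, resource consumption can be restricted by setting resource requests and limits on the controller Deployment. The following is a sketch of such a spec fragment; the container name and the resource values are illustrative, so adjust them to your controller and traffic volume.

```yaml
# Fragment of the nginx-ingress-controller Deployment spec
# (illustrative values): requests reserve capacity for the controller,
# limits cap how much it can consume on a shared node.
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller   # container name may differ in your cluster
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 4Gi
```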

Query the pods of the NGINX Ingress controller and the public IP address of the SLB instance

After an ACK cluster is created, the NGINX Ingress controller is automatically deployed with two pods. An Internet-facing SLB instance is also created to serve as the frontend load balancer.

  1. To query the pods that are provisioned for the NGINX Ingress controller, run the following command:

    kubectl -n kube-system get pod | grep nginx-ingress-controller

    Expected output:

    nginx-ingress-controller-8648ddc696-2bshk                    1/1     Running   0          3h
    nginx-ingress-controller-8648ddc696-jvbs9                    1/1     Running   0          3h
  2. To query the public IP address of the frontend SLB instance, run the following command:

    kubectl -n kube-system get svc nginx-ingress-lb

    Expected output:

    NAME               TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
    nginx-ingress-lb   LoadBalancer   172.XX.XX.XX   118.XX.XX.XX   80:32XXX/TCP,443:31XXX/TCP   21d

Deploy an Ingress access layer with high reliability

As the cluster size grows, you must scale out the access layer to maintain its performance and availability. You can use one of the following methods to expand the access layer:

  • Method 1: Increase the number of pods

    You can increase the number of pods that are provisioned for the Deployment of the NGINX Ingress controller to expand the access layer.

    1. Run the following command to increase the number of pods to 3:

      kubectl -n kube-system scale --replicas=3 deployment/nginx-ingress-controller

      Expected output:

      deployment.extensions/nginx-ingress-controller scaled
    2. Run the following command to query the pods that are provisioned for the NGINX Ingress controller:

      kubectl -n kube-system get pod | grep nginx-ingress-controller

      Expected output:

      nginx-ingress-controller-8648ddc696-2bshk                    1/1     Running   0          3h
      nginx-ingress-controller-8648ddc696-jvbs9                    1/1     Running   0          3h
      nginx-ingress-controller-8648ddc696-xqmfn                    1/1     Running   0          33s
  • Method 2: Deploy Ingresses on nodes with higher specifications

    You can add labels to nodes with higher specifications. Then, pods provisioned for the NGINX Ingress controller are scheduled to these nodes.

    1. To query information about the nodes in the cluster, run the following command:

      kubectl get node

      Expected output:

      NAME                                 STATUS   ROLES    AGE   VERSION
      cn-hangzhou.i-bp11bcmsna8d4bp****   Ready    master   21d   v1.11.5
      cn-hangzhou.i-bp12h6biv9bg24l****   Ready    <none>   21d   v1.11.5
      cn-hangzhou.i-bp12h6biv9bg24l****   Ready    <none>   21d   v1.11.5
      cn-hangzhou.i-bp12h6biv9bg24l****   Ready    <none>   21d   v1.11.5
      cn-hangzhou.i-bp181pofzyyksie****   Ready    master   21d   v1.11.5
      cn-hangzhou.i-bp1cbsg6rf3580z****   Ready    master   21d   v1.11.5
    2. Run the following commands to add the node-role.kubernetes.io/ingress="true" label to the cn-hangzhou.i-bp12h6biv9bg24lmdc2o and cn-hangzhou.i-bp12h6biv9bg24lmdc2p nodes:

      kubectl label nodes cn-hangzhou.i-bp12h6biv9bg24lmdc2o node-role.kubernetes.io/ingress="true"

      Expected output:

      node/cn-hangzhou.i-bp12h6biv9bg24lmdc2o labeled
      kubectl label nodes cn-hangzhou.i-bp12h6biv9bg24lmdc2p node-role.kubernetes.io/ingress="true"

      Expected output:

      node/cn-hangzhou.i-bp12h6biv9bg24lmdc2p labeled
      Note
      • The number of nodes to which the label is added must be equal to or greater than the number of pods that are provisioned for the NGINX Ingress controller. This ensures that each pod runs on an exclusive node.

      • If the ROLES column displays <none> in the returned results, the related node is a worker node.

      • We recommend that you add the label to and deploy Ingresses on worker nodes.

    3. Run the following command to update the related Deployment by adding the nodeSelector field:

      kubectl -n kube-system patch deployment nginx-ingress-controller -p '{"spec": {"template": {"spec": {"nodeSelector": {"node-role.kubernetes.io/ingress": "true"}}}}}'

      Expected output:

      deployment.extensions/nginx-ingress-controller patched
    4. Run the following command to query the pods that are provisioned for the NGINX Ingress controller. The output indicates that the pods are scheduled to the nodes to which the node-role.kubernetes.io/ingress="true" label is added.

      kubectl -n kube-system get pod -o wide | grep nginx-ingress-controller

      Expected output:

      nginx-ingress-controller-8648ddc696-2bshk     1/1     Running   0     3h    172.16.XX.XX    cn-hangzhou.i-bp****   <none>
      nginx-ingress-controller-8648ddc696-jvbs9     1/1     Running   0     3h    172.16.XX.XX    cn-hangzhou.i-bp****   <none>
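
The nodeSelector field schedules the controller pods onto the labeled nodes, but other workloads can still be scheduled there. To make the Ingress nodes truly exclusive, as recommended earlier, you could additionally taint the labeled nodes, for example with `kubectl taint nodes <node-name> node-role.kubernetes.io/ingress=true:NoSchedule`, and add a matching toleration to the controller Deployment. The following spec fragment is a sketch that assumes the taint key and value mirror the label used above.

```yaml
# Deployment spec fragment: keep the controller on the labeled nodes and
# tolerate the NoSchedule taint that keeps other workloads off them
# (taint key/value assumed to mirror the node label used above).
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/ingress: "true"
      tolerations:
      - key: node-role.kubernetes.io/ingress
        operator: Equal
        value: "true"
        effect: NoSchedule
```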