Container Service for Kubernetes:Deploy multiple Ingress controllers for traffic isolation

Last Updated: Mar 26, 2026

Deploy multiple independent Nginx Ingress controllers in a cluster to isolate traffic across different services, environments, or network boundaries. Each controller manages its own load balancer and Ingress rules independently, providing full fault and configuration isolation.

How it works

Each controller is identified by a unique IngressClass name. When you create an Ingress resource, set the spec.ingressClassName field to the target controller's IngressClass name. Only the matching controller processes that resource — all others ignore it. This mechanism enforces traffic isolation between controllers.
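For example, assuming a second controller was installed with controller.ingressClassResource.name set to nginx-internal and controller.ingressClassResource.controllerValue set to k8s.io/ingress-nginx-internal (illustrative values, not defaults reserved by ACK), the matching IngressClass and an Ingress routed only by that controller would look like this:

```yaml
# Illustrative IngressClass for a second controller. The name and
# controller value are examples; use the values you set in the Helm chart.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal
---
# An Ingress processed only by the controller that owns "nginx-internal".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx-internal
  rules:
  - host: internal.example.com   # example domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-app
            port:
              number: 80
```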

The following diagram shows a public and private network isolation scenario.

Note

IngressClass-based isolation relies on correct use of the ingressClassName field, not on RBAC enforcement. Any user with permission to create Ingress resources in the cluster can target any IngressClass. If multiple teams share a cluster, ensure each team understands which IngressClass belongs to their controller and restrict Ingress creation permissions accordingly.
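One way to tighten this, sketched below under the assumption that each team works in its own namespace (the namespace and group names are illustrative), is to grant Ingress permissions only per namespace:

```yaml
# Illustrative Role limiting Ingress management to the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-editor
  namespace: team-a
rules:
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-editor-binding
  namespace: team-a
subjects:
- kind: Group
  name: team-a-developers   # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ingress-editor
  apiGroup: rbac.authorization.k8s.io
```

Note that RBAC alone cannot restrict which ingressClassName value an Ingress may reference; field-level enforcement would additionally require an admission policy.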

Helm controllers vs. Component Management controllers

By default, ACK deploys a Nginx Ingress Controller through the Component Management page. Deploy additional controllers as Helm applications to serve different traffic domains.

Important

Controllers installed as Helm applications differ from those deployed through Component Management:

Capability              Component Management    Helm
Grayscale upgrade       Supported               Not supported
Logs and monitoring     Supported               Not supported
Cluster inspection      Supported               Not supported
Lifecycle management    Managed by ACK          Self-managed (upgrades, configuration changes, troubleshooting)

Prerequisites

Before you begin, make sure that you have:

  • An ACK cluster running Kubernetes 1.22 or later

  • Access to the ACK console and permission to install Helm applications

  • (Optional) A default Nginx Ingress Controller already deployed via Component Management — see Expose services with Nginx Ingress if not yet set up

Deploy a new Ingress controller

  1. On the ACK Clusters page, click the name of your cluster. In the left navigation pane, click Applications > Helm.

  2. Click Create and install ack-ingress-nginx-v1. Configure the following key parameters. Leave all other settings at their defaults.

    Important

    When deploying multiple controllers, controller.ingressClassResource.name and controller.ingressClassResource.controllerValue must each be unique within the cluster to avoid IngressClass conflicts. Neither value can reuse the defaults reserved by the default controller (nginx and k8s.io/ingress-nginx).

    • Application Name: Enter a name that is unique within the cluster. This name is used as a prefix for auto-generated Service resources. The resulting Service name follows the format <Application Name>-ack-ingress-nginx-v1-controller for public services, or <Application Name>-ack-ingress-nginx-v1-controller-internal for private services. The total name length must not exceed 63 characters.

    • Chart: Search for and select ack-ingress-nginx-v1. The older ack-ingress-nginx chart is no longer maintained.

    • Chart Version: For Kubernetes 1.24 or later, use chart version 4.0.22 or later. For Kubernetes 1.22, use chart versions 4.0.16 to 4.0.21.

    • Chart Parameters: By default, the chart deploys a Nginx Ingress Controller with 2 replicas as a Deployment and automatically creates a public LoadBalancer-type Service backed by a Classic Load Balancer (CLB) instance. For the full parameter list, see Parameter reference. Example for a private network controller: set controller.service.external.enabled to false and controller.service.internal.enabled to true.
  3. After the Helm release is created, go to the Helm page. In the Basic Information area, note the namespace. In the Resources area, note the IngressClass name and Service name — you will need these in the next steps.
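The 63-character limit on the generated Service names (noted in the parameters above) can be checked locally before installing. APP_NAME below is a placeholder for your chosen application name:

```shell
# Verify that the Service names the chart will generate stay within the
# 63-character Kubernetes object name limit. APP_NAME is a placeholder.
APP_NAME="my-internal-ingress"

# Name formats per the parameter table above.
PUBLIC_SVC="${APP_NAME}-ack-ingress-nginx-v1-controller"
INTERNAL_SVC="${APP_NAME}-ack-ingress-nginx-v1-controller-internal"

for name in "$PUBLIC_SVC" "$INTERNAL_SVC"; do
  if [ "${#name}" -gt 63 ]; then
    echo "TOO LONG (${#name} chars): $name"
  else
    echo "OK (${#name} chars): $name"
  fi
done
```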

Verify traffic isolation

This section walks through a public and private network separation scenario to verify that each controller handles only its assigned Ingress resources.

  • Default controller: The Nginx Ingress Controller deployed through Component Management, bound to a public SLB instance.

  • New controller: The controller deployed in the previous section, bound to a private SLB instance accessible only within the VPC.

Step 1: Deploy a test application

  1. Create an nginx.yaml file with the following content.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: nginx
      template:
        metadata:
          labels:
            run: nginx
        spec:
          containers:
          - image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            imagePullPolicy: Always
            name: nginx
            ports:
            - containerPort: 80
              protocol: TCP
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        run: nginx
      sessionAffinity: None
      type: NodePort
  2. Deploy the application.

    kubectl apply -f nginx.yaml

    The expected output is similar to:

    deployment.apps/nginx created
    service/nginx created

Step 2: Create Ingress rules targeting the new controller

  1. Create an ingress.yaml file with the following content.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx
    spec:
      # Replace with the IngressClass name of your new controller (controller.ingressClassResource.name).
      ingressClassName: "<YOUR_INGRESS_CLASS>"
      rules:
      # The following domain name is for testing only. Replace it with your actual domain name in production.
      - host: foo.bar.com
        http:
          paths:
          - path: /
            backend:
              service:
                name: nginx
                port:
                  number: 80
            pathType: ImplementationSpecific
  2. Create the Ingress resource.

    kubectl apply -f ingress.yaml

    The expected output is similar to:

    ingress.networking.k8s.io/nginx created

Step 3: Test access

  1. Get the IP addresses of both controllers. Default public controller:

    PUBLIC_IP=$(kubectl get svc -n kube-system nginx-ingress-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "Public Ingress IP: $PUBLIC_IP"

    New private controller:

    # Replace <YourNamespace> with the namespace of the new controller (for example, default).
    # Replace <YourChartName> with the Helm release name of the new controller.
    INTERNAL_IP=$(kubectl get svc -n <YourNamespace> <YourChartName>-ack-ingress-nginx-v1-controller-internal -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "Internal Ingress IP: $INTERNAL_IP"
  2. From a machine inside the VPC, send a request through the private controller. A 200 response confirms the private controller is correctly proxying traffic.

    curl -o /dev/null -s -w "%{http_code}\n" -H "Host: foo.bar.com" http://$INTERNAL_IP

    Expected output:

    200
  3. Send the same request through the public controller. A 404 Not Found response confirms the public controller did not process this Ingress rule — traffic isolation is working.

    curl -H "Host: foo.bar.com" http://$PUBLIC_IP

    Expected output:

    404 Not Found

Apply in production

Before using the new controller in production, apply the following settings.

High availability

Set the following parameters in the Helm chart:

  • controller.replicaCount: 2 or more

  • controller.resources.requests and controller.resources.limits: appropriate values for your workload

  • controller.affinity: add podAntiAffinity rules to spread pods across different nodes
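Expressed as a Helm values sketch, the settings above might look like the following. The resource sizes are placeholders to adjust for your workload, and the anti-affinity label selector assumes the chart's standard app.kubernetes.io labels:

```yaml
controller:
  replicaCount: 2
  resources:
    requests:
      cpu: 500m        # placeholder; size for your traffic
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: controller   # assumes the chart's default labels
        topologyKey: kubernetes.io/hostname   # spread replicas across nodes
```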

Monitoring and alerting

Set controller.metrics.enabled: true and controller.metrics.serviceMonitor.enabled: true to export metrics to Prometheus. Monitor request latency and error rates (4xx/5xx), and configure alerting rules.
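In Helm values form, these two settings are:

```yaml
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true   # requires the ServiceMonitor CRD (Prometheus Operator) in the cluster
```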

Performance

For low-latency workloads, use NLB instead of CLB:

  • Private network: controller.service.internal.loadBalancerClass: "alibabacloud.com/nlb"

  • Public network: controller.service.loadBalancerClass: "alibabacloud.com/nlb"
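As a values sketch combining both (this mirrors the parameters listed above; enable only the Services you actually need):

```yaml
controller:
  service:
    # Public network Service backed by NLB
    loadBalancerClass: "alibabacloud.com/nlb"
    internal:
      enabled: true
      # Private network Service backed by NLB
      loadBalancerClass: "alibabacloud.com/nlb"
```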

Parameter reference

The following table describes the main parameters for the ack-ingress-nginx-v1 Helm chart.

  • controller.image.repository: Container image registry address for the Nginx Ingress Controller.
  • controller.image.tag: Image version of the Nginx Ingress Controller.
  • controller.ingressClassResource.name: Name of the IngressClass resource. Must be unique within the cluster and cannot be nginx (reserved by the default controller).
  • controller.ingressClassResource.controllerValue: Controller class identifier. Must be unique within the cluster and cannot be k8s.io/ingress-nginx (reserved by the default controller).
  • controller.replicaCount: Number of controller pod replicas. Set to 2 or more for high availability.
  • controller.service.enabled: Whether to create a LoadBalancer-type Service for the controller.
  • controller.service.external.enabled: If true, creates a public-facing SLB Service.
  • controller.service.internal.enabled: If true, creates a private SLB Service accessible only within the VPC.
  • controller.kind: Workload type: Deployment or DaemonSet.
  • controller.electionID: Identifier for leader election among controller replicas. Must be unique when deploying multiple controllers in the same namespace.
  • controller.metrics.enabled: If true, exposes a Prometheus metrics endpoint.
  • controller.metrics.serviceMonitor.enabled: If true, creates a ServiceMonitor resource for automatic Prometheus discovery. Requires controller.metrics.enabled: true.
  • controller.service.loadBalancerClass: Load balancer type for the public network Service: "alibabacloud.com/clb" (default, CLB) or "alibabacloud.com/nlb" (NLB).
  • controller.service.internal.loadBalancerClass: Load balancer type for the private network Service: "alibabacloud.com/clb" (default, CLB) or "alibabacloud.com/nlb" (NLB).
