Deploy multiple independent Nginx Ingress controllers in a cluster to isolate traffic across different services, environments, or network boundaries. Each controller manages its own load balancer and its own set of Ingress rules, providing full fault and configuration isolation.
How it works
Each controller is identified by a unique IngressClass name. When you create an Ingress resource, set the spec.ingressClassName field to the target controller's IngressClass name. Only the matching controller processes that resource — all others ignore it. This mechanism enforces traffic isolation between controllers.
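For example, the following Ingress is a minimal sketch in which the IngressClass name, host, and backend Service name are placeholders. Only the controller that owns the `internal-nginx` IngressClass reconciles it; every other controller ignores it.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  # Only the controller registered for this IngressClass processes this resource.
  # "internal-nginx" is a placeholder; use your controller's IngressClass name.
  ingressClassName: internal-nginx
  rules:
  - host: example.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc
            port:
              number: 80
```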
A typical scenario is public and private network isolation: one controller is bound to a public load balancer, and a second controller is bound to a private load balancer that is reachable only within the VPC.
IngressClass-based isolation relies on correct use of the ingressClassName field; it is not enforced by RBAC. Any user with permission to create Ingress resources in the cluster can target any IngressClass. If multiple teams share a cluster, ensure each team knows which IngressClass belongs to its controller, and restrict Ingress creation permissions accordingly.
Helm controllers vs. Component Management controllers
By default, ACK deploys an Nginx Ingress Controller through the Component Management page. To serve different traffic domains, deploy additional controllers as Helm applications.
Controllers installed as Helm applications differ from those deployed through Component Management:
| Capability | Component Management | Helm |
|---|---|---|
| Grayscale upgrade | Supported | Not supported |
| Logs and monitoring | Supported | Not supported |
| Cluster inspection | Supported | Not supported |
| Lifecycle management | Managed by ACK | Self-managed (upgrades, configuration changes, troubleshooting) |
Limitations
Requires Kubernetes 1.22 or later.
Components for Kubernetes 1.20 and earlier are no longer maintained. See Announcement on the end of maintenance for Nginx Ingress Controller v1.2 and earlier. To upgrade, see Manually upgrade an ACK cluster.
Prerequisites
Before you begin, make sure that you have:
- An ACK cluster running Kubernetes 1.22 or later.
- Access to the ACK console and permission to install Helm applications.
- (Optional) A default Nginx Ingress Controller already deployed through Component Management. If it is not yet set up, see Expose services with Nginx Ingress.
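To confirm that your cluster meets the version requirement, you can check the Kubernetes version reported by the nodes:

```bash
# The VERSION column must show v1.22 or later on every node.
kubectl get nodes
```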
Deploy a new Ingress controller
1. On the ACK Clusters page, click the name of your cluster. In the left navigation pane, choose Applications > Helm.
2. Click Create and install ack-ingress-nginx-v1. Configure the following key parameters and leave all other settings at their defaults.

   Important: When deploying multiple controllers, `controller.ingressClassResource.name` and `controller.ingressClassResource.controllerValue` must each be unique within the cluster to avoid IngressClass conflicts. Neither value can reuse the defaults reserved by the default controller (`nginx` and `k8s.io/ingress-nginx`).

   | Parameter | Description |
   |---|---|
   | Application Name | Enter a name that is unique within the cluster. The name is used as a prefix for auto-generated Service resources: `<Application Name>-ack-ingress-nginx-v1-controller` for public services, or `<Application Name>-ack-ingress-nginx-v1-controller-internal` for private services. The total name length must not exceed 63 characters. |
   | Chart | Search for and select `ack-ingress-nginx-v1`. The older `ack-ingress-nginx` chart is no longer maintained. |
   | Chart Version | Kubernetes 1.24 or later: use chart version 4.0.22 or later. Kubernetes 1.22: use chart versions 4.0.16 to 4.0.21. |
   | Chart Parameters | By default, the chart deploys an Nginx Ingress Controller as a Deployment with 2 replicas and automatically creates a public LoadBalancer-type Service backed by a Classic Load Balancer (CLB) instance. For the full parameter list, see Parameter reference. Example for a private network controller: set `controller.service.external.enabled` to `false` and `controller.service.internal.enabled` to `true`. |

3. After the Helm release is created, go to the Helm page. In the Basic Information area, note the namespace. In the Resources area, note the IngressClass name and Service name; you will need these in the next steps.
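If you prefer to keep the overrides in a file, the console parameters above map onto a Helm values file like the following. This is a minimal sketch for a private network controller; the IngressClass name and controller value are placeholders that you must replace with your own unique values.

```yaml
# values.yaml (sketch) for a second, private-network controller.
controller:
  ingressClassResource:
    # Placeholders: must be unique in the cluster and must not reuse
    # the defaults "nginx" and "k8s.io/ingress-nginx".
    name: internal-nginx
    controllerValue: k8s.io/internal-ingress-nginx
  service:
    external:
      enabled: false   # do not create a public SLB
    internal:
      enabled: true    # create a private SLB reachable only inside the VPC
```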
Verify traffic isolation
This section walks through a public and private network separation scenario to verify that each controller handles only its assigned Ingress resources.
- Default controller: the Nginx Ingress Controller deployed through Component Management, bound to a public SLB instance.
- New controller: the controller deployed in the previous section, bound to a private SLB instance accessible only within the VPC.
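Before testing, you can confirm that both controllers are registered by listing the IngressClasses in the cluster:

```bash
# Expect one IngressClass per controller, for example "nginx" for the
# default controller plus the name you chose for the new controller.
kubectl get ingressclass
```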
Step 1: Deploy a test application
Create an `nginx.yaml` file with the following content.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: NodePort
```

Deploy the application.
```bash
kubectl apply -f nginx.yaml
```

The expected output is similar to:

```
deployment.apps/nginx created
service/nginx created
```
Step 2: Create Ingress rules targeting the new controller
Create an `ingress.yaml` file with the following content.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  # Replace with the IngressClass name of your new controller (controller.ingressClassResource.name).
  ingressClassName: "<YOUR_INGRESS_CLASS>"
  rules:
  # The following domain name is for testing only. Replace it with your actual domain name in production.
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: nginx
            port:
              number: 80
        pathType: ImplementationSpecific
```

Create the Ingress resource.
```bash
kubectl apply -f ingress.yaml
```

The expected output is similar to:

```
ingress.networking.k8s.io/nginx created
```
Step 3: Test access
Get the IP addresses of both controllers.

Default public controller:

```bash
PUBLIC_IP=$(kubectl get svc -n kube-system nginx-ingress-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Public Ingress IP: $PUBLIC_IP"
```

New private controller:

```bash
# Replace <YourNamespace> with the namespace of the new controller (for example, default).
# Replace <YourChartName> with the Helm release name of the new controller.
INTERNAL_IP=$(kubectl get svc -n <YourNamespace> <YourChartName>-ack-ingress-nginx-v1-controller-internal -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Internal Ingress IP: $INTERNAL_IP"
```

From a machine inside the VPC, send a request through the private controller. A `200` response confirms the private controller is correctly proxying traffic.

```bash
curl -o /dev/null -s -w "%{http_code}\n" -H "Host: foo.bar.com" http://$INTERNAL_IP
```

Expected output:

```
200
```

Send the same request through the public controller. A `404 Not Found` response confirms the public controller did not process this Ingress rule, which means traffic isolation is working.

```bash
curl -H "Host: foo.bar.com" http://$PUBLIC_IP
```

Expected output:

```
404 Not Found
```
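(Optional) After verifying isolation, remove the test resources created in this section:

```bash
kubectl delete -f ingress.yaml -f nginx.yaml
```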
Apply in production
Before using the new controller in production, apply the following settings.
High availability
Set the following parameters in the Helm chart:
- `controller.replicaCount`: 2 or more.
- `controller.resources.requests` and `controller.resources.limits`: set values appropriate for your workload.
- `controller.affinity`: add `podAntiAffinity` rules to spread pods across different nodes.
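The following values sketch applies all three settings. The resource amounts and the pod label used in the anti-affinity rule are assumptions; size the resources for your workload and verify the actual labels on your controller pods.

```yaml
controller:
  replicaCount: 2
  resources:
    requests:
      cpu: 500m        # assumed baseline; adjust for your traffic
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            # Assumed controller pod label; verify with --show-labels.
            app.kubernetes.io/component: controller
        topologyKey: kubernetes.io/hostname
```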
Monitoring and alerting
Set controller.metrics.enabled: true and controller.metrics.serviceMonitor.enabled: true to export metrics to Prometheus. Monitor request latency and error rates (4xx/5xx), and configure alerting rules.
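In Helm values form, the two settings look like this:

```yaml
controller:
  metrics:
    enabled: true        # expose a Prometheus metrics endpoint
    serviceMonitor:
      enabled: true      # create a ServiceMonitor for automatic discovery
```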
Performance
For low-latency workloads, use NLB instead of CLB:
- Private network: `controller.service.internal.loadBalancerClass: "alibabacloud.com/nlb"`
- Public network: `controller.service.loadBalancerClass: "alibabacloud.com/nlb"`
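As nested Helm values, the private network variant is a sketch like the following:

```yaml
controller:
  service:
    internal:
      enabled: true
      # Use an NLB instead of the default CLB for the private Service.
      loadBalancerClass: "alibabacloud.com/nlb"
```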
Version maintenance
Track the Nginx Ingress Controller release notes and apply security patches promptly.
Security
Use network policies to restrict the backend services each controller can access.
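As a sketch, the following NetworkPolicy limits the test application from Step 1 so that only controller pods can reach it on port 80. The controller namespace and pod label below are assumptions; verify the labels on your own controller pods first. Network policies also require a network plugin that enforces them.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller-only
spec:
  # Select the backend pods from Step 1.
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow traffic only from controller pods. The labels below are
    # assumptions; verify them with `kubectl get pods --show-labels`.
    - namespaceSelector: {}   # controller may run in another namespace
      podSelector:
        matchLabels:
          app.kubernetes.io/component: controller
    ports:
    - port: 80
      protocol: TCP
```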
Parameter reference
The following table describes the main parameters for the ack-ingress-nginx-v1 Helm chart.
| Parameter | Description |
|---|---|
| `controller.image.repository` | Container image registry address for the Nginx Ingress Controller. |
| `controller.image.tag` | Image version of the Nginx Ingress Controller. |
| `controller.ingressClassResource.name` | Name of the IngressClass resource. Must be unique within the cluster and cannot be `nginx` (reserved by the default controller). |
| `controller.ingressClassResource.controllerValue` | Controller class identifier. Must be unique within the cluster and cannot be `k8s.io/ingress-nginx` (reserved by the default controller). |
| `controller.replicaCount` | Number of controller pod replicas. Set to 2 or more for high availability. |
| `controller.service.enabled` | Whether to create a LoadBalancer-type Service for the controller. |
| `controller.service.external.enabled` | If `true`, creates a public-facing SLB Service. |
| `controller.service.internal.enabled` | If `true`, creates a private SLB Service accessible only within the VPC. |
| `controller.kind` | Workload type: `Deployment` or `DaemonSet`. |
| `controller.electionID` | Identifier for leader election among controller replicas. Must be unique when deploying multiple controllers in the same namespace. |
| `controller.metrics.enabled` | If `true`, exposes a Prometheus metrics endpoint. |
| `controller.metrics.serviceMonitor.enabled` | If `true`, creates a ServiceMonitor resource for automatic Prometheus discovery. Requires `controller.metrics.enabled: true`. |
| `controller.service.loadBalancerClass` | Load balancer type for the public network Service: `"alibabacloud.com/clb"` (default, CLB) or `"alibabacloud.com/nlb"` (NLB). |
| `controller.service.internal.loadBalancerClass` | Load balancer type for the private network Service: `"alibabacloud.com/clb"` (default, CLB) or `"alibabacloud.com/nlb"` (NLB). |
What's next
Configure the network type of an Nginx Ingress Controller: set up public and private network access on the default Component Management controller. This approach uses a single set of controller pods for all traffic and does not provide fault or configuration isolation.