
Container Service for Kubernetes:Use ACK GlobalNetworkPolicy

Last Updated:Mar 03, 2026

A Kubernetes NetworkPolicy uses label selectors to define network policies at the pod level. ACK GlobalNetworkPolicy extends this functionality to the cluster level, allowing you to manage network policies for an entire cluster. This topic describes how to use an ACK GlobalNetworkPolicy to implement granular network security policies for your cluster.

Prerequisites

Before you begin, ensure that an ACK cluster is created and that you can connect to the cluster by using kubectl.

Step 1: Install Poseidon

Poseidon is the container network policy component that enables standard Kubernetes NetworkPolicy and ACK GlobalNetworkPolicy support.

Install Poseidon version 0.5.1 or later:

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, find the target cluster and click its name. In the left navigation pane, click Add-ons.

  3. On the Add-ons page, click the Networking tab. On the Poseidon card, click Install.

  4. In the Install Poseidon dialog box, select Enable ACK NetworkPolicy, then click OK.

Step 2: Define a GlobalNetworkPolicy

The definition and usage of an ACK GlobalNetworkPolicy are similar to those of a Kubernetes NetworkPolicy. By default, its rules apply to all nodes and pods in the cluster unless specified otherwise.

GlobalNetworkPolicy follows these behavioral rules:

  • Additive evaluation: When multiple policies select the same pod, the allowed traffic is the union of all matching policies. Policies never conflict -- they combine additively.

  • Implicit isolation: A pod becomes isolated for a given traffic direction (ingress or egress) only when at least one policy with that policyType selects it. Unselected pods remain fully open.

  • Bidirectional matching: For a connection to succeed, both the egress policy on the source pod and the ingress policy on the destination pod must allow it.
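
To illustrate additive evaluation, the two hypothetical policies below both select pods labeled app: web. A pod matched by both policies may receive ingress traffic on port 80 or port 443, the union of the two allow rules. The names and labels here are illustrative examples, not values from this topic.

```yaml
# Policy 1: allows ingress on TCP 80 to pods labeled app: web.
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-http          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 80
---
# Policy 2: allows ingress on TCP 443 to the same pods.
# Combined result: ports 80 and 443 are both allowed; all other
# ingress traffic to app: web pods is denied.
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-https         # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 443
```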

Syntax

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: example
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector:
    matchLabels:
      foo: bar
  policyTypes:
    - Ingress
    - Egress
  ingress: []
  egress: []

Field reference

  • podSelector -- Required. Selects pods by label. Set to {} to select all pods. Default: all pods.

  • namespaceSelector -- Selects namespaces by label. When omitted or set to null, selects all namespaces. Combined with podSelector using AND logic. Default: all namespaces.

  • policyTypes -- Traffic directions the policy applies to. Valid values: Ingress, Egress, or both. Default: based on whether ingress/egress rules are present.

  • ingress -- Ingress rules. An empty array [] denies all inbound traffic. Default: no restriction.

  • egress -- Egress rules. An empty array [] denies all outbound traffic. Default: no restriction.

Important

If you set podSelector to {} and omit namespaceSelector, the policy applies to every pod in the cluster. Exercise caution when configuring a GlobalNetworkPolicy with this scope.

Ingress and egress rules

The ingress and egress fields define which traffic sources and destinations are allowed. Use from in ingress rules and to in egress rules to specify the allowed scope.

Each rule entry supports two selector types:

  • ipBlock -- Matches traffic by CIDR block. Use this for traffic outside the cluster.

  • podSelector -- Matches pods by label. Use this for traffic inside the cluster.

Supported protocols: TCP, UDP, and SCTP. If not specified, the protocol defaults to TCP.
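
A ports list within a rule can mix protocols. As a sketch (the label and port numbers are illustrative examples), the fragment below allows TCP 80 and UDP 53 in a single ingress rule:

```yaml
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: client      # hypothetical label
    ports:
      - protocol: TCP
        port: 80
      - protocol: UDP        # state the protocol explicitly; omitting it defaults to TCP
        port: 53
```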

How YAML structure changes selector logic

The YAML structure of your selectors determines whether they combine with AND or OR logic. This is a common source of misconfiguration.

AND (single from/to entry) -- Both selectors must match:

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            team: backend
        podSelector:
          matchLabels:
            role: api

This allows traffic only from pods labeled role: api in namespaces labeled team: backend.

OR (separate from/to entries) -- Either selector can match:

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            team: backend
      - podSelector:
          matchLabels:
            role: api

This allows traffic from any pod in namespaces labeled team: backend, OR from pods labeled role: api in any namespace.

ipBlock and podSelector are mutually exclusive

ipBlock cannot appear in the same list entry as podSelector or namespaceSelector. Separate them into different entries.

Correct:

ingress:
  - from:
      - ipBlock:
          cidr: "192.168.0.0/16"
      - podSelector:
          matchLabels:
            key: value
    ports:
      - protocol: TCP
        port: 80

Incorrect:

ingress:
  - from:
      - ipBlock:
          cidr: "192.168.0.0/16"
        podSelector:              # Cannot coexist with ipBlock in the same entry
          matchLabels:
            key: value
    ports:
      - protocol: TCP
        port: 443

Full ingress and egress example

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: example
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              foo: bar
          podSelector:
            matchLabels:
              foo: bar
      ports:
        - protocol: TCP
          port: 443
    - from:
        - ipBlock:
            cidr: "172.16.0.0/16"
            except:
              - "172.16.1.0/24"
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              foo: bar
          podSelector:
            matchLabels:
              foo: bar
    - to:
        - ipBlock:
            cidr: "172.16.0.0/16"
            except:
              - "172.16.1.0/24"

Usage examples

Deny all traffic for specific pods

Apply this policy to block all inbound and outbound traffic for pods labeled foo: bar:

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector: null
  policyTypes:
    - Ingress
    - Egress
  ingress: []
  egress: []

Deny all ingress traffic cluster-wide

Isolate all pods in the cluster from inbound traffic. Pair this with more specific allow policies to implement a default-deny security model:

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all-ingress
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
    - Ingress
Note

This policy does not affect egress traffic. To deny both directions, add Egress to policyTypes and include an empty egress: [] field.
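
For reference, a combined policy that denies both directions might look like the following sketch, which merges the cluster-wide ingress and egress deny examples in this topic (the policy name is an illustrative example):

```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all     # hypothetical name
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
    - Ingress
    - Egress
  ingress: []                # empty array denies all inbound traffic
  egress: []                 # empty array denies all outbound traffic
```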

Deny all egress traffic cluster-wide

Block all outbound traffic from every pod in the cluster:

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all-egress
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
    - Egress
  egress: []

Allow specific pods to access DNS

After applying a deny-all egress policy, allow pods labeled foo: bar to reach the cluster DNS service:

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector: null
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
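
The example above allows traffic to the DNS pods on all ports. To narrow the rule to DNS traffic only, you could add a ports list, assuming the cluster DNS service listens on the standard port 53:

```yaml
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP      # standard DNS queries
          port: 53
        - protocol: TCP      # large responses and zone transfers fall back to TCP
          port: 53
```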

Allow ingress from a specific CIDR block

Allow inbound TCP traffic on port 443 from a specific IP range:

apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-external-https
spec:
  podSelector:
    matchLabels:
      app: web
  namespaceSelector: null
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: "10.0.0.0/8"
      ports:
        - protocol: TCP
          port: 443

Verify a GlobalNetworkPolicy

After applying a policy, verify that it works as expected.

  1. Apply the policy:

       kubectl apply -f <your-policy-file>.yaml
  2. Confirm that the policy is created:

       kubectl get globalnetworkpolicies

     Expected output:

       NAME                        AGE
       default-deny                5s
  3. Test connectivity from an affected pod. For example, to verify that a deny-all policy blocks traffic:

       # Start a temporary test pod
       kubectl run test-client --rm -it --image=busybox --labels="foo=bar" -- /bin/sh

       # Inside the pod, try to reach an external address
       wget --timeout=3 -q -O- http://example.com

     If the policy is working, the request times out.
  4. To verify that an allow rule works, label the test pod to match the allow policy and repeat the connectivity test.

Limits

The following limits apply per cluster:

  • GlobalNetworkPolicy resources: less than 100

  • ingress + egress rules per policy: less than 20

  • ports per rule: less than 10
