A Kubernetes NetworkPolicy uses label selectors to define network policies at the pod level. ACK GlobalNetworkPolicy extends this functionality to the cluster level, allowing you to manage network policies for an entire cluster. This topic describes how to use an ACK GlobalNetworkPolicy to implement granular network security policies for your cluster.
Prerequisites
Before you begin, ensure that you have:
- An ACK Pro managed cluster that uses the Terway network plugin. For more information, see Create an ACK managed cluster.
- Terway network plugin version 1.9.4 or later with the NetworkPolicy feature enabled. For more information, see Enable network policies.
- Nodes that run the Terway network plugin. Exclusive Elastic Network Interface (ENI) mode, virtual nodes, hybrid cloud nodes, and other non-Alibaba Cloud nodes are not supported.
Step 1: Install Poseidon
Poseidon is the container network policy component that enables standard Kubernetes NetworkPolicy and ACK GlobalNetworkPolicy support.
Install Poseidon version 0.5.1 or later:
1. Log on to the ACK console. In the left navigation pane, click Clusters.
2. On the Clusters page, find the target cluster and click its name. In the left navigation pane, click Add-ons.
3. On the Add-ons page, click the Networking tab. On the Poseidon card, click Install.
4. In the Install Poseidon dialog box, select Enable ACK NetworkPolicy, then click OK.
Step 2: Define a GlobalNetworkPolicy
The definition and usage of an ACK GlobalNetworkPolicy are similar to those of a Kubernetes NetworkPolicy. By default, its rules apply to all nodes and pods in the cluster unless specified otherwise.
GlobalNetworkPolicy follows these behavioral rules:
- Additive evaluation: When multiple policies select the same pod, the allowed traffic is the union of all matching policies. Policies never conflict; they combine additively.
- Implicit isolation: A pod becomes isolated for a given traffic direction (ingress or egress) only when at least one policy with that policyType selects it. Unselected pods remain fully open.
- Bidirectional matching: For a connection to succeed, both the egress policy on the source pod and the ingress policy on the destination pod must allow it.
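As an illustration of additive evaluation, consider two policies that select the same pods (a minimal sketch; the names and labels are hypothetical):

```yaml
# Policy 1: allows ingress on TCP 80 to pods labeled app: web (hypothetical labels)
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-http          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
---
# Policy 2: allows ingress on TCP 443 to the same pods
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-https         # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 443
```

With both policies applied, pods labeled `app: web` accept ingress on both port 80 and port 443; neither policy restricts what the other allows.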
Syntax
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: example
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector:
    matchLabels:
      foo: bar
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
```

Field reference
| Field | Description | Default |
|---|---|---|
| `podSelector` | Required. Selects pods by label. Set to `{}` to select all pods. | All pods |
| `namespaceSelector` | Selects namespaces by label. When omitted or set to `null`, selects all namespaces. Combined with `podSelector` using AND logic. | All namespaces |
| `policyTypes` | Traffic directions the policy applies to. Valid values: `Ingress`, `Egress`, or both. | Based on whether `ingress`/`egress` rules are present |
| `ingress` | Ingress rules. An empty array `[]` denies all inbound traffic. | No restriction |
| `egress` | Egress rules. An empty array `[]` denies all outbound traffic. | No restriction |
If you set podSelector to {} and omit namespaceSelector, the policy applies to every pod in the cluster. Exercise caution when configuring a GlobalNetworkPolicy with this scope.
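If you want cluster-wide pod coverage with a smaller blast radius, one option is to scope the policy to labeled namespaces instead (a sketch; the `environment: production` label is illustrative):

```yaml
spec:
  podSelector: {}              # all pods...
  namespaceSelector:
    matchLabels:
      environment: production  # ...but only in namespaces that carry this label
```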
Ingress and egress rules
The ingress and egress fields define which traffic sources and destinations are allowed. Use from in ingress rules and to in egress rules to specify the allowed scope.
Each rule entry supports two selector types:
| Selector | Description |
|---|---|
| `ipBlock` | Matches traffic by CIDR block. Use this for traffic outside the cluster. |
| `podSelector` | Matches pods by label. Use this for traffic inside the cluster. |
Supported protocols: TCP, UDP, and SCTP. If not specified, the protocol defaults to TCP.
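For example, a rule's `ports` list can mix an explicit protocol with the TCP default (a minimal fragment; the `role: client` label is illustrative):

```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        role: client   # illustrative label
  ports:
  - port: 8080         # protocol omitted, so this defaults to TCP
  - protocol: UDP
    port: 53
```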
How YAML structure changes selector logic
The YAML structure of your selectors determines whether they combine with AND or OR logic. This is a common source of misconfiguration.
AND (single from/to entry) -- Both selectors must match:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: backend
    podSelector:
      matchLabels:
        role: api
```

This allows traffic only from pods labeled `role: api` in namespaces labeled `team: backend`.
OR (separate from/to entries) -- Either selector can match:
```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: backend
  - podSelector:
      matchLabels:
        role: api
```

This allows traffic from any pod in namespaces labeled `team: backend`, OR from pods labeled `role: api` in any namespace.
ipBlock and podSelector are mutually exclusive
ipBlock cannot appear in the same list entry as podSelector or namespaceSelector. Separate them into different entries.
Correct:
```yaml
ingress:
- from:
  - ipBlock:
      cidr: "192.168.0.0/16"
  - podSelector:
      matchLabels:
        key: value
  ports:
  - protocol: TCP
    port: 80
```

Incorrect:
```yaml
ingress:
- from:
  - ipBlock:
      cidr: "192.168.0.0/16"
    podSelector:   # Cannot coexist with ipBlock in the same entry
      matchLabels:
        key: value
  ports:
  - protocol: TCP
    port: 443
```

Full ingress and egress example
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: example
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          foo: bar
      podSelector:
        matchLabels:
          foo: bar
    ports:
    - protocol: TCP
      port: 443
  - from:
    - ipBlock:
        cidr: "172.16.0.0/16"
        except:
        - "172.16.1.0/24"
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          foo: bar
      podSelector:
        matchLabels:
          foo: bar
  - to:
    - ipBlock:
        cidr: "172.16.0.0/16"
        except:
        - "172.16.1.0/24"
```

Usage examples
Deny all traffic for specific pods
Apply this policy to block all inbound and outbound traffic for pods labeled foo: bar:
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector: null
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
```

Deny all ingress traffic cluster-wide
Isolate all pods in the cluster from inbound traffic. Pair this with more specific allow policies to implement a default-deny security model:
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all-ingress
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
  - Ingress
```

This policy does not affect egress traffic. To deny both directions, add Egress to policyTypes and include an empty egress: [] field.
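Combining both directions, a full default-deny policy might look like this (a sketch that follows the pattern above; the name is hypothetical):

```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all   # hypothetical name
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
```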
Deny all egress traffic cluster-wide
Block all outbound traffic from every pod in the cluster:
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-all-egress
spec:
  podSelector: {}
  namespaceSelector: null
  policyTypes:
  - Egress
  egress: []
```

Allow specific pods to access DNS
After applying a deny-all egress policy, allow pods labeled foo: bar to reach the cluster DNS service:
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels:
      foo: bar
  namespaceSelector: null
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
```

Allow ingress from a specific CIDR block
Allow inbound TCP traffic on port 443 from a specific IP range:
```yaml
apiVersion: network.alibabacloud.com/v1beta2
kind: GlobalNetworkPolicy
metadata:
  name: allow-external-https
spec:
  podSelector:
    matchLabels:
      app: web
  namespaceSelector: null
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: "10.0.0.0/8"
    ports:
    - protocol: TCP
      port: 443
```

Verify a GlobalNetworkPolicy
After applying a policy, verify that it works as expected.
1. Apply the policy:

   ```shell
   kubectl apply -f <your-policy-file>.yaml
   ```

2. Confirm that the policy is created:

   ```shell
   kubectl get globalnetworkpolicies
   ```

   Expected output:

   ```
   NAME           AGE
   default-deny   5s
   ```

3. Test connectivity from an affected pod. For example, to verify that a deny-all policy blocks traffic:

   ```shell
   # Start a temporary test pod
   kubectl run test-client --rm -it --image=busybox --labels="foo=bar" -- /bin/sh
   # Inside the pod, try to reach an external address
   wget --timeout=3 -q -O- http://example.com
   ```

   If the policy is working, the request times out.

To verify that an allow rule works, label the test pod to match the allow policy and repeat the connectivity test.
Limits
The following limits apply per cluster:
| Resource | Limit |
|---|---|
| GlobalNetworkPolicy resources | Less than 100 |
| `ingress` + `egress` rules per policy | Less than 20 |
| `ports` per rule | Less than 10 |