Container Service for Kubernetes:[Deprecated] Use pod security policies

Last Updated: Mar 26, 2026
Warning

This topic applies only to ACK clusters running Kubernetes earlier than 1.25, because PodSecurityPolicy (PSP) was removed in Kubernetes 1.25. For clusters running Kubernetes 1.25 or later, use Pod Security Admission instead. To migrate from PSP, see Migrate from PodSecurityPolicy to the built-in PodSecurity admission controller.

The PSP admission controller validates pod creation and update requests against rules that you define. Requests that do not match the rules are rejected with an error.

Prerequisites

Before you begin, ensure that you have:

  • An ACK dedicated cluster or ACK managed cluster that runs a Kubernetes version earlier than 1.25.

  • A kubectl client that is connected to the cluster.

Default ACK pod security policy

Standard ACK dedicated clusters and standard ACK managed clusters running Kubernetes 1.16.6 or later have the PSP admission controller enabled by default. A policy named ack.privileged is pre-configured.

ack.privileged grants all authenticated users full, unrestricted access, which is equivalent to having the PSP admission controller disabled. It exists so that your workloads run normally out of the box, and it is the baseline that you replace when you enforce stricter policies.
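To check which policy admitted a given pod, you can read the kubernetes.io/psp annotation that the PSP admission controller sets on the pods it validates. Replace <pod-name> with an actual pod name:

kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'

If the pod was admitted by the default policy, the command prints ack.privileged.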

Run the following command to inspect the default policy:

kubectl get psp ack.privileged

Expected output:

NAME             PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
ack.privileged   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *

For full details, run:

kubectl describe psp ack.privileged

Expected output:

Name:  ack.privileged

Settings:
  Allow Privileged:                       true
  Allow Privilege Escalation:             true
  Default Add Capabilities:               <none>
  Required Drop Capabilities:             <none>
  Allowed Capabilities:                   *
  Allowed Volume Types:                   *
  Allow Host Network:                     true
  Allow Host Ports:                       0-65535
  Allow Host PID:                         true
  Allow Host IPC:                         true
  Read Only Root Filesystem:              false
  SELinux Context Strategy: RunAsAny
    User:                                 <none>
    Role:                                 <none>
    Type:                                 <none>
    Level:                                <none>
  Run As User Strategy: RunAsAny
    Ranges:                               <none>
  FSGroup Strategy: RunAsAny
    Ranges:                               <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:                               <none>

The complete YAML for the default policy, ClusterRole, and ClusterRoleBinding is as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ack.privileged
  annotations:
    kubernetes.io/description: 'privileged allows full unrestricted access to
      pod features, as if the PodSecurityPolicy controller was not enabled.'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ack:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - ack.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ack:podsecuritypolicy:authenticated
  annotations:
    kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ack:podsecuritypolicy:privileged
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated

Enforce a custom pod security policy

To apply a stricter policy, remove the default ClusterRoleBinding so that ack.privileged no longer applies to all authenticated users. Complete this in two stages to avoid locking out all pod creation.

Important

Never delete or modify the ack.privileged policy or the ack:podsecuritypolicy:privileged ClusterRole. These resources are required for the ACK cluster to function. Only the ClusterRoleBinding (ack:podsecuritypolicy:authenticated) should be removed.

Stage 1: Create your custom policy and RBAC binding

Before removing the default ClusterRoleBinding, create and bind a custom pod security policy. If you remove the ClusterRoleBinding first, no users, controllers, or service accounts will be able to create or update pods.
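For example, the following manifest is a minimal sketch of Stage 1: a restrictive policy that forbids privileged containers, privilege escalation, host namespaces, and running as root, together with the ClusterRole and ClusterRoleBinding that grant all authenticated users the use verb on the policy. The names psp.restricted and podsecuritypolicy:restricted are illustrative placeholders; adjust the spec to your own security requirements before you apply it with kubectl apply.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  # Drop all Linux capabilities unless a workload explicitly needs one.
  requiredDropCapabilities:
  - ALL
  # Allow only non-host volume types.
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: false

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podsecuritypolicy:restricted
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: podsecuritypolicy:restricted:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: podsecuritypolicy:restricted
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated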

Stage 2: Remove the default ClusterRoleBinding

After your custom policy and its Role-Based Access Control (RBAC) binding are in place, run the following command to remove the default ClusterRoleBinding:

cat <<EOF | kubectl delete -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ack:podsecuritypolicy:authenticated
  annotations:
    kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ack:podsecuritypolicy:privileged
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated
EOF
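To confirm that the binding is gone, run:

kubectl get clusterrolebinding ack:podsecuritypolicy:authenticated

The command should return a NotFound error. From this point on, pod requests are validated only against the policies that your custom RBAC bindings grant.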

Restore the default pod security policy

If the default ack.privileged policy or its ClusterRoleBinding is accidentally deleted, run the following command to restore them:

cat <<EOF | kubectl apply -f -
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: ack.privileged
  annotations:
    kubernetes.io/description: 'privileged allows full unrestricted access to
      pod features, as if the PodSecurityPolicy controller was not enabled.'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ack:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - ack.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ack:podsecuritypolicy:authenticated
  annotations:
    kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
  labels:
    kubernetes.io/cluster-service: "true"
    ack.alicloud.com/component: pod-security-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ack:podsecuritypolicy:privileged
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated
EOF
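To verify that the resources are back, run:

kubectl get psp ack.privileged
kubectl get clusterrolebinding ack:podsecuritypolicy:authenticated

Both commands should list the restored resources.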

FAQ

Pod creation fails with "no providers available to validate pod request"

The full error is no providers available to validate pod request or unable to validate against any pod security policy. This error occurs when the default ack.privileged policy or its RBAC binding has been deleted, so no policy is available to authorize the request.

Restore the policy by following the steps in Restore the default pod security policy.

Pod creation fails with "Forbidden: unsafe sysctl"

The full error looks like:

PodSecurityPolicy: unable to admit pod: [pod.spec.securityContext.sysctls[0]: Forbidden: unsafe sysctl "***" is not allowed]

Clusters do not allow unsafe sysctl parameters by default. To grant this permission for a specific workload, create a new pod security policy. Do not modify the preset ack.privileged policy or any resources whose names start with ack:podsecuritypolicy:.

Warning

Do not modify or delete the following preset resources. The ACK cluster depends on them, and unauthorized changes may cause cluster features to break or be automatically reset:

  • The pod security policy named ack.privileged

  • Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings whose names start with ack:podsecuritypolicy:

Step 1: Create a pod security policy that allows unsafe sysctl parameters.

Create a file named unsafe-sysctl-psp.yaml with the following content. Adjust allowedUnsafeSysctls as needed.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.allow-unsafe-sysctls
spec:
  allowedUnsafeSysctls:
  - '*'
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podsecuritypolicy:allow-unsafe-sysctls
rules:
- apiGroups:
  - policy
  resourceNames:
  - psp.allow-unsafe-sysctls
  resources:
  - podsecuritypolicies
  verbs:
  - use

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: podsecuritypolicy:allow-unsafe-sysctls:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: podsecuritypolicy:allow-unsafe-sysctls
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: system:authenticated

Step 2: Apply the policy.

kubectl create -f unsafe-sysctl-psp.yaml

Expected output:

podsecuritypolicy.policy/psp.allow-unsafe-sysctls created
clusterrole.rbac.authorization.k8s.io/podsecuritypolicy:allow-unsafe-sysctls created
clusterrolebinding.rbac.authorization.k8s.io/podsecuritypolicy:allow-unsafe-sysctls:authenticated created
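To confirm that the binding grants access to the new policy, you can impersonate a subject and check the use verb. The service account below is only an example:

kubectl auth can-i use podsecuritypolicy/psp.allow-unsafe-sysctls --as=system:serviceaccount:default:default

Expected output:

yes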

Step 3: Configure the node pool to allow unsafe sysctl parameters.

Customize the kubelet parameters for the node pool where the workload runs. See Supported custom kubelet parameters.
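For reference, the kubelet-level setting involved is allowedUnsafeSysctls. As a sketch, a node admits the unsafe sysctls used in the test pod below when its kubelet runs with a configuration fragment like the following; in ACK, you set this through the node pool's custom kubelet parameters rather than by editing the file directly. Note that net.ipv4.tcp_syncookies is in the safe set and needs no entry.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Unsafe sysctls that pods on this node may set. Glob patterns
# such as "net.*" are also accepted.
allowedUnsafeSysctls:
- "net.core.somaxconn"
- "net.ipv4.tcp_max_syn_backlog"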

Step 4: Verify with a test pod.

Deploy a test pod that uses unsafe sysctl parameters. If the kubelet is configured only on specific nodes (for example, a particular node pool), add a nodeSelector to ensure the pod schedules to those nodes.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
#  nodeSelector:
#    alibabacloud.com/nodepool-id: npd912756***  # Replace with the target node pool ID
  securityContext:
    sysctls:
    - name: net.ipv4.tcp_syncookies
      value: "1"
    - name: net.core.somaxconn
      value: "1024"
    - name: net.ipv4.tcp_max_syn_backlog
      value: "65536"
  containers:
  - name: test
    image: nginx
EOF

Expected output:

pod/sysctl-example created

If a SysctlForbidden event occurs while the pod is running, the kubelet on the scheduled node is not configured to allow unsafe sysctl parameters. Check and adjust the pod's nodeSelector to target a node where the kubelet is correctly configured.
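To inspect the pod's events for SysctlForbidden, you can run:

kubectl get events --field-selector involvedObject.name=sysctl-example

Running kubectl describe pod sysctl-example shows the same events.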
