Security Center supports connecting self-managed Kubernetes clusters for centralized threat detection and risk management. This topic describes how to onboard a self-managed Kubernetes cluster to Security Center and, optionally, enable log-based threat detection.
Edition requirements
| Billing method | Required edition | Server-level requirement |
|---|---|---|
| Subscription | Ultimate | Protection edition must be set to Ultimate — see Attach a protection edition to a server |
| Pay-as-you-go | Host and Container Security enabled | Protection level must be set to Host and Container Protection — see Attach a server protection level |
If your current edition does not meet the requirement, upgrade Security Center or purchase the service before continuing.
Region limitations
Region restrictions apply only to clusters deployed in a virtual private cloud (VPC):
VPC-based clusters: The cluster must reside in China (Hangzhou), China (Beijing), China (Shanghai), China (Shenzhen), or China (Hong Kong).
Internet-connected clusters: No region restrictions apply.
Prerequisites
Before you begin, make sure you have:
A Kubernetes cluster running on the target server
Docker installed on the server
(For cluster exposure analysis) Network configuration completed based on your deployment type — see Manage clusters and images
Configure traffic forwarding for hybrid cloud deployments
If your cluster is deployed on a hybrid cloud and is not accessible over the Internet, configure port forwarding on an Elastic Compute Service (ECS) instance to route traffic to the on-premises server running the cluster's API server. The cluster cannot communicate with Security Center if this forwarding is not configured.
The examples below forward traffic from Port A on ECS instance 10.0.XX.XX to Port B on the on-premises server 192.168.XX.XX.
CentOS 7 — firewall-cmd
```shell
firewall-cmd --permanent --add-forward-port=port=<Port A>:proto=tcp:toaddr=<192.168.XX.XX>:toport=<Port B>
```
CentOS 7 — iptables
```shell
# Enable IP forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward
# Add the forwarding rule
iptables -t nat -A PREROUTING -p tcp --dport <Port A> -j DNAT --to-destination <192.168.XX.XX>:<Port B>
```
Windows — netsh
```shell
netsh interface portproxy add v4tov4 listenport=<Port A> listenaddress=* connectaddress=<192.168.XX.XX> connectport=<Port B> protocol=tcp
```
Add Security Center IP addresses to the whitelist
If access control policies are in place on your cluster, add the Security Center IP addresses for your region to the whitelist. The cluster cannot communicate with Security Center if these addresses are blocked.
| Region | Public IP address | Private IP address |
|---|---|---|
| China (Hangzhou) | 47.96.166.214 | 100.104.12.64/26 |
| China (Shanghai) | 139.224.15.48, 101.132.180.26, 47.100.18.171, 47.100.0.176, 139.224.8.64, 101.132.70.106, 101.132.156.228, 106.15.36.12, 139.196.168.125, 47.101.178.223, and 47.101.220.176 | 100.104.43.0/26 |
| China (Qingdao) | 47.104.111.68 | 100.104.87.192/26 |
| China (Beijing) | 47.95.202.245 | 100.104.114.192/26 |
| China (Zhangjiakou) | 39.99.229.195 | 100.104.187.64/26 |
| China (Hohhot) | 39.104.147.68 | 100.104.36.0/26 |
| China (Shenzhen) | 120.78.64.225 | 100.104.250.64/26 |
| China (Guangzhou) | 8.134.118.184 | 100.104.111.0/26 |
| China (Hong Kong) | 8.218.59.176 | 100.104.130.128/26 |
| Japan (Tokyo) | 47.74.24.20 | 100.104.69.0/26 |
| Singapore | 8.219.240.137 | 100.104.67.64/26 |
| US (Silicon Valley) | 47.254.39.224 | 100.104.145.64/26 |
| US (Virginia) | 47.252.4.238 | 100.104.36.0/26 |
| Germany (Frankfurt) | 47.254.158.71 | 172.16.0.0/20 |
| UK (London) | 8.208.14.12 | 172.16.0.0/20 |
| Indonesia (Jakarta) | 149.129.238.99 | 100.104.193.128/26 |
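If you maintain whitelist rules in scripts, it can help to verify that an observed source address actually falls inside one of the private /26 blocks above before alerting on it. The following is a minimal bash sketch; the helper names `ip2int` and `in_cidr` and the sample addresses are illustrative, not part of any Security Center tooling:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Succeed if the address in $1 falls inside the CIDR block in $2.
in_cidr() {
  local ip net bits mask
  ip=$(ip2int "$1")
  net=$(ip2int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( (ip & mask) == (net & mask) ))
}

# Example: the China (Hangzhou) private block from the table above.
in_cidr 100.104.12.70 100.104.12.64/26 && echo "in range"
```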
Add a self-managed Kubernetes cluster to Security Center
Log on to the Security Center console. In the top navigation bar, select the region of the asset: China or Outside China.
In the left-side navigation pane, choose Assets > Container.
On the Cluster tab, click Self-built cluster access.
In the Self-built cluster management panel, click Self-built cluster access. In the panel that appears, configure the cluster parameters and click Generate Command.
| Parameter | Description |
|---|---|
| Cluster name | A name for the cluster. Example: text-001. |
| Expiration Time | The expiration time of the generated onboarding command. |
| Group | The group to assign the cluster to. Set this to the group of the server on which the cluster runs. |
| Service Provider | The provider of the server on which the cluster runs. |
(Optional) In the Enable Log Collection section, choose whether to enable log-based threat detection. When enabled, Security Center collects additional audit logs for deeper risk analysis. This requires Logtail components and cluster audit settings to be configured first — see Enable log-based threat detection.
Log on to the server running the cluster. Create a YAML file named after your cluster (for example, text-001.yaml), paste the generated command into the file, and run:
```shell
kubectl apply -f text-001.yaml
```
After the command completes, the cluster appears in the cluster list on the Cluster tab.
Replace text-001 in both the filename and the command with the value you entered for Cluster name in step 4.
Add master nodes and tainted nodes
The generated command does not schedule DaemonSet pods on master nodes or tainted nodes by default. To include these nodes, add tolerations to the pod template in the YAML file before running kubectl apply.
For master nodes — add the following under spec > template > spec:
```yaml
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
```
This toleration allows DaemonSet pods to be scheduled on nodes with the node-role.kubernetes.io/master:NoSchedule taint, which adds the master nodes to Security Center as part of the cluster.
For other tainted nodes — apply the same toleration pattern, matching the taint key and effect for each node type.
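For example, a worker node carrying a hypothetical taint dedicated=gpu:NoSchedule (the key, value, and effect here are placeholders; list your nodes' actual taints with kubectl describe node) could be tolerated alongside the master taint like this:

```yaml
spec:
  template:
    spec:
      tolerations:
        # Master taint, as above.
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        # Hypothetical taint dedicated=gpu:NoSchedule on GPU worker nodes.
        - key: dedicated
          operator: Equal
          value: "gpu"
          effect: NoSchedule
```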
Enable log-based threat detection
Log-based threat detection is available for clusters running Kubernetes 1.16 or later. When enabled, Security Center detects high-risk operations and attack behavior by analyzing API server audit logs.
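Before enabling the feature, confirm the cluster meets the version floor (kubectl version shows the server version). As a rough sketch, the comparison can be scripted like this; the helper name `supports_audit_detection` is illustrative:

```shell
# Succeed if a Kubernetes version string (e.g. "v1.22.3") is 1.16 or later.
supports_audit_detection() {
  local ver=${1#v} major minor
  major=${ver%%.*}
  minor=${ver#*.}; minor=${minor%%.*}
  (( major > 1 || (major == 1 && minor >= 16) ))
}

# In a live cluster, feed this the server version reported by kubectl.
supports_audit_detection v1.22.3 && echo "log-based detection supported"
```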
Step 1. Install Logtail
Follow the Install Logtail instructions in Install Logtail components in a self-managed Kubernetes cluster.
Step 2. Enable cluster auditing
The following steps are based on Enable cluster auditing for registered clusters.
Create an ACK One registered cluster and add the self-managed Kubernetes cluster to it. See Create ACK One registered clusters.
On each master node, update /etc/kubernetes/audit-policy.yaml with the following policy. For clusters running Kubernetes earlier than 1.24, set apiVersion to audit.k8s.io/v1beta1. For 1.24 and later, use audit.k8s.io/v1. See (Discontinued) Kubernetes 1.24.
```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # The following requests were manually identified as high-volume and low-risk,
  # so drop them.
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - group: "" # core
        resources: ["endpoints", "services"]
  - level: None
    users: ["system:unsecured"]
    namespaces: ["kube-system"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["configmaps"]
  - level: None
    users: ["kubelet"] # legacy kubelet identity
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["nodes"]
  - level: None
    userGroups: ["system:nodes"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["nodes"]
  - level: None
    users:
      - system:kube-controller-manager
      - system:kube-scheduler
      - system:serviceaccount:kube-system:endpoint-controller
    verbs: ["get", "update"]
    namespaces: ["kube-system"]
    resources:
      - group: "" # core
        resources: ["endpoints"]
  - level: None
    users: ["system:apiserver"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["namespaces"]
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - /healthz*
      - /version
      - /swagger*
  # Don't log events requests.
  - level: None
    resources:
      - group: "" # core
        resources: ["events"]
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    resources:
      - group: "" # core
        resources: ["secrets", "configmaps"]
      - group: authentication.k8s.io
        resources: ["tokenreviews"]
  # Get responses can be large; skip them.
  - level: Request
    verbs: ["get", "list", "watch"]
    resources:
      - group: "" # core
      - group: "admissionregistration.k8s.io"
      - group: "apps"
      - group: "authentication.k8s.io"
      - group: "authorization.k8s.io"
      - group: "autoscaling"
      - group: "batch"
      - group: "certificates.k8s.io"
      - group: "extensions"
      - group: "networking.k8s.io"
      - group: "policy"
      - group: "rbac.authorization.k8s.io"
      - group: "settings.k8s.io"
      - group: "storage.k8s.io"
  # Default level for known APIs
  - level: RequestResponse
    resources:
      - group: "" # core
      - group: "admissionregistration.k8s.io"
      - group: "apps"
      - group: "authentication.k8s.io"
      - group: "authorization.k8s.io"
      - group: "autoscaling"
      - group: "batch"
      - group: "certificates.k8s.io"
      - group: "extensions"
      - group: "networking.k8s.io"
      - group: "policy"
      - group: "rbac.authorization.k8s.io"
      - group: "settings.k8s.io"
      - group: "storage.k8s.io"
  # Default level for all other requests.
  - level: Metadata
```
On each master node, update /etc/kubernetes/manifests/kube-apiserver.yaml:
Add the following --audit-log-* flags to the command section:
```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --audit-log-maxbackup=10
        - --audit-log-maxsize=100
        - --audit-log-path=/var/log/kubernetes/kubernetes.audit
        - --audit-log-maxage=30
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        ...
```
Add the following environment variables to the env section. Replace ${cluster_id} with your cluster ID — find it on the Cluster tab in the Security Center console.
```yaml
env:
  - name: aliyun_logs_audit-${cluster_id}
    value: /var/log/kubernetes/kubernetes.audit
  - name: aliyun_logs_audit-${cluster_id}_tags
    value: audit=apiserver
  - name: aliyun_logs_audit-${cluster_id}_product
    value: k8s-audit
  - name: aliyun_logs_audit-${cluster_id}_jsonfile
    value: "true"
```
Mount the audit log directory and policy file into the kube-apiserver pods:
```yaml
volumeMounts:
  - mountPath: /var/log/kubernetes
    name: k8s-audit
  - mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit-policy
    readOnly: true
volumes:
  - hostPath:
      path: /var/log/kubernetes
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: FileOrCreate
    name: audit-policy
```
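After the kube-apiserver pods restart with these settings, each line written to /var/log/kubernetes/kubernetes.audit is a JSON-encoded audit event. As a rough format check, a sketch like the following can confirm a line looks like an audit event; the sample line here is hypothetical, since the real file lives on the master node:

```shell
# A hypothetical sample line, shaped like the JSON events kube-apiserver
# writes to the audit log path configured above.
sample='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","verb":"get","stage":"ResponseComplete"}'

# On a master node you would read real lines instead, for example:
#   tail -n 1 /var/log/kubernetes/kubernetes.audit
echo "$sample" | grep -q '"kind":"Event"' && echo "audit event format OK"
```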
Step 3. Verify log collection
Log on to the Simple Log Service console.
Click the name of the project created during Logtail installation.
Confirm that audit logs are flowing into the expected Logstore.
Step 4. Enable threat detection in Security Center
Log on to the Security Center console. In the top navigation bar, select China or Outside China.
In the left-side navigation pane, choose Assets > Container.
On the Cluster tab, click Self-built cluster access.
Find the cluster and click Edit in the Actions column.
On the Enable Log Collection tab, select Enable Kubernetes Log Reporting to Detect Threats, configure the following parameters, and click Save.
| Parameter | Description | Example |
|---|---|---|
| Region of Log Audit Service | The region where you want to store logs | — |
| Project of Log Audit Service | The Simple Log Service project created during Logtail installation | k8s-log-custom-sd89ehdq |
| Logstore of Log Audit Service | The Logstore automatically created during Logtail installation | audit-027b007a7dd11967a9f7e2449d8dc497 |