The API server records every Kubernetes API request and response as an audit log. For registered clusters — where an external Kubernetes cluster is connected to ACK — cluster administrators can route these logs to Alibaba Cloud Log Service and use them to answer questions such as:
- What happened, and when?
- Who initiated the request?
- Which resource was affected?
This makes it possible to trace the full history of cluster operations and investigate security incidents during O&M.
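Each entry in the audit log is a JSON event that answers these questions directly. As a minimal sketch, the following parses a hypothetical (abbreviated) event in the audit.k8s.io shape and pulls out the who/when/what fields; the field names follow the upstream audit event schema, but the values here are made up:

```python
import json

# A hypothetical, abbreviated audit event as the API server would write it
# at the Metadata level. Field names follow the audit.k8s.io event schema.
raw = '''{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "stage": "ResponseComplete",
  "requestReceivedTimestamp": "2023-05-01T08:00:00.000000Z",
  "user": {"username": "kubernetes-admin", "groups": ["system:masters"]},
  "verb": "delete",
  "objectRef": {"resource": "pods", "namespace": "default", "name": "nginx"}
}'''

event = json.loads(raw)
who = event["user"]["username"]              # who initiated the request
when = event["requestReceivedTimestamp"]     # when it happened
what = f'{event["verb"]} {event["objectRef"]["resource"]}/{event["objectRef"]["name"]}'
print(who, when, what)
```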
Prerequisites
Before you begin, make sure that you have:
- A registered cluster with an external Kubernetes cluster connected to it. For setup instructions, see Create a registered cluster in the ACK console.
Step 1: Configure the audit policy on master nodes
Log on to a master node and edit /etc/kubernetes/audit-policy.yaml using the template below. Repeat this step on all other master nodes.
The apiVersion value depends on your Kubernetes version:
- Kubernetes earlier than 1.24: use audit.k8s.io/v1beta1
- Kubernetes 1.24 and later: use audit.k8s.io/v1

For details, see Kubernetes 1.24 release notes.
Audit levels
The policy template uses four audit levels. Understanding these levels helps you decide whether to adjust the rules for your environment:
| Level | What is logged |
|---|---|
| None | Nothing. Events matching this rule are not logged. |
| Metadata | Request metadata only: user, timestamp, resource, and verb. Request and response bodies are not logged. |
| Request | Metadata and the request body. The response body is not logged. Does not apply to non-resource requests. |
| RequestResponse | Metadata, request body, and response body. Does not apply to non-resource requests. |
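The four levels are strictly ordered: each level records everything the previous one does plus one more part of the request. A small illustrative sketch (not the API server's implementation) of that ordering:

```python
# Illustrative only: which parts of an event each audit level retains,
# mirroring the table above.
LEVELS = ["None", "Metadata", "Request", "RequestResponse"]

def recorded_parts(level: str) -> set[str]:
    """Return the parts of an event that are kept at a given audit level."""
    rank = LEVELS.index(level)
    parts = set()
    if rank >= 1:
        parts.add("metadata")        # user, timestamp, resource, verb
    if rank >= 2:
        parts.add("request-body")
    if rank >= 3:
        parts.add("response-body")
    return parts

print(recorded_parts("Request"))
```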
Audit policy template
```yaml
apiVersion: audit.k8s.io/v1beta1 # Use audit.k8s.io/v1 for Kubernetes >= 1.24
kind: Policy
# Suppress events in the RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # High-volume, low-risk requests — not logged.
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - group: "" # core
        resources: ["endpoints", "services"]
  - level: None
    users: ["system:unsecured"]
    namespaces: ["kube-system"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["configmaps"]
  - level: None
    users: ["kubelet"] # legacy kubelet identity
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["nodes"]
  - level: None
    userGroups: ["system:nodes"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["nodes"]
  - level: None
    users:
      - system:kube-controller-manager
      - system:kube-scheduler
      - system:serviceaccount:kube-system:endpoint-controller
    verbs: ["get", "update"]
    namespaces: ["kube-system"]
    resources:
      - group: "" # core
        resources: ["endpoints"]
  - level: None
    users: ["system:apiserver"]
    verbs: ["get"]
    resources:
      - group: "" # core
        resources: ["namespaces"]
  # Read-only URLs — not logged.
  - level: None
    nonResourceURLs:
      - /healthz*
      - /version
      - /swagger*
  # Events — not logged.
  - level: None
    resources:
      - group: "" # core
        resources: ["events"]
  # Secrets, ConfigMaps, and TokenReviews contain sensitive or binary data —
  # log metadata only.
  - level: Metadata
    resources:
      - group: "" # core
        resources: ["secrets", "configmaps"]
      - group: authentication.k8s.io
        resources: ["tokenreviews"]
  # Read requests for known API groups — log metadata and request body
  # (response bodies can be large, so they are excluded).
  - level: Request
    verbs: ["get", "list", "watch"]
    resources:
      - group: "" # core
      - group: "admissionregistration.k8s.io"
      - group: "apps"
      - group: "authentication.k8s.io"
      - group: "authorization.k8s.io"
      - group: "autoscaling"
      - group: "batch"
      - group: "certificates.k8s.io"
      - group: "extensions"
      - group: "networking.k8s.io"
      - group: "policy"
      - group: "rbac.authorization.k8s.io"
      - group: "settings.k8s.io"
      - group: "storage.k8s.io"
  # Write requests for known API groups — log full request and response.
  - level: RequestResponse
    resources:
      - group: "" # core
      - group: "admissionregistration.k8s.io"
      - group: "apps"
      - group: "authentication.k8s.io"
      - group: "authorization.k8s.io"
      - group: "autoscaling"
      - group: "batch"
      - group: "certificates.k8s.io"
      - group: "extensions"
      - group: "networking.k8s.io"
      - group: "policy"
      - group: "rbac.authorization.k8s.io"
      - group: "settings.k8s.io"
      - group: "storage.k8s.io"
  # All other requests — log metadata only.
  - level: Metadata
```
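Rules are evaluated top to bottom, and the first matching rule decides an event's level, which is why the catch-all Metadata rule comes last. The following is a deliberately simplified, illustrative matcher over a condensed version of the rules above; it handles only users, verbs, resource groups, and resource names, not userGroups, namespaces, or nonResourceURLs:

```python
# Illustrative first-match-wins evaluation over a condensed audit policy.
# This is not the API server's implementation; it only matches users,
# verbs, resource groups, and resource names.
RULES = [
    {"level": "None", "users": ["system:kube-proxy"], "verbs": ["watch"], "groups": [""]},
    {"level": "Metadata", "resources": ["secrets", "configmaps", "tokenreviews"]},
    {"level": "Request", "verbs": ["get", "list", "watch"], "groups": ["", "apps", "batch"]},
    {"level": "RequestResponse", "groups": ["", "apps", "batch"]},
    {"level": "Metadata"},  # catch-all: everything else is logged at Metadata
]

def audit_level(user: str, verb: str, group: str, resource: str) -> str:
    for rule in RULES:
        if "users" in rule and user not in rule["users"]:
            continue
        if "verbs" in rule and verb not in rule["verbs"]:
            continue
        if "groups" in rule and group not in rule["groups"]:
            continue
        if "resources" in rule and resource not in rule["resources"]:
            continue
        return rule["level"]  # first matching rule wins

print(audit_level("system:kube-proxy", "watch", "", "endpoints"))  # None
print(audit_level("alice", "get", "", "secrets"))                  # Metadata
print(audit_level("alice", "delete", "apps", "deployments"))       # RequestResponse
```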
Step 2: Configure kube-apiserver on master nodes
Log on to a master node and edit /etc/kubernetes/manifests/kube-apiserver.yaml. Repeat this step on all other master nodes.
The configuration has three parts: command-line flags, environment variables for log collection, and volume mounts.
Add audit log flags
Add the following flags to the command section:
```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/kubernetes.audit
    - --audit-log-maxage=30
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
```
Each flag controls a specific aspect of log rotation and policy:
| Flag | Description |
|---|---|
| --audit-log-path | Path where the log backend writes audit events. If this flag is not set, the log backend is disabled. |
| --audit-log-maxsize | Maximum size of a log file in megabytes before it is rotated. Set to 100. |
| --audit-log-maxbackup | Maximum number of rotated log files to retain. Set to 10. |
| --audit-log-maxage | Maximum number of days to retain rotated log files. Set to 30. |
| --audit-policy-file | Path to the audit policy file configured in Step 1. |
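A useful sanity check on these values: with rotation at 100 MB and 10 retained backups, the audit logs on each master node are bounded at roughly 1.1 GB (the active file plus its backups), before the 30-day age limit prunes anything:

```python
maxsize_mb = 100   # --audit-log-maxsize
maxbackup = 10     # --audit-log-maxbackup

# Worst case on disk: the active file plus each retained backup,
# every one up to maxsize_mb. Age-based pruning can only reduce this.
worst_case_mb = maxsize_mb * (maxbackup + 1)
print(worst_case_mb)  # 1100
```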
Add log collection environment variables
Add the following variables to the env section. Replace ${cluster_id} with your actual cluster ID in all four variable names. To find your cluster ID, see View cluster information.
```yaml
env:
- name: aliyun_logs_audit-${cluster_id}
  value: /var/log/kubernetes/kubernetes.audit
- name: aliyun_logs_audit-${cluster_id}_tags
  value: audit=apiserver
- name: aliyun_logs_audit-${cluster_id}_product
  value: k8s-audit
- name: aliyun_logs_audit-${cluster_id}_jsonfile
  value: "true"
image: registry-vpc.cn-shenzhen.aliyuncs.com/acs/kube-apiserver:v1.20.4-aliyun.1
```
| Variable | Description |
|---|---|
| aliyun_logs_audit-${cluster_id} | Path to the audit log file that Logtail collects from. |
| aliyun_logs_audit-${cluster_id}_tags | Tag added to each log entry to identify the source as the API server. |
| aliyun_logs_audit-${cluster_id}_product | Log type identifier used by the log backend. |
| aliyun_logs_audit-${cluster_id}_jsonfile | Instructs Logtail to parse the log file as JSON. |
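All four names are the base name aliyun_logs_audit-${cluster_id} plus a suffix. A small sketch of the substitution, using a placeholder cluster ID:

```python
cluster_id = "my-cluster-id"  # placeholder; substitute your real cluster ID

base = f"aliyun_logs_audit-{cluster_id}"
env = {
    base: "/var/log/kubernetes/kubernetes.audit",
    f"{base}_tags": "audit=apiserver",
    f"{base}_product": "k8s-audit",
    f"{base}_jsonfile": "true",
}
for name, value in env.items():
    print(f"{name}={value}")
```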
Mount the audit policy file and log directory
Add volumeMounts entries to the container spec:
```yaml
volumeMounts:
- mountPath: /var/log/kubernetes
  name: k8s-audit
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit-policy
  readOnly: true
```
Then add the corresponding volumes entries at the pod spec level:
```yaml
volumes:
- hostPath:
    path: /var/log/kubernetes
    type: DirectoryOrCreate
  name: k8s-audit
- hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: FileOrCreate
  name: audit-policy
```
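Every name in volumeMounts must match a volume declared at the pod level, or the pod fails validation and does not start. A quick illustrative consistency check over the two snippets above:

```python
# The mount and volume names from the snippets above, expressed as data.
volume_mounts = [
    {"mountPath": "/var/log/kubernetes", "name": "k8s-audit"},
    {"mountPath": "/etc/kubernetes/audit-policy.yaml", "name": "audit-policy"},
]
volumes = [
    {"name": "k8s-audit", "hostPath": {"path": "/var/log/kubernetes", "type": "DirectoryOrCreate"}},
    {"name": "audit-policy", "hostPath": {"path": "/etc/kubernetes/audit-policy.yaml", "type": "FileOrCreate"}},
]

# Any mount name with no matching volume would appear in this set.
missing = {m["name"] for m in volume_mounts} - {v["name"] for v in volumes}
print(missing)  # set()
```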
Step 3: Install the logtail-ds component
Install the logtail-ds component to enable log collection from the master nodes. For installation steps, see Step 2: Install logtail-ds.
What to do next
After completing the setup, view and analyze your audit logs in the ACK console. See Work with cluster auditing.