Security Center allows you to add self-managed Kubernetes clusters to Security Center for centralized management and risk detection. This topic describes how to add a self-managed Kubernetes cluster to Security Center.
Limits
Only the Ultimate edition of Security Center supports this feature. For more information about how to purchase and upgrade Security Center, see Purchase Security Center and Upgrade and downgrade Security Center.
Self-managed Kubernetes clusters must reside in supported regions.
If a self-managed Kubernetes cluster that you want to add is deployed in a virtual private cloud (VPC), the cluster must reside in the China (Hangzhou), China (Beijing), China (Shanghai), China (Shenzhen), or China (Hong Kong) region.
If a self-managed Kubernetes cluster that you want to add is deployed on the Internet, no limits are imposed on the region of the cluster.
Prerequisites
A Kubernetes cluster is created on your server.
Docker is installed.
If your self-managed Kubernetes cluster is deployed on a hybrid cloud and is not accessible over the Internet, traffic forwarding rules are configured and the network connection is normal.
If access control policies are configured for your cluster, make sure that the Security Center IP addresses for the region of your container assets are added to the whitelist.
Add a self-managed Kubernetes cluster to Security Center
Log on to the Security Center console. In the top navigation bar, select the region of the asset that you want to manage. You can select China or Outside China.
In the left-side navigation pane, choose .
On the Cluster tab, click Self-built cluster access.
In the Self-built cluster management panel, click Self-built cluster access. In the panel that appears, configure the cluster that you want to add to Security Center and click Generate Command.
Parameter
Description
Cluster name
Enter the name of the self-managed Kubernetes cluster. Example: text-001.
Expiration Time
Select the expiration time of the command that is used to add the self-managed Kubernetes cluster.
Group
Select the group to which you want to add the cluster. Set this parameter to the group of the server on which the cluster is created.
Service Provider
Select the provider of the server on which the cluster is created.
Optional. In the Enable Log Collection section, specify whether to enable log-based threat detection for the Kubernetes cluster.
After you enable log-based threat detection, Security Center collects more audit logs for further risk detection. Before you enable log-based threat detection, you must install the Logtail components on the Kubernetes cluster and configure audit-related settings. For more information, see Enable log-based threat detection.
Log on to the server on which the cluster is created, create a YAML file named text-001.yaml on the server, copy the generated command to the file, and then run the kubectl apply -f text-001.yaml command on the server. The cluster is then added to Security Center.
Note: In this step, text-001 in both text-001.yaml and kubectl apply -f text-001.yaml is an example value of the Cluster name parameter. In actual operations, you must replace text-001 with the value that you specified for the Cluster name parameter.
After the self-managed Kubernetes cluster is added to Security Center, you can view the cluster information in the cluster list on the Cluster tab.
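The save-and-apply step above can be sketched as follows. This is a minimal illustration: the manifest body is a placeholder for the command that the console generates, and text-001 stands in for the value of your Cluster name parameter.

```shell
# Example cluster name; replace with your Cluster name parameter value.
CLUSTER_NAME="text-001"

# Save the generated command into a YAML file named after the cluster.
cat > "${CLUSTER_NAME}.yaml" <<'EOF'
# Paste the command generated in the Security Center console here.
EOF

# Apply the manifest only if kubectl is available on this machine.
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f "${CLUSTER_NAME}.yaml"
fi
```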
Enable log-based threat detection
If the Kubernetes version of the cluster is 1.16 or later, you can enable log-based threat detection for more comprehensive risk detection on the cluster, including the detection of high-risk operations and attack behavior.
Step 1. Install the Logtail components
The following procedure is for reference only. For more information, see Install Logtail components in a self-managed Kubernetes cluster.
Log on to the Simple Log Service console.
Create a project. For more information, see Create a project.
We recommend that you create a project whose name starts with k8s-log-custom-. Example: k8s-log-custom-sd89ehdq.
Log on to your Kubernetes cluster.
Run the following commands to install Logtail and dependent components.
Important:
Make sure that the kubectl command-line tool is installed on the machine on which you want to run the commands.
alibaba-log-controller is available only in Kubernetes 1.6 or later.
If you no longer need to use custom resource definitions (CRDs), you can delete the alibaba-cloud-log/templates/alicloud-log-config.yaml file and rerun the following commands.
Download and decompress the installation package.
Regions in China
wget https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/0.4.0/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh
Regions outside China
wget https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/0.4.0/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh
Modify the ./alibaba-cloud-log-all/values.yaml configuration file.
# ===================== Required settings =====================
# The name of the project.
SlsProjectName:
# The region where the project resides.
Region:
# The ID of the Alibaba Cloud account to which the project belongs. You must enclose the ID in double quotation marks ("").
AliUid: "11**99"
# The AccessKey ID and AccessKey secret of the Alibaba Cloud account or a RAM user. The RAM user must have the AliyunLogFullAccess permission.
AccessKeyID:
AccessKeySercret:
# The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-).
ClusterID:
# ==========================================================
# Specifies whether to enable metric collection for the related components. Valid values: true and false. Default value: true.
SlsMonitoring: true
# The network type. Valid values: Internet and Intranet. Default value: Internet.
Net: Internet
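For reference, the file can also be written from the shell with a heredoc. All values below are illustrative placeholders, not working credentials; substitute your own project name, region, account ID, AccessKey pair, and cluster ID before running.

```shell
# Sketch: write ./alibaba-cloud-log-all/values.yaml with placeholder values.
# Every value below is an example; replace each one with your own settings.
mkdir -p alibaba-cloud-log-all
cat > alibaba-cloud-log-all/values.yaml <<'EOF'
SlsProjectName: k8s-log-custom-sd89ehdq
Region: cn-hangzhou
AliUid: "11**99"
AccessKeyID: <your-access-key-id>
AccessKeySercret: <your-access-key-secret>
ClusterID: my-cluster-001
SlsMonitoring: true
Net: Internet
EOF
```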
The following table describes the parameters that are included in the preceding configuration file. You can configure the parameters based on your business scenario.
Parameter
Description
SlsProjectName
The name of the project that you created in Step 2.
Region
The ID of the region where your project resides. For example, the ID of the China (Hangzhou) region is cn-hangzhou. For more information, see Supported regions.
AliUid
The ID of your Alibaba Cloud account. You must enclose the ID in double quotation marks (""). Example: AliUid: "11**99". For information about how to obtain the ID of an Alibaba Cloud account, see Obtain the ID of the Alibaba Cloud account for which Simple Log Service is activated.
AccessKeyID
The AccessKey ID of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user and attach the AliyunLogFullAccess policy to the RAM user. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service.
AccessKeySercret
The AccessKey secret of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user and attach the AliyunLogFullAccess policy to the RAM user. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service.
ClusterID
The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-). This parameter corresponds to the ${your_k8s_cluster_id} variable in the following operations.
Important: Do not specify the same cluster ID for different Kubernetes clusters.
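The ClusterID naming constraint can be checked locally before you edit values.yaml. A minimal sketch; the function name is illustrative:

```shell
# Sketch: check that a candidate ClusterID contains only letters,
# digits, and hyphens (-), as required by the values.yaml file.
is_valid_cluster_id() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9-]+$'
}

is_valid_cluster_id "my-cluster-001" && echo "valid"
is_valid_cluster_id "my_cluster!" || echo "invalid: use only letters, digits, and hyphens"
```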
SlsMonitoring
Specifies whether to enable metric collection for the related components. Valid values:
true (default)
false
Net
The network type. Valid values:
Internet (default)
Intranet
Install Logtail and dependent components.
bash k8s-custom-install.sh; kubectl apply -R -f result
The following table describes the Simple Log Service resources that are automatically created after you install Logtail and dependent components.
Do not delete the config-operation-log Logstore.
If you install Logtail components in a self-managed Kubernetes cluster, Logtail is granted the privileged permission by default. This prevents the "container text file busy" error that may occur when other pods are deleted. For more information, see Bug 1468249, Bug 1441737, and Issue 34538.
Step 2. Enable the cluster audit feature
The following procedure is for reference only. For more information, see Enable cluster auditing for clusters.
Create a registered cluster and add the self-managed Kubernetes cluster to the registered cluster. For more information, see Create a registered cluster in the ACK console.
Configure the audit-policy.yaml file for master nodes.
Log on to a master node and modify the /etc/kubernetes/audit-policy.yaml file based on the following template. You must also perform this step on the other master nodes.
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# The following requests were manually identified as high-volume and low-risk,
# so drop them.
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch"]
  resources:
  - group: "" # core
    resources: ["endpoints", "services"]
- level: None
  users: ["system:unsecured"]
  namespaces: ["kube-system"]
  verbs: ["get"]
  resources:
  - group: "" # core
    resources: ["configmaps"]
- level: None
  users: ["kubelet"] # legacy kubelet identity
  verbs: ["get"]
  resources:
  - group: "" # core
    resources: ["nodes"]
- level: None
  userGroups: ["system:nodes"]
  verbs: ["get"]
  resources:
  - group: "" # core
    resources: ["nodes"]
- level: None
  users:
  - system:kube-controller-manager
  - system:kube-scheduler
  - system:serviceaccount:kube-system:endpoint-controller
  verbs: ["get", "update"]
  namespaces: ["kube-system"]
  resources:
  - group: "" # core
    resources: ["endpoints"]
- level: None
  users: ["system:apiserver"]
  verbs: ["get"]
  resources:
  - group: "" # core
    resources: ["namespaces"]
# Don't log these read-only URLs.
- level: None
  nonResourceURLs:
  - /healthz*
  - /version
  - /swagger*
# Don't log events requests.
- level: None
  resources:
  - group: "" # core
    resources: ["events"]
# Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
  resources:
  - group: "" # core
    resources: ["secrets", "configmaps"]
  - group: authentication.k8s.io
    resources: ["tokenreviews"]
# Get responses can be large; skip them.
- level: Request
  verbs: ["get", "list", "watch"]
  resources:
  - group: "" # core
  - group: "admissionregistration.k8s.io"
  - group: "apps"
  - group: "authentication.k8s.io"
  - group: "authorization.k8s.io"
  - group: "autoscaling"
  - group: "batch"
  - group: "certificates.k8s.io"
  - group: "extensions"
  - group: "networking.k8s.io"
  - group: "policy"
  - group: "rbac.authorization.k8s.io"
  - group: "settings.k8s.io"
  - group: "storage.k8s.io"
# Default level for known APIs
- level: RequestResponse
  resources:
  - group: "" # core
  - group: "admissionregistration.k8s.io"
  - group: "apps"
  - group: "authentication.k8s.io"
  - group: "authorization.k8s.io"
  - group: "autoscaling"
  - group: "batch"
  - group: "certificates.k8s.io"
  - group: "extensions"
  - group: "networking.k8s.io"
  - group: "policy"
  - group: "rbac.authorization.k8s.io"
  - group: "settings.k8s.io"
  - group: "storage.k8s.io"
# Default level for all other requests.
- level: Metadata
Configure the kube-apiserver.yaml file for master nodes.
Log on to a master node and modify the /etc/kubernetes/manifests/kube-apiserver.yaml file based on the following description. You must also perform this step on the other master nodes.
Add the --audit-log-* parameters to the command section:
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/kubernetes.audit
    - --audit-log-maxage=30
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
...
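As a sanity check, you can verify that the manifest contains the audit flags with a small shell sketch. The function name is illustrative, and the default manifest path is an assumption based on the standard static-pod layout; adjust it if your cluster differs.

```shell
# Sketch: confirm that the kube-apiserver manifest carries the audit flags
# configured above. Prints "present" or "missing" for each checked flag.
check_audit_flags() {
  manifest="${1:-/etc/kubernetes/manifests/kube-apiserver.yaml}"
  for flag in audit-log-path audit-policy-file audit-log-maxage; do
    if grep -q -- "--${flag}=" "$manifest" 2>/dev/null; then
      echo "${flag}: present"
    else
      echo "${flag}: missing"
    fi
  done
}

check_audit_flags
```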
Add the aliyun_logs_audit-* parameters to the env section.
You must replace ${cluster_id} with the ID of your cluster. To obtain the ID of your cluster, log on to the Security Center console and open the Cluster tab on the Container page.
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/kubernetes.audit
    - --audit-log-maxage=30
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    ...
    env:
    - name: aliyun_logs_audit-${cluster_id}
      value: /var/log/kubernetes/kubernetes.audit
    - name: aliyun_logs_audit-${cluster_id}_tags
      value: audit=apiserver
    - name: aliyun_logs_audit-${cluster_id}_product
      value: k8s-audit
    - name: aliyun_logs_audit-${cluster_id}_jsonfile
      value: "true"
    image: registry-vpc.cn-shenzhen.aliyuncs.com/acs/kube-apiserver:v1.20.4-aliyun.1
...
Use the following template to mount /etc/kubernetes/audit-policy.yaml to the pods of kube-apiserver:
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kubernetes/kubernetes.audit
    - --audit-log-maxage=30
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    ...
    env:
    - name: aliyun_logs_audit-${cluster_id}
      value: /var/log/kubernetes/kubernetes.audit
    - name: aliyun_logs_audit-${cluster_id}_tags
      value: audit=apiserver
    - name: aliyun_logs_audit-${cluster_id}_product
      value: k8s-audit
    - name: aliyun_logs_audit-${cluster_id}_jsonfile
      value: "true"
    image: registry-vpc.cn-shenzhen.aliyuncs.com/acs/kube-apiserver:v1.20.4-aliyun.1
    ...
    volumeMounts:
    - mountPath: /var/log/kubernetes
      name: k8s-audit
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit-policy
      readOnly: true
    ...
  volumes:
  - hostPath:
      path: /var/log/kubernetes
      type: DirectoryOrCreate
    name: k8s-audit
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: FileOrCreate
    name: audit-policy
...
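After kube-apiserver restarts with the new configuration, a quick sketch like the following can confirm that audit events are being written to the file set by --audit-log-path. The function name is illustrative; the default path matches the flag value used above.

```shell
# Sketch: confirm that kube-apiserver is writing audit events to the
# path configured by --audit-log-path.
check_audit_log() {
  log_file="${1:-/var/log/kubernetes/kubernetes.audit}"
  if [ -s "$log_file" ]; then
    echo "audit events found: $(wc -l < "$log_file") lines"
  else
    echo "no audit events yet at $log_file"
  fi
}

check_audit_log
```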
Step 3. Check whether logs are collected
Log on to the Simple Log Service console.
Click the name of the required project.
Check whether related logs are collected to the specified Logstore in the project.
Step 4. Enable threat detection
Log on to the Security Center console. In the top navigation bar, select the region of the asset that you want to manage. You can select China or Outside China.
In the left-side navigation pane, choose .
On the Cluster tab, click Self-built cluster access.
Find the required self-managed Kubernetes cluster and click Edit in the Actions column.
On the Enable Log Collection tab, select Enable Kubernetes Log Reporting to Detect Threats, configure the following parameters, and then click Save.
Region of Log Audit Service: Select the region in which you want to store logs.
Project of Log Audit Service: Enter the name of the project that you created in Step 1. Install the Logtail components. Example: k8s-log-custom-sd89ehdq.
Logstore of Log Audit Service: Enter the name of the Logstore that is automatically created in Step 1. Install the Logtail components. Example: audit-027b007a7dd11967a9f7e2449d8dc497.