
Security Center: Add a self-managed Kubernetes cluster to Security Center

Last Updated: Apr 02, 2024

Security Center allows you to add self-managed Kubernetes clusters to Security Center for centralized management and risk detection. This topic describes how to add a self-managed Kubernetes cluster to Security Center.

Limits

Only the Ultimate edition of Security Center supports this feature. For more information about how to purchase and upgrade Security Center, see Purchase Security Center and Upgrade and downgrade Security Center.

Self-managed Kubernetes clusters must reside in supported regions.

  • If a self-managed Kubernetes cluster that you want to add is deployed in a virtual private cloud (VPC), the cluster must reside in the China (Hangzhou), China (Beijing), China (Shanghai), China (Shenzhen), or China (Hong Kong) region.

  • If a self-managed Kubernetes cluster that you want to add is deployed on the Internet, no limits are imposed on the region of the cluster.

Prerequisites

  • A Kubernetes cluster is created on your server.

  • Docker is installed.

  • If your self-managed Kubernetes cluster is deployed in a hybrid cloud environment and is not accessible over the Internet, traffic forwarding rules are configured and network connectivity is verified.

    How do I configure a traffic forwarding rule?

    Specify an Elastic Compute Service (ECS) instance and configure traffic forwarding rules to forward the traffic destined for the ECS instance to an on-premises server on which the API server for the self-managed Kubernetes cluster is installed.

    In the following command examples, the traffic on Port A of the ECS instance that uses the IP address 10.0.XX.XX is forwarded to Port B of the on-premises server that uses the IP address 192.168.XX.XX.

    • Command examples for CentOS 7

      • Use firewall-cmd

        firewall-cmd --permanent --add-forward-port=port=<Port A>:proto=tcp:toaddr=<192.168.XX.XX>:toport=<Port B>
      • Use iptables

        # Enable IP forwarding.
        echo "1" > /proc/sys/net/ipv4/ip_forward

        # Configure port forwarding.
        iptables -t nat -A PREROUTING -p tcp --dport <Port A> -j DNAT --to-destination <192.168.XX.XX>:<Port B>
    • Command examples for Windows

      netsh interface portproxy add v4tov4 listenport=<Port A> listenaddress=* connectaddress=<192.168.XX.XX> connectport=<Port B> protocol=tcp
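The CentOS commands above can also be driven from a few variables so that the same port and address values are used in both the firewalld and iptables forms. The following is a minimal sketch; the port numbers and the destination IP address are placeholder values, not values from this topic:

```shell
# Placeholders: replace with your own ECS listening port, destination port,
# and on-premises API server address.
PORT_A=6443
PORT_B=6443
TARGET_IP=192.168.0.10

# Render the two equivalent forwarding rules from the same values.
FIREWALLD_CMD="firewall-cmd --permanent --add-forward-port=port=${PORT_A}:proto=tcp:toaddr=${TARGET_IP}:toport=${PORT_B}"
IPTABLES_CMD="iptables -t nat -A PREROUTING -p tcp --dport ${PORT_A} -j DNAT --to-destination ${TARGET_IP}:${PORT_B}"
echo "$FIREWALLD_CMD"
echo "$IPTABLES_CMD"
```

Rendering the commands first also makes it easy to record the exact rules that were applied to the ECS instance.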
  • If access control policies are configured for your cluster, make sure that the Security Center IP addresses that correspond to the region of your cluster are added to the whitelist.

    Regions and IP addresses

    | Region | Public IP address | Private IP address |
    | --- | --- | --- |
    | China (Hangzhou) | 47.96.166.214 | 100.104.12.64/26 |
    | China (Shanghai) | 139.224.15.48, 101.132.180.26, 47.100.18.171, 47.100.0.176, 139.224.8.64, 101.132.70.106, 101.132.156.228, 106.15.36.12, 139.196.168.125, 47.101.178.223, and 47.101.220.176 | 100.104.43.0/26 |
    | China (Qingdao) | 47.104.111.68 | 100.104.87.192/26 |
    | China (Beijing) | 47.95.202.245 | 100.104.114.192/26 |
    | China (Zhangjiakou) | 39.99.229.195 | 100.104.187.64/26 |
    | China (Hohhot) | 39.104.147.68 | 100.104.36.0/26 |
    | China (Shenzhen) | 120.78.64.225 | 100.104.250.64/26 |
    | China (Guangzhou) | 8.134.118.184 | 100.104.111.0/26 |
    | China (Hong Kong) | 8.218.59.176 | 100.104.130.128/26 |
    | Japan (Tokyo) | 47.74.24.20 | 100.104.69.0/26 |
    | Singapore | 8.219.240.137 | 100.104.67.64/26 |
    | US (Silicon Valley) | 47.254.39.224 | 100.104.145.64/26 |
    | US (Virginia) | 47.252.4.238 | 100.104.36.0/26 |
    | Germany (Frankfurt) | 47.254.158.71 | 172.16.0.0/20 |
    | UK (London) | 8.208.14.12 | 172.16.0.0/20 |
    | Indonesia (Jakarta) | 149.129.238.99 | 100.104.193.128/26 |

Add a self-managed Kubernetes cluster to Security Center

  1. Log on to the Security Center console. In the top navigation bar, select the region of the asset that you want to manage. You can select China or Outside China.

  2. In the left-side navigation pane, choose Assets > Container.

  3. On the Cluster tab, click Self-built cluster access.

  4. In the Self-built cluster management panel, click Self-built cluster access. In the panel that appears, configure the cluster that you want to add to Security Center and click Generate Command.

    | Parameter | Description |
    | --- | --- |
    | Cluster name | Enter the name of the self-managed Kubernetes cluster. Example: text-001. |
    | Expiration Time | Select the expiration time of the command that is used to add the self-managed Kubernetes cluster. |
    | Group | Select the group to which you want to add the cluster. Set this parameter to the group of the server on which the cluster is created. |
    | Service Provider | Select the provider of the server on which the cluster is created. |

  5. Optional. In the Enable Log Collection section, specify whether to enable log-based threat detection for the Kubernetes cluster.

    After you enable log-based threat detection, Security Center collects more audit logs for further risk detection. Before you enable log-based threat detection, you must install the Logtail components on the Kubernetes cluster and configure audit-related settings. For more information, see Enable log-based threat detection.

  6. Log on to the server on which the cluster is created, create a YAML file named text-001.yaml on the server, copy the generated command into the file, and then run the kubectl apply -f text-001.yaml command on the server. The cluster is then added to Security Center.

    Note

    In this step, text-001 in both text-001.yaml and kubectl apply -f text-001.yaml is an example value of the Cluster name parameter. In actual operations, you must replace text-001 with the value that you specify for the Cluster name parameter.

    After the self-managed Kubernetes cluster is added to Security Center, you can view the cluster information in the cluster list on the Cluster tab.
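The manifest step above can be sketched as a short script. In this sketch, the Cluster name is assumed to be text-001, and the manifest body is the command that the console generates, which is not reproduced here:

```shell
# Cluster name as configured in the console (replace with your own value).
CLUSTER_NAME="text-001"

# Create the YAML file and paste the generated command into it.
cat > "${CLUSTER_NAME}.yaml" <<'EOF'
# Paste the command generated in the Self-built cluster management panel here.
EOF

# Apply the manifest on the server where the cluster runs:
# kubectl apply -f "${CLUSTER_NAME}.yaml"
```

Keeping the cluster name in a variable makes it harder to apply a manifest whose file name does not match the Cluster name parameter.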

Enable log-based threat detection

If the Kubernetes version of the cluster is 1.16 or later, you can enable log-based threat detection for more comprehensive risk detection on the cluster. Risks such as high-risk operations and attack behavior can be detected.

Step 1. Install the Logtail components

The following procedure is for reference only. For more information, see Install Logtail components in a self-managed Kubernetes cluster.

  1. Log on to the Simple Log Service console.

  2. Create a project. For more information, see Create a project.

    We recommend that you create a project whose name starts with k8s-log-custom-. Example: k8s-log-custom-sd89ehdq.

  3. Log on to your Kubernetes cluster.

  4. Run the following commands to install Logtail and dependent components.

    Important
    • Make sure that the kubectl command-line tool is installed on the machine on which you want to run the commands.

    • alibaba-log-controller is available only in Kubernetes 1.6 or later.

    • If you no longer need to use custom resource definitions (CRDs), you can delete the alibaba-cloud-log/templates/alicloud-log-config.yaml file and rerun the following commands.

    1. Download and decompress the installation package.

      • Regions in China

        wget https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/0.4.0/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh 
      • Regions outside China

        wget https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/0.4.0/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh 
    2. Modify the ./alibaba-cloud-log-all/values.yaml configuration file.

      # ===================== Required settings =====================
      # The name of the project. 
      SlsProjectName: 
      # The region where the project resides. 
      Region: 
      # The ID of the Alibaba Cloud account to which the project belongs. You must enclose the ID in double quotation marks (""). 
      AliUid: "11**99"
      # The AccessKey ID and AccessKey secret of the Alibaba Cloud account or a RAM user. The RAM user must have the AliyunLogFullAccess permission. 
      AccessKeyID: 
      AccessKeySercret: 
      # The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-). 
      ClusterID: 
      # ==========================================================
      # Specifies whether to enable metric collection for the related components. Valid values: true and false. Default value: true. 
      SlsMonitoring: true
      # The network type. Valid values: Internet and Intranet. Default value: Internet. 
      Net: Internet

      The following table describes the parameters in the preceding configuration file. You can configure the parameters based on your business scenario.

      | Parameter | Description |
      | --- | --- |
      | SlsProjectName | The name of the project that you created in Step 2. |
      | Region | The ID of the region where your project resides. For example, the ID of the China (Hangzhou) region is cn-hangzhou. For more information, see Supported regions. |
      | AliUid | The ID of your Alibaba Cloud account. You must enclose the ID in double quotation marks (""). Example: AliUid: "11**99". For information about how to obtain the ID of an Alibaba Cloud account, see Obtain the ID of the Alibaba Cloud account for which Simple Log Service is activated. |
      | AccessKeyID | The AccessKey ID of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user and attach the AliyunLogFullAccess policy to the RAM user. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service. |
      | AccessKeySercret | The AccessKey secret of your Alibaba Cloud account. We recommend that you use the AccessKey pair of a RAM user and attach the AliyunLogFullAccess policy to the RAM user. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service. |
      | ClusterID | The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-). This parameter corresponds to the ${your_k8s_cluster_id} variable in the following operations. Important: Do not specify the same cluster ID for different Kubernetes clusters. |
      | SlsMonitoring | Specifies whether to enable metric collection for the related components. Valid values: true (default) and false. |
      | Net | The network type. Valid values: Internet (default) and Intranet. |

    3. Install Logtail and dependent components.

      bash k8s-custom-install.sh; kubectl apply -R -f result
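As an alternative to editing values.yaml by hand in the previous substep, the required fields can be filled non-interactively. The following is a minimal sketch using sed; the project name, region, and cluster ID are placeholder values, and only the required lines of the file are reproduced here (in practice you edit the file shipped in the installation package):

```shell
# Recreate only the required lines of values.yaml for illustration.
cat > values.yaml <<'EOF'
SlsProjectName:
Region:
AliUid: "11**99"
AccessKeyID:
AccessKeySercret:
ClusterID:
EOF

# Fill in placeholder values (replace with your own).
sed -i \
  -e 's/^SlsProjectName:.*/SlsProjectName: k8s-log-custom-sd89ehdq/' \
  -e 's/^Region:.*/Region: cn-hangzhou/' \
  -e 's/^ClusterID:.*/ClusterID: my-cluster-01/' \
  values.yaml
```

Scripting the edit this way makes the configuration repeatable when the same components must be installed in several clusters.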

The following table describes the Simple Log Service resources that are automatically created after you install Logtail and dependent components.

Important
  • Do not delete the config-operation-log Logstore.

  • If you install the Logtail components in a self-managed Kubernetes cluster, Logtail is granted privileged permissions by default. This prevents the "container text file busy" error that may occur when other pods are deleted. For more information, see Bug 1468249, Bug 1441737, and Issue 34538.

Step 2. Enable the cluster audit feature

The following procedure is for reference only. For more information, see Enable cluster auditing for clusters.

  1. Create a registered cluster and add the self-managed Kubernetes cluster to the registered cluster. For more information, see Create a registered cluster in the ACK console.

  2. Configure the audit-policy.yaml file for master nodes.

    Log on to a master node and modify the /etc/kubernetes/audit-policy.yaml file based on the following template. You must also perform this step on the other master nodes.
    apiVersion: audit.k8s.io/v1beta1 # This is required.
    kind: Policy
    # Don't generate audit events for all requests in RequestReceived stage.
    omitStages:
      - "RequestReceived"
    rules:
      # The following requests were manually identified as high-volume and low-risk,
      # so drop them.
      - level: None
        users: ["system:kube-proxy"]
        verbs: ["watch"]
        resources:
          - group: "" # core
            resources: ["endpoints", "services"]
      - level: None
        users: ["system:unsecured"]
        namespaces: ["kube-system"]
        verbs: ["get"]
        resources:
          - group: "" # core
            resources: ["configmaps"]
      - level: None
        users: ["kubelet"] # legacy kubelet identity
        verbs: ["get"]
        resources:
          - group: "" # core
            resources: ["nodes"]
      - level: None
        userGroups: ["system:nodes"]
        verbs: ["get"]
        resources:
          - group: "" # core
            resources: ["nodes"]
      - level: None
        users:
          - system:kube-controller-manager
          - system:kube-scheduler
          - system:serviceaccount:kube-system:endpoint-controller
        verbs: ["get", "update"]
        namespaces: ["kube-system"]
        resources:
          - group: "" # core
            resources: ["endpoints"]
      - level: None
        users: ["system:apiserver"]
        verbs: ["get"]
        resources:
          - group: "" # core
            resources: ["namespaces"]
      # Don't log these read-only URLs.
      - level: None
        nonResourceURLs:
          - /healthz*
          - /version
          - /swagger*
      # Don't log events requests.
      - level: None
        resources:
          - group: "" # core
            resources: ["events"]
      # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
      # so only log at the Metadata level.
      - level: Metadata
        resources:
          - group: "" # core
            resources: ["secrets", "configmaps"]
          - group: authentication.k8s.io
            resources: ["tokenreviews"]
      # Get responses can be large; skip them.
      - level: Request
        verbs: ["get", "list", "watch"]
        resources:
          - group: "" # core
          - group: "admissionregistration.k8s.io"
          - group: "apps"
          - group: "authentication.k8s.io"
          - group: "authorization.k8s.io"
          - group: "autoscaling"
          - group: "batch"
          - group: "certificates.k8s.io"
          - group: "extensions"
          - group: "networking.k8s.io"
          - group: "policy"
          - group: "rbac.authorization.k8s.io"
          - group: "settings.k8s.io"
          - group: "storage.k8s.io"
      # Default level for known APIs
      - level: RequestResponse
        resources:
          - group: "" # core
          - group: "admissionregistration.k8s.io"
          - group: "apps"
          - group: "authentication.k8s.io"
          - group: "authorization.k8s.io"
          - group: "autoscaling"
          - group: "batch"
          - group: "certificates.k8s.io"
          - group: "extensions"
          - group: "networking.k8s.io"
          - group: "policy"
          - group: "rbac.authorization.k8s.io"
          - group: "settings.k8s.io"
          - group: "storage.k8s.io"
      # Default level for all other requests.
      - level: Metadata
  3. Configure the kube-apiserver.yaml file for master nodes.

    Log on to a master node and modify the /etc/kubernetes/manifests/kube-apiserver.yaml file based on the following description. You must also perform this step on the other master nodes.

    • Add --audit-log-* parameters to the command section:
      ...
      spec:
        containers:
        - command:
          - kube-apiserver
          - --audit-log-maxbackup=10
          - --audit-log-maxsize=100
          - --audit-log-path=/var/log/kubernetes/kubernetes.audit
          - --audit-log-maxage=30
          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
          ...
    • Add the aliyun_logs_audit-* parameters to the env section.

      You must replace {cluster_id} with the ID of your cluster. To obtain the ID, log on to the Security Center console, go to the Container page, and then view the cluster ID on the Cluster tab.

      ...
      spec:
        containers:
        - command:
          - kube-apiserver
          - --audit-log-maxbackup=10
          - --audit-log-maxsize=100
          - --audit-log-path=/var/log/kubernetes/kubernetes.audit
          - --audit-log-maxage=30
          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
          ...
          ...
          env:
          - name: aliyun_logs_audit-${cluster_id}
            value: /var/log/kubernetes/kubernetes.audit
          - name: aliyun_logs_audit-${cluster_id}_tags
            value: audit=apiserver
          - name: aliyun_logs_audit-${cluster_id}_product
            value: k8s-audit
          - name: aliyun_logs_audit-${cluster_id}_jsonfile
            value: "true"
          image: registry-vpc.cn-shenzhen.aliyuncs.com/acs/kube-apiserver:v1.20.4-aliyun.1
    • Use the following template to mount /etc/kubernetes/audit-policy.yaml to the pods of kube-apiserver:
      ...
      spec:
        containers:
        - command:
          - kube-apiserver
          - --audit-log-maxbackup=10
          - --audit-log-maxsize=100
          - --audit-log-path=/var/log/kubernetes/kubernetes.audit
          - --audit-log-maxage=30
          - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
          ...
          ...
          env:
          - name: aliyun_logs_audit-${cluster_id}
            value: /var/log/kubernetes/kubernetes.audit
          - name: aliyun_logs_audit-${cluster_id}_tags
            value: audit=apiserver
          - name: aliyun_logs_audit-${cluster_id}_product
            value: k8s-audit
          - name: aliyun_logs_audit-${cluster_id}_jsonfile
            value: "true"
          image: registry-vpc.cn-shenzhen.aliyuncs.com/acs/kube-apiserver:v1.20.4-aliyun.1
          ...
          ...
          volumeMounts:
          - mountPath: /var/log/kubernetes
            name: k8s-audit
          - mountPath: /etc/kubernetes/audit-policy.yaml
            name: audit-policy
            readOnly: true
          ...
          ...
        volumes:
        - hostPath:
            path: /var/log/kubernetes
            type: DirectoryOrCreate
          name: k8s-audit
        - hostPath:
            path: /etc/kubernetes/audit-policy.yaml
            type: FileOrCreate
          name: audit-policy
        ...
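After the manifests above are saved, kubelet restarts kube-apiserver automatically because it runs as a static pod. A quick sanity check, assuming the paths used in the steps above, is to confirm that audit events are being written:

```shell
# Paths from the steps above.
AUDIT_LOG=/var/log/kubernetes/kubernetes.audit
POLICY=/etc/kubernetes/audit-policy.yaml

# Print the newest audit event if the log has started to fill.
if [ -s "$AUDIT_LOG" ]; then
  tail -n 1 "$AUDIT_LOG"
else
  echo "no audit events yet at $AUDIT_LOG"
fi
```

If the log stays empty, verify that the --audit-policy-file flag points at $POLICY and that the file is mounted into the kube-apiserver pod as shown above.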

Step 3. Check whether logs are collected

  1. Log on to the Simple Log Service console.

  2. Click the name of the required project.

  3. Check whether related logs are collected to the specified Logstore in the project.

Step 4. Enable threat detection

  1. Log on to the Security Center console. In the top navigation bar, select the region of the asset that you want to manage. You can select China or Outside China.

  2. In the left-side navigation pane, choose Assets > Container.

  3. On the Cluster tab, click Self-built cluster access.

  4. Find the required self-managed Kubernetes cluster and click Edit in the Actions column.

  5. On the Enable Log Collection tab, select Enable Kubernetes Log Reporting to Detect Threats, configure the following parameters, and then click Save.

    • Region of Log Audit Service: Select the region in which you want to store logs.

    • Project of Log Audit Service: Enter the name of the project that you created in Step 1 (Install the Logtail components). Example: k8s-log-custom-sd89ehdq.

    • Logstore of Log Audit Service: Enter the name of the Logstore that is automatically created in Step 1 (Install the Logtail components). Example: audit-027b007a7dd11967a9f7e2449d8dc497.