This topic describes how to use Log Service to collect stdout files and log files from application containers in a serverless Kubernetes (ASK) cluster.

Prerequisites

Step 1: Configure log collection by using a YAML template

  1. Log on to the Container Service for Kubernetes (ACK) console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, click the name of a cluster or click Details in the Actions column.
  4. In the left-side navigation pane of the details page, choose Workloads > Deployments.
  5. On the Deployments page, click Create from YAML in the upper-right corner.
  6. Create a custom template and copy the following content to the template.

    YAML templates comply with the Kubernetes syntax. You must specify environment variables in the env field to collect log files from containers. The following code block is an example of a Deployment for collecting log files:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: alpine
      name: alpine
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: alpine
      template:
        metadata:
          labels:
            app: alpine
        spec:
          containers:
          - image: alpine
            imagePullPolicy: Always
            args:
            - ping
            - 127.0.0.1
            name: alpine
            env:
            # Specify environment variables. 
            # Specify a Log Service project. If you want to use the default Log Service project for the cluster, do not specify the variable. 
            - name: aliyun_logs_test-stdout_project
              value: k8s-log-xxx
            - name: aliyun_logs_test-file_project
              value: k8s-log-xxx
            # Specify a machine group. If you want to use the default machine group of the Log Service project, do not specify the variable. 
            - name: aliyun_logs_test-stdout_machinegroup
              value: k8s-group-app-alpine
            - name: aliyun_logs_test-file_machinegroup
              value: k8s-group-app-alpine
            # Specify a Logstore that is used to store the collected stdout and stderr. In this example, the test-stdout Logstore is used. 
            - name: aliyun_logs_test-stdout
              value: stdout
            # Specify a Logstore that is used to store the log files collected from the /log/*.log directory. In this example, the test-file Logstore is used. 
            - name: aliyun_logs_test-file
              value: /log/*.log
            ######### The retention period, in days, of the log data that is collected to the Logstore. This variable takes effect only for the specified Logstore. ###########
            - name: aliyun_logs_test-stdout_ttl
              value: "7"
            ######### The number of shards in a Logstore. This variable takes effect only for the specified Logstore. ###########
            - name: aliyun_logs_test-stdout_shard
              value: "2"

    Configure the following variables in sequence based on your requirements:

    • Configure log collection by specifying environment variables. Make sure that all specified environment variables use the aliyun_logs_ prefix. Add log collection configurations in the following format:
      - name: aliyun_logs_test-stdout
        value: stdout

      In the preceding YAML template, two log collection configurations are added by using environment variables. The aliyun_logs_test-stdout variable indicates that a Logstore named test-stdout is created and the stdout of the containers is collected to that Logstore. The aliyun_logs_test-file variable indicates that a Logstore named test-file is created and the log files that match /log/*.log are collected to that Logstore.

    • If you set the value of the variable to a file path instead of stdout, the log files in the specified path are collected rather than the stdout of the container.
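      For example, a minimal sketch (the test-file name and the /log/*.log path match the preceding template):
      - name: aliyun_logs_test-file
        value: /log/*.log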
  7. After you modify the YAML template, click Create to submit the configurations.
    After the Deployment is created, you can run the following command to query the status of pods:
    kubectl get pods -o wide

    Expected output:

    NAME                      READY     STATUS    RESTARTS   AGE       IP             NODE
    alpine-76d978dbdd-g****   1/1       Running   0          21m       10.1.XX.XX   viking-c619c41329e624975a7bb50527180****
    alpine-76d978dbdd-v****   1/1       Running   0          21m       10.1.XX.XX   viking-c619c41329e624975a7bb50527180****
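
    To further check that the log collection variables were applied to the pod template, you can also describe the Deployment. The alpine name comes from the sample template in this topic:
    kubectl describe deployment alpine | grep aliyun_logs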

Step 2: Configure advanced settings in the env field

You can specify various environment variables to configure log collection. The following list describes the variables.
Notice: This configuration method is not applicable to edge computing scenarios.

Variable: aliyun_logs_{key}
Description:
  • Required. {key} can contain only lowercase letters, digits, and hyphens (-).
  • If the specified aliyun_logs_{key}_logstore variable does not exist, a Logstore named {key} is created.
  • To collect the stdout of the container, set the value to stdout. To collect log files, set the value to a path inside the container.
Example:
  - name: aliyun_logs_catalina
    value: stdout
  - name: aliyun_logs_access-log
    value: /var/log/nginx/access.log

Variable: aliyun_logs_{key}_tags
Description: Optional. This variable adds tags to the collected log data. The value must be in the {tag-key}={tag-value} format.
Example:
  - name: aliyun_logs_catalina_tags
    value: app=catalina

Variable: aliyun_logs_{key}_project
Description: Optional. This variable specifies a project in Log Service. By default, the project that you specified when you created the cluster is used.
Example:
  - name: aliyun_logs_catalina_project
    value: my-k8s-project
Remarks: The project must reside in the same region as Logtail.

Variable: aliyun_logs_{key}_logstore
Description: Optional. This variable specifies a Logstore in Log Service. By default, the Logstore is named {key}.
Example:
  - name: aliyun_logs_catalina_logstore
    value: my-logstore

Variable: aliyun_logs_{key}_shard
Description: Optional. This variable specifies the number of shards in the Logstore. Valid values: 1 to 10. Default value: 2.
Example:
  - name: aliyun_logs_catalina_shard
    value: "4"

Variable: aliyun_logs_{key}_ttl
Description: Optional. This variable specifies the number of days for which log data is retained. Valid values: 1 to 3650.
  • To retain log data permanently, set the value to 3650.
  • Default value: 90.
Example:
  - name: aliyun_logs_catalina_ttl
    value: "3650"

Variable: aliyun_logs_{key}_machinegroup
Description: Optional. This variable specifies the machine group in which the application is deployed. By default, the machine group in which Logtail is deployed is used.
Example:
  - name: aliyun_logs_catalina_machinegroup
    value: my-machine-group
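
For reference, the following snippet is a minimal sketch that combines several of these variables in a single container's env field. The catalina, my-k8s-project, my-logstore, my-machine-group, and app=catalina values are the placeholder examples from the preceding descriptions, not values that you must use:

    env:
    # Collect the stdout of the container and store it in the my-logstore Logstore of the my-k8s-project project.
    - name: aliyun_logs_catalina
      value: stdout
    - name: aliyun_logs_catalina_project
      value: my-k8s-project
    - name: aliyun_logs_catalina_logstore
      value: my-logstore
    # Add a tag to the collected log data.
    - name: aliyun_logs_catalina_tags
      value: app=catalina
    # Create the Logstore with 4 shards and retain the data for 3650 days.
    - name: aliyun_logs_catalina_shard
      value: "4"
    - name: aliyun_logs_catalina_ttl
      value: "3650"
    # Apply the collection configuration to a custom machine group instead of the default machine group.
    - name: aliyun_logs_catalina_machinegroup
      value: my-machine-group
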
  • Scenario 1: Collect log data from multiple applications and store the data in the same Logstore

    In this scenario, set the aliyun_logs_{key}_logstore variable. The following example shows how to collect the stdout of two applications and store the output in the stdout-logstore Logstore.

    Configure the following environment variables for Application 1:
    ######### Configure environment variables. ###########
        - name: aliyun_logs_app1-stdout
          value: stdout
        - name: aliyun_logs_app1-stdout_logstore
          value: stdout-logstore
    Configure the following environment variables for Application 2:
    ######### Configure environment variables. ###########
        - name: aliyun_logs_app2-stdout
          value: stdout
        - name: aliyun_logs_app2-stdout_logstore
          value: stdout-logstore
  • Scenario 2: Collect log data from different applications and store the data in different projects
    In this scenario, perform the following steps:
    1. Create a machine group in each project and set the machine group ID in the following format: k8s-group-{cluster-id}, where {cluster-id} is the ID of the cluster. You can customize the machine group name.
    2. Specify the project, Logstore, and the created machine group in the environment variables for each application.
      ######### Configure environment variables. ###########
          - name: aliyun_logs_app1-stdout
            value: stdout
          - name: aliyun_logs_app1-stdout_project
            value: app1-project
          - name: aliyun_logs_app1-stdout_logstore
            value: app1-logstore
          - name: aliyun_logs_app1-stdout_machinegroup
            value: app1-machine-group
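      The corresponding configuration for a second application looks similar and points to its own project, Logstore, and machine group. The app2-stdout, app2-project, app2-logstore, and app2-machine-group names below are assumed placeholders that follow the same pattern:
      ######### Configure environment variables. ###########
          - name: aliyun_logs_app2-stdout
            value: stdout
          - name: aliyun_logs_app2-stdout_project
            value: app2-project
          - name: aliyun_logs_app2-stdout_logstore
            value: app2-logstore
          - name: aliyun_logs_app2-stdout_machinegroup
            value: app2-machine-group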

Step 3: View log data

  1. Log on to the Log Service console.
  2. In the Projects section, click the project that is associated with the Kubernetes cluster to go to the Logstores tab. By default, the project name is in the format of k8s-log-{Kubernetes cluster ID}.
  3. In the Logstore list, find the Logstore that you specified when you configured log collection, click the navigation icon, and then select Search & Analysis from the drop-down list.
    In this example, find the test-stdout Logstore, click the navigation icon, and then select Search & Analysis from the drop-down list. On the page that appears, you can view the stdout that is collected from the elastic container instances.