Simple Log Service: Overview

Last Updated: Nov 16, 2023

Alibaba Cloud Simple Log Service is compatible with Kafka consumer groups. You can use native Kafka clients to read data in Simple Log Service.

Concepts in Kafka and Simple Log Service

Kafka | Simple Log Service | Description
--- | --- | ---
Topic | Logstore | Topics in Kafka are used to distinguish between different types of messages. Logstores in Simple Log Service are used to collect, store, and query data.
Partition | Shard | Data is stored in partitions or shards. Partitions are continuous: you can add partitions, but you cannot remove them. Shards can be split or merged.
Offset | Cursor | An offset is the sequence ID of a message in a partition. A cursor is a relative offset in Simple Log Service that you can use to locate a group of logs.

Permission configuration

If you want to use a Resource Access Management (RAM) user to consume data, you must attach the AliyunLogReadOnlyAccess policy to the RAM user. This policy grants read-only permissions on Simple Log Service. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service.

If you want to implement finer-grained access control, you can create a custom policy and attach it to the RAM user. In the following example, replace Project name with the name of your project:

{
    "Version": "1",
    "Statement": [
        {
            "Action": "log:GetProject",
            "Resource": "acs:log:*:*:project/Project name",
            "Effect": "Allow"
        },
        {
            "Action": [
                "log:GetLogStore",
                "log:ListShards",
                "log:GetCursorOrData"
            ],
            "Resource": "acs:log:*:*:project/Project name/logstore/*",
            "Effect": "Allow"
        }
    ]
}

Examples

You can use different Kafka SDKs to consume data in Simple Log Service based on consumer groups.
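
The following is a minimal Java consumer sketch. The endpoint, port, and SASL settings mirror the sample code later in this topic; the project name, Logstore name, and consumer group ID are placeholders that you must replace. The sketch assumes that the Logstore name is used as the Kafka topic, in line with the concept mapping above.

package org.example;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerExample {

    public static void main(String[] args) {
        // Placeholders: replace with your own project, Logstore, and endpoint.
        String project = "your-project";
        String logstore = "your-logstore";
        String endpoint = "cn-hangzhou.log.aliyuncs.com";
        String port = "10012";

        // Read the AccessKey pair from environment variables to avoid hard-coding credentials.
        String accessKeyID = System.getenv("SLS_ACCESS_KEY_ID");
        String accessKeySecret = System.getenv("SLS_ACCESS_KEY_SECRET");

        Properties props = new Properties();
        // The bootstrap server is the project endpoint.
        props.put("bootstrap.servers", project + "." + endpoint + ":" + port);
        // Only SASL_SSL connections are supported. The username is the project name, and
        // the password is the AccessKey ID and AccessKey secret joined by a number sign (#).
        props.put("security.protocol", "sasl_ssl");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"" +
                        project + "\" password=\"" + accessKeyID + "#" + accessKeySecret + "\";");

        // The consumer group ID is a placeholder for this sketch.
        props.put("group.id", "kafka-test");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The Logstore name serves as the Kafka topic.
            consumer.subscribe(Collections.singletonList(logstore));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s, value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}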

Formats of Kafka data obtained after consumption

  • Scenario 1: If a log in a Logstore contains a single field, such as the content field, the key obtained after you use Kafka to consume the log is content, and the value is the value of the content field.

  • Scenario 2: If a log in a Logstore contains multiple fields, such as the url and method fields, the key obtained after you use Kafka to consume the log is null, and the value is JSON-formatted content, such as {"url" : "/", "method" : "get"}.

  • Scenario 3: If a log in a Logstore contains the kafka_topic, kafka_partition, kafka_offset, key, and value fields, the consumption program determines that the log was imported from Kafka. For more information, see Import data from Kafka to Simple Log Service. During consumption, the key and value fields are mapped to a key and a value in Kafka. This ensures that the data obtained after consumption is consistent with the data before import. For a sketch that distinguishes these formats in code, see the example after the following note.

Note
  • If a log contains a single field, the value obtained after consumption is a single field value, as described in Scenario 1. If a log contains multiple fields, the value obtained after consumption is JSON-formatted content, as described in Scenario 2.

  • If you specify JSON as the format of the values obtained after consumption and you want to consume logs in Scenario 1, make sure that the field values in the logs are in the JSON format. Otherwise, a consumption error occurs.
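
The following sketch shows one way for a consumption program to branch on these formats. The use of the Jackson library for JSON parsing is an assumption made for illustration, not a requirement of Simple Log Service.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class RecordFormatHandler {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void handle(ConsumerRecord<String, String> record) throws Exception {
        if (record.key() != null) {
            // Scenario 1 (or Scenario 3): the key is the field name of a single-field log,
            // such as content, or the original Kafka key for data imported from Kafka.
            System.out.printf("field %s = %s%n", record.key(), record.value());
        } else {
            // Scenario 2: the value is a JSON object such as {"url": "/", "method": "get"}.
            JsonNode fields = MAPPER.readTree(record.value());
            fields.fieldNames().forEachRemaining(
                    name -> System.out.printf("field %s = %s%n", name, fields.get(name).asText()));
        }
    }
}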

Latency monitoring on consumer groups

You can view the status of data consumption and configure alerts in the Simple Log Service console.

Billing

  • If your Logstore uses the pay-by-feature billing mode, you are charged for multiple billable items, such as read and write traffic and requests, when you use Kafka to consume data. For more information, see Billable items of pay-by-feature.

  • If your Logstore uses the pay-by-ingested-data billing mode, you are not charged when you consume data. For more information, see Billable items of pay-by-ingested-data.

Limits

  • Simple Log Service allows you to use Kafka 2.1.0 to consume data.

  • Each Kafka consumer group can consume data in up to 50 Simple Log Service Logstores.

  • Up to 15 Kafka consumer groups can consume data in a Simple Log Service Logstore.

    Note

    When you use Kafka to consume data, your read traffic and the number of Simple Log Service consumer groups are limited by the quotas specified by Simple Log Service. The number of Kafka consumer groups is not limited. For more information, see Data read and write.

  • Only the SASL_SSL protocol is supported for connections. This helps ensure the security of data transmission.

  • An offset is generated by encoding a cursor in Simple Log Service. The offset cannot be used to calculate consumption latency.

  • If a log group contains more than 100,000 logs, the log group is truncated when you use Kafka for consumption, and only 100,000 logs are retained.

  • If you delete and recreate a Logstore after you use Kafka to consume data in the Logstore, you may encounter exceptions when you consume data in the Logstore again. In this case, you must manually run code to delete related Kafka consumer groups. Sample code:

    Important

    Simple Log Service creates mappings between shards and partitions when you use Kafka to consume data in a Logstore. If you do not delete the mappings before you recreate the Logstore, Simple Log Service obtains the existing mappings when you consume data in the Logstore again. As a result, exceptions occur.

    package org.example;
    
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.DeleteConsumerGroupsResult;
    import org.apache.kafka.clients.admin.KafkaAdminClient;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    
    public class Main {
    
        public static void main(String[] args) {
    
            Properties props = new Properties();
            String project = "project";
            // The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in Simple Log Service is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console.
            // In this example, the AccessKey ID and AccessKey secret are read from environment variables. You can also save them to a configuration file.
            // To prevent AccessKey pair leaks, we recommend that you do not hard-code the AccessKey ID or AccessKey secret.
            String accessKeyID = System.getenv("SLS_ACCESS_KEY_ID");
            String accessKeySecret = System.getenv("SLS_ACCESS_KEY_SECRET");
            String endpoint = "cn-hangzhou.log.aliyuncs.com";
            String port = "10012";
    
            // You can use an internal endpoint and port to access Simple Log Service. An internal network link provides higher quality and security than a public network link.
            //String endpoint = "cn-hangzhou-intranet.log.aliyuncs.com";
            //String port = "10011";
            String hosts = project + "." + endpoint + ":" + port;
    
            props.put("bootstrap.servers", hosts);
            props.put("security.protocol", "sasl_ssl");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"" +
                            project + "\" password=\"" + accessKeyID + "#" + accessKeySecret + "\";");
    
            // The names of the consumer groups that you want to delete.
            List<String> deleteGroupId = new ArrayList<>();
            deleteGroupId.add("kafka-test-112");
    
            // Use try-with-resources so that the client is closed and its threads are released.
            try (AdminClient client = KafkaAdminClient.create(props)) {
                DeleteConsumerGroupsResult deleteConsumerGroupsResult = client.deleteConsumerGroups(deleteGroupId);
                // Block until the deletion completes or times out.
                deleteConsumerGroupsResult.all().get(10, TimeUnit.SECONDS);
            } catch (InterruptedException | ExecutionException | TimeoutException e) {
                e.printStackTrace();
            }
        }
    }