
Simple Log Service: Get started with Log Service SDK for Go

Last Updated: Feb 27, 2026

This tutorial walks you through the core operations of the Simple Log Service (SLS) SDK for Go: creating a project and Logstore, configuring an index, writing logs, and querying the results.

Before you begin

Procedure

Step 1: Create a project

Call CreateProject to create an SLS project.

Parameters:

projectName (string, required)
  The project name. The name must be globally unique within an Alibaba Cloud region and cannot be changed after the project is created. Naming rules:
  • Can contain only lowercase letters, digits, and hyphens (-).
  • Must start and end with a lowercase letter or a digit.
  • Must be 3 to 63 characters in length.
  Example: test-project

description (string, required)
  The description of the project.
  Example: this is test
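The naming rules above can be checked client-side before calling CreateProject. A minimal sketch, assuming a regular-expression check is acceptable (validProjectName is an illustrative helper, not part of the SLS SDK):

```go
package main

import (
	"fmt"
	"regexp"
)

// projectNameRe encodes the rules above: 3 to 63 characters, only
// lowercase letters, digits, and hyphens, starting and ending with
// a lowercase letter or a digit.
var projectNameRe = regexp.MustCompile(`^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$`)

// validProjectName reports whether name satisfies the project naming rules.
func validProjectName(name string) bool {
	return projectNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"test-project", "-bad-", "ab", "Has-Upper"} {
		fmt.Printf("%-12s valid=%v\n", n, validProjectName(n))
	}
}
```

A pre-flight check like this avoids a round trip that would fail with a ParameterInvalid error.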
// Create a project.
ProjectName := "aliyun-test-project"
Description := "test"
project, err := client.CreateProject(ProjectName, Description)
if err != nil {
    if e, ok := err.(*sls.Error); ok && e.Code == "ProjectAlreadyExist" {
        log.Printf("Project : %s already created or has a global name conflict in Alibaba Cloud scope", ProjectName)
    } else {
        log.Fatalf("Create project : %s failed %v", ProjectName, err)
    }
} else {
    log.Printf("Project : %s created successfully", project.Name)
    time.Sleep(60 * time.Second)
}

Step 2: Create a Logstore

Call CreateLogStore to create a Logstore inside the project.

Parameters:

project (string, required)
  The name of the project.
  Example: ali-test-project

logstoreName (string, required)
  The name of the Logstore. The name must be unique within the project. Naming rules:
  • Can contain only lowercase letters, digits, hyphens (-), and underscores (_).
  • Must start and end with a lowercase letter or a digit.
  • Must be 3 to 63 characters in length.
  Example: my-logstore

shardCount (int, required)
  The number of shards.
  Example: 2

ttl (int, required)
  The data retention period, in days. Valid values: 1 to 3650. A value of 3650 specifies permanent retention.
  Example: 1

autoSplit (bool, optional)
  Specifies whether to enable automatic sharding. true: enables automatic sharding. false: disables automatic sharding.
  Example: true

maxSplitShard (int, optional)
  The maximum number of shards for automatic splitting. Valid values: 1 to 256. Note: This parameter is required if autoSplit is set to true.
  Example: 64
// Create a Logstore.
LogStoreName := "aliyun-test-logstore"
var ttl, shardCnt, maxSplitShard int = 3, 2, 64
var autoSplit bool = true
err = client.CreateLogStore(ProjectName, LogStoreName, ttl, shardCnt, autoSplit, maxSplitShard)
if err != nil {
    if e, ok := err.(*sls.Error); ok && e.Code == "LogStoreAlreadyExist" {
        log.Printf("LogStore : %s already created", LogStoreName)
    } else {
        log.Fatalf("Create LogStore : %s failed %v", LogStoreName, err)
    }
} else {
    log.Printf("Create logstore : %v successfully", LogStoreName)
    time.Sleep(10 * time.Second)
}

Step 3: Create an index

Call CreateIndex to enable log querying on the Logstore. The index has two parts:

  • Field index (Keys): Indexes specific fields. Each sls.IndexKey specifies the delimiter characters (Token), case sensitivity (CaseSensitive), and data type (Type).

  • Full-text index (Line): Enables full-text search across all log content. The sls.IndexLine struct specifies the delimiter characters (Token), case sensitivity, and optional field inclusion/exclusion lists (IncludeKeys, ExcludeKeys).

The Token field contains the delimiter characters used for text segmentation. SLS splits text at these delimiter characters to produce searchable terms.
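For intuition, delimiter-based segmentation behaves roughly like splitting the text at any of the configured delimiter characters. A self-contained sketch of that idea (this is an approximation for illustration, not the SDK's actual tokenizer):

```go
package main

import (
	"fmt"
	"strings"
)

// tokenize approximates delimiter-based segmentation: it splits text at
// any character in delims, mirroring Token: []string{",", ":", " "}
// in the index configuration above.
func tokenize(text string, delims string) []string {
	return strings.FieldsFunc(text, func(r rune) bool {
		return strings.ContainsRune(delims, r)
	})
}

func main() {
	terms := tokenize("loggroup idx: 6, log idx: 87", ",: ")
	fmt.Println(terms) // prints [loggroup idx 6 log idx 87]
}
```

Each resulting term becomes independently searchable, which is why a query such as `col_0 > 9000000` can match individual values inside a log field.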

// Create an index for the Logstore.
index := sls.Index{
    // Field index.
    Keys: map[string]sls.IndexKey{
        "col_0": {
            Token:         []string{" "},
            CaseSensitive: false,
            Type:          "long",
        },
        "col_1": {
            Token:         []string{",", ":", " "},
            CaseSensitive: false,
            Type:          "text",
        },
    },
    // Full-text index.
    Line: &sls.IndexLine{
        Token:         []string{",", ":", " "},
        CaseSensitive: false,
        IncludeKeys:   []string{},
        ExcludeKeys:   []string{},
    },
}
err = client.CreateIndex(ProjectName, LogStoreName, index)
if err != nil {
    if e, ok := err.(*sls.Error); ok && e.Code == "IndexAlreadyExist" {
        log.Printf("Index : already created")
    } else {
        log.Fatalf("Create Index failed %v", err)
    }
} else {
    log.Println("CreateIndex success")
    time.Sleep(60 * time.Second)
}

Step 4: Write data

Call PutLogs to write log data to the Logstore. Each request sends one log group (sls.LogGroup), which contains a batch of log entries.

Parameters:

project (string, required)
  The destination project.
  Example: aliyun-test-project

logstore (string, required)
  The destination Logstore.
  Example: aliyun-test-logstore

topic (string, optional)
  The log topic. If this parameter is not specified or is left empty, the value is set to "".
  Example: test

source (string, optional)
  The log source. If this parameter is not specified or is left empty, the IP address of the host on which the producer runs is used.
  Example: 203.0.113.10

content (slice, required)
  The logs to send. In the Go SDK, each log is an sls.Log value carried inside the sls.LogGroup.

The following example writes 10 log groups, each containing 100 logs with 10 fields (col_0 through col_9) -- 1,000 logs in total.

// Write data to the Logstore.
for loggroupIdx := 0; loggroupIdx < 10; loggroupIdx++ {
    logs := []*sls.Log{}
    for logIdx := 0; logIdx < 100; logIdx++ {
        content := []*sls.LogContent{}
        for colIdx := 0; colIdx < 10; colIdx++ {
            if colIdx == 0 {
                content = append(content, &sls.LogContent{
                    Key:   proto.String(fmt.Sprintf("col_%d", colIdx)),
                    Value: proto.String(fmt.Sprintf("%d", rand.Intn(10000000))),
                })
            } else {
                content = append(content, &sls.LogContent{
                    Key:   proto.String(fmt.Sprintf("col_%d", colIdx)),
                    Value: proto.String(fmt.Sprintf("loggroup idx: %d, log idx: %d, col idx: %d, value: %d", loggroupIdx, logIdx, colIdx, rand.Intn(10000000))),
                })
            }
        }
        // Use a distinct name so the log package is not shadowed.
        logEntry := &sls.Log{
            Time:     proto.Uint32(uint32(time.Now().Unix())),
            Contents: content,
        }
        logs = append(logs, logEntry)
    }
    loggroup := &sls.LogGroup{
        Topic:  proto.String("test"),
        Source: proto.String("203.0.113.10"),
        Logs:   logs,
    }

    err = client.PutLogs(ProjectName, LogStoreName, loggroup)
    if err != nil {
        log.Fatalf("PutLogs failed %v", err)
    }
    log.Println("PutLogs success")
    time.Sleep(time.Second)
}

Step 5: Query data

Call GetLogs to query log data. Pagination behavior depends on the statement type:

Query-only statement (for example, * and col_0 > 9000000):
  • line: valid. Maximum value: 100. Specifies the maximum number of logs to return per request.
  • offset: valid. Specifies the start position.
  • reverse: valid.
  • Pagination: set line and offset. For example, first page: line=100, offset=0; second page: line=100, offset=100.

Analytic statement (contains SQL aggregation or transformation):
  • line: set to 0.
  • offset: set to 0.
  • reverse: set to false.
  • Pagination: use LIMIT, OFFSET, and ORDER BY clauses in the SQL statement. Setting line, offset, or reverse to other values causes an error.

For more information, see Paginate query and analysis results.

// Query logs.
// For a query-only statement, the line, offset, and reverse parameters are valid.
// For an analytic statement, set line to 0, offset to 0, and reverse to false.
response, err := client.GetLogs(ProjectName, LogStoreName, "test", time.Now().Unix()-1800, time.Now().Unix(), "* and col_0 > 9000000", 100, 0, true)
if err != nil {
    log.Fatalf("GetLogs failed %v", err)
    os.Exit(1)
}
log.Printf("Get %d logs", response.Count)
logs := response.Logs
for i := range logs {
    for k, v := range logs[i] {
        log.Printf("key: %s, value: %s", k, v)
    }
    log.Println("======")
}
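The line/offset pagination rule for query-only statements reduces to simple offset arithmetic. A self-contained sketch of that arithmetic (pageOffsets is an illustrative helper; in real code each pair would feed one GetLogs call):

```go
package main

import "fmt"

// pageOffsets returns the (line, offset) pairs needed to page through
// total matching logs with pageSize logs per request. For query-only
// statements, pageSize must be at most 100.
func pageOffsets(total, pageSize int) [][2]int {
	pages := [][2]int{}
	for offset := 0; offset < total; offset += pageSize {
		pages = append(pages, [2]int{pageSize, offset})
	}
	return pages
}

func main() {
	// For 250 matching logs and a page size of 100, three requests are
	// needed, at offsets 0, 100, and 200; the last returns only 50 logs.
	for _, p := range pageOffsets(250, 100) {
		fmt.Printf("GetLogs(..., line=%d, offset=%d, ...)\n", p[0], p[1])
	}
}
```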

Complete sample code

Create a file named SLSQuickStart.go and paste the following code. This program runs all five steps end-to-end.

package main

import (
    "fmt"
    "log"
    "math/rand"
    "os"
    "time"

    sls "github.com/aliyun/aliyun-log-go-sdk"
    "github.com/gogo/protobuf/proto"
)

func main() {
    // The endpoint of Simple Log Service. This example uses the endpoint of the China (Hangzhou) region.
    // Replace the value with the actual endpoint. Format: {region-id}.log.aliyuncs.com
    Endpoint := "cn-hangzhou.log.aliyuncs.com"

    // Obtain the AccessKey ID and AccessKey Secret from environment variables.
    AccessKeyId := os.Getenv("ALIBABA_CLOUD_ACCESS_KEY_ID")
    AccessKeySecret := os.Getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET")
    // The temporary security token for the RAM (Resource Access Management) user role.
    // An empty value indicates that no temporary security token is used.
    SecurityToken := ""
    // Create a Simple Log Service client.
    provider := sls.NewStaticCredentialsProvider(AccessKeyId, AccessKeySecret, SecurityToken)
    client := sls.CreateNormalInterfaceV2(Endpoint, provider)

    // Create a project.
    ProjectName := "aliyun-test-project"
    Description := "test"
    project, err := client.CreateProject(ProjectName, Description)
    if err != nil {
        if e, ok := err.(*sls.Error); ok && e.Code == "ProjectAlreadyExist" {
            log.Printf("Project : %s already created or has a global name conflict in Alibaba Cloud scope", ProjectName)
        } else {
            log.Fatalf("Create project : %s failed %v", ProjectName, err)
        }
    } else {
        log.Printf("Project : %s created successfully", project.Name)
        time.Sleep(60 * time.Second)
    }

    // Create a Logstore.
    LogStoreName := "aliyun-test-logstore"
    err = client.CreateLogStore(ProjectName, LogStoreName, 3, 2, true, 64)
    if err != nil {
        if e, ok := err.(*sls.Error); ok && e.Code == "LogStoreAlreadyExist" {
            log.Printf("LogStore : %s already created", LogStoreName)
        } else {
            log.Fatalf("Create LogStore : %s failed %v", LogStoreName, err)
        }
    } else {
        log.Printf("Create logstore : %v successfully", LogStoreName)
        time.Sleep(10 * time.Second)
    }

    // Create an index for the Logstore.
    index := sls.Index{
        // Field index.
        Keys: map[string]sls.IndexKey{
            "col_0": {
                Token:         []string{" "},
                CaseSensitive: false,
                Type:          "long",
            },
            "col_1": {
                Token:         []string{",", ":", " "},
                CaseSensitive: false,
                Type:          "text",
            },
        },
        // Full-text index.
        Line: &sls.IndexLine{
            Token:         []string{",", ":", " "},
            CaseSensitive: false,
            IncludeKeys:   []string{},
            ExcludeKeys:   []string{},
        },
    }
    err = client.CreateIndex(ProjectName, LogStoreName, index)
    if err != nil {
        if e, ok := err.(*sls.Error); ok && e.Code == "IndexAlreadyExist" {
            log.Printf("Index : already created")
        } else {
            log.Fatalf("Create Index failed %v", err)
        }
    } else {
        log.Println("CreateIndex success")
        time.Sleep(60 * time.Second)
    }

    // Write data to the Logstore.
    for loggroupIdx := 0; loggroupIdx < 10; loggroupIdx++ {
        logs := []*sls.Log{}
        for logIdx := 0; logIdx < 100; logIdx++ {
            content := []*sls.LogContent{}
            for colIdx := 0; colIdx < 10; colIdx++ {
                if colIdx == 0 {
                    content = append(content, &sls.LogContent{
                        Key:   proto.String(fmt.Sprintf("col_%d", colIdx)),
                        Value: proto.String(fmt.Sprintf("%d", rand.Intn(10000000))),
                    })
                } else {
                    content = append(content, &sls.LogContent{
                        Key:   proto.String(fmt.Sprintf("col_%d", colIdx)),
                        Value: proto.String(fmt.Sprintf("loggroup idx: %d, log idx: %d, col idx: %d, value: %d", loggroupIdx, logIdx, colIdx, rand.Intn(10000000))),
                    })
                }
            }
            // Use a distinct name so the log package is not shadowed.
            logEntry := &sls.Log{
                Time:     proto.Uint32(uint32(time.Now().Unix())),
                Contents: content,
            }
            logs = append(logs, logEntry)
        }
        loggroup := &sls.LogGroup{
            Topic:  proto.String("test"),
            Source: proto.String("203.0.113.10"),
            Logs:   logs,
        }

        err = client.PutLogs(ProjectName, LogStoreName, loggroup)
        if err != nil {
            log.Fatalf("PutLogs failed %v", err)
        }
        log.Println("PutLogs success")
        time.Sleep(time.Second)
    }

    // Query logs.
    // For a query-only statement, the line, offset, and reverse parameters are valid.
    // The line parameter specifies the maximum number of logs to return per query (max 100).
    // Use line and offset for paging. Example: first page line=100, offset=0;
    // second page line=100, offset=100.
    //
    // For an analytic statement, set line to 0, offset to 0, and reverse to false.
    // Use LIMIT, OFFSET, and ORDER BY in the SQL statement for pagination.
    // Otherwise, an error is reported.
    //
    // For more information, see Paginate query and analysis results.
    response, err := client.GetLogs(ProjectName, LogStoreName, "test", time.Now().Unix()-1800, time.Now().Unix(), "* and col_0 > 9000000", 100, 0, true)
    if err != nil {
        log.Fatalf("GetLogs failed %v", err)
    }
    log.Printf("Get %d logs", response.Count)
    logs := response.Logs
    for i := range logs {
        for k, v := range logs[i] {
            log.Printf("key: %s, value: %s", k, v)
        }
        log.Println("======")
    }
}

For more sample code, see Aliyun Log Go SDK.

Expected output

When the program runs successfully, the output looks similar to the following:

Project : aliyun-test-project created successfully
Create logstore : aliyun-test-logstore successfully
CreateIndex success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
PutLogs success
Get 61 logs
key: source, value: 203.0.113.10
key: time, value: 1627282116
key: col_0, value: 9886757
key: col_1, value: loggroup idx: 6, log idx: 87, col idx: 1, value: 2673724
key: col_2, value: loggroup idx: 6, log idx: 87, col idx: 2, value: 5822012
key: col_8, value: loggroup idx: 6, log idx: 87, col idx: 8, value: 3996746
key: topic, value: test
key: col_9, value: loggroup idx: 6, log idx: 87, col idx: 9, value: 7646111
key: col_3, value: loggroup idx: 6, log idx: 87, col idx: 3, value: 8872632
key: col_4, value: loggroup idx: 6, log idx: 87, col idx: 4, value: 1839836
key: col_5, value: loggroup idx: 6, log idx: 87, col idx: 5, value: 6967415
key: col_6, value: loggroup idx: 6, log idx: 87, col idx: 6, value: 5872057
key: col_7, value: loggroup idx: 6, log idx: 87, col idx: 7, value: 3227909
======
......

What's next