
Lindorm: Import incremental data from SLS

Last Updated: Mar 28, 2026

You can use Lindorm Tunnel Service (LTS) to stream incremental log data from a Simple Log Service (SLS) Loghub logstore into a Lindorm wide table in real time.

Important

This feature was unpublished on June 16, 2023. LTS instances purchased after that date cannot use it. If your LTS instance was purchased before June 16, 2023, the feature remains available.

This feature supports:

  • Real-time streaming from an SLS logstore to a Lindorm wide table

  • Configurable consumer parallelism to handle high-volume logstores

  • Direct field mapping and expression-based value transformation (Jtwig syntax)

  • Primary key declaration per column

Prerequisites

Before you begin, ensure that you have:

  • An LTS instance purchased before June 16, 2023 (this feature is unavailable on later instances)

  • A source SLS logstore that contains the incremental log data

  • A destination Lindorm wide table created by using Lindorm SQL

Supported destination table types

Only tables created using Lindorm SQL are supported as replication destinations.

Set up a real-time data replication channel

  1. On the LTS action page, click Import Lindorm/HBase > SLS Real-time Data Replication.

  2. On the SLS Real-time Data Replication page, click Create Task.

  3. Enter a Channel Name, select the source and destination clusters, and specify the tables to synchronize or migrate.

  4. Click Create. After the channel is created, its details are displayed.

Configure channel parameters

The channel configuration is a JSON object with a reader section (source) and a writer section (destination).

{
  "reader": {
    "columns": [
      "__client_ip__",
      "C_Source",
      "id",
      "name"
    ],
    "consumerSize": 2,
    "logstore": "LTS-test"
  },
  "writer": {
    "columns": [
      {
        "name": "col1",
        "value": "{{ concat('xx', name) }}"
      },
      {
        "name": "col2",
        "value": "__client_ip__"
      },
      {
        "isPk": true,
        "name": "id",
        "value": "id"
      }
    ],
    "tableName": "default.sls"
  }
}
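Before you submit a configuration, it can help to check its shape locally. The following Python sketch validates the key names used in the example above (reader.logstore, reader.columns, writer.columns, isPk, tableName); it is illustrative only, not an official LTS tool.

```python
import json

# Minimal sketch: sanity-check the shape of a channel configuration before
# pasting it into the LTS console. Key names follow the example above;
# this validator is an illustration, not an official LTS API.
def validate_channel_config(config: dict) -> list:
    errors = []
    reader = config.get("reader", {})
    writer = config.get("writer", {})

    if not reader.get("logstore"):
        errors.append("reader.logstore is required")
    if not reader.get("columns"):
        errors.append("reader.columns must be a non-empty list")

    columns = writer.get("columns", [])
    if not columns:
        errors.append("writer.columns must be a non-empty list")
    if not any(c.get("isPk") for c in columns):
        errors.append("at least one writer column must set isPk")
    if "." not in writer.get("tableName", ""):
        errors.append("writer.tableName must use <schema>.<table> format")
    return errors

config = json.loads("""
{
  "reader": {"columns": ["id", "name"], "consumerSize": 2, "logstore": "LTS-test"},
  "writer": {
    "columns": [
      {"name": "col1", "value": "{{ concat('xx', name) }}"},
      {"isPk": true, "name": "id", "value": "id"}
    ],
    "tableName": "default.sls"
  }
}
""")
print(validate_channel_config(config))  # an empty list means the shape looks OK
```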

Reader parameters

  • columns: List of Loghub field names to read from the logstore.

  • consumerSize: Number of consumers subscribing to the Loghub data. Default: 1.

  • logstore: Name of the source logstore.

Writer parameters

The writer.columns array maps source fields to destination table columns. Each entry is an object with the following fields:

  • name: Column name in the destination Lindorm wide table.

  • value: Source field name to map, or a Jtwig expression that transforms the value.

  • isPk: Set to true to mark this column as the primary key. No column family is required for primary key columns.

The tableName field specifies the destination table in <schema>.<table> format (for example, default.sls).

Column mapping example:

The following table shows how the example configuration maps source fields to destination columns:

  • name → col1, via {{ concat('xx', name) }}: concatenates the prefix xx with the name field.

  • __client_ip__ → col2, direct mapping: maps the field value without transformation.

  • id → id, direct mapping: primary key column (isPk: true).
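To make the mapping concrete, the Python sketch below emulates what the example writer configuration does to a single Loghub record. LTS applies these mappings server-side; this code (with a made-up sample record) only mirrors the semantics so the resulting row can be inspected.

```python
# Hedged sketch: emulate how the example writer mapping turns one Loghub
# record into a destination row. The record values are hypothetical.
record = {"__client_ip__": "192.0.2.1", "C_Source": "web", "id": "42", "name": "alice"}

def map_row(rec: dict) -> dict:
    return {
        "col1": "xx" + rec["name"],     # {{ concat('xx', name) }}
        "col2": rec["__client_ip__"],   # direct mapping
        "id": rec["id"],                # primary key column (isPk: true)
    }

print(map_row(record))  # {'col1': 'xxalice', 'col2': '192.0.2.1', 'id': '42'}
```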

Expression syntax

The value field supports simple Jtwig expressions for value transformation.

{
  "name": "hhh",
  "value": "{{ concat(title, id) }}"
}
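Jtwig evaluation happens inside LTS, but the effect of the expression above can be mirrored in a few lines of Python. The concat function and the record fields below are assumptions made for illustration.

```python
# Illustrative only: a Python equivalent of the Jtwig expression
# "{{ concat(title, id) }}". The record values are hypothetical.
def concat(*parts):
    return "".join(str(p) for p in parts)

record = {"title": "report-", "id": 7}
value = concat(record["title"], record["id"])
print(value)  # report-7
```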