
AnalyticDB:Synchronize SLS data via the APS data synchronization feature

Last Updated: Mar 28, 2026

Stream log data from Simple Log Service (SLS) into an AnalyticDB for MySQL cluster in real time using the AnalyticDB Pipeline Service (APS). APS ingests all SLS records written after a start offset you choose, enabling real-time log analysis without writing custom data pipeline code.

Key concepts

  • Data source: The SLS Logstore configuration that APS reads from. Created once and reused across multiple sync tasks.
  • Data link: The active pipeline that connects a data source to a destination table and runs the incremental synchronization.
  • Sync task: The running job that executes a data link. Start, pause, or delete it from the Actions column.
  • Start offset: The point in time from which the sync task begins consuming SLS data. Records before this timestamp are skipped.
  • Dirty data: A record whose source field type does not match the destination table column type, causing a conversion failure.
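
The start-offset behavior can be pictured as a simple time filter over the log stream. The sketch below is illustrative only: the field names ("ts", "msg") and the at-or-after boundary are assumptions, not the SLS record schema or APS's exact boundary semantics.

```python
from datetime import datetime, timezone

# The configured start offset; records before it are skipped.
start_offset = datetime(2024, 4, 9, 13, 10, tzinfo=timezone.utc)

# Hypothetical log records with a timestamp field.
records = [
    {"ts": datetime(2024, 4, 9, 13, 5, tzinfo=timezone.utc), "msg": "skipped"},
    {"ts": datetime(2024, 4, 9, 13, 15, tzinfo=timezone.utc), "msg": "consumed"},
]

# Only records at or after the offset enter the sync task.
consumed = [r for r in records if r["ts"] >= start_offset]
print([r["msg"] for r in consumed])  # ['consumed']
```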

Prerequisites

Before you begin, make sure you have:

  • An AnalyticDB for MySQL cluster and a database account for it.
  • An SLS project and Logstore in the same region as the cluster.

Limits

  • A single destination table can synchronize data from only one Logstore. To ingest data from multiple Logstores, create a separate destination table for each.

Billing

Elastic resources are billed on a pay-as-you-go basis. Fees are calculated based on the number of AnalyticDB compute units (ACUs) used by the data link. For details, see Pricing.

Choose a console

Select a console based on whether you need cross-account access:

  • SLS data is in the same Alibaba Cloud account as your cluster: use the SLS console. You create the data link only; APS auto-creates the data source.
  • SLS data is in a different Alibaba Cloud account: use the AnalyticDB for MySQL console. You configure RAM authorization, create the data source, then create the data link.

Option 1: Create a sync task in the SLS console

Step 1: Create a data source and a data link

  1. Log on to the Simple Log Service console.

  2. In the Project list, click the target project. In the left navigation pane, open the Log Storage tab, then expand the target Logstore.

  3. Click Data Processing > Export, then click the + icon next to AnalyticDB.

  4. In the Shipping Note dialog box, select Create in AnalyticDB for MySQL Console.

  5. On the AnalyticDB for MySQL Log Synchronization page, configure parameters across three tabs.

    Source and destination settings

    • Job Name: Name of the data link. Auto-generated from the data source type and current time. Rename as needed.
    • Simple Log Service Project: The source SLS project.
    • Simple Log Service Logstore: The source SLS Logstore.
    • Destination AnalyticDB for MySQL Cluster: The destination AnalyticDB for MySQL cluster.
    • AnalyticDB for MySQL Account: The database account of the cluster.
    • AnalyticDB for MySQL Password: The password of the database account.

    Destination database and table settings

    • Database Name: The name of the destination database in the cluster.
    • Table Name: The name of the destination table in the cluster.
    • Source Data Preview: Click View Latest 10 Logstore Data Entries to preview the 10 most recent records.
    • Schema Field Mapping: AnalyticDB for MySQL automatically populates the Destination Table Field and Source Field columns. If a mapping is incorrect, update the Source Field manually. For example, if the cluster table column is name but the SLS field is user_name, change the Source Field from name to user_name.

    Synchronization settings

    • Start Offset: The point in time from which the sync task starts consuming data. For example, setting this to 2024-04-09 13:10 starts from the first record after 13:10 on April 9, 2024.
    • Dirty Data Processing Mode: Controls behavior when a source field value cannot be converted to the destination column type (for example, the source value is abc but the destination type is int). Stop Synchronization (default) halts the task so you can fix the type mismatch. Treat as Null writes NULL for the failing field and continues syncing the remaining fields. Use Treat as Null when occasional type mismatches are acceptable and continuous ingestion is more important than strict type fidelity.
    • Convert Unix Timestamp into Datetime: Enable this if an SLS field contains a Unix timestamp (for example, 1710604800) and the destination column type is DATETIME or TIMESTAMP. After enabling, select the precision: Timestamp Accurate to Seconds, Timestamp Accurate to Milliseconds, or Timestamp Accurate to Microseconds.
    • Job Resource Group: The job-specific resource group to run the incremental synchronization. Required for Enterprise Edition, Basic Edition, and Data Lakehouse Edition only.
    • ACUs for Incremental Synchronization: The initial number of ACUs for the sync task. The starting value is fixed at 1 ACU; the maximum equals the capacity of the selected resource group. APS auto-scales ACUs based on workload, up to 64 or down to 1. Required for Enterprise Edition, Basic Edition, and Data Lakehouse Edition only.
  6. Click Submit.

APS creates the data source and data link automatically, then redirects you to the Simple Log Service/Kafka Data Synchronization page in the AnalyticDB for MySQL console.
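The two dirty-data modes described in the synchronization settings can be sketched as a single conversion step with a fallback. This is not the APS implementation; the function name and signature are illustrative assumptions.

```python
def convert(value, target_type, treat_as_null=False):
    """Convert a source field value to the destination column type.

    treat_as_null=False mirrors Stop Synchronization: a mismatch raises.
    treat_as_null=True mirrors Treat as Null: a mismatch becomes None (NULL).
    """
    try:
        return target_type(value)
    except (TypeError, ValueError):
        if treat_as_null:
            return None  # Treat as Null: write NULL for this field, keep going
        raise            # Stop Synchronization: halt so the mismatch can be fixed

print(convert("123", int))                      # 123
print(convert("abc", int, treat_as_null=True))  # None
# convert("abc", int) would raise ValueError, mirroring Stop Synchronization.
```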

Step 2: Start the sync task

  1. In the Actions column of the data link, click Start.

  2. Click Query in the upper-right corner. When the status shows Running, the sync task is active.

Step 3: Manage the sync task

From the Actions column, perform the following operations:

  • Start: Starts the sync task.
  • View Details: Shows the source and destination configurations, run logs, and monitoring metrics.
  • Edit: Updates the start offset, field mappings, and other settings.
  • Pause: Pauses the sync task. Click Start to resume from the offset where it paused.
  • Delete: Permanently deletes the sync task. This action cannot be undone.

Option 2: Create a sync task in the AnalyticDB for MySQL console

Use this option when the SLS data is in a different Alibaba Cloud account from your cluster.

Step 1: (Optional) Configure RAM authorization

Skip this step if you are syncing SLS data from the same Alibaba Cloud account. Go to Step 2: Create a data source.

To sync SLS data from another Alibaba Cloud account, create a RAM role in the source account, grant it the required permissions, and update its trust policy.

  1. In the source account, create a RAM role for a trusted Alibaba Cloud account.

    When configuring Principal Name, select Other Account and enter the ID of the Alibaba Cloud account that owns your AnalyticDB for MySQL cluster. To find this ID, log on to the Account Center and check the Account ID on the Security Settings page.
  2. Using fine-grained authorization, grant the AliyunAnalyticDBAccessingLogRolePolicy permission to the RAM role.

  3. Modify the trust policy of the RAM role to allow the AnalyticDB for MySQL cluster in the specified account to assume the role:

    {
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Effect": "Allow",
          "Principal": {
            "RAM": [
                "acs:ram::<Alibaba Cloud account ID>:root"
            ],
            "Service": [
                "<Alibaba Cloud account ID>@ads.aliyuncs.com"
            ]
          }
        }
      ],
      "Version": "1"
    }

    Replace <Alibaba Cloud account ID> with the ID of the Alibaba Cloud account that owns the AnalyticDB for MySQL cluster (the same ID you entered in step 1). Do not include the angle brackets.
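
As a quick sanity check before pasting the policy into the console, you can fill in the placeholder and confirm the result is valid JSON. The script below is a convenience sketch, not part of the documented procedure; the account ID is a made-up example.

```python
import json

# Trust policy template as shown above, with the documented placeholder.
TEMPLATE = """{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "RAM": ["acs:ram::<Alibaba Cloud account ID>:root"],
        "Service": ["<Alibaba Cloud account ID>@ads.aliyuncs.com"]
      }
    }
  ],
  "Version": "1"
}"""

account_id = "123456789012345"  # example only; use your own account ID
policy = json.loads(TEMPLATE.replace("<Alibaba Cloud account ID>", account_id))

# Both Principal entries must carry the substituted ID, with no brackets left.
assert "<" not in json.dumps(policy)
print(json.dumps(policy, indent=2))
```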

Step 2: Create a data source

Skip this step if you have already added an SLS data source. Go to Step 3: Create a data link.
  1. Log on to the AnalyticDB for MySQL console. In the upper-left corner, select a region. In the left navigation pane, click Clusters, find the target cluster, and click the cluster ID.

  2. In the left navigation pane, choose Data Ingestion > Data Sources.

  3. In the upper-right corner, click Create Data Source.

  4. Configure the parameters on the Create Data Source page:

    • Data Source Type: Select SLS.
    • Data Source Name: Auto-generated from the data source type and current time. Rename as needed.
    • Data Source Description: An optional description, such as the application scenario or business constraints.
    • Cloud provider: Only Alibaba Cloud Instance is supported.
    • Region of Simple Log Service Project: The region where the SLS project resides. Only the region of the AnalyticDB for MySQL cluster is selectable.
    • Across Alibaba Cloud Accounts: Select No to sync from the current account. Select Yes to sync from another account, then enter the Alibaba Cloud Account (the ID of the account that owns the SLS project) and the RAM Role (the role created in Step 1).
    • Simple Log Service Project: The source SLS project.
    • Simple Log Service Logstore: The source SLS Logstore.
  5. Click Create.

Step 3: Create a data link

  1. In the left navigation pane, choose Data Ingestion > Simple Log Service/Kafka Data Synchronization.

  2. In the upper-right corner, click Create Synchronization Job.

  3. Configure the parameters on the Create Synchronization Job page across three tabs.

    Source and destination settings

    • Job Name: Auto-generated from the data source type and current time. Rename as needed.
    • Data Source: Select an existing SLS data source, or create a new one.
    • Destination Type: For Enterprise Edition, Basic Edition, and Data Lakehouse Edition clusters, select Data Warehouse - AnalyticDB for MySQL Storage. Not required for Data Warehouse Edition clusters.
    • AnalyticDB for MySQL Account: The database account of the cluster.
    • AnalyticDB for MySQL Password: The password of the database account.

    Destination database and table settings

    • Database Name: The name of the destination database in the cluster.
    • Table Name: The name of the destination table in the cluster.
    • Source Data Preview: Click View Latest 10 Logstore Data Entries to preview the 10 most recent records.
    • Schema Field Mapping: AnalyticDB for MySQL automatically populates the Destination Table Field and Source Field columns. If a mapping is incorrect, update the Source Field manually.

    Synchronization settings

    • Start Offset: The point in time from which the sync task starts consuming data. For example, setting this to 2024-04-09 13:10 starts from the first record after 13:10 on April 9, 2024.
    • Dirty Data Processing Mode: Controls behavior when a source field value cannot be converted to the destination column type. Stop Synchronization (default) halts the task so you can fix the type mismatch. Treat as Null writes NULL for the failing field and continues syncing the remaining fields.
    • Convert Unix Timestamp into Datetime: Enable this if an SLS field contains a Unix timestamp (for example, 1710604800) and the destination column type is DATETIME or TIMESTAMP. After enabling, select the precision: Timestamp Accurate to Seconds, Timestamp Accurate to Milliseconds, or Timestamp Accurate to Microseconds.
  4. Click Submit.
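
The three precision options for Convert Unix Timestamp into Datetime differ only in the divisor applied before conversion. The sketch below assumes UTC and an illustrative function name; it is not the conversion APS itself performs.

```python
from datetime import datetime, timezone

# Convert a Unix timestamp to a datetime at the selected precision.
def unix_to_datetime(value: int, precision: str) -> datetime:
    divisor = {
        "seconds": 1,
        "milliseconds": 1_000,
        "microseconds": 1_000_000,
    }[precision]
    return datetime.fromtimestamp(value / divisor, tz=timezone.utc)

# The same instant expressed at each precision:
print(unix_to_datetime(1710604800, "seconds"))
print(unix_to_datetime(1710604800000, "milliseconds"))
print(unix_to_datetime(1710604800000000, "microseconds"))
# All three print 2024-03-16 16:00:00+00:00
```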

Step 4: Start the sync task

  1. On the Simple Log Service/Kafka Data Synchronization page, find the sync task and click Start in the Actions column.

  2. When the status shows Running, the sync task is active.

Step 5: Manage the sync task

From the Actions column, perform the following operations:

  • Start: Starts the sync task.
  • View Details: Shows the source and destination configurations, run logs, and monitoring metrics.
  • Edit: Updates the start offset, field mappings, and other settings.
  • Pause: Pauses the sync task. Click Start to resume from the offset where it paused.
  • Delete: Permanently deletes the sync task. This action cannot be undone.