
AnalyticDB:Use data synchronization to synchronize data from SLS

Last Updated:Sep 18, 2025

You can use the data synchronization feature to synchronize data from Simple Log Service (SLS) to an AnalyticDB for MySQL cluster in real time. This feature synchronizes data generated after a specified point in time to meet your real-time log analysis needs.

Prerequisites

Notes

Currently, a table in an AnalyticDB for MySQL cluster can be synchronized with only one Logstore from SLS. To synchronize multiple Logstores, you must create multiple tables.

Billing

You are charged for elastic ACU resources on a pay-as-you-go basis. The fees are calculated based on the number of ACUs that the data link uses. For more information about billing, see Pricing.

Procedure

You can create a sync task in the Simple Log Service console or in the AnalyticDB for MySQL console. The two methods differ in the following ways:

  • Create a sync task in the Simple Log Service console: This method can import SLS data only from the same Alibaba Cloud account. You need to create only a data link. The system automatically creates an SLS data source based on the SLS Project and SLS Logstore parameters that you specify.

  • Create a sync task in the AnalyticDB for MySQL console: This method can also import SLS data from other Alibaba Cloud accounts. You must first create an SLS data source and then create a data link based on that data source.

Create a sync task in the Simple Log Service console

Step 1: Create a data source and a data link

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the destination project. In the navigation pane on the left, click the icon to go to the Logstores tab, and then expand the list under the destination Logstore.

  3. Click Data Processing > Export. Click the + icon to the right of AnalyticDB.

  4. In the Deliver Hint dialog box, select Go To ADB Console To Create.


  5. On the ADB For MySQL Log Synchronization page, configure the Source And Destination Settings, Destination Database And Table Settings, and Synchronization Settings sections.

    • Parameters in the Source And Destination Settings section:

      • Data Link Name: The name of the data link. The system generates a name based on the data source type and the current time. You can change the name as needed.

      • SLS Project: The source SLS project.

      • SLS Logstore: The source SLS Logstore.

      • Destination ADB Instance: Select the destination AnalyticDB for MySQL cluster.

      • ADB Account: The database account of the AnalyticDB for MySQL cluster.

      • ADB Password: The password of the database account for the AnalyticDB for MySQL cluster.

    • Parameters in the Destination Database And Table Settings section:

      • Database Name: The name of the destination database in the AnalyticDB for MySQL cluster.

      • Table Name: The name of the destination table in the AnalyticDB for MySQL cluster.

      • Source Data Preview: Click Click To View The Last 10 Data Entries In The Logstore to view the 10 latest data entries from the source SLS Logstore.

      • Schema Field Mapping: The system automatically fills the Destination Field and Source Field columns with the fields of the destination table in the AnalyticDB for MySQL cluster. If the mapping between a Destination Field and a Source Field is incorrect, modify it manually.

        For example, if the field name in the destination table is `name` and the corresponding source SLS field name is `user_name`, the system automatically fills both the Source Field and the Destination Field with `name`. In this case, you must manually change the Source Field to `user_name`.
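The field mapping is configured in the console, not in code, but its effect is a simple column rename. The following is an illustrative Python sketch of the correction described above; the field names and the sample record are hypothetical:

```python
# Illustrative sketch of the Schema Field Mapping correction described above.
# The destination column "name" must be fed from the source field "user_name";
# all field names and the sample record are hypothetical.

# destination column -> source SLS field (after manually fixing "name")
field_mapping = {
    "id": "id",
    "name": "user_name",  # auto-filled as "name"; corrected manually
}

def map_row(sls_row: dict) -> dict:
    """Build a destination row by pulling each mapped source field."""
    return {dest: sls_row.get(src) for dest, src in field_mapping.items()}

row = map_row({"id": "1001", "user_name": "alice"})
print(row)  # {'id': '1001', 'name': 'alice'}
```

Without the manual fix, the destination `name` column would read from a nonexistent source field and stay empty.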

    • Parameters in the Synchronization Settings section:

      • Delivery Start Offset: The point in time from which the sync task starts to consume SLS data.

        For example, if you set Delivery Start Offset to 2024-04-09 13:10, the system starts consuming data from the first entry generated after 13:10 on April 9, 2024.

      • Dirty Data Processing Mode: If the data type of a field in the destination table does not match the data type of the source SLS data, the synchronization fails. For example, if the source value is abc and the destination field type is int, a synchronization error occurs because the value cannot be converted. The following modes are available:

        • Stop Synchronization (default): The data synchronization stops. You must modify the field type in the destination table or select another dirty data processing mode, and then restart the sync task.

        • Process As NULL: The dirty value is written to the destination table as NULL, and the original dirty value is discarded.

          For example, a row of SLS data has three fields: col1, col2, and col3. If col2 contains dirty data, the value of col2 is converted to NULL and written to the destination table. The values of col1 and col3 are written normally.

      • Convert UNIX Timestamp To Datetime: If a source SLS field is a UNIX timestamp, such as 1710604800, and the destination field type is DATETIME or TIMESTAMP, enable this feature for conversion. After you enable this feature, select Second-level Precision Timestamp, Millisecond-level Precision Timestamp, or Microsecond-level Precision Timestamp based on the precision of the SLS timestamp data.

      • Job Resource Group: The job resource group that runs the incremental synchronization task.

        Important: This parameter is required only when the AnalyticDB for MySQL cluster is of Enterprise Edition, Basic Edition, or Data Lakehouse Edition.

      • Number Of ACUs For Incremental Synchronization: The initial number of ACUs used to run the incremental synchronization task. The default value is 1 ACU, and the value must be in the range of [1, maximum resources of the job resource group]. After the data link is created, AnalyticDB for MySQL automatically scales the number of ACUs between 1 and 64 based on the business workload.

        Important: This parameter is required only when the AnalyticDB for MySQL cluster is of Enterprise Edition, Basic Edition, or Data Lakehouse Edition.
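The two Dirty Data Processing Mode behaviors can be sketched as follows. This is an illustrative Python sketch of the semantics described above, not the service's actual implementation; the column types and sample row are hypothetical:

```python
# Illustrative sketch of the Dirty Data Processing Mode semantics described
# above; not the actual AnalyticDB for MySQL implementation. The int column
# types and the sample row are hypothetical.

def to_int_or_none(value):
    """Process As NULL: convert to int, or write NULL on failure."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None  # the dirty value is discarded and written as NULL

def to_int_or_stop(value):
    """Stop Synchronization (default): a conversion failure aborts the task."""
    try:
        return int(value)
    except (TypeError, ValueError) as exc:
        raise RuntimeError(f"dirty data, synchronization stopped: {value!r}") from exc

# A row in which col1 and col3 are clean and col2 is dirty, as in the
# example above.
row = {"col1": "1", "col2": "abc", "col3": "3"}
processed = {k: to_int_or_none(v) for k, v in row.items()}
print(processed)  # {'col1': 1, 'col2': None, 'col3': 3}
```

Under Stop Synchronization, the same row would abort the task at col2 instead of producing a partial row.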

  6. After you configure the parameters, click Submit.

    The system automatically creates an SLS data source and a data link in AnalyticDB for MySQL and redirects you to the SLS/Kafka Data Synchronization page in the AnalyticDB for MySQL console.

Step 2: Start the sync task

  1. In the Actions column of the destination data link, click Start.

  2. After the task starts, click Query in the upper-right corner. If the status changes to Running, the data synchronization task has started.

Step 3: Manage the sync task

You can perform the following operations in the Actions column:

  • Start: Starts the data synchronization job.

  • Execution Details: Shows the details of the data synchronization job, including source and destination configuration information, run logs, and run monitoring.

  • Edit: Edits the job's start offset, field mappings, and other settings.

  • Pause: Pauses the data synchronization job. You can click Start to resume a paused job. The synchronization automatically resumes from the offset where it was paused.

  • Delete: Deletes the data synchronization job. This operation cannot be undone. Proceed with caution.

Create a sync task in the AnalyticDB for MySQL console

(Optional) Step 1: Configure RAM authorization

Note

If you want to synchronize SLS data only from your Alibaba Cloud account, skip this step and create a data source. For more information, see Step 2: Create a data source.

If you want to synchronize SLS data from another Alibaba Cloud account to AnalyticDB for MySQL, you must create a RAM role in the source account, grant precise permissions to the RAM role, and modify the trust policy of the RAM role.

  1. Create a RAM role. For more information, see Create a RAM role for a trusted Alibaba Cloud account.

    Note

    When you configure the Select Trusted Cloud Account parameter, select Other Cloud Account and enter the Alibaba Cloud account ID of the AnalyticDB for MySQL cluster. You can log on to the Account Center and view the Account ID on the Overview page.

  2. Grant precise permissions to the RAM role by attaching the AliyunAnalyticDBAccessingLogRolePolicy policy.

  3. Modify the trust policy of the RAM role to allow the AnalyticDB for MySQL cluster that belongs to the specified Alibaba Cloud account to assume this RAM role.

    {
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Effect": "Allow",
          "Principal": {
            "RAM": [
                "acs:ram::<Alibaba Cloud account ID>:root"
            ],
            "Service": [
                "<Alibaba Cloud account ID>@ads.aliyuncs.com"
            ]
          }
        }
      ],
      "Version": "1"
    }
    Note

    The Alibaba Cloud account ID is the ID of the Alibaba Cloud account to which the AnalyticDB for MySQL cluster belongs, as specified in Step 1. Do not include the angle brackets (<>).
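The placeholder substitution can also be scripted. The sketch below fills in a hypothetical account ID (without the angle brackets) and checks that the resulting trust policy is valid JSON; the policy content is the one shown above:

```python
import json

# Fill the <Alibaba Cloud account ID> placeholder in the trust policy from
# the step above. The account ID used here is hypothetical.
POLICY_TEMPLATE = """
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "RAM": ["acs:ram::%(account_id)s:root"],
        "Service": ["%(account_id)s@ads.aliyuncs.com"]
      }
    }
  ],
  "Version": "1"
}
"""

def render_trust_policy(account_id: str) -> dict:
    """Substitute the account ID (no angle brackets) and parse the JSON."""
    return json.loads(POLICY_TEMPLATE % {"account_id": account_id})

policy = render_trust_policy("123456789012345")  # hypothetical account ID
print(policy["Statement"][0]["Principal"]["Service"])
# ['123456789012345@ads.aliyuncs.com']
```

Parsing the rendered document catches stray angle brackets or quoting mistakes before you paste the policy into the RAM console.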

Step 2: Create a data source

Note

If you have added an SLS data source, skip this step and create a data link. For more information, see Step 3: Create a data link.

  1. Log on to the AnalyticDB for MySQL console. In the upper-left corner of the console, select a region. In the left-side navigation pane, click Clusters. Find the cluster that you want to manage and click the cluster ID.

  2. In the navigation pane on the left, click SLS Data Ingest > Data Source Management.

  3. In the upper-right corner, click Create Data Source.

  4. On the Create Data Source page, configure the following parameters.

    • Data Source Type: Select SLS.

    • Data Source Name: The system generates a name based on the data source type and the current time. You can change the name as needed.

    • Data Source Description: A description for the data source, such as its scenario or business restrictions.

    • Deployment Mode: Currently, only Alibaba Cloud Instance is supported.

    • Region Of SLS Project: The region where the SLS project resides.

      Note: You can select only the region where the AnalyticDB for MySQL cluster resides.

    • Across Alibaba Cloud Accounts: Specifies whether the AnalyticDB for MySQL cluster synchronizes SLS data from the same Alibaba Cloud account or from another Alibaba Cloud account.

      • No: Synchronize SLS data from the current Alibaba Cloud account to the AnalyticDB for MySQL cluster.

      • Yes: Synchronize SLS data from another Alibaba Cloud account to the AnalyticDB for MySQL cluster. If you select Yes, you must configure RAM authorization and specify the Alibaba Cloud Account and RAM Role Name parameters. For more information about how to configure RAM authorization, see Configure RAM authorization.

        Note
        • Alibaba Cloud Account: The ID of the Alibaba Cloud account to which the SLS project belongs.

        • RAM Role Name: The name of the RAM role in the Alibaba Cloud account to which the SLS project belongs, that is, the RAM role created in Step 1 of Configure RAM authorization.

    • SLS Project: The source SLS project.

    • SLS Logstore: The source SLS Logstore.

  5. After you configure the parameters, click Create.

Step 3: Create a data link

  1. In the navigation pane on the left, click SLS Data Ingest > Data Synchronization.

  2. In the upper-right corner, click Create Data Link.

  3. On the Create Data Link page, configure the Source And Destination Settings, Destination Database And Table Settings, and Synchronization Settings sections. The following tables describe the parameters.

    • Parameters in the Source And Destination Settings section:

      • Data Link Name: The name of the data link. The system generates a name based on the data source type and the current time. You can change the name as needed.

      • Data Source: Select an existing SLS data source or create a new one.

      • Destination Type: For Enterprise Edition, Basic Edition, and Data Lakehouse Edition clusters, select Data Warehouse - ADB Storage. This parameter is not required for Data Warehouse Edition (V3.0) clusters.

      • ADB Account: The database account of the AnalyticDB for MySQL cluster.

      • ADB Password: The password of the database account for the AnalyticDB for MySQL cluster.

    • Parameters in the Destination Database And Table Settings section:

      • Database Name: The name of the destination database in the AnalyticDB for MySQL cluster.

      • Table Name: The name of the destination table in the AnalyticDB for MySQL cluster.

      • Source Data Preview: Click Click To View The Last 10 Data Entries In The Logstore to view the 10 latest data entries from the source SLS Logstore.

      • Schema Field Mapping: The system automatically fills the Destination Field and Source Field columns with the fields of the destination table in the AnalyticDB for MySQL cluster. If the mapping between a Destination Field and a Source Field is incorrect, modify it manually.

        For example, if the field name in the destination table is `name` and the corresponding source SLS field name is `user_name`, the system automatically fills both the Source Field and the Destination Field with `name`. In this case, you must manually change the Source Field to `user_name`.

    • Parameters in the Synchronization Settings section:

      • Delivery Start Offset: The point in time from which the sync task starts to consume SLS data.

        For example, if you set Delivery Start Offset to 2024-04-09 13:10, the system starts consuming data from the first entry generated after 13:10 on April 9, 2024.

      • Dirty Data Processing Mode: If the data type of a field in the destination table does not match the data type of the source SLS data, the synchronization fails. For example, if the source value is abc and the destination field type is int, a synchronization error occurs because the value cannot be converted. The following modes are available:

        • Stop Synchronization (default): The data synchronization stops. You must modify the field type in the destination table or select another dirty data processing mode, and then restart the sync task.

        • Process As NULL: The dirty value is written to the destination table as NULL, and the original dirty value is discarded.

          For example, a row of SLS data has three fields: col1, col2, and col3. If col2 contains dirty data, the value of col2 is converted to NULL and written to the destination table. The values of col1 and col3 are written normally.

      • Convert UNIX Timestamp To Datetime: If a source SLS field is a UNIX timestamp, such as 1710604800, and the destination field type is DATETIME or TIMESTAMP, enable this feature for conversion. After you enable this feature, select Second-level Precision Timestamp, Millisecond-level Precision Timestamp, or Microsecond-level Precision Timestamp based on the precision of the SLS timestamp data.
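The three precision options differ only in the scale factor applied before the conversion. The following is a minimal Python sketch of the behavior described above, not the service's actual implementation:

```python
from datetime import datetime, timezone

# Illustrative sketch of the Convert UNIX Timestamp To Datetime options
# described above; not the service's actual implementation.
DIVISORS = {
    "second": 1,               # Second-level Precision Timestamp
    "millisecond": 1_000,      # Millisecond-level Precision Timestamp
    "microsecond": 1_000_000,  # Microsecond-level Precision Timestamp
}

def to_datetime(raw: int, precision: str):
    """Convert a UNIX timestamp of the given precision to a UTC datetime."""
    return datetime.fromtimestamp(raw / DIVISORS[precision], tz=timezone.utc)

# 1710604800 seconds since the epoch is 2024-03-16 16:00:00 UTC.
print(to_datetime(1710604800, "second"))           # 2024-03-16 16:00:00+00:00
print(to_datetime(1710604800_000, "millisecond"))  # same instant
```

Selecting the wrong precision shifts the result by a factor of 1,000 or 1,000,000, which is why the option must match the source data.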

  4. After you configure the parameters, click Submit.

Step 4: Start the data synchronization task

  1. On the Data Synchronization page, find the data synchronization task that you created. In the Actions column, click Start.

  2. In the upper-right corner, click Query. If the status changes to Running, the data synchronization task has started.

Step 5: Manage the sync task

On the Data Synchronization page, you can perform the following operations in the Actions column:

  • Start: Starts the data synchronization job.

  • Execution Details: Shows the details of the data synchronization job, including source and destination configuration information, run logs, and run monitoring.

  • Edit: Edits the job's start offset, field mappings, and other settings.

  • Pause: Pauses the data synchronization job. You can click Start to resume a paused job. The synchronization automatically resumes from the offset where it was paused.

  • Delete: Deletes the data synchronization job. This operation cannot be undone. Proceed with caution.