
ApsaraDB for Redis:Use DTS to migrate data from a self-managed Redis database to an ApsaraDB for Redis instance

Last Updated: Jan 17, 2024

You can use Data Transmission Service (DTS) to migrate a self-managed Redis database that is deployed on premises, on an Elastic Compute Service (ECS) instance, or on a third-party cloud server to ApsaraDB for Redis without interrupting your database service. DTS supports both full and incremental data migration. Compared with the append-only file (AOF) method, DTS offers higher performance and security.

Overview

  • Full data migration

    DTS allows you to transfer the existing data from your source database to the destination database without incurring additional costs.

  • Incremental data migration

    After full data migration, DTS can synchronize incremental data from the source database to the destination database in real time. To perform incremental migration, DTS must be able to run the PSYNC or SYNC command on the source database; otherwise, only full migration can be performed. Incremental data migration with DTS may incur costs based on the duration of the migration rather than the volume of data transferred. For more information, see Billable items.

    Note

    To ensure that incremental data migration tasks run as expected, we recommend that you remove the limit on the replication output buffer for the source database. To remove the limit, connect to the source database and run the following command: CONFIG SET client-output-buffer-limit 'slave 0 0 0'.
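
    For example, you can remove the limit and confirm the change with redis-cli from a host that can reach the source database. This is a minimal sketch; the host, port, and password are placeholders that you must replace with the values of your source database.

      # Remove the limit on the replication output buffer for replicas (placeholders: host, port, password).
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' CONFIG SET client-output-buffer-limit 'slave 0 0 0'

      # Confirm that the new limit has taken effect.
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' CONFIG GET client-output-buffer-limit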

Prerequisites

An ApsaraDB for Redis instance is created, and the memory allocated to the instance is larger than the memory used by the self-managed Redis database. For more information, see Step 1: Create an ApsaraDB for Redis instance.

Note

We recommend that you keep the total memory of the destination database at least 10% larger than the memory used by the source database. If the destination database runs out of memory during the migration, issues such as data inconsistency or task failures can occur. In that case, you must empty the destination database and reconfigure the migration task.
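
To size the destination instance, you can check how much memory the source database currently uses. The following is a minimal redis-cli sketch; the host, port, and password are placeholders.

    # Show the memory currently used by the source database (placeholders: host, port, password).
    redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' INFO memory | grep used_memory_human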

Usage notes

During the migration, do not scale the source or destination database or change its specifications or endpoints. Otherwise, the task fails and must be reconfigured. In addition, the migration consumes resources in the source and destination databases. We recommend that you perform the migration during off-peak hours.

Procedure

  1. Go to the Data Migration Tasks page.

    1. Log on to the DMS console.

    2. In the top navigation bar, move the pointer over DTS.

    3. Choose DTS (DTS) > Data Migration.

  2. Click Create Task.

  3. Configure the source and destination databases and click Test Connectivity and Proceed in the lower part of the page.

    • Task Name: The name of the task. DTS automatically assigns a name to the task. We recommend that you specify a descriptive name that makes it easy to identify the task. You do not need to specify a unique task name.

    Source Database

    • Select an existing DMS database instance: If you have registered the source database with Data Management (DMS), you can select it here. After you select the source database, you do not need to enter the source database information. If the source database is not registered with DMS, ignore this option.

    • Database Type: The type of the source database. Select ApsaraDB for Redis Enhanced Edition (Tair).

    • Access Method: The access method of the source database. Select the access method based on the deployment location of the source database. If the database is deployed in an on-premises data center or on a third-party cloud platform, select Public IP Address. In this example, Self-managed Database on ECS is selected.

    • Instance Region: The region in which the ECS instance resides. If the source database is deployed in an on-premises data center or on a third-party cloud platform, select the region closest to the source database.

    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether data is migrated across Alibaba Cloud accounts. In this example, No is selected.

    • ECS Instance ID: The ID of the ECS instance on which the source database is deployed.

      Note: If the source database uses the cluster architecture, select the ID of the ECS instance on which any master node resides.

    • Instance Mode: The architecture of the source database. Select Standalone or Cluster.

    • Port Number: The service port number of the source Redis database. Default value: 6379.

      Note: If the source database uses the cluster architecture, enter the service port number of any master node.

    • Database Password: The password of the source Redis database.

      Note:
      • This parameter is optional and can be left empty if no password is set for the source database.
      • Specify the database password in the <user>:<password> format. For example, if the username of the account that you use to log on to the source Redis database is admin and the password is Rp829dlwa, the database password is admin:Rp829dlwa.

    Destination Database

    • Select an existing DMS database instance: If you have registered the destination database with DMS, you can select it here. After you select the destination database, you do not need to enter the destination database information. If the destination database is not registered with DMS, ignore this option.

    • Database Type: By default, ApsaraDB for Redis Enhanced Edition (Tair) is selected.

    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the destination ApsaraDB for Redis instance resides.

    • Instance ID: The ID of the destination ApsaraDB for Redis instance.

    • Database Password: The database password of the destination ApsaraDB for Redis instance.

      Note: Specify the database password in the <user>:<password> format. For example, if the username of the account that you use to log on to the destination ApsaraDB for Redis instance is admin and the password is Rp829dlwa, the database password is admin:Rp829dlwa.
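
    Optionally, before you run the connectivity test, you can verify the port and credentials of the source database yourself with redis-cli. This is a minimal sketch; the host, port, username, and password are placeholders, and the --user option applies only if the source database uses an account (Redis 6.0 or later with ACLs).

      # Check that the source Redis database is reachable and that the credentials are accepted (placeholders).
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' PING

      # If the source database uses an account, pass the username explicitly.
      redis-cli -h 172.16.0.10 -p 6379 --user admin -a 'Rp829dlwa' PING

      # Expected reply: PONG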

  4. Configure task objects and click Next: Advanced Settings in the lower part of the page.

    • Migration Types: The migration type. Select a migration type based on your business requirements.

      • Full Data Migration and Incremental Data Migration (default): uses the native synchronization logic of Redis to write data to the destination database by means of an in-memory snapshot. This way, the source database is migrated without downtime.

        If you do not have the SYNC or PSYNC permission on the source database, select Full Data Migration.

      • Full Data Migration: runs the SCAN command to traverse the source database and writes the traversed data to the destination database. To ensure data consistency, do not write new data to the source instance during migration.

    • Processing Mode of Conflicting Tables:

      • Precheck and Report Errors (default): checks whether keys exist in the destination database. If keys exist, an error is returned during the precheck and the migration task cannot be started. If keys do not exist, the precheck is passed.

      • Ignore Errors and Proceed: skips the Check the existence of objects in the destination database check item. If a key with the same name already exists in the destination database, the key is overwritten.

    • Source Objects and Selected Objects: Select the objects to be migrated in the Source Objects section and click the icon to move them to the Selected Objects section. To remove a selected object, click the object in the Selected Objects section and click the icon to move it back to the Source Objects section.

      Note: You can select databases (DB 0 to DB 255) as the objects to be migrated.
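
    To see which logical databases (DB 0 to DB 255) in the source database actually contain keys, you can query the keyspace summary with redis-cli. This is a minimal sketch; the host, port, and password are placeholders.

      # List the non-empty logical databases of the source database and their key counts (placeholders).
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' INFO keyspace

      # Example output line: db0:keys=120000,expires=500,avg_ttl=0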

  5. Configure the advanced settings and click Next: Save Task Settings and Precheck in the lower part of the page.

    In most cases, you can retain the default settings. For more information, see Enable data verification and Appendix: Advanced settings.

  6. Perform a precheck, and then click Next: Purchase Instance in the lower part of the page.

    If Warning or Failed items are generated during the precheck, check the items one by one. You can click View Details and troubleshoot the issues as prompted. You can also click Confirm Alert Details and ignore the check items. However, issues such as data inconsistency may occur, which may pose risks to your business. For more information, see FAQ. After you complete the preceding operations, perform the precheck again.

  7. On the buy page, configure the parameters and click Buy and Start.

    • (Optional) Select the resource group to which the DTS data migration instance belongs. The default value is default resource group.

    • (Optional) Select the specifications of the DTS data migration instance. The higher the specifications, the faster the migration speed and the higher the cost. The default value is large. For more information, see Specifications of data migration instances.

    • Read and select the terms of service.

    After you purchase the data migration instance, the migration task starts. You can view the progress on the data migration page.

What to do next

  • If you perform incremental migration, the task does not stop automatically. You must manually terminate or release the task in the console after you complete the migration.

  • You can verify the data. For more information, see Verify migrated Redis data.

References

If your database does not need to be migrated online, you can use the lightweight redis-cli tool to import append-only files (AOF) for data migration. For more information, see Use AOF files to migrate data.
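
As a rough illustration of the redis-cli approach (not a substitute for the linked topic), an AOF file generated from the source database can be replayed into the destination instance. The endpoint, password, and file name below are placeholders, and the destination instance must be reachable from the machine that runs the command.

    # Replay an AOF file from the source database into the destination instance (placeholders: endpoint, password, file).
    redis-cli -h r-bp1example.redis.rds.aliyuncs.com -p 6379 -a 'admin:Rp829dlwa' --pipe < appendonly.aof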

FAQ

  • Why does the connectivity test fail?

    Consider the following aspects for troubleshooting:

    • The database password may be invalid. The password must be in the <user>:<password> format. For more information, see Logon methods.

    • If the source database is a self-managed database in an on-premises data center or a third-party cloud database, a network firewall may block access from DTS servers. In this case, you must manually allow the CIDR blocks of DTS servers in the corresponding region, as shown in the sketch after this list. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

    • DTS does not support the migration of Redis 7.0 databases.
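
    If the source database host runs Linux and its firewall is managed with iptables, the following rule is a minimal sketch of allowing one DTS CIDR block. The <DTS_CIDR_BLOCK> value is a placeholder that you must replace with the actual CIDR blocks published for your region, and your environment may use a different firewall or a security group instead.

      # Allow inbound connections from a DTS CIDR block to the Redis service port (placeholder CIDR block).
      iptables -A INPUT -p tcp -s <DTS_CIDR_BLOCK> --dport 6379 -j ACCEPT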

  • Why does the migration task fail to run?

    • During the migration process, if you make changes such as scaling or changing the specifications or endpoints of the source and destination databases, the migration task fails. In such cases, you must reconfigure the migration task to account for the changes.

    • If the destination instance has insufficient available memory, or if the destination instance is a cluster instance whose specific shard has reached the upper memory limit, the DTS migration task fails due to an out of memory (OOM) error.

    • If transparent data encryption (TDE) is enabled for the destination instance, you cannot use DTS to migrate data.

  • Why are data volumes in the source and destination databases inconsistent?

    • If an expiration policy is enabled for specific keys in the source database, these keys may not be deleted at the earliest opportunity after they expire. Therefore, the number of keys in the destination database may be less than that in the source database.

    • If the network is interrupted during the execution of a full migration, DTS may execute multiple full migrations upon reestablishing the connection. In this case, DTS automatically overwrites existing keys with the same name in the destination database. If you perform a delete operation on the source database at this time, the command is not synchronized to the destination database. As a result, the destination database may have more keys than the source database.
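
    To compare key counts yourself, you can read DBSIZE for the same logical database on both sides. This is a minimal sketch; the hosts, ports, and passwords are placeholders, and expired but not yet evicted keys can still make the counts differ.

      # Count the keys in logical database 0 on the source and on the destination (placeholders).
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' -n 0 DBSIZE
      redis-cli -h r-bp1example.redis.rds.aliyuncs.com -p 6379 -a 'admin:Rp829dlwa' -n 0 DBSIZE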

  • Why should I check whether the eviction policy is noeviction?

    By default, the maxmemory-policy parameter that specifies how data is evicted is set to volatile-lru for ApsaraDB for Redis instances. If the destination database has insufficient memory, data inconsistency may occur between the source and destination databases due to data eviction. In this case, the data migration task does not stop running. To prevent data inconsistency, we recommend that you set maxmemory-policy to noeviction for the destination database. This way, the data migration task fails if the destination database has insufficient memory, but data loss can be prevented for the destination database. For more information about data eviction policies, see How does ApsaraDB for Redis evict data by default?
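
    You can check the current eviction policy of the destination instance with CONFIG GET, as sketched below with placeholder connection details, assuming the instance permits the CONFIG GET command. On ApsaraDB for Redis, the policy is typically changed through the parameter settings in the console rather than with CONFIG SET.

      # Check the eviction policy of the destination instance (placeholders: endpoint, password).
      redis-cli -h r-bp1example.redis.rds.aliyuncs.com -p 6379 -a 'admin:Rp829dlwa' CONFIG GET maxmemory-policy

      # Recommended value during migration: noeviction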

  • Why is DTS_REDIS_TIMESTAMP_HEARTBEAT available in the source database?

    To ensure the quality of data migration, DTS inserts a key whose prefix is DTS_REDIS_TIMESTAMP_HEARTBEAT into the source database to record the timestamp of the last update. If the source database uses the cluster architecture, DTS inserts the key into each shard. The key is filtered out during data migration. After the data migration task is completed, the key expires.
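
    If you want to inspect the heartbeat key in the source database, you can scan for it by prefix. This is a minimal sketch; the host, port, and password are placeholders.

      # Find the DTS heartbeat key in the source database (placeholders: host, port, password).
      redis-cli -h 172.16.0.10 -p 6379 -a 'yourPassword' --scan --pattern 'DTS_REDIS_TIMESTAMP_HEARTBEAT*'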

  • Which commands are supported for incremental migration?

    • The following commands are supported for incremental migration:

      • APPEND

      • BITOP, BLPOP, BRPOP, and BRPOPLPUSH

      • DECR, DECRBY, and DEL

      • EVAL, EVALSHA, EXEC, EXPIRE, and EXPIREAT

      • GEOADD and GETSET

      • HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, and HSETNX

      • INCR, INCRBY, and INCRBYFLOAT

      • LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, and LTRIM

      • MOVE, MSET, MSETNX, and MULTI

      • PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, and PUBLISH

      • RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, and RPUSHX

      • SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, and SUNIONSTORE

      • ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, and ZREMRANGEBYSCORE

    • If you run the EVAL or EVALSHA command to call Lua scripts, DTS cannot identify whether these Lua scripts are executed in the destination database. This is because the destination database does not explicitly return the execution results of Lua scripts during incremental data migration.

Appendix: Advanced settings

  • Set Alerts: Specifies whether to configure alerting for the data migration task. If the task fails or the migration latency exceeds the specified threshold, the alert contacts are notified. Valid values:

    • No (default)

    • Yes: If you select this option, you must also configure the alert threshold and alert contacts.

  • Retry Time for Failed Connections: The retry time range for failed connections. If the source or destination database fails to be connected after the data migration task is started, DTS continuously retries a connection within this time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data migration task. Otherwise, the data migration task fails. We recommend that you set the retry time range based on your business requirements, preferably to a value greater than 30.

    Note: When DTS retries a connection, you are charged for the DTS instance.

  • The wait time before a retry when other issues occur in the source and destination databases: The retry time range for other issues. If non-connectivity issues occur on the source or destination database after the data migration task is started, DTS reports errors and continuously retries the operations within this time range. Valid values: 10 to 1440. Unit: minutes. Default value: 10. If the failed operations are successfully performed within the specified retry time range, DTS resumes the data migration task. Otherwise, the data migration task fails. We recommend that you set the parameter to a value greater than 10.

  • Enable Throttling for Incremental Data Migration: The load on the destination database may increase when DTS migrates incremental data to the destination database. You can configure throttling thresholds for the number of rows and the amount of data that can be incrementally migrated per second based on your business requirements. This helps reduce the load on the destination database. The default value is No.

  • Environment Tag: The environment tag that is used to identify the DTS instance. You can select an environment tag based on your business requirements.

  • Extend Expiration Time of Destination Database Key: The extended expiration time for keys in the destination database that have time-to-live (TTL) values configured. Default value: 1800. Unit: seconds. During the migration process, if a key has already expired in the source database, the key is not migrated to the destination database.

  • Use Slave Node: If the Instance Mode parameter of the source self-managed Redis database is set to Cluster, you can choose whether to read data from the replica nodes. By default, No is selected, which indicates that DTS reads data from the master nodes.

  • Configure ETL: Specifies whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL? Valid values: Yes and No.