
ApsaraDB for Redis:Configure one-way data migration between ApsaraDB for Redis instances

Last Updated:Apr 01, 2024

If you want to migrate data from one ApsaraDB for Redis instance to another but have not yet created the destination instance, we recommend that you clone the data to a new instance by using a backup set or the data flashback feature. If the destination instance already exists, we recommend that you use Data Transmission Service (DTS) to perform one-way data migration between the Redis instances, which can be self-managed Redis databases or ApsaraDB for Redis instances. DTS supports full and incremental data migration. When you configure a data migration task, you can select both types to ensure service continuity. This topic compares the three data migration methods and describes how to use DTS to migrate Redis data.

Comparison of methods for data migration between ApsaraDB for Redis instances

The three methods are DTS, restoring data from a backup set to a new instance (backup restoration for short), and using data flashback to restore data by point in time (data flashback for short).

Scenario

  • DTS: Migrate data to an existing ApsaraDB for Redis instance.

  • Backup restoration: Clone a new instance based on the backup data of an existing instance.

  • Data flashback: Clone a new instance based on the backup data of an existing instance.

Fees

  • DTS:

    • Configuration fee for data migration instances: instances for full data migration are available free of charge, and instances for incremental data migration are billed on a pay-as-you-go basis.

    • Internet traffic fee: when you transfer data from Alibaba Cloud to the Internet over a public IP address, you are charged based on the amount of data transferred. In other cases, you are not charged any fees.

    For more information about billing, see Billable items.

  • Backup restoration:

    • Data restoration does not incur charges.

    • The resources consumed by the new instance that you create incur charges.

  • Data flashback:

    • During the trial period of the data flashback feature, you can restore data to a point in time within the last seven days free of charge. For more information, see the Billing section of the "Use data flashback to restore data by point in time" topic.

    • The resources consumed by the new instance that you create incur charges.

Objects to migrate

  • DTS: Database. You can select databases (DB 0 to DB 255) to migrate.

  • Backup restoration: Instance. All data in the specified backup set in the instance is restored.

  • Data flashback:

    • Instance: all data from a specified point in time in the instance is restored.

    • Key: specified keys are restored.

Incremental data migration

  • DTS: Supported.

  • Backup restoration: Not supported.

  • Data flashback: Not supported.

Cross-region migration

  • DTS: Supported.

  • Backup restoration: Not supported.

  • Data flashback: Not supported.

Data migration across different database versions

  • DTS: Supported¹.

  • Backup restoration: Not supported.

  • Data flashback: Not supported.

Data migration across different architectures

  • DTS: Supported².

  • Backup restoration: Partially supported².

  • Data flashback: Partially supported².

Important

¹ If you use DTS to migrate data, we recommend that you keep the database versions identical between the source and destination databases to prevent compatibility issues.

² If you want to migrate data from a standard instance to a cluster or read/write splitting instance, be aware of the limits on commands supported by cluster instances and read/write splitting instances. For more information, see Limits on commands supported by cluster instances and read/write splitting instances.

DTS overview

  • Full data migration

    DTS allows you to migrate all existing data from a source database to a destination database free of charge.

  • Incremental data migration

    After full data migration, DTS can synchronize incremental data from the source database to the destination database in real time. To perform incremental migration, you must run the PSYNC or SYNC command in the source database. Otherwise, you can perform only full migration. You are charged for incremental data migration based on the duration of the migration, rather than the volume of data being transferred. For more information, see Billable items.

    Note

    To ensure that incremental data migration tasks run as expected, we recommend that you remove the limit on the replication output buffer for the source database. To remove the limit, connect to the source database and run the following command: CONFIG SET client-output-buffer-limit 'slave 0 0 0'.
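The three numbers in the value above are the hard limit, the soft limit, and the soft-limit duration in seconds for the replica (slave) client class; setting all three to 0 removes the replication buffer limit. A small illustrative helper (hypothetical, not part of DTS or Redis) that unpacks such a value:

```python
def parse_output_buffer_limit(value: str):
    """Split a client-output-buffer-limit value such as "slave 0 0 0" into
    (client class, hard limit, soft limit, soft seconds). Zero for all
    three limits disables the buffer limit for that client class."""
    client_class, hard, soft, seconds = value.split()
    return client_class, int(hard), int(soft), int(seconds)

print(parse_output_buffer_limit("slave 0 0 0"))  # ('slave', 0, 0, 0)
```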

Prerequisites

The destination Redis instance is created and the memory allocated to the destination instance is larger than the memory used by the source instance. For more information, see Step 1: Create an ApsaraDB for Redis instance.

Note

We recommend that you keep the total memory of the destination database at least 10% larger than the memory used by the source database. If the memory of the destination database is insufficient during migration, it can lead to issues such as data inconsistency or task failures. In such cases, empty the destination database and reconfigure the migration task.
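The 10% headroom recommendation above can be turned into a quick sizing check. This is an illustrative sketch, not a DTS API; `source_used_bytes` stands in for the `used_memory` value reported by the source instance:

```python
def min_destination_memory(source_used_bytes: int, headroom_percent: int = 10) -> int:
    """Smallest destination memory (bytes) that leaves at least
    `headroom_percent` (10% by default, per the note above) of free
    space above the source's used memory. Integer math avoids
    floating-point rounding surprises."""
    if source_used_bytes < 0:
        raise ValueError("used memory cannot be negative")
    extra = -(-source_used_bytes * headroom_percent // 100)  # ceiling division
    return source_used_bytes + extra

gib = 1024 ** 3
# A source using 8 GiB needs a destination of at least ~8.8 GiB.
print(min_destination_memory(8 * gib))  # 9448928052
```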

Precautions

During the migration process, do not scale or change the specifications or endpoint of the source or destination database. Otherwise, the task fails. If the task fails, reconfigure the task to account for the changes. In addition, the migration operation consumes resources of the source and destination databases. We recommend that you perform the migration operation during off-peak hours.

Procedure

  1. Go to the Data Migration Tasks page.

    1. Log on to the DMS console.

    2. In the top navigation bar, click DTS.

    3. Choose DTS (DTS) > Data Migration.

  2. Click Create Task.

  3. Configure the source and destination databases and click Test Connectivity and Proceed in the lower part of the page.

    N/A

    • Task Name: The name of the DTS task. DTS automatically generates a task name. We recommend that you specify a descriptive name that makes the task easy to identify. The task name does not need to be unique.

    Source Database

    • Select a DMS database instance: If you have registered the source database with Data Management (DMS), you can select it here, and you then do not need to enter the source database information below. If no source database is registered with DMS, ignore this option.

    • Database Type: The type of the source database. Set this parameter to ApsaraDB for Redis Enhanced Edition (Tair).

    • Access Method: The access method of the source database. In this example, Alibaba Cloud Instance is selected.

    • Instance Region: The region in which the source database resides.

    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether to migrate data across Alibaba Cloud accounts. In this example, No is selected.

    • Instance ID: The ID of the source instance.

    • Database Password: The password of the source Redis database.

      Note

      • This parameter is optional. You can leave it empty.

      • Specify the database password in the <user>:<password> format. For example, if the username of the account that you use to log on to the source Redis database is admin and the password is Rp829dlwa, the database password is admin:Rp829dlwa.

    Destination Database

    • Select a DMS database instance: If you have registered the destination database with DMS, you can select it here, and you then do not need to enter the destination database information below. If no destination database is registered with DMS, ignore this option.

    • Database Type: The type of the destination database. By default, ApsaraDB for Redis Enhanced Edition (Tair) is selected.

    • Access Method: The access method of the destination database. Select Alibaba Cloud Instance.

    • Instance Region: The region in which the destination instance resides.

    • Instance ID: The ID of the destination instance.

    • Database Password: The password of the destination Redis database.

      Note

      Specify the database password in the <user>:<password> format. For example, if the username of the account that you use to log on to the destination Redis database is admin and the password is Rp829dlwa, the database password is admin:Rp829dlwa.
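The `<user>:<password>` format applies to both the source and destination password fields. As a quick illustration (this helper is hypothetical, not part of DTS), the string is simply the username and password joined by a colon:

```python
def dts_database_password(username: str, password: str) -> str:
    """Build the "<user>:<password>" string that the Database Password
    field expects, as described in the parameter tables above."""
    if ":" in username:
        raise ValueError("username must not contain ':'")
    return f"{username}:{password}"

print(dts_database_password("admin", "Rp829dlwa"))  # admin:Rp829dlwa
```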

  4. Configure task objects and click Next: Advanced Settings in the lower part of the page.

    • Migration Types: The migration type. Select a migration type based on your business requirements.

      • Full Data Migration and Incremental Data Migration (default): uses the native synchronization logic of Redis to write data to the destination database by means of an in-memory snapshot. This way, the source database is migrated without downtime.

        If you do not have the SYNC or PSYNC permission on the source database, select Full Data Migration.

      • Full Data Migration: runs the SCAN command to traverse the source database and writes the traversed data to the destination database. To ensure data consistency, do not write new data to the source instance during migration.

    • Processing Mode of Conflicting Tables:

      • Precheck and Report Errors (default): checks whether keys exist in the destination database. If keys exist, an error is returned during the precheck and the migration task cannot be started. If no keys exist, the precheck is passed.

      • Ignore Errors and Proceed: skips the "Check the existence of objects in the destination database" check item. If a key with the same name already exists in the destination database, the key is overwritten.

    • Source Objects and Selected Objects: Select the objects to be migrated in the Source Objects section and click the icon that moves them to the Selected Objects section. To remove a selected object, click it in the Selected Objects section and click the icon that moves it back to the Source Objects section.

      Note

      You can select databases (DB 0 to DB 255) as the objects to be migrated.
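The Full Data Migration mode described above traverses the source keyspace with SCAN, a cursor-based iteration that ends when the cursor returns to 0. A minimal sketch of that loop, under the assumption that the keyspace is not modified during traversal; `make_fake_scan` is a stand-in for a Redis client's SCAN call so the sketch runs without a server:

```python
def scan_all_keys(scan, batch: int = 100):
    """Cursor loop behind SCAN-based full migration: call `scan`
    repeatedly until the cursor returns to 0, collecting every key."""
    cursor, keys = 0, []
    while True:
        cursor, page = scan(cursor, batch)
        keys.extend(page)
        if cursor == 0:
            return keys

def make_fake_scan(all_keys):
    """Stand-in for a Redis client's SCAN(cursor, count): pages through
    a snapshot of keys and returns cursor 0 when the scan is complete."""
    def scan(cursor, count):
        page = all_keys[cursor:cursor + count]
        next_cursor = cursor + count
        return (0 if next_cursor >= len(all_keys) else next_cursor), page
    return scan

keys = scan_all_keys(make_fake_scan([f"key:{i}" for i in range(250)]), batch=100)
print(len(keys))  # 250
```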

  5. Configure the advanced settings and click Next: Verification Configurations in the lower part of the page.

    In most cases, you can retain the default settings. For more information, see Configure data verification and Appendix: Advanced settings.

  6. Configure data verification and click Next: Save Task Settings and Precheck in the lower part of the page.

    In most cases, you can retain the default settings. For more information, see Configure data verification.

  7. Perform a precheck, and then click Next: Purchase Instance in the lower part of the page.

    If Warning or Failed items are generated during the precheck, check the items one by one. You can click View Details and troubleshoot the issues as prompted, or click Confirm Alert Details to ignore the check items. However, ignoring check items may cause issues such as data inconsistency, which can pose risks to your business. For more information, see FAQ. After you resolve or confirm the items, run the precheck again.

  8. On the buy page, configure the parameters and click Buy and Start.

    • (Optional) Select the resource group to which the DTS data migration instance belongs. The default value is default resource group.

    • (Optional) Select the specifications of the DTS data migration instance. The higher the specifications, the faster the migration speed and the higher the cost. The default value is large. For more information, see Specifications of data migration instances.

    • Read and select the terms of service.

    After you purchase the data migration instance, the migration task starts. You can view the progress on the data migration page.

What to do next

  • An incremental migration task does not stop automatically. If you perform incremental migration, you must manually terminate or release the task in the console after you complete the migration.

  • You can verify the data. For more information, see Verify migrated Redis data.

References

If you want to clone the full data of an existing ApsaraDB for Redis instance to a new ApsaraDB for Redis instance, you can use the backup and restoration feature. For information about the differences between cloning data to a new instance by means of backup and restoration and migrating data by using DTS, see Comparison of methods for data migration between ApsaraDB for Redis instances.

For more information, see Restore data from a backup set to a new instance and Use data flashback to restore data by point in time.

FAQ

  • Why does the connectivity test fail?

    Consider the following aspects for troubleshooting:

    • The account or password is invalid. The password must be in the <user>:<password> format. For more information, see Logon methods.

    • If the source database is a self-managed database that is deployed in an on-premises data center or on a third-party cloud platform, a network firewall may block the access from DTS servers. In this case, manually add the CIDR blocks of DTS servers in the corresponding region to allow access from DTS servers. For more information, see Add the CIDR blocks of DTS servers.

  • Why does the migration task fail to run?

    • During the migration process, if you make changes such as scaling or changing the specifications or endpoint of the source or destination database, the migration task fails. In this case, reconfigure the task to account for the changes.

    • If the destination instance has insufficient available memory, or if the destination instance is a cluster instance whose specific shard has reached the upper memory limit, the DTS migration task fails due to an out of memory (OOM) error.

    • If transparent data encryption (TDE) is enabled for the destination instance, you cannot use DTS to migrate data.

  • Why are data volumes in the source and destination databases inconsistent?

    • If an expiration policy is enabled for specific keys in the source database, these keys may not be deleted at the earliest opportunity after they expire. Therefore, the number of keys in the destination database may be less than that in the source database.

    • When you run the PSYNC or SYNC command to transmit list data, DTS does not perform the FLUSH operation on the existing data in the destination database. As a result, duplicate data may exist.

    • If the network is interrupted during a full migration, DTS may perform multiple full migrations upon reestablishing the connection. In this case, DTS automatically overwrites existing keys with the same name in the destination database. If you perform a delete operation on the source database at this time, the command is not synchronized to the destination database. As a result, the destination database may have more keys than the source database.

  • Why am I unable to select an ApsaraDB for Redis instance that runs Redis 2.8?

    DTS does not support ApsaraDB for Redis instances that run Redis 2.8.

  • Why should I check whether the eviction policy is noeviction?

    By default, the maxmemory-policy parameter that specifies how data is evicted is set to volatile-lru for ApsaraDB for Redis instances. If the destination database has insufficient memory, data inconsistency may occur between the source and destination databases due to data eviction. In this case, the data migration task does not stop running. To prevent data inconsistency, we recommend that you set maxmemory-policy to noeviction for the destination database. This way, the data migration task fails if the destination database has insufficient memory, but data loss can be prevented for the destination database. For more information about data eviction policies, see How does ApsaraDB for Redis evict data by default?
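Before you start a migration, it can be worth asserting this setting in your own tooling. A hypothetical helper, assuming the result of `CONFIG GET maxmemory-policy` on the destination instance has been parsed into a dict:

```python
def eviction_policy_is_safe(config: dict) -> bool:
    """Return True only when the destination's eviction policy is
    noeviction, the setting recommended above so that the destination
    never silently evicts migrated keys under memory pressure."""
    return config.get("maxmemory-policy") == "noeviction"

print(eviction_policy_is_safe({"maxmemory-policy": "volatile-lru"}))  # False
print(eviction_policy_is_safe({"maxmemory-policy": "noeviction"}))   # True
```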

  • Why is DTS_REDIS_TIMESTAMP_HEARTBEAT available in the source database?

    To ensure the quality of data migration, DTS inserts a key whose prefix is DTS_REDIS_TIMESTAMP_HEARTBEAT into the source database to record the timestamp of the last update. If the source database uses the cluster architecture, DTS inserts the key into each shard. The key is filtered out during data migration. After the data migration task is completed, the key expires.
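Because DTS filters this heartbeat key out during migration, any key-count comparison you run yourself should exclude it as well. An illustrative helper (not a DTS API):

```python
HEARTBEAT_PREFIX = "DTS_REDIS_TIMESTAMP_HEARTBEAT"

def without_heartbeat_keys(keys):
    """Drop DTS heartbeat keys before comparing the source and
    destination keyspaces; DTS itself filters them during migration."""
    return [k for k in keys if not k.startswith(HEARTBEAT_PREFIX)]

print(without_heartbeat_keys(["user:1", "DTS_REDIS_TIMESTAMP_HEARTBEAT:0", "user:2"]))
# ['user:1', 'user:2']
```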

  • Which commands are supported for incremental migration?

    • The following commands are supported for incremental migration:

      • APPEND

      • BITOP, BLPOP, BRPOP, and BRPOPLPUSH

      • DECR, DECRBY, and DEL

      • EVAL, EVALSHA, EXEC, EXPIRE, and EXPIREAT

      • FLUSHALL and FLUSHDB

      • GEOADD and GETSET

      • HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, and HSETNX

      • INCR, INCRBY, and INCRBYFLOAT

      • LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, and LTRIM

      • MOVE, MSET, MSETNX, and MULTI

      • PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, and PUBLISH

      • RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, and RPUSHX

      • SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, and SUNIONSTORE

      • ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, and ZREMRANGEBYSCORE

    • If you run the EVAL or EVALSHA command to call Lua scripts, DTS cannot identify whether these Lua scripts are executed in the destination database. This is because the destination database does not explicitly return the execution results of Lua scripts during incremental data migration.
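For quick checks in your own tooling, the command list above can be kept as a lookup set. This set is simply a transcription of this topic's list, not an official DTS artifact, so verify it against the current documentation before relying on it:

```python
# Commands supported for incremental migration, transcribed from the list above.
INCREMENTAL_COMMANDS = {
    "APPEND",
    "BITOP", "BLPOP", "BRPOP", "BRPOPLPUSH",
    "DECR", "DECRBY", "DEL",
    "EVAL", "EVALSHA", "EXEC", "EXPIRE", "EXPIREAT",
    "FLUSHALL", "FLUSHDB",
    "GEOADD", "GETSET",
    "HDEL", "HINCRBY", "HINCRBYFLOAT", "HMSET", "HSET", "HSETNX",
    "INCR", "INCRBY", "INCRBYFLOAT",
    "LINSERT", "LPOP", "LPUSH", "LPUSHX", "LREM", "LSET", "LTRIM",
    "MOVE", "MSET", "MSETNX", "MULTI",
    "PERSIST", "PEXPIRE", "PEXPIREAT", "PFADD", "PFMERGE", "PSETEX", "PUBLISH",
    "RENAME", "RENAMENX", "RESTORE", "RPOP", "RPOPLPUSH", "RPUSH", "RPUSHX",
    "SADD", "SDIFFSTORE", "SELECT", "SET", "SETBIT", "SETEX", "SETNX",
    "SETRANGE", "SINTERSTORE", "SMOVE", "SPOP", "SREM", "SUNIONSTORE",
    "ZADD", "ZINCRBY", "ZINTERSTORE", "ZREM", "ZREMRANGEBYLEX",
    "ZUNIONSTORE", "ZREMRANGEBYRANK", "ZREMRANGEBYSCORE",
}

def is_incrementally_migrated(command: str) -> bool:
    """Case-insensitive membership check against the transcribed list."""
    return command.upper() in INCREMENTAL_COMMANDS

print(is_incrementally_migrated("hset"))    # True
print(is_incrementally_migrated("GETDEL"))  # False
```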