Data Transmission Service

Supports data migration and data synchronization between data engines, such as relational databases, NoSQL, and OLAP

Data Transmission Service (DTS) helps you migrate data between data storage types, such as relational database, NoSQL, and OLAP. The service supports homogeneous migrations as well as heterogeneous migrations between different data storage types.

DTS can also be used for continuous data replication with high availability. In addition, DTS lets you subscribe to data changes in ApsaraDB for RDS. With DTS, you can easily implement scenarios such as data migration, remote real-time data backup, real-time data integration, and cache refresh.

Benefits

High Performance
The rate of migrating existing data can reach 70 MB/s
The rate of replicating change data can exceed 30,000 records per second
Real-Time Data Replication
DTS supports real-time data synchronization from one RDS database to another RDS database
You can modify the synchronization object during the synchronization process
Simple-to-Use
You can begin a database migration with just a few clicks from the DTS Management Console
You can monitor and manage your tasks in the management console anytime, anywhere
Reliable
DTS continually monitors the source and target databases. When the connection changes, DTS dynamically adjusts the connection to maintain performance

Features

  • Zero Downtime

    Data Transmission Service helps you migrate data with virtually no downtime. All data changes to the source database that occur during the migration are continuously replicated to the target, allowing the source database to remain fully operational during the migration process. After the database migration is complete, the target database will remain synchronized with the source for as long as you choose, allowing you to switch over the database at a convenient time.

  • Supports Most Widely Used Databases

    DTS can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as MySQL to MySQL, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL.


    Migrations can be from on-premises databases to RDS or ECS, databases running on ECS to RDS, or vice versa, as well as from one RDS database to another RDS database.


    DTS supports multiple transmission modes: data migration, real-time data replication, and change data subscription.

  • Data Migration

    Zero Downtime

    All data changes to the source database that occur during the migration are continuously replicated to the target, allowing the source database to be fully operational during the migration process.
    After the database migration is complete, the target database will remain synchronized with the source for as long as you choose, allowing you to switch over the database at a convenient time.


    Supports Most Widely Used Databases

    DTS supports homogeneous migrations such as MySQL to MySQL and SQL Server to SQL Server, as well as heterogeneous migrations such as Oracle to MySQL.
    DTS supports heterogeneous migrations between different database platforms. Migrations can be from on-premises databases to ApsaraDB for RDS or Alibaba Cloud ECS, databases running on ECS to RDS, or vice versa, as well as from one RDS database to another RDS database.

  • Real-Time Change Data Subscription

    DTS supports real-time subscription to RDS change data.


    You can modify the subscription objects after the subscription instance is created.

  • Automatic Monitoring

    Provides crucial instance information like replication delay, transmission status, and consumption delay in real time so that you can monitor and protect business critical applications.

  • Simple-to-Use

    DTS manages all the complexities of the migration process including automatically replicating data changes that occur in the source database during the migration process.

  • Reliable

    DTS continually monitors the source and target databases. When the connection changes, DTS dynamically adjusts the connection to maintain performance.

  • High Performance

    DTS performs replication in parallel and supports multiple network optimization features, such as data compression and packet retransmission.

How it works

  • Database Migration With No Downtime
  • Remote Data Disaster Recovery
  • Decrease Remote Access
  • Real-time Big Data Analytics
  • Cache Refresh
  • Message Notification
Database Migration With No Downtime

Data Transmission Service helps you migrate data with virtually no downtime. All data changes to the source database that occur during the migration are continuously replicated to the target, allowing the source database to be fully operational during the migration process. After the database migration is complete, the target database will remain synchronized with the source for as long as you choose, allowing you to switch over the database at a convenient time.

Remote Data Disaster Recovery

With DTS, you can perform real-time data replication between two RDS instances in different regions. The remote disaster recovery instance acts as a slave of the primary instance. When a disaster occurs, applications can switch from the primary instance to the remote disaster recovery instance to guarantee business availability.
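The switchover step can be sketched as a small helper that points the application at the standby endpoint when the primary is unavailable. The endpoint names and the health flag below are illustrative assumptions, not part of DTS.

```python
# Sketch of a failover switch between a primary RDS instance and a
# DTS-replicated disaster recovery instance in another region.
# Endpoint names are illustrative assumptions.

PRIMARY = "rds-primary.cn-hangzhou.example"  # hypothetical primary endpoint
STANDBY = "rds-dr.us-west-1.example"         # hypothetical DR endpoint

def active_endpoint(primary_healthy: bool) -> str:
    """Return the endpoint the application should use right now."""
    return PRIMARY if primary_healthy else STANDBY

print(active_endpoint(True))   # normal operation uses the primary
print(active_endpoint(False))  # after a disaster, switch to the DR instance
```

In practice the health check would come from monitoring, and DNS or application configuration would be updated to perform the switch.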

Decrease Remote Access

When an application is deployed in a single region, users accessing it from other regions suffer from access latency and a poor user experience. To improve the access experience, you can use the recommended architecture as follows:

Recommended Configuration

  • This architecture consists of a center and units. Write requests from users in all regions are routed back to the center. Data in the center is synchronized to the units with DTS. Read requests from users in different regions are routed to nearby units, avoiding remote access and reducing access latency.
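The center-and-unit routing rule above can be sketched as follows. The region names and routing table are illustrative assumptions; the real routing would live in your application's traffic layer, not in DTS itself.

```python
# Sketch of center-and-unit request routing (region names are illustrative).
# Writes always go back to the center region; reads are served by the nearest
# unit, which DTS keeps synchronized with the center.

CENTER = "cn-hangzhou"                 # hypothetical center region
UNITS = {"us-west-1", "eu-central-1"}  # hypothetical unit regions

def route(request_type: str, user_region: str) -> str:
    """Return the region that should serve the request."""
    if request_type == "write":
        return CENTER                  # all writes are routed back to the center
    if user_region in UNITS:
        return user_region             # serve reads from the nearby unit
    return CENTER                      # fall back to the center otherwise

print(route("write", "us-west-1"))  # a write from the US still goes to the center
print(route("read", "us-west-1"))   # a read is served locally
```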

Real-time Big Data Analytics

DTS provides optimized and high performance delivery to Analytic DB from RDS to support customers with their real-time big data analytics initiatives. With this solution, customers can perform ad-hoc discovery, organization, and enrichment of low-latency data before it traverses to more refined sets of analytics tools.

Cache Refresh

To support high-speed access to data, you may use caching services together with RDS. With data subscription of DTS, you can perform low-latency cache refresh without degrading the performance of the RDS instance.

Recommended Configuration

  • DTS can subscribe to change data of the RDS instance in real time, and refresh the cached data whenever the data in RDS changes.
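A minimal sketch of the cache-refresh pattern: a consumer receives change records from the subscription channel and invalidates the matching cache entries. The record layout and the in-process dictionary standing in for a cache service are assumptions for illustration; the real DTS SDK delivers change records through its own client classes.

```python
# Sketch: refresh a cache from subscribed change records.
# The record fields (table, pk, op) are assumed for illustration.

cache = {"user:1": {"name": "old"}}   # toy cache standing in for Redis etc.

def on_change(record: dict) -> None:
    """Invalidate the cached entry touched by one change record."""
    key = f"{record['table']}:{record['pk']}"
    if record["op"] in ("UPDATE", "DELETE"):
        cache.pop(key, None)           # drop the stale entry; the next read repopulates it

on_change({"table": "user", "pk": 1, "op": "UPDATE"})
print(cache)  # {}
```

Because invalidation is driven by the subscription channel rather than by polling RDS, the cache stays fresh without adding query load on the database.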

Message Notification

When two applications are asynchronously coupled, you can use data subscription of DTS to perform low-latency message notification without degrading the performance of the source application. With data subscription, you do not have to publish messages from within the source application, which makes the core application more stable and reliable.
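The pattern can be sketched as a consumer that turns each subscribed change record into a lightweight message for the downstream application. The record fields and the in-process queue standing in for a message service are illustrative assumptions.

```python
import queue

# Sketch: forward subscribed change records as messages to a downstream app.
# The record fields and the queue are illustrative assumptions; in production
# the messages would go to a real message service.

notifications: "queue.Queue[dict]" = queue.Queue()

def notify(record: dict) -> None:
    """Publish a lightweight message for one committed change record."""
    notifications.put({"event": record["op"], "table": record["table"]})

notify({"op": "INSERT", "table": "orders"})
print(notifications.get())  # {'event': 'INSERT', 'table': 'orders'}
```

The source application simply writes to its database as usual; the notification work happens entirely on the subscription side.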

FAQs

1. Does DTS support data migration between RDS instances under two different Alibaba Cloud accounts?

Yes. When migrating data between RDS instances under different Alibaba Cloud accounts, you need to log on to the DTS console with the account of the target RDS instance.
When configuring the migration task, select an on-premises database with a public IP address as the source instance type, and configure the connection information of the source RDS instance.

2. Does DTS support migrating the change data of the source instance during data migration?

Yes. All data changes to the source database that occur during the migration are continuously replicated to the target. DTS allows the source database to be fully operational during the migration process.

3. What are the basic principles of change data migration through DTS?

The basic principles of change data migration through DTS are described below:
During data migration, DTS starts the log parsing module to capture and parse the change logs of the source database in real time. Then, DTS migrates the existing data. After the existing data is loaded, DTS replicates the captured change data to the target instance, and the target database will remain synchronized with the source for as long as you choose.
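The three phases described above can be sketched in a few lines of Python. The function and record shapes are illustrative assumptions, not the DTS implementation; the point is the ordering: capture starts first, then the full copy runs, then the captured changes are replayed.

```python
# Sketch of the three migration phases (all names are illustrative):
# 1. start change-log capture, 2. copy existing data, 3. replay captured changes.

def migrate(source_rows, change_log, target: dict) -> None:
    captured = list(change_log)              # 1. capture change logs in real time
    for pk, row in source_rows:              # 2. migrate the existing (full) data
        target[pk] = row
    for op, pk, row in captured:             # 3. replay captured changes in order
        if op == "DELETE":
            target.pop(pk, None)
        else:                                # INSERT and UPDATE both upsert
            target[pk] = row

target: dict = {}
migrate([(1, "a"), (2, "b")], [("UPDATE", 1, "a2"), ("DELETE", 2, None)], target)
print(target)  # {1: 'a2'}
```

Starting capture before the full copy is what makes zero-downtime migration possible: any write that happens during the copy is caught by the log and replayed afterwards.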

4. Are tables locked during data migration through DTS?

When you choose to migrate existing data and replicate change data, DTS checks during the full data migration whether the source database contains any non-transactional tables without primary keys (for example, MyISAM tables). If such tables exist, DTS places read-only locks on them to ensure data migration consistency. In other cases, DTS does not place locks on source databases.

5. Which network (intranet or Internet) is used to access ECS instance during data migration through DTS?

If the network type of the ECS instance is VPC, DTS connects to the ECS instance over the Internet.
If the ECS instance is the source instance of the migration task and is located in a region different from the target instance, DTS also connects to the ECS instance over the Internet.
Otherwise, DTS connects to the ECS instance over the intranet.
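The rules above can be condensed into a small helper for reference. The parameter names are illustrative; the logic simply restates the FAQ answer.

```python
def ecs_network(network_type: str, is_source: bool,
                source_region: str, target_region: str) -> str:
    """Return which network DTS uses to reach an ECS instance, per the FAQ."""
    if network_type == "VPC":
        return "Internet"              # VPC ECS instances are reached over the Internet
    if is_source and source_region != target_region:
        return "Internet"              # cross-region source also uses the Internet
    return "intranet"                  # everything else stays on the intranet

print(ecs_network("classic", True, "cn-hangzhou", "cn-hangzhou"))  # intranet
```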

6. Which network (intranet or Internet) is used to access the RDS instance during data migration through DTS?

If the RDS instance is the source instance of the migration task and is located in a region different from the target instance, DTS connects to the RDS instance over the Internet.
Otherwise, DTS connects to the RDS instance over the intranet.

7. Does DTS synchronize DDL operations during data synchronization?

If the database type of the source instance is MySQL or MongoDB, DDL operations are synchronized.
Otherwise, DDL operations are not synchronized.

8. Does DTS support migrating the database on a VPC ECS instance to an RDS instance?

Yes, but the ECS instance must have an EIP (Elastic IP address) attached. When configuring the migration task, select the ECS instance as the source instance. DTS accesses the ECS instance through its EIP.

9. From which database (active/standby) does DTS capture data during data migration?

DTS captures data from the active database of the RDS instance during data migration.

10. Can DTS migrate the database C in the RDS instance A to the database D in the RDS instance B?

Yes. DTS supports database name mapping which allows data migration between two different databases in two RDS instances.

11. Is the data in the source database deleted after migration through DTS?

No. DTS only copies data in the source database during data migration, and thus the data in the source database is not affected.

12. Why have I received the following error: "Failed to obtain the structure object [java.sql.SQLException: I/O exception: The Network Adapter co"?

This error indicates that DTS failed to connect to the source database. Possible causes include:
(1) The connection address is incorrect.
(2) The firewall is enabled for the local database.
(3) Remote listening is not enabled for the database.

13. What is the table "increment_trx" generated in the target database during data migration?

The table "increment_trx" is created by DTS, mainly to record the migration checkpoint. When the task is interrupted, DTS automatically restarts the process and continues the migration from the recorded checkpoint.
Do not drop the table; otherwise, the migration task fails.
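The checkpoint-and-resume idea can be sketched as follows. The dictionary standing in for the "increment_trx" row and the change list are illustrative assumptions; the point is that persisting the last applied position lets a restarted task skip work it has already done.

```python
# Sketch of checkpoint-based resume: the position of the last applied change
# is persisted (as DTS does in its increment_trx table), so an interrupted
# task continues from that offset instead of starting over.

checkpoint = {"offset": 0}   # stands in for the increment_trx checkpoint row

def apply_changes(changes, target: list) -> None:
    """Apply changes after the saved offset, updating the checkpoint as we go."""
    for i, change in enumerate(changes):
        if i < checkpoint["offset"]:
            continue                       # already applied before the interruption
        target.append(change)
        checkpoint["offset"] = i + 1       # persist progress after each change

target: list = []
apply_changes(["c1", "c2"], target)        # first run applies both changes
apply_changes(["c1", "c2", "c3"], target)  # a restart resumes at c3, no duplicates
print(target)  # ['c1', 'c2', 'c3']
```

Dropping the checkpoint table would be like resetting `checkpoint["offset"]` mid-run: the task loses its position and cannot resume correctly.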

14. Why is the size of the target RDS instance larger than the source database after migration through DTS?

DTS migrates data by executing SQL statements, which generates binlogs in the target instance. Therefore, the target RDS instance is larger than the source database after migration.

15. Why have I received the following error: "java.sql.BatchUpdateException: INSERT, DELETE command denied to user 'user'"?

The general cause is that the target RDS instance is locked (typically because it has run out of storage space), so the write privilege of the account is revoked.
To resolve the problem, upgrade the storage space of the target RDS instance, and then restart the task in the DTS console.

16. Is the data in the tables of the target database overwritten during data migration through DTS?

No. The tables to be migrated must be empty in the target instance before data migration. If a table to be migrated already exists in the target database, the precheck fails.

17. How can I migrate a database to an RDS instance under another Alibaba Cloud account?

You will need to log on to the DTS console with the Alibaba Cloud account of the target RDS instance. Set the source instance type to on-premises database, and configure the connection information of the source RDS instance.

18. Does the release of a finished migration task affect the use of the migrated database?

No.

19. Does DTS support synchronization between an on-premises database and an RDS instance?

Yes. You can use DTS to perform synchronization between the cloud instance and the on-premises database.

20. Which network (intranet or Internet) is used during DTS data synchronization?

DTS transfers data over the intranet during data synchronization.

21. Why is it that my data subscription SDK cannot subscribe to any message and the prompt "client partition is empty, wait partition balance" is always reported?

A subscription channel can be consumed by only one SDK client at a time (see question 25). If another SDK process is already consuming the channel, the new client cannot be allocated a partition and keeps reporting this prompt. Stop the other consumer and then restart the SDK.

22. Why is "keep alive error" reported at the data subscription SDK?

The consumption timestamp is not in the data range of the data subscription instance. You need to modify the consumption timestamp and restart the SDK.

23. Why does the system report an error: "failed to get master store addr for topic aliyun_sz_ecs_ApsaraDBr*****y-1-0" when I use the data subscription function?

First, check whether usePublicIp in the SDK is set to true.
If usePublicIp = true, check whether the consumption timestamp is within the data range of the subscription instance. If it is not, modify the consumption timestamp and restart the SDK.

24. Why does the system report an error: "Specified signature is not matched with our calculation. at com.aliyuncs.DefaultAcsClient.parseAcsResponse(DefaultAcsClient.java:139) at" when I start the SDK for data subscription?

The Access Key/Access Secret configured in the SDK does not belong to the Alibaba Cloud account corresponding to the subscription instance. Modify Access Key/Access Secret and restart the SDK.

25. Can an SDK client subscribe to multiple channels?

No.

26. Why does the system report "get guid info failed" when I start SDK subscription?

The subscription instance ID set in the SDK is incorrect. Replace the subscription instance ID in the sample code with the ID of the subscription instance to which you want to subscribe.