This section introduces the system architecture of DTS and the basic design concepts of its main functions.
The following figure shows the system architecture of DTS.
Each unit in DTS owns a set of workers, which includes primary and backup devices to ensure high availability of the whole system. The Disaster Recovery System detects the status of every primary point in real time. If an interruption occurs, DTS immediately switches the task to a backup point and resumes the task flow from where it stopped.
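The failover behavior described above can be sketched as follows. This is a minimal illustration, not the actual DTS implementation: the `Worker` class, `failover` function, and checkpoint handling are assumed names for the example.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    healthy: bool = True
    checkpoint: int = 0  # position in the task flow already processed

def failover(primary: Worker, backups: list[Worker]) -> Worker:
    """Return the worker that should run the task.

    If the primary is down, promote the first healthy backup and hand
    over the primary's checkpoint, so the task resumes from where it
    stopped instead of restarting from scratch.
    """
    if primary.healthy:
        return primary
    for backup in backups:
        if backup.healthy:
            backup.checkpoint = primary.checkpoint  # resume, do not restart
            return backup
    raise RuntimeError("no healthy worker available for the task")
```

The key design point is that the checkpoint travels with the failover, which is what lets the task continue from where it stopped rather than replaying from the beginning.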
For data subscription and synchronization instances, the Disaster Recovery System also monitors the source and target endpoints. If a connection address change is detected, DTS dynamically adapts to the new data source connection and automatically allocates a new connection address for the endpoint.
A complete data migration process in DTS consists of three stages: schema migration, existing data migration, and incremental replication. If all three stages are selected when creating a data migration task, zero-downtime data migration is achieved.
The design concept of data migration is as follows:
Schema migration

DTS handles schema conversion in schema migration. When migrating between heterogeneous databases, DTS obtains the syntax definition of the source schema from the source database, converts the schema into the target database's syntax definition format, and imports the converted schema into the target instance.
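As a concrete illustration of schema conversion, the sketch below maps a few MySQL column types to PostgreSQL equivalents. The type map, and the choice of PostgreSQL as the target, are assumptions for the example only; DTS's actual conversion rules cover full DDL syntax and depend on the source and target engines.

```python
# Illustrative MySQL -> PostgreSQL type map; a real converter handles
# complete DDL statements, not just column types.
TYPE_MAP = {
    "DATETIME": "TIMESTAMP",
    "TINYINT(1)": "BOOLEAN",
    "LONGTEXT": "TEXT",
    "DOUBLE": "DOUBLE PRECISION",
}

def convert_column_type(source_type: str) -> str:
    """Translate a source column type into the target dialect,
    passing through types that need no conversion."""
    return TYPE_MAP.get(source_type.upper(), source_type)
```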
Existing data migration
In this stage, DTS migrates all the existing data from the source to the target. To ensure migration consistency, the change data capture unit starts to capture change data on the source database from the moment the migration starts. The captured change data is resolved, encapsulated, and stored in a local queue.
Incremental replication

After the existing data migration is complete, DTS starts the data replication unit. The replication unit pulls change data from the local queue and replicates it to the target instance after performing mapping, filtering, and encapsulation.
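The interplay between the capture unit, the local queue, and the replication unit can be sketched as below. The record layout, the `table_map` (object name mapping), and the `tables` filter are illustrative assumptions, not DTS's internal data structures.

```python
from collections import deque

change_queue: deque = deque()  # local queue between capture and replication

def capture(raw_event: dict) -> None:
    """Resolve and encapsulate a raw change event, then enqueue it."""
    record = {"table": raw_event["table"], "op": raw_event["op"],
              "row": raw_event["row"]}
    change_queue.append(record)

def replicate(apply, table_map: dict, tables: set) -> int:
    """Pull change data and replay it on the target after mapping and
    filtering; `apply` writes one record to the target instance."""
    applied = 0
    while change_queue:
        record = change_queue.popleft()
        if record["table"] not in tables:                  # filtering
            continue
        record["table"] = table_map.get(record["table"],   # mapping
                                        record["table"])
        apply(record)
        applied += 1
    return applied
```

For example, capturing changes on an `orders` and an `audit` table but replicating only `orders` (renamed to `orders_v2` on the target) would keep one record and drop the other.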
Data subscription captures incremental logs from specific RDS instances in real time. With the DTS SDK, you can subscribe to the captured data and consume it according to your requirements.
The design concept of data subscription is as follows:
A data subscription instance uses the log capture unit to capture change data from RDS instances. The captured data is stored in a local queue after data resolution, filtering, and standard formatting. The log capture unit connects to the source instance based on the database protocol. For example, if the source instance is RDS MySQL, the log capture unit connects to the source instance through the Binlog dump protocol.
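A minimal sketch of the standard-formatting step, assuming a parsed binlog event shaped like the dictionary below. The field names are illustrative; DTS defines its own record schema for subscribed data.

```python
import json

def to_standard_format(binlog_event: dict) -> str:
    """Normalize a parsed binlog event into one standard-format record."""
    record = {
        "db": binlog_event["schema"],
        "table": binlog_event["table"],
        "op": binlog_event["type"],        # INSERT / UPDATE / DELETE
        "ts": binlog_event["timestamp"],   # commit timestamp from the log
        "fields": binlog_event["rows"],
    }
    # JSON with sorted keys keeps the on-queue record format stable.
    return json.dumps(record, sort_keys=True)
```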
Both the log capture unit and the SDK data consumption process are highly available.
The Disaster Recovery System continually monitors the subscription instance. If it detects an abnormality in the log capture unit, it restarts the log capture unit on another healthy node and resumes from where the process stopped.
If multiple SDK consumers are started for the same subscription instance, DTS pushes change data to only one of them. In the case of an interruption, DTS switches to another healthy consumer and continues to push data from where the process stopped.
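The single-active-consumer behavior can be sketched as follows. `SubscriptionChannel`, `register`, and the offset bookkeeping are assumed names for the illustration; the real DTS SDK manages consumer coordination internally.

```python
class Consumer:
    """A toy SDK consumer that records what it receives."""
    def __init__(self, name: str):
        self.name, self.healthy, self.seen = name, True, []

    def receive(self, record) -> None:
        self.seen.append(record)

class SubscriptionChannel:
    """Pushes change data to exactly one consumer at a time."""
    def __init__(self):
        self.consumers = []  # SDK consumers, in the order they started
        self.offset = 0      # index of the next record to push

    def register(self, consumer: Consumer) -> None:
        self.consumers.append(consumer)

    def push(self, records) -> None:
        """Deliver pending records to the first healthy consumer only.

        If that consumer is interrupted, the next healthy one takes
        over from the same offset, so nothing is lost or duplicated.
        """
        for consumer in self.consumers:
            if consumer.healthy:
                while self.offset < len(records):
                    consumer.receive(records[self.offset])
                    self.offset += 1
                return
```

Because the push offset belongs to the channel rather than to any single consumer, a standby consumer picks up exactly where the interrupted one left off.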
Data synchronization supports real-time synchronization between RDS instances, and from RDS instances to ADS instances.
The design concept of data synchronization is as follows:
Creating a synchronization link involves two steps:
Existing data initialization
The existing data of the source instance is replicated to the target instance.
Incremental data synchronization
The incremental data of the source database is synchronized to the target database in real time. Eventually, the source and target databases are dynamically synchronized.
The incremental data synchronization is built on the following units:
Change data capture unit
The change data capture unit reads raw change data from the source instance and stores it in a local queue after data resolution, filtering, and standard formatting. It connects to the source instance based on the database protocol and reads the incremental data. For example, if the source instance is RDS MySQL, the change data capture unit connects to the source instance through the Binlog dump protocol.
Change data replay unit
The change data replay unit requests incremental data from the change data capture unit and filters the obtained data according to the customized synchronization objects. The replay unit then synchronizes the log records to the target instance.
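The filtering step of the replay unit might look like the following sketch. The `sync_objects` structure, a schema name mapped to a set of table names, is an assumed representation of the customized synchronization objects selected when the link was created.

```python
def filter_by_sync_objects(records, sync_objects: dict):
    """Yield only the changes that belong to the synchronization
    objects selected for this synchronization link."""
    for record in records:
        tables = sync_objects.get(record["schema"], set())
        if record["table"] in tables:
            yield record
```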
Both the change data capture unit and the change data replay unit are highly available.
When it detects a link abnormality, the Disaster Recovery System restarts the link on another healthy node and resumes from where the process stopped.