
PolarDB:Migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters

Last Updated: Apr 09, 2025

You can use Data Transmission Service (DTS) to migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters.

Overview

PolarDB for PostgreSQL (Compatible with Oracle) is an enterprise-level relational database management system that has a wide range of applications and provides strong community support. Each new release brings many new features and improvements to enhance performance, availability, and security. PolarDB for PostgreSQL (Compatible with Oracle) 2.0 can significantly improve database performance and user experience.

PolarDB for PostgreSQL (Compatible with Oracle) 2.0 introduces new query optimization algorithms and a new storage engine to improve query speed and concurrent processing capabilities. It can process large amounts of data faster and improve the responsiveness and performance of the database. PolarDB for PostgreSQL (Compatible with Oracle) 2.0 also includes many new features and enhancements that provide a better user experience and developer tools. For example, enhanced support for JSON data types makes it easier to process and query JSON data. Abundant monitoring and diagnostic tools help you better understand and optimize the performance of your database. Database security and reliability are strengthened: stricter RAM policies and permission management features protect your data from potential security threats, and backup and recovery improvements provide better data protection and restoration.

After you upgrade to PolarDB for PostgreSQL (Compatible with Oracle) 2.0, you can gain more resources and support from the PostgreSQL community to help you solve problems and learn about database best practices. We recommend that you upgrade to PolarDB for PostgreSQL (Compatible with Oracle) 2.0 to make full use of its benefits.

Prior evaluation

If you have purchased a PolarDB for PostgreSQL (Compatible with Oracle) 1.0 cluster, you can use the migration evaluation feature to verify the compatibility of the cluster before you migrate it. This way, you can identify and handle preconditions that affect the migration process in advance to reduce processing and resource costs.

Supported regions

The migration evaluation feature is available in the following regions:

China (Hangzhou), China (Shanghai), China (Shenzhen), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Chengdu), China (Hong Kong), Japan (Tokyo), Singapore, Indonesia (Jakarta), US (Silicon Valley), and US (Virginia).

Impacts

The migration evaluation does not affect your workloads.

Create a migration evaluation task

  1. Log on to the PolarDB console.

  2. In the upper-left corner of the console, select the region in which the cluster that you want to manage is deployed.

  3. Create a migration evaluation task. PolarDB provides the following methods for creating an evaluation task:

    1. From the Clusters page: Click Migration/Upgrade Evaluation in the upper-left corner.

    2. From the Migration/Upgrade page: Click Migration/Upgrade Evaluation in the upper-left corner.

  4. In the Migration/Upgrade Evaluation dialog box, configure the following parameters and click Next.

    • Creation Method: Select Upgrade from PolarDB.

    • Source PolarDB Version: Select Oracle 1.0.

    • Source PolarDB Cluster: Select the source cluster.

    • Destination Database Engine: Select Oracle 2.0.

    • Database Name: Select the source database.

  5. In the Migration/Upgrade Evaluation dialog box, you can check the overview and details about the compatibility of the cluster.

    1. Check the overview.

    2. Check the details.

      Note

      You can focus on the incompatible objects in the compatibility evaluation results.

      • You can ignore Oracle native objects.

      • You must adapt the incompatible objects involved in the SQL statements. If you cannot adapt them on the business side, contact us and the R&D engineers will help you.

Manage the migration evaluation task

To view the details of the created evaluation task, go to the Migration/Upgrade page.

Note

The evaluation task is retained for seven days and then automatically deleted. If your evaluation task has expired, create a new one.

Prerequisites

  • The source and destination PolarDB for PostgreSQL (Compatible with Oracle) clusters are created. For more information, see Create a PolarDB for PostgreSQL (Compatible with Oracle) cluster.

  • The wal_level parameter is set to logical for both the source and destination PolarDB for PostgreSQL (Compatible with Oracle) clusters. For more information, see Configure cluster parameters. A quick check is shown after this list.

  • The available storage space of the destination database is larger than the total size of the data in the source database.
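To verify the wal_level prerequisite, you can run the following statement on both clusters by using a PostgreSQL-compatible client. This is a minimal check that uses standard PostgreSQL syntax; it should return logical.

    SHOW wal_level;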

Limits

Note
  • During schema synchronization, DTS synchronizes foreign keys from the source database to the destination database.

  • During full data synchronization and incremental data synchronization, DTS temporarily disables the constraint check and cascade operations on foreign keys at the session level. If cascade updates or deletions are executed on the source database during data synchronization, data inconsistency may occur.

Limits on the source database (PolarDB for PostgreSQL (Compatible with Oracle) 1.0)

  • Bandwidth requirements: The server to which the source database belongs must have sufficient egress bandwidth. Otherwise, the data migration speed is affected.

  • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
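    Before you start the migration, you can list the tables that lack a PRIMARY KEY or UNIQUE constraint by using a catalog query such as the following. This is a sketch that relies on standard PostgreSQL system catalogs; adjust the schema filter to your environment.

      SELECT t.schemaname, t.tablename
      FROM pg_tables t
      WHERE t.schemaname NOT IN ('pg_catalog', 'information_schema')
        AND NOT EXISTS (
          SELECT 1
          FROM pg_constraint c
          JOIN pg_class r ON r.oid = c.conrelid
          JOIN pg_namespace n ON n.oid = r.relnamespace
          WHERE n.nspname = t.schemaname
            AND r.relname = t.tablename
            AND c.contype IN ('p', 'u')   -- primary key or unique constraint
        );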

  • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, you can migrate up to 1,000 tables in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.

  • If you need to migrate incremental data, make sure that the following requirements are met:

    • The write-ahead logging (WAL) feature must be enabled.

    • For incremental data migration, the WAL logs of the source database must be stored for more than 24 hours. For full data and incremental data migration, the WAL logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the WAL logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data migration is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of WAL logs based on the preceding requirements. Otherwise, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed.

  • Limits on operations to perform on the source database:

    • During schema migration and full data migration, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data will be inconsistent between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • If you want to perform a primary/secondary switchover on the source PolarDB for PostgreSQL (Compatible with Oracle) cluster, the Logical Replication Slot Failover feature must be enabled. This prevents logical subscriptions from being interrupted and ensures that your data synchronization task can run as expected. For more information, see Logical replication slot failover.

  • If the source database has long transactions and the instance contains incremental migration tasks, the WAL logs generated before the long transactions are committed may not be cleared and may therefore accumulate, which can exhaust the disk space of the source database.

Other limits

  • Each migration task is limited to migrating data from a single source database. To migrate data from multiple databases, you must create a data migration task for each database.

  • During incremental data migration, if you select a schema as the object to be migrated and create a table in the schema or execute the RENAME statement to rename a table in the schema, you must execute the ALTER TABLE schema.table REPLICA IDENTITY FULL; statement before you write data to the table.

    Note

    Replace the schema and table variables in the preceding statement with your schema name and table name.
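    For example, for a hypothetical table named orders in a schema named sales, you would run:

      ALTER TABLE sales.orders REPLICA IDENTITY FULL;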

  • To ensure the accuracy of the displayed latency during incremental data migration, DTS creates a table named dts_postgres_heartbeat in the source database.
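    To inspect the structure and content of the heartbeat table, you can query it directly. This is a minimal sketch; the schema in which DTS creates the table may vary.

      SELECT * FROM dts_postgres_heartbeat;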

  • During incremental data migration, DTS creates a replication slot for the source database. The replication slot is prefixed with dts_sync_. DTS can obtain the incremental logs of the source database within the last 15 minutes by using this replication slot.

    Note
    • After the DTS instance is released, the replication slot is automatically deleted. If you change the password of the source database or delete the IP address whitelist of DTS, the replication slot cannot be automatically deleted. In that case, you must manually delete the replication slot from the source database to prevent replication slots from accumulating and making the PolarDB for PostgreSQL (Compatible with Oracle) cluster unavailable.

    • If the data migration task is released or fails, DTS automatically clears the replication slot. If a primary/secondary switchover is performed on the PolarDB for PostgreSQL (Compatible with Oracle) cluster, you must log on to the secondary instance to clear the replication slot.
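    To check for leftover DTS replication slots and remove one manually, you can use the standard PostgreSQL catalog view and function shown in the following sketch. The slot name in the second statement is a placeholder; use the actual slot_name returned by the first query.

      -- List DTS replication slots and whether they are active.
      SELECT slot_name, active FROM pg_replication_slots WHERE slot_name LIKE 'dts_sync_%';

      -- Drop a leftover slot by name.
      SELECT pg_drop_replication_slot('dts_sync_example');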

  • Before you migrate data, evaluate the impact of data migration on the performance of the source database and destination cluster. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • Make sure that the precision settings for columns of the FLOAT or DOUBLE data type meet your business requirements. DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from columns of the FLOAT or DOUBLE data type. If you do not specify a precision, DTS sets the precision for columns of the FLOAT type to 38 digits and the precision for columns of the DOUBLE type to 308 digits.

  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads to the destination database, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the task is resumed.

  • DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata.

  • After your workloads are switched to the destination database, newly written sequences do not increment from the maximum value of the sequences in the source database. Therefore, you must query the maximum value of the sequences in the source database before you switch your workloads to the destination database. Then, you must specify the queried maximum value as the initial value of the sequences in the destination database. To query the maximum value of the sequences in the source database, execute the following statements:

    do language plpgsql $$
    DECLARE
      nsp name;
      rel name;
      val int8;
    BEGIN
      -- Loop over all user-owned sequences (relkind = 'S' selects sequences;
      -- relowner != 10 skips objects owned by the initial superuser).
      for nsp, rel in
        select nspname, relname
        from pg_class t2, pg_namespace t3
        where t2.relnamespace = t3.oid and t2.relkind = 'S' and relowner != 10
      loop
        -- Read the current value of the sequence.
        execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
        -- Print a setval statement that advances the sequence on the destination database.
        raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val + 1);
      end loop;
    END;
    $$;
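    The DO block prints one setval statement per sequence as a notice. Run the printed statements on the destination database to advance each sequence past the maximum value in the source database. The following output line is a hypothetical example; the schema and sequence names depend on your database.

      select setval('public.order_id_seq'::regclass, 100001);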

Billing rules

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Migration types

  • Schema migration: DTS migrates the schemas of the required objects to the destination database. DTS supports schema migration for the following types of objects: table, view, synonym, trigger, stored procedure, stored function, package, and user-defined type.

    Note: In this scenario, DTS does not support schema migration for triggers. We recommend that you delete the triggers of the source database to prevent data inconsistency caused by the triggers. For more information, see Configure a data synchronization or migration task for a source database that contains a trigger.

  • Full data migration: DTS migrates the historical data of the required objects from the source database to the destination database.

    Note: During schema migration and full data migration, do not perform DDL operations on the objects to be migrated. Otherwise, the objects may fail to be migrated.

  • Incremental data migration: DTS retrieves WAL log files from the source database and synchronizes incremental data from the source database to the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. Incremental data migration allows you to ensure service continuity when you migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters.

SQL statements that can be synchronized

  • DML: INSERT, UPDATE, and DELETE

  • DDL:

    Important: If the database account of the source database is a privileged account, data synchronization tasks support the following DDL operations:

    • CREATE TABLE and DROP TABLE

    • ALTER TABLE, including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, and ALTER COLUMN DROP DEFAULT

    • TRUNCATE TABLE (The database engine version of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster must be 1.0 or later.)

    • CREATE INDEX ON TABLE

    Important:

    • You cannot synchronize additional information of DDL statements, such as CASCADE or RESTRICT.

    • You cannot synchronize DDL statements from a session that executes the SET session_replication_role = replica statement.

    • If the SQL statements submitted by the source database at a time contain both DML and DDL statements, DTS does not synchronize the DDL statements.

    • If the SQL statements committed by the source database at the same time contain DDL statements for objects that are not to be synchronized, the DDL statements are not synchronized.

Permissions required for database accounts

  • Source PolarDB for PostgreSQL (Compatible with Oracle) cluster: the permissions of the privileged account.

  • Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster: the permissions of the database owner.

Important

You can specify the database owner when you create a database. For more information about how to create and authorize a database account, see Create a database account.
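For example, the following statements create an account and make it the owner of the destination database. This is a sketch with hypothetical names (dts_user, mydb); in practice, you typically create and authorize accounts in the console as described in Create a database account.

    CREATE USER dts_user WITH PASSWORD '********';  -- replace the placeholder password
    ALTER DATABASE mydb OWNER TO dts_user;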

Configure a data synchronization task

  1. Use one of the following methods to go to the Data Migration page and select the region in which the data migration instance resides.

    DTS console

    1. Log on to the DTS console.

    2. In the left-side navigation pane, click Data Migration.

    3. In the upper-left corner of the page, select the region in which the data migration instance resides.

    DMS console

    Note

    The actual operation may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top navigation bar, choose Data + AI > DTS (DTS) > Data Migration.

    3. From the drop-down list to the right of Data Migration Tasks, select the region in which the data synchronization instance resides.

  2. Click Create Task and configure the source and destination databases.

    Note

    Set Database Type to PolarDB for PostgreSQL (Compatible with Oracle).

    Task Name: The name of the task. DTS automatically assigns a name to the task. We recommend that you specify a name that helps you identify the task. You do not need to specify a unique task name.

    Source Database (PolarDB for PostgreSQL (Compatible with Oracle) 1.0):

    • Select Existing Connection: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the instance.

      • If you do not select an existing instance, you must configure the parameters for the source database.

    • Database Type: Select PolarDB for PostgreSQL (Compatible with Oracle).

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster is deployed.

    • Replicate Data Across Alibaba Cloud Accounts: Specifies whether to synchronize data across Alibaba Cloud accounts. In this example, No is selected.

    • Instance ID: The ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    • Database Name: The name of the source database.

    • Database Account: The database account of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    • Database Password: The password of the database account.

    Destination Database (PolarDB for PostgreSQL (Compatible with Oracle) 2.0):

    • Select Existing Connection: The database instance that you want to use. You can choose whether to select an existing instance based on your business requirements.

      • If you select an existing instance, DTS automatically populates the parameters for the instance.

      • If you do not select an existing instance, you must configure the parameters for the destination database.

    • Database Type: Select PolarDB for PostgreSQL (Compatible with Oracle).

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster is deployed.

    • Instance ID: The ID of the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    • Database Name: The name of the destination database.

    • Database Account: The privileged database account of the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. For more information, see Create a database account.

    • Database Password: The password of the database account.

  3. In the lower part of the page, click Test Connectivity and Proceed.

    DTS adds the CIDR blocks of DTS servers to the whitelist of the destination cluster.

    Warning
    If the CIDR blocks of DTS servers are automatically or manually added to the IP address whitelist of the database or instance, or to the ECS security group rules, security risks may arise. Therefore, before you use DTS to migrate data, you must understand and acknowledge the potential risks and take preventive measures, including but not limited to the following measures: enhance the security of your username and password, limit the ports that are exposed, authenticate API calls, regularly check the whitelist or ECS security group rules and forbid unauthorized CIDR blocks, or connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.
  4. Select the objects to be migrated and configure advanced settings.

    Note

    All namespaces in the source database are listed. Select a namespace based on your business requirements.

    • Synchronization Type: The synchronization types. By default, Incremental Data Synchronization is selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS synchronizes the historical data of the selected objects from the source instance to the destination instance. The historical data is the basis for subsequent incremental synchronization.

      Note: You must select all synchronization types.

    • Synchronization Topology:

      • One-way synchronization: Only one-way synchronization links are established from the source database to the destination database, including the existing data synchronization link and the incremental synchronization link.

      • Two-way synchronization: Two-way synchronization links are established between the source and destination databases. Incremental data synchronization is performed from the source database to the destination database, and reverse data transmission is performed from the destination database to the source database.

      Note: Reverse data transmission supports only data synchronization, not DDL operations.

    • Processing Mode in Existed Target Table:

      • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables with identical names, the precheck is passed. Otherwise, an error is returned during the precheck, and the data synchronization task cannot be started.

        Note: If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Map object names.

      • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

        Warning: If you select Ignore Errors and Proceed, data inconsistency may occur, and your business may be exposed to potential risks.

        • If the source and destination databases have the same schemas, and a data record has the same primary key value as an existing data record in the destination database:

          • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.

          • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.

        • If the source and destination databases have different schemas, data initialization may fail, only some columns may be synchronized, or the data synchronization task may fail. Proceed with caution.

    • Capitalization of Object Names in Destination Instance: The capitalization of database names, table names, and column names in the destination instance. By default, DTS default policy is selected. You can select another option to make sure that the capitalization of object names is consistent with that in the source or destination database. For more information, see Specify the capitalization of object names in the destination instance.

    • Source Objects: Select one or more objects in the Source Objects section and click the right arrow icon to add the objects to the Selected Objects section.

      Note: You can select columns, tables, or databases as the objects to be synchronized. If you select tables or columns as the objects to be synchronized, DTS does not synchronize other objects, such as views, triggers, or stored procedures, to the destination database.

    • Selected Objects:

      • To rename an object that you want to synchronize to the destination instance, right-click the object in the Selected Objects section. For more information, see Map object names.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Database-table column name mapping.

      Note:

      • To synchronize the SQL operations performed on a specific database or table, right-click the object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to synchronize. For more information, see the SQL statements that can be synchronized section of this topic.

      • To specify WHERE conditions to filter data, right-click the object in the Selected Objects section. In the dialog box that appears, specify the conditions. For more information, see Use SQL conditions to filter data.

  5. Click Next: Advanced Settings to configure advanced settings.

    • Dedicated Cluster for Task Scheduling: A DTS dedicated cluster consists of multiple Elastic Compute Service (ECS) instances of the same specifications in a region and is used to manage and configure DTS data migration, data synchronization, and change tracking tasks. Compared with DTS public clusters, DTS dedicated clusters provide exclusive resources, better stability, better performance, and reduced costs. For more information, see What is a DTS dedicated cluster.

    • Specify the retry time range for failed connections: The retry time range for failed connections. If the source or destination database fails to be connected after the data synchronization task is started, DTS immediately retries the connection within this time range. Valid values: 10 to 1440. Unit: minutes. Default value: 720. We recommend that you set this parameter to a value greater than 30. If DTS reconnects to the source and destination databases within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.

      Note:

      • If you set different retry time ranges for multiple DTS tasks that share the same source or destination database, the shortest retry time range takes precedence.

      • While DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time range based on your business requirements, or release the DTS instance at the earliest opportunity after the source and destination instances are released.

    • Retry Time for Other Issues: The retry time range for other issues. For example, if DDL or DML operations fail to be performed after the data synchronization task is started, DTS immediately retries the operations within this time range. Valid values: 1 to 1440. Unit: minutes. Default value: 10. We recommend that you set this parameter to a value greater than 10. If the failed operations are successfully performed within the specified time range, DTS resumes the data synchronization task. Otherwise, the data synchronization task fails.

      Important: The value of the Retry Time for Other Issues parameter must be smaller than the value of the Specify the retry time range for failed connections parameter.

    • Enable Throttling for Full Data Migration: During full data migration, concurrent reads are performed on the source database and concurrent writes are performed on the destination database, which may increase the pressure on your databases. If you do not want to affect the databases, you can limit the migration rate.

    • Enable Throttling for Incremental Data Migration: During incremental data migration, concurrent reads are performed on the source database and concurrent writes are performed on the destination database. If one database is under high business pressure, the other database may encounter write pressure. If you do not want to affect the databases, you can limit the migration rate.

    • Environment Tag: Environment tags help identify the importance of DTS tasks. They do not affect the running of tasks.

    • Configure ETL: Specifies whether to configure the extract, transform, and load (ETL) feature. For more information, see What is ETL.

    • Monitoring and Alerting: Specifies whether to configure alerting for the data synchronization task. If the task fails or the synchronization latency exceeds the specified threshold, the alert contacts receive notifications.

      • Select No if you do not want to configure alerting.

      • If you select Yes, you must also specify the alert threshold and alert contacts. For more information, see Configure monitoring and alerting.

  6. Click Next Step: Data Verification. On the Data Verification page, configure the following parameters.

    Data Verification Mode:

    • Full Data Verification: verifies all data. If you select Full Data Verification, you must configure the verification parameters and objects. For more information, see Configure the data verification feature for a data synchronization or migration task in DTS.

    • Incremental Data Verification: verifies the data that is incrementally synchronized or migrated. This option is selected by default.

    • Schema Verification: verifies the schemas of the selected objects.

  7. In the lower part of the page, click Next: Save Task Settings and Precheck.

    • To view the parameters in the relevant API operations for configuring the DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • If you do not need to view or have viewed the parameters, click Next: Save Task Settings and Precheck in the lower part of the page.

      Note
      • Before you can start the data synchronization task, DTS performs a precheck. You can start the data synchronization task only after the task passes the precheck.

      • If the task fails the precheck, click View Details next to each failed item. After you analyze the causes based on the check results, troubleshoot the issues. Then, run a precheck again.

      • If an alert is generated for an item during the precheck, perform the following operations based on the scenario:

        • If the alert item cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then, run a precheck again.

        • If the alert item can be ignored, click Confirm Alert Details. In the View Details dialog box, click Ignore. In the message that appears, click OK. Then, click Precheck Again to run a precheck again. If you ignore the alert item, data inconsistency may occur and your business may be exposed to potential risks.

  8. Wait until the success rate becomes 100%. Then, click Next: Purchase Instance.

  9. On the Purchase Instance page, configure the Billing Method and Instance Class parameters for the data synchronization instance. The following table describes the parameters.

    All of the following parameters belong to the New Instance Class section.

    • Billing Method:

      • Subscription: You pay for the subscription when you create the instance. The subscription billing method is more cost-effective than the pay-as-you-go billing method for long-term use.

      • Pay-as-you-go: A pay-as-you-go instance is charged on an hourly basis. The pay-as-you-go billing method is suitable for short-term use. If you no longer require a pay-as-you-go instance, you can release it to reduce costs.

    • Resource Group: The resource group in which the instance runs. Default value: default resource group. For more information, see What is Resource Management?

    • Instance Class: DTS provides synchronization specifications with different performance levels. The synchronization speed varies based on the specification that you select. Select a specification based on your business scenario. For more information, see Specifications of data synchronization instances.

    • Duration: If you select the subscription billing method, specify the subscription duration and the number of instances that you want to create. The subscription duration can be one to nine months, one year, two years, three years, or five years.

      Note: This parameter is displayed only if you select the subscription billing method.

  10. Read and select the check box for Data Transmission Service (Pay-as-you-go) Service Terms.

  11. Click Buy and Start to start the data synchronization task. You can view the progress of the task in the task list.

Execute the data synchronization task

Procedure

  1. Precheck: The preparation phase before the data migration or synchronization process starts. This phase confirms whether all required conditions are met, such as network connectivity, system permissions, and data format and integrity.

  2. Front facing module: The pre-processing module that prepares data before migration. In this step, the necessary triggers are created to process subsequent data.

  3. Incremental data collection: Collects only the data that has been modified or added since the last collection. This improves efficiency by avoiding the re-collection of unchanged data.

  4. Schema migration 1: Transfers schema definitions from the source system to the destination system while maintaining schema consistency. This is a preliminary migration step that creates the schema in the new environment. Most schema elements, such as schemas, sequences, functions, stored procedures, views, and indexes, are created in this step.

  5. Full data migration: Migrates all data from the source database or data warehouse to the destination system.

  6. Schema migration 2: The second schema adjustment, which fine-tunes and optimizes the schema or resolves issues that occurred in the initial migration. Foreign keys are created on some tables in this step.

  7. Incremental write: After the full data migration is complete, new and modified data is identified and migrated to the destination system. This step implements incremental data writes.

  8. Back facing module: After the data migration or synchronization is complete, the post-processing module performs a series of wrap-up tasks, which may involve clearing intermediate data, creating indexes, or optimizing performance.

  9. Full data verification: After the data migration, verification is required to ensure data integrity and accuracy. Full data verification checks all the migrated data to confirm that it is correctly copied to the new system.

  10. Incremental data verification: Similar to full data verification, but focused on incremental data. It ensures that any data changes made since the last verification are accurately captured and migrated.

Examples

You can log on to the Data Synchronization page of the new DTS console to view migration details.

  1. Precheck

    Note

    This step is automatically executed.

  2. Front facing module

    Note

    This step is automatically executed.

  3. Incremental data collection

    Note

    This step is automatically executed.

  4. Schema migration 1

    Most tables, indexes, and sequences are migrated automatically in this step. In this example, the table a is migrated.

    Note

    If a failed task occurs during the schema migration, you can use the schema revision feature to manually correct the SQL statements used in the migration.

  5. Full data migration

    The data of all tables within the specified range is migrated. You can view the data traffic and transmission speed on the Data Synchronization page.

  6. Schema migration 2

    This step focuses on migrating the foreign key constraints of tables. Adding foreign keys is important to ensure data consistency across tables after the data is migrated.

  7. Incremental write

    One-way data synchronization from the source database is performed. All data changes in the source database are subscribed to and forwarded, and then processed and applied on the destination database.

  8. Back facing module

  9. Full data verification

    This step compares data in the source and destination databases. In the example, 835,266 rows in the t1 table of the source database and in the t1 table of the destination database are compared. The data is consistent, so the verification is successful.

  10. Incremental data verification

    Once all the preceding tasks are completed, the system automatically starts the incremental synchronization and incremental data verification steps. In the incremental data verification step, the system compares incremental data differences between the two databases.

Cutover and release

After the migration is complete, we recommend that you add the destination database to a test environment and test it in a comprehensive manner. The data of the source database must be completely synchronized to the destination database (slight latency is normal during data replay). The test duration may vary depending on your business requirements. We recommend that you perform the test for at least one week to ensure that the destination database is running normally before you start cutover and release.

You must plan a cutover time window, such as 10 hours. During this period, the source database is out of service. After data replay is complete on the destination database, you can update the database connection string in the business and restart the business system. Then, you can start necessary basic business tests. If no issues are found in the tests, the destination database can go into service.

We recommend that you retain the source database for a period of time and discard it only after you confirm that the destination database is completely stable.

References

PolarDB for PostgreSQL (Compatible with Oracle) 2.0 available for commercial use