This topic describes how to migrate incremental data from a user-created PostgreSQL database to a MyBase for PostgreSQL instance by using Data Transmission Service (DTS). DTS supports schema migration, full data migration, and incremental data migration. You can select all of the supported migration types at the same time to ensure service continuity.

Prerequisites

Notes

  • DTS uses the read and write resources of the source and destination databases during full data migration. This may increase the load on the database servers. If the database performance is poor, the instance specifications are low, or the data volume is large, database services may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source database, the tables have no primary keys, or a deadlock occurs in the destination database. Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours, for example, when the CPU utilization of the source and destination databases is less than 30%.
  • The tables to be migrated in the source database must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • If you select a schema as the object to be migrated and create a table in the schema or run the RENAME command to rename the table, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command before you write data to the table.
    Note Replace the schema and table in the preceding sample command with the actual schema name and table name.
  • To ensure that the delay time of incremental data migration is accurate, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.
  • During incremental data migration, DTS creates a replication slot for the source database. The name of the replication slot is prefixed with dts_sync_. DTS automatically clears historical replication slots every 90 minutes to prevent them from occupying disk space.
    Note If a migration task is released or fails, DTS automatically clears the replication slot. If a primary/secondary switchover occurs, you must log on to the secondary database instance to clear the replication slot. Sample statements for viewing and dropping replication slots are provided after this list.
  • If a data migration task fails, DTS automatically resumes the task. Before you switch your workloads to the destination instance, stop or release the data migration task. Otherwise, the data in the source database will overwrite the data in the destination instance after the task is resumed.
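For reference, the following statements show how you might act on the notes above from a psql session on the source database. This is a sketch only: the schema and table name public.orders and the slot name dts_sync_example_slot are hypothetical, and only the dts_sync_ prefix comes from the note above.

  -- Enable full replica identity for a table that is created or renamed after the task starts
  ALTER TABLE public.orders REPLICA IDENTITY FULL;

  -- View the replication slots that DTS creates (the names are prefixed with dts_sync_)
  SELECT slot_name, plugin, active FROM pg_replication_slots WHERE slot_name LIKE 'dts_sync_%';

  -- Drop a leftover replication slot, for example after a primary/secondary switchover
  SELECT pg_drop_replication_slot('dts_sync_example_slot');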

Limits

  • A single data migration task can migrate data from only one database. To migrate data from multiple databases, you must create a data migration task for each database.
  • The name of the source database cannot contain hyphens (-), for example, dts-testdata.
  • Incremental migration does not support resumable transmission after a switchover between primary and secondary ApsaraDB RDS for PostgreSQL databases.
  • Data may be inconsistent between the primary and secondary nodes of the source database due to synchronization delay. Therefore, you must use the primary node as the data source when you migrate data.
    Note We recommend that you migrate data during off-peak hours. You can modify the transfer rate of full data migration based on the read/write performance of the source database. For more information, see Modify the transfer rate of full data migration.
  • During incremental data migration, DTS migrates only data manipulation language (DML) operations. DML operations include INSERT, DELETE, and UPDATE.
    Note Only data migration tasks that are created after October 1, 2020 can migrate data definition language (DDL) operations. To do this, you must create a trigger and function in the source database to obtain the DDL information before you configure the task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
  • After your workloads are switched to the destination database, newly written sequences do not increment from the maximum value of the sequences in the source database. Therefore, you must query the maximum value of the sequences in the source database before you switch your workloads to the destination database. Then, you must specify the queried maximum value as the starting value of the sequences in the destination database. A sample is provided after this list.
  • DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata.
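As an illustration of the sequence handling described above, the following statements are a sketch. The sequence name orders_id_seq is hypothetical, and 10000 stands for the maximum value that you queried from the source database.

  -- On the source database, before you switch your workloads: query the current value of the sequence
  SELECT last_value FROM orders_id_seq;

  -- On the destination database: continue the sequence from the queried value
  SELECT setval('orders_id_seq', 10000);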

Billing

  • Schema migration and full data migration: the instance configuration fee is free of charge. An Internet traffic fee is charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Pricing.
  • Incremental data migration: the instance configuration fee is charged. For more information, see Pricing.

Permissions required for database accounts

  • User-created PostgreSQL database: schema migration requires the USAGE permission on pg_catalog, full data migration requires the SELECT permission on the objects to be migrated, and incremental data migration requires the superuser permission.
  • MyBase for PostgreSQL instance: schema migration requires the CREATE and USAGE permissions on the objects to be migrated, and full data migration and incremental data migration require the OWNER permission on schemas.

Create and authorize a database account:

  • For information about how to create and authorize an account of a user-created PostgreSQL database, see CREATE USER and GRANT. A sample is provided after this list.
  • For information about how to create and authorize an account of a MyBase for PostgreSQL instance, see Create an account.
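The following statements are a minimal sketch of how an account for a user-created PostgreSQL database might be created and granted the permissions listed in the preceding table. The account name dts_migrator, the password placeholder, and the schema public are assumptions; adjust them to your environment.

  -- The superuser permission is required for incremental data migration
  CREATE USER dts_migrator WITH PASSWORD 'your_password' SUPERUSER;

  -- The SELECT permission on the objects to be migrated is sufficient for full data migration
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO dts_migrator;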

Data migration process

To avoid data migration failures caused by dependencies between objects, DTS migrates the schemas and data of the source PostgreSQL database in the following order:

Note For more information about schema migration, full data migration, and incremental data migration, see Terms.
1. Schema migration: DTS migrates the schemas of tables, views, sequences, functions, user-defined types, rules, domains, operations, and aggregates to the destination database.
   Note DTS does not migrate plug-ins. In addition, DTS does not migrate functions that are written in the C programming language.
2. Full data migration: DTS migrates historical data of the required objects to the destination database.
3. Schema migration: DTS migrates the schemas of triggers and foreign keys to the destination database.
4. Incremental data migration: DTS migrates incremental data of the required objects to the destination database.
   Note Incremental data migration does not support the BIT data type.

Before you begin

  1. Log on to the server where the user-created PostgreSQL database resides.
  2. Set the value of the wal_level parameter in the postgresql.conf configuration file to logical. A sample check is provided at the end of this section.
    Note Skip this step if you do not need to perform incremental data migration.
  3. Add the CIDR blocks of DTS servers to the pg_hba.conf configuration file of the user-created PostgreSQL database. Add only the CIDR blocks of the DTS servers that reside in the same region as the destination database. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
    Note For more information, see The pg_hba.conf File. Skip this step if you have set the IP address in the pg_hba.conf file to 0.0.0.0/0.
  4. Optional: Create a trigger and function in the source database to obtain the DDL information. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
    Note Skip this step if you do not need to migrate DDL operations.
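As a sketch only, the following statements show how you might check and set the wal_level parameter from a psql session instead of editing postgresql.conf directly. The change takes effect only after the PostgreSQL service is restarted.

  -- Check the current WAL level
  SHOW wal_level;

  -- Requires PostgreSQL 9.4 or later; restart the service for the change to take effect
  ALTER SYSTEM SET wal_level = logical;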

Procedure

  1. Log on to the DTS console.
  2. In the left-side navigation pane, click Data Migration.
  3. At the top of the Migration Tasks page, select the region where the destination cluster resides.
  4. In the upper-right corner of the page, click Create Migration Task.
  5. Configure the source and destination databases for the data migration task.
    Task Name: DTS automatically generates a task name. We recommend that you specify a name that can describe your business for easy identification. You do not need to use a unique task name.
    Source Database
    • Instance Type: Select an instance type based on the location where the source database is deployed. In this example, select User-Created Database with Public IP Address.
      Note If you select another instance type, you must prepare the environment that is required for the source database. For more information, see Preparation overview.
    • Instance Region: If the instance type is set to User-Created Database with Public IP Address, you do not need to specify the Instance Region parameter.
    • Database Type: Select PostgreSQL.
    • Hostname or IP Address: Enter the endpoint that is used to connect to the user-created PostgreSQL database. In this example, enter the public IP address.
    • Port Number: Enter the service port number of the user-created PostgreSQL database. The port must be accessible over the Internet.
    • Database Name: Enter the name of the user-created PostgreSQL database to be migrated.
    • Database Account: Enter the account of the user-created PostgreSQL database. For information about the database account permissions that are required for migration, see Permissions required for database accounts.
    • Database Password: Enter the password of the specified database account.
      Note After you specify the source database parameters, click Test Connectivity next to Database Password to verify whether the specified parameters are valid. If the specified parameters are valid, the Passed message appears. If the Failed message appears, click Check next to Failed. Modify the source database parameters based on the check results.
    Destination Database
    • Instance Type: Select RDS Instance.
    • Instance Region: Select the region where the destination instance resides.
    • RDS Instance ID: Select the ID of the destination instance.
    • Database Name: Enter the name of the database to which data is to be migrated. The name can be different from the name of the source database.
      Note Make sure that you have created the specified database in the destination instance. For more information, see Create a database.
    • Database Account: Enter the account of the database in the destination instance. For information about the database account permissions that are required for migration, see Permissions required for database accounts.
    • Database Password: Enter the password of the specified database account.
      Note After you specify the destination database parameters, click Test Connectivity next to Database Password to verify whether the parameters are valid. If the specified parameters are valid, the Passed message appears. If the Failed message appears, click Check next to Failed. Modify the destination database parameters based on the check results.
  6. In the lower-right corner of the page, click Set Whitelist and Next.
    Note The IP addresses of DTS servers are automatically added to a whitelist of the destination instance. This ensures that the DTS servers can connect to the destination instance.
  7. Select migration types and objects to be migrated.
    Migration Types
    • To migrate only full data, select Schema Migration and Full Data Migration.
    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration. In this example, select all three migration types.
    Note If you do not select Incremental Data Migration, do not write data to the source database during data migration. This ensures data consistency between the source and destination databases.
    Objects

    Select objects from the Available section and click the Right arrow icon to move the objects to the Selected section.

    Note
    • You can select columns, tables, or schemas as the objects to be migrated.
    • By default, after an object is migrated to the destination instance, the name of the object remains the same as that in the user-created PostgreSQL database. You can use the object name mapping feature to change the names of the objects that are migrated to the destination instance. For more information, see Object name mapping.
    • If you use the object name mapping feature on an object, other objects that are dependent on the object may fail to be migrated.
  8. In the lower-right corner of the page, click Precheck.
    Notice
    • Before you can start the data migration task, a precheck is performed. DTS can migrate data only after the task passes the precheck.
    • If the precheck fails, click the Prompt icon next to each failed item to view details. Fix the issues as prompted and run the precheck again.
  9. After the task passes the precheck, click Next.
  10. In the Confirm Settings dialog box, specify the Channel Specification parameter and select Data Transmission Service (Pay-As-You-Go) Service Terms.
  11. Click Buy and Start to start the migration task.

Stop the migration task

Warning We recommend that you prepare a rollback solution to migrate incremental data from the destination database to the source database in real time. This allows you to minimize the negative impact of switching your workloads to the destination database. For more information, see Switch workloads to the destination database. If you do not need to switch your workloads, you can perform the following steps to stop the migration task.
  • Full data migration

    Do not manually stop a task during full data migration. Otherwise, the system may fail to migrate all data. Wait until the migration task automatically ends.

  • Incremental data migration

    The task does not automatically end during incremental data migration. You must manually stop the migration task.

    1. Wait until the task progress bar shows Incremental Data Migration and The migration task is not delayed. Then, stop writing data to the source database for a few minutes. In some cases, the progress bar shows the delay time of incremental data migration.
    2. After the status of incremental data migration changes to The migration task is not delayed, manually stop the migration task.