You can use Data Transmission Service (DTS) to migrate incremental data between PostgreSQL databases. The source or destination database can be a self-managed PostgreSQL database or an ApsaraDB RDS for PostgreSQL instance. DTS supports schema migration, full data migration, and incremental data migration. You can select all of the supported migration types to ensure service continuity. This topic describes how to migrate incremental data from a self-managed PostgreSQL database to an ApsaraDB RDS for PostgreSQL instance.

Prerequisites

  • The version of the self-managed PostgreSQL database is 10.1 to 13.
  • An ApsaraDB RDS for PostgreSQL instance is created. For more information, see Create an ApsaraDB RDS for PostgreSQL instance.
    Note To ensure compatibility, you must make sure that the database version of the ApsaraDB RDS for PostgreSQL instance is the same as the version of the self-managed PostgreSQL database.
  • The available storage space of the ApsaraDB RDS for PostgreSQL instance is larger than the total size of the data in the self-managed PostgreSQL database.

Precautions

  • DTS uses the read and write resources of the source and destination databases during full data migration. This may increase the load on the database servers. If the database performance is poor, the specifications are low, or the data volume is large, the database services may become unavailable. For example, DTS occupies a large amount of read and write resources in the following cases: a large number of slow SQL queries are performed on the source database, the tables have no primary keys, or a deadlock occurs in the destination database. Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours, for example, when the CPU utilization of the source and destination databases is less than 30%.
  • The tables to be migrated in the source database must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records. A query that you can use to find tables without such constraints is provided after this list.
  • If you select a schema as the object to be migrated and then create a table in the schema or rename a table by using the ALTER TABLE ... RENAME TO statement, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command before you write data to the table.
    Note Replace the schema and table in the preceding sample command with the actual schema name and table name.
  • To ensure that the delay time of incremental data migration is accurate, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.
  • During incremental data migration, DTS creates a replication slot for the source database. The name of the replication slot is prefixed with dts_sync_. DTS automatically clears historical replication slots every 90 minutes to reduce storage usage. A sketch that shows how to view and manually drop this replication slot is provided after this list.
    Note If the data migration task is released or fails, DTS automatically clears the replication slot. If a primary/secondary switchover is performed on the source ApsaraDB RDS for PostgreSQL instance, you must log on to the secondary database to clear the replication slot.
  • To ensure that the data migration task runs as expected, you can perform a primary/secondary switchover only on an ApsaraDB RDS for PostgreSQL V11 instance. In this case, you must set the rds_failover_slot_mode parameter to sync. For more information, see Use the failover slot feature for logical subscriptions.
    Warning If you perform a primary/secondary switchover on a self-managed PostgreSQL database or an ApsaraDB RDS for PostgreSQL instance of other versions, the data migration task stops.
  • If a data migration task fails, DTS automatically resumes the task. Before you switch your workloads to the destination instance, stop or release the data migration task. Otherwise, the data in the source database will overwrite the data in the destination instance after the task is resumed.
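
The precaution about PRIMARY KEY and UNIQUE constraints can be checked up front. The following query is a minimal sketch that lists the tables in a schema that have neither type of constraint. The schema name public is an assumption; adjust it to the schemas that you plan to migrate.

-- List tables in the public schema (assumed) that have no PRIMARY KEY or UNIQUE constraint.
SELECT n.nspname AS schema_name,
       c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_constraint con
    WHERE con.conrelid = c.oid
      AND con.contype IN ('p', 'u')
  );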
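
If you need to inspect or manually remove the replication slot that DTS creates, for example after a primary/secondary switchover as mentioned above, the following statements are a minimal sketch. The slot name dts_sync_example_slot is a hypothetical placeholder; use the name returned by the first query.

-- List logical replication slots whose names start with the DTS prefix.
SELECT slot_name, plugin, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot manually. The slot name below is a hypothetical placeholder.
SELECT pg_drop_replication_slot('dts_sync_example_slot');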

Limits

  • A single data migration task can migrate data from only one database. To migrate data from multiple databases, you must create a data migration task for each database.
  • The name of the source database cannot contain hyphens (-). For example, a database named dts-testdata cannot be migrated.
  • If a primary/secondary switchover is performed on the source database during incremental data migration, DTS does not support resumable transmission.
  • Data may be inconsistent between the primary and secondary nodes of the source database due to synchronization delay. Therefore, you must use the primary node as the data source when you migrate data.
    Note We recommend that you migrate data during off-peak hours. You can modify the transfer rate of full data migration based on the read/write performance of the source database. For more information, see Modify the transfer rate of full data migration.
  • Incremental data migration does not support the BIT data type. A query that you can use to find BIT columns in the source database is provided after this list.
  • During incremental data migration, DTS migrates only data manipulation language (DML) operations. DML operations include INSERT, DELETE, and UPDATE.
    Note Only data migration tasks that are created after October 1, 2020 can migrate data definition language (DDL) operations. To do this, you must create a trigger and function in the source database to obtain the DDL information before you configure the task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
  • After your workloads are switched to the destination database, newly written sequences do not increment from the maximum value of the sequences in the source database. Therefore, you must query the maximum value of the sequences in the source database before you switch your workloads to the destination database. Then, you must specify the queried maximum value as the starting value of the sequences in the destination database. A sketch of the required statements is provided after this list.
  • DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata.
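
To check the limit on the BIT data type, you can run a query such as the following minimal sketch against information_schema on the source database before you configure the task.

-- Find columns that use BIT or BIT VARYING types, which incremental data migration does not support.
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE data_type IN ('bit', 'bit varying')
  AND table_schema NOT IN ('pg_catalog', 'information_schema');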
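
The following statements are a minimal sketch of how sequence values can be carried over during a switchover. The sequence name public.orders_id_seq and the value 12345 are hypothetical placeholders; repeat the steps for each sequence that your application uses.

-- On the source database: query the current value of a sequence before you switch workloads.
SELECT last_value FROM public.orders_id_seq;

-- On the destination database: continue the sequence from the value that you queried.
SELECT setval('public.orders_id_seq', 12345);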

Billing

  • Schema migration and full data migration: The task configuration is free of charge. Internet traffic fees are charged only when data is migrated from Alibaba Cloud over the Internet. For more information, see Pricing.
  • Incremental data migration: The task configuration is charged. For more information, see Pricing.

Permissions required for database accounts

  • Self-managed PostgreSQL database
    • Schema migration: the USAGE permission on pg_catalog
    • Full data migration: the SELECT permission on the objects to be migrated
    • Incremental data migration: the permissions of the superuser role
  • ApsaraDB RDS for PostgreSQL instance
    • Schema migration: the CREATE and USAGE permissions on the objects to be migrated
    • Full data migration: the permissions of the schema owner
    • Incremental data migration: the permissions of the schema owner

For more information about how to create a database account and grant the required permissions, see the account management topics for the source and destination databases.
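
The following statements are a hedged sketch of how such accounts might be created and authorized, assuming the hypothetical role names dts_source and dts_dest and the public schema. Adapt them to your environment and follow the official account management topics for the exact requirements of your database versions.

-- On the self-managed source database: incremental data migration requires superuser permissions.
CREATE ROLE dts_source WITH LOGIN PASSWORD 'your_password' SUPERUSER;

-- On the destination ApsaraDB RDS for PostgreSQL database: grant schema-level privileges and
-- make the account the owner of the schema to be migrated (the public schema is an assumption).
CREATE ROLE dts_dest WITH LOGIN PASSWORD 'your_password';
GRANT CREATE, USAGE ON SCHEMA public TO dts_dest;
ALTER SCHEMA public OWNER TO dts_dest;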

Data migration process

The following process describes how DTS migrates the schemas and data of the source PostgreSQL database. The process prevents data migration failures that are caused by dependencies between objects.

Note For more information about schema migration, full data migration, and incremental data migration, see Terms.
1. Schema migration: DTS migrates the schemas of tables, views, sequences, functions, user-defined types, rules, domains, operators, and aggregates to the destination database.
   Note DTS does not migrate plug-ins. In addition, DTS does not migrate functions that are written in the C programming language.
2. Full data migration: DTS migrates the historical data of the required objects to the destination database.
3. Schema migration: DTS migrates the schemas of triggers and foreign keys to the destination database.
4. Incremental data migration: DTS migrates incremental data of the required objects to the destination database. Incremental data migration allows you to ensure service continuity when you migrate data from a self-managed PostgreSQL database.
   Note
   • During incremental data migration, DTS migrates only data manipulation language (DML) operations. DML operations include INSERT, DELETE, and UPDATE.
   • Incremental data migration does not support the BIT data type.

Before you begin

  1. Log on to the server where the self-managed PostgreSQL database resides.
  2. Set the value of the wal_level parameter in the postgresql.conf configuration file to logical. Then, restart the database service for the change to take effect. A sketch that you can use to verify the related settings is provided after this list.
    Note Skip this step if you do not need to perform incremental data migration.
  3. Add the CIDR blocks of DTS servers to the pg_hba.conf configuration file of the self-managed PostgreSQL database. Add only the CIDR blocks of the DTS servers that reside in the same region as the destination database. For more information, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
    Note For more information, see The pg_hba.conf File. Skip this step if you have set the IP address in the pg_hba.conf file to 0.0.0.0/0.
  4. Optional: Create a trigger and function in the source database to obtain the DDL information. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.
    Note Skip this step if you do not need to migrate DDL operations.
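
As a quick check of the source database configuration described in this list, you can run the following sketch in a SQL client as a superuser. Changing wal_level requires a restart of the PostgreSQL service before it takes effect.

-- Check whether logical decoding is enabled on the self-managed source database.
SHOW wal_level;                       -- expected value: logical

-- If the value is not logical, set it and then restart PostgreSQL.
ALTER SYSTEM SET wal_level = logical;

-- Logical replication also depends on these limits; leave room for the replication slot that DTS creates.
SHOW max_replication_slots;
SHOW max_wal_senders;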

Procedure

  1. Log on to the DTS console.
  2. In the left-side navigation pane, click Data Migration.
  3. At the top of the Migration Tasks page, select the region where the destination cluster resides.
  4. In the upper-right corner of the page, click Create Migration Task.
  5. Configure the source and destination databases.
    Task Name: DTS automatically generates a task name. We recommend that you specify an informative name for easy identification. You do not need to use a unique task name.
    Source Database
    • Instance Type: Select an instance type based on the deployment of the source database. In this example, select User-Created Database with Public IP Address.
      Note If you select other instance types, you must deploy the network environment for the source database. For more information, see Preparation overview.
    • Instance Region: If you select User-Created Database with Public IP Address as the instance type, you do not need to specify the Instance Region parameter.
    • Database Type: Select PostgreSQL.
    • Hostname or IP Address: Enter the endpoint that is used to connect to the self-managed PostgreSQL database. In this example, enter the public IP address.
    • Port Number: Enter the service port number of the self-managed PostgreSQL database. The port must be accessible over the Internet.
    • Database Name: Enter the name of the self-managed PostgreSQL database.
    • Database Account: Enter the account that is used to log on to the self-managed PostgreSQL database. For information about the permissions that are required for the account, see Permissions required for database accounts.
    • Database Password: Enter the password of the database account.
      Note After you specify the source database parameters, click Test Connectivity next to Database Password to verify whether the specified parameters are valid. If the specified parameters are valid, the Passed message appears. If the Failed message appears, click Check next to Failed. Modify the source database parameters based on the check results.
    Destination Database
    • Instance Type: Select RDS Instance.
    • Instance Region: Select the region where the destination RDS instance resides.
    • RDS Instance ID: Select the ID of the destination RDS instance.
    • Database Name: Enter the name of the destination database in the RDS instance. The name can be different from the name of the source database.
      Note Before you configure the data migration task, create a database in the ApsaraDB RDS for PostgreSQL instance. For more information, see Create a database.
    • Database Account: Enter the database account of the destination RDS instance. For information about the permissions that are required for the account, see Permissions required for database accounts.
    • Database Password: Enter the password of the database account.
      Note After you specify the destination database parameters, click Test Connectivity next to Database Password to verify whether the parameters are valid. If the specified parameters are valid, the Passed message appears. If the Failed message appears, click Check next to Failed. Modify the destination database parameters based on the check results.
  6. In the lower-right corner of the page, click Set Whitelist and Next.
    Note In this step, DTS adds the CIDR blocks of DTS servers to the whitelist of the ApsaraDB RDS for PostgreSQL instance. This ensures that the DTS servers can connect to the ApsaraDB RDS for PostgreSQL instance.
  7. Select the migration types and the objects to be migrated.
    Select the migration types:
    • To perform only full data migration, select Schema Migration and Full Data Migration.
    • To ensure service continuity during data migration, select Schema Migration, Full Data Migration, and Incremental Data Migration. In this example, select all three migration types.
    Note If Incremental Data Migration is not selected, do not write data to the source database during full data migration. This ensures data consistency between the source and destination databases.
    Select the objects to be migrated:
    Select one or more objects from the Available section and click the rightwards arrow icon to move the objects to the Selected section.
    Note
    • You can select columns, tables, or schemas as the objects to be migrated.
    • By default, after an object is migrated to the destination RDS instance, the name of the object remains the same as that in the self-managed PostgreSQL database. You can use the object name mapping feature to rename the objects that are migrated to the destination RDS instance. For more information, see Object name mapping.
    • If you use the object name mapping feature to rename an object, other objects that depend on the object may fail to be migrated.
    Specify whether to rename objects:
    You can use the object name mapping feature to rename the objects that are migrated to the destination instance. For more information, see Object name mapping.
    Specify the retry time for failed connections to the source or destination database:
    By default, if DTS fails to connect to the source or destination database, DTS retries within the next 12 hours. You can specify the retry time based on your business requirements. If DTS reconnects to the source and destination databases within the specified time, DTS resumes the data migration task. Otherwise, the data migration task fails.
    Note When DTS retries a connection, you are charged for the DTS instance. We recommend that you specify the retry time based on your business needs. You can also release the DTS instance at your earliest opportunity after the source and destination instances are released.
  8. In the lower-right corner of the page, click Precheck.
    Note
    • A precheck is performed before the data migration task starts. You can start the data migration task only after the task passes the precheck.
    • If the task fails to pass the precheck, you can click the info icon next to each failed item to view details.
      • You can troubleshoot the issues based on the causes and run a precheck again.
      • If you do not need to troubleshoot the issues, you can ignore failed items and run a precheck again.
  9. After the task passes the precheck, click Next.
  10. In the Confirm Settings dialog box, specify the Channel Specification parameter and select Data Transmission Service (Pay-As-You-Go) Service Terms.
  11. Click Buy and Start to start the data migration task.

Stop the migration task

Warning We recommend that you prepare a rollback solution to migrate incremental data from the destination database to the source database in real time. This allows you to minimize the negative impact of switching your workloads to the destination database. For more information, see Switch workloads to the destination database. If you do not need to switch your workloads, you can perform the following steps to stop the migration task.
  • Full data migration

    Do not manually stop a task during full data migration. Otherwise, the system may fail to migrate all data. Wait until the migration task automatically ends.

  • Incremental data migration

    The task does not automatically end during incremental data migration. You must manually stop the migration task.

    1. Wait until the task progress bar shows Incremental Data Migration and The migration task is not delayed. Then, stop writing data to the source database for a few minutes. In some cases, the progress bar shows the delay time of incremental data migration. You can also estimate the replication lag on the source database; a sketch is provided after these steps.
    2. After the status of incremental data migration changes to The migration task is not delayed, manually stop the migration task.
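
In addition to the progress information in the console, you can roughly estimate how far the DTS replication slot lags behind on the source database by using the following sketch. The dts_sync_ prefix is described in the Precautions section.

-- Estimate the lag, in bytes, between the current WAL position and the position confirmed by the DTS slot.
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';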