Different database engines support different data types. When Data Transmission Service (DTS) migrates data between heterogeneous databases, it converts the data types of the source database into those of the destination database during schema migration. This topic lists the data type mappings so that you can evaluate the impact of data migration on your business.
Overview
You can view the data type mappings between heterogeneous databases based on the following migration scenarios:
If the data type of a time field in the source database is TIMESTAMP WITH TIME ZONE and the corresponding field in the destination database uses a data type without time zone support, such as DATETIME, the time zone information in the field is lost.
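The loss described above can be sketched with Python's `datetime` module. This is an illustration of the general behavior, not DTS internals: once the offset is stripped to fit a zone-less column, the original time zone cannot be recovered.

```python
from datetime import datetime, timezone, timedelta

# A TIMESTAMP WITH TIME ZONE value carries an offset (here UTC+08:00).
src = datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone(timedelta(hours=8)))

# Storing it in a DATETIME-like column keeps only the wall-clock fields;
# the offset is discarded and cannot be recovered from the stored value.
dst = src.replace(tzinfo=None)

print(src.isoformat())  # 2024-05-01T12:00:00+08:00
print(dst.isoformat())  # 2024-05-01T12:00:00
```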
Migrate data from a PolarDB for MySQL cluster, an ApsaraDB RDS for MySQL instance, or a self-managed MySQL database
The following table lists the data type mappings between MySQL and heterogeneous databases. The source instance can be a PolarDB for MySQL cluster, an ApsaraDB RDS for MySQL instance, or a self-managed MySQL database. The destination instance can be an AnalyticDB for MySQL V2.0 cluster or an AnalyticDB for PostgreSQL instance.
If the value range of the data to be migrated from the source instance exceeds the range supported by DTS, the accuracy of the data migrated to the destination instance decreases.
Type | Data type of the source instance | Value range | Data type of AnalyticDB for MySQL | Data type of AnalyticDB for PostgreSQL |
Integer | BIT[(M)] | 1 ~ 64 | VARCHAR | BIT[(M)] |
TINYINT[(M)] | -128 ~ 127 | TINYINT | SMALLINT | |
TINYINT[(M)] [UNSIGNED] | 0 ~ 255 | SMALLINT | SMALLINT | |
SMALLINT[(M)] | -32768 ~ 32767 | SMALLINT | SMALLINT | |
SMALLINT[(M)] [UNSIGNED] | 0 ~ 65535 | INT | INTEGER | |
MEDIUMINT[(M)] | -8388608 ~ 8388607 | INT | INTEGER | |
MEDIUMINT[(M)] [UNSIGNED] | 0 ~ 16777215 | INT | INTEGER | |
INT[(M)] | -2147483648 ~ 2147483647 | INT | INTEGER | |
INT[(M)] [UNSIGNED] | 0 ~ 4294967295 | BIGINT | BIGINT | |
BIGINT[(M)] | -9223372036854775808 ~ 9223372036854775807 | BIGINT | BIGINT | |
BIGINT[(M)] [UNSIGNED] | 0 ~ 18446744073709551615 | DECIMAL(20,0) | NUMERIC(20) | |
Decimal | DECIMAL[(M[,D])] | M: 0 to 65 D: 0 to 30 | DECIMAL[(M[,D])] | DECIMAL[(M[,D])] |
FLOAT(p) | 1.175494351E-38 ~ 3.402823466E+38 | FLOAT | REAL | |
DOUBLE[(M,D)] | 2.2250738585072014E-308 ~ 1.7976931348623157E+308 | DOUBLE | DOUBLE PRECISION | |
Date and time | DATE | 1000-01-01~9999-12-31 Note The format is YYYY-MM-DD, in UTC. | DATE | DATE |
DATETIME[(fsp)] | 1000-01-01 00:00:00.000000 ~ 9999-12-31 23:59:59.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | DATETIME | TIMESTAMP | |
TIMESTAMP[(fsp)] | 1970-01-01 00:00:01.000000 ~ 2038-01-19 03:14:07.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | TIMESTAMP | TIMESTAMP WITH TIME ZONE | |
TIME[(fsp)] | -838:59:59.000000 ~ 838:59:59.000000 Note The format is hh:mm:ss[.fraction], in UTC. | TIME | TIME WITHOUT TIME ZONE | |
YEAR[(4)] | 1901 to 2155, or 0000 | INT | INTEGER | |
String | CHAR[(M)] | 0 to 255 characters | VARCHAR | CHAR |
VARCHAR(M) | 0 to 65,535 characters | VARCHAR | VARCHAR | |
BINARY[(M)] | 0 to 255 bytes | VARBINARY | BYTEA | |
VARBINARY(M) | 0 to 65,535 bytes | VARBINARY | BYTEA | |
TINYBLOB | 255 (2^8 - 1) bytes | VARBINARY | BYTEA | |
TINYTEXT | 255 (2^8 - 1) characters | VARCHAR | TEXT | |
BLOB | 65,535 (2^16 - 1) bytes | VARBINARY | BYTEA | |
TEXT | 65,535 (2^16 - 1) characters | VARCHAR | TEXT | |
MEDIUMBLOB | 16,777,215 (2^24 - 1) bytes | VARBINARY | BYTEA | |
MEDIUMTEXT | 16,777,215 (2^24 - 1) characters | VARCHAR | TEXT | |
LONGBLOB | 4,294,967,295 or 4 GB (2^32 - 1) bytes | VARBINARY | BYTEA | |
LONGTEXT | 4,294,967,295 or 4 GB (2^32 - 1) characters | VARCHAR | TEXT | |
ENUM('value1','value2',...) | An ENUM column can have a maximum of 65,535 distinct elements. | VARCHAR | VARCHAR(128) | |
SET('value1','value2',...) | A SET column can have a maximum of 64 distinct elements. | VARCHAR | VARCHAR(128) | |
Spatial | GEOMETRY | Geometry values of any type | VARBINARY | POLYGON |
POINT | None | VARBINARY | POINT | |
LINESTRING | None | VARBINARY | PATH | |
POLYGON | None | VARBINARY | POLYGON | |
MULTIPOINT | None | VARBINARY | POLYGON | |
MULTILINESTRING | None | VARBINARY | PATH | |
MULTIPOLYGON | None | VARBINARY | POLYGON | |
GEOMETRYCOLLECTION | A collection of geometry values of any type | VARBINARY | POLYGON | |
JSON | JSON | None | JSON | JSON |
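A type mapping like the one in the table above can be distilled into a simple lookup, which is handy when checking a destination schema before migration. The names `TYPE_MAP` and `to_adb_pg` below are illustrative, not part of DTS; the entries are copied from the AnalyticDB for PostgreSQL column of the table.

```python
# Hypothetical lookup table distilled from the mapping above
# (MySQL source type -> AnalyticDB for PostgreSQL destination type).
TYPE_MAP = {
    "TINYINT": "SMALLINT",
    "SMALLINT": "SMALLINT",
    "MEDIUMINT": "INTEGER",
    "INT": "INTEGER",
    "INT UNSIGNED": "BIGINT",
    "BIGINT": "BIGINT",
    "BIGINT UNSIGNED": "NUMERIC(20)",
    "TEXT": "TEXT",
    "BLOB": "BYTEA",
    "JSON": "JSON",
}

def to_adb_pg(mysql_type: str) -> str:
    """Return the AnalyticDB for PostgreSQL type for a MySQL column type."""
    try:
        return TYPE_MAP[mysql_type.upper()]
    except KeyError:
        raise ValueError(f"no mapping defined for {mysql_type!r}")

print(to_adb_pg("bigint unsigned"))  # NUMERIC(20)
```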
Destination instance: a DataHub project, a Message Queue for Apache Kafka instance, or a self-managed Kafka cluster
Category | Data type of the source instance | Value range | Data type of DataHub | Data type of a Message Queue for Apache Kafka instance or a self-managed Kafka cluster |
Integer | BIT[(M)] | 1 ~ 64 | BOOLEAN | STRING | Consistent with the data types of MySQL or PolarDB for MySQL |
TINYINT[(M)] | -128 ~ 127 | BIGINT | ||
TINYINT[(M)] [UNSIGNED] | 0 ~ 255 | BIGINT | ||
SMALLINT[(M)] | -32768 ~ 32767 | BIGINT | ||
SMALLINT[(M)] [UNSIGNED] | 0 ~ 65535 | BIGINT | ||
MEDIUMINT[(M)] | -8388608 ~ 8388607 | BIGINT | ||
MEDIUMINT[(M)] [UNSIGNED] | 0 ~ 16777215 | BIGINT | ||
INT[(M)] | -2147483648 ~ 2147483647 | BIGINT | ||
INT[(M)] [UNSIGNED] | 0 ~ 4294967295 | BIGINT | ||
BIGINT[(M)] | -9223372036854775808 ~ 9223372036854775807 | BIGINT | ||
BIGINT[(M)] [UNSIGNED] | 0 ~ 18446744073709551615 | BIGINT | ||
Decimal | DECIMAL[(M[,D])] | M: 0 to 65 D: 0 to 30 | DECIMAL | |
FLOAT(p) | 1.175494351E-38 ~ 3.402823466E+38 | DOUBLE | ||
DOUBLE[(M,D)] | 2.2250738585072014E-308 ~ 1.7976931348623157E+308 | DOUBLE | ||
Date and time | DATE | 1000-01-01~9999-12-31 Note The format is YYYY-MM-DD, in UTC. | TIMESTAMP | |
DATETIME[(fsp)] | 1000-01-01 00:00:00.000000 ~ 9999-12-31 23:59:59.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | TIMESTAMP | ||
TIMESTAMP[(fsp)] | 1970-01-01 00:00:01.000000 ~ 2038-01-19 03:14:07.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | TIMESTAMP | ||
TIME[(fsp)] | -838:59:59.000000 ~ 838:59:59.000000 Note The format is hh:mm:ss[.fraction], in UTC. | STRING | ||
YEAR[(4)] | 1901 to 2155, or 0000 | STRING | ||
String | CHAR[(M)] | 0 to 255 characters | STRING | |
VARCHAR(M) | 0 to 65,535 characters | STRING | ||
BINARY[(M)] | 0 to 255 bytes | STRING | ||
VARBINARY(M) | 0 to 65,535 bytes | STRING | ||
TINYBLOB | 255 (2^8 - 1) bytes | STRING | ||
TINYTEXT | 255 (2^8 - 1) characters | STRING | ||
BLOB | 65,535 (2^16 - 1) bytes | STRING | ||
TEXT | 65,535 (2^16 - 1) characters | STRING | ||
MEDIUMBLOB | 16,777,215 (2^24 - 1) bytes | STRING | ||
MEDIUMTEXT | 16,777,215 (2^24 - 1) characters | STRING | ||
LONGBLOB | 4,294,967,295 or 4 GB (2^32 - 1) bytes | STRING | ||
LONGTEXT | 4,294,967,295 or 4 GB (2^32 - 1) characters | STRING | ||
ENUM('value1','value2',...) | An ENUM column can have a maximum of 65,535 distinct elements. | STRING | ||
SET('value1','value2',...) | A SET column can have a maximum of 64 distinct elements. | STRING | ||
Spatial | GEOMETRY | Geometry values of any type | STRING | |
POINT | N/A | STRING | ||
LINESTRING | N/A | STRING | ||
POLYGON | N/A | STRING | ||
MULTIPOINT | N/A | STRING | ||
MULTILINESTRING | N/A | STRING | ||
MULTIPOLYGON | N/A | STRING | ||
GEOMETRYCOLLECTION | A collection of geometry values of any type | STRING | ||
JSON | JSON | N/A | STRING |
Destination instance: a MaxCompute project or an Elasticsearch cluster
Category | Data type of the source instance | Value range | Data type of MaxCompute | Data type of Elasticsearch |
Integer | BIT[(M)] | 1 ~ 64 | BOOLEAN | STRING | BOOLEAN | LONG Note If the data is only one byte long, we recommend that you use the BOOLEAN data type in Elasticsearch. |
TINYINT[(M)] | -128 ~ 127 | BIGINT | SHORT | |
TINYINT[(M)] [UNSIGNED] | 0 ~ 255 | BIGINT | INTEGER | |
SMALLINT[(M)] | -32768 ~ 32767 | BIGINT | SHORT | |
SMALLINT[(M)] [UNSIGNED] | 0 ~ 65535 | BIGINT | INTEGER | |
MEDIUMINT[(M)] | -8388608 ~ 8388607 | BIGINT | INTEGER | |
MEDIUMINT[(M)] [UNSIGNED] | 0 ~ 16777215 | BIGINT | INTEGER | |
INT[(M)] | -2147483648 ~ 2147483647 | BIGINT | INTEGER | |
INT[(M)] [UNSIGNED] | 0 ~ 4294967295 | BIGINT | LONG | |
BIGINT[(M)] | -9223372036854775808 ~ 9223372036854775807 | BIGINT | LONG | |
BIGINT[(M)] [UNSIGNED] | 0 ~ 18446744073709551615 | BIGINT | LONG | |
Decimal | DECIMAL[(M[,D])] | M: 0 to 65 D: 0 to 30 | DOUBLE | DOUBLE Note If the DECIMAL value contains a decimal point, we recommend that you use the TEXT data type in Elasticsearch to ensure data consistency. |
FLOAT(p) | 1.175494351E-38 ~ 3.402823466E+38 | DOUBLE | FLOAT | |
DOUBLE[(M,D)] | 2.2250738585072014E-308 ~ 1.7976931348623157E+308 | DOUBLE | DOUBLE | |
Date and time | DATE | 1000-01-01~9999-12-31 Note The format is YYYY-MM-DD, in UTC. | DATETIME | DATE Note The format is YYYY-MM-DD, in UTC. For more information, see Date format mappings. |
DATETIME[(fsp)] | 1000-01-01 00:00:00.000000 ~ 9999-12-31 23:59:59.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | DATETIME | DATE Note The DATE format is yyyy-MM-dd'T'HH:mm:ss, in UTC. If DATE is accurate to microseconds, its format is yyyy-MM-dd'T'HH:mm:ss.S. For more information, see Date format mappings. | |
TIMESTAMP[(fsp)] | 1970-01-01 00:00:01.000000 ~ 2038-01-19 03:14:07.999999 Note The format is YYYY-MM-DD hh:mm:ss[.fraction], in UTC. | DATETIME | DATE Note The DATE format is yyyy-MM-dd'T'HH:mm:ss, in UTC. If DATE is accurate to microseconds, its format is yyyy-MM-dd'T'HH:mm:ss.S. For more information, see Date format mappings. | |
TIME[(fsp)] | -838:59:59.000000 ~ 838:59:59.000000 Note The format is hh:mm:ss[.fraction], in UTC. | STRING | DATE Note The DATE format is YYYY-MM-DD, in UTC. For more information, see Date format mappings. | |
YEAR[(4)] | 1901 to 2155, or 0000 | STRING | DATE Note The DATE format is yyyy, in UTC. For more information, see Date format mappings. | |
String | CHAR[(M)] | 0 to 255 characters | STRING | TEXT |
VARCHAR(M) | 0 to 65,535 characters | STRING | TEXT | |
BINARY[(M)] | 0 to 255 bytes | STRING | BINARY | |
VARBINARY(M) | 0 to 65,535 bytes | STRING | BINARY | |
TINYBLOB | 255 (2^8 - 1) bytes | STRING | BINARY | |
TINYTEXT | 255 (2^8 - 1) characters | STRING | TEXT | |
BLOB | 65,535 (2^16 - 1) bytes | STRING | BINARY | |
TEXT | 65,535 (2^16 - 1) characters | STRING | TEXT | |
MEDIUMBLOB | 16,777,215 (2^24 - 1) bytes | STRING | BINARY | |
MEDIUMTEXT | 16,777,215 (2^24 - 1) characters | STRING | TEXT | |
LONGBLOB | 4,294,967,295 or 4 GB (2^32 - 1) bytes | STRING | BINARY | |
LONGTEXT | 4,294,967,295 or 4 GB (2^32 - 1) characters | STRING | TEXT | |
ENUM('value1','value2',...) | An ENUM column can have a maximum of 65,535 distinct elements. | STRING | KEYWORD | |
SET('value1','value2',...) | A SET column can have a maximum of 64 distinct elements. | STRING | KEYWORD | |
Spatial | GEOMETRY | Geometry values of any type | STRING | GEO_SHAPE |
POINT | N/A | STRING | GEO_POINT | |
LINESTRING | N/A | STRING | GEO_SHAPE | |
POLYGON | N/A | STRING | GEO_SHAPE | |
MULTIPOINT | N/A | STRING | GEO_SHAPE | |
MULTILINESTRING | N/A | STRING | GEO_SHAPE | |
MULTIPOLYGON | N/A | STRING | GEO_SHAPE | |
GEOMETRYCOLLECTION | A collection of geometry values of any type | STRING | GEO_SHAPE | |
JSON | JSON | N/A | STRING | OBJECT |
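The Elasticsearch date formats in the table use Java date-pattern syntax (`yyyy-MM-dd'T'HH:mm:ss`), which is ISO 8601. A minimal sketch of reshaping a MySQL DATETIME literal into that format, assuming the value is already in the intended time zone:

```python
from datetime import datetime

# Parse a MySQL DATETIME literal (YYYY-MM-DD hh:mm:ss) ...
value = datetime.strptime("2024-05-01 08:30:15", "%Y-%m-%d %H:%M:%S")

# ... and render it in the Elasticsearch-side format described above
# (yyyy-MM-dd'T'HH:mm:ss in Java date-pattern syntax, i.e. ISO 8601).
es_value = value.strftime("%Y-%m-%dT%H:%M:%S")
print(es_value)  # 2024-05-01T08:30:15
```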
Migrate data from an Oracle database
The following table describes the data type mappings between a self-managed Oracle database and heterogeneous databases. The destination instance can be a self-managed MySQL database, a PolarDB for MySQL cluster, an AnalyticDB for MySQL cluster, an AnalyticDB for PostgreSQL instance, or a PolarDB for PostgreSQL (Compatible with Oracle) cluster.
If the value range of the data to be migrated from the source instance exceeds the range supported by DTS, the accuracy of the data migrated to the destination instance decreases.
Type | Data type of Oracle | Value range | Data type of MySQL, PolarDB for MySQL, or PolarDB-X | Data type of ApsaraDB RDS for PPAS | Data type of AnalyticDB for MySQL | Data type of AnalyticDB for PostgreSQL | Data type of PolarDB for PostgreSQL (Compatible with Oracle) |
Numeric | NUMBER(p,s) | 1 to 22 bytes The argument p indicates the precision. Valid values: 1 to 38. The argument s indicates the scale. Valid values: -84 to 127. | DECIMAL[(p[,s])] | NUMBER[(p[,s])] | DECIMAL | TINYINT | SMALLINT | INTEGER | BIGINT | DECIMAL | TINYINT | SMALLINT | INTEGER | BIGINT | NUMBER(p,s) |
FLOAT(p) | 1 to 22 bytes The argument p indicates the binary precision. Valid values: 1 to 126 bits. | DOUBLE | DOUBLE PRECISION | DOUBLE | DOUBLE PRECISION | DOUBLE PRECISION | |
BINARY_FLOAT | A 32-bit floating-point number (4 bytes) | DECIMAL(65,8) | REAL | DOUBLE | DOUBLE PRECISION | REAL | |
BINARY_DOUBLE | A 64-bit floating-point number (8 bytes) | DOUBLE | DOUBLE PRECISION | DOUBLE | DOUBLE PRECISION | DOUBLE PRECISION | |
Date and time | DATE | None | DATETIME | DATE | DATETIME | TIMESTAMP(0) | DATE |
TIMESTAMP [(fractional_seconds_precision)] | None | DATETIME[(fractional_seconds_precision)] | TIMESTAMP [(fractional_seconds_precision)] | DATETIME | TIMESTAMP | TIMESTAMP [(fractional_seconds_precision)] | |
TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | None | DATETIME[(fractional_seconds_precision)] | TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | TIMESTAMP | TIMESTAMP WITH TIME ZONE | TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | |
TIMESTAMP [(fractional_seconds_precision)] WITH LOCAL TIME ZONE | None | DATETIME[(fractional_seconds_precision)] | TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | DATETIME | TIMESTAMP WITH TIME ZONE | TIMESTAMP [(fractional_seconds_precision)] WITH TIME ZONE | |
INTERVAL YEAR [(year_precision)] TO MONTH | None | Not supported | Not supported | VARCHAR | VARCHAR(32) | INTERVAL | |
INTERVAL DAY [(day_precision)] TO SECOND [(fractional_seconds_precision)] | None | Not supported | Not supported | VARCHAR | VARCHAR(32) | INTERVAL | |
String | CHAR [(size [BYTE | CHAR])] | 2,000 bytes | CHAR[(n)] | CHAR[(n)] | VARCHAR | CHAR | CHAR [(size [BYTE | CHAR])] |
NCHAR[(size)] | 2,000 bytes | NATIONAL CHAR[(n)] | NCHAR[(n)] | VARCHAR | VARCHAR | NCHAR[(size)] | |
VARCHAR2(size [BYTE | CHAR]) | If MAX_STRING_SIZE is set to EXTENDED, the maximum size is 32,767 bytes. If MAX_STRING_SIZE is set to STANDARD, the maximum size is 4,000 bytes. | VARCHAR(n) | VARCHAR2[(n)] | VARCHAR | VARCHAR | VARCHAR2(size [BYTE | CHAR]) | |
NVARCHAR2(size) | If MAX_STRING_SIZE is set to EXTENDED, the maximum size is 32,767 bytes. If MAX_STRING_SIZE is set to STANDARD, the maximum size is 4,000 bytes. | NATIONAL VARCHAR[(n)] | VARCHAR2[(n)] | VARCHAR | VARCHAR | NVARCHAR2(size) | |
LONG | The maximum size is 2 GB (2^31 - 1). | LONGTEXT | LONG | VARCHAR | TEXT | LONG | |
RAW(size) | The maximum size is 32,767 bytes or 2,000 bytes. | VARBINARY(2000) | RAW(size) | VARBINARY | BYTEA | RAW(size) | |
LONG RAW | The maximum size is 2 GB. | LONGBLOB | LONG RAW | VARBINARY | BYTEA | LONG RAW | |
CLOB | The maximum size is (4 GB - 1) × DB_BLOCK_SIZE. | LONGTEXT | CLOB | VARCHAR | TEXT | CLOB | |
NCLOB | The maximum size is (4 GB - 1) × DB_BLOCK_SIZE. | LONGTEXT | NCLOB | VARCHAR | TEXT | CLOB | |
BLOB | The maximum size is (4 GB - 1) × DB_BLOCK_SIZE. | LONGBLOB | BLOB | VARBINARY | BYTEA | BLOB | |
BFILE | The maximum size is 4 GB. | Not supported | Not supported | Not supported | Not supported | Not supported | |
JSON | JSON | The maximum size is 32 MB. | Not supported | Not supported | JSON | JSON | JSON |
ROWID | ROWID | 64 characters | Not supported | Not supported | ROWID | OID | VARCHAR |
Spatial | Customization required | Not supported |
Destination instance: MySQL, PolarDB for MySQL, or PolarDB-X
If a CHAR field in the source database is greater than 255 bytes in length, DTS converts this field to the VARCHAR(n) type in the destination database.
MySQL does not support the following Oracle data types: BFILE, INTERVAL YEAR TO MONTH, and INTERVAL DAY TO SECOND. Therefore, DTS does not convert these data types during schema migration.
If a table to be migrated contains columns of these data types, schema migration fails. Make sure that such columns are excluded from the objects to be migrated.
The TIMESTAMP data type of MySQL databases does not store time zone information. However, the TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE data types in Oracle databases do. Therefore, DTS converts the values of these data types to UTC time in the destination database.
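The UTC normalization above can be illustrated in a few lines. This is a sketch of the general conversion, not DTS code: the offset-aware value is shifted to UTC, then the offset is dropped so the result fits a zone-less MySQL TIMESTAMP column.

```python
from datetime import datetime, timezone, timedelta

# An Oracle TIMESTAMP WITH TIME ZONE value at UTC+05:30.
src = datetime(2024, 5, 1, 9, 0, 0,
               tzinfo=timezone(timedelta(hours=5, minutes=30)))

# MySQL TIMESTAMP stores no offset, so the value is normalized to UTC
# first and the (now redundant) offset is stripped.
utc_naive = src.astimezone(timezone.utc).replace(tzinfo=None)
print(utc_naive)  # 2024-05-01 03:30:00
```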
Destination instance: ApsaraDB RDS for PPAS
ApsaraDB RDS for PPAS does not support the TIMESTAMP[(fractional_seconds_precision)] WITH LOCAL TIME ZONE data type. DTS converts the data of this type to UTC time and then stores the data in the destination ApsaraDB RDS for PPAS instance by using the TIMESTAMP[(fractional_seconds_precision)] WITH TIME ZONE data type.
Destination instance: AnalyticDB for PostgreSQL
If an Oracle data type is not supported by AnalyticDB for PostgreSQL, DTS converts the data type to BYTEA. If the conversion fails, DTS sets the field value to NULL.
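The fallback behavior described above can be sketched as a try-then-NULL conversion. The function name and logic below are purely illustrative of the described policy, not the DTS implementation:

```python
# Illustrative sketch: unsupported source types are carried over as raw
# bytes (BYTEA); if even that conversion fails, the field becomes NULL.
def convert_unsupported(value):
    try:
        return bytes(value)  # best-effort raw-byte representation
    except (TypeError, ValueError):
        return None          # the described behavior: field set to NULL

print(convert_unsupported(b"\x01\x02"))  # b'\x01\x02'
print(convert_unsupported(3.14))         # None
```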
Migrate data from an SQL Server database
The following table lists the data type mappings between SQL Server and AnalyticDB databases. The source instance can be a self-managed SQL Server database or an ApsaraDB RDS for SQL Server instance. The destination instance can be an AnalyticDB for MySQL cluster or an AnalyticDB for PostgreSQL instance.
If the value range of the data to be migrated from the source instance exceeds the range supported by DTS, the accuracy of the data migrated to the destination instance decreases.
Type | Data type of SQL Server | Value range | Data type of AnalyticDB for MySQL | Data type of AnalyticDB for PostgreSQL |
Integer | BIT | An INTEGER data type that can take a value of 1, 0, or NULL | BOOLEAN | BIT(1) |
TINYINT | 0 to 255 | TINYINT | SMALLINT | |
SMALLINT | -32768 (-2^15) to 32767 (2^15 - 1) | SMALLINT | SMALLINT | |
INT | -2147483648 (-2^31) to 2147483647 (2^31 - 1) | INTEGER | INTEGER | |
BIGINT | -9223372036854775808 (-2^63) to 9223372036854775807 (2^63 - 1) | BIGINT | BIGINT | |
Decimal | NUMERIC[ (p[ ,s] )] | -10^38 + 1 to 10^38 - 1 (1 <= p <= 38) | DECIMAL | DECIMAL |
DECIMAL[ (p[ ,s] )] | -10^38 + 1 to 10^38 - 1 (1 <= p <= 38) | DECIMAL | DECIMAL | |
FLOAT | -1.79E + 308 to -2.23E - 308, 0, and 2.23E - 308 to 1.79E + 308 | DOUBLE | DOUBLE PRECISION | |
REAL | -3.40E + 38 to -1.18E - 38, 0, and 1.18E - 38 to 3.40E + 38 | FLOAT | REAL | |
Monetary | MONEY | -922337203685477.5808 to 922337203685477.5807 | DECIMAL(19, 4) | DECIMAL(19, 4) |
SMALLMONEY | -214748.3648 to 214748.3647 | DECIMAL(10, 4) | DECIMAL(10, 4) | |
Date and time | DATE | 0001-01-01 to 9999-12-31 | DATE | DATE |
DATETIME | Date range: January 1, 1753 to December 31, 9999 Time range: 00:00:00 to 23:59:59.997 | DATETIME | TIMESTAMP(3) WITHOUT TIME ZONE | |
DATETIME2[ (fractional seconds precision) ] | Date range: January 1, 1 CE to December 31, 9999 CE Time range: 00:00:00 to 23:59:59.9999999 | DATETIME | TIMESTAMP(7) WITHOUT TIME ZONE | |
DATETIMEOFFSET [ (fractional seconds precision) ] | Date range: January 1, 1 CE to December 31, 9999 CE Time range: 00:00:00 to 23:59:59.9999999 Time zone offset range: -14:00 to +14:00 | TIMESTAMP | TIMESTAMP(7) WITH TIME ZONE | |
SMALLDATETIME | The time is based on a 24-hour day, with seconds always zero (:00) and without fractional seconds. | DATETIME | TIMESTAMP WITHOUT TIME ZONE | |
TIME [ (fractional second scale) ] | 00:00:00.0000000 to 23:59:59.9999999 | TIME | TIME(7) WITH TIME ZONE | |
String | BINARY [ ( n ) ] | Valid values of n: 1 to 8000. | VARBINARY | BYTEA |
VARBINARY [ ( n | max) ] | Valid values of n: 1 to 8000. max indicates that the maximum storage size is 2^31 - 1 bytes. | VARBINARY | BYTEA | |
CHAR [ ( n ) ] | Valid values of n: 1 to 8000. The storage size is n bytes. | VARCHAR | CHARACTER | |
VARCHAR [ ( n | max ) ] | Valid values of n: 1 to 8000. max indicates that the maximum storage size is 2^31 - 1 bytes (2 GB). | VARCHAR | CHARACTER | |
NCHAR [ ( n ) ] | n defines the string size in byte-pairs. Valid values of n: 1 to 4000. The storage size is two times n bytes. | VARCHAR | CHARACTER VARYING | |
NVARCHAR [ ( n | max ) ] | n defines the string size in byte-pairs. Valid values of n: 1 to 4000. max indicates that the maximum storage size is 2^30 - 1 characters (2 GB). | VARCHAR | CHARACTER VARYING | |
NTEXT | Variable-length Unicode data with a maximum string length of 1,073,741,823 (2^30 - 1) bytes. | VARCHAR | TEXT | |
TEXT | The maximum string length is 2,147,483,647 (2^31 - 1) bytes. | VARCHAR | TEXT | |
IMAGE | Variable-length binary data from 0 to 2,147,483,647 (2^31 - 1) bytes. | VARBINARY | BYTEA | |
Spatial (geography and geometry) | GEOGRAPHY | None | VARCHAR | Not supported |
GEOMETRY | None | VARCHAR | Not supported | |
XML | XML ( [ CONTENT | DOCUMENT ] xml_schema_collection ) | None | VARCHAR | XML |
Other types | UNIQUEIDENTIFIER | None | VARCHAR | CHARACTER(36) |
SQL_VARIANT | None | Not supported | Not supported | |
HIERARCHYID | None | Not supported | Not supported | |
SYSNAME | None | VARCHAR | CHARACTER VARYING(128) |
Migrate data from a self-managed TiDB database
The following table lists the data type mappings between a self-managed TiDB database and a MySQL database.
Data type of TiDB | Data type of MySQL |
BIGINT | BIGINT |
BIGINT UNSIGNED | DECIMAL(20,0) |
BINARY | BINARY |
BIT | BIT |
BOOL/BOOLEAN | TINYINT |
CHAR | CHAR |
DATE | DATE |
DATETIME | DATETIME |
DECIMAL | DECIMAL |
DOUBLE | DOUBLE |
ENUM | ENUM |
FLOAT | FLOAT |
INT | INT |
INT UNSIGNED | BIGINT |
INTEGER | INTEGER |
JSON | JSON |
TINYBLOB/BLOB/MEDIUMBLOB/LONGBLOB | TINYBLOB/BLOB/MEDIUMBLOB/LONGBLOB |
MEDIUMINT | MEDIUMINT |
SET | SET |
SMALLINT | SMALLINT |
SMALLINT UNSIGNED | INT |
TEXT/LONGTEXT | TEXT/LONGTEXT |
TIME | TIME |
TIMESTAMP | TIMESTAMP |
TINYINT | TINYINT |
TINYINT UNSIGNED | SMALLINT |
VARBINARY | VARBINARY |
VARCHAR | VARCHAR |
YEAR | YEAR |
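The BIGINT UNSIGNED to DECIMAL(20,0) mapping in the table follows from the width of the type: the largest unsigned 64-bit integer does not fit in a signed BIGINT, but it has at most 20 decimal digits. A quick check:

```python
# Why BIGINT UNSIGNED maps to DECIMAL(20,0) rather than signed BIGINT:
max_uint64 = 2**64 - 1   # largest BIGINT UNSIGNED value
max_int64 = 2**63 - 1    # largest signed BIGINT value

print(max_uint64)              # 18446744073709551615
print(len(str(max_uint64)))    # 20 decimal digits
print(max_uint64 > max_int64)  # True: overflows signed BIGINT
```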
Migrate data from a Db2 for LUW database
The following table lists the data type mappings between a Db2 for LUW database and a MySQL database.
If the value range of the data to be migrated from the source instance exceeds the range supported by DTS, the accuracy of the data migrated to the destination instance decreases.
Type | Data type of Db2 for LUW | Value range | Data type of MySQL |
Integer | SMALLINT | -32,768 to +32,767 | SMALLINT |
INTEGER | -2,147,483,648 to +2,147,483,647 | INT | |
BIGINT | -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | BIGINT | |
Decimal | DECIMAL(precision-integer, scale-integer) | p<=38 | DECIMAL |
FLOAT(integer) | The value range is 1 to 53. If the integer is in the range of 1 to 24, the format is single precision floating-point. If the integer is in the range of 25 to 53, the format is double precision floating-point. | FLOAT | |
DECFLOAT(precision-integer) | None | DECIMAL(65,10) | |
Date and time | DATE | 0001-01-01 to 9999-12-31 | DATE |
TIME | 00:00:00 to 24:00:00 | TIME | |
TIMESTAMP(integer) | 0001-01-01-00.00.00.000000000000 to 9999-12-31-24.00.00.000000000000 (0 <= p <= 12) | DATETIME | |
String | CHARACTER(integer) | 254 | CHAR | VARCHAR |
VARCHAR(integer) | 32,672 | VARCHAR | |
CHARACTER(integer) FOR BIT DATA | 254 | BLOB | |
CLOB | 2,147,483,647 | LONGTEXT | |
GRAPHIC (integer) | 127 | CHAR(length*4) | |
VARGRAPHIC (integer) | 16,336 | CHAR(length*4) | |
DBCLOB (integer) | 1,073,741,823 | VARCHAR | LONGTEXT | |
BLOB | 2,147,483,647 | LONGBLOB | |
Other types | XML | 2,147,483,647 | VARCHAR | LONGTEXT |
Migrate data from a Db2 for i database
The following table lists the data type mappings between a Db2 for i database and a MySQL database.
If the value range of the data to be migrated from the source instance exceeds the range supported by DTS, the accuracy of the data migrated to the destination instance decreases.
Type | Data type of Db2 for i | Value range | Data type of MySQL |
Integer | SMALLINT | -32,768 to +32,767 | SMALLINT |
INTEGER | -2,147,483,648 to +2,147,483,647 | INT | |
BIGINT | -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 | BIGINT | |
Decimal | DECIMAL(precision-integer, scale-integer) | p<=63 | DECIMAL |
NUMERIC | None | DECIMAL | |
FLOAT(integer) | None | FLOAT | |
DECFLOAT(precision-integer) | None | DECIMAL(65,10) | |
Date and time | DATE | 0001-01-01 to 9999-12-31 | DATE |
TIME | 00:00:00 to 24:00:00 | TIME | |
TIMESTAMP(integer) | 0001-01-01-00.00.00.000000000000 to 9999-12-31-24.00.00.000000000000 (0 <= p <= 12) | DATETIME | |
String | CHAR(integer) | 32,765 | CHAR | VARCHAR |
VARCHAR(integer) | 32,739 | VARCHAR | |
CHAR(integer) FOR BIT DATA | None | BLOB | |
CLOB | 2,147,483,647 | LONGTEXT | |
GRAPHIC (integer) | 16,382 | CHAR | |
VARGRAPHIC (integer) | 16,369 | VARCHAR | |
DBCLOB (integer) | 1,073,741,823 | LONGTEXT | |
BINARY | 32,765 | BINARY | |
VARBIN | 32,739 | VARBINARY | |
BLOB | 2,147,483,647 | LONGBLOB | |
Other types | DATALINK | None | VARCHAR | LONGTEXT |
ROWID | 40 | VARCHAR | LONGTEXT | |
XML | 2,147,483,647 | VARCHAR | LONGTEXT |
Migrate data from a Teradata database
The following table lists the data type mappings between a Teradata database and an AnalyticDB for PostgreSQL instance.
Data type of Teradata | Data type of AnalyticDB for PostgreSQL |
BYTEINT | SMALLINT |
SMALLINT | SMALLINT |
BIGINT | BIGINT |
INTEGER | INTEGER |
DATE | DATE |
JSON | JSON |
XML | XML |
CLOB | TEXT |
FLOAT | REAL |
CHAR | CHAR |
VARCHAR | VARCHAR |
TIMESTAMP | TIMESTAMP |
TIME | TIME |
TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
TIME WITH TIME ZONE | TIME WITH TIME ZONE |
DECIMAL | DECIMAL |
NUMBER | NUMERIC |
BYTE | BYTEA |
VARBYTE | BYTEA |
BLOB | BYTEA |
PERIOD | VARCHAR(100) |
INTERVAL | VARCHAR(100) |