This topic describes the data types and parameters that are supported by AnalyticDB for PostgreSQL Writer, such as the write mode, field mappings, and connections. It also provides an example of how to configure AnalyticDB for PostgreSQL Writer.
- AnalyticDB for PostgreSQL Writer generates SQL statements based on the table, column, and writeMode parameters that you set, and sends the generated SQL statements to the AnalyticDB for PostgreSQL database.
- If you set the writeMode parameter to copy, AnalyticDB for PostgreSQL Writer uses the COPY statement instead of INSERT statements to write data.
Data types
AnalyticDB for PostgreSQL Writer supports most AnalyticDB for PostgreSQL data types. Make sure that your data types are supported.
The following table describes the data types that are supported by AnalyticDB for PostgreSQL Writer.
Category | AnalyticDB for PostgreSQL data type |
---|---|
LONG | BIGINT, BIGSERIAL, INTEGER, SMALLINT, and SERIAL |
DOUBLE | DOUBLE PRECISION, MONEY, NUMERIC, and REAL |
STRING | VARCHAR, CHAR, TEXT, BIT, and INET |
DATE | DATE, TIME, and TIMESTAMP |
BOOLEAN | BOOL |
BYTES | BYTEA |
- AnalyticDB for PostgreSQL Writer supports only the data types that are described in the preceding table.
- You can convert the MONEY, INET, and BIT types by using syntax such as a_inet::varchar.
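For example, a conversion of this kind can be applied in the source query or in a preSql or postSql statement. A minimal sketch, assuming a hypothetical table named example_table with an INET column named a_inet:

```sql
-- Hypothetical table and column names.
-- Cast the INET column to VARCHAR so that it can be processed as a STRING value.
SELECT id, a_inet::varchar FROM example_table;
```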
Parameters
Parameter | Description | Required | Default value |
---|---|---|---|
datasource | The connection name. It must be the same as the name of the created connection. You can create connections in the code editor. | Yes | N/A |
table | The name of the table to be synchronized. | Yes | N/A |
writeMode | The write mode. Valid values: insert and copy. | No | insert |
column | The columns in the destination table to which data is written. Separate the columns with commas (,). Example: "column":["id","name","age"]. To write data to all the columns in the destination table, set this parameter to an asterisk (*). Example: "column":["*"]. | Yes | N/A |
preSql | The SQL statement to execute before the sync node is run. For example, you can clear outdated data. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. | No | N/A |
postSql | The SQL statement to execute after the sync node is run. For example, you can add a timestamp. You can execute only one SQL statement on the codeless UI and multiple SQL statements in the code editor. | No | N/A |
batchSize | The number of data records to write at a time. An appropriate value can greatly reduce the network interactions between Data Integration and the AnalyticDB for PostgreSQL database and increase throughput. However, an excessively large value may cause an out of memory (OOM) error during data synchronization. | No | 1,024 |
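Putting the parameters together, a writer configuration for the code editor might look like the following sketch. The connection name, table, columns, and preSql statement are hypothetical examples, not values from your environment:

```json
{
    "parameter": {
        "datasource": "adbpg_example", // Hypothetical connection name.
        "table": "public.person", // The destination table.
        "writeMode": "copy", // Write data by using the COPY statement.
        "column": ["id", "name", "age"], // The destination columns.
        "preSql": ["truncate table public.person"], // Clear outdated data before the sync node is run.
        "postSql": [],
        "batchSize": 2048 // The number of records to write at a time.
    },
    "name": "Writer",
    "category": "writer"
}
```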
Codeless UI mode
- Configure the connections.
Configure the connections to the source and destination data stores for the sync node.
Parameter | Description |
---|---|
Connection | The datasource parameter in the preceding parameter description. Select the name of a connection that you have configured. |
Table | The table parameter in the preceding parameter description. Select the table to be synchronized. |
Statement Run Before Writing | The preSql parameter in the preceding parameter description. Enter the SQL statement to execute before the sync node is run. |
Statement Run After Writing | The postSql parameter in the preceding parameter description. Enter the SQL statement to execute after the sync node is run. |
Write Method | The writeMode parameter in the preceding parameter description. Valid values: insert and copy. |
- Configure field mappings.
This is equivalent to setting the column parameter in the preceding parameter description. Fields in the source table on the left have a one-to-one mapping with fields in the destination table on the right.
Parameter | Description |
---|---|
Map Fields with the Same Name | Click Map Fields with the Same Name to establish mappings between fields with the same name. The data types of the fields must match. |
Map Fields in the Same Line | Click Map Fields in the Same Line to establish mappings between fields in the same row. The data types of the fields must match. |
Delete All Mappings | Click Delete All Mappings to remove the mappings that have been established. |
Auto Layout | Click Auto Layout to sort the fields based on specified rules. |
- Configure channel control policies.
Parameter | Description |
---|---|
Expected Maximum Concurrency | The maximum number of concurrent threads that the sync node uses to read data from the source data store and write data to the destination data store. You can configure the concurrency for the sync node on the codeless UI. |
Bandwidth Throttling | Specifies whether to enable bandwidth throttling. You can enable bandwidth throttling and set a maximum transmission rate to reduce the read workload on the source data store. We recommend that you enable bandwidth throttling and set the maximum transmission rate to an appropriate value based on the configurations of the source data store. |
Dirty Data Records Allowed | The maximum number of dirty data records that are allowed. |
Code editor mode
The following sample code shows how to configure a sync node to write data to AnalyticDB for PostgreSQL in the code editor:
{
"type": "job",
"steps": [
{
"parameter": {}, // The parameters of the reader. Configure them based on the reader that you use.
"name": "Reader",
"category": "reader"
},
{
"parameter": {
"postSql": [], // The statement that is to be executed after the sync node is run.
"datasource": "test_004", // The name of the connection.
"column": [ // The columns in the destination table.
"id",
"name",
"sex",
"salary",
"age"
],
"table": "public.person", // The name of the destination table.
"preSql": [] // The statement that is to be executed before the sync node is run.
},
"name": "Writer",
"category": "writer"
}
],
"version": "2.0", // The version number.
"order": {
"hops": [
{
"from": "Reader",
"to": "Writer"
}
]
},
"setting": {
"errorLimit": { // The maximum number of dirty data records that are allowed.
"record": ""
},
"speed": {
"concurrent": 6, // The number of concurrent threads.
"throttle": false // Specifies that bandwidth throttling is disabled.
}
}
}