| | object | The response body. | |
| RequestId | string | The request ID. You can use the ID to query logs and troubleshoot issues. | C99E2BE6-9DEA-5C2E-8F51-1DDCFEADE490 |
| PagingInfo | object | The pagination information. | |
| Id | long | The ID of the synchronization task. | 32601 |
| Description | string | The description of the synchronization task. | description |
| DestinationDataSourceSettings | array<object> | The properties of the destination. | |
| DestinationDataSourceSettings | object | The properties of the destination. | |
| DataSourceName | string | The name of the data source. | dw_mysql |
| DestinationDataSourceType | string | The destination type. Valid values: Hologres, OSS-HDFS, OSS, MaxCompute, LogHub, StarRocks, DataHub, AnalyticDB_For_MySQL, Kafka, Hive. | Hologres |
| JobName | string | The name of the synchronization task. | imp_ods_dms_det_dealer_info_df |
| JobSettings | object | The settings of the synchronization task, including channel control settings, data type mappings, periodic scheduling settings, DDL handling settings, and runtime settings. | |
| ChannelSettings | string | The channel control settings for the synchronization task. You can configure special channel control settings for the following synchronization links: data synchronization between Hologres data sources and data synchronization from Hologres to Kafka.
- Holo2Kafka
- Example: {"destinationChannelSettings":{"kafkaClientProperties":[{"key":"linger.ms","value":"100"}],"keyColumns":["col3"],"writeMode":"canal"}}
- kafkaClientProperties: the parameters related to a Kafka producer, which are used when you write data to a Kafka data source.
- keyColumns: the names of Kafka columns to which data is written.
- writeMode: the writing format. Valid values: json and canal.
- Holo2Holo
- Example: {"destinationChannelSettings":{"conflictMode":"replace","dynamicColumnAction":"replay","writeMode":"replay"}}
- conflictMode: the policy used to handle a conflict that occurs during data writing to Hologres. Valid values: replace and ignore.
- writeMode: the mode in which data is written to Hologres. Valid values: replay and insert.
- dynamicColumnAction: the mode in which data is written to dynamic columns in a Hologres table. Valid values: replay, insert, and ignore.
| {"structInfo":"MANAGED","storageType":"TEXTFILE","writeMode":"APPEND","partitionColumns":[{"columnName":"pt","columnType":"STRING","comment":""}],"fieldDelimiter":""} |
| ColumnDataTypeSettings | array<object> | The data type mappings between source fields and destination fields. | |
| ColumnDataTypeSettings | object | The data type mapping between a source field and a destination field. | |
| DestinationDataType | string | The data type of the destination field. Valid values: bigint, boolean, string, text, datetime, timestamp, decimal, and binary. Different types of data sources support different data types. | text |
| SourceDataType | string | The data type of the source field. Valid values: bigint, boolean, string, text, datetime, timestamp, decimal, and binary. Different types of data sources support different data types. | bigint |
| CycleScheduleSettings | object | The settings for periodic scheduling. | |
| CycleMigrationType | string | The synchronization type that requires periodic scheduling. Valid values:
- Full: full synchronization
- OfflineIncremental: batch incremental synchronization
| Full |
| ScheduleParameters | string | The scheduling parameters. | bizdate=$bizdate |
| DdlHandlingSettings | array<object> | The settings used to process DDL messages. The following DDL operation types are supported:
- RenameColumn
- ModifyColumn
- CreateTable
- TruncateTable
- DropTable
- DropColumn
- AddColumn
| |
| DdlHandlingSettings | object | The setting used to process a specific type of DDL message. The following DDL operation types are supported:
- RenameColumn
- ModifyColumn
- CreateTable
- TruncateTable
- DropTable
- DropColumn
- AddColumn
| |
| Action | string | The processing policy for a specific type of DDL message. Valid values:
- Ignore: ignores a DDL message.
- Critical: reports an error for a DDL message.
- Normal: normally processes a DDL message.
| Ignore |
| Type | string | The DDL operation type. Valid values:
- RenameColumn
- ModifyColumn
- CreateTable
- TruncateTable
- DropTable
| CreateTable |
| RuntimeSettings | array<object> | The runtime settings of the synchronization task. | |
| RuntimeSettings | object | A runtime setting, expressed as a name-value pair. | |
| Name | string | The name of the configuration item. Valid values:
- src.offline.datasource.max.connection: indicates the maximum number of connections that are allowed for reading data from the source of a batch synchronization task.
- dst.offline.truncate: indicates whether to clear the destination table before data writing.
- runtime.offline.speed.limit.enable: indicates whether throttling is enabled for a batch synchronization task.
- runtime.offline.concurrent: indicates the maximum number of parallel threads that are allowed for a batch synchronization task.
- runtime.enable.auto.create.schema: indicates whether schemas are automatically created in the destination of a synchronization task.
- runtime.realtime.concurrent: indicates the maximum number of parallel threads that are allowed for a real-time synchronization task.
- runtime.realtime.failover.minute.dataxcdc: indicates the maximum wait time before a synchronization task attempts the next restart if the previous restart fails after a failover occurs. Unit: minutes.
- runtime.realtime.failover.times.dataxcdc: indicates the maximum number of failures that are allowed for restarting a synchronization task after failovers occur.
| runtime.offline.concurrent |
| Value | string | The value of the configuration item. | 1 |
| MigrationType | string | The synchronization type. Valid values:
- FullAndRealtimeIncremental: full synchronization and real-time incremental synchronization of data in an entire database
- RealtimeIncremental: real-time incremental synchronization of data in a single table
- Full: full batch synchronization of data in an entire database
- OfflineIncremental: batch incremental synchronization of data in an entire database
- FullAndOfflineIncremental: full synchronization and batch incremental synchronization of data in an entire database
| FullAndRealtimeIncremental |
| JobType | string | The task type. Valid values:
- DatabaseRealtimeMigration (real-time synchronization of an entire database): performs stream synchronization of multiple tables in multiple source databases. Supports full synchronization only, incremental synchronization only, or full plus incremental synchronization.
- DatabaseOfflineMigration (batch synchronization of an entire database): performs batch synchronization of multiple tables in multiple source databases. Supports full synchronization only, incremental synchronization only, or full plus incremental synchronization.
- SingleTableRealtimeMigration (real-time synchronization of a single table): performs stream synchronization of a single source table.
| DatabaseRealtimeMigration |
| ProjectId | long | The DataWorks workspace ID. You can log on to the DataWorks console and go to the Workspace page to query the ID.
This parameter indicates the DataWorks workspace to which the API operation is applied. | 98330 |
| ResourceSettings | object | The resource settings used by the synchronization task. | |
| OfflineResourceSettings | object | The resource used for batch synchronization. | |
| RequestedCu | double | The number of compute units (CUs) in the resource group for scheduling that are used for batch synchronization. | 2.0 |
| ResourceGroupIdentifier | string | The identifier of the resource group for Data Integration used for batch synchronization. | S_res_group_7708_1667792816832 |
| RealtimeResourceSettings | object | The resource used for real-time synchronization. | |
| RequestedCu | double | The number of CUs in the resource group for Data Integration that are used for real-time synchronization. | 2.0 |
| ResourceGroupIdentifier | string | The identifier of the resource group for Data Integration used for real-time synchronization. | S_res_group_235454102432001_1579085295030 |
| ScheduleResourceSettings | object | The resource used for scheduling. | |
| RequestedCu | double | The number of CUs in the resource group for Data Integration that are used for scheduling. | 2.0 |
| ResourceGroupIdentifier | string | The identifier of the resource group for scheduling used by the synchronization task. | S_res_group_235454102432001_1718359176885 |
| SourceDataSourceSettings | array<object> | The settings of the source. Only a single source is supported. | |
| SourceDataSourceSettings | object | The settings of the source. Only a single source is supported. | |
| DataSourceName | string | The name of the data source. | dw_mysql |
| DataSourceProperties | object | The properties of the data source. | |
| Encoding | string | The encoding format of the database. | UTF-8 |
| Timezone | string | The time zone of the database. | GMT+8 |
| SourceDataSourceType | string | The source type. Valid values: PolarDB, MySQL, Kafka, LogHub, Hologres, Oracle, OceanBase, MongoDB, RedShift, Hive, SQLServer, Doris, ClickHouse. | Mysql |
| TableMappings | array<object> | The list of mappings between rules used to select synchronization objects in the source and transformation rules applied to the selected objects. Each entry defines one such mapping; a sketch that shows how these mappings reference the TransformationRules definitions appears after this table.
Note
[ { "SourceObjectSelectionRules":[ { "ObjectType":"Database", "Action":"Include", "ExpressionType":"Exact", "Expression":"biz_db" }, { "ObjectType":"Schema", "Action":"Include", "ExpressionType":"Exact", "Expression":"s1" }, { "ObjectType":"Table", "Action":"Include", "ExpressionType":"Exact", "Expression":"table1" } ], "TransformationRuleNames":[ { "RuleName":"my_database_rename_rule", "RuleActionType":"Rename", "RuleTargetType":"Schema" } ] } ]
| |
| TableMappings | object | Each rule defines a table that needs to be synchronized. | |
| SourceObjectSelectionRules | array<object> | The list of rules used to select synchronization objects in the source. | |
| SourceObjectSelectionRules | object | The rule used to select synchronization objects in the source. The objects can be databases or tables. | |
| Action | string | The operation that is performed to select objects. Valid values: Include and Exclude. | Include |
| Expression | string | The expression used to match source objects. The value is an exact name or a regular expression, depending on the value of the ExpressionType parameter. | mysql_table_1 |
| ExpressionType | string | The expression type. Valid values: Exact and Regex. | Exact |
| ObjectType | string | The object type. Valid values:
| Table |
| TransformationRules | array<object> | The list of transformation rules that are applied to the synchronization objects selected from the source. Each entry in the list defines a transformation rule. | |
| TransformationRuleNames | object | The transformation rule that is applied to the synchronization objects selected from the source. | |
| RuleName | string | The name of the rule. If the values of the RuleActionType parameter and the RuleTargetType parameter are the same for multiple transformation rules, you must make sure that the transformation rule names are unique. | rename_rule_1 |
| RuleActionType | string | The action type. Valid values:
- DefinePrimaryKey
- Rename
- AddColumn
- HandleDml
| AddColumn |
| RuleTargetType | string | The type of the object on which the action is performed. Valid values:
| Table |
| TransformationRules | array<object> | The list of transformation rules that are applied to the synchronization objects selected from the source.
Note
[ { "RuleName":"my_database_rename_rule", "RuleActionType":"Rename", "RuleTargetType":"Schema", "RuleExpression":"{"expression":"${srcDatasoureName}_${srcDatabaseName}"}" } ]
| |
| TransformationRules | object | The transformation rule that is applied to the synchronization objects selected from the source. | |
| RuleActionType | string | The action type. Valid values:
- DefinePrimaryKey
- Rename
- AddColumn
- HandleDml
- DefineIncrementalCondition
- DefineCycleScheduleSettings
- DefinePartitionKey
| Rename |
| RuleExpression | string | The expression of the rule. The expression is a JSON string.
- Example of a renaming rule
- Example: {"expression":"${srcDatasourceName}_${srcDatabaseName}_0922" }
- expression: the expression of the renaming rule. You can use the following variables in an expression: ${srcDatasourceName}, ${srcDatabaseName}, and ${srcTableName}. ${srcDatasourceName} indicates the name of the source. ${srcDatabaseName} indicates the name of a source database. ${srcTableName} indicates the name of a source table.
- Example of a column addition rule
- Example: {"columns":[{"columnName":"my_add_column","columnValueType":"Constant","columnValue":"123"}]}
- If no rule of this type is configured, no fields are added to the destination and no values are assigned by default.
- columnName: the name of the field that is added.
- columnValueType: the value type of the field. Valid values: Constant and Variable.
- columnValue: the value of the field. If the columnValueType parameter is set to Constant, the value of the columnValue parameter is a constant of the STRING data type. If the columnValueType parameter is set to Variable, the value of the columnValue parameter is one of the following built-in variables:
  - EXECUTE_TIME (LONG data type): the execution time.
  - DB_NAME_SRC (STRING data type): the name of a source database.
  - DATASOURCE_NAME_SRC (STRING data type): the name of the source.
  - TABLE_NAME_SRC (STRING data type): the name of a source table.
  - DB_NAME_DEST (STRING data type): the name of a destination database.
  - DATASOURCE_NAME_DEST (STRING data type): the name of the destination.
  - TABLE_NAME_DEST (STRING data type): the name of a destination table.
  - DB_NAME_SRC_TRANSED (STRING data type): the database name obtained after a transformation.
- Example of a rule used to specify primary key fields for a destination table
- Example: {"columns":["ukcolumn1","ukcolumn2"]}
- If no rule of this type is configured, the primary key fields in the mapped source table are used for the destination table by default.
- If the destination table is an existing table, Data Integration does not modify the schema of the destination table. If the specified primary key fields do not exist in the destination table, an error is reported when the synchronization task starts to run.
- If the destination table is automatically created by the system, Data Integration automatically creates the schema of the destination table. The schema contains the primary key fields that you specify. If the specified primary key fields do not exist in the destination table, an error is reported when the synchronization task starts to run.
- Example of a rule used to process DML messages
- Example: {"dmlPolicies":[{"dmlType":"Delete","dmlAction":"Filter","filterCondition":"id > 1"}]}
- If no rule of this type is configured, the default processing policy for messages generated for insert, update, and delete operations is Normal.
- dmlType: the DML operation. Valid values: Insert, Update, and Delete.
- dmlAction: the processing policy for DML messages. Valid values: Normal, Ignore, Filter, and LogicalDelete. Filter indicates conditional processing. The value Filter is returned for the dmlAction parameter only when the value of the dmlType parameter is Update or Delete.
- filterCondition: the condition used to filter DML messages. This parameter is returned only when the value of the dmlAction parameter is Filter.
- Example of a rule used to perform incremental synchronization
- Example: {"where":"id > 0"}
- The rule used to perform incremental synchronization is returned.
- Example of a rule used to configure scheduling parameters for an auto triggered task
- Example: {"cronExpress":" * * * * * *", "cycleType":"1"}
- The rule used to configure scheduling parameters for an auto triggered task is returned.
- Example of a rule used to specify a partition key
- Example: {"columns":["id"]}
- The rule used to specify a partition key is returned.
| {"expression":"${srcDatasoureName}_${srcDatabaseName}"} |
| RuleName | string | The name of the rule. If the values of the RuleActionType parameter and the RuleTargetType parameter are the same for multiple transformation rules, you must make sure that the transformation rule names are unique. | rename_rule_1 |
| RuleTargetType | string | The type of the object on which the action is performed. Valid values:
| Table |
| JobStatus | string | The status of the synchronization task. | Running |
| DIJobId (deprecated) | string | This parameter is deprecated. Use the Id parameter instead. | 32601 |
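
The rows above describe the response field by field. As a minimal, hedged illustration of how a caller might consume the result, the following Python sketch assembles a job object from the example values in the table (the dict literal and variable names are illustrative only, not a verbatim API response) and converts the RuntimeSettings name-value pairs into a lookup table:

```python
# Illustrative job object assembled from the example values in the table above.
# Field names follow the rows; the exact nesting in a real response may differ.
job = {
    "Id": 32601,
    "JobName": "imp_ods_dms_det_dealer_info_df",
    "MigrationType": "FullAndRealtimeIncremental",
    "JobType": "DatabaseRealtimeMigration",
    "JobStatus": "Running",
    "JobSettings": {
        "RuntimeSettings": [
            {"Name": "runtime.offline.concurrent", "Value": "1"},
        ]
    },
}

print(job["Id"], job["JobName"], job["MigrationType"], job["JobStatus"])

# RuntimeSettings is a list of Name/Value pairs; a plain dict is easier to query.
runtime = {item["Name"]: item["Value"] for item in job["JobSettings"]["RuntimeSettings"]}
concurrency = int(runtime.get("runtime.offline.concurrent", "1"))
print("batch concurrency:", concurrency)
```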
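ChannelSettings is returned as a JSON string rather than a nested object. The sketch below decodes the two example strings given in the ChannelSettings row (Holo2Kafka and Holo2Holo) with the standard json module; the helper function name is made up for illustration:

```python
import json

def describe_channel_settings(channel_settings: str) -> None:
    """Decode the ChannelSettings JSON string and print the destination options."""
    settings = json.loads(channel_settings)
    destination = settings.get("destinationChannelSettings", {})
    for key, value in destination.items():
        print(f"{key} = {value}")

# Holo2Kafka example from the table above.
describe_channel_settings(
    '{"destinationChannelSettings":{"kafkaClientProperties":'
    '[{"key":"linger.ms","value":"100"}],"keyColumns":["col3"],"writeMode":"canal"}}'
)

# Holo2Holo example from the table above.
describe_channel_settings(
    '{"destinationChannelSettings":{"conflictMode":"replace",'
    '"dynamicColumnAction":"replay","writeMode":"replay"}}'
)
```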
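TableMappings entries reference transformation rules by name, action type, and target type, while the full rule definitions, including RuleExpression (itself a JSON string), appear in the top-level TransformationRules list. The following sketch is built from the example JSON in those two rows and assumes, as the RuleName description implies, that the (RuleName, RuleActionType, RuleTargetType) triple uniquely identifies a rule; it resolves a mapping's rule reference to its decoded expression:

```python
import json

# Example values taken from the TableMappings and TransformationRules rows above.
table_mappings = [
    {
        "SourceObjectSelectionRules": [
            {"ObjectType": "Database", "Action": "Include",
             "ExpressionType": "Exact", "Expression": "biz_db"},
            {"ObjectType": "Table", "Action": "Include",
             "ExpressionType": "Exact", "Expression": "table1"},
        ],
        "TransformationRuleNames": [
            {"RuleName": "my_database_rename_rule",
             "RuleActionType": "Rename", "RuleTargetType": "Schema"},
        ],
    }
]
transformation_rules = [
    {
        "RuleName": "my_database_rename_rule",
        "RuleActionType": "Rename",
        "RuleTargetType": "Schema",
        "RuleExpression": '{"expression":"${srcDatasourceName}_${srcDatabaseName}"}',
    }
]

# Index the full rule definitions so each mapping's rule reference can be
# resolved to its RuleExpression.
rule_index = {
    (r["RuleName"], r["RuleActionType"], r["RuleTargetType"]): r
    for r in transformation_rules
}

for mapping in table_mappings:
    for ref in mapping["TransformationRuleNames"]:
        rule = rule_index[(ref["RuleName"], ref["RuleActionType"], ref["RuleTargetType"])]
        expression = json.loads(rule["RuleExpression"])  # RuleExpression is a JSON string
        print(ref["RuleName"], "->", expression)
```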