This topic describes the parameters that are supported by ClickHouse Reader and how to configure ClickHouse Reader by using the codeless user interface (UI) and code editor.
ClickHouse Reader reads data from tables stored in ClickHouse databases. ClickHouse Reader connects to a remote ClickHouse database by using Java Database Connectivity (JDBC) and executes SQL statements to read data from the database.
Limits
- Only ApsaraDB for ClickHouse data sources of V20.8 or V21.8 are supported.
- ClickHouse Reader supports only exclusive resource groups for Data Integration, but not the shared resource group for Data Integration.
- ClickHouse Reader connects to a ClickHouse database by using JDBC and can read data from the database only by using JDBC Statement.
- ClickHouse Reader can read the specified columns in an order that differs from the order in which the columns are defined in the schema of the source table.
- You must make sure that the driver version is compatible with your ClickHouse database.
ClickHouse Reader supports only the following version of the ClickHouse database driver:
<dependency>
    <groupId>ru.yandex.clickhouse</groupId>
    <artifactId>clickhouse-jdbc</artifactId>
    <version>0.2.4.ali2-SNAPSHOT</version>
</dependency>
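For example, assuming a hypothetical source table whose schema defines the columns (id, name, age) in that order, the column parameter can list the columns in a different order. The data source and table names in this minimal sketch are placeholders:

```json
{
    "datasource": "example",
    "table": "user_profile",
    "column": ["age", "id", "name"]
}
```

ClickHouse Reader generates the SELECT statement based on the order of the names in the column parameter, not the order in the table schema.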
Background information
ClickHouse Reader is designed for extract, transform, load (ETL) developers to read data from ClickHouse databases. ClickHouse Reader connects to a remote ClickHouse database by using JDBC, generates an SQL statement based on your configurations to read data from the database, and matches the protocol of the writer you use. Then, the writer writes data to the related engine by using the write API that is provided by the engine.
Supported data types
Data type | ClickHouse Reader | ClickHouse Writer |
---|---|---|
Int8 | Supported | Supported |
Int16 | Supported | Supported |
Int32 | Supported | Supported |
Int64 | Supported | Supported |
UInt8 | Supported | Supported |
UInt16 | Supported | Supported |
UInt32 | Supported | Supported |
UInt64 | Supported | Supported |
Float32 | Supported | Supported |
Float64 | Supported | Supported |
Decimal | Supported | Supported |
String | Supported | Supported |
FixedString | Supported | Supported |
Date | Supported | Supported |
DateTime | Supported | Supported |
DateTime64 | Supported | Supported |
Boolean | Supported. Note: ClickHouse does not have a separate Boolean type. You can use UInt8 or Int8 instead. | Supported |
Array | Partially supported. This data type is supported only if the elements in an array are of one of the following types: integer, floating point, string, or DateTime64 (accurate to milliseconds). | Supported |
Tuple | Supported | Supported |
Domain (IPv4, IPv6) | Supported | Supported |
Enum8 | Supported | Supported |
Enum16 | Supported | Supported |
Nullable | Supported | Supported |
Nested | Partially supported. This data type is supported only if the nested fields are of one of the following types: integer, floating point, string, or DateTime64 (accurate to milliseconds). | Supported |
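To illustrate the partial support for the Array type, the following sketch assumes a hypothetical source table that contains an Array(String) column named tags and a DateTime64(3) column named updated_at. Both columns satisfy the conditions described above and can therefore be listed in the column parameter; the data source and table names are placeholders:

```json
{
    "datasource": "example",
    "table": "events",
    "column": ["id", "tags", "updated_at"]
}
```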
Parameters
Parameter | Description | Required | Default value |
---|---|---|---|
datasource | The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. | Yes | No default value |
table | The name of the table from which you want to read data. Specify the name in the JSON format. Note: The table parameter must be included in the connection parameter. | Yes | No default value |
fetchSize | The number of data records to read at a time. This parameter determines the number of interactions between Data Integration and the source database and affects read efficiency. Note: If you set this parameter to an excessively large value, an out of memory (OOM) error may occur during data synchronization. You can increase the value of this parameter based on the workloads on ClickHouse. | No | 1,024 |
column | The names of the columns from which you want to read data. Separate the names with commas (,). Example: "column": ["id", "name", "age"]. Note: You must specify the column parameter. | Yes | No default value |
jdbcUrl | The JDBC URL of the source database. Note: The jdbcUrl parameter must be included in the connection parameter. | Yes | No default value |
username | The username that you can use to connect to the database. | Yes | No default value |
password | The password that you can use to connect to the database. | Yes | No default value |
splitPk | The field that is used for data sharding when ClickHouse Reader reads data. If you specify this parameter, the source table is sharded based on the value of this parameter, and Data Integration runs parallel threads to read data. This way, data can be synchronized more efficiently. Note: If you specify the splitPk parameter, you must also specify the fetchSize parameter. | No | No default value |
where | The WHERE clause. For example, you can set this parameter to gmt_create > $bizdate to read the data that is generated on the current day. You can use the WHERE clause to read incremental data. If the where parameter is not specified or is left empty, ClickHouse Reader reads all data. | No | No default value |
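Putting these parameters together, a minimal reader parameter block for an incremental read might look like the following sketch. The data source name, table name, column names, and shard key are placeholders, and the where clause uses the $bizdate scheduling parameter described above:

```json
{
    "datasource": "example",
    "table": "orders",
    "column": ["id", "buyer_name", "gmt_create"],
    "where": "gmt_create > $bizdate",
    "splitPk": "id",
    "fetchSize": 1024
}
```

Because splitPk is specified, fetchSize is also set, as required by the note in the preceding table.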
Configure ClickHouse Reader by using the codeless UI
- Configure data sources.
Configure the source and destination for the synchronization node.
  Parameter | Description |
  ---|---|
  Connection | The name of the data source from which you want to read data. This parameter is equivalent to the datasource parameter that is specified in the preceding section. |
  Table | The name of the table from which you want to read data. This parameter is equivalent to the table parameter that is specified in the preceding section. |
  Filter | The condition that is used to filter the data you want to read. This parameter is equivalent to the where parameter that is specified in the preceding section. Filtering based on the LIMIT keyword is not supported. The SQL syntax is determined based on the selected data source. |
  Shard Key | The shard key. This parameter is equivalent to the splitPk parameter that is specified in the preceding section. You can use a column in the source table as the shard key. We recommend that you use the primary key column or an indexed column. Only integer columns are supported. If you configure this parameter, data sharding is performed based on the value of this parameter, and parallel threads can be used to read data. This improves data synchronization efficiency. Note: The Shard Key parameter is displayed only after you select the data source for the synchronization node. |
- Configure field mappings. This operation is equivalent to setting the column parameter that is described in the preceding section.
Fields in the source on the left have a one-to-one mapping with fields in the destination on the right. You can click Add to add a field. To remove an added field, move the pointer over the field and click the Remove icon.
  Operation | Description |
  ---|---|
  Map Fields with the Same Name | Click Map Fields with the Same Name to establish mappings between fields with the same name. The data types of the fields must match. |
  Map Fields in the Same Line | Click Map Fields in the Same Line to establish mappings between fields in the same row. The data types of the fields must match. |
  Delete All Mappings | Click Delete All Mappings to remove the mappings that are established. |
  Auto Layout | Click Auto Layout. Then, the system automatically sorts the fields based on specific rules. |
  Change Fields | Click the Change Fields icon. In the Change Fields dialog box, you can manually edit the fields in the source and destination tables. Each field occupies a row. The first and the last blank rows are included, whereas other blank rows are ignored. |
  Add | Click Add to add a field. Take note of the following rules when you add a field: You can enter constants. Each constant must be enclosed in single quotation marks ('), such as 'abc' and '123'. You can use scheduling parameters, such as ${bizdate}. You can enter functions that are supported by relational databases, such as now() and count(1). If the field that you entered cannot be parsed, the value of Type for the field is Unidentified. |
- Configure channel control policies.
  Parameter | Description |
  ---|---|
  Expected Maximum Concurrency | The maximum number of parallel threads that the synchronization node uses to read data from the source or write data to the destination. You can configure the parallelism for the synchronization node on the codeless UI. |
  Bandwidth Throttling | Specifies whether to enable throttling. You can enable throttling and specify a maximum transmission rate to prevent heavy read workloads on the source. We recommend that you enable throttling and set the maximum transmission rate to an appropriate value based on the configurations of the source. |
  Dirty Data Records Allowed | The maximum number of dirty data records allowed. |
  Distributed Execution | The distributed execution mode that allows you to split your node into pieces and distribute them to multiple Elastic Compute Service (ECS) instances for parallel execution. This speeds up synchronization. If you use a large number of parallel threads to run your synchronization node in distributed execution mode, excessive access requests are sent to the data sources. Therefore, before you use the distributed execution mode, you must evaluate the access load on the data sources. You can enable this mode only if you use an exclusive resource group for Data Integration. For more information about exclusive resource groups for Data Integration, see Exclusive resource groups for Data Integration and Create and use an exclusive resource group for Data Integration. |
Configure ClickHouse Reader by using the code editor
{
"type": "job",
"version": "2.0",
"steps": [
{
"stepType": "clickhouse", // The reader type.
"parameter": {
"fetchSize":1024,// The number of data records to read at a time.
"datasource": "example",
"column": [ // The names of the columns from which you want to read data.
"id",
"name"
],
"where": "", // The WHERE clause.
"splitPk": "", // The shard key.
"table": "" // The name of the table from which you want to read data.
},
"name": "Reader",
"category": "reader"
},
{
"stepType": "clickhouse",
"parameter": {
"postSql": [
"update @table set db_modify_time = now() where db_id = 1"
],
"datasource": "example", // The name of the data source.
"batchByteSize": "67108864",
"column": [
"id",
"name"
],
"writeMode": "insert",
"encoding": "UTF-8",
"batchSize": 1024,
"table": "ClickHouse_table",
"preSql": [
"delete from @table where db_id = -1"
]
},
"name": "Writer",
"category": "writer"
}
],
"setting": {
"executeMode": null,
"errorLimit": {
"record": "0" // The maximum number of dirty data records allowed.
},
        "speed": {
            "throttle": true, // Specifies whether to enable throttling. The value false indicates that throttling is disabled, and the value true indicates that throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true.
            "concurrent": 1, // The maximum number of parallel threads.
            "mbps": "12" // The maximum transmission rate.
        }
},
"order": {
"hops": [
{
"from": "Reader",
"to": "Writer"
}
]
}
}