This topic describes the data types and parameters that are supported by SAP HANA Reader and how to configure SAP HANA Reader by using the codeless user interface (UI) and code editor. Before you create a Data Integration node, you can refer to this topic to familiarize yourself with the data types and parameters that you must configure for SAP HANA Reader to read data from data sources.
Background information
SAP HANA Reader connects to a remote SAP HANA database by using Java Database Connectivity (JDBC), generates an SQL statement based on your configurations, and sends the statement to the database. The database executes the statement and returns the data. SAP HANA Reader then assembles the returned data into abstract datasets of the data types supported by Data Integration and sends the datasets to a writer.
Data types
Category | SAP HANA data type |
---|---|
Integer | INT, TINYINT, SMALLINT, MEDIUMINT, and BIGINT |
Floating point | FLOAT, DOUBLE, and DECIMAL |
String | VARCHAR, CHAR, TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT |
Date and time | DATE, DATETIME, TIMESTAMP, TIME, and YEAR |
Boolean | BIT and BOOL |
Binary | TINYBLOB, MEDIUMBLOB, BLOB, LONGBLOB, and VARBINARY |
- Data types that are not listed in the preceding table are not supported.
- SAP HANA Reader processes TINYINT(1) as an integer data type.
Parameters
Parameter | Description |
---|---|
username | The username that is used to log on to the SAP HANA database. |
password | The password that is used to log on to the SAP HANA database. |
column | The names of the columns from which you want to read data. To read data from all the columns in the source table, set this parameter to an asterisk (*). |
table | The name of the table from which you want to read data. |
jdbcUrl | The JDBC URL that is used to connect to SAP HANA. Example: jdbc:sap://127.0.0.1:30215?currentschema=TEST. |
splitPk | The field that is used for data sharding when SAP HANA Reader reads data. If you specify this parameter, the source table is sharded based on the value of this parameter, and Data Integration runs parallel threads to read data. You can specify a field of an integer data type for the splitPk parameter. If the source table does not contain fields of integer data types, you can leave this parameter empty. |
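For reference, the following snippet sketches how these parameters can be combined in the parameter block of a reader step. It follows the layout of the code-editor examples later in this topic; the table name, the credentials, and the placement of the jdbcUrl setting inside the connection block are illustrative assumptions rather than a definitive template.

```json
{
    "stepType":"saphana",
    "parameter":{
        "username":"hana_user", // Illustrative logon username.
        "password":"********", // Illustrative logon password.
        "column":[ // The columns to read. Use ["*"] to read all columns.
            "id",
            "name"
        ],
        "splitPk":"id", // An integer column that is used for data sharding.
        "connection":[
            {
                "table":[ // Illustrative table name.
                    "SALES_ORDERS"
                ],
                "jdbcUrl":["jdbc:sap://127.0.0.1:30215?currentschema=TEST"] // Assumed placement of the JDBC URL; the code-editor examples below use a datasource instead.
            }
        ]
    },
    "name":"Reader",
    "category":"reader"
}
```

Based on such a configuration, the reader issues SELECT statements similar to SELECT id,name FROM SALES_ORDERS, split into value ranges on the id column when splitPk is set.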
Configure SAP HANA Reader by using the codeless UI
- Configure data sources.
Configure Source and Target for the synchronization node.
Parameter | Description |
---|---|
Connection | The name of the data source from which you want to read data. This parameter is equivalent to the datasource parameter in the code editor. |
Table | This parameter is equivalent to the table parameter that is described in the preceding section. |
Filter | The condition that is used to filter the data that you want to read. Filtering based on the LIMIT keyword is not supported. The SQL syntax is determined by the selected data source. |
Shard Key | The shard key. You can use a column in the source table as the shard key. We recommend that you use the primary key column or an indexed column. Only integer columns are supported. If you set this parameter, data sharding is performed based on the value of this parameter, and parallel threads can be used to read data. This improves data synchronization efficiency. Note: The Shard Key parameter is displayed only after you select the data source for the synchronization node. |
- Configure field mappings. This operation is equivalent to setting the column parameter that is described in the preceding section.
Fields in the source on the left have a one-to-one mapping with fields in the destination on the right. You can click Add to add a field. To remove an added field, move the pointer over the field and click the Remove icon.
Operation | Description |
---|---|
Map Fields with the Same Name | Click Map Fields with the Same Name to establish mappings between fields with the same name. The data types of the fields must match. |
Map Fields in the Same Line | Click Map Fields in the Same Line to establish mappings between fields in the same row. The data types of the fields must match. |
Delete All Mappings | Click Delete All Mappings to remove the mappings that are established. |
Auto Layout | Click Auto Layout. Then, the system automatically sorts the fields based on specific rules. |
Change Fields | Click the Change Fields icon. In the Change Fields dialog box, you can manually edit the fields in the source table. Each field occupies a row. The first and the last blank rows are included, whereas other blank rows are ignored. |
Add | Click Add to add a field. Take note of the following rules when you add a field (a combined example follows these rules). |
- You can enter constants. Each constant must be enclosed in single quotation marks ('), such as 'abc' and '123'.
- You can use scheduling parameters, such as ${bizdate}.
- You can enter functions that are supported by relational databases, such as now() and count(1).
- If the field that you entered cannot be parsed, the value of Type for the field is Unidentified.
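Because field mapping is equivalent to the column parameter, the same kinds of entries can appear there. The following fragment is a sketch of how such entries might look in the column parameter of the code editor; the column name id is illustrative.

```json
{
    "stepType":"saphana",
    "parameter":{
        "column":[
            "id", // A regular source column (illustrative name).
            "'abc'", // A constant, enclosed in single quotation marks.
            "${bizdate}", // A scheduling parameter.
            "now()" // A function that is supported by the source database.
        ]
    }
}
```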
- Configure channel control policies.
Parameter | Description |
---|---|
Expected Maximum Concurrency | The maximum number of concurrent threads that the synchronization node can use to read data from the source or write data to the destination. You can configure the concurrency for the synchronization node on the codeless UI. |
Bandwidth Throttling | Specifies whether to enable bandwidth throttling. You can enable bandwidth throttling and specify a maximum transmission rate to prevent heavy read workloads on the source. We recommend that you enable bandwidth throttling and set the maximum transmission rate to an appropriate value based on the configurations of the source. |
Dirty Data Records Allowed | The maximum number of dirty data records allowed. |
Distributed Execution | This parameter is not supported for synchronization nodes that use SAP HANA Reader. |
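These channel control policies correspond to the errorLimit and speed settings that appear in the code-editor examples in the next section. The following extract is only a mapping aid; the values are placeholders.

```json
{
    "setting":{
        "errorLimit":{
            "record":"0" // Dirty Data Records Allowed.
        },
        "speed":{
            "concurrent":1, // Expected Maximum Concurrency.
            "throttle":true, // Bandwidth Throttling. The mbps parameter takes effect only when throttle is set to true.
            "mbps":"12" // The maximum transmission rate.
        }
    }
}
```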
Configure SAP HANA Reader by using the code editor
- Configure a synchronization node to read data from a table that is not sharded
{ "type":"job", "version":"2.0", // The version number. "steps":[ { "stepType":"saphana",// The reader type. "parameter":{ "column":[ // The names of the columns from which you want to read data. "id" ], "connection":[ { "querySql":["select a,b from join1 c join join2 d on c.id = d.id;"], // The SQL statement that is used to read data from the source table. "datasource":"", // The name of the data source. "table":[// The name of the source table. The table name must be enclosed in brackets []. "xxx" ] } ], "where":"",// The WHERE clause. "splitPk":"", // The shard key. "encoding":"UTF-8"// The encoding format. }, "name":"Reader", "category":"reader" }, { "stepType":"stream", "parameter":{}, "name":"Writer", "category":"writer" } ], "setting":{ "errorLimit":{ "record":"0"// The maximum number of dirty data records allowed. }, "speed":{ "throttle":true,// Specifies whether to enable bandwidth throttling. A value of false indicates that bandwidth throttling is disabled, and a value of true indicates that bandwidth throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true. "concurrent":1, // The maximum number of parallel threads. "mbps":"12"// The maximum transmission rate. } }, "order":{ "hops":[ { "from":"Reader", "to":"Writer" } ] } }
- Configure a synchronization node to read data from a sharded table
Note When you configure a synchronization node to read data from a sharded SAP HANA table, you can select multiple partitions that have the same schema.
{ "type": "job", "version": "1.0", "configuration": { "reader": { "plugin": "saphana", "parameter": { "connection": [ { "table": [ "tbl1", "tbl2", "tbl3" ], "datasource": "datasourceName1" }, { "table": [ "tbl4", "tbl5", "tbl6" ], "datasource": "datasourceName2" } ], "singleOrMulti": "multi", "splitPk": "db_id", "column": [ "id", "name", "age" ], "where": "1 < id and id < 100" } }, "writer": { } } }