This topic describes the data types and parameters that are supported by Lindorm Writer and how to configure Lindorm Writer by using the codeless user interface (UI) and code editor.
Background information
- The configuration parameter is required for Lindorm Writer. You can go to the ApsaraDB for Lindorm console to obtain the configuration items that are necessary for Data Integration to connect to an ApsaraDB for Lindorm cluster. The configuration data must be in the JSON format, as illustrated in the sketch after this list.
- ApsaraDB for Lindorm is a multi-model database. Lindorm Writer can write data to tables of the table and wideColumn models in ApsaraDB for Lindorm databases. For more information about tables of the table and wideColumn models, see Overview. You can also consult Lindorm engineers on duty by using DingTalk.
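The following snippet is a minimal sketch of what such a configuration object can look like. It reuses the connection keys that appear in the sample jobs later in this topic; the actual keys and values depend on your cluster, so replace the placeholders with the configuration items obtained from the console.

```json
{
    "lindorm.client.seedserver": "xxxxxxx:30020",
    "lindorm.client.username": "xxxxxx",
    "lindorm.client.password": "xxxxxx",
    "lindorm.client.namespace": "default"
}
```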
Limits
Lindorm Writer supports only exclusive resource groups for Data Integration, but not the shared resource group or custom resource groups for Data Integration. For more information, see Create and use an exclusive resource group for Data Integration, Use a shared resource group, and Create a custom resource group for Data Integration.
Data types
Lindorm Writer supports most ApsaraDB for Lindorm data types. Make sure that the data types of your database are supported.
Category | ApsaraDB for Lindorm data type |
---|---|
Integer | INT, LONG, and SHORT |
Floating point | DOUBLE and FLOAT |
String | STRING |
Date and time | DATE |
Boolean | BOOLEAN |
Binary | BINARYSTRING |
Parameters
Parameter | Description | Required | Default value |
---|---|---|---|
configuration | The configuration items that are necessary for Data Integration to connect to each ApsaraDB for Lindorm cluster. You can go to the ApsaraDB for Lindorm console to obtain the configuration items and ask the administrator of the ApsaraDB for Lindorm database to convert the configurations to data in the following JSON format: {"key1":"value1","key2":"value2"}. Example: {"lindorm.zookeeper.quorum":"????","lindorm.zookeeper.property.clientPort":"????"}. Note: If you write the JSON code manually, you must escape double quotation marks (") by using \". | Yes | No default value |
dynamicColumn | The dynamic column mode. The configurations of this mode are complex and this mode is not used in most cases. Valid values: true and false. Default value: false. | Yes | false |
table | The name of the table to which you want to write data. The table name is case-sensitive. | Yes | No default value |
namespace | The namespace of the table to which you want to write data. The namespace is case-sensitive. | Yes | No default value |
encoding | The encoding method. Valid values: UTF-8 and GBK. This parameter is used to convert Lindorm byte[] data that is stored in binary mode to strings. | No | UTF-8 |
columns | The columns of the table to which you want to write data. Lindorm Writer allows you to write data to specified columns of the destination table in an order that is different from the column order defined in the schema of the source table. For an example, see the sketch after this table. | Yes | No default value |
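To show how these parameters fit together, the following fragment is a minimal sketch of a Lindorm Writer step for a table-type destination. It reuses the placeholder connection values from the sample jobs later in this topic; the table name and column names are assumptions that you must replace with your own, and the comments must be deleted before you run the code.

```json
{
    "stepType": "lindorm",
    "parameter": {
        // Connection settings obtained from the ApsaraDB for Lindorm console. The values are placeholders.
        "configuration": {
            "lindorm.client.seedserver": "xxxxxxx:30020",
            "lindorm.client.username": "xxxxxx",
            "lindorm.client.password": "xxxxxx",
            "lindorm.client.namespace": "default"
        },
        // The destination namespace and table. Both are case-sensitive.
        "namespace": "default",
        "table": "lindorm_table",
        // The dynamic column mode is disabled in most cases.
        "dynamicColumn": "false",
        // The encoding used to convert byte[] data to strings.
        "encoding": "utf8",
        // The destination columns. The order does not have to follow the source schema.
        "columns": [
            "id",
            "name",
            "age"
        ]
    },
    "name": "Writer",
    "category": "writer"
}
```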
Configure Lindorm Writer by using the codeless UI
This method is not supported.
Configure Lindorm Writer by using the code editor
- For more information about how to configure a job that writes data from a MySQL data source to a table of the table type in an ApsaraDB for Lindorm database by using the code editor, see Create a sync node by using the code editor. Note: Delete the comments from the following code before you run the code:
{ "type": "job", "version": "2.0", "steps": [ { "stepType": "mysql", "parameter": { "checkSlave": true, "datasource": " ", "envType": 1, "column": [ "id", "value", "table" ], "socketTimeout": 3600000, "masterSlave": "slave", "connection": [ { "datasource": " ", "table": [] } ], "where": "", "splitPk": "", "encoding": "UTF-8", "print": true }, "name": "mysqlreader", "category": "reader" }, { "stepType": "lindorm", "parameter": { "configuration": { "lindorm.client.seedserver": "xxxxxxx:30020", "lindorm.client.username": "xxxxxx", "lindorm.client.namespace": "default", "lindorm.client.password": "xxxxxx" }, "nullMode": "skip", "datasource": "", "writeMode": "api", "envType": 1, "columns": [ "id", "name", "age", "birthday", "gender" ], "dynamicColumn": "false", "table": "lindorm_table", "encoding": "utf8", }, "name": "Writer", "category": "writer" } ], "setting": { "jvmOption": "", "executeMode": null, "errorLimit": { "record": "0" }, "speed": { // The transmission rate, in Byte/s. Data Integration runs to reach this rate as much as possible but does not exceed it. "byte": 1048576 }, // The maximum number of dirty data records allowed. "errorLimit": { // The maximum number of dirty data records allowed. If the value of errorlimit is greater than the maximum value, an error is reported. "record": 0, // The maximum percentage of dirty data records. 1.0 indicates 100% and 0.02 indicates 2%. "percentage": 0.02 } }, "order": { "hops": [ { "from": "Reader", "to": "Writer" } ] } }
- For more information about how to configure a job that writes data from a MySQL data source to a table of the wideColumn type in an ApsaraDB for Lindorm database by using the code editor, see Create a sync node by using the code editor. Note: Delete the comments from the following code before you run the code:
{ "type": "job", "version": "2.0", "steps": [ { "stepType": "mysql", "parameter": { "envType": 0, "datasource": " ", "column": [ "id", "name", "age", "birthday", "gender" ], "connection": [ { "datasource": " ", "table": [] } ], "where": "", "splitPk": "", "encoding": "UTF-8" }, "name": "Reader", "category": "reader" }, { "stepType": "lindorm", "parameter": { "configuration": { "lindorm.client.seedserver": "xxxxxxx:30020", "lindorm.client.username": "xxxxxx", "lindorm.client.namespace": "default", "lindorm.client.password": "xxxxxx" }, "writeMode": "api", "namespace": "default", "table": "xxxxxx", "encoding": "utf8", "nullMode": "skip", "dynamicColumn": "false", "caching": 128, "columns": [ "ROW|STRING", "cf:id|STRING", "cf:age|INT", "cf:birthday|STRING" ] } } ], "setting": { "jvmOption": "", "errorLimit": { "record": "0" }, "speed": { "concurrent": 3, "throttle": false } } "order": { "hops": [ { "from": "Reader", "to": "Writer" } ] } }