Use Redis Writer in DataWorks Data Integration to write data to a Redis data source in an offline synchronization job.
Limitations
Run the synchronization job on a serverless resource group for Data Integration (recommended) or an exclusive resource group for Data Integration.
When you use the List data type, rerunning a synchronization job is not idempotent: each rerun appends the same elements to the list again. Clear the existing data from Redis before rerunning the job.
Redis Writer does not support Bloom filter configuration. To handle duplicate data, add a Shell, Python, or PyODPS node before or after the synchronization node in your workflow.
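Because Redis Writer provides no built-in deduplication, a common pre-step before rerunning a List-type job is to clear the affected keys. The following is a minimal sketch of that idea, assuming a redis-py style client (`scan_iter`/`delete`); the `user:` key prefix and the `FakeRedis` in-memory stand-in are illustrative, not part of Redis Writer.

```python
# Sketch: clear keys matching a prefix before rerunning a List-type sync job.
# Assumes a redis-py style client (scan_iter / delete). FakeRedis below is a
# minimal in-memory stand-in so the sketch runs without a Redis server.

def clear_keys(client, prefix):
    """Delete every key that starts with prefix; return the number deleted."""
    deleted = 0
    for key in list(client.scan_iter(match=prefix + "*")):
        client.delete(key)
        deleted += 1
    return deleted

class FakeRedis:
    """In-memory stand-in exposing only the two calls the sketch needs."""
    def __init__(self, data):
        self.data = dict(data)
    def scan_iter(self, match):
        p = match.rstrip("*")
        return [k for k in self.data if k.startswith(p)]
    def delete(self, key):
        self.data.pop(key, None)

r = FakeRedis({"user:1": ["a"], "user:2": ["b"], "order:1": ["c"]})
removed = clear_keys(r, "user:")  # deletes user:1 and user:2 only
```

With a real redis-py client, `clear_keys(redis.Redis(...), "user:")` works the same way, because `scan_iter` iterates keys incrementally instead of blocking the server the way `KEYS` would.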
Supported data types
Redis Writer supports the following Redis value types: String, List, Set, Sorted Set, and Hash. For more information about Redis data types, see redis.io.
Configure a synchronization job
To configure a synchronization job, follow the steps in the appropriate guide:
Codeless UI: Configure a task in the codeless UI
Code editor: Configure a task in the code editor
For all parameters and a script sample, see the appendix below.
Appendix: Script sample and parameters
Script sample
The following script shows a synchronization job that reads from a MySQL source and writes to a Redis destination using the String data type.
{
"type": "job",
"version": "2.0",
"steps": [
{
"stepType": "mysql",
"parameter": {
"envType": 0,
"datasource": "xc_mysql_demo2",
"column": [
"id",
"value",
"table"
],
"connection": [
{
"datasource": "xc_mysql_demo2",
"table": []
}
],
"where": "",
"splitPk": "",
"encoding": "UTF-8"
},
"name": "Reader",
"category": "reader"
},
{
"stepType": "redis",
"parameter": {
"expireTime": {
"seconds": "1000"
},
"keyFieldDelimiter": "\u0001",
"dateFormat": "yyyy-MM-dd HH:mm:ss",
"datasource": "xc_mysql_demo2",
"envType": 0,
"writeMode": {
"type": "string",
"mode": "set",
"valueFieldDelimiter": "\u0001"
},
"keyIndexes": [0, 1],
"batchSize": "1000",
"column": [
{ "name": "id", "index": "0" },
{ "name": "name", "index": "1" },
{ "name": "age", "index": "2" },
{ "name": "sex", "index": "3" }
]
},
"name": "Writer",
"category": "writer"
}
],
"setting": {
"errorLimit": {
"record": "0"
},
"speed": {
"throttle": true,
"concurrent": 1,
"mbps": "12"
}
},
"order": {
"hops": [
{
"from": "Reader",
"to": "Writer"
}
]
}
}
Writer parameters
| Parameter | Description | Required | Default |
|---|---|---|---|
datasource | The data source name. Must match the name configured in the DataWorks console. | Yes | — |
keyIndexes | The zero-based indexes of the source columns to use as the Redis key. Use a single integer for a single-column key, or an array for a composite key. For example, [0, 1] uses the first and second columns. Redis Writer uses all remaining columns as the value. | Yes | — |
writeMode | The Redis data type to write. Valid values: string, list, set, zset, hash. See writeMode parameters for configuration details. | No | string |
expireTime | The expiration time for the key. Configure as seconds (seconds from now) or unixtime (Unix timestamp). If not set, the key never expires. | No | 0 (never expires) |
keyFieldDelimiter | The delimiter used to concatenate multiple source columns into a single Redis key. Only needed for composite keys. | No | \u0001 |
dateFormat | The format for Date-type values written to Redis, such as yyyy-MM-dd HH:mm:ss. | No | — |
selectDatabase | The index of the destination database. Valid values: "0" to "N-1", where N is the number of databases configured in Redis. Not available for Redis clusters. | No | 0 |
batchSize | The number of records to write per batch. A larger value reduces network round trips and improves throughput, but setting it too high may cause an out-of-memory (OOM) error. | No | 1000 |
timeout | The write operation timeout in milliseconds. | No | 30000 |
redisMode | The Redis deployment mode. Set to ClusterMode for cluster mode (direct connection; supports batch writing). Leave blank for non-cluster mode (proxy addresses, read/write splitting, or standard edition; does not support batch writing). | No | — |
column | Column definitions for JSON-format output. Applies only when writeMode.type is string and writeMode.mode is set. See column parameter for details. | No | — |
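The trade-off behind batchSize can be illustrated by how records are grouped into batches, where each batch is sent in a single network round trip (for example, via a Redis pipeline). This chunking helper is an illustrative sketch, not Redis Writer's actual implementation.

```python
# Sketch: group records into batches of at most batch_size so each batch can
# be sent in one network round trip. Larger batches mean fewer round trips
# but more memory held per batch. Illustrative only.

def batched(records, batch_size):
    """Yield successive slices of at most batch_size records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

records = [("key:%d" % i, "value:%d" % i) for i in range(2500)]
batches = list(batched(records, 1000))
# 2500 records with batchSize=1000 -> 3 round trips instead of 2500
```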
How keyIndexes works
keyIndexes controls which source columns form the Redis key. All remaining columns become the value.
Given a source row: id=1, name="John", age=18, sex="male"
| keyIndexes | Redis key | Redis value (with valueFieldDelimiter="\u0001") |
|---|---|---|
| [0] | 1 | John\u000118\u0001male |
| [0, 1] | 1\u0001John | 18\u0001male |
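The splitting rule in the table above can be sketched as a small function: columns listed in keyIndexes are joined with keyFieldDelimiter to form the key, and all remaining columns are joined with valueFieldDelimiter to form the value. This is an illustrative reimplementation of the rule, not Redis Writer's code.

```python
# Sketch of how keyIndexes splits a source row into a Redis key and value.
# key_indexes columns -> key (joined by keyFieldDelimiter);
# all remaining columns -> value (joined by valueFieldDelimiter).

def split_row(row, key_indexes, key_delim="\u0001", value_delim="\u0001"):
    key = key_delim.join(str(row[i]) for i in key_indexes)
    value = value_delim.join(str(v) for i, v in enumerate(row)
                             if i not in key_indexes)
    return key, value

row = [1, "John", 18, "male"]
k1, v1 = split_row(row, [0])     # key "1", value "John\u000118\u0001male"
k2, v2 = split_row(row, [0, 1])  # key "1\u0001John", value "18\u0001male"
```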
To synchronize only specific columns as the value, configure the column parameter in the Reader plugin.
Column parameter
The column parameter controls how values are written when writeMode.type=string and writeMode.mode=set.
Without `column`: values are concatenated using valueFieldDelimiter (CSV format). For example, 18\u0001male.
With `column`: values are written in JSON format, including column names. For example, {"id":1,"name":"John","age":18,"sex":"male"}.
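The two output formats can be contrasted with a short sketch, assuming the value columns have already been separated from the key columns. The `make_value` helper is illustrative, not Redis Writer's implementation.

```python
import json

# Sketch contrasting the two value formats for writeMode.type=string/mode=set:
# without `column`, values are delimiter-joined (CSV style); with `column`,
# they are serialized as a JSON object keyed by column name. Illustrative.

def make_value(row, columns=None, value_delim="\u0001"):
    if columns is None:
        return value_delim.join(str(v) for v in row)
    return json.dumps({c["name"]: row[int(c["index"])] for c in columns},
                      separators=(",", ":"))

row = [1, "John", 18, "male"]
csv_value = make_value(row[2:])  # value columns only -> "18\u0001male"
cols = [{"name": "id", "index": "0"}, {"name": "name", "index": "1"},
        {"name": "age", "index": "2"}, {"name": "sex", "index": "3"}]
json_value = make_value(row, cols)
# json_value == '{"id":1,"name":"John","age":18,"sex":"male"}'
```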
Example configuration:
"column": [
{ "name": "id", "index": "0" },
{ "name": "name", "index": "1" },
{ "name": "age", "index": "2" },
{ "name": "sex", "index": "3" }
]
writeMode parameters
Configure writeMode by specifying type, mode, and (for some types) valueFieldDelimiter.
| Value type | type | mode | valueFieldDelimiter | Behavior | Re-run idempotent |
|---|---|---|---|---|---|
| String | string | set | \u0001 (default) | Overwrites the existing key value. | Yes |
| List | list | lpush (head) or rpush (tail) | \u0001 (default) | Appends elements to the list. | No — clear Redis data before re-running. |
| Set | set | sadd | \u0001 (default) | Adds members to the set. Overwrites the key if it exists with a different type. | Yes |
| Sorted Set | zset | zadd | Not required | Adds score/member pairs to the sorted set. Overwrites the key if it exists with a different type. Each source record must include exactly one score followed by one member (in addition to the key column). | Yes |
| Hash | hash | hset | Not required | Adds field/value pairs to the hash. Overwrites the key if it exists with a different type. Each source record must include exactly one field followed by one value (in addition to the key column). | Yes |
Sample configurations by data type:
String:
"writeMode": {
"type": "string",
"mode": "set",
"valueFieldDelimiter": "\u0001"
}
List:
"writeMode": {
"type": "list",
"mode": "lpush",
"valueFieldDelimiter": "\u0001"
}
Set:
"writeMode": {
"type": "set",
"mode": "sadd",
"valueFieldDelimiter": "\u0001"
}
Sorted Set:
"writeMode": {
"type": "zset",
"mode": "zadd"
}
For zset, the score column must appear before the member column in the source data.
Hash:
"writeMode": {
"type": "hash",
"mode": "hset"
}
For hash, the field column must appear before the value column in the source data.
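The ordering constraints for zset and hash records can be sketched as a mapping from the non-key columns of a source record to the arguments of the corresponding Redis command: after the key columns are removed, a zset record must leave exactly [score, member] and a hash record exactly [field, value], in that order. These mapping helpers are illustrative, not Redis Writer's code.

```python
# Sketch: map the non-key columns of a source record to Redis command
# arguments. For zset the score column comes first, then the member; for
# hash the field column comes first, then the value. Illustrative only.

def to_zadd_args(key, remaining):
    score, member = remaining        # score column first, then member
    return ("ZADD", key, float(score), str(member))

def to_hset_args(key, remaining):
    field, value = remaining         # field column first, then value
    return ("HSET", key, str(field), str(value))

zadd = to_zadd_args("ranking", [98.5, "John"])  # score 98.5, member "John"
hset = to_hset_args("user:1", ["age", 18])      # field "age", value "18"
```

If the source columns arrive in the wrong order, the score would be parsed from the member column (and vice versa), which is why the column order in the reader matters.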