
Data Transmission Service: Add additional columns

Last Updated: Mar 28, 2026

When you create a data synchronization or migration task in Data Transmission Service (DTS), you can add extra columns to the destination table and assign values to them. After data lands in the destination, filter by those column values to perform metadata management, deduplication, and sorting.

Supported scenarios

Additional columns are supported in the following source-to-destination combinations:

| Source database type | Destination database type |
| --- | --- |
| Any | DataHub, Lindorm, Kafka, or ClickHouse |
| DB2 for LUW or DB2 for iSeries (AS/400) | MySQL or PolarDB for MySQL |
| MySQL, MariaDB, or PolarDB for MySQL | MySQL, MariaDB, or PolarDB for MySQL |
| MySQL | Tair/Redis, AnalyticDB for PostgreSQL, or AnalyticDB for MySQL 3.0 |
| PolarDB for PostgreSQL | AnalyticDB for PostgreSQL |
| SQL Server | MySQL |

Usage notes

  • For synchronization instances, set Synchronization Types to Schema Synchronization. For migration instances, set Migration Types to Schema Migration.

  • Before modifying additional column rules on a running synchronization task, check whether the column names conflict with existing columns in the destination table.

  • If the source database is MongoDB, the destination collections cannot contain fields named _id or _value. Otherwise, the synchronization fails.

  • Right-clicking a database in Selected Objects applies the configured additional columns to all tables in that destination database.

  • If extract, transform, and load (ETL) is configured on the synchronization task, the additional column rules run first to generate a value, and then the ETL script computes the final value that is synchronized to the destination.
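When both features are enabled, the pipeline behaves like a two-stage function composition. The sketch below is a hypothetical Python illustration of that evaluation order; the function names are not DTS APIs:

```python
# Hypothetical illustration of the evaluation order: the
# additional-column rule runs first, then the ETL script transforms
# the rule's output. These function names are not DTS APIs.
def additional_column_rule(row):
    # e.g. tag the row with its source location, like __DB__ and __TB__
    return row["db"] + "." + row["tb"]

def etl_script(value):
    # the ETL script receives the rule's output, not the raw row
    return value.upper()

row = {"db": "mydb", "tb": "orders"}
print(etl_script(additional_column_rule(row)))  # MYDB.ORDERS
```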

Add additional columns to a synchronization task

The following steps use a synchronization instance as an example. The same approach applies to migration instances.

Prerequisites

Before you begin, ensure that you have:

Configure additional columns

  1. Go to the Data Synchronization Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, click Data + AI.

    3. In the left-side navigation pane, choose Data Transmission Service (DTS) > Data Synchronization.

    The navigation path may vary based on the DMS console mode. For details, see Simple mode console and Customize the layout and style of the DMS console. You can also go directly to the Data Synchronization Tasks page of the new DTS console.
  2. Click Create Task and configure the source and destination databases.

    To add additional columns to a Running synchronization instance, click Reselect Objects instead.
  3. Follow the prompts to the Configure Objects step.

  4. Set Synchronization Types to Schema Synchronization.

  5. In Source Objects, select the databases or tables to synchronize, then click the right arrow icon to move them to Selected Objects.

  6. In Selected Objects, right-click the database or table.

  7. In the dialog box, go to the Additional Columns section and click Add Column.

  8. Enter the Column Name, Type, and Assign Value for the new column.

    • Column Name and Type define the column added to the destination table.

    • Assign Value is an expression that references source table columns or built-in variables. Click the ... icon to open the expression editor. For syntax details, see Expression reference.

  9. Click OK.

  10. Follow the prompts to complete the rest of the task configuration.

Expression reference

An additional column value is built from constants, variables, operators, and expression functions. The syntax is compatible with the data processing DSL (Domain-Specific Language) used for ETL.

In expressions, enclose column names in backticks (`), not single quotation marks (').

Common use cases

Before diving into the full reference, here are the most common patterns:

| Goal | Expression pattern |
| --- | --- |
| Tag each row with its source table name | `__TB__` |
| Tag each row with its source database name | `__DB__` |
| Record the operation type (insert/update/delete) | `__OPERATION__` |
| Record when the transaction was committed | `__COMMIT_TIMESTAMP__` |
| Build a composite key from source columns | `'prefix:'+col1+':'+col2` |
| Flag inserts vs. other operations | `(op_eq(__OPERATION__,'__OP_INSERT__')? 1 : 0)` |
| Mask sensitive data in a column | `str_mask(phone, 7, 10, '#')` |

Constants

| Type | Example |
| --- | --- |
| int | `123` |
| float | `123.4` |
| string | `"hello1_world"` |
| boolean | `true` or `false` |
| datetime | `DATETIME('2021-01-01 10:10:01')` |

Variables

Use these built-in variables to capture metadata about each replicated row.

| Variable | Description | Data type | Example value |
| --- | --- | --- | --- |
| `__TB__` | Name of the source table | string | `table` |
| `__DB__` | Name of the source database | string | `mydb` |
| `__OPERATION__` | Type of DML operation | string | `__OP_INSERT__`, `__OP_UPDATE__`, `__OP_DELETE__` |
| `__COMMIT_TIMESTAMP__` | Time when the transaction was committed | datetime | `'2021-01-01 10:10:01'` |
| `column` | Value of a source column for the current row | string | `id`, `name` |
| `__SCN__` | System Change Number (SCN), which uniquely identifies the version and time of a transaction commit | string | `22509****` |
| `__ROW_ID__` | Address ID of a data record, which uniquely locates the row. Not supported for MySQL sources. | string | `AAAgWHAAKAAJgX****` |

Expression functions

Numerical operations

| Function | Syntax | Parameters | Return value | Example |
| --- | --- | --- | --- | --- |
| Addition | `op_sum(value1, value2)` or `value1+value2` | `value1`, `value2`: integer or float | Integer if both inputs are integers; float otherwise | `op_sum(col1, 1.0)` |
| Subtraction | `op_sub(value1, value2)` or `value1-value2` | `value1`, `value2`: integer or float | Integer if both inputs are integers; float otherwise | `op_sub(col1, 1.0)` |
| Multiplication | `op_mul(value1, value2)` or `value1*value2` | `value1`, `value2`: integer or float | Integer if both inputs are integers; float otherwise | `op_mul(col1, 1.0)` |
| Division | `op_div_true(value1, value2)` or `value1/value2` | `value1`, `value2`: integer or float | Integer if both inputs are integers; float otherwise | `op_div_true(col1, 2.0)`: if `col1=15`, returns `7.5` |
| Modulo | `op_mod(value1, value2)` | `value1`, `value2`: integer or float | Integer if both inputs are integers; float otherwise | `op_mod(col1, 10)`: if `col1=23`, returns `3` |
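The division and modulo examples above can be checked with plain Python arithmetic, since `op_div_true` is true division and `op_mod` is the remainder operation:

```python
# True division, as in op_div_true(col1, 2.0) with col1 = 15.
col1 = 15
print(col1 / 2.0)  # 7.5

# Modulo, as in op_mod(col1, 10) with col1 = 23.
col1 = 23
print(col1 % 10)   # 3
```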

Logical operations

| Function | Syntax | Parameters | Return value | Example |
| --- | --- | --- | --- | --- |
| Equals | `op_eq(value1, value2)` | integer, float, or string | boolean | `op_eq(col1, 23)` |
| Greater than | `op_gt(value1, value2)` | integer, float, or string | boolean | `op_gt(col1, 1.0)` |
| Less than | `op_lt(value1, value2)` | integer, float, or string | boolean | `op_lt(col1, 1.0)` |
| Greater than or equal to | `op_ge(value1, value2)` | integer, float, or string | boolean | `op_ge(col1, 1.0)` |
| Less than or equal to | `op_le(value1, value2)` | integer, float, or string | boolean | `op_le(col1, 1.0)` |
| AND | `op_and(value1, value2)` | boolean | boolean | `op_and(is_male, is_student)` |
| OR | `op_or(value1, value2)` | boolean | boolean | `op_or(is_male, is_student)` |
| IN | `op_in(value, json_array)` | `value`: any type; `json_array`: JSON-format string | boolean | `op_in(id,json_array('["0","1","2","3","4","5","6","7","8"]'))` |
| Is null | `op_is_null(value)` | any type | boolean | `op_is_null(name)` |
| Is not null | `op_is_not_null(value)` | any type | boolean | `op_is_not_null(name)` |
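For `op_in`, the JSON-format array is a string that the function parses before testing membership. A minimal local sketch of that behavior, assuming string comparison as in the documented example:

```python
import json

# Local stand-in for op_in(value, json_array): parse the JSON-format
# string, then test membership. Comparing values as strings is an
# assumption inferred from the documented example, whose array holds
# quoted elements.
def op_in(value, json_array):
    return str(value) in json.loads(json_array)

print(op_in(3, '["0","1","2","3","4","5","6","7","8"]'))  # True
print(op_in(9, '["0","1","2","3","4","5","6","7","8"]'))  # False
```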

String functions

| Function | Syntax | Parameters | Return value | Example |
| --- | --- | --- | --- | --- |
| Concatenate strings | `op_add(str_1, str_2, ..., str_n)` | strings | Concatenated string | `op_add(col,'hangzhou','dts')` |
| Format and concatenate | `str_format(format, value1, value2, ...)` | `format`: string with `{}` placeholders | Formatted string | `str_format("part1: {}, part2: {}", col1, col2)`: if `col1="ab"` and `col2="12"`, returns `"part1: ab, part2: 12"` |
| Replace substring | `str_replace(original, oldStr, newStr, count)` | `count`: maximum replacements; `-1` replaces all | String after replacement | `str_replace(name, "a", 'b', -1)`: if `name="aba"`, returns `"bbb"` |
| Replace in all string fields | `tail_replace_string_field(search, replace, all)` | `all`: only `true` is supported | String after replacement | `tail_replace_string_field('\u000f','',true)`: replaces `\u000f` with an empty string in all varchar, text, and char fields |
| Strip characters | `str_strip(string_val, charSet)` | `charSet`: characters to remove | String with leading and trailing characters removed | `str_strip(name, 'ab')`: if `name="axbzb"`, returns `"xbz"` |
| Convert to lowercase | `str_lower(value)` | string column or constant | Lowercase string | `str_lower(str_col)` |
| Convert to uppercase | `str_upper(value)` | string column or constant | Uppercase string | `str_upper(str_col)` |
| String to integer | `cast_string_to_long(value)` | string | Integer | `cast_string_to_long(col)` |
| Integer to string | `cast_long_to_string(value)` | integer | String | `cast_long_to_string(col)` |
| Count occurrences | `str_count(str, pattern)` | `str`: string; `pattern`: substring to find | Number of occurrences | `str_count(str_col, 'abc')`: if `str_col="zabcyabcz"`, returns `2` |
| Find substring | `str_find(str, pattern)` | `str`: string; `pattern`: substring to find | Position of the first match; `-1` if not found | `str_find(str_col, 'abc')`: if `str_col="xabcy"`, returns `1` |
| Check if all letters | `str_isalpha(str)` | string column or constant | boolean | `str_isalpha(str_col)` |
| Check if all digits | `str_isdigit(str)` | string column or constant | boolean | `str_isdigit(str_col)` |
| Regular expression match | `regex_match(str, regex)` | `str`: string; `regex`: regex pattern | boolean | `regex_match(__TB__,'user_\\\d+')` |
| Mask part of a string | `str_mask(str, start, end, maskStr)` | `start`: start position (minimum: 0); `end`: end position (maximum: length-1); `maskStr`: single character | Masked string | `str_mask(phone, 7, 10, '#')` |
| Get substring after a string | `substring_after(str, cond)` | `str`: original string; `cond`: delimiter | Substring after `cond` (delimiter not included) | `substring_after(col, 'abc')` |
| Get substring before a string | `substring_before(str, cond)` | `str`: original string; `cond`: delimiter | Substring before `cond` (delimiter not included) | `substring_before(col, 'efg')` |
| Get substring between two strings | `substring_between(str, cond1, cond2)` | `str`: original string; `cond1`, `cond2`: delimiters | Substring between `cond1` and `cond2` (delimiters not included) | `substring_between(col, 'abc','efg')` |
| Check if value is a string | `is_string_value(value)` | string or column name | boolean | `is_string_value(col1)` |
| Get a field from a MongoDB document | `bson_value("field1", "field2", ...)` | Nested field path | Field value | `e_set(user_name, bson_value("person","name"))` |
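Two of the examples above can be reproduced locally to confirm the expected output before configuring a task. The Python stand-ins below are assumptions inferred from the documented examples; in particular, `str_mask` is assumed to treat `end` as inclusive:

```python
# Stand-in for str_strip(string_val, charSet): remove any leading and
# trailing characters that appear in charSet, like Python's str.strip.
def str_strip(value, char_set):
    return value.strip(char_set)

# Stand-in for str_mask(str, start, end, maskStr): replace characters
# at positions start..end (inclusive, an assumption) with maskStr.
def str_mask(value, start, end, mask_char):
    return value[:start] + mask_char * (end - start + 1) + value[end + 1:]

print(str_strip("axbzb", "ab"))             # xbz, as in the table
print(str_mask("13812345678", 7, 10, "#"))  # 1381234#### (sample number)
```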

Conditional expressions

| Syntax | Parameters | Return value | Example |
| --- | --- | --- | --- |
| `(cond ? val_1 : val_2)` | `cond`: boolean; `val_1` and `val_2`: same type | `val_1` if `cond` is true; `val_2` otherwise | `(id>1000? 1 : 0)` |

Time functions

| Function | Syntax | Parameters | Return value | Example |
| --- | --- | --- | --- | --- |
| Current time (second precision) | `dt_now()` | None | DATETIME, accurate to the second | `dt_now()` |
| Current time (millisecond precision) | `dt_now_millis()` | None | DATETIME, accurate to the millisecond | `dt_now_millis()` |
| UTC timestamp (seconds) to DATETIME | `dt_fromtimestamp(value, [timezone])` | `value`: integer; `timezone`: optional | DATETIME, accurate to the second | `dt_fromtimestamp(1626837629,'GMT+08')` |
| UTC timestamp (milliseconds) to DATETIME | `dt_fromtimestamp_millis(value, [timezone])` | `value`: integer; `timezone`: optional | DATETIME, accurate to the millisecond | `dt_fromtimestamp_millis(1626837629123,'GMT+08')` |
| DATETIME to UTC timestamp (seconds) | `dt_parsetimestamp(value, [timezone])` | `value`: DATETIME; `timezone`: optional | Integer | `dt_parsetimestamp(datetime_col,'GMT+08')` |
| DATETIME to UTC timestamp (milliseconds) | `dt_parsetimestamp_millis(value, [timezone])` | `value`: DATETIME; `timezone`: optional | Integer | `dt_parsetimestamp_millis(datetime_col,'GMT+08')` |
| DATETIME to string | `dt_str(value, format)` | `value`: DATETIME; `format`: for example `yyyy-MM-dd HH:mm:ss` | String | `dt_str(col1, 'yyyy-MM-dd HH:mm:ss')` |
| String to DATETIME | `dt_strptime(value, format)` | `value`: string; `format`: for example `yyyy-MM-dd HH:mm:ss` | DATETIME | `dt_strptime('2021-07-21 03:20:29', 'yyyy-MM-dd hh:mm:ss')` |
| Adjust DATETIME | `dt_add(value, [years=intVal], [months=intVal], [days=intVal], [hours=intVal], [minutes=intVal])` | `value`: DATETIME; `intVal`: integer (negative values subtract) | DATETIME | `dt_add(datetime_col,years=-1)` |
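The `dt_fromtimestamp` example can be cross-checked with Python's `datetime`, which applies the same timestamp-to-zoned-time conversion for a GMT+08 offset:

```python
from datetime import datetime, timezone, timedelta

# dt_fromtimestamp(1626837629,'GMT+08') from the table, reproduced
# with the standard library: interpret the Unix timestamp in a
# fixed GMT+08 zone.
tz8 = timezone(timedelta(hours=8))
dt = datetime.fromtimestamp(1626837629, tz=tz8)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2021-07-21 11:20:29
```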

FAQ

How do I configure custom keys and values for a DTS task from MySQL to Redis?

By default, DTS maps the entire source row using one of three Cache Mapping Mode options. To extract specific columns and build custom key-value pairs instead, add two special additional columns: __DTS_TP_TO_REDIS_KEY__ and __DTS_TP_TO_REDIS_VALUE__.


Configuration steps:

  1. When configuring objects, move the databases and tables to the right pane, then click the edit button for the destination Redis DB.


  2. Add the columns __DTS_TP_TO_REDIS_KEY__ and __DTS_TP_TO_REDIS_VALUE__.


  3. Set the Assign Value expressions using DSL syntax. For example, consider the following MySQL aes table:

    CREATE TABLE `aes` (
        `id`            BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT 'Auto-increment primary key',
        `login_time`    INT(10)             NOT NULL DEFAULT '0'    COMMENT 'Logon identifier time',
        `pay_time`      INT(10)             NOT NULL DEFAULT '0'    COMMENT 'Payment identifier time',
        `gid`           INT(10)             NOT NULL DEFAULT '0'    COMMENT 'Game ID',
        `cid`           INT(10)             NOT NULL DEFAULT '0'    COMMENT 'Channel ID',
        `gcp_code`      VARCHAR(40)         NOT NULL DEFAULT ''     COMMENT 'Channel package number. An empty value indicates a new entry for the gid.',
        `uname`         VARCHAR(120)        CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '' COMMENT 'Account',
        PRIMARY KEY (`id`),
        UNIQUE KEY `idx_uq` (`gid`, `gcp_code`, `uname`),
        KEY `idx_uname` (`uname`)
    ) ENGINE=InnoDB AUTO_INCREMENT=48022 DEFAULT CHARSET=utf8 COMMENT='Game account activation time information table';

    Business requirements:

    • Key: stat_create_day:{gcp_code}:{uname} — built from two source columns

    • Value: {login_time} — a Unix timestamp converted to datetime format

    Assign Value expressions:

    • __DTS_TP_TO_REDIS_KEY__: ` 'stat_create_day'+':'+gcp_code+':'+uname `

    • __DTS_TP_TO_REDIS_VALUE__: ` dt_fromtimestamp(cast_string_to_long(login_time)) `

  4. After the synchronization or migration completes, verify the resulting key-value pairs in the destination Redis database.
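To sanity-check the two expressions before starting the task, the same logic can be evaluated locally. The sketch below uses made-up sample values for `gcp_code`, `uname`, and `login_time`, and shows the converted timestamp in UTC; the task's actual timezone handling may differ:

```python
from datetime import datetime, timezone

# Sample row values (made up for illustration).
gcp_code, uname, login_time = "gcp01", "alice", "1626837629"

# 'stat_create_day'+':'+gcp_code+':'+uname
redis_key = "stat_create_day" + ":" + gcp_code + ":" + uname

# dt_fromtimestamp(cast_string_to_long(login_time)), shown in UTC here.
redis_value = datetime.fromtimestamp(int(login_time), tz=timezone.utc)

print(redis_key)                                  # stat_create_day:gcp01:alice
print(redis_value.strftime("%Y-%m-%d %H:%M:%S"))  # 2021-07-21 03:20:29
```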