In PolarDB-X, a DDL statement executed physically must apply to every shard independently. If the operation succeeds on some shards but fails on others — for example, because existing data on one shard violates a new column constraint — the compute node (CN) cannot issue a consistent rollback, metadata views become inconsistent across shards, and manual intervention is required.
The two-phase DDL feature eliminates this risk. It constructs a commit barrier on data nodes (DNs) that controls rollback and commit behavior before the commit point, ensuring physical DDL is applied atomically across all shards and that every shard presents a consistent metadata view afterward.
Two-phase DDL applies only to physically executed DDL statements. Logically executed DDL statements already have atomicity by design — the CN creates a temporary table, imports historical data, synchronizes incremental data in double-write mode, and then atomically switches table metadata.
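The logical-DDL pattern described above (backfill historical data, double-write incremental data, then switch metadata atomically) can be simulated in a few lines. This is an illustrative in-memory sketch, not PolarDB-X code; all class and method names are invented for the example.

```python
# Illustrative sketch (not PolarDB-X internals): backfill + double-write
# + atomic metadata switch, simulated with in-memory lists.

class LogicalTableSwitch:
    def __init__(self, old_rows):
        self.old = list(old_rows)   # existing table data
        self.new = []               # hidden temporary table
        self.double_write = False   # whether writes go to both tables
        self.active = "old"         # which table metadata points at

    def start_backfill(self):
        self.double_write = True    # capture incremental writes first,
        self.new.extend(self.old)   # then copy the historical data

    def write(self, row):
        self.old.append(row)
        if self.double_write:
            self.new.append(row)    # incremental data is double-written

    def switch(self):
        self.active = "new"         # a single atomic metadata flip
        self.double_write = False

    def read(self):
        return self.new if self.active == "new" else self.old

t = LogicalTableSwitch([1, 2, 3])
t.start_backfill()
t.write(4)                          # arrives while backfill is in flight
t.switch()
print(t.read())                     # -> [1, 2, 3, 4]
```

Because readers only ever see the table that `active` points at, the switch is all-or-nothing: no reader observes a half-migrated table.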
How two-phase DDL works
When you run a supported DDL statement, two-phase DDL does the following:
Constructs a commit barrier on all involved DNs before any physical DDL is applied.
Applies the physical DDL to each shard under barrier control.
If all shards succeed, commits atomically across all shards.
If any shard fails, rolls back the change on all shards before reaching the commit point.
This ensures that at no point can a partial schema change be visible — either all shards reflect the new schema, or none do.
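The prepare/commit flow above can be sketched as a small simulation. This is a hedged illustration of the all-or-nothing property, not PolarDB-X's actual barrier implementation; the `Shard` class and `two_phase_ddl` function are invented for the example.

```python
# Illustrative sketch (not PolarDB-X internals): all-or-nothing physical DDL
# across shards via a prepare/commit barrier, simulated in memory.

class Shard:
    def __init__(self, name, rows):
        self.name, self.rows = name, rows
        self.schema = "b int"
        self.pending = None             # prepared-but-uncommitted schema

    def prepare(self, new_schema, check):
        if not all(check(r) for r in self.rows):
            return False                # e.g. a row violates the new type
        self.pending = new_schema       # applied behind the barrier
        return True

    def commit(self):
        self.schema, self.pending = self.pending, None

    def rollback(self):
        self.pending = None

def two_phase_ddl(shards, new_schema, check):
    # Phase 1: every shard must prepare before any shard commits.
    if all(s.prepare(new_schema, check) for s in shards):
        for s in shards:
            s.commit()                  # Phase 2: past the commit point
        return True
    for s in shards:
        s.rollback()                    # failure happens before the commit point
    return False

shards = [Shard("p0", [1, 2]), Shard("p1", [999999])]
ok = two_phase_ddl(shards, "b tinyint", lambda v: -128 <= v <= 127)
print(ok, {s.name: s.schema for s in shards})
# -> False {'p0': 'b int', 'p1': 'b int'}
```

Shard `p1` rejects the narrowing because 999999 is out of the tinyint range, so `p0`'s prepared change is rolled back too and both shards keep the original schema.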
Supported statements
Two-phase DDL applies only to the following physical DDL statements:
| Statement | Operation |
|---|---|
| ADD COLUMN | Add a column to a table |
| DROP COLUMN | Remove a column from a table |
| ADD INDEX | Add an index |
| DROP INDEX | Remove an index |
| MODIFY COLUMN | Change a column definition |
Two-phase DDL does not apply to tables that contain physical partitions, foreign keys, or full-text indexes.
Limitations
Version requirement
Two-phase DDL requires kernel version 5.4.18-17108394 or later. To verify support, run:
```sql
show variables like "enable_two_phase_ddl";
```

If no result is returned, the feature is not supported by your instance.
Topology constraints
| Constraint | Detail |
|---|---|
| Shard connectivity | All shards involved in the DDL must be connected, and their threads must be running |
| Shard count limit | Logical tables with more than 500 shards on a single DN are not supported |
Locking behavior
For physical DDL statements that use the COPY algorithm, two-phase DDL widens the locking scope from individual shards to the entire logical table.
Data dictionary modification during the commit phase is a global, mutually exclusive operation and is performed while the table is locked.
If your workload is sensitive to table locking, use the Online Modify Column (OMC) feature instead. OMC changes column types without locking tables, runs as a logically executed operation, and guarantees atomicity. For details, see Change the column types without locking tables.
Check and configure two-phase DDL
Two-phase DDL is enabled by default.
Check whether the feature is supported and its current state:
```sql
show variables like "enable_two_phase_ddl";
```

| Result | Meaning |
|---|---|
| No result returned | Feature not supported by this instance |
| ON | Feature is enabled |
| OFF | Feature is disabled |
Expected output when enabled:

```
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| enable_two_phase_ddl | ON    |
+----------------------+-------+
```

Enable two-phase DDL:

```sql
set global enable_two_phase_ddl = true;
```

Disable two-phase DDL:

```sql
set global enable_two_phase_ddl = false;
```

Monitor DDL jobs
Use the following statements to inspect DDL job status and verify shard consistency.
| Statement | Purpose |
|---|---|
| SHOW DDL | View DDL jobs currently running in the instance |
| SHOW DDL RESULT | View historical DDL job results, including errors |
| CHECK TABLE <table> | Verify that all shards have the same schema and that the schema matches the CN metadata |
| SHOW CREATE TABLE <table> | Confirm the current logical table definition |
SHOW DDL RESULT output includes the following columns:
| Column | Description |
|---|---|
| JOB_ID | Unique identifier for the DDL job |
| SCHEMA_NAME | Database (schema) name |
| OBJECT_NAME | Table name |
| DDL_TYPE | Type of DDL operation, such as ALTER_TABLE |
| RESULT_TYPE | SUCCESS or ERROR |
| RESULT_CONTENT | Detailed result message, including error codes and root cause |
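In a monitoring script, rows fetched from SHOW DDL RESULT can be filtered on RESULT_TYPE to surface failed jobs. The following is a minimal client-side sketch; the sample row mirrors the columns documented above, and `failed_jobs` is a hypothetical helper, not a PolarDB-X API.

```python
# Illustrative sketch: scan SHOW DDL RESULT rows (fetched as dicts via any
# MySQL-protocol client) and report jobs whose RESULT_TYPE is not SUCCESS.

rows = [
    {"JOB_ID": "1717761420575162368", "SCHEMA_NAME": "wumu",
     "OBJECT_NAME": "t1", "DDL_TYPE": "ALTER_TABLE",
     "RESULT_TYPE": "ERROR",
     "RESULT_CONTENT": "Failed to execute the DDL task. "
                       "Caused by: Out of range value for column 'b'"},
]

def failed_jobs(rows):
    # Keep (job id, table, reason) for every job that did not succeed.
    return [(r["JOB_ID"], r["OBJECT_NAME"], r["RESULT_CONTENT"])
            for r in rows if r["RESULT_TYPE"] != "SUCCESS"]

for job_id, table, reason in failed_jobs(rows):
    print(f"DDL job {job_id} on {table} failed: {reason}")
```

RESULT_CONTENT carries the root cause (here, an out-of-range value), so a script like this can distinguish data-dependent failures from infrastructure errors.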
Example: atomic rollback on constraint violation
This example shows how two-phase DDL prevents partial failure when a column type change is incompatible with existing data.
Step 1. Create a logical table t1 partitioned by key a across 16 shards.
```sql
CREATE TABLE `t1` (
  `a` int(11) DEFAULT NULL,
  `b` int(11) DEFAULT NULL
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4
PARTITION BY KEY(`a`)
PARTITIONS 16;
```

Step 2. Insert rows, including one that will violate a tinyint range constraint.

```sql
insert into t1 values(1,1),(2,2),(3,3),(4,4),(5,999999);
```

Step 3. Attempt to narrow column b from int to tinyint. The value 999999 exceeds the tinyint range, so the physical DDL fails on the shard holding that row. Because two-phase DDL is enabled, the change is rolled back on all shards.
```sql
ALTER TABLE t1 MODIFY COLUMN b tinyint;
```

Expected error:

```
ERROR 4700 (HY000): ERR-CODE: [TDDL-4700][ERR_SERVER] server error by Failed to execute the DDL task.
Caused by: ERR-CODE: [TDDL-4636][ERR_DDL_JOB_ERROR] failed to execute on group(WUMU_P00000_GROUP):
/* drds_two_phase_ddl(1717761420461916160)*/ALTER TABLE `t1_KdRs_00004` MODIFY COLUMN b tinyint,
Caused by: Data truncation: Out of range value for column 'b' at row 1.
```

Step 4. Verify that no DDL job is still running, and review the job history.
```sql
show ddl;
-- Empty set (0.01 sec)

show ddl result;
```

Expected output:
```
+---------------------+-------------+-------------+-------------+-------------+---------------------------------------------------------------------+
| JOB_ID              | SCHEMA_NAME | OBJECT_NAME | DDL_TYPE    | RESULT_TYPE | RESULT_CONTENT                                                      |
+---------------------+-------------+-------------+-------------+-------------+---------------------------------------------------------------------+
| 1717761420575162368 | wumu        | t1          | ALTER_TABLE | ERROR       | Failed to execute the DDL task. Caused by: ...Out of range value... |
+---------------------+-------------+-------------+-------------+-------------+---------------------------------------------------------------------+
```

Step 5. Confirm that all shards remain consistent with the original schema.
```sql
check table t1;
```

Expected output:
```
+---------------------------------+-------+----------+----------+
| TABLE                           | OP    | MSG_TYPE | MSG_TEXT |
+---------------------------------+-------+----------+----------+
| wumu.t1:Topology                | check | status   | OK       |
| wumu.t1:Columns                 | check | status   | OK       |
| t1.auto_shard_key_a:Local Index | check | status   | OK       |
+---------------------------------+-------+----------+----------+
```

```sql
show create table t1;
```

Expected output — column b remains int(11), unchanged:
```
+-------+-------------------------------------------------------------------------------------------+
| Table | Create Table                                                                              |
+-------+-------------------------------------------------------------------------------------------+
| t1    | CREATE TABLE `t1` (
  `a` int(11) DEFAULT NULL,
  `b` int(11) DEFAULT NULL,
  KEY `auto_shard_key_a` USING BTREE (`a`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4
PARTITION BY KEY(`a`)
PARTITIONS 16 |
+-------+-------------------------------------------------------------------------------------------+
```

What's next
Change the column types without locking tables — use Online Modify Column (OMC) when table locking is not acceptable