PolarDB:Archive partitioned tables in CSV format

Last Updated: Mar 28, 2026

Data Lifecycle Management (DLM) automatically archives cold data by transferring infrequently accessed partitions from PolarStore to Object Storage Service (OSS) in CSV format. This reduces storage costs while keeping archived data queryable.

Prerequisites

Before you begin, ensure that you have:

  • A PolarDB for MySQL 8.0.2 cluster with minor version 8.0.2.2.9 or later. To check your version, see Query the version number.

  • Cold data archiving enabled. If it is not enabled, the following error is reported when you create a DLM policy:

    ERROR 8158 (HY000): [Data Lifecycle Management] DLM storage engine is not support. The value of polar_dlm_storage_mode is OFF.

On clusters running minor version 8.0.2.2.11.1 or later, DLM does not record binary logs.

Limitations

  • DLM supports only partitioned tables that use RANGE COLUMNS partitioning without subpartitions.

  • DLM cannot be used on tables with a global secondary index (GSI).

  • DLM policies cannot be modified. To change a policy, drop the existing policy and create a new one.

  • Avoid Data Definition Language (DDL) operations on a table that has an active DLM policy, such as adding or removing columns or modifying column types. Such operations make the schema of the archived table inconsistent with the source table, which prevents subsequent archiving jobs from parsing the data correctly. To perform DDL operations, drop the DLM policy first. When you create a new policy afterward, use a different archived table name.

  • DLM policies are not shown in SHOW CREATE TABLE output. View all policies in the mysql.dlm_policies system table.

  • You can use the interval range partitioning feature together with DLM to automatically extend partitions while archiving data from older partitions to OSS.

    Interval range partitioning requires minor version 8.0.2.2.0 or later.
  • DLM settings can be specified only in CREATE TABLE or ALTER TABLE statements.

Usage notes

  • Archived data on OSS is read-only and has lower query performance than local data. Test query performance against your archived data before relying on it in production.

  • After a partition is archived to OSS, the data in that partition is read-only and DDL operations on the partitioned table are blocked.

  • Data transferred to OSS is not included in backups and does not support point-in-time restore (PITR).

Choose an archiving mode

DLM supports three archiving modes. Choose based on whether you need to retain and query the cold data.

  • TIER TO TABLE: Cold partitions are merged into a single OSS external table. The data is retained and remains queryable as a single table.

  • TIER TO PARTITION: Each cold partition is moved to OSS individually, becoming a CSV partition in the same table. The data is retained and remains queryable as part of a hybrid partitioned table.

  • TIER TO NONE: Cold partitions are deleted. The data is not retained and can no longer be queried.

Use TIER TO TABLE when you want a consolidated archive table with optional query acceleration via OSS_FILE_FILTER. Use TIER TO PARTITION when you need to keep cold data accessible as part of the original table structure. Use TIER TO NONE to discard data that no longer needs to be retained.

TIER TO PARTITION is in canary release. To use it, go to Quota Center, find the quota polardb_mysql_hybrid_partition, and click Request in the Actions column. This mode requires minor version 8.0.2.2.17 or later, and the total number of partitions in the table must not exceed 8,192.

Partition design guidance

Align your partition granularity with your DLM threshold to get the most predictable archiving behavior.

  • Partition by year when your data retention window spans multiple years and your PARTITIONS OVER threshold is low (for example, 3–5 partitions). One partition per year keeps partition counts manageable and archives in year-sized chunks.

  • Partition by month when you need finer-grained control — for example, to archive data that is 6 months old rather than 1–2 years old. Set your PARTITIONS OVER threshold to match the number of months you want to keep hot.

Combining interval range partitioning with DLM lets the system automatically create new partitions as data grows while archiving older ones, so you never need to manually manage partition boundaries.
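As an illustration of the month-based approach, the following sketch keeps roughly six months of data hot and archives older months to OSS. The orders table, its columns, and the policy name are hypothetical; the clause layout follows the syntax shown in the next section.

```sql
CREATE TABLE `orders` (
  `id` int DEFAULT NULL,
  `amount` decimal(10,2) DEFAULT NULL,
  `order_time` datetime NOT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
-- One partition per month; INTERVAL(MONTH, 1) creates new partitions automatically
PARTITION BY RANGE COLUMNS(order_time) INTERVAL(MONTH, 1)
(PARTITION p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB)
-- Archive cold months once the table holds more than six partitions
DLM ADD POLICY monthly_archive TIER TO TABLE ENGINE=CSV STORAGE=OSS READ ONLY
ON (PARTITIONS OVER 6);
```

With this layout, the PARTITIONS OVER threshold maps directly to the number of hot months you want to keep in PolarStore.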

Syntax

Create a DLM policy

Attach a DLM policy when creating or altering a table:

With CREATE TABLE:

CREATE TABLE [IF NOT EXISTS] tbl_name
    (create_definition,...)
    [table_options]
    [partition_options]
    [dlm_add_options]

dlm_add_options:
    DLM ADD
        [(dlm_policy_definition [, dlm_policy_definition] ...)]

dlm_policy_definition:
    POLICY policy_name
    [TIER TO TABLE | TIER TO PARTITION | TIER TO NONE]
    [ENGINE [=] engine_name]
    [STORAGE SCHEMA_NAME [=] storage_schema_name]
    [STORAGE TABLE_NAME [=] storage_table_name]
    [STORAGE [=] OSS]
    [READ ONLY]
    [COMMENT 'comment_string']
    [EXTRA_INFO 'extra_info']
    ON [(PARTITIONS OVER num)]

With ALTER TABLE:

ALTER TABLE tbl_name
    [alter_option [, alter_option] ...]
    [partition_options]
    [dlm_add_options]

dlm_add_options:
    DLM ADD
        [(dlm_policy_definition [, dlm_policy_definition] ...)]

dlm_policy_definition:
    POLICY policy_name
    [TIER TO TABLE | TIER TO PARTITION | TIER TO NONE]
    [ENGINE [=] engine_name]
    [STORAGE SCHEMA_NAME [=] storage_schema_name]
    [STORAGE TABLE_NAME [=] storage_table_name]
    [STORAGE [=] OSS]
    [READ ONLY]
    [COMMENT 'comment_string']
    [EXTRA_INFO 'extra_info']
    ON [(PARTITIONS OVER num)]

Policy parameters:

  • tbl_name (required): The table name.

  • policy_name (required): The policy name.

  • TIER TO TABLE: Archives cold partitions to a single OSS external table.

  • TIER TO PARTITION: Moves each cold partition to OSS individually. Requires minor version 8.0.2.2.17 or later (Enterprise Edition clusters require 8.0.2.2.25 or later). See the note in Choose an archiving mode for additional requirements.

  • TIER TO NONE: Deletes cold partitions without archiving.

  • ENGINE [=] engine_name (optional): The storage engine for archived data. Currently, only CSV is supported.

  • STORAGE SCHEMA_NAME [=] storage_schema_name (optional): The database where the archive table resides. Defaults to the current table's database.

  • STORAGE TABLE_NAME [=] storage_table_name (optional): The name of the archive table. Defaults to <current_table_name>_<policy_name>.

  • STORAGE [=] OSS (optional): Stores archived data in OSS. This is the default.

  • READ ONLY (optional): Makes archived data read-only. This is the default.

  • COMMENT 'comment_string' (optional): A comment for the policy.

  • EXTRA_INFO 'extra_info' (optional): Specifies OSS_FILE_FILTER settings for the destination OSS table. Format: {"oss_file_filter":"field_name[:filter_type][,field_name[:filter_type]]"}. The only supported filter_type is bloom. This setting takes effect only when the destination table does not yet exist: the system uses it to automatically create the FILE_FILTER property and generate filter data during archiving. If the destination table already exists, the existing file filter is used.

  • ON (PARTITIONS OVER num) (required): Triggers archiving when the number of partitions exceeds num.

Manage DLM policies

-- Enable a policy
ALTER TABLE table_name DLM ENABLE POLICY [(dlm_policy_name [, dlm_policy_name] ...)]

-- Disable a policy
ALTER TABLE table_name DLM DISABLE POLICY [(dlm_policy_name [, dlm_policy_name] ...)]

-- Drop a policy
ALTER TABLE table_name DLM DROP POLICY [(dlm_policy_name [, dlm_policy_name] ...)]

table_name is the table with the policy. dlm_policy_name is the policy to act on. Specify multiple policy names to act on several policies at once.
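For example, assuming a table orders carries two hypothetical policies named p_archive and p_purge, both can be disabled in one statement and re-enabled individually later:

```sql
-- Disable several policies at once by listing their names
ALTER TABLE orders DLM DISABLE POLICY (p_archive, p_purge);

-- Re-enable just one of them later
ALTER TABLE orders DLM ENABLE POLICY p_archive;
```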

Execute a DLM policy

Run a DLM policy on demand with a stored procedure call, or schedule it using a MySQL EVENT.

Run immediately:

-- Execute all DLM policies in the cluster
CALL dbms_dlm.execute_all_dlm_policies();

-- Execute policies on a specific table
CALL dbms_dlm.execute_table_dlm_policies('database_name', 'table_name');

Schedule with a MySQL EVENT:

Use the MySQL EVENT feature to run DLM policies on a schedule during your cluster's O&M window. This avoids impacting performance during peak hours.

CREATE
    EVENT
    [IF NOT EXISTS]
    event_name
    ON SCHEDULE schedule
    [COMMENT 'comment']
    DO event_body;

schedule: {
  EVERY interval
  [STARTS timestamp [+ INTERVAL interval] ...]
}

interval:
    quantity {YEAR | QUARTER | MONTH | DAY | HOUR | MINUTE |
              WEEK | SECOND | YEAR_MONTH | DAY_HOUR | DAY_MINUTE |
              DAY_SECOND | HOUR_MINUTE | HOUR_SECOND | MINUTE_SECOND}

event_body: {
      CALL dbms_dlm.execute_all_dlm_policies();
    | CALL dbms_dlm.execute_table_dlm_policies('database_name', 'table_name');
}
  • event_name (required): The EVENT name.

  • schedule (required): When and how often the EVENT runs.

  • comment (optional): A comment for the EVENT.

  • event_body (required): The DLM procedure call to execute. Use execute_all_dlm_policies() to run all cluster policies (create one EVENT per cluster). Use execute_table_dlm_policies() to run policies for a specific table (create one EVENT per table).

  • interval (required): The time interval between EVENT executions.

  • timestamp (required): The start time for the EVENT.

  • database_name (required): The database where the target table resides.

  • table_name (required): The table name.

For the full EVENT syntax reference, see the MySQL EVENT documentation.
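As a concrete sketch, the following EVENT runs all DLM policies once a week during an early-morning window. The event name dlm_weekly and the start timestamp are illustrative; note that MySQL executes events only when the event scheduler is enabled on the instance.

```sql
CREATE EVENT IF NOT EXISTS dlm_weekly
    ON SCHEDULE EVERY 1 WEEK
    STARTS '2026-04-05 02:00:00'
    COMMENT 'Weekly DLM archiving run during the O&M window'
    DO CALL dbms_dlm.execute_all_dlm_policies();
```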

Examples

Archive data to an OSS external table (TIER TO TABLE)

This example creates a sales partitioned table with both INTERVAL and DLM policies:

  • INTERVAL policy: Automatically creates a new partition with a one-year interval when inserted data exceeds the current partition range.

  • DLM policy (TIER TO TABLE): When the number of partitions exceeds three, cold partitions are archived to a single OSS external table named sales_history.

Step 1: Create the table with a DLM policy

Creating an INTERVAL RANGE partitioned table has prerequisites. See Interval range partitioning for details.

CREATE TABLE `sales` (
  `id` int DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime NOT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
PARTITION BY RANGE  COLUMNS(order_time) INTERVAL(YEAR, 1)
(PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = InnoDB)
DLM ADD POLICY test_policy TIER TO TABLE ENGINE=CSV STORAGE=OSS READ ONLY
STORAGE TABLE_NAME = 'sales_history' EXTRA_INFO '{"oss_file_filter":"id,name:bloom"}' ON (PARTITIONS OVER 3);

The policy test_policy archives cold partitions to sales_history in CSV format on OSS when the partition count exceeds three. If sales_history does not exist, the system creates it and adds an OSS_FILE_FILTER bloom filter on the id and name columns.

Verify the policy was created by querying mysql.dlm_policies. For the full table schema, see Table schema description.

mysql> SELECT * FROM mysql.dlm_policies\G

Expected output:

*************************** 1. row ***************************
                   Id: 3
         Table_schema: test
           Table_name: sales
          Policy_name: test_policy
          Policy_type: TABLE
         Archive_type: PARTITION COUNT
         Storage_mode: READ ONLY
       Storage_engine: CSV
        Storage_media: OSS
  Storage_schema_name: test
   Storage_table_name: sales_history
      Data_compressed: OFF
 Compressed_algorithm: NULL
              Enabled: ENABLED
      Priority_number: 10300
Tier_partition_number: 3
       Tier_condition: NULL
           Extra_info: {"oss_file_filter": "id,name:bloom,order_time"}
              Comment: NULL
1 row in set (0.03 sec)

The table currently has three partitions, so archiving is not yet triggered.

Step 2: Insert data to trigger new partitions

Insert 3,000 test records to push data beyond the current partition range. This triggers the INTERVAL policy to automatically create new partitions.

DROP PROCEDURE IF EXISTS proc_batch_insert;
delimiter $$
CREATE PROCEDURE proc_batch_insert(IN begin INT, IN end INT, IN name VARCHAR(20))
BEGIN
SET @insert_stmt = concat('INSERT INTO ', name, ' VALUES(? , ?, ?);');
PREPARE stmt from @insert_stmt;
WHILE begin <= end DO
SET @ID1 = begin;
SET @NAME = CONCAT(begin+begin*281313, '@stiven');
SET @TIME = from_days(begin + 737600);
EXECUTE stmt using @ID1, @NAME, @TIME;
SET begin = begin + 1;
END WHILE;
END;
$$
delimiter ;
CALL proc_batch_insert(1, 3000, 'sales');

After insertion, the INTERVAL policy creates new partitions, bringing the total above three:

mysql> SHOW CREATE TABLE sales\G
*************************** 1. row ***************************
      Table: sales
Create Table: CREATE TABLE `sales` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime NOT NULL,
   PRIMARY KEY (`order_time`)) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50500 PARTITION BY RANGE  COLUMNS(order_time) */
/*!99990 800020200 INTERVAL(YEAR, 1) */
/*!50500 (PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = InnoDB,
PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = InnoDB,
PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20230101000000 VALUES LESS THAN ('2023-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20240101000000 VALUES LESS THAN ('2024-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20250101000000 VALUES LESS THAN ('2025-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20270101000000 VALUES LESS THAN ('2027-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20280101000000 VALUES LESS THAN ('2028-01-01 00:00:00') ENGINE = InnoDB) */
1 row in set (0.03 sec)

The partition count exceeds three, so the DLM policy can now run.

Step 3: Execute the DLM policy

Run the policy immediately, or schedule it to run during your O&M window. The following example schedules it to run daily at 1:00 AM starting October 11, 2022:

CREATE EVENT dlm_system_base_event
       ON SCHEDULE EVERY 1 DAY
    STARTS '2022-10-11 01:00:00'
    DO CALL dbms_dlm.execute_all_dlm_policies();

Step 4: Verify the result

After the policy runs, check the sales table schema:

mysql> SHOW CREATE TABLE sales\G
*************************** 1. row ***************************
       Table: sales
Create Table: CREATE TABLE `sales` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime NOT NULL,
   PRIMARY KEY (`order_time`)) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50500 PARTITION BY RANGE  COLUMNS(order_time) */
/*!99990 800020200 INTERVAL(YEAR, 1) */
/*!50500 (PARTITION _p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20270101000000 VALUES LESS THAN ('2027-01-01 00:00:00') ENGINE = InnoDB,
PARTITION _p20280101000000 VALUES LESS THAN ('2028-01-01 00:00:00') ENGINE = InnoDB) */
1 row in set (0.03 sec)

Only the three most recent partitions remain. Check the archiving progress in mysql.dlm_progress:

mysql> SELECT * FROM mysql.dlm_progress\G
*************************** 1. row ***************************
                  Id: 1
        Table_schema: test
          Table_name: sales
         Policy_name: test_policy
         Policy_type: TABLE
      Archive_option: PARTITIONS OVER 3
      Storage_engine: CSV
       Storage_media: OSS
     Data_compressed: OFF
Compressed_algorithm: NULL
  Archive_partitions: p20200101000000, p20210101000000, p20220101000000, _p20230101000000, _p20240101000000, _p20250101000000
       Archive_stage: ARCHIVE_COMPLETE
  Archive_percentage: 0
  Archived_file_info: null
          Start_time: 2024-07-26 17:56:20
            End_time: 2024-07-26 17:56:50
          Extra_info: null
1 row in set (0.00 sec)

Six partitions — p20200101000000, p20210101000000, p20220101000000, _p20230101000000, _p20240101000000, and _p20250101000000 — have been transferred to the OSS external table.

Check the schema of the OSS external table:

mysql> SHOW CREATE TABLE sales_history\G
*************************** 1. row ***************************
       Table: sales_history
Create Table: CREATE TABLE `sales_history` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime DEFAULT NULL,
   PRIMARY KEY (`order_time`)
) /*!99990 800020213 STORAGE OSS */ ENGINE=CSV DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci /*!99990 800020204 NULL_MARKER='NULL' */ /*!99990 800020223 OSS META=1 */ /*!99990 800020224 OSS_FILE_FILTER='id,name:bloom,order_time' */
1 row in set (0.15 sec)

sales_history is a CSV table stored in OSS, queryable the same way as a local table. The id, name, and order_time columns (the partition key) all have OSS_FILE_FILTER entries.

Verify the data counts add up:

SELECT COUNT(*) FROM sales;
+----------+
| count(*) |
+----------+
|      984 |
+----------+
1 row in set (0.01 sec)

SELECT COUNT(*) FROM sales_history;
+----------+
| count(*) |
+----------+
|     2016 |
+----------+
1 row in set (0.57 sec)

The totals sum to 3,000, matching the number of records inserted.

Query the OSS external table with OSS_FILE_FILTER (requires the OSS_FILE_FILTER switch to be enabled):

mysql> EXPLAIN SELECT * FROM sales_history WHERE id = 9;
+----+-------------+---------------+------------+------+---------------+------+---------+------+------+----------+-----------------------------------------------------------------------------+
| id | select_type | table         | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                                                       |
+----+-------------+---------------+------------+------+---------------+------+---------+------+------+----------+-----------------------------------------------------------------------------+
|  1 | SIMPLE      | sales_history | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 2016 |    10.00 | Using where; With pushed engine condition (`test`.`sales_history`.`id` = 9) |
+----+-------------+---------------+------------+------+---------------+------+---------+------+------+----------+-----------------------------------------------------------------------------+
1 row in set, 1 warning (0.59 sec)

mysql> SELECT * FROM sales_history WHERE id = 9;
+------+----------------+---------------------+
| id   | name           | order_time          |
+------+----------------+---------------------+
|    9 | 2531826@stiven | 2019-07-04 00:00:00 |
+------+----------------+---------------------+
1 row in set (0.19 sec)

Archive partitions to OSS (TIER TO PARTITION)

This example uses TIER TO PARTITION, which moves each cold partition directly to OSS. The sales table remains a single hybrid partitioned table — older partitions use the CSV engine on OSS, while newer partitions remain in InnoDB.

Step 1: Create the table with a DLM policy

CREATE TABLE `sales` (
  `id` int DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime NOT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
PARTITION BY RANGE  COLUMNS(order_time) INTERVAL(YEAR, 1)
(PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = InnoDB)
DLM ADD POLICY policy_part2part TIER TO PARTITION ENGINE=CSV STORAGE=OSS READ ONLY ON (PARTITIONS OVER 3);

The policy policy_part2part moves old partitions to OSS when the partition count exceeds three.

Verify the policy:

SELECT * FROM mysql.dlm_policies\G
*************************** 1. row ***************************
                   Id: 2
         Table_schema: test
           Table_name: sales
          Policy_name: policy_part2part
          Policy_type: PARTITION
         Archive_type: PARTITION COUNT
         Storage_mode: READ ONLY
       Storage_engine: CSV
        Storage_media: OSS
  Storage_schema_name: NULL
   Storage_table_name: NULL
      Data_compressed: OFF
 Compressed_algorithm: NULL
              Enabled: ENABLED
      Priority_number: 10300
Tier_partition_number: 3
       Tier_condition: NULL
           Extra_info: null
              Comment: NULL
1 row in set (0.03 sec)

Step 2: Insert data to trigger new partitions

CALL proc_batch_insert(1, 3000, 'sales');

Expected output:

Query OK, 1 row affected, 1 warning (0.99 sec)

Verify the new partitions:

SHOW CREATE TABLE sales \G
*************************** 1. row ***************************
       Table: sales
Create Table: CREATE TABLE `sales` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime DEFAULT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50500 PARTITION BY RANGE  COLUMNS(order_time) */ /*!99990 800020200 INTERVAL(YEAR, 1) */
/*!50500 (PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20230101000000 VALUES LESS THAN ('2023-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20240101000000 VALUES LESS THAN ('2024-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20250101000000 VALUES LESS THAN ('2025-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20270101000000 VALUES LESS THAN ('2027-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20280101000000 VALUES LESS THAN ('2028-01-01 00:00:00') ENGINE = InnoDB) */
1 row in set (0.03 sec)

Step 3: Execute the DLM policy

CALL dbms_dlm.execute_all_dlm_policies();

Step 4: Verify the result

Check the progress record:

SELECT * FROM mysql.dlm_progress \G
*************************** 1. row ***************************
                  Id: 4
        Table_schema: test
          Table_name: sales
         Policy_name: policy_part2part
         Policy_type: PARTITION
      Archive_option: PARTITIONS OVER 3
      Storage_engine: CSV
       Storage_media: OSS
     Data_compressed: OFF
Compressed_algorithm: NULL
  Archive_partitions: p20200101000000, p20210101000000, p20220101000000, _p20230101000000, _p20240101000000, _p20250101000000
       Archive_stage: ARCHIVE_COMPLETE
  Archive_percentage: 100
  Archived_file_info: null
          Start_time: 2023-09-11 18:04:39
            End_time: 2023-09-11 18:04:40
          Extra_info: null
1 row in set (0.02 sec)

Check the updated table schema:

SHOW CREATE TABLE sales \G
*************************** 1. row ***************************
       Table: sales
Create Table: CREATE TABLE `sales` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime DEFAULT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci CONNECTION='default_oss_server'
/*!99990 800020205 PARTITION BY RANGE  COLUMNS(order_time) */ /*!99990 800020200 INTERVAL(YEAR, 1) */
/*!99990 800020205 (PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = CSV,
 PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = CSV,
 PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = CSV,
 PARTITION _p20230101000000 VALUES LESS THAN ('2023-01-01 00:00:00') ENGINE = CSV,
 PARTITION _p20240101000000 VALUES LESS THAN ('2024-01-01 00:00:00') ENGINE = CSV,
 PARTITION _p20250101000000 VALUES LESS THAN ('2025-01-01 00:00:00') ENGINE = CSV,
 PARTITION _p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20270101000000 VALUES LESS THAN ('2027-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20280101000000 VALUES LESS THAN ('2028-01-01 00:00:00') ENGINE = InnoDB) */
1 row in set (0.03 sec)

Partitions p20200101000000 through _p20250101000000 now use the CSV engine on OSS. The three most recent partitions (_p20260101000000, _p20270101000000, _p20280101000000) remain in InnoDB. The sales table is now a hybrid partitioned table. For details on querying hybrid partitioned tables, see Query a hybrid partition.

Delete cold data (TIER TO NONE)

This example uses TIER TO NONE to permanently delete cold partitions when the partition count exceeds three.

Step 1: Create the table with a DLM policy

CREATE TABLE `sales` (
  `id` int DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime DEFAULT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
PARTITION BY RANGE  COLUMNS(order_time) INTERVAL(YEAR, 1)
(PARTITION p20200101000000 VALUES LESS THAN ('2020-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20210101000000 VALUES LESS THAN ('2021-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION p20220101000000 VALUES LESS THAN ('2022-01-01 00:00:00') ENGINE = InnoDB)
DLM ADD POLICY test_policy TIER TO NONE ON (PARTITIONS OVER 3);

Verify the policy:

SELECT * FROM mysql.dlm_policies\G
*************************** 1. row ***************************
                   Id: 4
         Table_schema: test
           Table_name: sales
          Policy_name: test_policy
          Policy_type: NONE
         Archive_type: PARTITION COUNT
         Storage_mode: NULL
       Storage_engine: NULL
        Storage_media: NULL
  Storage_schema_name: NULL
   Storage_table_name: NULL
      Data_compressed: OFF
 Compressed_algorithm: NULL
              Enabled: ENABLED
      Priority_number: 50000
Tier_partition_number: 3
       Tier_condition: NULL
           Extra_info: null
              Comment: NULL
1 row in set (0.01 sec)

Step 2: Insert data to trigger new partitions

CALL proc_batch_insert(1, 3000, 'sales');
Query OK, 1 row affected, 1 warning (0.99 sec)

Step 3: Execute the DLM policy

CALL dbms_dlm.execute_all_dlm_policies();

Step 4: Verify the result

SELECT * FROM mysql.dlm_progress \G
*************************** 1. row ***************************
                  Id: 1
        Table_schema: test
          Table_name: sales
         Policy_name: test_policy
         Policy_type: NONE
      Archive_option: PARTITIONS OVER 3
      Storage_engine: NULL
       Storage_media: NULL
     Data_compressed: OFF
Compressed_algorithm: NULL
  Archive_partitions: p20200101000000, p20210101000000, p20220101000000, _p20230101000000, _p20240101000000, _p20250101000000
       Archive_stage: ARCHIVE_COMPLETE
  Archive_percentage: 100
  Archived_file_info: null
          Start_time: 2023-01-09 17:31:24
            End_time: 2023-01-09 17:31:24
          Extra_info: null
1 row in set (0.03 sec)

Partitions p20200101000000 through _p20250101000000 have been deleted. The remaining table schema:

SHOW CREATE TABLE sales \G
*************************** 1. row ***************************
       Table: sales
Create Table: CREATE TABLE `sales` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(20) DEFAULT NULL,
  `order_time` datetime DEFAULT NULL,
   PRIMARY KEY (`order_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
/*!50500 PARTITION BY RANGE  COLUMNS(order_time) */ /*!99990 800020200 INTERVAL(YEAR, 1) */
/*!50500 (PARTITION _p20260101000000 VALUES LESS THAN ('2026-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20270101000000 VALUES LESS THAN ('2027-01-01 00:00:00') ENGINE = InnoDB,
 PARTITION _p20280101000000 VALUES LESS THAN ('2028-01-01 00:00:00') ENGINE = InnoDB) */
1 row in set (0.02 sec)

Manage policies with ALTER TABLE

-- Add a DLM policy to an existing table
ALTER TABLE t DLM ADD POLICY test_policy TIER TO TABLE ENGINE=CSV STORAGE=OSS READ ONLY
STORAGE TABLE_NAME = 'sales_history' ON (PARTITIONS OVER 3);

-- Enable a policy
ALTER TABLE t DLM ENABLE POLICY test_policy;

-- Disable a policy
ALTER TABLE t DLM DISABLE POLICY test_policy;

-- Drop a policy
ALTER TABLE t DLM DROP POLICY test_policy;

Handle execution errors

DLM policy failures are recorded in mysql.dlm_progress. Query for errors with:

SELECT * FROM mysql.dlm_progress WHERE Archive_stage = 'ARCHIVE_ERROR';

Check the Extra_info field for the error details. After identifying the cause, either delete the error record or update its Archive_stage to ARCHIVE_COMPLETE. Then run the policy manually:

CALL dbms_dlm.execute_all_dlm_policies();

Or wait for the next scheduled execution cycle.

When a policy record has the status ARCHIVE_ERROR, the policy does not run again automatically. This prevents unsafe retries until you have confirmed the cause of the failure and updated the record.
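Putting these steps together, a hypothetical recovery session might look like the following. The Id value 5 is illustrative; substitute the Id returned by the first query.

```sql
-- 1. Locate failed runs and inspect the error details
SELECT Id, Table_schema, Table_name, Extra_info
FROM mysql.dlm_progress
WHERE Archive_stage = 'ARCHIVE_ERROR';

-- 2. After resolving the cause, clear the error marker
--    (alternatively, delete the record instead)
UPDATE mysql.dlm_progress SET Archive_stage = 'ARCHIVE_COMPLETE' WHERE Id = 5;

-- 3. Re-run the policy manually
CALL dbms_dlm.execute_all_dlm_policies();
```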