Data Lake Formation: Storage optimization

Last Updated: Feb 10, 2026

Data Lake Formation (DLF) storage optimization provides features such as table-level adaptive compaction, expired snapshot cleanup, partition lifecycle management, and orphan file cleanup. These features simplify the use and maintenance of Paimon tables and improve computing and storage efficiency. This topic describes the intelligent storage optimization policies that DLF runs in the background and their execution mechanisms.

Important

Iceberg tables do not automatically reclaim storage. To prevent storage costs from increasing, you must manually clean up expired snapshots and orphan files. For more information, see Iceberg Table Storage Administration.

Storage optimization strategies

Compaction

Description: The compaction feature merges small files into larger ones. This reduces the number of files, which lowers metadata management overhead and file lookup costs during queries. This improves the query performance and efficiency of Paimon tables.

DLF execution mechanism: DLF automatically triggers compaction when you commit data writes.

Expired snapshot cleanup

Description: As long as a snapshot exists, the data files it references cannot be deleted. This ensures that the historical state of the data remains readable. As new snapshots are created, the storage space consumed by historical data increases. You must remove old snapshots to release the space occupied by inactive historical data. This helps you manage and free up storage resources.

DLF execution mechanism: DLF automatically triggers snapshot cleanup when a DLF storage optimization job runs. The default expiration time for snapshots is 1 hour. You can adjust the expiration time using Paimon table parameters. For more information, see Clean up expired data.

Partition lifecycle management

Description: Many business scenarios require access only to recent data. In these cases, you can partition data by time and set a partition expiration time to automatically delete old historical partitions. This frees up storage space. You can also configure storage tiering to move partition data that is accessed less frequently from high-performance storage, such as Standard, to low-cost storage, such as Infrequent Access, Archive, or Cold Archive. This reduces storage costs while meeting business needs.

DLF execution mechanism: You can configure the expiration time using Paimon table parameters. For more information, see Set partition expiration time. After you configure the parameters, the process is automatically triggered when a DLF storage optimization job runs. You can also use the Intelligent storage tiering feature to automatically move partition data that meets the specified criteria to a storage class, such as Standard, Infrequent Access, Archive, or Cold Archive. You can also manually change the storage class on the table details page. On the Storage overview page, you can view the storage tiering distribution for data catalogs, databases, and tables.

Orphan file cleanup

Description: Because of job errors, restarts, or other issues, some uncommitted temporary files may remain in the Paimon table directory. These orphan files are not referenced by any snapshot and cannot be deleted by the snapshot expiration mechanism. These files need to be cleaned up periodically.

DLF execution mechanism: The default retention period for orphan files is 7 days. Orphan files older than this period are considered expired and are automatically cleaned up. DLF triggers a cleanup task every 7 days.

Enable or disable intelligent storage optimization

Note

The Storage Optimization tab is displayed only for Paimon tables.

  1. Log on to the Data Lake Formation console.

  2. On the Data Catalog list page, click the catalog name.

  3. On the Databases tab, click the name of the destination database to view the data tables.

  4. In the Table List, click a table to view its column information.

  5. Click the Storage Optimization tab. The intelligent storage optimization switch is enabled by default. Click the switch to disable it.

View and configure storage optimization strategies

Compaction

On the Storage Optimization tab, click Compaction. You can view the execution status of small file merges, Rescale records, and execution history.

Edit the policy pattern as needed:

Dynamic resource pattern (Recommended)

The system automatically scales computing resources based on the real-time load. This pattern does not require manual capacity planning and is suitable for scenarios with fluctuating traffic.
Three configuration preferences are supported:

  • Balanced resource and latency: Strikes a balance between merge speed and resource consumption (default).

  • Latency first: Allocates more resources to complete merges faster and reduce data visibility latency.

  • Resource first: Limits resource usage to reduce computing costs, which may result in longer merge times.

    Dynamic resource allocation and scaling policy

    In the dynamic resource pattern, the system automatically allocates computing resources for primary key tables based on real-time load and data features. The amount of allocated resources is determined by the following core factors, and the system automatically scales when necessary.

    Write throughput
    Write traffic is the main driver for resource allocation. Higher write traffic results in more computing resources being allocated to ensure stable data ingestion.

    Active concurrency
    The number of active partitions and buckets directly determines the resource requirements. The more active partitions and buckets there are, the more parallel processing resources the system allocates.

    Data volume for advanced features
    When a table has deletion vectors or the lookup changelog producer enabled, the computational complexity increases. In this case, the larger the total file size in active partitions, the more resources the system allocates.

    Data row features
    Extreme sizes of single rows increase processing overhead. Whether a single row is too small (contains many null values) or too large (contains very long string fields), the system increases resource allocation to maintain processing performance.

    Resource optimization policy for small tables

    For small-scale data tables, the system uses a sharing mechanism to reduce resource overhead:

    When a table is configured for adaptive bucketing (bucket = -2) and "Latency first" mode is not enabled, the system merges the optimization tasks of multiple small tables into a single job. This multi-table sharing mechanism significantly reduces overall resource usage.
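
    The following is a minimal sketch of setting adaptive bucketing as a Paimon table option, assuming a Spark environment already configured with the DLF Paimon catalog; the database and table names are placeholders:

        from pyspark.sql import SparkSession

        # Assumes an existing Spark environment configured with the DLF Paimon catalog.
        spark = SparkSession.builder.getOrCreate()

        # 'bucket' = '-2' is the adaptive bucketing setting described above; with it,
        # DLF can merge the optimization tasks of multiple small tables into one job
        # (unless the "Latency first" preference is selected).
        spark.sql(
            "ALTER TABLE my_db.my_small_table SET TBLPROPERTIES ('bucket' = '-2')"
        )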

    Resource monitoring and troubleshooting

    To analyze resource consumption, go to Data Catalog > Resource Overview > Resource Request Metric Details and view the CU * Hours Top Tables list.

    Note

    If a single table shows abnormally high resource usage, the cause is often a partition granularity that is too fine. A fine-grained partition policy can cause a surge in the number of concurrently active partitions, forcing the system to allocate a large amount of resources. In this case, check and optimize the partition policy.

Fixed resource pattern

Manually specify the amount of computing resources for compaction. This pattern is suitable for scenarios with stable traffic or strict cost control requirements.

  • Configuration requirements: The computing unit (CU) configuration must be at least 2 CUs.

  • Parameter settings: You can customize the compaction trigger interval and the number of buckets.

View execution status

You can view the optimization execution status of the current table and configure custom alert subscriptions in Cloud Monitor. For more information about metrics and configuration steps, see Lakehouse table optimization monitoring.

View Rescale records

This section records historical events of bucket rescaling for a data table or specific partitions. It reflects changes in the underlying physical storage structure of the table. The rescale mechanism is mainly used to address performance issues caused by changes in data volume. You can use the Rescale records to determine whether a table is not entering the compaction process because it is currently being rescaled.

View execution history

You can view the execution history of small file merges for the current table. This history shows how the system processes fragmented files to optimize read performance and storage space. Use these records to:

  1. Confirm task execution: Ensure that background merge tasks are running correctly to prevent an infinite buildup of small files.

  2. Evaluate compaction efficiency: Compare the number of files and their sizes before and after merging to determine whether the current compaction strategy is appropriate.

Expired snapshot cleanup

On the Storage Optimization tab, click Snapshot Expire. You can configure snapshot cleanup rules and view the results.

  • Configure snapshot cleanup rules

    Click Edit, set Snapshot Retention Period (default: 1 hour), and then click Save to complete the configuration. The retention period can also be set as a Paimon table parameter, as shown in the sketch after this list.

  • View snapshot cleanup results

    • Current Number of Snapshots: Displays the current number of remaining snapshots in real time.

    • Earliest Snapshot Information: Displays the details of the earliest table snapshot, including the snapshot ID, commit time, commit type, total table rows, and rows added in this commit.
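
The Snapshot Retention Period set in the console corresponds to Paimon's snapshot retention table parameter. The following is a minimal sketch of setting it directly, assuming a Spark environment configured with the DLF Paimon catalog and a placeholder table name:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Retain snapshots for 2 hours instead of the default 1 hour.
    # 'snapshot.time-retained' is Paimon's snapshot retention option.
    spark.sql(
        "ALTER TABLE my_db.my_table SET TBLPROPERTIES "
        "('snapshot.time-retained' = '2 h')"
    )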

Partition lifecycle management

On the Storage Optimization tab, click Partition LifeCycle. You can configure partition cleanup rules, view partition cleanup results, and configure storage tiering.

Configure partition cleanup rules

  1. Click the toggle on the right side of Enable Partition Cleanup to enable partition cleanup.

  2. After you enable partition cleanup, configure the following rules as needed. Click Save to complete the configuration.

    You can complete the configuration by setting the corresponding table option key-value pairs, as shown in the sketch that follows this procedure.

    Expiration Policy (partition.expiration-strategy)

    You can select one of the following expiration policies:

    • Last access time (access-time): Determines expiration based on the last access time of the partition data.

    • Partition value (values-time): You can configure the partition timestamp format and partition field pattern.

      • Timestamp format (partition.timestamp-formatter): You can configure formats such as yyyy-MM-dd, yyyyMMdd, dd/MM/yyyy, and dd.MM.yyyy.

      • Timestamp pattern (partition.timestamp-pattern): By default, the first partition field is used. You can configure patterns such as $dt or $year-$month-$day.

    • Last update time (update-time): Determines expiration based on the last update time of the partition data at the finest granularity.

    Partition Retention Period (partition.expiration-time)

    Unit: days. You can configure a value such as 30d. The maximum value is 999,999 days. The start time for the retention period is determined by the selected expiration policy.

  3. (Optional) After saving, you can also click Edit next to Rule Configuration to make changes.

Note

If you want to retain partitions permanently, do not configure partition expiration rules. By default, the system does not automatically clean up partition data.
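
The following is a minimal sketch of the table-option equivalent of the rule configuration above, assuming a Spark environment configured with the DLF Paimon catalog; the database, table, and partition field names are placeholders:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Expire partitions 30 days after the date encoded in the partition value,
    # where the partition field 'dt' is formatted as yyyy-MM-dd.
    spark.sql("""
        ALTER TABLE my_db.my_table SET TBLPROPERTIES (
            'partition.expiration-strategy' = 'values-time',
            'partition.expiration-time'     = '30 d',
            'partition.timestamp-formatter' = 'yyyy-MM-dd',
            'partition.timestamp-pattern'   = '$dt'
        )
    """)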

View partition cleanup results

Click View Partitions to view the partition list for the current table. The list includes the partition name, number of rows (physical), number of referenced files, total file size, creator, storage class, last updater, creation time, last update time, and operations.

Configure storage tiering

Intelligent Tiering

When enabled, the system automatically tiers the storage for all tables in the catalog based on the lifecycle rules you configure. Specify the tiering policy and rules as needed.

Note
  • If intelligent tiering is enabled at the catalog level, it is also enabled by default for tables (inherited from the catalog). You can modify the configuration at the table level. If you modify the rules at the table level, the "inherited from Catalog" status is no longer displayed.

  • If intelligent tiering is not enabled at the catalog level, you can still enable and modify it at the table level.

Tiering Strategy

  • Last Access Time: The rule is evaluated based on the last access time of the table or partition data.

  • Last Update Time: The rule is evaluated based on the last update time of the table or partition data.

Tiering Rule

The minimum storage duration requirements vary for different storage classes.

You can configure the following tiering rules:

  • Transition to Infrequent Access

    • Inactivity Threshold: Custom. The default is 30 days.

      Data is automatically transitioned to Infrequent Access (IA) storage when the time since it was last accessed exceeds this threshold. IA storage can still be accessed by the compute engine, but with reduced performance.

    • Convert to Standard Storage Upon Access: If you select this option, the system automatically transitions the partition or non-partitioned table to the Standard storage class when it is accessed.

      Note

      This feature is supported only when the tiering strategy is based on "Last Access Time".

  • Transition to Archive

    • Number of Days: Customizable. The default value is 60 days.

      Data is automatically transitioned to Archive storage when the time since it was last accessed exceeds this threshold. Data in Archive storage cannot be accessed by the compute engine.

    • Convert to Standard Storage Upon Access: If you select this option, the system automatically transitions the partition or non-partitioned table to the Standard storage class when it is accessed.

      Note

      This feature is supported only when the tiering strategy is based on "Last Access Time".

  • Transition to Cold Archive

    • Inactivity Threshold: Custom. The default is 180 days.

      Data is automatically transitioned to Cold Archive storage when the time since it was last accessed exceeds this threshold. Data in Cold Archive storage cannot be accessed by the compute engine.

Note

In addition to using the Intelligent storage tiering feature, you can also manually change the storage class on the table details page. You can also view the storage tiering distribution for data catalogs, databases, and tables on the Storage overview page.

Orphan file cleanup

On the Storage Optimization tab, click Orphan File Remove. You can view the orphan file cleanup rules. By default, the retention period for orphan files is 7 days, measured from the file write time. Orphan files older than this period are considered expired and are automatically cleaned up by the system.
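
DLF runs this cleanup automatically, so no action is required. If you need to trigger a cleanup yourself from a compute engine that exposes Paimon's system procedures, the following is a minimal sketch; the table name and the older_than timestamp are placeholders for illustration:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Paimon's remove_orphan_files procedure deletes files that are not referenced
    # by any snapshot and are older than the given timestamp. Keep the threshold
    # conservative so that files still being written by in-flight jobs are kept.
    spark.sql(
        "CALL sys.remove_orphan_files("
        "table => 'my_db.my_table', "
        "older_than => '2026-02-01 00:00:00')"
    )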

Manually change the storage class

  1. In the Databases list, click a database name to view the table list.

  2. In the Tables list, click a table name to view the table columns.

  3. Click the Table Details tab. You can manually change the storage class for partitioned and non-partitioned tables.

    Partitioned tables

    On the Partitions tab, you can change the storage class for partitions of different storage classes.

    • Partitions in Standard, Infrequent Access, or Archive storage classes:

      In the Actions column, click Modify Storage Class. You can change the storage class to any class other than the current one.

    • Partitions in the Cold Archive storage class:

      You must first restore the data. After the restoration is complete and the status changes to restored, you can change the storage class. Perform the following steps:

      1. Click Restore. Configure the Restored state duration. You can select partitions for batch restoration.

        • Value range: 1 to 365. The unit is days.

        • Default value: 1 day.

      2. When the data enters the restored state, in the Actions column, click Modify Storage Class to change the storage class.

    Non-partitioned tables

    In the Basic Information section of the table, you can modify the Storage Class.

    • Standard, Infrequent Access, or Archive storage classes

      Click Edit next to Storage Class. You can change the storage class to any class other than the current one.

    • Cold Archive storage class

      You must first restore the data. After the restoration is complete and the status changes to restored, you can change the storage class. Perform the following steps:

      1. Click Restore next to Storage Class. Configure the Restored state duration.

        • Value range: 1 to 365. The unit is days.

        • Default value: 1 day.

      2. When the Storage Class changes to Cold Archive (Restored), click Edit next to Storage Class. You can then change it to any other storage class.

    Note
    • Restoration time: The time required to restore an object. The Cold Archive storage class supports only the Standard restoration priority, and the restoration process takes 2 to 5 hours.

    • Restored state start time: The time at which the first Cold Archive object in a partition enters the restored state after the restoration operation is complete.

    • Restored state duration: The validity period for which data remains in the restored state after the first Cold Archive object in a partition is restored. After all objects in the partition are restored, you can read, write, or change the storage class of the partition. When the restored state duration ends, the data in the partition returns to the Cold Archive state and cannot be accessed directly. To perform operations on the data, you must restore it again.

    Restoration procedure

    1. Initially, the object is in the frozen state.

    2. After you submit a restore request, the object enters the restoring state. The actual restoration time may vary.

    3. After the server completes the restoration task, the object enters the restored state. For table-level storage tiering, the partition can be accessed normally after all objects within it are restored.

      You can extend the duration of the restored state by adjusting the partition's restored state duration. However, the total duration cannot exceed the maximum limit for that storage class.

    4. After the restored state duration ends, the object returns to the frozen state without changing its original storage class. To access the data again, you must submit a new restore request and wait for the restoration to complete.