This topic describes the limits of Data Lake Formation (DLF). When you use the DLF console or call API operations, make sure that your usage stays within these limits. Otherwise, an error is returned.
Metadata
| Item | Quota |
| --- | --- |
| Queries on a single table per second | 500 |
| Number of partitions in a single table | 1,000,000 |
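When a client can burst above the 500 queries-per-second quota on a single table, a common mitigation is client-side throttling. The sketch below is a minimal token-bucket limiter; the class and its names are illustrative and are not part of any DLF SDK.

```python
import time

# Hypothetical client-side throttle sized to the 500 QPS per-table
# metadata quota. This is an illustration, not a DLF SDK API.
class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=500, capacity=500)
# In a tight loop of 1,000 attempts, roughly the first 500 pass;
# the rest should be retried after a short delay.
allowed = sum(1 for _ in range(1000) if bucket.try_acquire())
```

Requests rejected by the bucket can be queued and retried instead of being sent to DLF, where they would fail with a throttling error.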
Preprocessing of data import
| Item | Quota |
| --- | --- |
| Number of data import tasks per region | 1,000 |
| Maximum number of resources for each data import task, measured in compute units (CUs) | 100 |
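The two quotas above can be combined into a simple client-side sanity check before submitting work: split the total CU demand into tasks of at most 100 CUs each, and verify that the resulting task count fits within the per-region quota. The function and constants below are illustrative, not part of DLF.

```python
MAX_CUS_PER_TASK = 100       # quota: resources per data import task
MAX_TASKS_PER_REGION = 1000  # quota: data import tasks per region

def plan_import_tasks(total_cus):
    """Split a workload into tasks that each fit the per-task CU quota."""
    tasks = []
    remaining = total_cus
    while remaining > 0:
        size = min(remaining, MAX_CUS_PER_TASK)
        tasks.append(size)
        remaining -= size
    if len(tasks) > MAX_TASKS_PER_REGION:
        raise ValueError("workload exceeds the per-region task quota")
    return tasks

plan = plan_import_tasks(250)  # splits into [100, 100, 50]
```

A workload larger than 100,000 CUs (1,000 tasks x 100 CUs) cannot fit in one region under these quotas and raises an error in this sketch.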
Compatibility and restrictions
The following Hive-related features are not supported. We recommend that you use the latest Apache Hudi, Apache Iceberg, or Delta Lake instead.
- Hive DB Lock Manager API
- Hive DB Transaction Manager API
- Hive constraints, such as PRIMARY KEY and FOREIGN KEY
SQL standards-based authorization in Hive is not supported. This authorization mode was introduced in an earlier version of Hive and is no longer maintained by the open source community. We recommend that you use the data permission management feature of DLF instead.
Metastore listeners are not supported. To monitor and audit metadata operations, we recommend that you call Alibaba Cloud ActionTrail API operations instead.
The Live Long and Process (LLAP) functionality for Hive is not supported.
We recommend that you use engines such as Presto and Spark instead.
Currently, DLF does not support self-managed Hadoop clusters. To use data lakes, use Alibaba Cloud E-MapReduce clusters instead.