This topic describes the release notes for AnalyticDB for MySQL in 2024 and provides links to the relevant references.
Usage notes
Take note of the following items during minor version updates of AnalyticDB for MySQL clusters:
For AnalyticDB for MySQL clusters in reserved mode for Cluster Edition, and for clusters in elastic mode for Cluster Edition that have 32 cores or more, data read and write operations are not interrupted during engine version updates. Within the 5 minutes before the update is complete, queries may be briefly disconnected.
For AnalyticDB for MySQL clusters in elastic mode for Cluster Edition that have 8 or 16 cores, data write operations may be interrupted for 30 minutes during engine version updates. Within the 5 minutes before the update is complete, queries may be briefly disconnected.
Minor version updates of AnalyticDB for MySQL clusters do not affect database access, account management, database management, or IP address whitelist settings.
During a minor version update of an AnalyticDB for MySQL cluster, network jitter or transient disconnections may occur and affect write and query operations. Make sure that your application is configured to automatically reconnect to the AnalyticDB for MySQL cluster.
If you do not need to update the minor version of your AnalyticDB for MySQL cluster, or if an error occurs during the update, you can cancel the scheduled minor version update. Only the scheduled events of a minor version update can be canceled. For more information, see the "Cancel scheduled events" section of the Manage O&M events topic.
If the minor version of your AnalyticDB for MySQL cluster is earlier than the latest minor version, Alibaba Cloud pushes notifications from time to time to remind you to update the cluster to the latest minor version. We recommend that you update the minor version of your AnalyticDB for MySQL cluster at the earliest opportunity and within six months after you receive a notification. Otherwise, you assume all liability for risks such as service interruptions and data loss.
December 2024
Category | Feature | Description | References |
New feature | Cross-account cluster cloning | Data Lakehouse Edition clusters can be cloned across Alibaba Cloud accounts. | |
New feature | Disk encryption | The AnalyticDB for MySQL console allows you to check whether the disk encryption feature is enabled for a cluster and view the ID of the Key Management Service (KMS) key that is used for disk encryption. | |
November 2024
Category | Feature | Description | References |
New feature | Lake cache | The lake cache feature is supported to cache frequently accessed Object Storage Service (OSS) objects on high-performance NVMe SSDs, which improves the efficiency of reading OSS data. | |
October 2024
Category | Feature | Description | References |
New feature | Backup and restoration | Data backup sets can be deleted and the data backup feature can be disabled in the AnalyticDB for MySQL console. | |
New feature | Zero-ETL | The zero-ETL feature is supported for Lindorm data. You can create data synchronization tasks from Lindorm to AnalyticDB for MySQL to synchronize and manage data in an end-to-end manner and integrate transaction processing with data analysis. | |
September 2024
Category | Feature | Description | References |
New feature | Cross-region cluster cloning | Clusters can be cloned across regions. | |
V3.2.2
Category | Feature | Description | References |
New feature | Batch creation of MaxCompute external tables | Multiple MaxCompute external tables can be created at a time. | |
New feature | Support for aggregate functions in incremental refresh for materialized views | Aggregate functions are supported when you configure incremental refresh for materialized views. | |
New feature | Access to MaxCompute external tables in Arrow API mode | The Arrow API mode is supported to read and write MaxCompute external tables. Compared with the traditional Tunnel mode, the Arrow API mode improves data access and processing efficiency. | Use external tables to import data to Data Lakehouse Edition |
New feature | INSERT INTO | The TIMESTAMP() function can be included in the INSERT INTO statement (see the example after this table). | |
Optimized feature | FROM_UNIXTIME function | The FROM_UNIXTIME function can be used to convert a UNIX timestamp into the DATETIME format. | |
Fixed issue | Data type conversion | The following issue is fixed: An error is returned when the TINYINT, SMALLINT, INT, or BIGINT type is converted into the DECIMAL type. | None |
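The following statements illustrate the INSERT INTO and FROM_UNIXTIME changes and the fixed integer-to-DECIMAL conversion. This is a minimal sketch: the event_log table, its columns, and the DISTRIBUTED BY clause are hypothetical and follow common AnalyticDB for MySQL DDL conventions rather than a documented example.

```sql
-- Hypothetical table; DISTRIBUTED BY HASH is a common AnalyticDB for MySQL
-- distribution clause. Adjust the schema to your own tables.
CREATE TABLE event_log (
  id BIGINT NOT NULL,
  event_time DATETIME,
  PRIMARY KEY (id)
) DISTRIBUTED BY HASH(id);

-- V3.2.2: TIMESTAMP() can be used directly in an INSERT INTO statement.
INSERT INTO event_log (id, event_time)
VALUES (1, TIMESTAMP('2024-12-01 08:00:00'));

-- FROM_UNIXTIME converts a UNIX timestamp into the DATETIME format.
INSERT INTO event_log (id, event_time)
VALUES (2, FROM_UNIXTIME(1733011200));

-- Fixed issue: casting integer types to DECIMAL no longer returns an error.
SELECT CAST(id AS DECIMAL(20, 2)) FROM event_log;
```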
August 2024
Category | Feature | Description | References |
New feature | Selection of the Spark engine for interactive resource groups | The Spark engine can be selected when you create an interactive resource group in an AnalyticDB for MySQL Data Lakehouse Edition cluster. If you select the Spark engine, only Spark jobs can run in the resource group, and the jobs run in an interactive manner. | |
New feature | Limits on the number of zero-ETL tasks | The number of zero-ETL tasks that synchronize data from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL is limited. | |
July 2024
V3.2.1
Category | Feature | Description | References |
New feature | Next-generation storage engine | The next-generation storage engine is supported. | |
New feature | Incremental refresh for multi-table materialized views | Incremental refresh is supported for multi-table materialized views. The incremental data of multiple joined tables can be automatically refreshed to the corresponding multi-table materialized view. This improves data query performance and data analysis efficiency. | |
New feature | Invocation of user-defined functions (UDFs) by using the REMOTE_CALL() function | The REMOTE_CALL() function can be used to invoke custom functions that you create in Function Compute (FC). This way, you can use UDFs in AnalyticDB for MySQL. | |
New feature | Forcible deletion of databases | The CASCADE keyword is supported in the DROP DATABASE statement to forcibly delete a database together with all tables in the database (see the example after this table). | |
New feature | Wide table engine | The wide table engine is supported for Data Lakehouse Edition. The wide table engine is compatible with the capabilities and syntax of the open source columnar database ClickHouse and can handle large amounts of columnar data. | |
New feature | Path analysis functions | The SEQUENCE_MATCH() and SEQUENCE_COUNT() functions are supported to analyze user behavior and check whether the behavior matches a specified pattern. | |
New feature | SSL encryption | SSL encryption is supported to encrypt data transmitted between a Data Warehouse Edition cluster and a client. This prevents data from being eavesdropped on, intercepted, or tampered with by third parties. | |
New feature | Support for complex MaxCompute data types by MaxCompute external tables | Complex MaxCompute data types, such as ARRAY, MAP, and STRUCT, are supported for MaxCompute external tables of Data Lakehouse Edition clusters. | |
New feature | Support for the ROARING BITMAP type by AnalyticDB for MySQL internal tables | The ROARING BITMAP type is supported for internal tables. | |
New feature | Subscription to AnalyticDB for MySQL binary logs by using Realtime Compute for Apache Flink | Realtime Compute for Apache Flink can be used to consume AnalyticDB for MySQL binary logs in real time. | Use Realtime Compute for Apache Flink to subscribe to AnalyticDB for MySQL binary logs |
Optimized feature | Change of LIFECYCLE from a required keyword to an optional one | If you do not specify the LIFECYCLE keyword when you create a table, partition data is permanently retained (see the example after this table). | |
Optimized feature | Table-level partition lifecycle management | For AnalyticDB for MySQL clusters of V3.2.1.1 or later, the partition lifecycle is managed at the table level instead of the shard level. The LIFECYCLE n parameter specifies that up to n partitions can be retained in each table. | |
Optimized feature | Import of OSS data to AnalyticDB for MySQL by using external tables | The absolute path name and the asterisk (*) wildcard are supported for the url parameter when you use external tables to import OSS data to AnalyticDB for MySQL. | Use external tables to import data to Data Warehouse Edition |
Optimized feature | Automatic validity check of column names at table creation | Column names are automatically checked against the naming conventions of AnalyticDB for MySQL when you execute the CREATE TABLE statement. If a column name does not meet the naming conventions, an error is returned. For more information about the naming conventions, see the "Naming limits" section of the Limits topic. | None |
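The following sketch combines the optional LIFECYCLE keyword, table-level partition lifecycle management, and the CASCADE keyword for DROP DATABASE. The sales table, its partition definition, and the database name are hypothetical; adapt them to your schema.

```sql
-- LIFECYCLE is now optional. When specified on a V3.2.1.1 or later cluster,
-- it is managed at the table level: LIFECYCLE 30 retains up to 30 partitions
-- in the table. Omitting LIFECYCLE retains partition data permanently.
CREATE TABLE sales (
  id BIGINT NOT NULL,
  sale_date DATETIME NOT NULL,
  amount DECIMAL(10, 2),
  PRIMARY KEY (id, sale_date)
)
DISTRIBUTED BY HASH(id)
PARTITION BY VALUE(DATE_FORMAT(sale_date, '%Y%m%d')) LIFECYCLE 30;

-- CASCADE forcibly drops the database together with all tables in it.
DROP DATABASE demo_db CASCADE;
```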
June 2024
Category | Feature | Description | References |
New feature | Enterprise Edition and Basic Edition | AnalyticDB for MySQL Enterprise Edition and Basic Edition are released. | |
May 2024
Category | Feature | Description | References |
New feature | Cross-account cluster cloning | Data Lakehouse Edition clusters can be cloned across Alibaba Cloud accounts. | |
April 2024
Category | Feature | Description | References |
New feature | Query rewrite | The query rewrite feature of materialized views is supported. After you enable this feature, the optimizer determines whether a query can use the pre-computed results that are stored in materialized views, and partially or entirely rewrites the query to use those materialized views (see the example after this table). | |
New feature | Synchronization of Simple Log Service (SLS) data by using data synchronization | The data synchronization feature can be used to synchronize data in real time from an SLS Logstore to an AnalyticDB for MySQL cluster based on a specific offset. This helps meet your business requirements for real-time analysis of log data. | |
New feature | Zero-ETL | The zero-ETL feature is supported to help you synchronize and manage data, integrate transaction processing with data analysis, and focus on data analysis. You can create data synchronization tasks from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL. | |
New feature | Time zone selection at cluster creation | A time zone can be selected for an AnalyticDB for MySQL cluster at cluster creation based on your business requirements. After you select a time zone, the system performs time-related data writes based on the selected time zone. | |
New feature | Self-service minor version update | The minor version of a Data Warehouse Edition cluster can be viewed and updated in the AnalyticDB for MySQL console. | |
New feature | Vertical scaling of reserved storage resource specifications | Reserved storage resource specifications can be scaled up or down for Data Lakehouse Edition clusters. | |
New feature | Use of a Spark distributed SQL engine in DataWorks | A Spark distributed SQL engine of AnalyticDB for MySQL Data Lakehouse Edition can be registered to DataWorks as a Cloudera's Distribution Including Apache Hadoop (CDH) cluster and used as an execution engine. This way, you can develop and run Spark SQL jobs in DataWorks. | |
New feature | Display of the progress bar during cluster creation or configuration changes | A progress bar is displayed when you create a Data Warehouse Edition cluster or change its configurations. | |
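A minimal sketch of the query rewrite scenario, assuming a hypothetical sales table; the REFRESH clause shown here is one common form and may vary by cluster version. With query rewrite enabled, the optimizer can answer the second query from the materialized view instead of scanning the base table.

```sql
-- Hypothetical materialized view that pre-aggregates daily sales.
CREATE MATERIALIZED VIEW mv_daily_sales
REFRESH COMPLETE ON DEMAND
AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;

-- With query rewrite enabled, this query can be served from mv_daily_sales
-- because it matches the pre-computed aggregation.
SELECT sale_date, SUM(amount)
FROM sales
GROUP BY sale_date;
```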
March 2024
Data Lakehouse Edition
Category | Feature | Description | References |
New feature | Spot instance | The spot instance feature can be enabled for job resource groups in Data Lakehouse Edition clusters. After you enable the spot instance feature for a job resource group, Spark jobs that run in the resource group attempt to use spot instance resources. Compared with AnalyticDB compute unit (ACU) elastic resources, spot instance resources help you significantly reduce the costs of Spark jobs. | |
February 2024
Category | Feature | Description | References |
New feature | Intelligent assistant | An intelligent assistant is provided in the AnalyticDB for MySQL console. The intelligent assistant answers your questions and helps you quickly resolve issues. Note: The intelligent assistant supports only the Chinese language. | None |
New feature | Spark distributed SQL engine | AnalyticDB for MySQL Data Lakehouse Edition Spark provides managed services for open source Spark distributed SQL engines to develop Spark SQL jobs. This helps you easily analyze, process, and query data and improves SQL development efficiency (see the example after this table). | Use a Spark distributed SQL engine to develop Spark SQL jobs |
New feature | Access to OSS-HDFS | AnalyticDB for MySQL Data Lakehouse Edition Spark can be used to access OSS-HDFS. | |
New feature | Storage overview | The data size of a cluster or a table can be viewed on the Storage Overview page of the AnalyticDB for MySQL console. | |
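The following Spark SQL statements sketch the kind of job that you might submit through a Spark distributed SQL engine to access OSS data. The bucket name, paths, and table are hypothetical, and the USING and LOCATION clauses follow standard Spark SQL syntax rather than a documented AnalyticDB for MySQL example.

```sql
-- Create a database and an external table whose data lives in OSS.
CREATE DATABASE IF NOT EXISTS demo_db
LOCATION 'oss://my-bucket/demo_db/';

CREATE TABLE IF NOT EXISTS demo_db.events
USING parquet
LOCATION 'oss://my-bucket/demo_db/events/';

-- Query the OSS-backed table through the Spark distributed SQL engine.
SELECT COUNT(*) FROM demo_db.events;
```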
V3.1.10
Category | Feature | Description | References |
New feature | Primary and foreign key constraints | Primary and foreign key constraints can be used to eliminate unnecessary joins and improve database query performance (see the example after this table). | Use primary and foreign key constraints to eliminate unnecessary joins |
New feature | Monthly execution of resource scaling plans | Resource scaling plans can be configured to run on a monthly basis in Data Warehouse Edition. | |
New feature | Multi-cluster scaling models | The multi-cluster feature can be enabled for resource groups in Data Lakehouse Edition. A multi-cluster scaling model allows AnalyticDB for MySQL to automatically scale resources based on query loads to meet resource isolation and high concurrency requirements for resource groups. | |
New feature | Variable-length binary functions | The AES_DECRYPT_MY() and AES_ENCRYPT_MY() functions are supported. | |
New feature | JSON functions | The JSON_REMOVE() function is supported (see the example after this table). | |
New feature | PlanCache | The plan cache feature is supported to cache execution plans of SQL statements. When you execute SQL statements that share the same SQL pattern, AnalyticDB for MySQL uses the cached execution plan of the SQL pattern to accelerate SQL compilation and optimization and improve query performance. | |
New feature | Elastic import | The elastic data import method is supported for Data Lakehouse Edition. Elastic import consumes a small amount of storage resources or no computing and storage resources at all. This reduces the impact on real-time data reads and writes and improves resource isolation. | |
New feature | Asynchronous scheduling of extract, transform, load (ETL) tasks by using Data Management (DMS) | The task orchestration feature of DMS can be used to asynchronously schedule ETL tasks. | None |
New feature | Modification of workload management rules | The WLM syntax can be used to modify workload management rules. | |
Optimized feature | Basic statistics | The collection policy for basic statistics is optimized. | None |
Optimized feature | Column group statistics | The collection policy for column group statistics is optimized. | None |
Optimized feature | Internal Error error message | The Internal Error error message is optimized to help you quickly identify issues. | None |
Optimized feature | Asynchronous generation of splits | For external tables that contain large amounts of data, AnalyticDB for MySQL can asynchronously generate splits to reduce the amount of time required to generate execution plans. | None |
Optimized feature | Split flow control | The split flow control feature for scanning OSS and MaxCompute external tables is optimized. | None |
Optimized feature | Parameter check policy for making RC HTTP calls | The parameter check policy for making RC HTTP calls is optimized to prevent SQL injection. | None |
Optimized feature | Memory usage of storage nodes | The memory usage of storage nodes is optimized to reduce garbage collection (GC) frequency and improve system stability. | None |
Fixed issue | Materialized views | The following issue is fixed: An error is returned for the ARRAY_AGG() function when you use the CREATE VIEW statement to create a view. | None |
Fixed issue | On-premises data import by using the LOAD DATA statement | The following issue is fixed: When you use the LOAD DATA statement to import on-premises data to Data Warehouse Edition, CSV files are incompatible or data becomes disordered. | None |
Fixed issue | Cold data storage | An issue related to cold data storage is fixed to improve the query hit ratio and query performance. | None |
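Two short illustrations of the SQL additions in V3.1.10. The orders and users tables and the constraint name are hypothetical, and the FOREIGN KEY syntax follows standard MySQL DDL; see the linked topics for the authoritative syntax.

```sql
-- Informational foreign key: declares that every orders.user_id matches a
-- users.id, which lets the optimizer eliminate joins that only verify existence.
ALTER TABLE orders
ADD CONSTRAINT fk_orders_users FOREIGN KEY (user_id) REFERENCES users (id);

-- JSON_REMOVE() deletes the value at a JSON path and returns the modified document.
SELECT JSON_REMOVE('{"a": 1, "b": 2}', '$.b');
-- Returns {"a": 1}
```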