
AnalyticDB for MySQL release notes: 2024

Last Updated:Feb 06, 2025

This topic describes the release notes for AnalyticDB for MySQL in 2024 and provides links to the relevant references.

Usage notes

Take note of the following items during minor version updates of AnalyticDB for MySQL clusters:

  • For AnalyticDB for MySQL clusters in reserved mode for Cluster Edition, or clusters in elastic mode for Cluster Edition that have 32 cores or more, data read and write operations are not interrupted during an engine version update. Within the 5 minutes before the update is complete, queries may encounter transient connections.

  • For AnalyticDB for MySQL clusters in elastic mode for Cluster Edition that have 8 or 16 cores, data write operations may be interrupted for 30 minutes during an engine version update. Within the 5 minutes before the update is complete, queries may encounter transient connections.

  • Minor version updates of AnalyticDB for MySQL clusters do not affect database access, account management, database management, or IP address whitelist settings.

  • During a minor version update of an AnalyticDB for MySQL cluster, network jitters and transient connections may occur and affect write and query operations. Make sure that your application is configured to automatically reconnect to the AnalyticDB for MySQL cluster.

If you do not need to update the minor version of your AnalyticDB for MySQL cluster, or an error occurs during the update process, you can cancel the update. Only the scheduled events of a minor version update can be canceled. For more information, see the "Cancel scheduled events" section of the Manage O&M events topic.

Warning

If the minor version of your AnalyticDB for MySQL cluster is earlier than the latest minor version, Alibaba Cloud occasionally sends notifications to inform you that the cluster must be updated to the latest minor version. We recommend that you update the minor version of your cluster at the earliest opportunity within six months after you receive a notification. Otherwise, you shall assume all liabilities for risks such as service interruptions and data loss.

December 2024

New feature

  • Cross-account cluster cloning: Data Lakehouse Edition clusters can be cloned across Alibaba Cloud accounts. See Clone a cluster.

  • Disk encryption: The AnalyticDB for MySQL console allows you to check whether disk encryption is enabled for a cluster and view the Key Management Service (KMS) key ID that is used for disk encryption. See Disk encryption.

November 2024

New feature

  • Lake cache: The lake cache feature caches frequently accessed Object Storage Service (OSS) objects on high-performance NVMe SSDs to improve the read efficiency of OSS data. See Lake cache.

October 2024

New feature

  • Backup and restoration: Data backup sets can be deleted and the data backup feature can be disabled in the AnalyticDB for MySQL console. See Manage backups.

  • Zero-ETL: The zero-ETL feature supports synchronization of Lindorm data. You can create data synchronization tasks from Lindorm to AnalyticDB for MySQL to synchronize and manage data in an end-to-end manner and integrate transaction processing with data analysis. See Import data from Lindorm.

September 2024

New feature

  • Cross-region cluster cloning: Clusters can be cloned across regions. See Clone a cluster.

V3.2.2

New feature

  • Batch creation of MaxCompute external tables: Multiple MaxCompute external tables can be created at a time. See IMPORT FOREIGN SCHEMA.

  • Aggregate functions in incremental refresh for materialized views: The MAX(), MIN(), APPROX_DISTINCT(), COUNT(DISTINCT), and AVG() functions can be included in the QUERY BODY parameter when you configure incremental refresh for materialized views. See Configure incremental refresh for materialized views.

  • Access to MaxCompute external tables in Arrow API mode: The Arrow API mode can be used to read and write MaxCompute external tables. Compared with the traditional Tunnel mode, the Arrow API mode improves data access and processing efficiency. See Use external tables to import data to Data Lakehouse Edition.

  • INSERT INTO: The TIMESTAMP() function can be included in the INSERT INTO statement. See INSERT INTO.
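As a sketch of the INSERT INTO support noted above, the TIMESTAMP() function can now appear directly in the value list. The table and column names below are illustrative, not taken from the release notes, and the DDL is simplified (distribution clauses are omitted):

```sql
-- Hypothetical table; the release notes do not specify a schema.
CREATE TABLE orders (
  id BIGINT,
  created_at DATETIME,
  PRIMARY KEY (id)
);

-- TIMESTAMP() can be used in the INSERT INTO value list.
INSERT INTO orders (id, created_at)
VALUES (1, TIMESTAMP('2024-09-01 12:00:00'));
```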

Optimized feature

  • FROM_UNIXTIME function: The FROM_UNIXTIME function can be used to convert a UNIX timestamp into the DATETIME format. See Date and time functions.

Fixed issue

  • Data type conversion: The following issue is fixed: An error is returned when a value of the TINYINT, SMALLINT, INT, or BIGINT type is converted into the DECIMAL type.
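The FROM_UNIXTIME conversion described above can be sketched as follows. The literal timestamp is illustrative, and the resulting DATETIME value depends on the cluster time zone:

```sql
-- Convert a UNIX timestamp (seconds since 1970-01-01 00:00:00 UTC)
-- into a DATETIME value; the result depends on the time zone.
SELECT FROM_UNIXTIME(1700000000) AS dt;

-- As in MySQL, an optional format string returns formatted text.
SELECT FROM_UNIXTIME(1700000000, '%Y-%m-%d') AS d;
```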

August 2024

New feature

  • Selection of the Spark engine for interactive resource groups: The Spark engine can be selected when you create an interactive resource group in an AnalyticDB for MySQL Data Lakehouse Edition cluster. In interactive resource groups that use the Spark engine, you can run only Spark jobs, which are executed in an interactive manner. See Create and manage a resource group.

  • Limits on the number of zero-ETL tasks: The number of zero-ETL tasks from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL is limited.

July 2024

V3.2.1

New feature

  • Next-generation storage engine: The next-generation storage engine XUANWU_V2 is launched. The engine caches cold data on Enterprise SSDs (ESSDs) to speed up data reads and provides next-generation column-oriented storage that supports higher I/O concurrency and reduces memory usage. The engine also allows you to enable the compaction service to perform local compaction in an independent process that uses an independent resource pool. This reduces resource usage and improves service stability. See XUANWU_V2 engine.

  • Incremental refresh for multi-table materialized views: Incremental refresh is supported for multi-table materialized views. The incremental data of multiple joined tables can be automatically refreshed to the corresponding materialized view. This improves query performance and data analysis efficiency. See Configure incremental refresh for materialized views.

  • Invocation of user-defined functions (UDFs) by using the REMOTE_CALL() function: The REMOTE_CALL() function can be used to invoke custom functions that you create in Function Compute (FC). This way, you can use UDFs in AnalyticDB for MySQL. See UDFs.

  • Forcible deletion of databases: The CASCADE keyword is supported in the DROP DATABASE statement to forcibly delete a database together with all tables in the database. See DROP DATABASE.

  • Wide table engine: The wide table engine is supported for Data Lakehouse Edition. The engine is compatible with the capabilities and syntax of the open source columnar database ClickHouse and can handle large amounts of columnar data. See Wide table engine.

  • Path analysis functions: The SEQUENCE_MATCH() and SEQUENCE_COUNT() functions are supported to analyze user behavior and check whether it matches a specified pattern. See Path analysis functions.

  • SSL encryption: SSL encryption is supported to encrypt data transmitted between a Data Warehouse Edition cluster and a client. This prevents data from being eavesdropped on, intercepted, or tampered with by third parties. See SSL encryption.

  • Support for complex MaxCompute data types in MaxCompute external tables: Complex MaxCompute data types, such as ARRAY, MAP, and STRUCT, are supported for MaxCompute external tables of Data Lakehouse Edition clusters. See CREATE EXTERNAL TABLE.

  • Support for the ROARING BITMAP type in internal tables: The ROARING BITMAP type is supported for AnalyticDB for MySQL internal tables. See Roaring bitmap functions.

  • Subscription to binary logs by using Realtime Compute for Apache Flink: Realtime Compute for Apache Flink can be used to consume AnalyticDB for MySQL binary logs in real time. See Use Realtime Compute for Apache Flink to subscribe to AnalyticDB for MySQL binary logs.
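The forcible database deletion introduced in V3.2.1 can be sketched as follows. The database name is illustrative:

```sql
-- Without CASCADE, dropping a database that still contains tables fails.
-- With CASCADE, the database and all tables in it are deleted.
DROP DATABASE demo_db CASCADE;
```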

Optimized feature

  • Change of LIFECYCLE from a required keyword to an optional one: If you do not specify the LIFECYCLE keyword when you create a table, partition data is permanently retained. See CREATE TABLE.

  • Table-level partition lifecycle management: For AnalyticDB for MySQL clusters of V3.2.1.1 or later, the partition lifecycle is managed at the table level instead of the shard level. The LIFECYCLE n parameter specifies that up to n partitions can be retained in each table. See CREATE TABLE.

  • Import of OSS data by using external tables: The absolute path and the asterisk (*) wildcard are supported for the url parameter when you use external tables to import OSS data to AnalyticDB for MySQL. See Use external tables to import data to Data Warehouse Edition.

  • Automatic validity check of column names at table creation: Column names are automatically checked against the naming conventions of AnalyticDB for MySQL when you execute the CREATE TABLE statement. If a column name does not meet the naming conventions, an error is returned. For information about the naming conventions, see the "Naming limits" section of the Limits topic.
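The optional LIFECYCLE keyword and table-level partition lifecycle described above can be sketched as follows. The schema is illustrative; see the CREATE TABLE topic for the exact syntax:

```sql
-- LIFECYCLE 30 retains up to 30 partitions per table (managed at the
-- table level in V3.2.1.1 or later). Omitting LIFECYCLE retains
-- partition data permanently.
CREATE TABLE events (
  id BIGINT,
  event_time DATETIME,
  PRIMARY KEY (id, event_time)
)
DISTRIBUTED BY HASH(id)
PARTITION BY VALUE(DATE_FORMAT(event_time, '%Y%m%d'))
LIFECYCLE 30;
```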

June 2024

New feature

AnalyticDB for MySQL Enterprise Edition and Basic Edition are released. See Editions.

  • Enterprise Edition runs in cluster mode and is an integrated edition of Data Lakehouse Edition and Data Warehouse Edition that provides the same features as Data Lakehouse Edition. Enterprise Edition supports elastic mode capabilities, such as resource group isolation, elastic resource scaling, and tiered storage of hot and cold data, as well as reserved mode capabilities, such as high-throughput real-time writes and high-concurrency real-time queries.

  • Basic Edition runs in standalone mode and supports tiered storage of hot and cold data. Basic Edition does not provide distributed capabilities, high availability, resource group isolation, or scheduled scaling. You cannot change a cluster from Basic Edition to Enterprise Edition.

May 2024

New feature

  • Cross-account cluster cloning: Data Lakehouse Edition clusters can be cloned across Alibaba Cloud accounts. See Clone a cluster.

April 2024

New feature

  • Query rewrite: The query rewrite feature of materialized views is supported. After you enable this feature, the optimizer determines whether a query can use the pre-computed data stored in materialized views and rewrites the original query partially or entirely to a query that uses the materialized views. See Query rewrite of materialized views.

  • Synchronization of Simple Log Service (SLS) data by using data synchronization: The data synchronization feature can be used to synchronize data in real time from an SLS Logstore to an AnalyticDB for MySQL cluster based on a specific offset. This helps meet business requirements for real-time analysis of log data.

  • Zero-ETL: The zero-ETL feature is supported to help you synchronize and manage data, integrate transaction processing with data analysis, and focus on data analysis. You can create data synchronization tasks from ApsaraDB RDS for MySQL or PolarDB for MySQL to AnalyticDB for MySQL. See Use zero-ETL to synchronize data.

  • Time zone selection at cluster creation: A time zone can be selected for an AnalyticDB for MySQL cluster at cluster creation based on your business requirements. After you select a time zone, the system performs time-related data writes based on the selected time zone. See Create a cluster.

  • Self-service minor version update: The minor version of a Data Warehouse Edition cluster can be viewed and updated in the AnalyticDB for MySQL console. See Update the minor version of a cluster.

  • Vertical scaling of reserved storage resource specifications: Reserved storage resource specifications can be scaled up or down for Data Lakehouse Edition clusters. See Scale a Data Lakehouse Edition cluster.

  • Use of a Spark distributed SQL engine in DataWorks: A Spark distributed SQL engine of AnalyticDB for MySQL Data Lakehouse Edition can be registered as an execution engine by registering Cloudera's Distribution Including Apache Hadoop (CDH) clusters to DataWorks. This way, you can develop and run Spark SQL jobs in DataWorks. See Use a Spark distributed SQL engine in DataWorks.

  • Progress bar for cluster creation and configuration changes: A progress bar is displayed when you create or change the configurations of a Data Warehouse Edition cluster. See Create a Data Warehouse Edition cluster.

March 2024

Data Lakehouse Edition

New feature

  • Spot instance: The spot instance feature can be enabled for job resource groups in Data Lakehouse Edition clusters. After you enable the feature for a job resource group, Spark jobs that run in the resource group attempt to use spot instance resources. Compared with AnalyticDB compute unit (ACU) elastic resources, spot instance resources can significantly reduce the costs of Spark jobs. See Spot instances.

February 2024

New feature

  • Intelligent assistant: An intelligent assistant is provided in the AnalyticDB for MySQL console to answer your questions and help you quickly resolve issues. Note: The intelligent assistant supports only the Chinese language.

  • Spark distributed SQL engine: AnalyticDB for MySQL Data Lakehouse Edition Spark provides managed open source Spark distributed SQL engines for developing Spark SQL jobs. This helps you easily analyze, process, and query data and improves SQL development efficiency. See Use a Spark distributed SQL engine to develop Spark SQL jobs.

  • Access to OSS-HDFS: AnalyticDB for MySQL Data Lakehouse Edition Spark can be used to access OSS-HDFS. See Access OSS-HDFS.

  • Storage overview: The data size of a cluster or a table can be viewed on the Storage Overview page of the AnalyticDB for MySQL console. See Storage analysis.

V3.1.10

New feature

  • Primary key and foreign key constraints: Primary key and foreign key constraints can be used to eliminate unnecessary joins and improve query performance. See Use primary and foreign key constraints to eliminate unnecessary joins.

  • Monthly execution of resource scaling plans: Resource scaling plans can be configured to execute every month in Data Warehouse Edition. See Create a resource scaling plan.

  • Multi-cluster scaling models: The multi-cluster feature can be enabled for resource groups in Data Lakehouse Edition. A multi-cluster scaling model allows AnalyticDB for MySQL to automatically scale resources based on query loads to meet resource isolation and high-concurrency requirements for resource groups. See Multi-cluster scaling models.

  • Variable-length binary functions: The AES_DECRYPT_MY() and AES_ENCRYPT_MY() functions are supported. See Variable-length binary functions.

  • JSON functions: The JSON_REMOVE() function is supported. See JSON functions.

  • Plan cache: The plan cache feature can be used to cache execution plans of SQL statements. When you execute SQL statements that share the same SQL pattern, AnalyticDB for MySQL uses the cached execution plan of the pattern to accelerate SQL compilation and optimization and improve query performance. See PlanCache.

  • Elastic import: The elastic data import method is supported for Data Lakehouse Edition. Elastic import consumes only a small amount of storage resources, or no computing and storage resources at all. This reduces the impact on real-time reads and writes and improves resource isolation. See Data import methods.

  • Asynchronous scheduling of extract, transform, load (ETL) tasks by using Data Management (DMS): The task orchestration feature of DMS can be used to asynchronously schedule ETL tasks.

  • Modification of workload management rules: The WLM syntax can be used to modify workload management rules. See WLM.
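The newly supported JSON_REMOVE() function follows the MySQL convention of removing data at a given JSON path. A minimal sketch:

```sql
-- Remove the "b" key from a JSON document. Following MySQL
-- semantics, the remaining document {"a": 1} is returned.
SELECT JSON_REMOVE('{"a": 1, "b": 2}', '$.b');
```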

Optimized feature

  • Basic statistics: The collection policy for basic statistics is optimized.

  • Column group statistics: The collection policy for column group statistics is optimized.

  • Internal Error message: The Internal Error message is optimized to help you quickly identify issues.

  • Asynchronous generation of splits: For external tables that contain large amounts of data, splits can be generated asynchronously to reduce the time required to generate execution plans.

  • Split flow control: The split flow control feature for scanning OSS and MaxCompute external tables is optimized.

  • Parameter check policy for RC HTTP calls: The parameter check policy for RC HTTP calls is optimized to prevent SQL injection.

  • Memory usage of storage nodes: The memory usage of storage nodes is optimized to reduce garbage collection (GC) frequency and improve system stability.

Fixed issue

  • Materialized views: The following issue is fixed: An error is returned for the ARRAY_AGG() function when you use the CREATE VIEW statement to create a view.

  • On-premises data import by using the LOAD DATA statement: The following issue is fixed: When you use the LOAD DATA statement to import on-premises data to Data Warehouse Edition, CSV files are incompatible or data is disordered.

  • Cold data storage: A cold data storage issue is fixed to improve the query hit ratio and query performance.
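The primary key and foreign key constraint feature noted under V3.1.10 is typically declared in DDL so that the optimizer can drop joins that cannot change the result. The schema below is an illustrative sketch, not the documented syntax; see the referenced topic for the exact form:

```sql
-- Illustrative schema: orders references customers.
CREATE TABLE customers (
  id BIGINT,
  name VARCHAR,
  PRIMARY KEY (id)
);

CREATE TABLE orders (
  order_id BIGINT,
  customer_id BIGINT,
  amount DECIMAL(10, 2),
  PRIMARY KEY (order_id),
  -- The foreign key tells the optimizer that every order row
  -- matches exactly one customer row.
  FOREIGN KEY (customer_id) REFERENCES customers (id)
);

-- Because the query reads only columns of orders, the join to
-- customers is unnecessary and can be eliminated by the optimizer.
SELECT o.order_id, o.amount
FROM orders o
JOIN customers c ON o.customer_id = c.id;
```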