Realtime Compute for Apache Flink released version VVR 6.0.7 on June 21, 2023, with platform updates, connector updates, and bug fixes.
This version is rolling out as a canary release and will be available to all users within two weeks. If new features are not yet visible in the console, your workspace is still on the previous version. To request an early upgrade, submit a ticket. For the latest upgrade schedule, check the announcement on the right side of the Realtime Compute for Apache Flink console homepage.
Overview
Ververica Runtime (VVR) 6.0.7 is based on Apache Flink 1.15.4.
Highlights:
- Catalog support: MaxCompute catalogs and Log Service catalogs are now officially supported, letting you define permanent tables without writing DDL statements for every SQL deployment.
- Apache Paimon 0.4.0 (invitational preview): Real-time data ingestion into data lakes via Flink Change Data Capture (CDC), schema evolution, and improved streaming read/write ordering.
- MySQL connector enhancements: Improved result table and dimension table capabilities, plus support for CDC source tables without a primary key.
- Alerting improvements: New No Data Warning and Alarm Noise Reduction switches for more actionable alerts.
- VVR 4.0.18 released: The final recommended update for VVR 4.X users.
After the canary release completes, the new engine version appears in the Engine Version drop-down list of your draft's Configurations pane.
Upgrade notes
The following changes require action before or after upgrading.
MySQL connector migration
The MySQL connector now covers all capabilities of the ApsaraDB RDS for MySQL connector for result tables and dimension tables. Start migrating your deployments from the ApsaraDB RDS for MySQL connector to the MySQL connector. For guidance, see MySQL connector.
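In most cases, migration only requires changing the connector option in the table's WITH clause. The sketch below is illustrative, not authoritative: the table schema and endpoint values are placeholders, and the exact option names should be verified against the MySQL connector documentation.

```sql
-- Hypothetical result table previously declared with the ApsaraDB RDS for
-- MySQL connector. After migration, only the connector option (and any
-- options that were renamed) need to change.
CREATE TEMPORARY TABLE mysql_sink (
  id BIGINT,
  name VARCHAR,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql',          -- previously the ApsaraDB RDS for MySQL connector
  'hostname' = '<your-host>',
  'port' = '3306',
  'username' = '<user>',
  'password' = '<password>',
  'database-name' = '<db>',
  'table-name' = '<table>'
);
```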
VVR 4.X end of recommended updates
VVR 4.0.18 is the final version recommended for VVR 4.X. If your deployments are still on VVR 4.X, upgrade to a Stable or Recommend engine version. Engine versions are now classified as Recommend, Stable, Normal, or Deprecated in the Engine Version drop-down list.
New features
Platform
| Feature | What changed | References |
|---|---|---|
| Audit logs | Realtime Compute for Apache Flink is now connected to ActionTrail. View user operation records directly from the ActionTrail console. | View resource operation events by using ActionTrail |
| Access to Kerberos-authenticated Hive clusters | JAR and Python deployments can now write data to Hive clusters that use Kerberos authentication. | Register a Hive cluster that supports Kerberos authentication, Create a deployment |
| Engine version classification | Engine versions in the Engine Version drop-down list are now grouped into four categories: Recommend, Stable, Normal, and Deprecated. | Develop an SQL draft, Create a deployment |
| Intelligent deployment diagnostics | The Diagnosis tab is added to the Deployments page, replacing the previous dialog box. Health scores and diagnostics are now visible in a single view. | Perform intelligent deployment diagnostics |
| Modifier column | The Modifier column is added to the Deployments page, showing who last modified each deployment. | — |
| Enhanced alerting | Two new switches are added to alert rules: No Data Warning detects data source exceptions early; Alarm Noise Reduction suppresses repeated and invalid alerts. | Configure alert rules |
| Member management API | A member management API is now available for automated authorization workflows. | — |
| Page experience improvements | The Deployments page supports a customizable layout and filtering. UI styles and several page layouts are updated, and areas such as the log pane are enlarged. | — |
Catalogs
| Feature | What changed | References |
|---|---|---|
| MaxCompute catalogs | MaxCompute catalogs are officially supported. Register metadata once using a MaxCompute catalog, and create SQL deployments without writing DDL statements for every MaxCompute source table. | Manage MaxCompute catalogs |
| Log Service catalogs | Log Service catalogs are officially supported. Register metadata once, and skip repetitive DDL statements when creating Log Service source tables in SQL deployments. | Manage Log Service catalogs |
| DLF as Hive catalog metadata management center | In Hive 3.X, Data Lake Formation (DLF) can now serve as the metadata management center for Hive catalogs. | Manage Hive catalogs |
| Apache Paimon catalogs | The Apache Paimon connector is upgraded to Apache Paimon 0.4.0 (invitational preview). Use Apache Paimon catalogs to ingest data into Apache Paimon in real time via the CREATE TABLE AS and CREATE DATABASE AS statements. Schema evolution, snapshot cleanup, automatic partition deletion, and the Parquet file format are supported. Streaming read/write ordering and overall performance are also improved. | CREATE TABLE AS statement, Manage Apache Paimon catalogs |
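With catalogs registered, source and sink tables can be referenced by their catalog-qualified names, and the CREATE TABLE AS statement can synchronize both data and schema changes into Apache Paimon. The catalog, database, and table names below are placeholders; this is a sketch of the pattern, not a verified deployment.

```sql
-- Read from a registered catalog without declaring the source table via DDL,
-- and ingest it into Apache Paimon with schema synchronization.
-- `paimon_catalog` and `mysql_catalog` are hypothetical catalog names.
CREATE TABLE IF NOT EXISTS `paimon_catalog`.`ods`.`orders`
AS TABLE `mysql_catalog`.`shop`.`orders`;
```

Because the statement copies the source schema, later schema changes on the source table can be propagated to the Apache Paimon table via schema evolution, as described above.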
Connectors
| Feature | What changed | References |
|---|---|---|
| MySQL connector enhancements | Result table and dimension table capabilities are enhanced. See Upgrade notes for migration guidance. | MySQL connector |
| MySQL CDC source tables without a primary key | MySQL CDC source tables without a primary key can now be used for incremental reading, expanding support for more MySQL table types. | — |
| Tair connector: expiration time and increment settings | The Tair connector for result tables supports specifying expiration time and configuring increment settings. | Tair connector |
| MaxCompute connector: Exclusive Tunnel resource groups | The MaxCompute connector now supports Exclusive Tunnel resource groups, making data transfer more stable and efficient. | MaxCompute connector |
| DataHub connector: source table performance | In specific scenarios, DataHub connector source table performance is improved by approximately 290%. | — |
| Tablestore connector: time series data | The Tablestore connector now supports writing time series data using the Tablestore time series model. | — |
| Hudi connector upgrade | The Hudi connector is upgraded to Apache Hudi 0.13.1. | — |
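For the primary-key-less MySQL CDC feature above, open-source Flink CDC requires designating a non-null column for snapshot chunking; the sketch below assumes that option applies here as well. All names and endpoint values are placeholders.

```sql
-- Sketch: MySQL CDC source table without a primary key.
CREATE TEMPORARY TABLE clicks_src (
  user_id BIGINT,
  url VARCHAR,
  ts TIMESTAMP(3)
  -- No PRIMARY KEY clause: the underlying MySQL table has no primary key.
) WITH (
  'connector' = 'mysql',
  'hostname' = '<your-host>',
  'port' = '3306',
  'username' = '<user>',
  'password' = '<password>',
  'database-name' = '<db>',
  'table-name' = 'clicks',
  -- A non-null column must be designated for incremental snapshot chunking
  -- (option name taken from open-source Flink CDC; assumed to apply).
  'scan.incremental.snapshot.chunk.key-column' = 'user_id'
);
```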
Fixed issues
- Fixed a memory overflow that occurred when using the MySQL connector with the CREATE TABLE AS or CREATE DATABASE AS statement to consume MySQL CDC source table data.
- Fixed a null pointer exception that occurred when using the Hologres connector for dimension tables.
- Fixed a memory overflow that occurred when using the Hologres connector for source tables.