The upgrade for this version is rolled out incrementally using a canary release plan. New features are available only after the upgrade is complete for your account. To request an early upgrade, submit a ticket. For the latest upgrade schedule, check the announcement panel on the Realtime Compute for Apache Flink console.
This release includes platform, engine, and connector updates, along with performance improvements and bug fixes.
Overview
Platform updates
- Hybrid billing: A new billing method that combines subscription and pay-as-you-go. Assign fixed resources through subscription for baseline workloads, and elastic resources through pay-as-you-go for spikes. Use it together with the platform's automatic tuning feature for further cost savings.
- Console homepage redesign: The updated homepage provides an overview of resources and deployments, quick access to frequently used features, and direct links to relevant documentation.
- Draft renaming: Rename existing drafts directly from the console.
- Improved version management: When engine versions reach End of Support (EOS), the most recently used EOS versions are retained so you can roll back if needed.
Engine updates: Ververica Runtime (VVR) 8.0.10
VVR 8.0.10 is built on Apache Flink 1.17.2. After the upgrade completes for your account, upgrade your deployments to this engine version. For instructions, see Upgrade the engine version of a deployment.
New capabilities
- JDK 11 support *(experimental)*: VVR 8.0.10 supports JDK 11. Compatibility between JDK 11 and JDK 8 is not guaranteed across different minor VVR versions.
- SelectDB connector *(public preview)*: Write data from Flink to ApsaraDB for SelectDB.
Experimental and public preview capabilities do not have guaranteed service level agreements (SLAs). Use with caution in production environments.
Enhanced capabilities
SQL
- Processing-time temporal join: Correlates rows in a fact table to the latest version of a corresponding key in a dimension table based on data arrival time. Unlike an event-time temporal join, which correlates rows based on event occurrence time, this approach uses processing time. See Processing-time temporal join statements.
- PERCENTILE function: The `PERCENTILE(expr, percentage[, frequency])` built-in SQL function is now supported. See Supported functions.
- Dimension table joins in Key-Ordered mode: Key-Ordered mode is introduced for use cases where data is retrieved asynchronously from an external system and processed in UpsertKey order. This mode fills the gap between Ordered and Unordered modes. See Key parameters.
- Enhanced access control for CREATE TABLE AS SELECT (CTAS) and CREATE DATABASE AS SELECT (CDAS): CTAS and CDAS now support Apache Paimon catalogs with Data Lake Formation (DLF) 2.0 as the metadata storage type. See CREATE TABLE AS statement for CTAS and CREATE DATABASE AS statement for CDAS.
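As a sketch of the two SQL additions above, assuming hypothetical `Orders`, `CurrencyRates`, and `Metrics` tables (all names and columns are illustrative, not taken from the release):

```sql
-- Processing-time temporal join: each order row is matched against the
-- latest currency rate known at the moment the row is processed.
-- `proc_time` is a processing-time attribute declared in the fact
-- table DDL, for example: proc_time AS PROCTIME()
SELECT o.order_id, o.amount, r.rate
FROM Orders AS o
JOIN CurrencyRates FOR SYSTEM_TIME AS OF o.proc_time AS r
  ON o.currency = r.currency;

-- PERCENTILE built-in function: 95th-percentile latency per service.
SELECT service, PERCENTILE(latency_ms, 0.95) AS p95_latency
FROM Metrics
GROUP BY service;
```

An event-time temporal join would instead use `FOR SYSTEM_TIME AS OF o.event_time` with a watermark-backed time attribute; the processing-time variant shown here always joins against the latest available dimension row.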
Connectors and CDC ingestion
- Enhanced CDC ingestion in YAML deployments: The Kafka connector can now be used as a source in YAML deployments, supporting Debezium JSON and Canal JSON formats. The Apache Paimon and StarRocks connectors now handle upstream `TRUNCATE` and `DROP TABLE` statements. `DECIMAL` and `TIMESTAMP` columns with different precisions can be merged in a sharded database scenario. See Kafka connector for connector details and Develop a YAML draft for data ingestion (public preview) for authoring instructions.
- Enhanced StarRocks connector: `BIGINT UNSIGNED` and `VARBINARY` data types are now supported. `CHAR` column length is automatically extended to three times the original length to accommodate encoding differences between MySQL and StarRocks. See StarRocks.
- Optimized SLS connector: A backoff policy is applied to improve stability and reliability.
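A minimal sketch of a YAML ingestion draft that reads Debezium JSON changelogs from Kafka and writes to Apache Paimon. The exact option keys depend on the connector version, so treat the names below as illustrative and check the Kafka connector documentation for the authoritative keys:

```yaml
# Illustrative only: option names and values are assumptions,
# not copied from the connector reference.
source:
  type: kafka
  properties.bootstrap.servers: broker-1:9092
  topic: orders_changelog
  value.format: debezium-json   # canal-json is also supported

sink:
  type: paimon
  catalog.properties.warehouse: oss://my-bucket/warehouse

pipeline:
  name: kafka-to-paimon-ingestion
```

With a sketch like this, schema changes carried in the changelog (including the upstream `TRUNCATE` and `DROP TABLE` handling noted above) are applied by the pipeline rather than hand-written DDL.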
Catalog management
- Optimized Hive catalogs: Create Hive catalogs in workspaces with fully managed storage, upload configuration files, and manage their lifecycles. See Manage Hive catalogs.
- Apache Paimon catalog security: After creating an Apache Paimon catalog, the `fs.oss.accessKeySecret` parameter value is displayed as ciphertext to protect credential security.
Performance improvements
- Faster full and incremental MySQL CDC ingestion to Apache Paimon: Unified batch and stream processing significantly improves the performance of ingesting both full and incremental data from MySQL Change Data Capture (CDC) to Apache Paimon.
- Faster savepoint restoration with fully managed storage: Resuming a job from a savepoint in a workspace configured with fully managed storage now requires less time and fewer resources.
Experience optimizations
- MySQL CDC connector: System configurations for certain Debezium-related options now take precedence over user configurations to prevent potential misconfiguration.
- Hologres connector: The timeout option is optimized to reduce unnecessary retry attempts during draft deployment, making data writes to Hologres more reliable.
- SQL draft validation: Tips and suggestions for the SinkMaterializer operator during SQL draft validation are improved for clarity.
Features
| Feature | Description | References |
|---|---|---|
| JDK 11 support *(experimental)* | VVR 8.0.10 supports JDK 11, giving you more runtime environment choices and letting you use JDK 11 features in your Java applications. Compatibility between JDK 11 and JDK 8 is not guaranteed across different minor VVR versions. | Develop a JAR draft | Develop a Python API draft | UDSFs |
| Dimension table joins in Key-Ordered mode | Key-Ordered mode is introduced for use cases where data is retrieved asynchronously from an external system and processed in UpsertKey order, filling the gap between Ordered and Unordered modes. | Key parameters |
| Enhanced CDC ingestion in YAML deployments | YAML deployments now support the Kafka connector as a source, enabling Flink jobs to process Kafka data streams in Debezium JSON and Canal JSON formats. | Kafka connector | Develop a YAML draft for data ingestion (public preview) |
| Optimized SLS connector | A backoff policy is applied to improve the Simple Log Service (SLS) connector's stability and reliability. | N/A |
| Enhanced StarRocks connector | BIGINT UNSIGNED and VARBINARY data types are supported. CHAR column length is automatically extended to three times the original length to handle encoding differences between MySQL and StarRocks. | StarRocks |
| Enhanced SQL semantics | Processing-time temporal join is supported. It uses a processing-time attribute to correlate rows in a fact table to the latest version of a corresponding key in a dimension table. | Processing-time temporal join statements |
| New built-in SQL function | The PERCENTILE function is now supported. | Supported functions |
| Optimized Hive catalogs | Create Hive catalogs in workspaces with fully managed storage, upload configuration files, and manage their lifecycles. | Manage Hive catalogs |
| Enhanced access control for CTAS/CDAS | CTAS and CDAS support Apache Paimon catalogs with DLF 2.0 as the metastore type. | CREATE TABLE AS statement | CREATE DATABASE AS statement |
| Console homepage redesign | The homepage now provides an overview of resources and deployments, quick access to frequently used features, and navigation to relevant documentation. | N/A |
| Hybrid billing | Hybrid billing combines subscription and pay-as-you-go, letting you allocate both fixed and elastic resources within a single workspace. | Hybrid billing |
| Optimized log archiving | Expired archived logs are periodically cleared to reduce storage costs. | View the logs of a historical job |
| SelectDB connector *(public preview)* | Write data to ApsaraDB for SelectDB, a fully managed, real-time data warehouse service hosted on Alibaba Cloud and fully compatible with Apache Doris. | SelectDB connector (public preview) |
Fixed issues
Connector issues
- MySQL CDC: Fixed a data loss issue that could occur during the transition from full data reading to binlog-based incremental reading through Object Storage Service (OSS).
- Tair (Redis OSS-compatible): Fixed a write failure caused by a Buffered Writer defect in VVR 8.0.9.
- OSS: Fixed a write performance issue that affected VVR 8.0.7 or later.
- Apache Paimon: Fixed a time zone conversion issue for `TIMESTAMP` type columns in YAML deployments.
- MaxCompute and Table Store (OTS): Fixed an issue where dimension table rows could not be matched to fact table rows when the dimension tables had primary keys and were configured with the `SHUFFLE_HASH`, `REPLICATED_SHUFFLE_HASH`, or `SKEW` join policy together with the Cache ALL policy.
SQL issues
- Source merging: Fixed a deployment failure that occurred when `table.optimizer.source-merge.enabled` was set to `true`.
- Minibatch interval: Fixed an issue in VVR 8.0.7 where the minibatch interval configuration did not take effect.
Compatibility and dependency issues
- Connector class loading: Fixed a `connector class not found` exception that occurred when starting a deployment that used a built-in connector with a JAR dependency.
- IntelliJ IDEA local run: Fixed a `ClassNotFoundException: MySqlSourceReaderMetrics` error that occurred when running a MySQL CDC JAR package locally in IntelliJ IDEA.
Dynamic configuration issues
- Fixed an issue where dynamic configuration updates occasionally did not take effect.