This release is based on Ververica Runtime (VVR) 6.0.5 and Apache Flink 1.15.3. All defects fixed in Apache Flink 1.15.3 are also fixed in this release.
What's in this release:

- New connector: The StarRocks connector is now officially launched for both source tables and result tables.
- SQL enhancements: Flink SQL complex event processing (CEP) supports group patterns and the NO SKIP pattern.
- CDC improvements: Data type mappings and op_ts metadata synchronization are now supported in real-time data ingestion.
- Log Service connector: New options for Logstore bucket configuration and consumer group checkpoint-based log consumption.
- Bug fixes: Three issues resolved in the Kafka connector, Flink CDC with JDBC, and Codegen.
New features
| Feature | Description | References |
|---|---|---|
| Data type mappings in the Flink CDC connector | When using the CREATE DATABASE AS or CREATE TABLE AS statement to ingest data into a data lake or data warehouse, upstream and downstream columns can have different but compatible data types. This prevents unnecessary type mismatch errors. Limitation: The downstream data store must be Hologres. | Manage Hologres catalogs, CREATE TABLE AS statement |
| Group patterns and NO SKIP pattern in Flink SQL CEP | Flink SQL complex event processing (CEP) now supports group patterns (such as A(B C)+) and the NO SKIP pattern (such as A B{4,10}?), enabling more expressive event matching across multiple elements. | CEP statements |
| Bucket configuration and consumer group checkpoints in the Log Service connector | For Logstores with more than 64 shards, specify the number of buckets directly in the connector configuration. The connector can also resume log consumption from checkpoints stored in a specified consumer group. | Create a Log Service source table |
| StarRocks connector | The StarRocks connector is now officially launched. Use it to read data from StarRocks source tables and write data to StarRocks result tables. | N/A |
| op_ts metadata synchronization in CREATE TABLE AS | When executing the CREATE TABLE AS statement, metadata fields such as op_ts are now included in the synchronization. | Manage MySQL catalogs |
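As a hedged illustration of the op_ts metadata field, the sketch below declares it explicitly on a MySQL CDC source table, using the metadata-column syntax of the open source MySQL CDC connector. All database, table, and column names are placeholders.

```sql
-- Sketch only: connection values and table names are placeholders.
CREATE TEMPORARY TABLE orders_source (
  order_id BIGINT,
  amount DECIMAL(10, 2),
  -- op_ts is the time the change was committed in the upstream
  -- database, exposed by the MySQL CDC connector as metadata.
  op_ts TIMESTAMP_LTZ(3) METADATA FROM 'op_ts' VIRTUAL,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'flink',
  'password' = '******',
  'database-name' = 'shop',
  'table-name' = 'orders'
);
```

With this release, a CREATE TABLE AS statement run against such a source also carries metadata fields like op_ts to the downstream table instead of dropping them.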
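To make the group-pattern feature concrete, here is a minimal sketch using standard MATCH_RECOGNIZE syntax; the table, columns, and event types are hypothetical, and only the PATTERN clause illustrates the new capability.

```sql
-- Sketch only: events, user_id, event_time, and event_type are placeholders.
SELECT *
FROM events
MATCH_RECOGNIZE (
  PARTITION BY user_id
  ORDER BY event_time
  MEASURES A.event_time AS start_time
  AFTER MATCH SKIP PAST LAST ROW
  -- Group pattern: the subsequence (B C) may repeat one or more times.
  PATTERN (A (B C)+)
  DEFINE
    A AS A.event_type = 'login',
    B AS B.event_type = 'click',
    C AS C.event_type = 'view'
);
```

Before this release, quantifiers could only be applied to a single pattern variable; group patterns let them apply to a parenthesized sequence of variables.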
Fixed issues
- Kafka connector: Fixed an issue where modifying parameters in the WITH clause of the Message Queue for Apache Kafka connector caused a deployment to fail on restart.
- Flink CDC with JDBC: Fixed an issue where, when Flink Change Data Capture (CDC) used a Java Database Connectivity (JDBC) driver to read data in an asynchronous thread, the cause of an out of memory (OOM) error could not be displayed.
- Codegen: Fixed an issue where a Codegen error caused the error message "Table program cannot be compiled." to appear.