
Realtime Compute for Apache Flink: September 11, 2024 release

Last Updated: Mar 26, 2026

This release includes platform updates, engine updates, connector updates, performance optimization, and bug fixes for Realtime Compute for Apache Flink.

Important

The upgrade is rolled out incrementally using a canary release plan. New features are available only after the upgrade completes for your account. To request early access, submit a ticket. For the latest upgrade schedule, see the announcement on the right side of the Realtime Compute for Apache Flink console.

Overview

Platform updates

  • Data ingestion via YAML deployments (public preview): Flink Change Data Capture (CDC) now supports a dedicated data ingestion module based on Flink CDC 3.0. Originally developed by Alibaba and donated to the Apache Software Foundation, Flink CDC has evolved from a Flink source for change data capture into a Flink-based streaming extract, transform, and load (ETL) framework. YAML deployments let you define end-to-end data synchronization pipelines with less configuration overhead than SQL or JAR deployments.

  • Task orchestration enhancements: CloudMonitor can now send workflow event alerts through DingTalk and phone calls in addition to existing channels. Dynamic variables are also supported, so the same job can run periodically with different parameter values without code changes.

  • Variable management (formerly key hosting): The key hosting feature is renamed variable management and extended to JAR and Python deployments. Previously limited to SQL deployments, the feature now lets you store and reference both plaintext and ciphertext variables across all deployment types.

  • Reorganized left-side navigation: The development console's left-side navigation pane is reorganized to group related modules together. With the addition of new modules such as data ingestion, the updated layout makes it easier to find features without scrolling through a flat list.

Engine updates

Ververica Runtime (VVR) 8.0.9 is now available, based on Apache Flink 1.17.2. After the canary upgrade completes for your account, upgrade the VVR engine for your deployments. For more information, see Upgrade the engine version of a deployment.

Key updates in VVR 8.0.9:

  • MySQL CDC: Binlog parsing thread parameters are configurable to increase concurrent parsing throughput.

  • Kafka connector: Zstandard compression algorithm and built-in Protobuf format are now supported.

  • Redis connector: Sink write performance is improved and connection pool parameters are configurable for more flexible connection management.

  • Paimon connector: Delete actions are supported for Paimon sinks, making partial data updates easier to implement.

Features

The following features are included in this release:

  • Data ingestion module (public preview): Define end-to-end data synchronization pipelines in YAML based on Flink CDC 3.0, without writing SQL or JAR jobs. This reduces configuration overhead for teams that manage high-volume CDC workflows. For more information, see Develop Flink CDC data ingestion jobs (Public Preview).
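The ingestion module describes a pipeline as a plain YAML file. The following is a minimal sketch based on the open source Flink CDC 3.0 pipeline format; all hostnames, credentials, table patterns, and paths are placeholders, and option names may differ slightly in the Realtime Compute console:

```yaml
# Hypothetical MySQL-to-Paimon ingestion pipeline (Flink CDC 3.0 YAML syntax).
# Every connection detail below is a placeholder, not a working endpoint.
source:
  type: mysql
  hostname: mysql.example.internal
  port: 3306
  username: flink_user
  password: example_password            # placeholder; store real secrets as variables
  tables: app_db.\.*                    # capture every table in the app_db database

sink:
  type: paimon
  catalog.properties.metastore: filesystem
  catalog.properties.warehouse: oss://my-bucket/warehouse   # placeholder path

pipeline:
  name: app_db-to-paimon
  parallelism: 2
```

A single file replaces the per-table DDL and INSERT statements that an equivalent SQL deployment would need, which is where the configuration savings come from.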

  • Data Lake Formation (DLF) 2.0 integration: When you select DLF 2.0 as the metadata storage type for a Paimon catalog, you no longer need to configure an AccessKey pair; permissions are resolved automatically. For more information, see Manage Paimon Catalogs.

  • Streamlined DLF access permissions: DLF-related permissions are automatically granted when you create a Realtime Compute for Apache Flink workspace for the first time, and existing users receive these permissions by default. This removes a manual setup step. For more information, see DLF-related permission operations.

  • Quick session cluster creation: If no session cluster exists when you run a query script, you can configure key parameters inline to create an execution environment and run the script immediately without leaving the editor.

  • Task orchestration enhancements: Workflow alerting now supports CloudMonitor event alerts delivered through DingTalk and phone calls, giving you more notification options when jobs fail or breach thresholds. For more information, see Cloud Monitor event alerting.

  • Variable management (formerly key hosting): Key hosting is renamed variable management and extended to JAR and Python deployments. You can reference plaintext or ciphertext variables in all deployment types, not just SQL. For more information, see Variable management.
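Variables keep credentials out of deployment code. The following is a hedged sketch of an SQL deployment referencing a ciphertext variable, assuming the `${secret_values.variable_name}` reference syntax carried over from key hosting and a variable named `mysql_pw` defined in variable management; all connection options are placeholders:

```sql
-- Sketch: referencing a ciphertext variable in a source table definition.
-- Assumes a ciphertext variable named mysql_pw exists in variable management;
-- hostname, database, and table names are illustrative placeholders.
CREATE TEMPORARY TABLE orders_src (
  order_id BIGINT,
  amount   DECIMAL(10, 2)
) WITH (
  'connector' = 'mysql',
  'hostname' = 'mysql.example.internal',
  'port' = '3306',
  'username' = 'flink_user',
  'password' = '${secret_values.mysql_pw}',  -- resolved at run time, never stored in the script
  'database-name' = 'app_db',
  'table-name' = 'orders'
);
```

With this release, the same variable reference can also be passed to JAR and Python deployments instead of hard-coding the secret in job arguments.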

  • Reorganized development console navigation: The left-side navigation pane is reorganized to group new modules such as data ingestion logically, making it easier to locate features as the console expands.

  • MySQL connector: Binlog parsing thread parameters are configurable to increase concurrent parsing throughput for high-volume MySQL CDC workloads. For more information, see MySQL.

  • Kafka connector: The Zstandard compression algorithm is supported to improve data transmission efficiency, and the built-in Protobuf format is supported for processing structured data. For more information, see Message Queue for Apache Kafka.
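In a SQL deployment, both Kafka additions map to table options. The following sketch assumes the standard Flink Kafka connector option names, where `properties.*` keys pass through to the underlying Kafka producer; the topic, brokers, and Protobuf message class are placeholders:

```sql
-- Sketch of a Kafka sink using Zstandard compression and the Protobuf format.
-- Topic, broker address, and message class name are placeholders.
CREATE TEMPORARY TABLE kafka_sink (
  user_id BIGINT,
  event   STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user-events',
  'properties.bootstrap.servers' = 'broker-1.example.internal:9092',
  -- Zstandard is a standard Kafka producer compression setting:
  'properties.compression.type' = 'zstd',
  -- Built-in Protobuf format; the generated message class must be on the classpath:
  'format' = 'protobuf',
  'protobuf.message-class-name' = 'com.example.UserEvent'
);
```

Zstandard typically trades a small amount of CPU for a better compression ratio than the default, which is what improves transmission efficiency on high-volume topics.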

  • Redis connector: The sink cache is optimized to write multiple data entries at a time, improving throughput, and connection pool parameters are configurable for more flexible connection management. For more information, see ApsaraDB for Tair (Redis Open-Source Edition).

  • Simple Log Service connector refactoring: The connector adopts the FLIP-27 source interface to handle shard changes and distribute shards evenly across data sources, and shard change intervals are dynamically detected. For more information, see Simple Log Service (SLS).

  • Paimon connector: You can configure the semantics for retraction messages (delete or update) to improve delete performance on Paimon sinks, making partial data updates more efficient. For more information, see Streaming Data Lakehouse Paimon.

  • MongoDB dimension table: The _id field of the ObjectId type is now readable in dimension table lookups. For more information, see MongoDB.
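A readable _id means the document key itself can serve as the lookup join key. The following sketch uses the standard Flink lookup-join syntax; the connection options are placeholders, and `events` is assumed to be an existing append-only table with a processing-time attribute `proc_time`:

```sql
-- Sketch: lookup join against a MongoDB dimension table whose _id is an ObjectId.
-- URI, database, and collection are placeholders; the events table is assumed.
CREATE TEMPORARY TABLE user_dim (
  _id  STRING,   -- ObjectId values are now readable in dimension table lookups
  name STRING
) WITH (
  'connector' = 'mongodb',
  'uri' = 'mongodb://mongo.example.internal:27017',
  'database' = 'app_db',
  'collection' = 'users'
);

SELECT e.event_type, d.name
FROM events AS e
  JOIN user_dim FOR SYSTEM_TIME AS OF e.proc_time AS d
    ON e.user_id = d._id;
```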

  • StarRocks connector: The write retry mechanism is optimized for network exceptions, and the default value of sink.max-retries is changed to improve write stability under poor network conditions. For more information, see StarRocks.
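If the new retry default does not suit your network environment, `sink.max-retries` can still be pinned explicitly. A sketch with placeholder endpoints and credentials, using common StarRocks connector option names:

```sql
-- Sketch: explicitly setting the retry count on a StarRocks sink.
-- Endpoints, database, table, and credentials are placeholders;
-- sink.max-retries is the option whose default changed in this release.
CREATE TEMPORARY TABLE starrocks_sink (
  id   BIGINT,
  name STRING
) WITH (
  'connector' = 'starrocks',
  'jdbc-url' = 'jdbc:mysql://fe.example.internal:9030',
  'load-url' = 'fe.example.internal:8030',
  'database-name' = 'demo',
  'table-name' = 'dim_users',
  'username' = 'flink_user',
  'password' = 'example_password',
  'sink.max-retries' = '5'
);
```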

  • ApsaraDB for HBase connector: Null fields are skipped during write operations, reducing storage consumption and accommodating schemas where sparse columns are expected. For more information, see ApsaraDB for HBase.

  • Lindorm connector: You can write data to a result table while excluding specific columns from update operations, giving you finer control over which fields are overwritten. For more information, see Lindorm.

Fixed issues

  • MySQL CDC: When consuming from a specified checkpoint, MySQL CDC failed to recover after a primary/secondary switchover. This is now fixed.

  • StarRocks connector: Using the CREATE TABLE AS statement in VVR 8.0.8 caused a java.lang.ClassNotFoundException error. This is now fixed.

  • Elasticsearch connector: Elasticsearch V8 was not supported when connecting through the Realtime Compute for Apache Flink console. This is now fixed.

  • Hologres connector: The connector unnecessarily checked the table ID during startup. This forced check is now removed.