Realtime Compute for Apache Flink: August 8, 2025

Last Updated: Oct 11, 2025

This topic describes the major feature updates and bug fixes in the Realtime Compute for Apache Flink release on August 8, 2025.

Important

The version upgrade is gradually released to users. For more information, see the latest announcement in the Realtime Compute for Apache Flink console. You can use the new features in this version only after the upgrade is complete for your account. To apply for an expedited upgrade, submit a ticket.

Overview

A new version of Realtime Compute for Apache Flink was released on August 8, 2025. This release includes updates to the platform, engine, and connectors, along with performance optimizations and bug fixes.

Engine

Ververica Runtime (VVR) 11.2 is now available. Built on Apache Flink 1.20.2, VVR 11.2 provides additional optimizations and enterprise enhancements. Highlights:

Flink SQL

This version greatly expands the built-in SQL function library:

Scalar functions:

  • String processing: PRINTF, TRANSLATE, ELT, BTRIM, STARTSWITH, and ENDSWITH

  • JSON processing: JSON_QUOTE and JSON_UNQUOTE

  • Regular expressions: REGEXP_SUBSTR, REGEXP_INSTR, REGEXP_COUNT, and REGEXP_EXTRACT_ALL

  • Arithmetic: UNHEX
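
A quick sketch of several of the new scalar functions (the semantics shown follow the common Hive/Spark behavior of these functions and may differ in edge cases; check the function reference for exact signatures):

```sql
-- Illustrative only: exercising several of the newly added built-in functions.
SELECT
  STARTSWITH('flink-sql', 'flink') AS starts_ok,   -- TRUE
  ENDSWITH('flink-sql', 'sql')     AS ends_ok,     -- TRUE
  BTRIM('  padded  ')              AS trimmed,     -- 'padded'
  ELT(2, 'a', 'b', 'c')            AS second_item, -- 'b'
  REGEXP_COUNT('a1b2c3', '[0-9]')  AS digit_count, -- 3
  JSON_UNQUOTE('"hello"')          AS unquoted;    -- 'hello'
```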

Data type support

Adds support for the Variant type, enhancing flexibility in schema handling.
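
The exact DDL and constructor functions for the Variant type are covered in the data type reference; as a loudly hypothetical sketch (the PARSE_JSON constructor is an assumption, modeled on Variant support in other SQL engines):

```sql
-- Hypothetical sketch: PARSE_JSON is an assumption, not a verified API;
-- consult the VVR data type documentation for the real constructor.
CREATE TEMPORARY TABLE raw_events (
  payload STRING
) WITH ('connector' = 'datagen');  -- placeholder source

SELECT PARSE_JSON(payload) AS v    -- produces a Variant value
FROM raw_events;
```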

Table API

Supports the Hive dialect, enabling familiar Hive SQL syntax in Table API jobs.
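
In SQL scripts, the dialect is switched with the table.sql-dialect option (the Table API exposes the same setting through its table configuration); a minimal sketch:

```sql
-- Switch to the Hive dialect, then use Hive-style DDL.
SET 'table.sql-dialect' = 'hive';

CREATE TABLE IF NOT EXISTS logs (
  id  BIGINT,
  msg STRING
)
PARTITIONED BY (dt STRING)
STORED AS PARQUET;

-- Switch back to the default Flink dialect for regular statements.
SET 'table.sql-dialect' = 'default';
```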

AI function

Introduces configurable strategies for handling messages that exceed the AI model's maximum context window: Discard or Truncate.
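
The strategy is configured on the model; the option key below is hypothetical (see Model DDLs for the real name) and is used only to illustrate where the setting lives:

```sql
-- Hypothetical sketch: 'context.overflow-strategy' is NOT a verified
-- option key; check the Model DDL reference. Strategies per this
-- release: Discard or Truncate.
CREATE MODEL chat_model
INPUT (prompt STRING)
OUTPUT (response STRING)
WITH (
  'provider' = 'my-provider',              -- placeholder
  'context.overflow-strategy' = 'TRUNCATE'
);
```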

Connector

  • MySQL CDC connector: Optimizes handling of VARCHAR fields, improving synchronization performance and stability.

  • Kafka connector: Now supports the Canal-JSON format for data ingestion and can extract both the event timestamp (ts) and event sequence (es) fields.

  • AnalyticDB for MySQL connector: Adds support for INSERT IGNORE, improving fault tolerance during data writes.
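
The Canal-JSON ingestion path above can be sketched as a Kafka source table; the METADATA key names mapping Canal's ts/es fields are assumptions (check the Kafka connector reference for the real keys):

```sql
-- Hedged sketch: reading Canal-JSON changelog records from Kafka.
-- The METADATA keys for the ts/es fields are assumptions.
CREATE TEMPORARY TABLE orders_src (
  order_id BIGINT,
  amount   DECIMAL(10, 2),
  canal_ts TIMESTAMP_LTZ(3) METADATA FROM 'value.ingestion-timestamp' VIRTUAL,
  canal_es TIMESTAMP_LTZ(3) METADATA FROM 'value.event-timestamp' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders_binlog',                      -- placeholder topic
  'properties.bootstrap.servers' = 'broker:9092', -- placeholder address
  'format' = 'canal-json',
  'scan.startup.mode' = 'earliest-offset'
);
```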

Security

The Paimon and OSS connectors now support RAM-based authorization, replacing AccessKey pairs for improved security and permission management.

Performance enhancements

  • MongoDB CDC connector: Supports concurrent oplog parsing, enhancing data sync stability and reliability.

  • Tair (Redis OSS-compatible) connector: Supports asynchronous lookup joins, improving cache access efficiency and job performance.
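
Lookup joins use the standard FOR SYSTEM_TIME AS OF syntax; the connector name and options below are placeholders (see the Tair connector reference for the real keys):

```sql
-- Hedged sketch: dimension-table lookup join; assumes an existing
-- orders table with a proc_time processing-time attribute.
CREATE TEMPORARY TABLE user_dim (
  user_id STRING,
  city    STRING
) WITH (
  'connector' = 'redis',                       -- placeholder connector name
  'host' = 'r-example.redis.rds.aliyuncs.com'  -- placeholder endpoint
);

SELECT o.order_id, d.city
FROM orders AS o
JOIN user_dim FOR SYSTEM_TIME AS OF o.proc_time AS d
  ON o.user_id = d.user_id;
```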

Platform

New features

  • Executing multiple DDL/DML statements in a single batch job: Create tables, run computations, and drop tables, all in one job.

  • Refreshing materialized tables on schedule: Periodically refresh historical partitions to backfill late data, ensuring eventual consistency.

  • Automatically releasing idle session clusters: Newly created session clusters that remain idle for more than 30 minutes are automatically released, improving resource utilization.

  • Blackout periods for automatic tuning: Restrict automatic resource scaling during business-critical hours, ensuring business stability. Performance tuning advice is still provided.

  • Comprehensive Git integration: Supports more mainstream Git tools, such as Alibaba Cloud DevOps. You can pull the repository directory structure and troubleshoot issues with clearer error messages.

  • Granular access control for data queries
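
The multi-statement batch capability in the first item above can be sketched as a single SQL script (table names and connector options are illustrative):

```sql
-- Illustrative batch script: DDL and DML executed as one job.
CREATE TABLE staging_orders (
  order_id BIGINT,
  amount   DECIMAL(10, 2)
) WITH ('connector' = 'paimon');  -- placeholder connector

INSERT INTO staging_orders
SELECT order_id, amount
FROM raw_orders            -- assumed existing source table
WHERE amount > 0;

INSERT INTO daily_totals   -- assumed existing sink table
SELECT COUNT(*), SUM(amount)
FROM staging_orders;

DROP TABLE staging_orders; -- clean up the intermediate table
```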

Experience optimizations

  • Supports console-based AI model creation, deletion, and modification, letting you manage AI models directly on the Catalogs page.

  • Displays hourly CU usage for batch jobs, a metric that better reflects batch job performance.

  • Supports fuzzy search of workflows by name on the Workflows page.

  • Supports console-based Iceberg catalog creation.

API

This release includes two new APIs, two deprecated APIs, and two bug fixes. To use the new APIs, upgrade your cluster and update the Maven POM dependency to version 1.8.0.

Previously, APIs related to Resource and DeploymentTarget could not manage hybrid billing workspaces. This release upgrades these APIs:

  • Introduces new APIs:

    • CreateDeploymentTargetV2

    • UpdateDeploymentTargetV2

  • Deprecates APIs:

    CreateDeploymentTarget and UpdateDeploymentTarget are deprecated. Transition to the new APIs as soon as possible.

  • Resource object enhancement: Adds new fields to support the configuration of hybrid billing workspaces.

  • The createDeploymentDraft and modifyDeploymentDraft APIs now validate the maximum number of labels, fixing a missing-validation issue.

  • The listDeployments API now validates the sortName and sortOrder parameters: only letters (a-z, A-Z) and underscores (_) are allowed.

Features and enhancements

  • Optimizing VARCHAR for MySQL CDC: Improves the performance and stability of data synchronization for VARCHAR fields. References: MySQL

  • Enhancing Flink CDC: Adds support for Canal-JSON and es/ts timestamp extraction in the Kafka connector. References: MySQL; Synchronize MySQL binary logging data to Kafka

  • Supporting Hive SQL for the Table API: Allows use of the Hive dialect in jobs developed with the Table API. References: Get started with a Hive SQL deployment

  • Authorizing access to Paimon and OSS using RAM roles: Enables secure access to Paimon and OSS via RAM roles instead of AccessKey pairs, improving security and permission management. References: Manage Paimon catalogs

  • Supporting INSERT IGNORE for AnalyticDB for MySQL: Enhances fault tolerance during data writes by supporting the INSERT IGNORE syntax. References: AnalyticDB for MySQL V3.0 connector

  • Optimizing asynchronous lookup joins for the Tair (Redis OSS-compatible) connector: Improves cache access efficiency and stability with enhanced asynchronous lookup join functionality. References: ApsaraDB for Tair (Redis Open-Source Edition)

  • Enhancing PyFlink to support direct use of built-in connectors: Improves the Python developer experience by allowing direct use of built-in connectors without manual dependency management. References: Use Python dependencies

  • Concurrent oplog parsing for the MongoDB CDC connector: Supports concurrent parsing of MongoDB oplogs to improve synchronization stability and reliability. References: MongoDB

  • New Flink SQL built-in functions: Adds multiple built-in functions for enhanced SQL. References: Supported functions

  • Automatic schema evolution for Kafka-Paimon data ingestion: Supports automatic schema evolution during data ingestion from Kafka to Paimon, enhancing data model flexibility. References: Implement real-time data ingestion into a data lake

  • SQL Variant type support: Supports the Variant type, enhancing the flexibility of data processing. References: Data type conversion

  • Configurable AI function behavior: Lets you define how oversized messages (those exceeding the AI model's context window) are handled; options are Discard and Truncate. References: Model DDLs

Notable fixes

This release addresses several key issues to improve the stability, performance, and functionality of Flink and its connectors.

Connector

  • Kafka: Fixed time zone conversion and data sync issues.

  • MySQL: Resolved permission errors affecting connectivity.

  • Paimon: Fixed Avro timestamp precision validation issues.

  • DLF: Resolved issues with data access token expiration for improved reliability.

  • MySQL 8.0: Addressed compatibility issues for smoother integration.

SQL and transformation

  • Fixed LIKE syntax parsing in Paimon.

  • Fixed issues related to date handling and REGEXP_REPLACE in YAML scripts.

  • Resolved a NullPointerException when accessing the schema registry.

Stability and performance

  • Addressed metadata inconsistencies that could arise after job failover, ensuring a more reliable state.

  • Fixed resource cleanup on unexpected job exits.

  • Patched a checkpointing crash in the Paimon connector.

  • Optimized connector retry mechanism for greater job resilience.