
Realtime Compute for Apache Flink: Version of January 15, 2025

Last Updated: Mar 26, 2026

These release notes cover the major updates and bug fixes in Realtime Compute for Apache Flink released on January 15, 2025.

Important

The version upgrade is gradually rolled out. New features are available only after the upgrade completes for your account. To apply for an expedited upgrade, submit a ticket. For the latest rollout status, see the Realtime Compute for Apache Flink console.

Before you upgrade

This release includes a change that requires action after you upgrade:

Python version upgraded from 3.7.9 to 3.9.21

If you upgrade to Ververica Runtime (VVR) 8.0.11 and run Python deployments, complete these steps after upgrading:

  1. Run a compatibility test on your Python script against Python 3.9.21.

  2. Re-deploy the Python script.

  3. Restart the deployment.
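As part of the compatibility test in step 1, it can help to fail fast when the runtime Python is older than expected, and to exercise any 3.9-only constructs your script relies on. A minimal sketch in plain Python (the helper name is illustrative, not part of any Flink API):

```python
import sys

def check_runtime_compatibility(required=(3, 9)):
    """Fail fast if the runtime Python is older than the version the
    deployment was tested against (hypothetical helper, not a Flink API)."""
    if sys.version_info[:2] < required:
        raise RuntimeError(
            f"Python {required[0]}.{required[1]}+ required, found "
            f"{sys.version_info.major}.{sys.version_info.minor}"
        )

check_runtime_compatibility()

# Features introduced in Python 3.9 that a script may now rely on;
# both fail on Python 3.7.9:
merged = {"a": 1} | {"b": 2}                 # dict union operator (PEP 584)
name = "flink_job_1".removeprefix("flink_")  # str.removeprefix (PEP 616)
```

Running the script once under Python 3.9.21 before re-deploying surfaces syntax and library incompatibilities early, rather than at deployment restart.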

What's new

This release includes platform updates, an engine update, and connector enhancements.

Platform updates

Hive dialect support for batch SQL

You can now develop batch SQL scripts using Hive dialect, which lets you migrate Hive workloads to Flink without rewriting SQL. See Get started with Hive SQL deployments.

Data backfilling in Workflows

You can now fill in missing data and correct errors in historical data directly from Workflows. See Manage workflows.

Deployment search by IP address and port

You can now find a specific deployment by searching for its source or destination system's IP address and port. This is especially useful when Flink handles requests from many external systems. See Network architecture upgrade.

Simplified workspace creation

Zone selection is no longer required when purchasing a workspace. Choose a deployment model instead:

  • Single-zone: The optimal zone is selected automatically. Flink communicates with services in the same region over Virtual Private Cloud (VPC) with sub-3-millisecond latency. Transparent resource scheduling within the region enhances resource elasticity. For intra-region latency benchmarks, see Cloud Network Performance.

  • Cross-zone: If the primary zone fails, deployments automatically fail over to the secondary zone, preventing service disruptions and maintaining high availability.

See Activate Realtime Compute for Apache Flink and Cross-zone high availability.

Namespace variables in runtime parameters

You can now configure namespace variables in a deployment's runtime parameters to avoid using plaintext AccessKey pairs and credentials. See Manage variables.

Saved tuning plans in autopilot mode

After a deployment stabilizes using the stable strategy in autopilot mode, you can view, edit, and save the generated tuning plan for future use. Autopilot mode now offers two tuning plan options: schedule-based and fixed-resource. See Configure automatic tuning.

Engine update

VVR 8.0.11 is now generally available. It is an enterprise-grade engine based on Apache Flink 1.17.2, with optimizations and enhancements beyond upstream Apache Flink.

Connector updates

Hologres connector

  • Conditional updates (check-and-put): Apply updates to Hologres only when specified conditions are met.

  • Aggressive write mode (aggressive.enabled): Improve write timeliness during periods of low traffic.

  • Binary log consumption from partitioned tables (public preview): Consume binary logs from Hologres partitioned tables, useful for building a real-time data warehouse. See Consume Hologres data in real time.

  • Metadata columns: Access metadata columns (such as hg_binlog_event_type) from a source Hologres table using the Hologres catalog. See Manage Hologres catalogs and Hologres connector.
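The check-and-put semantics above can be illustrated outside the connector: a write is applied only when the incoming row satisfies a condition against the stored row. A minimal sketch in plain Python (this mimics the behavior only; the column names and condition are illustrative, and this is not the Hologres connector API):

```python
# In-memory stand-in for a Hologres result table, keyed by primary key.
table = {1: {"id": 1, "price": 10, "event_time": 100}}

def check_and_put(row, condition):
    """Apply the update only if the condition holds against the stored row."""
    existing = table.get(row["id"])
    if existing is None or condition(row, existing):
        table[row["id"]] = row
        return True
    return False  # update rejected; stale data is kept out

# Condition: only accept writes newer than the stored row.
newer = lambda new, old: new["event_time"] > old["event_time"]

rejected = check_and_put({"id": 1, "price": 12, "event_time": 90}, newer)
applied = check_and_put({"id": 1, "price": 15, "event_time": 120}, newer)
```

After both calls, the stale write (event_time 90) is discarded and the table holds the newer row, which is the typical use case: preventing late, out-of-order data from overwriting fresher results.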

MaxCompute connector

Use upsert.partial-column to update specific columns in Delta tables. This simplifies creating wide tables from multiple data streams written to MaxCompute. See MaxCompute connector.
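The partial-column upsert pattern can be sketched in plain Python: each stream writes only its own columns, and rows merge into a wide table keyed by primary key (the code illustrates the merge semantics only; upsert.partial-column is the connector option, and the column names are made up):

```python
# In-memory stand-in for a wide Delta table, keyed by primary key.
wide_table = {}

def partial_upsert(key, columns):
    """Merge only the given columns into the row, leaving others intact."""
    wide_table.setdefault(key, {}).update(columns)

# Stream A carries order details; stream B carries payment details.
partial_upsert("order-1", {"item": "book", "qty": 2})
partial_upsert("order-1", {"paid": True, "amount": 18.0})

# "order-1" now holds columns from both streams in a single wide row.
```

Without partial-column updates, each stream would have to supply every column (or be joined upstream) to avoid overwriting the other stream's data with nulls.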

StarRocks connector

When mapping a Flink CHAR field to a StarRocks CHAR field, the StarRocks field length is automatically extended to four times the original length. This handles multi-byte characters such as emojis. See StarRocks connector.
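The 4x factor follows from UTF-8 encoding: Flink CHAR lengths count characters, while a single UTF-8 code point can occupy up to 4 bytes (emojis being the common case), so a byte-sized target column needs four times the length to hold the worst case. A quick check in plain Python (the helper function is illustrative, not connector code):

```python
# A UTF-8 code point occupies at most 4 bytes, so a Flink CHAR(n)
# measured in characters needs up to 4*n bytes on the StarRocks side.
ascii_len = len("a".encode("utf-8"))   # 1 byte
emoji_len = len("😀".encode("utf-8"))  # 4 bytes

def starrocks_char_length(flink_char_length: int) -> int:
    """Length the connector maps a Flink CHAR(n) to in this release."""
    return 4 * flink_char_length

print(starrocks_char_length(10))  # CHAR(10) -> CHAR(40)
```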

Materialized tables

When batch execution mode is enabled, materialized tables dynamically choose between incremental updates and full updates. Incremental updates are preferred. See Create and use materialized tables.

Feature summary

| Feature | Description | Status |
| --- | --- | --- |
| Hive dialect support | Develop batch SQL scripts using the Hive dialect to migrate Hive workloads to Flink without rewriting SQL. | GA |
| Data backfilling in Workflows | Fill in missing data and fix errors in historical data directly from Workflows. | GA |
| Deployment search by IP address and port | Search for a deployment by source or destination system IP address and port. | GA |
| Simplified workspace creation | Choose a deployment model (single-zone or cross-zone) when purchasing a workspace; zone selection is automatic. | GA |
| Namespace variables in runtime parameters | Use namespace variables in runtime parameters to avoid plaintext credentials. | GA |
| Saved tuning plans in autopilot mode | View, edit, save, and apply generated tuning plans after stabilization in autopilot mode. Two options: schedule-based and fixed-resource. | GA |
| Hologres: conditional updates | check-and-put enables conditional writes to Hologres. | GA |
| Hologres: aggressive write mode | aggressive.enabled improves write timeliness during low traffic. | GA |
| Hologres: binary log from partitioned tables | Consume binary logs from Hologres partitioned tables for real-time data warehouse use cases. | Public preview |
| Hologres: metadata columns | Access metadata columns such as hg_binlog_event_type via the Hologres catalog. | GA |
| MaxCompute: partial column updates | upsert.partial-column updates specific columns in Delta tables. | GA |
| StarRocks: CHAR field length extension | Flink CHAR fields mapped to StarRocks CHAR fields are automatically extended to 4x the original length. | GA |
| Materialized tables: incremental updates | Batch execution mode dynamically selects incremental or full updates; incremental is preferred. | GA |
| Python version upgrade | Python upgraded from 3.7.9 to 3.9.21. Action required for Python deployments. | GA |

Fixed issues

Connector issues

  • MySQL connector: Fixed a null pointer exception on startup.

  • MySQL connector: Fixed performance degradation when writing to a table without a primary key. This issue was introduced after a VVR version upgrade.

  • Kafka connector: Fixed a misalignment between JSON messages in Canal format and metadata columns when reusing a Kafka source.

  • ApsaraDB for HBase connector: Fixed a startup exception: No length info found when processing null.

  • Simple Log Service catalog: Fixed an error when using the Simple Log Service catalog: AssertionError: Conversion to relational algebra failed.

SQL issues

  • Fixed an issue where a window was not triggered due to delayed watermark emission.

  • Fixed an error when adding a BIT(1) column to a table created with a CTAS statement: ValidationException: Binary string length must be between 1 and 2147483647.

Stability issues

  • Fixed an issue where a deployment recovered from the wrong checkpoint after being abnormally terminated with exit code 137.