Realtime Compute for Apache Flink: January 4, 2024

Last Updated: Feb 02, 2024

This topic provides the release notes for fully managed Flink, covering the major updates and bug fixes in the version that was released on January 4, 2024, together with links to relevant references.

Important

The upgrade is being rolled out as a canary release across the entire network. To learn about the upgrade plan, view the most recent announcement on the right side of the homepage of the Realtime Compute for Apache Flink console. If you cannot use the new features of fully managed Flink, the new version is not yet available for your account. If you want to perform the upgrade at the earliest opportunity, submit a ticket to apply for an upgrade.

Overview

Ververica Runtime (VVR) 8.0.5, the new engine version of Realtime Compute for Apache Flink, was officially released on January 4, 2024. This version includes connector updates, performance optimizations, and defect fixes.

This version is an enterprise-level Flink engine based on Apache Flink 1.17.2. The following features are updated or added in this version:

  • The speed of reading data from a MySQL Change Data Capture (CDC) source table at a specified offset or timestamp is increased.

  • The MongoDB connector supports synchronization of table schema changes when it is used for a source table.

  • The MongoDB connector can be used for dimension tables.

  • The Java Database Connectivity (JDBC) connector supports data of the JSONB and UUID extension types.

  • StarRocks result tables support data of the JSON type.

  • When the Hologres connector uses the JDBC mode, deduplication can be disabled during batch data processing. This ensures data integrity when throughput is optimized.

  • The PolarDB for Oracle 1.0 connector can be used to write data to PolarDB for PostgreSQL (Compatible with Oracle).

  • Multiple defects in Apache Flink 1.17.2 are fixed, including defects fixed by the Apache Flink community. Engine-related issues are also resolved to improve system stability and reliability.

The canary release is being gradually completed across the entire network. After the canary release is complete, you can upgrade the engine that is used by your deployments to the new version. For more information, see Upgrade the engine version of deployments. We look forward to your feedback.

Features


Feature: Increased speed of reading data from a MySQL CDC source table at the specified start offset or timestamp
Description: A start offset or timestamp can be specified to read data from a MySQL CDC source table. This way, the position from which data is read can be quickly located. This increases the data reading speed.
References: MySQL connector
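For example, a start timestamp can be specified in the WITH clause of the source table DDL, as in the following sketch. The option names scan.startup.mode and scan.startup.timestamp-millis follow the open-source MySQL CDC connector and are assumed to apply here; verify them in the MySQL connector documentation.

  -- Minimal sketch: read a MySQL CDC source table starting from a specified timestamp.
  -- The startup options are assumed from the open-source MySQL CDC connector.
  CREATE TEMPORARY TABLE orders_source (
    order_id BIGINT,
    order_status STRING,
    PRIMARY KEY (order_id) NOT ENFORCED
  ) WITH (
    'connector' = 'mysql',
    'hostname' = '<yourHostname>',
    'port' = '3306',
    'username' = '<yourUsername>',
    'password' = '<yourPassword>',
    'database-name' = '<yourDatabase>',
    'table-name' = 'orders',
    'scan.startup.mode' = 'timestamp',
    'scan.startup.timestamp-millis' = '1704326400000'  -- 2024-01-04 00:00:00 UTC
  );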

Feature: Synchronization of table schema changes and automatic addition of columns supported by the MongoDB connector that is used for a source table
Description: When the MongoDB connector is used for a source table, the CREATE TABLE AS or CREATE DATABASE AS statement can be executed to synchronize schema changes from the source table to the result table and automatically add columns to the result table.
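For illustration, the following sketch uses the CREATE TABLE AS statement to synchronize a MongoDB collection, including subsequent schema changes and newly added fields, to a result table. The catalog, database, and table names are placeholders; the exact syntax and supported options are described in the MongoDB connector documentation.

  -- Minimal sketch of schema-change synchronization with CREATE TABLE AS.
  -- All catalog, database, and table names are placeholders.
  CREATE TABLE IF NOT EXISTS `<target_catalog>`.`<target_db>`.`orders_sync`
  AS TABLE `<mongodb_catalog>`.`<source_db>`.`orders`;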

Feature: MongoDB connector used for dimension tables
Description: The MongoDB connector can be used for dimension tables to perform join queries.
References: MongoDB connector (public preview)
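A MongoDB dimension table is typically joined by using a lookup join (FOR SYSTEM_TIME AS OF), as shown in the following sketch. The WITH options of the dimension table ('uri', 'database', and 'collection') are assumptions; verify the exact parameter names in the MongoDB connector documentation.

  -- Minimal sketch of a lookup join against a MongoDB dimension table.
  -- The WITH option names are assumptions; the events table is assumed to
  -- declare a processing-time attribute named proc_time.
  CREATE TEMPORARY TABLE user_dim (
    user_id STRING,
    user_level STRING,
    PRIMARY KEY (user_id) NOT ENFORCED
  ) WITH (
    'connector' = 'mongodb',
    'uri' = 'mongodb://<user>:<password>@<host>:27017',
    'database' = '<yourDatabase>',
    'collection' = 'user_dim'
  );

  INSERT INTO enriched_events
  SELECT e.event_id, e.user_id, d.user_level
  FROM events AS e
  JOIN user_dim FOR SYSTEM_TIME AS OF e.proc_time AS d
  ON e.user_id = d.user_id;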

Feature: Support for MongoDB catalogs
Description: After metadata is registered by using a MongoDB catalog, the schema of a collection can be inferred. This way, you do not need to use DDL statements to create a MongoDB source table when you write SQL code.
References: Manage MongoDB catalogs (public preview)
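After a MongoDB catalog is registered, a collection can be referenced directly by its fully qualified name without a CREATE TABLE statement. In the following sketch, mongo-catalog is a placeholder catalog name.

  -- Minimal sketch: query a collection through a registered MongoDB catalog.
  -- No DDL statement is required for the source table; the schema is inferred.
  SELECT order_id, order_status
  FROM `mongo-catalog`.`<yourDatabase>`.`orders`;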

Feature: More data types supported by the JDBC connector
Description: Data of the JSONB and UUID extension types is supported by the JDBC connector.
References: JDBC connector
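In Flink SQL, these PostgreSQL extension types are typically declared as STRING columns and converted by the connector. The following sketch assumes that type mapping; the other WITH options are the standard JDBC connector options.

  -- Minimal sketch: a JDBC result table whose target PostgreSQL columns use the
  -- UUID and JSONB extension types. Declaring them as STRING in Flink SQL is an
  -- assumption about the type mapping; verify it in the JDBC connector documentation.
  CREATE TEMPORARY TABLE pg_sink (
    id BIGINT,
    trace_id STRING,  -- UUID column in PostgreSQL
    payload STRING,   -- JSONB column in PostgreSQL
    PRIMARY KEY (id) NOT ENFORCED
  ) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:postgresql://<host>:5432/<database>',
    'table-name' = 'pg_sink',
    'username' = '<yourUsername>',
    'password' = '<yourPassword>'
  );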

Feature: SSL-encrypted transmission, data writing in bulkload mode, and deduplication during batch data processing supported by the Hologres connector
Description:
  • SSL-encrypted transmission can be used to implement more secure data reading and writing.
  • If data is written in bulkload mode, the workload of writing data to a Hologres result table is significantly reduced.
  • If the sdkMode parameter is set to jdbc or jdbc_fixed, deduplication can be performed during batch data processing. This effectively improves data consistency and database performance, and reduces network overheads.
References: Hologres connector
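The following sketch shows how a Hologres result table in JDBC mode might control deduplication during batch writes. Only the sdkMode parameter is named in this release note; the deduplication.enabled parameter name is an assumption, so verify it and the remaining options in the Hologres connector documentation.

  -- Minimal sketch of a Hologres result table in JDBC mode.
  -- deduplication.enabled is an assumed parameter name for controlling
  -- deduplication during batch data processing.
  CREATE TEMPORARY TABLE holo_sink (
    id BIGINT,
    name STRING,
    PRIMARY KEY (id) NOT ENFORCED
  ) WITH (
    'connector' = 'hologres',
    'endpoint' = '<yourEndpoint>',
    'dbname' = '<yourDatabase>',
    'tablename' = 'holo_sink',
    'username' = '<yourAccessKeyId>',
    'password' = '<yourAccessKeySecret>',
    'sdkMode' = 'jdbc',
    'deduplication.enabled' = 'false'  -- assumed name; disables deduplication for batch writes
  );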

Feature: Removal of the configuration of the jdbcBinlogSlotName parameter
Description: In Hologres V2.1 or later, the jdbcBinlogSlotName parameter, which specifies the slot name of a binary log source table in JDBC mode, no longer needs to be configured.

Feature: Addition of the PolarDB for Oracle 1.0 connector
Description: The PolarDB for Oracle 1.0 connector can be used to write data to PolarDB for PostgreSQL (Compatible with Oracle).
References: PolarDB for Oracle 1.0 connector

Feature: More data types supported by the StarRocks connector
Description: Data of the JSON type is supported by StarRocks result tables.
References: None
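In Flink SQL, a column that targets a StarRocks JSON column is typically declared as STRING. The following sketch assumes that mapping; the WITH options follow the open-source StarRocks connector and should be verified in the StarRocks connector documentation.

  -- Minimal sketch of a StarRocks result table whose attributes column targets a
  -- JSON column in StarRocks. The STRING declaration is an assumed type mapping.
  CREATE TEMPORARY TABLE sr_sink (
    id BIGINT,
    attributes STRING  -- written to a JSON column in StarRocks
  ) WITH (
    'connector' = 'starrocks',
    'jdbc-url' = 'jdbc:mysql://<feHost>:9030',
    'load-url' = '<feHost>:8030',
    'database-name' = '<yourDatabase>',
    'table-name' = 'sr_sink',
    'username' = '<yourUsername>',
    'password' = '<yourPassword>'
  );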

Feature: partitionField parameter supported by ApsaraMQ for RocketMQ result tables
Description: The partitionField parameter can be configured for an ApsaraMQ for RocketMQ result table to specify a field name that is used as the partition key column. When data is written to the result table, a hash value is calculated based on the value of this column, and data that has the same hash value is written to the same partition of the table.
References: ApsaraMQ for RocketMQ connector
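The following sketch distributes records across partitions by hashing the user_id column. Only the partitionField parameter is taken from this release note; the connector name and the remaining options are assumptions, so verify them in the ApsaraMQ for RocketMQ connector documentation.

  -- Minimal sketch of an ApsaraMQ for RocketMQ result table partitioned by user_id.
  -- Only partitionField comes from this release note; other options are assumptions.
  CREATE TEMPORARY TABLE mq_sink (
    user_id STRING,
    event_body STRING
  ) WITH (
    'connector' = 'mq',               -- assumed connector name
    'topic' = '<yourTopic>',
    'endpoint' = '<yourEndpoint>',
    'accessId' = '<yourAccessKeyId>',
    'accessKey' = '<yourAccessKeySecret>',
    'producerGroup' = '<yourProducerGroup>',
    'partitionField' = 'user_id'      -- rows with the same hash value go to the same partition
  );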

Feature: Support for Elasticsearch V8.X by the Elasticsearch connector and the option to ignore the .keyword suffix for fields of the TEXT data type
Description:
  • The Elasticsearch connector can be used to write data to an Elasticsearch V8.X result table.
  • When the Elasticsearch connector is used for a dimension table, the ignoreKeywordSuffix parameter can be set to true to ignore the .keyword suffix for fields of the TEXT data type.
References: Elasticsearch connector
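The following sketch sets ignoreKeywordSuffix on an Elasticsearch dimension table so that TEXT fields are matched without the .keyword suffix. Only the ignoreKeywordSuffix parameter is taken from this release note; the connector name and the remaining options are assumptions, so verify them in the Elasticsearch connector documentation.

  -- Minimal sketch of an Elasticsearch dimension table that ignores the .keyword
  -- suffix for TEXT fields. Only ignoreKeywordSuffix comes from this release note.
  CREATE TEMPORARY TABLE es_dim (
    product_id STRING,
    product_name STRING
  ) WITH (
    'connector' = 'elasticsearch',    -- assumed connector name
    'hosts' = '<yourElasticsearchEndpoint>',
    'index' = 'products',
    'ignoreKeywordSuffix' = 'true'
  );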

Feature: Null values ignored by the MySQL connector
Description: Null values in the data that is written to the MySQL result table during data updates can be ignored.
References: MySQL connector
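A sketch of a MySQL result table that ignores null values during updates is shown below. The sink.ignore-null-when-update parameter name is an assumption, and the remaining options mirror the MySQL connector options for source tables; verify both in the MySQL connector documentation.

  -- Minimal sketch of a MySQL result table that skips null values during updates.
  -- sink.ignore-null-when-update is an assumed parameter name.
  CREATE TEMPORARY TABLE mysql_sink (
    id BIGINT,
    name STRING,
    score DOUBLE,
    PRIMARY KEY (id) NOT ENFORCED
  ) WITH (
    'connector' = 'mysql',
    'hostname' = '<yourHostname>',
    'port' = '3306',
    'username' = '<yourUsername>',
    'password' = '<yourPassword>',
    'database-name' = '<yourDatabase>',
    'table-name' = 'mysql_sink',
    'sink.ignore-null-when-update' = 'true'  -- assumed parameter name
  );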