This topic describes the major updates and bug fixes in the version of fully managed Flink that was released on March 4, 2022, and provides links to relevant references.

Overview

Ververica Runtime (VVR) 4.0.12 was officially released on March 4, 2022. This version is developed based on Apache Flink 1.13. In this version, fully managed Flink can synchronize JSON schema changes from Message Queue for Apache Kafka to Hologres, and provides an enterprise-level Hudi connector that works with Data Lake Formation (DLF). To improve development efficiency, fully managed Flink provides more than 20 common Flink SQL job templates. To enhance O&M capabilities, fully managed Flink supports powerful job diagnostics and allows you to adjust log levels dynamically without stopping jobs. This version also adds data processing capabilities, such as enterprise-level ClickHouse features, new connectors, and new syntax for ingesting data into data warehouses and data lakes. Some issues that were fixed in the Apache Flink community are also fixed in this version.

New features

Synchronization of JSON schema changes to Hologres

JSON is one of the most common event formats in stream processing. Schema changes are expected to be transparent to real-time streaming jobs and to the tables in the backend storage engine. This version provides the following enhancements to meet this requirement:
  • Before you consume JSON data, you can configure the table schema based on the JSON schema.
  • If the JSON schema changes during subsequent data consumption, the schema of the backend Hologres table also changes.
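
The following Flink SQL snippet is a minimal sketch of this workflow. It assumes a Kafka catalog named kafka and a Hologres catalog named holo; all catalog, database, and table names are placeholders, and the exact options may differ from the connector documentation.

```sql
-- Minimal sketch, assuming a Kafka catalog `kafka` and a Hologres catalog `holo`.
-- The Hologres table is created from the JSON schema inferred from the Kafka
-- topic; later JSON schema changes are applied to the Hologres table as well.
CREATE TABLE IF NOT EXISTS holo.my_db.orders
AS TABLE kafka.my_db.orders
/*+ OPTIONS('format' = 'json') */;
```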
Enhanced data lake building capabilities for Iceberg and Hudi
  • Alibaba Cloud DLF catalogs can be configured.

    You can configure DLF catalogs to access Hudi, Iceberg, or other engines that are supported by DLF. This helps you efficiently build a real-time data lake.

  • The small files of an Iceberg table can be rewritten into a large file.

    You can execute the AUTO OPTIMIZE statement to start a streaming optimization task that automatically rewrites the small files of an Iceberg table into a large file, as shown in the sketch after this list.

  • Fully managed Flink provides a built-in enterprise-level Hudi connector to reduce O&M complexity.
    • You can use Flink Change Data Capture (CDC) to ingest data from a database into data lakes and automatically synchronize changes in the table schema.
    • Fully managed Flink can be integrated with Alibaba Cloud services, such as Object Storage Service (OSS) and DLF, to improve data connectivity between computing engines.
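
The following Flink SQL sketch outlines both capabilities. The DLF catalog option keys and the exact form of the AUTO OPTIMIZE statement are illustrative assumptions; see the DLF catalog and Iceberg documentation for the authoritative syntax.

```sql
-- Hypothetical sketch; the catalog option keys and values are placeholders.
CREATE CATALOG dlf_catalog WITH (
  'type' = 'dlf',
  'access.key.id' = '<yourAccessKeyId>',
  'access.key.secret' = '<yourAccessKeySecret>',
  'warehouse' = 'oss://<yourBucket>/warehouse',
  'endpoint' = '<yourDLFEndpoint>'
);

-- Start a streaming optimization task that continuously rewrites the small
-- files of an Iceberg table into a large file.
AUTO OPTIMIZE dlf_catalog.my_db.my_iceberg_table;
```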
Improvement on ease of use for log viewing and configuration
  • Logs can be displayed by page.

    If a job runs for a long period of time, the Logs tab for the job may fail to load because of the large number of logs. To prevent this issue, fully managed Flink allows you to view the logs of a job by page on the Logs tab.

  • Log levels can be changed.

    On the Logs tab for a job, you can change the log levels of a running TaskManager without the need to restart the job. This helps you identify the cause of an issue.

  • Logs of failed TaskManagers can be viewed.

    On the Logs tab, you can view the logs of TaskManagers that fail to run while the JobManager is running. This allows you to identify the cause of the TaskManager failure.

Multiple enterprise-level ClickHouse features that are supported by fully managed Flink
  • Exactly-once semantics is supported.

    Exactly-once semantics is supported when data is written to the ClickHouse service that is provided by E-MapReduce (EMR). ApsaraDB for ClickHouse does not support this feature.

  • The NESTED data type of ClickHouse is supported.

    The NESTED data type of ClickHouse is mapped to the ARRAY data type of Flink.

  • Data can be written to a local table that corresponds to a ClickHouse distributed table.

    You can directly write data to a local table that corresponds to a ClickHouse distributed table. This significantly improves the throughput of writing data to the distributed table.

References: Create a ClickHouse result table
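
A minimal sketch of a ClickHouse result table follows; the connection values are placeholders, and the upstream table datagen_source is assumed for illustration.

```sql
-- Minimal sketch; connection values are placeholders.
CREATE TEMPORARY TABLE clickhouse_sink (
  id BIGINT,
  user_name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'clickhouse',
  'url' = 'jdbc:clickhouse://<yourHost>:8123/default',
  'userName' = '<yourUsername>',
  'password' = '<yourPassword>',
  'tableName' = '<yourTable>'
);

-- `datagen_source` is an assumed upstream table with matching columns.
INSERT INTO clickhouse_sink SELECT id, user_name FROM datagen_source;
```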
Optimized job diagnostics rules and the Diagnosis panel
  • More than 20 diagnostics rules are added to help comprehensively analyze the status of jobs.

    Risk levels can be identified as high, medium, or low based on the status of jobs.

  • The Diagnosis panel is optimized to help you better view the job status.
References: Use the deployment diagnostics feature
Addition of computed columns during data synchronization

When the CREATE TABLE AS statement is used to synchronize data, a computed column can be added to the source table and used as the primary key column of the destination table.

When you ingest data into a data warehouse or data lake, you can execute the CREATE TABLE AS statement to specify the position of a computed column that you want to add and use the column as a physical column in the destination table. This way, the results of the computed column are synchronized to the destination table in real time. You can also execute the CREATE TABLE AS statement to change the primary key of the destination table and use the new column as the primary key column.

References: CREATE TABLE AS statement
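
The following sketch illustrates the idea; the names are placeholders, and the FIRST keyword is shown only to illustrate how a column position might be specified. See the CREATE TABLE AS statement documentation for the exact syntax.

```sql
-- Hypothetical sketch; catalog and table names are placeholders.
-- The computed column `dt` is added at the first position of the destination
-- table, becomes a physical column there, and is used together with `id` as
-- the new primary key.
CREATE TABLE IF NOT EXISTS holo.my_db.event_sink (
  dt AS DATE_FORMAT(ts, 'yyyyMMdd') FIRST,
  PRIMARY KEY (dt, id) NOT ENFORCED
)
AS TABLE mysql_catalog.my_db.event;
```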
Generation of test data

The Faker connector is supported.

You can use the Faker connector to more easily generate test data that meets your business requirements. This way, you can verify your business logic during development and testing.
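
For example, the following sketch declares a Faker source table. The field expressions follow the Java Faker expression syntax and are illustrative only.

```sql
-- Minimal sketch of a Faker source table; adapt the expressions to your schema.
CREATE TEMPORARY TABLE random_orders (
  order_id INT,
  price DOUBLE,
  buyer_age INT
) WITH (
  'connector' = 'faker',
  'fields.order_id.expression' = '#{number.numberBetween ''0'',''1000000''}',
  'fields.price.expression' = '#{number.randomDouble ''2'',''1'',''150''}',
  'fields.buyer_age.expression' = '#{number.numberBetween ''18'',''99''}'
);

-- Preview the generated rows.
SELECT * FROM random_orders;
```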

Template center provided to accelerate job development
  • More than 20 code templates are provided.

    More than 20 templates for common Flink SQL scenarios are provided to help you quickly understand how to use Flink SQL to build job code.

  • Templates for synchronizing data from MySQL to Hologres are provided.

    You can use these templates to quickly create Flink CDC jobs to ingest data into data warehouses or data lakes.

Display of resource utilization

The CPU utilization and memory usage of the current project are displayed in the lower-left corner of the console of fully managed Flink. You can manage project resources based on this information.
Fast locating of the logs of jobs for which checkpoints are created at a low speed

The snapshot status of nodes in the snapshot history can be sorted. In addition, you can navigate from the Flink Checkpoints History tab to the Logs tab of the Running Task Managers tab to view the cause of the slow speed at which checkpoints are created for the job.

References: Find the checkpoints that are created at a low speed and view the logs of the TaskManagers for the checkpoints
Creation of an AnalyticDB for PostgreSQL result table and an AnalyticDB for PostgreSQL dimension table
  • Data can be written by fully managed Flink to an AnalyticDB for PostgreSQL result table.
  • Fully managed Flink can join data with AnalyticDB for PostgreSQL dimension tables to perform lookup queries, as shown in the sketch after this list.
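
The sketch below shows the dimension-table usage. The connector name adbpg and the option keys are assumptions for illustration; check the AnalyticDB for PostgreSQL connector documentation for the exact keys.

```sql
-- Hypothetical sketch; the connector name and option keys are assumptions.
CREATE TEMPORARY TABLE adbpg_dim (
  id BIGINT,
  province STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'adbpg',
  'url' = 'jdbc:postgresql://<yourHost>:5432/<yourDatabase>',
  'tablename' = '<yourTable>',
  'username' = '<yourUsername>',
  'password' = '<yourPassword>'
);

-- Lookup join against the dimension table; `orders` is an assumed source
-- table with a processing-time attribute `proctime`.
SELECT o.order_id, o.amount, d.province
FROM orders AS o
JOIN adbpg_dim FOR SYSTEM_TIME AS OF o.proctime AS d
  ON o.buyer_id = d.id;
```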
Improvement on ease of use of the enterprise-level state backend storage
  • Real-time parameter tuning is supported. This eliminates more than 95% of manual parameter tuning and significantly reduces the complexity and costs of tuning.
  • The single-core throughput is improved by 10% to 40%. This helps you handle traffic peaks and valleys with ease.

Performance improvement

The enterprise-level state backend storage is significantly improved in this version, which greatly improves the performance of dual-stream and multi-stream JOIN jobs. The average computing resource utilization can be increased by 50%, and by 100% to 200% in typical scenarios. This helps you run stateful stream processing applications more smoothly.

Fixed issues

  • The catalog service is optimized to fix the issue that data does not appear after a refresh when a database or table contains a large amount of data.
  • The issue that the Flink version is not displayed for a session cluster is fixed.
  • The issue that the watermarkLag curve is not displayed as expected on the Metrics tab is fixed.
  • The display of curve charts by page on the Metrics tab is optimized.
  • Flink CDC issues, such as an issue with the currentFetchEventTimeLag metric and class conflicts, are fixed.
  • The issue that the CREATE TABLE AS statement cannot be used to modify existing columns is fixed.