This topic describes the major updates and bug fixes in the version of fully managed Flink that was released on March 16, 2021, and provides links to the relevant references.
Major updates in VVP
Flink 1.12 is supported.
The console of fully managed Flink is updated. After you publish a job, you can modify job configurations on the Draft Editor page without affecting the status of the job.
You can configure checkpoints and savepoints for a job that is deployed in a session cluster.
The Member Name column is added to the Members page.
Python jobs can be published. For more information, see Develop a Python API draft.
Major bug fixes in VVP
The issue that job scripts are missing after job information is saved as a file is fixed.
Major updates in VVR 3.0.0
The hints syntax can be used in the JOIN clause of SQL statements.
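For example, a hint can pass per-query options to a table that participates in a join. The following sketch is illustrative only; the table names and the option key are placeholders, not part of the release:

```sql
-- Illustrative: override a connector option for dim_table in this query only.
SELECT o.order_id, d.dim_name
FROM orders AS o
JOIN dim_table /*+ OPTIONS('lookup.cache.max-rows'='10000') */ AS d
ON o.dim_id = d.id;
```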
The schema of a Kafka table that is stored in the Avro format can be configured.
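A minimal DDL sketch is shown below; the topic name and broker address are placeholders. The declared columns define the Avro schema that is used to deserialize the records:

```sql
-- Illustrative: the column list determines the expected Avro schema.
CREATE TABLE kafka_avro_source (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_events',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'avro'
);
```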
Metadata columns can be defined in Message Queue for Apache Kafka, DataHub, Log Service, and Message Queue for Apache RocketMQ source tables. This allows queries to read record metadata, such as the partition and offset of each record, as regular columns.
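The following sketch shows metadata columns on a Kafka source table; the topic, broker address, and data columns are placeholders:

```sql
CREATE TABLE kafka_source (
  id BIGINT,
  -- Expose Kafka record metadata as read-only (VIRTUAL) columns.
  `partition` INT METADATA VIRTUAL,
  `offset` BIGINT METADATA VIRTUAL,
  -- Map the Kafka record timestamp to a named column.
  event_time TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_events',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'json'
);
```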
A startup offset can be specified for the MySQL CDC connector. The startup offset can be set to binlog position, timestamp, earliest-offset, latest-offset, or initial.
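As a sketch, the startup offset is set in the WITH clause of the CDC source table. The connection parameters below are placeholders, and the exact option keys may vary by connector version:

```sql
CREATE TABLE orders_cdc (
  order_id BIGINT,
  price DECIMAL(10, 2),
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'mysql-host',
  'port' = '3306',
  'username' = 'user',
  'password' = '***',
  'database-name' = 'shop',
  'table-name' = 'orders',
  -- Start from a timestamp instead of the default 'initial' snapshot.
  'scan.startup.mode' = 'timestamp',
  'scan.startup.timestamp-millis' = '1615852800000'
);
```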
The following monitoring metrics are added: numBytesIn, numBytesInPerSecond, sourceIdleTime, pendingRecords, watermarkLag, numBytesOut, numBytesOutPerSecond, numRecordsOut, and numRecordsIn.
The FileSystem connector is added. For more information, see Create a FileSystem dimension table.
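A minimal DDL sketch for a FileSystem table is shown below; the path and format are assumptions for illustration, and the exact options are described in the linked reference:

```sql
CREATE TABLE fs_dim (
  id BIGINT,
  region STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 'oss://my-bucket/dim/',   -- placeholder path
  'format' = 'csv'
);
```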
Built-in functions are added.
Scalar functions: BITAND, BITNOT, BITOR, BITXOR, CONV, DATE_ADD, DATEDIFF, DATE_FORMAT, DATE_FORMAT_TZ, DATE_SUB, FROM_UNIXTIME, ISNAN, JSON_VALUE, KEYVALUE, MURMUR_HASH, MURMUR_HASH_64, PARSE_URL, REGEXP_EXTRACT, REGEXP_REPLACE, SPLIT_INDEX, TO_DATE, TO_TIMESTAMP_TZ, UNIX_TIMESTAMP
Table-valued functions: GENERATE_SERIES, JSON_TUPLE, MULTI_KEYVALUE, STRING_SPLIT
Aggregate functions: APPROX_COUNT_DISTINCT, FIRST_VALUE, LAST_VALUE
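As a small illustration, some of the new scalar functions can be combined in a single query. The literals below are arbitrary sample values:

```sql
SELECT
  -- Extract a field from a JSON string by path.
  JSON_VALUE('{"user":{"id":42}}', '$.user.id') AS user_id,
  -- Return the element at a zero-based index after splitting by a delimiter.
  SPLIT_INDEX('a;b;c', ';', 1) AS second_part,
  -- Format a timestamp with a pattern.
  DATE_FORMAT(TIMESTAMP '2021-03-16 10:00:00', 'yyyy-MM-dd') AS day_str;
```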
Major bug fixes in VVR 3.0.0
The issue that the checkpoint file is missing after the savepoint is created is fixed.
The issue that disk space becomes insufficient due to unlimited growth of Postgres CDC WAL logs is fixed.
The issue that sensitive information such as usernames and passwords is not masked in logs is fixed.
Precautions
Changes of SQL script names and job names in the SQL editor
On the original Ververica Platform (VVP), job development and O&M are not separated, and SQL scripts are not associated with the jobs that are published from them. As a result, SQL scripts and published jobs do not follow the same naming conventions. On the new VVP, job development and O&M are separated. Take note of the following naming conventions:
The name of an SQL script corresponds to the name of an online job. If the original SQL script name contains Chinese characters, the Chinese characters are replaced with random numbers after VVP is updated. The original SQL script name is retained in the label that is configured for the script. We recommend that you change the name of the SQL script to the name of the associated online job.
If duplicate SQL script names exist, a random number is added to each duplicate SQL script name. You can also rename an SQL script to avoid duplicate SQL script names.
Flink version update for SQL jobs
After VVP is updated, the version of the built-in SQL compiler is updated to Flink 1.12. The classes of Flink 1.11 are incompatible with the classes of Flink 1.12. After the update, the SQL jobs of Flink 1.11 fail to run. For SQL jobs that are created and run in Flink 1.11 before VVP is updated, you can perform one of the following operations:
Update the Flink version of the job to Flink 1.12.
Warning: The update cannot be rolled back. After you update the Flink version of an SQL job to Flink 1.12, you cannot downgrade the job back to Flink 1.11.
A failure may occur during the update. To prevent data loss, we recommend that you create a copy of the SQL job whose Flink version you want to update and update the Flink version of the copy first. After the copy is updated successfully, update the Flink version of the original SQL job.
Continue to use the original Flink version of the SQL job.
Important: If an SQL job of Flink 1.11 has not been published, you must use Flink 1.12 to run the job.
If an SQL job of Flink 1.11 has already been published, you can continue to run it on Flink 1.11. While the job runs on Flink 1.11, you can still modify the deployment, resource configurations, log configurations, Kubernetes configurations, and the name, URL, or tags of the Flink image repository, provided that the deployment supports Flink 1.11. For any other modifications, you must update the Flink version of the job to Flink 1.12.
Changes to the update policy
The new version no longer allows you to configure the update policy. The system automatically uses the update policy of the None mode.