This topic lists the major features and changes in Blink 3.3.0 for Realtime Compute.
- Slim CU mode
- You can set the expected number of CUs for the initial execution plan. The specified number of CUs determines the initial parallelism settings of a job when the execution plan is generated.
  - Resource allocation in slim CU mode: if each vertex is set to a parallelism of one and the number of required CUs exceeds the specified number of expected CUs, slim CU mode is enabled. In slim CU mode, the parallel instances of multiple vertices automatically share one slot, and all vertices are scheduled onto one TaskManager to reduce resource consumption.
- Automatic scaling
The maximum CUs for the automatic scaling feature are changed from the maximum CUs of a job to the maximum CUs specified in the execution plan. This resolves the issue where jobs failed to start because of the CU limit. After this change, the CUs consumed by a job may exceed the maximum CUs of the automatic scaling feature.
- Optional job parameters are added for the automatic scaling feature. You can use these parameters to manually disable scale-down and ensure that jobs run stably. The added parameters are healthmanager.resource.scale.down.enabled (controls resource scale-down) and healthmanager.parallelism.scale.down.enabled (controls parallelism scale-down).
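For example, both scale-down behaviors could be disabled by setting the two parameters in a job's parameter configuration. Only the parameter names come from this release note; the key=value syntax and the boolean values shown below are assumptions:

```
# Assumed job-parameter fragment: disable automatic scale-down of
# resources and parallelism to keep a job's allocation stable.
healthmanager.resource.scale.down.enabled=false
healthmanager.parallelism.scale.down.enabled=false
```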
- Resources can be manually configured and the automatic scaling feature can be enabled for jobs created by using the DataStream API. As of Blink 3.3.0, you can edit the resource plans of such jobs and enable the automatic scaling feature for them. For DataStream jobs, the automatic scaling feature is available only for trial use.
- Read first rows of duplicate records based on the Rowtime field
You can deduplicate records by keeping the first row of each group of duplicates based on the Rowtime field. The Rowtime field indicates the event time, and window operations can be performed only on data that has the event-time attribute. Because deduplication preserves this attribute, you can still perform window operations on the deduplicated records.
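This kind of deduplication is commonly written with an OVER window. The sketch below assumes the standard ROW_NUMBER-based deduplication pattern; the table and field names (src_orders, item_id, rowtime, price) are illustrative:

```sql
-- Keep only the first row per item_id, ordered by the event-time field.
-- Because rowtime retains its event-time attribute through deduplication,
-- window operations on the result remain possible.
SELECT item_id, rowtime, price
FROM (
  SELECT *,
    ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY rowtime ASC) AS rownum
  FROM src_orders
)
WHERE rownum = 1;
```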
- Case insensitivity for SQL code
As of Blink 3.3.0, SQL code is case insensitive. If your code uses uppercase and lowercase letters to distinguish variables or identifiers, compilation errors occur after this change.
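A minimal illustration of code that this change breaks; the table definition, field names, and connector type below are hypothetical:

```sql
-- Before Blink 3.3.0, itemCount and ITEMCOUNT could be declared as two
-- distinct fields. With case-insensitive SQL, both names resolve to the
-- same identifier, so this DDL fails to compile.
CREATE TABLE src_table (
  itemCount BIGINT,
  ITEMCOUNT VARCHAR
) WITH (
  type = 'random'  -- hypothetical connector for illustration
);
```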
- Error message