Realtime Compute for Apache Flink: Service notices of Realtime Compute (Blink) (phased out)

Last Updated: Nov 17, 2023

This topic describes the announcements of Realtime Compute for Apache Flink, including version updates, feature updates, and product activities.

Alibaba Cloud Realtime Compute (Blink) has entered the product convergence period. For more information, see Service types, or download the Blink Exclusive Mode (Phased-Out for Alibaba Cloud) file to view the details.

November 15, 2022: Blink exclusive clusters enter the EOM2 stage

Blink exclusive clusters of Realtime Compute for Apache Flink enter the end of marketing, phase 2 (EOM2) stage on November 15, 2022. At the EOM2 stage, orders for new purchases and for scaling of existing Blink exclusive clusters are not accepted, and the renewal period that is specified in renewal orders cannot exceed four months. This change aims to provide a more comprehensive service system, focus on customer requirements, and deliver a clearer and simpler experience of using Realtime Compute for Apache Flink. Alibaba Cloud service support for Blink exclusive clusters is retained. For more information about the lifecycle policies and the end of service (EOS) date of Blink exclusive clusters, see Lifecycle policies and Service types.

Realtime Compute for Apache Flink has launched the fully managed Flink service, which provides better and more comprehensive real-time data processing capabilities. In addition to SQL statements, fully managed Flink supports development methods such as user-defined functions (UDFs) and JAR packages, without increasing development costs.

Thank you for your support for Realtime Compute for Apache Flink.

August 15, 2022: Blink shared clusters enter the EOM2 stage

Blink shared clusters of Realtime Compute for Apache Flink enter the EOM2 stage on August 15, 2022. At the EOM2 stage, orders for new purchases and for scaling of existing Blink shared clusters are not accepted, and the renewal period that is specified in renewal orders cannot exceed four months. This change aims to provide a more comprehensive service system, focus on customer requirements, and deliver a clearer and simpler experience of using Realtime Compute for Apache Flink. Alibaba Cloud service support for Blink shared clusters is retained. For more information about the lifecycle policies and the EOS date of Blink shared clusters, see Lifecycle policies and Service types.

Realtime Compute for Apache Flink has launched the fully managed Flink service, which provides better and more comprehensive real-time data processing capabilities. In addition to SQL statements, fully managed Flink supports development methods such as UDFs and JAR packages, without increasing development costs.

Thank you for your support for Realtime Compute for Apache Flink.

April 28, 2021: Realtime Compute for Apache Flink in exclusive mode is no longer available

Realtime Compute for Apache Flink in exclusive mode is no longer available starting from April 28, 2021. You can only scale out, scale in, or renew the existing projects of Realtime Compute for Apache Flink in exclusive mode. If you want to purchase Realtime Compute for Apache Flink, we recommend that you purchase fully managed Flink.

Announcement on the version update of Realtime Compute for Apache Flink jobs due to the change of endpoints of Message Queue for Apache RocketMQ

  • Update announcement

    Region-based endpoints are now used to access Message Queue for Apache RocketMQ. For more information, see Announcement on the settings of internal TCP endpoints. If you use the Message Queue for Apache RocketMQ connector in Realtime Compute for Apache Flink of a version earlier than Blink 3.7.10, you must update your Realtime Compute for Apache Flink job to Blink 3.7.10 or later and change the endpoint that is specified in the job to the new endpoint of Message Queue for Apache RocketMQ. For more information about the endpoints, see the Message Queue for Apache RocketMQ documentation.

  • Usage notes

    • The old endpoints of Message Queue for Apache RocketMQ are unavailable starting from November 2021. If you still use the old endpoints, the stability of your jobs may be affected. To prevent this issue, we recommend that you update your Realtime Compute for Apache Flink jobs before November 2021.

    • The product team of Message Queue for Apache RocketMQ ensures that the old endpoints remain available until November 2021. However, the stability of jobs that use the old endpoints may be affected.

    • This update can cause the states of jobs to change. To minimize the impact of the update on your business, we recommend that you update the version of your jobs at an appropriate time based on your business requirements.

    • After November 1, 2021, the product team of Realtime Compute for Apache Flink no longer maintains and supports jobs that use the old endpoints of Message Queue for Apache RocketMQ.

From 21:00 on August 10, 2020 to 02:00 on August 11, 2020: Realtime Compute for Apache Flink is updated

Realtime Compute for Apache Flink in exclusive mode in the China (Hangzhou) region is updated. Services are not interrupted during the update, but you cannot create clusters or scale clusters in or out.

From 21:00 on April 28, 2020 to 02:00 on April 29, 2020: Realtime Compute for Apache Flink is updated

Realtime Compute for Apache Flink in exclusive mode in the China (Shanghai) region is updated. Services are not interrupted during the update, but you cannot scale clusters in or out.

From April 20, 2020 to April 22, 2020: The storage service is updated

The storage service of Realtime Compute for Apache Flink in shared mode in the China (Shanghai) and China (Shenzhen) regions is updated. This update improves the stability of Realtime Compute for Apache Flink. In most cases, this update does not affect your services. In rare cases, jobs that run Blink 3.2 or Blink 3.3 may fail during the update and then automatically return to normal.

August 27, 2019: The Configurations tab is added

The Configurations tab is added to the right side of the Development page in the Realtime Compute for Apache Flink console. The link to resource configurations is removed from the Basic Properties tab.

May 30, 2019: New features in Realtime Compute for Apache Flink versions later than V3.0.0 are released

  • Overview

    The vertex information can be queried. For more information, see Overview.

  • Metrics

    The curve charts that display the auto scaling metrics are added. For more information, see Metrics.

  • Timeline

    The Timeline tab is added to the Administration page. For more information, see Timeline.

  • Properties and parameters

    Details about auto scaling iterations can be queried. For more information, see Properties and parameters.

January 24, 2019: Blink 2.2.7 is released

Blink 2.2.7 is the latest stable version among Blink 2.X versions, and Blink 1.6.4 is the latest stable version among Blink 1.X versions. Compared with Blink 1.X versions, Blink 2.2.7 provides significant improvements: it uses the new generation of Niagara as the storage system of the state backend, optimizes SQL performance, and introduces a wide range of new features.

  • Features

    • SQL

      • Supports the EMIT statement for windows. The EMIT statement defines the output policy of a window. For example, you can output the latest result of a one-hour window every minute, as shown in the following sketch.
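        A minimal sketch, assuming a hypothetical orders table with an event-time field order_time; the exact EMIT clause syntax should be confirmed against the Blink SQL reference:

        -- Count orders per shop in a one-hour tumbling window, and emit the
        -- latest partial result every minute before the window closes.
        SELECT shop_id, COUNT(*) AS order_cnt
        FROM orders
        GROUP BY TUMBLE(order_time, INTERVAL '1' HOUR), shop_id
        EMIT WITH DELAY '1' MINUTE BEFORE WATERMARK;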

      • Supports miniBatch for joining two streams. Blink 2.2.7 optimizes the retraction mechanism and state data storage structure to improve performance.

      • Supports data filtering in aggregation. You can aggregate only the rows that meet specified filter conditions, as shown in the sketch below.
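        A minimal sketch, assuming the standard SQL FILTER clause and a hypothetical orders table; confirm the exact syntax against the Blink SQL reference:

        -- For each shop, count all orders and, separately, only the paid orders.
        SELECT shop_id,
               COUNT(*) AS total_cnt,
               COUNT(*) FILTER (WHERE status = 'paid') AS paid_cnt
        FROM orders
        GROUP BY shop_id;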

      • Optimizes local-global aggregation.

      • Restructures the SQL code during the optimization stage. This shortens the time to compile code.

      • Allows keys in SortedMapView to support multiple data types, such as BOOLEAN, BYTE, SHORT, INT, LONG, FLOAT, DOUBLE, BIGDECIMAL, BIGINT, BYTE[], and STRING.

      • Optimizes the performance in the scenarios in which retractions are involved in the MIN, MAX, FIRST, or LAST function.

      • Adds multiple scalar functions, including the following functions that are related to time zones: TO_TIMESTAMP_TZ, DATE_FORMAT_TZ, and CONVERT_TZ. A usage sketch follows.
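        A minimal usage sketch; the src table, the ts field, and the literal arguments shown here are assumptions for illustration:

        -- Parse a date string in a given time zone, format a timestamp for another
        -- time zone, and convert a date string between two time zones.
        SELECT TO_TIMESTAMP_TZ('2019-01-24 12:00:00', 'yyyy-MM-dd HH:mm:ss', 'Asia/Shanghai') AS t1,
               DATE_FORMAT_TZ(ts, 'yyyy-MM-dd HH:mm:ss', 'America/New_York') AS t2,
               CONVERT_TZ('2019-01-24 12:00:00', 'UTC', 'Asia/Shanghai') AS t3
        FROM src;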

      • Classifies SQL and connector error messages into different types and provides an error code for each type of error message.

    • Connector

      • Allows you to register the connectors of source tables and result tables by using a custom TableFactory.

      • Allows you to parse data source types by using user-defined table-valued functions (UDTFs).

      • Allows you to read data from and write data to Kafka.

      • Allows you to write data to Elasticsearch.

    • Runtime

      • Streamlines behavior such as submitting jobs and obtaining job running results by using Blink sessions.

      • Supports the plug-in scheduling mechanism. Based on this mechanism, a computing model can customize the scheduling logic to meet your business requirements.

      • Optimizes the failover-triggered restart policy. In throttling scenarios, you can restart only the required tasks in a failed job for recovery instead of restarting all tasks in the job. This optimization improves the efficiency of JobManagers and task failover processing.

    • StateBackend

      • Replaces RocksDBStateBackend with NiagaraStateBackend to improve read/write performance.

      • (Experimental) NiagaraStateBackend separates computing from storage. This state backend helps you restore state data in a few seconds if failovers occur.

  • Items incompatible with Blink 1.6.4

    The following items are incompatible with Blink 1.6.4. Each item describes the affected feature, its impact, and the solution.

    • TableFunction interface changes

      Impact: The changes affect users who use UDTFs.

      Solution: Update the code to implement the getResultType interface.

    • ScalarFunction interface changes

      Impact: The changes affect users who use user-defined scalar functions.

      Solution: Implement the getResultType interface.

    • AggregateFunction interface changes

      Impact: The changes affect users who use user-defined aggregate functions (UDAFs).

      Solution: Implement the getAccumulatorType and getResultType interfaces. For example, if the accumulator is of the Row(STRING, LONG) type and the aggregation result is of the STRING type, use the following code:

      public DataType getAccumulatorType() {
          // The accumulator is a row that contains a STRING field and a LONG field.
          return DataTypes.createRowType(DataTypes.STRING, DataTypes.LONG);
      }

      public DataType getResultType() {
          // The aggregation result is a STRING value.
          return DataTypes.STRING;
      }

    • MapView constructor changes

      Impact: The parameter that indicates the data type is changed from TypeInformation to DataType. The change affects jobs that declare a MapView in a UDAF.

      Solution: Update the code to create the MapView by using DataType parameters. For example, before the change, the code is MapView<String, Integer> map = new MapView<>(Types.STRING, Types.INT);. After the change, the code is MapView<String, Integer> map = new MapView<>(DataTypes.STRING, DataTypes.INT);.

    • The type of the values returned by the AVG and division functions is changed to DOUBLE if the input parameter is of the LONG or INT type

      Impact: Before the change, the AVG and division functions return the same data type as their input parameters. After the change, they return DOUBLE values. This may cause type mismatch errors. For example, if the data returned by the AVG or division function is written to a result table, an error may occur because the column type in the result table differs from the type of the query field.

      Solution: Use the CAST function to explicitly convert the data that is returned by the AVG and division functions, as shown in the following example.
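      A minimal sketch of the CAST fix, assuming a hypothetical result_table whose target column is declared as BIGINT:

      -- AVG now returns DOUBLE for LONG or INT input, so explicitly cast the
      -- result back to the type of the result table column before writing it.
      INSERT INTO result_table
      SELECT shop_id, CAST(AVG(order_cnt) AS BIGINT) AS avg_cnt
      FROM orders
      GROUP BY shop_id;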

    • Precision is considered when data of the BIGDECIMAL type is compared with data of the DECIMAL type

      Impact: For jobs that use the DECIMAL data type, a type mismatch error may occur because the type of the actual data is BIGDECIMAL.

      Solution: Specify the precision when you declare the DECIMAL data type in the job code, for example, DECIMAL(38, 18).

    • Semantics used to compare the null value with strings

      Impact: In Blink 1.X versions, a comparison that involves a null string returns true. In Blink 2.X versions, such a comparison returns false to comply with SQL syntax and semantics. The change affects scenarios in which the null value is compared with strings. For example, if SPLIT_INDEX(shop_list, ':', 1) in the WHERE SPLIT_INDEX(shop_list, ':', 1) <> shop_id clause returns null, the WHERE clause returns true in Blink 1.X versions and false in Blink 2.X versions, which means that the row is filtered out in Blink 2.X versions.

      Solution: If your job relies on the Blink 1.X behavior, handle the null case explicitly, as shown in the following sketch.
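      A minimal sketch that keeps the Blink 1.X filtering behavior under Blink 2.X semantics; the shops table and its fields are hypothetical:

      -- In Blink 2.X, null <> shop_id evaluates to false, so rows whose key is
      -- null are filtered out. Keep them by handling the null case explicitly.
      SELECT *
      FROM shops
      WHERE SPLIT_INDEX(shop_list, ':', 1) IS NULL
         OR SPLIT_INDEX(shop_list, ':', 1) <> shop_id;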

  • Update to Blink 2.X

    To update a job from a Blink 1.X version to a Blink 2.X version, perform the following steps. The state data of the job cannot be retained, so you must specify a start offset that meets your business requirements when you restart the job.

    1. Terminate the job. After the job is terminated, the state data is cleared.

    2. In the lower-right corner of the Development page, select Blink 2.2.7 from the Flink Version drop-down list. Then, publish the job.

    3. Start the job on the Administration page and specify the start offset.

      If Step 3 fails, perform the following steps for troubleshooting:

      • Verify and fix the SQL code, and repeat Steps 1 to 3.

      • If the SQL code cannot be fixed, roll back the Blink version to the original version.

      • If a JSON plan cannot be generated, specify the following parameters:

        • blink.job.option.jmMemMB=4096

        • blink.job.submit.timeoutInSeconds=600

    For information about the JAR packages for the user-defined extensions (UDXs) in Blink 2.0.1, see Overview. The following error may occur if the version of the UDX JAR package is earlier than the required version or if a package conflict exists:

    code:[30016], brief info:[get app plan failed], context info:[detail:[java.lang.NoClassDefFoundError: org/apache/flink/table/functions/aggfunctions/DoubleSumWithRetractAggFunction
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:788)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)