This topic describes how to create, register, and use a user-defined aggregate function (UDAF) in Realtime Compute for Apache Flink.

Definition

A UDAF aggregates multiple values into a single value. The mapping between the input and output data of a UDAF is many-to-one: multiple input records are aggregated to produce one output value.
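
As a plain-Python analogy, the same many-to-one shape appears in any built-in aggregation; the sample data here is purely illustrative:
    # Many inputs, one output: the shape of every aggregate function.
    records = [1, 2, 3, 4]
    print(sum(records))  # 4 input records -> 1 output value: 10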

Limits

Services provided by fully managed Flink are subject to the constraints of its deployment and network environments. Therefore, take note of the following limits when you develop Python UDFs in fully managed Flink:
  • Only Apache Flink 1.12 and later are supported.
  • Python 3.7.9 is pre-installed on a fully managed Flink cluster. Therefore, you must develop your code in Python 3.7.9 (a quick local check follows this list).
  • JDK 1.8 is used in the runtime environment of fully managed Flink. If your Python job depends on a third-party JAR package, make sure that the JAR package is compatible with JDK 1.8.
  • Only open source Scala 2.11 is supported. If your Python job depends on a third-party JAR package, make sure that the JAR package that corresponds to Scala 2.11 is used.
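
Because jobs run on the pre-installed interpreter, a quick local check can catch version mismatches before you package your code. This is an optional sketch, not part of the official workflow:
    import sys

    # Warn if the local interpreter differs from the Python 3.7 line that is
    # pre-installed on fully managed Flink clusters.
    if sys.version_info[:2] != (3, 7):
        print("Warning: developing on Python %s, but fully managed Flink runs "
              "Python 3.7.9." % sys.version.split()[0])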

Create a UDAF

Note Flink provides sample code for Python user-defined extensions (UDXs) that you can use as a starting point. The sample code includes implementations of Python UDFs, Python UDAFs, and Python user-defined table-valued functions (UDTFs). This topic describes how to create a UDAF in the Windows operating system.
  1. Download and decompress the python_demo-master package to your on-premises machine.
  2. In the main menu bar of PyCharm, choose File > Open to open the decompressed python_demo-master package.
  3. Double-click the udfs.py file in the \python_demo-master\udx directory. Then, modify the content of the file based on your business requirements.
    In this example, weighted_avg implements the weighted average of the current data and historical data. An optional local smoke test for this function is shown after these steps.
    from pyflink.common import Row
    from pyflink.table import AggregateFunction, DataTypes
    from pyflink.table.udf import udaf
    
    
    class WeightedAvg(AggregateFunction):
    
        def create_accumulator(self):
            # The accumulator stores Row(weighted sum, total weight).
            return Row(0, 0)
    
        def get_value(self, accumulator: Row) -> float:
            # Guard against division by zero before any data is accumulated.
            if accumulator[1] == 0:
                return 0.0
            else:
                return accumulator[0] / accumulator[1]
    
        def accumulate(self, accumulator: Row, value, weight):
            # Fold one input record into the running totals.
            accumulator[0] += value * weight
            accumulator[1] += weight
    
        def retract(self, accumulator: Row, value, weight):
            # Reverse accumulate() so the function works on retraction streams.
            accumulator[0] -= value * weight
            accumulator[1] -= weight
    
    
    weighted_avg = udaf(f=WeightedAvg(),
                        result_type=DataTypes.DOUBLE(),
                        accumulator_type=DataTypes.ROW([
                            DataTypes.FIELD("f0", DataTypes.BIGINT()),
                            DataTypes.FIELD("f1", DataTypes.BIGINT())]))
  4. Go to the \python_demo-master directory that contains the udx folder and run the following command to package it:
    zip -r python_demo.zip udx

    If the python_demo.zip package appears in the \python_demo-master\ directory, the UDAF is created.
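
Before you upload the package, you can optionally smoke-test the function with a local PyFlink installation. The sketch below assumes apache-flink 1.12 or later is installed and is run from the python_demo-master directory; the import path is an assumption based on the package layout above:
    from pyflink.table import EnvironmentSettings, TableEnvironment
    from udx.udfs import weighted_avg  # hypothetical import path; adjust to your layout

    # Minimal local smoke test for the UDAF.
    t_env = TableEnvironment.create(
        EnvironmentSettings.new_instance().in_streaming_mode().build())
    t_env.create_temporary_function("weighted_avg", weighted_avg)

    # Three (a, b) rows; b is the weight.
    src = t_env.from_elements([(1, 2), (3, 4), (5, 6)], ['a', 'b'])
    t_env.create_temporary_view("src", src)

    # Expected result: (1*2 + 3*4 + 5*6) / (2 + 4 + 6) = 44 / 12 ≈ 3.67
    t_env.execute_sql("SELECT weighted_avg(a, b) AS avg_value FROM src").print()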

Register a UDAF

For more information about how to register a UDAF, see .
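
For reference, open source Flink SQL can also register a packaged Python UDAF with DDL. The sketch below is an illustration under stated assumptions, not the console workflow: the fully qualified name udx.udfs.weighted_avg is inferred from the package layout above, and python.files is the open source option for shipping Python dependencies:
    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(
        EnvironmentSettings.new_instance().in_streaming_mode().build())

    # Ship the package so the workers can import it (open source option; the
    # fully managed Flink console handles this when you upload the zip).
    t_env.get_config().get_configuration().set_string(
        "python.files", "python_demo.zip")

    # The fully qualified name below assumes the udx/udfs.py layout shown above.
    t_env.execute_sql(
        "CREATE TEMPORARY FUNCTION weighted_avg AS 'udx.udfs.weighted_avg' "
        "LANGUAGE PYTHON")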

Use a UDAF

After a UDAF is registered, you can perform the following steps to use the UDAF:
  1. Use Flink SQL to create a job. For more information, see Develop a job.
    The following statements compute the weighted average of field a in the ASI_UDAF_Source table, using field b as the weight. Sample code:
    CREATE TEMPORARY TABLE ASI_UDAF_Source (
      a   BIGINT,
      b   BIGINT
    ) WITH (
      'connector' = 'datagen'
    );
    
    CREATE TEMPORARY TABLE ASI_UDAF_Sink (
      avg_value  DOUBLE
    ) WITH (
      'connector' = 'blackhole'
    );
    
    INSERT INTO ASI_UDAF_Sink
    SELECT weighted_avg(a, b)
    FROM ASI_UDAF_Source;
  2. On the Deployments page in the console of fully managed Flink, find the job that you want to start, and click Start in the Actions column.

    After the job starts, the weighted average of the current and historical values of field a in the ASI_UDAF_Source table, computed with field b as the weight, is written to the ASI_UDAF_Sink table for each input row.
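
To make these streaming semantics concrete, the following plain-Python sketch traces, with hypothetical sample records, the running weighted average that the job emits as each record arrives:
    # Hypothetical (a, b) records arriving on the stream; b is the weight.
    rows = [(10, 1), (20, 3), (30, 2)]
    weighted_sum = 0
    total_weight = 0
    for a, b in rows:
        weighted_sum += a * b               # what accumulate() adds to f0
        total_weight += b                   # what accumulate() adds to f1
        print(weighted_sum / total_weight)  # get_value() after each record
    # Output: 10.0, 17.5, 21.666...: the running weighted average per input row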