
Realtime Compute for Apache Flink:Common SQL errors

Last Updated: Apr 15, 2024

This topic provides answers to some frequently asked questions about SQL in Realtime Compute for Apache Flink, including questions about drafts and deployments and errors that are returned during draft development and deployment O&M.

Data output from the LocalGroupAggregate operator is suspended for a long period of time and no data is generated. Why does this happen?

  • Problem description

    The table.exec.mini-batch.size parameter is not configured for the deployment, or the parameter is set to a negative value. The deployment includes both WindowAggregate and GroupAggregate operators, and the time column of the WindowAggregate operator is proctime, which indicates the processing time. When the deployment is started, the topology contains the LocalGroupAggregate operator but does not contain the MiniBatchAssigner operator.

    image

    The following sample code provides an example of a deployment that includes both WindowAggregate and GroupAggregate operators. The time column of the WindowAggregate operator is proctime, which indicates the processing time.

    CREATE TEMPORARY TABLE s1 (
      a INT,
      b INT,
      ts as PROCTIME(),
      PRIMARY KEY (a) NOT ENFORCED
    ) WITH (
      'connector'='datagen',
      'rows-per-second'='1',
      'fields.b.kind'='random',
      'fields.b.min'='0',
      'fields.b.max'='10'
    );
    
    CREATE TEMPORARY TABLE sink (
      a BIGINT,
      b BIGINT
    ) WITH (
      'connector'='print'
    );
    
    CREATE TEMPORARY VIEW window_view AS
    SELECT window_start, window_end, a, sum(b) as b_sum FROM TABLE(TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '2' SECONDS)) GROUP BY window_start, window_end, a;
    
    INSERT INTO sink SELECT count(distinct a), b_sum FROM window_view GROUP BY b_sum;
  • Cause

    If the table.exec.mini-batch.size parameter is not configured for the deployment, or the parameter is set to a negative value, the managed memory is used to cache data in miniBatch processing mode. In this case, the MiniBatchAssigner operator is not generated and cannot send watermark messages to the compute operators to trigger final calculation and data output. Final calculation and data output are triggered only when one of the following conditions is met: the managed memory is full, a CHECKPOINT command is received and checkpointing has not started, or the deployment is canceled. For more information, see table.exec.mini-batch.size. If the checkpoint interval is set to an excessively large value, the LocalGroupAggregate operator does not trigger data output for a long period of time.

  • Solutions

    • Decrease the checkpoint interval. This way, the LocalGroupAggregate operator can automatically trigger data output before checkpointing is performed.

    • Use the heap memory to cache data. This way, data output is automatically triggered when the amount of data cached on the LocalGroupAggregate operator reaches the value of the table.exec.mini-batch.size parameter. To configure this parameter, go to the Deployments page in the development console of Realtime Compute for Apache Flink, click the Configuration tab of the desired deployment, and then set the table.exec.mini-batch.size parameter to a positive value N in the Other Configuration field of the Parameters section, as shown in the sketch that follows.
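      The following sketch shows the Other Configuration entry. The value 20000 is only an illustrative assumption; tune it based on your workload and the available heap memory.

        table.exec.mini-batch.size: 20000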

Why does a time difference exist between the current time and the time in the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab, as well as between the current time and the time in the value of the Task InputWatermark metric in the Watermark section of the Metrics tab?

  • Cause 1: A field of the TIMESTAMP_LTZ (TIMESTAMP(p) WITH LOCAL TIME ZONE) data type is used to declare the watermark in the source table. As a result, a time difference exists between the current time and the values of the watermark-related parameters.

    The following example shows the difference between the watermark that is declared by using a field of the TIMESTAMP_LTZ data type and the watermark that is declared by using a field of the TIMESTAMP data type.

    • The following sample code shows that the field that is used to declare the watermark in the source table is of the TIMESTAMP_LTZ data type.

      CREATE TEMPORARY TABLE s1 (
        a INT,
        b INT,
        ts as CURRENT_TIMESTAMP, -- Use the built-in function CURRENT_TIMESTAMP to generate data of the TIMESTAMP_LTZ data type. 
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND 
      ) WITH (
        'connector'='datagen',
        'rows-per-second'='1',
        'fields.b.kind'='random','fields.b.min'='0','fields.b.max'='10'
      );
      
      CREATE TEMPORARY TABLE t1 (
        k INT,
        ts_ltz timestamp_ltz(3),
        cnt BIGINT
      ) WITH ('connector' = 'print');
      
      -- Obtain the calculation result. 
      INSERT INTO t1
      SELECT b, window_start, COUNT(*) FROM
      TABLE(
          TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '5' SECOND))
      GROUP BY b, window_start, window_end;
      Note

      The legacy window syntax produces the same calculation result as the table-valued function (TVF) window syntax. The following sample code provides an example of the legacy window syntax.

      SELECT b, TUMBLE_END(ts, INTERVAL '5' SECOND), COUNT(*) FROM s1 GROUP BY TUMBLE(ts, INTERVAL '5' SECOND), b;

      The following figures show that, after the draft is deployed and published in the development console of Realtime Compute for Apache Flink, an 8-hour time difference exists between the current time (UTC+8) and the time in the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab, and between the current time (UTC+8) and the time in the value of the Task InputWatermark metric in the Watermark section of the Metrics tab.

      • Watermark&Low Watermark

        image

      • Task InputWatermark

        image

    • The following sample code shows that the field that is used to declare the watermark in the source table is of the TIMESTAMP (TIMESTAMP(p) WITHOUT TIME ZONE) data type.

      CREATE TEMPORARY TABLE s1 (
        a INT,
        b INT,
        -- No time zone information is included in the timestamp of the simulated data source. In this case, the timestamp is incremented by one second from 2024-01-31 01:00:00. 
        ts as TIMESTAMPADD(SECOND, a, TIMESTAMP '2024-01-31 01:00:00'),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND 
      ) WITH (
        'connector'='datagen',
        'rows-per-second'='1',
        'fields.a.kind'='sequence','fields.a.start'='0','fields.a.end'='100000',
        'fields.b.kind'='random','fields.b.min'='0','fields.b.max'='10'
      );
      
      CREATE TEMPORARY TABLE t1 (
        k INT,
        ts_ltz timestamp_ltz(3),
        cnt BIGINT
      ) WITH ('connector' = 'print');
      
      -- Obtain the calculation result. 
      INSERT INTO t1
      SELECT b, window_start, COUNT(*) FROM
      TABLE(
          TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '5' SECOND))
      GROUP BY b, window_start, window_end;

      After you deploy and publish a draft in the development console of Realtime Compute for Apache Flink, the time in the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab and the time in the value of the Task InputWatermark metric in the Watermark section of the Metrics tab are the same as the current time. In this example, the current time is the time of the simulated data.

      • Watermark&Low Watermark

        image

      • Task InputWatermark

        image

  • Cause 2: The time zone of the display time in the development console of Realtime Compute for Apache Flink is different from the time zone of the display time on the Apache Flink UI.

    The display time in the development console of Realtime Compute for Apache Flink is in UTC+0. However, the display time on the Apache Flink UI is the local time that is converted based on the local time zone that the Apache Flink UI obtains by using the browser. The following example shows the difference between the display time in the development console of Realtime Compute for Apache Flink and the display time on the Apache Flink UI when UTC+8 is used. The display time in the development console of Realtime Compute for Apache Flink is 8 hours earlier than the display time on the Apache Flink UI.

    • Development console of Realtime Compute for Apache Flink

      image

    • Apache Flink UI

      image

What do I do if the error message "Object '****' not found" appears?

  • Problem description

    After you click Validate, the error message shown in the following figure appears.

  • Cause

    When you execute DDL and DML statements in the same draft, you do not declare CREATE TEMPORARY TABLE in the DDL statement.

  • Solution

    When you execute DDL and DML statements in the same draft, declare CREATE TEMPORARY TABLE instead of CREATE TABLE in the DDL statement, as shown in the following sketch.
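    The following minimal sketch assumes a hypothetical datagen source and print sink. The point is that the tables referenced by the INSERT INTO statement are declared with CREATE TEMPORARY TABLE in the same draft.

      CREATE TEMPORARY TABLE src (
        a INT
      ) WITH (
        'connector' = 'datagen'
      );

      CREATE TEMPORARY TABLE snk (
        a INT
      ) WITH (
        'connector' = 'print'
      );

      INSERT INTO snk SELECT a FROM src;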

What do I do if the error message "Only a single 'INSERT INTO' is supported" appears?

  • Problem description

    After you click Validate, the error message shown in the following figure appears.

  • Cause

    Multiple DML statements are executed in the draft, but the statements are not written between the key statements BEGIN STATEMENT SET; and END;.

  • Solution

    Write the DML statements between BEGIN STATEMENT SET; and END;, as shown in the following sketch. For more information, see INSERT INTO statement.
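    The following minimal sketch assumes two hypothetical sink tables, sink_a and sink_b, and a source table src that are declared earlier in the same draft.

      BEGIN STATEMENT SET;
      INSERT INTO sink_a SELECT a, b FROM src;
      INSERT INTO sink_b SELECT a, COUNT(*) FROM src GROUP BY a;
      END;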

What do I do if the error message "The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'" appears?

  • Problem description

    Caused by: org.apache.flink.table.api.ValidationException: The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'
        at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.validatePrimaryKeyIfEnableParallel(MySqlTableSourceFactory.java:186)
        at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.createDynamicTableSource(MySqlTableSourceFactory.java:85)
        at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:134)
        ... 30 more
  • Cause

    In Realtime Compute for Apache Flink whose engine version is vvr-3.0.7-flink-1.12 or earlier, the MySQL Change Data Capture (CDC) source does not support parallel data reading. In Realtime Compute for Apache Flink whose engine version is vvr-4.0.8-flink-1.13 or later, data can be sharded based on the primary key to support parallel data reading. This feature is controlled by the scan.incremental.snapshot.enabled parameter, whose default value is true. This indicates that the feature is enabled by default. A primary key must be declared in the DDL statement when this feature is enabled.

  • Solutions

    If you use Realtime Compute for Apache Flink whose engine version is vvr-4.0.8-flink-1.13 or later, use one of the following solutions based on your business requirements:

    • If you want to read data from the MySQL CDC source in parallel, configure the primary key in the DDL statement.

    • If you do not want to read data from the MySQL CDC source in parallel, set the scan.incremental.snapshot.enabled parameter to false. For more information about the parameter configuration, see Parameters in the WITH clause. A sketch of both options is provided after this list.
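    The following minimal sketch shows both options for a MySQL CDC source table. The column names, connection values, and table names are placeholders.

      CREATE TEMPORARY TABLE mysql_cdc_source (
        id BIGINT,
        name STRING,
        PRIMARY KEY (id) NOT ENFORCED  -- Required when parallel reading is enabled.
      ) WITH (
        'connector' = 'mysql-cdc',
        'hostname' = '<yourHostname>',
        'port' = '3306',
        'username' = '<yourUsername>',
        'password' = '<yourPassword>',
        'database-name' = '<yourDatabase>',
        'table-name' = '<yourTable>'
        -- Alternatively, remove the primary key declaration and disable parallel reading:
        -- ,'scan.incremental.snapshot.enabled' = 'false'
      );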

What do I do if the error message "exceeded quota: resourcequota" appears?

  • Problem description

    The error message appears when a deployment is started.

  • Cause

    The deployment fails to be started because the resources of the current project are insufficient.

  • Solution

    Reconfigure the project resources. For more information, see Reconfigure resources.

What do I do if the error message "Exceeded checkpoint tolerable failure threshold" appears?

  • Problem description

    The error message appears when a deployment is running.

    org.apache.flink.util.FlinkRuntimeException:Exceeded checkpoint tolerable failure threshold.
      at org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleJobLevelCheckpointException(CheckpointFailureManager.java:66)
  • Cause

    The maximum number of checkpoint failures allowed in a task is not specified. By default, a failover is triggered each time a checkpoint fails.

  • Solutions

    1. On the Deployments page, click the name of the desired deployment.

    2. On the Configuration tab of the deployment details page, click Edit in the upper-right corner of the Parameters section.

    3. In the Other Configurations field, enter the following code:

      execution.checkpointing.tolerable-failed-checkpoints: num

      You must replace num with the maximum number of checkpoint failures that are allowed in the task. This parameter must be set to 0 or a positive integer. If the parameter is set to 0, no checkpoint exceptions or failures are allowed.

What do I do if the error message "Flink version null is not configured for sql" appears?

  • Problem description

    StatusRuntimeException: INTERNAL: Flink version null is not configured for sql.
  • Cause

    The Ververica Runtime (VVR) version of the system was updated to VVR 4.0.8. As a result, the engine version information of the deployment cannot be obtained.

  • Solutions

    Click the Configurations tab on the right side of the SQL Editor page and select the required version from the Engine Version drop-down list.

    Note

    If you want to use the debugging feature, check whether the engine version that is selected on the Session Clusters page is correct.

What do I do if the error message "java.util.concurrent.CompletionException: java.lang.IllegalArgumentException" appears?

  • Problem description

     Caused by: java.util.concurrent.CompletionException: java.lang.IllegalArgumentException: The bucket name "fakeBucket" is invalid. A bucket name must: 1) be comprised of lower-case characters, numbers or dash(-); 2) start with lower case or numbers; 3) be between 3-63 characters long. 
  • Cause

    On the Configuration tab of the deployment details page, you configure the state.backend parameter in the Other Configuration field of the Parameters section but do not configure the state.checkpoints.dir parameter.

  • Solutions

    • If you want to retain the state.backend parameter, add the configuration of the state.checkpoints.dir parameter, as shown in the sketch after this list.

    • If you do not want to retain the state.backend parameter, delete this parameter.
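    The following minimal sketch shows both parameters in the Other Configuration field, assuming the filesystem state backend and an OSS checkpoint directory. The bucket name and path are placeholders.

      state.backend: filesystem
      state.checkpoints.dir: oss://<yourBucketName>/flink/checkpoints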

What do I do if the error message "The difference between the request time and the current time is too large" appears?

  • Problem description

    Caused by: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.OSSException: The difference between the request time and the current time is too large.
    [ErrorCode]: RequestTimeTooSkewed
    [RequestId]: 61691C47AF60CA3639812A28
    [HostId]: oss-bucket-name.oss-cn-shanghai-internal.aliyuncs.com
    [ResponseError]:
    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
     <Code>RequestTimeTooSkewed</Code>
     <Message>The difference between the request time and the current time is too large.</Message>
     <RequestId>61691C47AF60C******12A28</RequestId>
     <HostId>******.oss-cn-shanghai-internal.aliyuncs.com</HostId>
     <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
     <RequestTime>2021-10-15T05:57:31.000Z</RequestTime>
     <ServerTime>2021-10-15T06:14:31.000Z</ServerTime>
    </Error>
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:100) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:70) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:257) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:140) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:70) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:83) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:145) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:102) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSBucketOperation.listObjects(OSSBucketOperation.java:411) ~[?:?]
     at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.OSSClient.listObjects(OSSClient.java:443) ~[?:?]
     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.listObjects(AliyunOSSFileSystemStore.java:506) ~[?:?]
     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.getFileStatus(AliyunOSSFileSystem.java:264) ~[?:?]
     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.validatePath(AliyunOSSFileSystem.java:548) ~[?:?]
     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.mkdirs(AliyunOSSFileSystem.java:532) ~[?:?]
     at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2326) ~[?:?]
     at org.apache.flink.fs.osshadoop.common.HadoopFileSystem.mkdirs(HadoopFileSystem.java:196) ~[?:?]
  • Cause

    The versioning feature is enabled for OSS buckets, but lifecycle rules are not configured to periodically remove the delete markers and unnecessary historical versions of objects. If a large number of delete markers exist, the LIST operation may slow down and the deployment may fail to start. For more information, see Activate Realtime Compute for Apache Flink.

  • Solutions

    Configure lifecycle rules for a bucket in the OSS console to periodically remove the delete markers and unnecessary historical versions of objects. For more information, see Configure lifecycle rules based on the last modified time for versioning-enabled buckets to reduce storage costs.

What do I do if the error message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears?

  • Problem description

    image

  • Cause

    Before OSS creates a directory, OSS checks whether the directory already exists. If the directory does not exist, the message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears. Realtime Compute for Apache Flink deployments are not affected.

  • Solutions

    Add <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/> to the log template. For more information, see Configure parameters to export logs of a deployment.

What do I do if the error message "DateTimeParseException: Text 'xxx' could not be parsed" appears?

  • Problem description

    When a deployment is running, the error message DateTimeParseException: Text 'xxx' could not be parsed appears.

  • Cause

    If the VVR version is earlier than VVR 4.0.13 and the date format that you declare in a DDL statement is inconsistent with the format of the actual data, Realtime Compute for Apache Flink reports an error.

  • Solutions

    Update the engine version of your deployment to VVR 4.0.13 or later. In VVR 4.0.13 and later, the parsing of TIMESTAMP data in JSON-formatted strings is optimized. The following JSON formats are supported: JSON, Canal JSON, Debezium JSON, Maxwell JSON, and Ogg JSON. The following data parsing capabilities are optimized:

    • A field that is declared as the TIMESTAMP type in a DDL statement can parse data in the DATE format.

    • A field that is declared as the TIMESTAMP_LTZ type in a DDL statement can parse data in the DATE or TIMESTAMP format.

      Realtime Compute for Apache Flink converts data of the TIMESTAMP type to data of the TIMESTAMP_LTZ type based on the time zone that is specified by the table.local-time-zone parameter. For example, you can declare the following information in the DDL statement:

      CREATE TABLE source (
        date_field TIMESTAMP,
        timestamp_field TIMESTAMP_LTZ(3)
      ) WITH (
        'format' = 'json',
        ...
      );

      If the system parses {"date_field": "2020-09-12", "timestamp_field": "2020-09-12T12:00:00"} and the current time zone is UTC+8, the parsing result is "+I(2020-09-12T00:00:00, 2020-09-12T04:00:00.000Z)".

    • Data of the TIMESTAMP or TIMESTAMP_LTZ type can be automatically parsed.

      Before the optimization, when the system parsed TIMESTAMP data in a JSON-formatted string, you had to set the timestamp-format.standard parameter to SQL or ISO-8601 to ensure that the data could be parsed correctly. After the optimization, Realtime Compute for Apache Flink automatically infers the format of the TIMESTAMP data and then parses the data. If the data cannot be parsed correctly, an error is returned. The value of the timestamp-format.standard parameter that you configure is used only as a hint for the parser.

What do I do if the error message "DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'" appears?

  • Problem description

    Caused by: java.sql.SQLSyntaxErrorException: DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'
        at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
        ...
  • Cause

    If a WHERE clause is added to an SQL statement that processes MySQL CDC data streams, Realtime Compute for Apache Flink sends both a BEFORE UPDATE record and an AFTER UPDATE record to the downstream for each UPDATE operation. The downstream identifies the BEFORE UPDATE record as a DELETE operation. As a result, the user that performs operations on the MySQL result table must have the DELETE permission.

  • Solutions

    Check whether retract operations exist in the SQL logic. If retract operations exist, grant the DELETE permission to the user that performs operations on the MySQL result table, as shown in the following sketch.
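    The following minimal sketch shows how the permission might be granted in MySQL. The database, table, user, and host values are placeholders.

      GRANT DELETE ON your_db.your_table TO 'userName'@'%';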

What do I do if the error message "java.io.EOFException: SSL peer shut down incorrectly" appears?

  • Problem description

    Caused by: java.io.EOFException: SSL peer shut down incorrectly
        at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord(SSLSocketInputRecord.java:239) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:190) ~[?:1.8.0_302]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:109) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1392) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1300) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:435) ~[?:1.8.0_302]
        at com.mysql.cj.protocol.ExportControlled.performTlsHandshake(ExportControlled.java:347) ~[?:?]
        at com.mysql.cj.protocol.StandardSocketFactory.performTlsHandshake(StandardSocketFactory.java:194) ~[?:?]
        at com.mysql.cj.protocol.a.NativeSocketConnection.performTlsHandshake(NativeSocketConnection.java:101) ~[?:?]
        at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:308) ~[?:?]
        at com.mysql.cj.protocol.a.NativeAuthenticationProvider.connect(NativeAuthenticationProvider.java:204) ~[?:?]
        at com.mysql.cj.protocol.a.NativeProtocol.connect(NativeProtocol.java:1369) ~[?:?]
        at com.mysql.cj.NativeSession.connect(NativeSession.java:133) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:949) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:819) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:449) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:242) ~[?:?]
        at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[?:?]
        at org.apache.flink.connector.jdbc.internal.connection.SimpleJdbcConnectionProvider.getOrEstablishConnection(SimpleJdbcConnectionProvider.java:128) ~[?:?]
        at org.apache.flink.connector.jdbc.internal.AbstractJdbcOutputFormat.open(AbstractJdbcOutputFormat.java:54) ~[?:?]
        ... 14 more
  • Cause

    The MySQL driver version is 8.0.27, and the SSL protocol is enabled by default in this driver version. However, the MySQL database is not accessed in SSL mode by default. As a result, the SSL handshake fails.

  • Solutions

    We recommend that you set the connector parameter to rds in the WITH clause and append characterEncoding=utf-8&useSSL=false to the URL parameter for the MySQL dimension table. Example:

    'url'='jdbc:mysql://***.***.***.***:3306/test?characterEncoding=utf-8&useSSL=false'
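    The following minimal sketch shows a dimension table declaration that uses this URL. The parameter names other than connector and url follow the common pattern for the rds connector but should be checked against the connector documentation; all values are placeholders.

      CREATE TEMPORARY TABLE dim_mysql (
        id BIGINT,
        name VARCHAR,
        PRIMARY KEY (id) NOT ENFORCED
      ) WITH (
        'connector' = 'rds',
        'url' = 'jdbc:mysql://***.***.***.***:3306/test?characterEncoding=utf-8&useSSL=false',
        'tableName' = '<yourTableName>',
        'userName' = '<yourUserName>',
        'password' = '<yourPassword>'
      );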

What do I do if the error message "binlog probably contains events generated with statement or mixed based replication format" appears?

  • Problem description

    Caused by: io.debezium.DebeziumException: Received DML 'insert into table_name (...) values (...)' for processing, 
    binlog probably contains events generated with statement or mixed based replication format
  • Cause

    The MySQL CDC source supports only binary logs in the ROW format. Binary logs in the STATEMENT or MIXED format cannot be processed.

  • Solutions

    1. Run the show variables like "binlog_format" command on the MySQL database to query the current format of binary logs.

      Note

      You can run the show global variables like "binlog_format" command to view the global format of binary logs.

    2. Change the format of binary logs to ROW on the MySQL database. A sketch of the statements for steps 1 and 2 is provided after this list.

    3. Restart the deployment to make the configurations take effect.
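    The following minimal sketch shows the statements for steps 1 and 2, assuming a self-managed MySQL instance on which you have the privileges to change global variables. For a managed instance such as ApsaraDB RDS for MySQL, the binlog_format parameter is typically changed in the console instead.

      -- Step 1: Query the current format of binary logs.
      SHOW VARIABLES LIKE 'binlog_format';
      -- Step 2: Change the format of binary logs to ROW. This affects new sessions only.
      SET GLOBAL binlog_format = 'ROW';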

What do I do if the error message "java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory" appears?

  • Problem description

    Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
        at org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
        at org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
        at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:426)
        ... 66 more
  • Cause

    • The JAR package contains a Janino dependency that causes a conflict.

    • Specific Realtime Compute for Apache Flink dependencies such as flink-table-planner and flink-table-runtime are mistakenly added to the JAR package of the user-defined function (UDF) or connector.

  • Solutions

    Check whether the JAR package contains org.codehaus.janino.CompilerFactory. Class conflicts may occur because the class loading sequence on different machines is different. To resolve this issue, perform the following steps:

    1. On the Deployments page, click the name of the desired deployment.

    2. On the Configuration tab of the deployment details page, click Edit in the upper-right corner of the Parameters section.

    3. In the Other Configurations field, enter the following code:

      classloader.parent-first-patterns.additional: org.codehaus.janino

      If the conflict is caused by a different class, replace org.codehaus.janino with the name of the conflicting class.