This topic provides answers to frequently asked questions about SQL jobs of Realtime Compute for Apache Flink, including errors that are returned during job development and job O&M.

What do I do if the error message "undefined" appears?

What do I do if the error message "Object '****' not found" appears?

  • Problem description
    After you click Run, the error message Object '****' not found appears.
  • Cause

    When you execute DDL and DML statements in the same draft, the table in the DDL statement is declared by using CREATE TABLE instead of CREATE TEMPORARY TABLE.

  • Solution

    When you execute DDL and DML statements in the same draft, declare CREATE TEMPORARY TABLE instead of CREATE TABLE in the DDL statement, as shown in the following example.
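
    The following example is a minimal sketch of this pattern. The table names, schemas, and connector options are placeholders for illustration.

    -- Declare both tables as temporary so that the DDL and DML statements
    -- can be executed together in the same draft.
    CREATE TEMPORARY TABLE datagen_source (
      id BIGINT,
      name VARCHAR
    ) WITH (
      'connector' = 'datagen'
    );

    CREATE TEMPORARY TABLE print_sink (
      id BIGINT,
      name VARCHAR
    ) WITH (
      'connector' = 'print'
    );

    INSERT INTO print_sink SELECT id, name FROM datagen_source;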

What do I do if the error message "Only a single 'INSERT INTO' is supported" appears?

  • Problem description
    After you click Run, the error message Only a single 'INSERT INTO' is supported appears.
  • Cause

    The draft contains multiple DML statements, but the statements are not written between the key statements BEGIN STATEMENT SET; and END;.

  • Solution

    Write the DML statements between BEGIN STATEMENT SET; and END;, as shown in the following example. For more information, see INSERT INTO statement.
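
    A minimal sketch follows. The source and sink table names are placeholders.

    BEGIN STATEMENT SET;      -- Required when a draft contains multiple DML statements.

    INSERT INTO sink_a SELECT id, name FROM source_table;
    INSERT INTO sink_b SELECT id, COUNT(*) FROM source_table GROUP BY id;

    END;                      -- Required.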

What do I do if the error message "The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'" appears?

  • Problem description
    Caused by: org.apache.flink.table.api.ValidationException: The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'
        at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.validatePrimaryKeyIfEnableParallel(MySqlTableSourceFactory.java:186)
        at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.createDynamicTableSource(MySqlTableSourceFactory.java:85)
        at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:134)
        ... 30 more
  • Cause

    If the version of the Flink compute engine is vvr-3.0.7-flink-1.12 or earlier, the MySQL CDC connector reads data from a MySQL CDC source table in only one thread. If the version of the Flink compute engine is vvr-4.0.8-flink-1.13 or later, the MySQL CDC connector can read data in multiple threads at the same time. In this case, the data is sharded based on the primary key of the source table, so you must configure the primary key.

  • Solution
    • If the version of the Flink compute engine is vvr-4.0.8-flink-1.13 or later and you want to read data from a MySQL CDC source table in multiple threads at the same time, we recommend that you define the primary key in the DDL statement that is used to create the MySQL CDC source table, as shown in the sketch after this list.
    • If the version of the Flink compute engine is vvr-4.0.8-flink-1.13 or later but you do not want to read data from the MySQL CDC source table in multiple threads at the same time, set the scan.incremental.snapshot.enabled parameter to false.
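
    The following DDL is a minimal sketch of the first option. The columns and most WITH options are placeholders; the primary key declaration is the relevant part.

    CREATE TEMPORARY TABLE mysql_cdc_source (
      id BIGINT,
      name VARCHAR,
      -- A primary key is required when scan.incremental.snapshot.enabled is true.
      PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
      'connector' = 'mysql-cdc',
      'hostname' = '<yourHostname>',
      'port' = '3306',
      'username' = '<yourUsername>',
      'password' = '<yourPassword>',
      'database-name' = '<yourDatabase>',
      'table-name' = '<yourTable>'
      -- For the second option, disable parallel reading instead:
      -- ,'scan.incremental.snapshot.enabled' = 'false'
    );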

What do I do if the error message "exceeded quota: resourcequota" appears?

  • Problem description
    When the job is started, the error message exceeded quota: resourcequota appears.
  • Cause

    The job fails to be started because the resources of the current project are insufficient.

  • Solution

    You must reconfigure the project resources. For more information, see Create and manage a project.

What do I do if the error message "Exceeded checkpoint tolerable failure threshold" appears?

  • Problem description
    When the job is running, the error message Exceeded checkpoint tolerable failure threshold appears.
  • Cause

    The maximum number of checkpoint failures allowed in a task is not specified. By default, a failover is triggered each time a checkpoint fails.

  • Solution
    1. On the right side of the Draft Editor page in the console of fully managed Flink, click the Advanced tab.
    2. In the Additional Configuration section, enter the following parameter:
      execution.checkpointing.tolerable-failed-checkpoints: num
      You must replace num with the number of checkpoint failures that are allowed in the task. This parameter must be set to 0 or a positive integer. If the parameter is set to 0, no checkpoint exceptions or failures are allowed.
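      For example, the following setting allows five checkpoint failures before a failover is triggered. The value 5 is only an illustration.
      execution.checkpointing.tolerable-failed-checkpoints: 5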

What do I do if the error message "Flink version null is not configured for sql" appears?

  • Problem description
    StatusRuntimeException: INTERNAL: Flink version null is not configured for sql.
  • Cause

    The Ververica Runtime (VVR) version of the system is updated to VVR 4.0.8. As a result, the version information about the Flink compute engine of the job cannot be obtained.

  • Solution
    On the right side of the Draft Editor page, click the Advanced tab and select the required version from the Engine Version drop-down list in the Configuration section.
    Note If you want to use the SQL preview feature, you must check whether the engine version that you selected on the Create Session Cluster page is correct.

What do I do if the error message "java.util.concurrent.CompletionException: java.lang.IllegalArgumentException" appears?

  • Problem description
     Caused by: java.util.concurrent.CompletionException: java.lang.IllegalArgumentException: The bucket name "fakeBucket" is invalid. A bucket name must: 1) be comprised of lower-case characters, numbers or dash(-); 2) start with lower case or numbers; 3) be between 3-63 characters long. 
  • Cause

    In the Additional Configuration section of the Advanced tab, the state.backend parameter is configured but the state.checkpoints.dir parameter is not configured.

  • Solution
    If you configure the state.backend parameter, you must also configure the state.checkpoints.dir parameter, as shown in the following sketch.
    Note You can also delete the state.backend parameter.
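
    The following lines are a minimal sketch of the paired configuration in the Additional Configuration section. The bucket name and path are placeholders.

    state.backend: filesystem
    state.checkpoints.dir: oss://<yourBucketName>/flink/checkpoints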

What do I do if the error message "The difference between the request time and the current time is too large" appears?

  • Problem description
    Caused by: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.OSSException: The difference between the request time and the current time is too large.
    [ErrorCode]: RequestTimeTooSkewed
    [RequestId]: 61691C47AF60CA3639812A28
    [HostId]: oss-bucket-name.oss-cn-shanghai-internal.aliyuncs.com
    [ResponseError]:
    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>RequestTimeTooSkewed</Code>
      <Message>The difference between the request time and the current time is too large.</Message>
      <RequestId>61691C47AF60C******12A28</RequestId>
      <HostId>******.oss-cn-shanghai-internal.aliyuncs.com</HostId>
      <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
      <RequestTime>2021-10-15T05:57:31.000Z</RequestTime>
      <ServerTime>2021-10-15T06:14:31.000Z</ServerTime>
    </Error>
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.utils.ExceptionFactory.createOSSException(ExceptionFactory.java:100) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSErrorResponseHandler.handle(OSSErrorResponseHandler.java:70) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.handleResponse(ServiceClient.java:257) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:140) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:70) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:83) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:145) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:102) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.internal.OSSBucketOperation.listObjects(OSSBucketOperation.java:411) ~[?:?]
        at org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss.OSSClient.listObjects(OSSClient.java:443) ~[?:?]
        at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.listObjects(AliyunOSSFileSystemStore.java:506) ~[?:?]
        at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.getFileStatus(AliyunOSSFileSystem.java:264) ~[?:?]
        at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.validatePath(AliyunOSSFileSystem.java:548) ~[?:?]
        at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.mkdirs(AliyunOSSFileSystem.java:532) ~[?:?]
        at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2326) ~[?:?]
        at org.apache.flink.fs.osshadoop.common.HadoopFileSystem.mkdirs(HadoopFileSystem.java:196) ~[?:?]
  • Cause

    The versioning feature is enabled for the OSS bucket, but no lifecycle rules are configured to periodically remove the delete markers and unnecessary historical versions of objects. If a large number of delete markers exist, LIST operations slow down and the job may fail to start. For more information, see Precautions.

  • Solution

    You can configure lifecycle rules for a bucket in the OSS console to periodically remove the delete markers and unnecessary historical versions of objects. For more information, see Configure lifecycle rules to manage object versions.

What do I do if the error message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears?

  • Problem description
    The message INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss appears in the job logs.
  • Cause

    Before OSS creates a directory, OSS checks whether the directory already exists. If the directory does not exist, the log message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears. Flink jobs are not affected.

  • Solution

    Add <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/> to the log template. For more information, see Configure the logs of a historical job instance to be exported.

What do I do if the error message "DateTimeParseException: Text 'xxx' could not be parsed" appears?

  • Problem description

    When a job is running, the error message DateTimeParseException: Text 'xxx' could not be parsed appears.

  • Cause

    If the VVR version is earlier than VVR 4.0.13 and the date format that you declare in a DDL statement is inconsistent with the format of the actual data, Realtime Compute for Apache Flink reports an error.

  • Solution
    Update the engine version to VVR 4.0.13 or later. In VVR 4.0.13 and later, the parsing of TIMESTAMP data in a JSON-formatted string is optimized for the following formats: JSON, Canal JSON, Debezium JSON, Maxwell JSON, and Ogg JSON. The following data parsing capabilities are optimized:
    • Data of the TIMESTAMP type that is declared in a DDL statement can be parsed as data in the DATE format.
    • Data of the TIMESTAMP_LTZ type that is declared in a DDL statement can be parsed as data in the DATE or TIMESTAMP format.
      Realtime Compute for Apache Flink converts data of the TIMESTAMP type to data of the TIMESTAMP_LTZ type based on the time zone that is specified by the table.local-time-zone parameter. For example, you can declare the following information in the DDL statement:
      CREATE TABLE source (
        date_field TIMESTAMP,
        timestamp_field TIMESTAMP_LTZ(3)
      ) WITH (
        'format' = 'json',
        ...
      );

      If the system parses {"date_field": "2020-09-12", "timestamp_field": "2020-09-12T12:00:00"} and the current time zone is UTC+8, the parsing result is "+I(2020-09-12T00:00:00, 2020-09-12T04:00:00.000Z)".

    • Data of the TIMESTAMP or TIMESTAMP_LTZ type can be automatically parsed.

      Before the optimization, when the system parses TIMESTAMP data in a JSON-formatted string, you must set the timestamp-format.standard parameter to SQL or ISO-8601 to ensure that the data can be correctly parsed. After the optimization, Realtime Compute for Apache Flink automatically infers the format of the TIMESTAMP data and then parses the data. If the data cannot be parsed, an error is returned. The value of the timestamp-format.standard parameter that you configure is used only as a hint for the parser.

What do I do if the error message "DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'" appears?

  • Problem description
    Caused by: java.sql.SQLSyntaxErrorException: DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'
        at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
        ...
  • Cause

    If a WHERE clause is added to an SQL statement that processes CDC data streams, each UPDATE operation in the source generates a BEFORE UPDATE data record and an AFTER UPDATE data record, and Realtime Compute for Apache Flink sends both records to the downstream. The downstream identifies the BEFORE UPDATE data record as a DELETE operation. In this case, the user that performs operations on the MySQL result table must have the DELETE permission.

  • Solution

    Check whether retract operations exist in the SQL logic. If retract operations exist, grant the DELETE permission to the user that performs operations on the MySQL result table, as shown in the following example.
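
    The following MySQL statement is a minimal sketch. The database name, table name, user, and host are placeholders.

    -- Grant the DELETE permission on the result table to the job user.
    GRANT DELETE ON `your_db`.`table_name` TO 'userName'@'%';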

What do I do if the error message "java.io.EOFException: SSL peer shut down incorrectly" appears?

  • Problem description
    Caused by: java.io.EOFException: SSL peer shut down incorrectly
        at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord(SSLSocketInputRecord.java:239) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:190) ~[?:1.8.0_302]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:109) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1392) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1300) ~[?:1.8.0_302]
        at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:435) ~[?:1.8.0_302]
        at com.mysql.cj.protocol.ExportControlled.performTlsHandshake(ExportControlled.java:347) ~[?:?]
        at com.mysql.cj.protocol.StandardSocketFactory.performTlsHandshake(StandardSocketFactory.java:194) ~[?:?]
        at com.mysql.cj.protocol.a.NativeSocketConnection.performTlsHandshake(NativeSocketConnection.java:101) ~[?:?]
        at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:308) ~[?:?]
        at com.mysql.cj.protocol.a.NativeAuthenticationProvider.connect(NativeAuthenticationProvider.java:204) ~[?:?]
        at com.mysql.cj.protocol.a.NativeProtocol.connect(NativeProtocol.java:1369) ~[?:?]
        at com.mysql.cj.NativeSession.connect(NativeSession.java:133) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:949) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:819) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:449) ~[?:?]
        at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:242) ~[?:?]
        at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[?:?]
        at org.apache.flink.connector.jdbc.internal.connection.SimpleJdbcConnectionProvider.getOrEstablishConnection(SimpleJdbcConnectionProvider.java:128) ~[?:?]
        at org.apache.flink.connector.jdbc.internal.AbstractJdbcOutputFormat.open(AbstractJdbcOutputFormat.java:54) ~[?:?]
        ... 14 more
  • Cause

    The secure sockets layer (SSL) protocol is enabled for the MySQL database, but the JDBC URL that is used to access the database does not explicitly disable SSL. As a result, the SSL handshake between the job and the database fails.

  • Solution

    We recommend that you append useSSL=false to the JDBC URL of the MySQL dimension table. Example: 'url'='jdbc:mysql://***.***.***.***:3306/test?characterEncoding=utf-8&useSSL=false'. A sketch of the table DDL follows.
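
    The following DDL is a minimal sketch. The schema and the other WITH options are placeholders, and the connector name and option names may differ depending on your connector version.

    CREATE TEMPORARY TABLE mysql_dim (
      id BIGINT,
      name VARCHAR
    ) WITH (
      'connector' = 'mysql',
      -- Append useSSL=false to disable the SSL handshake.
      'url' = 'jdbc:mysql://***.***.***.***:3306/test?characterEncoding=utf-8&useSSL=false',
      'tableName' = '<yourTable>',
      'userName' = '<yourUsername>',
      'password' = '<yourPassword>'
    );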

What do I do if the error message "binlog probably contains events generated with statement or mixed based replication format" appears?

  • Problem description
    Caused by: io.debezium.DebeziumException: Received DML 'insert into table_name (...) values (...)' for processing, 
    binlog probably contains events generated with statement or mixed based replication format
  • Cause

    The binary log format of the MySQL database is STATEMENT or MIXED. The MySQL CDC connector can consume binary logs only in the ROW format.

  • Solution

    Run the show variables like "binlog_format" statement to check the binary log format, and change the format to ROW if necessary. Only the ROW binary log format is supported by the MySQL CDC connector. The following example shows the statements.
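
    A minimal sketch, run on the MySQL server. Changing the global format requires the appropriate privileges, and existing sessions keep their old format until they reconnect.

    -- Check the current binary log format.
    SHOW VARIABLES LIKE 'binlog_format';

    -- Switch the format to ROW.
    SET GLOBAL binlog_format = 'ROW';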