
Common errors and FAQ

Last Updated: Oct 26, 2021

This topic describes common errors and provides answers to frequently asked questions.

ApsaraDB RDS whitelist

| Region | Whitelist of classic networks | Whitelist of VPCs |
| --- | --- | --- |
| China (Hangzhou) | 11.197.14.0/24, 11.197.15.0/24 | 100.104.191.0/24 |
| China (Shanghai) | 11.217.75.0/24, 11.222.38.0/24, 11.222.93.0/24, 11.223.69.0/24 | 100.104.136.0/24 |
| China (Beijing) | 11.220.203.0/24, 11.220.204.0/24, 11.220.216.0/24, 11.220.217.0/24, 11.223.107.0/24 | 100.104.33.0/24 |
| China (Shenzhen) | 11.216.113.0/24, 11.217.52.0/24, 11.220.54.0/24, 11.220.56.0/24 | 100.104.55.0/24 |
| Singapore (Singapore) | 11.216.101.0/24, 11.219.129.0/24 | 100.104.163.0/24 |
| China North 2 Ali Gov 1 | 11.199.246.0/24, 11.199.247.0/24 | 100.104.254.0/26 |
| China (Zhangjiakou) | 11.218.202.0/24, 11.218.203.0/24 | 100.104.195.0/26 |
| India (Mumbai) | 11.207.230.0/24, 11.207.231.0/24, 11.207.248.0/24 | 100.104.254.0/26 |
| Malaysia (Kuala Lumpur) | 11.204.39.0/24, 11.204.40.0/24, 11.204.41.0/24, 11.48.249.0/24, 11.48.250.0/24 | 100.104.13.0/24 |

Error related to permissions

Error message:

com.aliyun.datahub.exception.NoPermissionException: No permission, authentication failed in ram

The error message indicates that the RAM user is not authorized. For more information about how to authorize a RAM user, see Access control.

Error related to an ApsaraDB RDS instance in a VPC

Error message:

InvalidInstanceId.NotFound:The instance not in current vpc

Solution:

  1. View the DescribeDBInstanceAttribute operation, which is used to query the details of an ApsaraDB RDS instance. You can also call the operation from code, as sketched after this procedure.

  2. Click Debug. In the pane on the right side of the page that appears, select a region and enter the instance ID.

  3. Click Initiate. Find VpcCloudInstanceId in the returned results.

  4. Go to the panel in which you configure the synchronization of data from DataHub to ApsaraDB RDS. Then, enter the obtained instance ID of the virtual private cloud (VPC) in the Instance ID field.
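The following Java sketch shows how the same query can be issued programmatically, assuming the Alibaba Cloud SDK for Java with the aliyun-java-sdk-rds module. The region ID, AccessKey pair, and instance ID are hypothetical placeholders; verify the getter names against the SDK version that you use.

import com.aliyuncs.DefaultAcsClient;
import com.aliyuncs.IAcsClient;
import com.aliyuncs.exceptions.ClientException;
import com.aliyuncs.profile.DefaultProfile;
import com.aliyuncs.rds.model.v20140815.DescribeDBInstanceAttributeRequest;
import com.aliyuncs.rds.model.v20140815.DescribeDBInstanceAttributeResponse;

public class QueryVpcCloudInstanceId {
    public static void main(String[] args) throws ClientException {
        // Hypothetical placeholders: use your own region ID, AccessKey pair, and instance ID.
        DefaultProfile profile = DefaultProfile.getProfile(
                "cn-hangzhou", "<yourAccessKeyId>", "<yourAccessKeySecret>");
        IAcsClient client = new DefaultAcsClient(profile);

        DescribeDBInstanceAttributeRequest request = new DescribeDBInstanceAttributeRequest();
        request.setDBInstanceId("<yourInstanceId>");

        // Call DescribeDBInstanceAttribute and print VpcCloudInstanceId from the returned items.
        DescribeDBInstanceAttributeResponse response = client.getAcsResponse(request);
        for (DescribeDBInstanceAttributeResponse.DBInstanceAttribute attribute : response.getItems()) {
            System.out.println(attribute.getVpcCloudInstanceId());
        }
    }
}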

Errors related to JAR package conflicts

If you use DataHub SDK for Java, you may encounter the following JAR package conflicts:

  • InjectionManagerFactory not found

    • By default, DataHub SDK for Java depends on Jersey client V2.22.1. If you use a Jersey client later than V2.22.1, you must add the following dependency to your project:

  <dependency>
      <groupId>org.glassfish.jersey.inject</groupId>
      <artifactId>jersey-hk2</artifactId>
      <version>xxx</version>
  </dependency>
  • java.lang.NoSuchFieldError: EXCLUDE_EMPTY

    • The version of the jersey-common library is earlier than V2.22.1. We recommend that you use the jersey-common library of V2.22.1 or later.

  • Error reading entity from input stream

    • Cause 1: The version of the HTTP client is earlier than V4.5.2.

    • Cause 2: The SDK of the current version does not support specific data types. You must update the SDK.

  • Versions of jersey-apache-connector later than V2.22.1 have bugs related to TCP connections.

    • Use V2.22.1.

  • java.lang.NoSuchMethodError: okhttp3.HttpUrl.get(Ljava/lang/String;)Lokhttp3/HttpUrl;

    • Run the mvn dependency:tree command to check whether the version of the OkHttp client conflicts with other dependencies.

  • javax/ws/rs/core/Response$Status$Family

    • Check the dependencies of the javax.ws.rs package. For example, check whether the javax.ws.rs package depends on jsr311-api.

Other errors and FAQ

  • Parse body failed, Offset: 0

    • Generally, this error occurs when data is being written. Earlier versions of Apsara Stack DataHub do not support binary transmission based on Protocol Buffers, but binary transmission is enabled by default in specific SDKs. In this case, you must manually disable binary transmission in these SDKs, as shown in the following examples.

    • Java SDK

datahubClient = DatahubClientBuilder.newBuilder()
    .setDatahubConfig(
        new DatahubConfig(Constant.endpoint,
            // The third parameter specifies whether to enable binary transmission.
            // Only servers of V2.12 or later support binary transmission. Set it to
            // false for earlier servers.
            new AliyunAccount(Constant.accessId, Constant.accessKey), false))
    .build();
    • Python SDK

# JSON mode: required for DataHub server versions 2.11 and earlier
dh = DataHub(access_id, access_key, endpoint, enable_pb=False)
    • Go SDK

config := &datahub.Config{
    // Disable binary transmission for servers earlier than V2.12.
    EnableBinary: false,
}
dh := datahub.NewClientWithConfig(accessId, accessKey, endpoint, config)
    • Logstash

      Add the following setting to the Logstash configuration to disable binary transmission:

enable_pb => false

  • Request body size exceeded

    • The error message indicates that the size of the request body exceeds the upper limit. For more information, see Limits.

  • Record field size not match.

    • The schema that you specified does not match the schema of the topic. We recommend that you call the getTopic method to obtain the schema, as shown in the following sketch.
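The following is a minimal sketch of this approach, assuming the datahubClient built in the earlier Java SDK example and a hypothetical tuple topic that contains a field named field1:

// Obtain the schema from the server instead of constructing it by hand.
GetTopicResult topicResult = datahubClient.getTopic("<project>", "<topic>");
RecordSchema schema = topicResult.getRecordSchema();

// Building the record data against the returned schema keeps the fields in sync with the topic.
TupleRecordData data = new TupleRecordData(schema);
data.setField("field1", "value1");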

  • The limit of query rate is exceeded.

    • To ensure the effective use of resources, DataHub limits the number of queries per second (QPS). This error occurs when the frequency of data reads and writes exceeds the upper limit. We recommend that you read and write data in batches, as sketched after this item. For example, you can write a batch of data every minute and read 1,000 records each time.
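As a sketch, a batched write with the DataHub SDK for Java collects records and sends them in a single request. The project and topic names and the preparedBatch collection are hypothetical placeholders:

// Collect prepared records and write them in one request instead of one request per record.
List<RecordEntry> entries = new ArrayList<>();
for (TupleRecordData data : preparedBatch) { // preparedBatch: records prepared in advance
    RecordEntry entry = new RecordEntry();
    entry.setRecordData(data);
    entries.add(entry);
}
datahubClient.putRecords("<project>", "<topic>", entries);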

  • Num of topics exceed limit

    • In the new version of DataHub, a project can contain a maximum of 20 topics.

  • SeekOutOfRange

    • The offset parameter is invalid or the offset has expired.

  • Offset session has changed

    • A subscription cannot be consumed by multiple consumers at the same time. Check whether multiple consumers in your program consume the same subscription.

  • Is the DECIMAL data type supported in the synchronization to MaxCompute?

    • Data of the DECIMAL type with no specified precision is supported. By default, a DECIMAL value can have up to 18 digits on each side of the decimal point.

  • What does addAttribute mean?

    • You can use the addAttribute() method to add optional, additional attributes to a record based on your business requirements, as shown in the following sketch.
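For example, with the DataHub SDK for Java (a sketch; the attribute key and value are hypothetical):

RecordEntry entry = new RecordEntry();
entry.setRecordData(data); // data: the TupleRecordData prepared for this record
// addAttribute attaches an optional key-value pair that is delivered with the record.
entry.addAttribute("source", "web-log");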

  • How do I delete data from a topic?

    • DataHub does not allow you to delete data from a topic. We recommend that you reset offsets to invalidate data.

  • The data in a shard is stored in a file in the specified Object Storage Service (OSS) path. The name of the file is randomly generated. If the file size exceeds 5 GB, another file is created to store the data from the shard. Can I modify this setting?

    • No, you cannot modify this setting.

  • What can I do if my AnalyticDB for MySQL instance cannot access a public endpoint?

    • You must apply for an internal endpoint in AnalyticDB for MySQL. Log on to your AnalyticDB for MySQL instance, execute the alter database set intranet_vip = true statement, and then execute the select internal_domain, internal_port from information_schemata statement.