
E-MapReduce SDK release notes

Last Updated: Jan 05, 2018

Description

  • emr-core package: implements the interaction between Hadoop/Spark and OSS data sources. It is present in the cluster’s runtime environment by default, so you do not need to include emr-core when packaging a job; if you do include it, keep its version consistent with the emr-core version in the cluster.

  • emr-sdk_2.10 package: implements the interaction between Spark and other Alibaba Cloud data sources, such as Log Service, MNS, ONS, and ODPS. You must include emr-sdk_2.10 when packaging a job; otherwise, class-not-found errors will be thrown.

    <dependency>
        <groupId>com.aliyun.emr</groupId>
        <artifactId>emr-core</artifactId>
        <version>1.1.3.1</version>
    </dependency>
    <dependency>
        <groupId>com.aliyun.emr</groupId>
        <artifactId>emr-sdk_2.10</artifactId>
        <version>1.1.3.1</version>
    </dependency>
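Because emr-core is already present in the cluster’s runtime environment, one way to compile against it without bundling it into the job package is to declare it with Maven’s provided scope (a sketch, using the version shown above):

```xml
<dependency>
    <groupId>com.aliyun.emr</groupId>
    <artifactId>emr-core</artifactId>
    <version>1.1.3.1</version>
    <!-- provided: available at compile time, excluded from the packaged job -->
    <scope>provided</scope>
</dependency>
```

With provided scope, the packaged job omits emr-core, so the emr-core version already installed in the cluster is used at run time.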

v1.1.3.1

SDK

  • Fixed dependency conflicts between the MNS package and the Spark/Hadoop packages.

  • Fixed null pointer exceptions in Spark Streaming + MNS scenarios.

  • Fixed several bugs in the Python SDK.

  • Spark Streaming + Loghub now supports a custom start time and position.

Core

  • Fixed the problem that Hadoop could not process native Snappy files. E-MapReduce can now process Snappy files that Log Service archives to OSS.

  • Fixed the problem that Spark could not process Snappy-compressed files.

  • Fixed the problem that OSS did not support the two OutputCommitter algorithms in Hadoop 2.7.2.

  • Improved OSS read/write performance in Hadoop/Spark.

  • Fixed abnormal Log4j output in Spark job logs.

v1.1.2

  • Fixed the “ConnectionClosedException” that occurred when a job read or wrote OSS slowly.

  • Fixed the problem that some Hadoop commands were unavailable for OSS data sources.

  • Fixed the “java.text.ParseException: Unparseable date” problem.

  • Optimized emr-core to support local debugging and running.

  • Maintained compatibility with the “_$folder$” files generated by earlier versions by interpreting them as directories instead of regular files.

  • Added a retry mechanism for OSS read/write failures in Hadoop/Spark.

v1.1.1

  • Fixed unbalanced usage across multiple local disks when writing OSS temporary files.

  • Removed the “_$folder$” tag file created when a job creates an OSS directory.

v1.1.0

  • Upgraded the LogHub SDK to version 0.6.2, removed the client DB mode, and switched to server DB mode.

  • Upgraded the OSS SDK to version 2.2.0 to fix runtime exceptions caused by OSS SDK bugs.

  • Added MNS support.

  • Compatibility

    • With the 1.0.x series SDKs:
      • Interface: compatible.
      • Namespace: incompatible. The package structure has been adjusted; the package name com.aliyun has been replaced with com.aliyun.emr.
  • Changed the project groupId from com.aliyun to com.aliyun.emr. The updated POM dependency is:

    <dependency>
        <groupId>com.aliyun.emr</groupId>
        <artifactId>emr-sdk_2.10</artifactId>
        <version>1.1.3.1</version>
    </dependency>

v1.0.5

  • Optimized the LoghubUtils interface and its input parameters.

  • Optimized the LogStore data output format and added the “topic” and “source” fields.

  • Added a configuration for the interval at which data is pulled from the LogStore: the spark.logservice.fetch.interval.millis parameter, which defaults to 200 milliseconds.

  • Updated the ODPS SDK dependency to version 0.20.7-public.
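The fetch interval above can be set like any other Spark property, for example in spark-defaults.conf (a sketch; the value is in milliseconds, and 100 is an arbitrary illustration):

```
spark.logservice.fetch.interval.millis  100
```

The same property can also be passed with --conf on spark-submit or set programmatically on the SparkConf.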

v1.0.4

  • Downgraded the guava dependency to version 11.0.2 to avoid conflicts with the guava version in Hadoop.

  • Computing tasks now support files larger than 5 GB.

v1.0.3

  • Added configuration parameters for the OSS client.

v1.0.2

  • Fixed an OSS URI parsing bug.

v1.0.1

  • Optimized OSS URI settings.

  • Added MQ support.

  • Added Log Service support.

  • Added support for OSS append writes.

  • Added support for uploading OSS data with the multipart upload method.

  • Added support for copying OSS data with the upload part copy method.

Java Doc

This document describes how to use the SDK in Spark to read and write data in Alibaba Cloud OSS, ODPS, Log Service, and MQ. Click to download the latest version of the documentation.
