Reading Guidance

Last Updated: Dec 26, 2016

If you are a first-time MaxCompute user, we recommend that you begin by reading the following sections:

  • MaxCompute Summary — This chapter gives a general introduction to MaxCompute and its main function modules. After reading it, you will have an overall understanding of MaxCompute.
  • Quick Start — This chapter walks you through detailed examples step by step: how to apply for an account, install the client, create a table, grant privileges to a user, import and export data, run SQL tasks, run UDFs, run MapReduce programs, and so on.
  • Basic Introduction — This chapter introduces essential terms and commonly used commands of MaxCompute, so that you become more familiar with how to operate it.

  • Tools — Before analyzing the data, you need to master how to download, configure, and use the client tools that MaxCompute provides.

After you are familiar with the modules mentioned above, we recommend that you go on to study the other modules.

If you are a Data Analyst

If you are a data analyst, you may want to read the following modules:

  • MaxCompute SQL: you can query and analyze massive data stored on MaxCompute. Its main functions include:

    • DDL support: you can manage tables and partitions with Create, Drop, and Alter statements.

    • You can retrieve records from a table with the Select clause, and filter for records that meet given conditions with the Where clause.

    • You can associate two tables with an equi-join.

    • You can perform aggregation with the Group By clause.

    • You can insert result records into another table with the Insert Overwrite/Into syntax.

    • You can perform a wide range of calculations with built-in functions and user-defined functions (UDFs).
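The SQL features above can be sketched in a short MaxCompute SQL session. This is a minimal illustration, not runnable outside a MaxCompute project; the table and column names (sale_detail, sale_summary, region, total) are hypothetical:

```sql
-- DDL: create a partitioned table.
CREATE TABLE IF NOT EXISTS sale_detail (
    shop_name STRING,
    total     BIGINT
) PARTITIONED BY (region STRING);

-- Filter with Where, aggregate with Group By,
-- and write the result into another table.
INSERT OVERWRITE TABLE sale_summary
SELECT region, SUM(total) AS region_total
FROM sale_detail
WHERE total > 0
GROUP BY region;
```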

If you are a Developer

If you are an experienced developer who understands distributed systems, and some of your data analysis cannot be expressed in SQL, we recommend that you learn the more advanced modules of MaxCompute:

  • MapReduce: MaxCompute provides a MapReduce programming interface. You can use its Java API to write MapReduce programs that process data in MaxCompute.

  • Graph: a framework for iterative graph computing. You model your problem as a graph composed of vertices (Vertex) and edges (Edge), each carrying a weight (Value); the graph is edited and evolved through iterations until a final result is produced.

  • Eclipse Plugin: helps you develop MapReduce programs, UDFs, and Graph programs with the Java SDK.

  • Tunnel: you can use the Tunnel service to upload batches of offline data to MaxCompute, or download batches of offline data from it.

  • DataHub Service: you can use the DataHub service to publish and subscribe to real-time data.

  • SDK: A toolkit available to developers.
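As a quick illustration of the Tunnel module above, a batch upload and download through the MaxCompute client console might look like this. The project and table names are hypothetical, and the exact options may vary by client version, so treat this as a sketch and consult the Tunnel documentation:

```
# Upload a local file into a MaxCompute table (hypothetical names).
tunnel upload log.txt test_project.test_table;

# Download the table back into a local file.
tunnel download test_project.test_table result.txt;
```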

If you are a Project Owner or a Project Administrator?

If you are a project owner or administrator, you need to read:

  • Security: by reading this chapter, you will learn how to grant privileges to a user, share resources across projects, set project protection, grant privileges through policies, and so on.
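A typical grant from the Security chapter can be sketched with the client's security commands. The account and table names here are hypothetical, and the exact syntax may differ across versions, so verify against the Security documentation:

```
-- Add a user to the project, then grant read access on one table
-- (run in the MaxCompute client; names are hypothetical).
add user ALIYUN$alice@aliyun.com;
grant Select on table sale_detail to user ALIYUN$alice@aliyun.com;
```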