This topic describes the data types and parameters that are supported by MaxCompute Reader and how to configure MaxCompute Reader by using the codeless user interface (UI) and code editor.

Background information

MaxCompute Reader reads data from MaxCompute. For more information about MaxCompute, see What is MaxCompute?

MaxCompute Reader uses Tunnel commands to read data from MaxCompute based on the information that you specify, such as the source project, table, partitions, and fields. For more information about common Tunnel commands, see Tunnel commands.

MaxCompute Reader cannot read views. It can read only partitioned and non-partitioned tables. DataWorks cannot map the fields in partitioned MaxCompute tables. If you want to read data from a partitioned MaxCompute table, you must specify each desired partition when you configure MaxCompute Reader. For example, if you want to read data from the partition pt=1,ds=hangzhou in the t0 table, you must specify pt=1,ds=hangzhou when you configure MaxCompute Reader. In addition, you can select some or all of the table fields, change the order in which the fields are arranged, or add constant fields and partition key columns. Partition key columns are not table fields.
Note
  • MaxCompute Reader cannot filter data. If you want MaxCompute Reader to read only specific data during data synchronization, you must first filter the required data into a separate table, such as by running a MaxCompute SQL statement, and then enable MaxCompute Reader to read the data from that table.
  • MaxCompute Reader cannot read data from external tables.

Data types

The following table lists the data types that are supported by MaxCompute Reader.
Category Data Integration data type MaxCompute data type
Integer LONG BIGINT, INT, TINYINT, and SMALLINT
Boolean BOOLEAN BOOLEAN
Date and time DATE DATETIME, TIMESTAMP, and DATE
Floating point DOUBLE FLOAT, DOUBLE, and DECIMAL
Binary BYTES BINARY
Complex STRING ARRAY, MAP, and STRUCT

Parameters

Parameter Description Required Default value
datasource The name of the data source. It must be the same as the name of the added data source. You can add data sources by using the code editor. Yes No default value
table The name of the table from which you want to read data. The name is not case-sensitive. Yes No default value
partition
The partitions from which you want to read data.
  • You can use Linux Shell wildcards to specify the partitions. An asterisk (*) matches any number of characters, and a question mark (?) matches a single character.
  • The partitions that you specify must exist in the source table. Otherwise, the system reports an error for the synchronization node. If you want the synchronization node to be successfully run even if the partitions that you specify do not exist in the source table, use the code editor to modify the code of the node. In addition, you must add "successOnNoPartition": true to the configuration of MaxCompute Reader.
For example, the partitioned table test contains four partitions: pt=1,ds=hangzhou, pt=1,ds=shanghai, pt=2,ds=hangzhou, and pt=2,ds=beijing. In this case, you can set the partition parameter based on the following instructions:
  • To read data from the partition pt=1,ds=hangzhou, specify "partition":"pt=1,ds=hangzhou".
  • To read data from all the ds partitions in the pt=1 partition, specify "partition":"pt=1,ds=*".
  • To read data from all the partitions in the test table, specify "partition":"pt=*,ds=*".
You can also specify other conditions to read data from partitions based on your business requirements.
  • To read data from the partition that stores the largest amount of data, add /*query*/ ds=(select MAX(ds) from DataXODPSReaderPPR) to the configuration of MaxCompute Reader.
  • To filter data by specifying filter conditions, add /*query*/ followed by a filter expression on the partition key column to the configuration of MaxCompute Reader. For example, /*query*/ pt>=20170101 and pt<20170110 indicates that you want to read the data that is generated from January 1, 2017 to January 9, 2017 from all the pt partitions in the test table.
Note MaxCompute Reader processes the content after /*query*/ as a WHERE clause.
Required only for partitioned tables No default value
column The names of the columns from which you want to read data. For example, the test table contains the id, name, and age columns.
  • To read data from the id, name, and age columns in sequence, specify "column":["id","name","age"] or "column":["*"].
    Note We recommend that you do not use "column":["*"]. If you specify "column":["*"], MaxCompute Reader reads data from all the columns in a source table in sequence. If the column order, data type, or number of columns is changed in the source table, the columns in the source and destination tables may be inconsistent. As a result, the data synchronization may fail, or the data synchronization results do not meet your expectation.
  • To read the data in the name and id columns in sequence, specify "column":["name","id"].
  • You can add constant fields to the source table to establish mappings between the source table columns and destination table columns. In this case, when you specify the column parameter, you must enclose each constant field in single quotation marks ('). For example, if you add the constant field 1988-08-08 08:08:08 to the source table and want to read data from the age, name, 1988-08-08 08:08:08, and id columns in sequence, specify "column":["age","name","'1988-08-08 08:08:08'","id"].
    The single quotation marks (') are used to identify constant columns. When MaxCompute Reader reads data from the source table, the constant column values that are read by MaxCompute Reader exclude the single quotation marks (').
    Note
    • MaxCompute Reader does not use SELECT statements to read data. Therefore, you cannot specify function fields.
    • The column parameter must explicitly specify all the columns from which you want to read data. The parameter cannot be left empty.
Yes No default value
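Putting the preceding parameters together, a reader configuration that uses a partition wildcard, tolerates missing partitions, and includes a constant field might look like the following sketch. The data source name is a placeholder, and the table and column names come from the examples above:

```json
{
    "stepType":"odps",
    "parameter":{
        "datasource":"my_odps_source",// Placeholder: use the name of your added data source.
        "table":"test",
        "partition":["pt=1,ds=*"],// Read all ds partitions in the pt=1 partition.
        "successOnNoPartition":true,// Allow the node to succeed even if no matching partition exists.
        "column":["age","name","'1988-08-08 08:08:08'","id"]// The third entry is a constant field.
    },
    "name":"Reader",
    "category":"reader"
}
```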

Configure MaxCompute Reader by using the codeless UI

Create a synchronization node and configure the node. For more information, see Configure a synchronization node by using the codeless UI.

Perform the following steps on the configuration tab of the synchronization node:
  1. Configure data sources.
    Configure Source and Target for the synchronization node.
    Parameter Description
    Connection The name of the data source from which you want to read data. This parameter is equivalent to the datasource parameter that is described in the preceding section.
    Development Project Name The name of the project in the development environment. You cannot change the value.
    Note This parameter is displayed only if the workspace is in standard mode.
    Production Project Name The name of the project in the production environment. You cannot change the value.
    Table The name of the table from which you want to read data. This parameter is equivalent to the table parameter that is described in the preceding section.
    Partition Key Column If your daily incremental data is stored in the partitions of a specific date, you can specify the partition information to synchronize the daily incremental data. For example, set pt to ${bizdate}.
    Note DataWorks cannot map the fields in partitioned MaxCompute tables. If you want to read data from a partitioned MaxCompute table, you must specify each desired partition when you configure MaxCompute Reader.
    Note In the code editor, if you want to synchronize all the columns in the source table, specify "column": ["*"]. You can directly specify the partitions from which you want to read data. You can also use wildcards to specify the partitions.
    • "partition":"pt=20140501/ds=*" indicates all the ds partitions in the pt=20140501 partition.
    • "partition":"pt=top?" indicates partitions whose names are top followed by a single character, such as pt=top1.
    You can specify the partitions from which you want to synchronize data. For example, if you want to synchronize data from the partition pt=${bdp.system.bizdate} in a MaxCompute table, you can add the pt column to the source table in the Mappings section. If the value of Type for pt is Unidentified, you can ignore the value and proceed to the next step.
    • To synchronize data in all partitions, specify pt=* for the Partition Key Column parameter.
    • To synchronize data in specific partitions, specify the required dates.
  2. Configure field mappings. This operation is equivalent to setting the column parameter that is described in the preceding section.
    Fields in the source on the left have a one-to-one mapping with fields in the destination on the right. You can click Add to add a field. To remove an added field, move the pointer over the field and click the Remove icon.
    Operation Description
    Map Fields with the Same Name Click Map Fields with the Same Name to establish mappings between fields with the same name. The data types of the fields must match.
    Map Fields in the Same Line Click Map Fields in the Same Line to establish mappings between fields in the same row. The data types of the fields must match.
    Delete All Mappings Click Delete All Mappings to remove the mappings that are established.
    Auto Layout Click Auto Layout. Then, the system automatically sorts the fields based on specified rules.
    Change Fields Click the Change Fields icon. In the Change Fields dialog box, you can manually edit the fields in the source table. Each field occupies a row. The first and the last blank rows are included, whereas other blank rows are ignored.
    Add Click Add to add a field. Take note of the following rules when you add a field:
    • You can enter constants. Each constant must be enclosed in single quotation marks ('), such as 'abc' and '123'.
    • You can use scheduling parameters, such as ${bizdate}.
    • If the field that you entered cannot be parsed, the value of Type for the field is Unidentified.
  3. Configure channel control policies.
    Parameter Description
    Expected Maximum Concurrency The maximum number of parallel threads that the synchronization node uses to read data from the source or write data to the destination. You can configure the parallelism for the synchronization node on the codeless UI.
    Bandwidth Throttling Specifies whether to enable bandwidth throttling. You can enable bandwidth throttling and specify a maximum transmission rate to avoid heavy read workloads on the source. We recommend that you enable bandwidth throttling and set the maximum transmission rate to an appropriate value based on the configurations of the source.
    Dirty Data Records Allowed The maximum number of dirty data records allowed.
    Distributed Execution The distributed execution mode that allows you to split your node into pieces and distribute them to multiple Elastic Compute Service (ECS) instances for parallel execution. This speeds up synchronization. If you use a large number of parallel threads to run your synchronization node in distributed execution mode, excessive access requests are sent to the data sources. Therefore, before you use the distributed execution mode, you must evaluate the access load on the data sources. You can enable this mode only if you use an exclusive resource group for Data Integration. For more information about exclusive resource groups for Data Integration, see Exclusive resource groups for Data Integration and Create and use an exclusive resource group for Data Integration.
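In the code editor, the Partition Key Column setting from step 1 maps to the partition parameter. The following sketch shows how a scheduling parameter can be used to synchronize the daily incremental partition; the partition key name pt is taken from the examples above:

```json
// Read the daily incremental partition; ${bdp.system.bizdate} is resolved at run time.
"partition":["pt=${bdp.system.bizdate}"]
// Alternatively, read all partitions by using a wildcard.
"partition":["pt=*"]
```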

Configure MaxCompute Reader by using the code editor

In the following code, a synchronization node is configured to read data from MaxCompute. For more information about how to configure a synchronization node by using the code editor, see Create a synchronization node by using the code editor.
Notice Delete the comments from the following code before you run the code:
{
    "type":"job",
    "version":"2.0",
    "steps":[
        {
            "stepType":"odps",// The reader type. 
            "parameter":{
                "partition":[],// The partitions from which you want to read data. 
                "isCompress":false,// Specifies whether to enable compression. 
                "datasource":"",// The name of the data source. 
                "column":[// The names of the columns from which you want to read data. 
                    "id"
                ],
                "emptyAsNull":true,
                "table":""// The name of the table from which you want to read data. 
            },
            "name":"Reader",
            "category":"reader"
        },
        { 
            "stepType":"stream",
            "parameter":{
            },
            "name":"Writer",
            "category":"writer"
        }
    ],
    "setting":{
        "errorLimit":{
            "record":"0"// The maximum number of dirty data records allowed. 
        },
        "speed":{
            "throttle": true,// Specifies whether to enable bandwidth throttling. The value false indicates that bandwidth throttling is disabled, and the value true indicates that bandwidth throttling is enabled. The mbps parameter takes effect only when the throttle parameter is set to true. 
            "concurrent":1, // The maximum number of parallel threads. 
            "mbps":"12"// The maximum transmission rate. Unit: MB/s.
        }
    },
    "order":{
        "hops":[
            {
                "from":"Reader",
                "to":"Writer"
            }
        ]
    }
}
If you want to specify the Tunnel endpoint of MaxCompute, you can use the code editor to configure the data source. To configure the data source, replace "datasource":"", in the preceding code with the parameters of the data source. The following code provides an example:
"accessId":"*******************",
"accessKey":"*******************",
"endpoint":"http://service.eu-central-1.maxcompute.aliyun-inc.com/api",
"odpsServer":"http://service.eu-central-1.maxcompute.aliyun-inc.com/api", 
"tunnelServer":"http://dt.eu-central-1.maxcompute.aliyun.com", 
"project":"*****",
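After the replacement, the parameter block of MaxCompute Reader would resemble the following sketch. The AccessKey pair and project name are placeholders that you must replace with your own values, and the endpoints are the example endpoints shown above:

```json
"parameter":{
    "accessId":"<yourAccessKeyId>",// Placeholder: your AccessKey ID.
    "accessKey":"<yourAccessKeySecret>",// Placeholder: your AccessKey secret.
    "endpoint":"http://service.eu-central-1.maxcompute.aliyun-inc.com/api",
    "odpsServer":"http://service.eu-central-1.maxcompute.aliyun-inc.com/api",
    "tunnelServer":"http://dt.eu-central-1.maxcompute.aliyun.com",
    "project":"<yourProjectName>",// Placeholder: your MaxCompute project.
    "table":"test",
    "partition":["pt=1,ds=hangzhou"],
    "column":["id","name","age"]
}
```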