
MaxCompute: Use a PyODPS node to pass parameters

Last Updated: Jul 22, 2024

This topic describes how to use a PyODPS node to pass parameters.

Prerequisites

Make sure that the following operations are performed:

  • MaxCompute is activated. For more information, see Activate MaxCompute.

  • DataWorks is activated. For more information, see Purchase guide.

  • A workflow is created in the DataWorks console. In this example, a workflow is created for a DataWorks workspace in basic mode. For more information, see Create a workflow.

Procedure

  1. Prepare test data.

    1. Create a partitioned table and a source table, and import data to the source table. For more information, see Create tables and upload data.

      In this example, the following table creation statements and source data are used.

      • Execute the following statement to create a partitioned table named user_detail:

        create table if not exists user_detail
        (
          userid    BIGINT comment 'user ID',
          job       STRING comment 'job type',
          education STRING comment 'education level'
        ) comment 'user information table'
        partitioned by (dt STRING comment 'date', region STRING comment 'region');
      • Execute the following statement to create a source table named user_detail_ods:

        create table if not exists user_detail_ods
        (
          userid    BIGINT comment 'user ID',
          job       STRING comment 'job type',
          education STRING comment 'education level',
          dt STRING comment 'date',
          region STRING comment 'region'
        );
      • Create a source data file named user_detail.txt and save the following data to the file. Import the data to the user_detail_ods table. A programmatic alternative is sketched after the sample data.

        0001,Internet,bachelor,20190715,beijing
        0002,education,junior college,20190716,beijing
        0003,finance,master,20190715,shandong
        0004,Internet,master,20190715,beijing
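      If you prefer to load the sample data with code instead of uploading the file in the console, the following minimal sketch writes the same rows through the implicit MaxCompute entry point o that a PyODPS node provides. Note that userid is a BIGINT, so the leading zeros in user_detail.txt are not preserved.

        # Sketch: write the sample rows to the source table.
        # Assumes the implicit entry point `o` of a PyODPS node.
        records = [
            [1, 'Internet', 'bachelor', '20190715', 'beijing'],
            [2, 'education', 'junior college', '20190716', 'beijing'],
            [3, 'finance', 'master', '20190715', 'shandong'],
            [4, 'Internet', 'master', '20190715', 'beijing'],
        ]
        o.write_table('user_detail_ods', records)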
    2. Right-click the workflow and choose Create Node > MaxCompute > ODPS SQL.

    3. In the Create Node dialog box, specify Name and click Confirm.

    4. On the configuration tab of the ODPS SQL node, enter the following code in the code editor:

      insert overwrite table user_detail partition (dt,region)
      select userid,job,education,dt,region from user_detail_ods;
    5. Click the Run icon in the toolbar to insert the data from the user_detail_ods table into the user_detail partitioned table. You can verify the result as shown in the sketch below.
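      To confirm that the insert created the expected partitions, you can run a quick check. A minimal sketch, assuming the implicit PyODPS entry point o and the table names used above:

        # Sketch: confirm that one partition exists and count its rows.
        t = o.get_table('user_detail')
        if t.exist_partition('dt=20190715,region=beijing'):
            with t.open_reader(partition='dt=20190715,region=beijing') as reader:
                print('rows in partition: %d' % reader.count)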

  2. Use a PyODPS node to pass parameters.

    1. Log on to the DataWorks console.

    2. In the left-side navigation pane, click Workspaces.

    3. Find your workspace, and choose Shortcuts > Data Development in the Actions column.

    4. On the DataStudio page, right-click the created workflow and choose Create Node > MaxCompute > PyODPS 2.

    5. In the Create Node dialog box, specify Name and click Confirm.

    6. On the configuration tab of the PyODPS 2 node, enter the following code in the code editor (a SQL-based alternative is sketched after the code):

      import sys
      reload(sys)
      # Change the default encoding format. 
      sys.setdefaultencoding('utf8')
      # Print the dt parameter that is passed in through the args dictionary. 
      print('dt=' + args['dt'])
      # Obtain the user_detail table. 
      t = o.get_table('user_detail')
      # Open a reader on the partition that the dt parameter specifies. 
      with t.open_reader(partition='dt=' + args['dt'] + ',region=beijing') as reader1:
          count = reader1.count
          print("Query data in the partitioned table:")
          # Read the records while the reader is still open. 
          for record in reader1:
              print record[0], record[1], record[2]
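      The args dictionary exposes the parameters that you configure for the node, so the same dt value can also drive a parameterized SQL statement. A minimal sketch of this alternative, assuming the same o entry point and the tables created above:

        # Sketch: query the same partition with SQL instead of a table reader.
        sql = "select userid, job, education from user_detail " \
              "where dt='%s' and region='beijing'" % args['dt']
        with o.execute_sql(sql).open_reader() as reader2:
            for record in reader2:
                print record['userid'], record['job'], record['education']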
    7. Click the Run with Parameters icon in the toolbar.

    8. In the Parameters dialog box, configure parameters and click Run.

      Parameter description:

      • Resource Group Name: Select Common scheduler resource group.

      • dt: Set this parameter to dt=20190715.


    9. View the execution result of the node on the Runtime Log tab. Expected output for the sample data is sketched below.
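      Given the sample data, the dt=20190715,region=beijing partition contains two rows, so the log should include output similar to the following (userid is a BIGINT, so the leading zeros from user_detail.txt are not preserved):

        dt=20190715
        Query data in the partitioned table:
        1 Internet bachelor
        4 Internet master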