This topic describes how to configure a Spark SQL job.


Prerequisites

A project is created. For more information, see Manage projects.

Background information

Note By default, a Spark SQL job is submitted in yarn-client mode.

Procedure


  1. Log on to the Alibaba Cloud E-MapReduce console with an Alibaba Cloud account.
  2. Click the Data Platform tab.
  3. In the Projects section, find the target project and click Edit Job in the Actions column.
  4. In the left-side navigation pane, right-click the required folder and choose Create Job from the shortcut menu.
    Note You can also right-click the folder to create a subfolder, rename the folder, or delete the folder.
  5. In the dialog box that appears, set the Name and Description parameters, and select Spark SQL from the Job Type drop-down list.
    This option indicates that a Spark SQL job will be created. You can use the following command syntax to submit a Spark SQL job:
    spark-sql [options] -e {SQL_CONTENT}
    • options: the setting of the SPARK_CLI_PARAMS parameter. To configure this parameter, choose Job Settings > Advanced Settings, click add in the Environment Variables section, and then add the setting of the SPARK_CLI_PARAMS parameter, for example, SPARK_CLI_PARAMS="--executor-memory 1g --executor-cores 2".
    • SQL_CONTENT: the SQL statements that you enter in the job editor.
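    The syntax above can be sketched as follows. This is only an illustration of how the pieces combine; E-MapReduce assembles and submits the actual command for you, and the variable names here mirror the parameter names only for clarity:

```shell
# Sketch: how the SPARK_CLI_PARAMS setting and the editor content
# map onto the submitted spark-sql command (assumed values).
SPARK_CLI_PARAMS="--executor-memory 1g --executor-cores 2"  # from Job Settings > Advanced Settings
SQL_CONTENT="show databases;"                               # from the job editor

echo "spark-sql ${SPARK_CLI_PARAMS} -e \"${SQL_CONTENT}\""
```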
  6. Click OK.
  7. Enter the Spark SQL statements in the Content field.


    -- SQL statement example
    -- The size of SQL statements cannot exceed 64 KB.
    show databases;
    show tables;
    -- LIMIT 2000 is automatically used for the SELECT statement.
    select * from test1;
  8. Click Save.
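Because the SQL content is limited to 64 KB, you can pre-check the byte size of your statements locally before pasting them into the Content field. A minimal sketch, assuming your statements are saved in a local file (the file name is only illustrative; the console enforces the limit regardless):

```shell
# Hypothetical local pre-check for the 64 KB SQL content limit.
SQL_FILE="job.sql"  # assumed local copy of the statements
printf '%s\n' 'show databases;' 'show tables;' 'select * from test1;' > "$SQL_FILE"

SIZE=$(wc -c < "$SQL_FILE")
if [ "$SIZE" -le 65536 ]; then
  echo "OK: ${SIZE} bytes, within the 64 KB limit"
else
  echo "Too large: ${SIZE} bytes, exceeds the 64 KB limit"
fi
```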