
Spark SQL job configuration

Last Updated: May 04, 2018

By default, Spark SQL jobs are submitted in YARN mode.

  1. Log on to the Alibaba Cloud E-MapReduce console and open the Job List page.

  2. Click Create a job in the upper right corner to enter the job creation page.

  3. Enter the job name.

  4. Select the Spark SQL job type to create a Spark SQL job. Jobs of this type are submitted in the background with the following command:

    spark-sql [options] [cli option]
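As a minimal sketch of that command shape, the snippet below only composes the string that would be run: Spark options first (here, explicit YARN mode, matching the default mode noted above), then the CLI option taken from the Parameters box. The option values are illustrative assumptions, and spark-sql itself is not invoked.

```shell
# Sketch only: compose the background command; values are hypothetical.
SPARK_OPTIONS="--master yarn"          # assumed Spark option
CLI_OPTION='-e "show databases;"'      # assumed CLI option from the Parameters box
echo "spark-sql $SPARK_OPTIONS $CLI_OPTION"
```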
  5. In the Parameters box, enter the options that follow the spark-sql command.

    -e option

    With the -e option, you can write the SQL statements to run directly in the Parameters box of the job, for example:

    -e "show databases;"
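The quoted string after -e may also hold several statements separated by semicolons. The sketch below only echoes such a Parameters value; the database and table names are hypothetical.

```shell
# Illustrative Parameters value with two statements (names are made up).
PARAMS='-e "use demo_db; show tables;"'
echo "$PARAMS"
```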

    -f option

    The -f option specifies a Spark SQL script file. Preparing the script in advance and uploading it to OSS gives you more flexibility, and we recommend this mode, for example:

    -f ossref://your-bucket/your-spark-sql-script.sql
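As a sketch of what such a script file might contain, the snippet below writes a small example locally; in practice you would upload it to OSS and reference it with an ossref:// path as shown above. The table and column names are hypothetical.

```shell
# Write an example Spark SQL script (contents are illustrative).
cat > your-spark-sql-script.sql <<'EOF'
-- count records per day in a hypothetical "logs" table
SELECT dt, COUNT(*) AS cnt
FROM logs
GROUP BY dt;
EOF
# Show how many SELECT statements the script contains.
grep -c 'SELECT' your-spark-sql-script.sql
```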
  6. Select the policy for failed operations.

  7. Click OK to complete Spark SQL job configuration.
