In this tutorial, you will learn how to configure a Spark SQL job.
Note: By default, Spark SQL jobs are submitted in YARN mode.
- Log on to the Alibaba Cloud E-MapReduce console.
- In the top navigation bar, click Data Platform.
- In the Actions column, click Design Workflow next to the target project.
- On the left of the Job Editing page, right-click the folder in which you want to create the job and select New Job.
- In the New Job dialog box, enter the job name and description.
- Click OK.
Note You can also create subfolders, rename folders, and delete folders by right-clicking on them.
- Select the Spark SQL job type to create a Spark SQL job. This type of job is submitted in the background using the following command:
spark-sql [options] [cli option]
- Enter the options for the spark-sql command in the Content field.
- -e option
The -e option lets you enter the SQL statements to run directly in the Content field of the job. For example:
-e "show databases;"
- -f option
The -f option specifies a Spark SQL script file. Uploading prepared Spark SQL script files to OSS provides greater flexibility, and we recommend this mode. For example:
-f ossref://your-bucket/your-spark-query.sql
- Click Save to complete the Spark SQL job configuration.
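The -f mode described above assumes the script already resides in OSS. A minimal sketch of preparing and uploading such a script with the ossutil command-line tool follows; the file name, bucket name, and path are placeholders, not values from this tutorial:

```shell
# Write a sample Spark SQL script locally (file name is hypothetical).
cat > your-spark-query.sql <<'EOF'
-- Statements run in order, just as with -e.
SHOW DATABASES;
EOF

# Upload it to OSS with ossutil (bucket and path are placeholders).
ossutil cp your-spark-query.sql oss://your-bucket/your-spark-query.sql

# Then, in the job's Content field, reference the uploaded script:
# -f ossref://your-bucket/your-spark-query.sql
```

Note the ossref scheme in the Content field: it tells E-MapReduce to resolve the script from OSS when the job runs, whereas the upload itself uses the ordinary oss:// address.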