This topic describes how to develop a Spark SQL job.
Prerequisites
A workspace and an SQL session are created. For more information, see Create a workspace and Manage SQL sessions.
Create an SQL job
Go to the Data Development page.
Log on to the E-MapReduce (EMR) console.
In the left-side navigation pane, choose EMR Serverless > Spark.
On the Spark page, find the desired workspace and click the name of the workspace.
In the left-side navigation pane of the EMR Serverless Spark page, click Data Development.
Create a job.
On the Development tab, click the Create icon.
In the Create dialog box, configure the Name parameter, choose SparkSQL from the Type drop-down list, and then click OK.
In the upper-right corner of the configuration tab of the job, select a catalog from the Default Catalog drop-down list, a database from the Default Database drop-down list, and a started SQL session from the SQL Sessions drop-down list.
You can also choose Create SQL Session from the SQL Sessions drop-down list to create an SQL session. For more information about SQL sessions, see Manage SQL sessions.
Enter SQL statements in the editor of the created job.
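For example, you can run statements like the following. This is a minimal sketch: the table name users_sample and its columns are placeholders for illustration, and the table is created in the catalog and database that you selected in the previous step. Replace the names with your own objects.
-- Create a sample table in the selected default database.
CREATE TABLE IF NOT EXISTS users_sample (
  id   INT,
  name STRING
);
-- Load a few sample rows.
INSERT INTO users_sample VALUES (1, 'Alice'), (2, 'Bob');
-- Query the table.
SELECT id, name FROM users_sample ORDER BY id;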
Optional. In the right-side navigation pane, click the Version Information tab to view the version information.
On the Version Information tab, you can view and compare job versions. For example, you can compare the SQL code of different versions of the job, with the differences highlighted.
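As a hypothetical illustration of such a comparison, version 2 of a job might add a filter to the query of version 1, in which case the added WHERE clause would be highlighted as the difference:
-- Version 1
SELECT id, name FROM users_sample ORDER BY id;
-- Version 2: adds a filter; the WHERE clause is the highlighted difference.
SELECT id, name FROM users_sample WHERE id > 1 ORDER BY id;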
Run and publish the job.
Click Run.
You can view the results on the Execution Results tab in the lower part of the page. If an exception occurs, you can view its details on the Execution Issues tab.
Confirm that the job runs as expected. Then, in the upper-right corner of the configuration tab of the job, click Publish to publish the job.
In the Publish dialog box, configure the Remarks parameter and click OK.
What to do next
After you create a job, you can create a workflow to schedule the job on a regular basis. For more information, see Create a workflow. For information about how to schedule jobs in a workflow, see Get started with the development of Spark SQL jobs.