This topic describes how to configure a Spark Shell job.
A project is created. For more information, see Manage projects.
  1. Create a job.
    1. Log on to the Alibaba Cloud EMR console by using your Alibaba Cloud account.
    2. In the top navigation bar, select the region where your cluster resides and select a resource group based on your business requirements.
    3. Click the Data Platform tab.
    4. In the Projects section of the page that appears, find the project you want to edit and click Edit Job in the Actions column.
    5. In the Edit Job pane on the left, right-click the folder on which you want to perform operations and select Create Job.
  2. Configure the job.
    1. In the Create Job dialog box, specify Name and Description, and select Spark Shell from the Job Type drop-down list.
    2. Click OK.
    3. In the Content field, specify the command line parameters that follow the Spark Shell command. For example:
      val count = sc.parallelize(1 to 100).filter { _ =>
        val x = math.random
        val y = math.random
        x*x + y*y < 1
      }.count()
      println(s"Pi is roughly ${4.0 * count / 100}")
  3. Click Save.
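The snippet in the Content field estimates Pi with a Monte Carlo method: random points are sampled in the unit square, and the fraction that lands inside the unit circle approximates Pi/4. The following is a minimal local sketch of the same logic in plain Scala, without Spark, so you can try it outside the cluster; the object and method names are illustrative, not part of the EMR product.

```scala
// Hypothetical standalone sketch of the Monte Carlo Pi estimate used in the
// example job. It mirrors the Spark snippet but runs on a local collection
// instead of an RDD, so no SparkContext (sc) is required.
object PiEstimate {
  def estimatePi(samples: Int): Double = {
    // Count random points (x, y) in the unit square that fall inside
    // the quarter circle x*x + y*y < 1.
    val inside = (1 to samples).count { _ =>
      val x = math.random
      val y = math.random
      x * x + y * y < 1
    }
    // The inside/total ratio approximates Pi/4.
    4.0 * inside / samples
  }

  def main(args: Array[String]): Unit = {
    println(s"Pi is roughly ${estimatePi(100000)}")
  }
}
```

With more samples the estimate tightens; the job example uses only 100 samples, so its result is coarse.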