This topic describes how to configure a Spark Shell job.

Prerequisites

A project is created. For more information, see Manage projects.

Procedure

  1. Go to the Data Platform tab.
    1. Log on to the Alibaba Cloud EMR console by using your Alibaba Cloud account.
    2. In the top navigation bar, select the region where your cluster resides and select a resource group based on your business requirements.
    3. Click the Data Platform tab.
  2. In the Projects section of the page that appears, find the project that you want to edit and click Edit Job in the Actions column.
  3. Create a Spark Shell job.
    1. In the Edit Job pane on the left, right-click the folder in which you want to create the job and select Create Job.
    2. In the Create Job dialog box, specify Name and Description, and then select Spark Shell from the Job Type drop-down list.
    3. Click OK.
  4. Edit job content.
    1. In the Content field, enter the command line parameters that follow the Spark Shell command. Example:
      val count = sc.parallelize(1 to 100).filter { _ =>
        val x = math.random
        val y = math.random
        x*x + y*y < 1
      }.count()
      println(s"Pi is roughly ${4.0 * count / 100}")
    2. Click Save.
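
The sample job content above estimates pi with a Monte Carlo method: points are drawn uniformly in the unit square, and the fraction that falls inside the quarter circle approximates pi/4. As a minimal sketch of the same math outside of Spark (plain Python, no cluster required; the function name `estimate_pi` and the fixed seed are illustrative assumptions, not part of the product):

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    # Illustrative helper, not part of the EMR job: draw random points
    # in the unit square; the fraction inside the quarter circle
    # approximates pi/4, so multiplying by 4 approximates pi.
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

In the Spark Shell job, `sc.parallelize(1 to 100).filter { ... }.count()` distributes the same sampling loop across the cluster; with only 100 samples the estimate is coarse, so a larger sample count gives a closer approximation.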