E-MapReduce:Configure a Spark Shell job

Last Updated: Mar 26, 2026

Configure a Spark Shell job in E-MapReduce (EMR) Data Platform to run Spark Shell scripts on your cluster.

Prerequisites

Before you begin, ensure that you have created an EMR cluster and a project in the Data Platform.

Create and configure a Spark Shell job

  1. Log on to the Alibaba Cloud EMR console.

  2. In the top navigation bar, select the region where your cluster resides and select a resource group.

  3. Click the Data Platform tab.

  4. In the Projects section, find your project and click Edit Job in the Actions column.

  5. In the Edit Job pane on the left, right-click the target folder and select Create Job.

  6. In the Create Job dialog box, fill in Name and Description, then select Spark Shell from the Job Type drop-down list. Click OK.

  7. In the Content field, enter the Spark Shell script to run. The following example uses the Monte Carlo method to estimate Pi:

    // Sample 100 random points in the unit square and count
    // those that fall inside the quarter circle x^2 + y^2 < 1.
    val count = sc.parallelize(1 to 100).filter { _ =>
      val x = math.random
      val y = math.random
      x * x + y * y < 1
    }.count()
    // The fraction inside approximates Pi/4; note the s interpolator.
    println(s"Pi is roughly ${4.0 * count / 100}")
  8. Click Save.
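
Before submitting the job, you can sanity-check the Monte Carlo logic without a cluster. The following is a minimal plain-Scala sketch of the same estimate (the function name `estimatePi` and the fixed seed are illustrative, not part of the EMR job); more samples yield a tighter estimate:

```scala
import scala.util.Random

// Monte Carlo estimate of Pi: sample points uniformly in the unit
// square and count those inside the quarter circle x^2 + y^2 < 1.
// The inside fraction approximates Pi/4, so multiply by 4.
def estimatePi(samples: Int, seed: Long = 42L): Double = {
  val rng = new Random(seed)
  val inside = (1 to samples).count { _ =>
    val x = rng.nextDouble()
    val y = rng.nextDouble()
    x * x + y * y < 1
  }
  4.0 * inside / samples
}
```

With 100 samples, as in the job script, the estimate is coarse; at 100,000 samples it typically lands within about 0.01 of Pi.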