
E-MapReduce: Create a workspace

Last Updated: Mar 26, 2026

A workspace is the basic unit in EMR Serverless Spark. It provides a boundary for managing jobs, members, roles, and permissions. You must create a workspace before you can run any jobs.

Before you begin

Account and permissions

  • You have registered an Alibaba Cloud account and completed real-name verification.

  • The account you use to create the workspace has the required permissions:

    • Alibaba Cloud account: Grant the account the necessary roles. For details, see Assign roles to an Alibaba Cloud account.

    • RAM user or RAM role: Attach the AliyunEMRServerlessSparkFullAccess, AliyunOSSFullAccess, and AliyunDLFFullAccess policies to the RAM user or RAM role. Then add the RAM user or RAM role on the Access Control page of EMR Serverless Spark and grant it the administrator role. For details, see Grant permissions to a RAM user and Manage users and roles.

Required services

  • Data Lake Formation (DLF): Activate DLF before creating a workspace. DLF stores and manages the metadata for your Spark jobs. For details, see Quick Start. For supported regions, see Regions and endpoints.

  • Object Storage Service (OSS): Activate OSS and create a bucket. The bucket serves as the workspace directory for storing task logs, running events, and resources. For details, see Activate OSS and Create a bucket.

Create a workspace

  1. Log on to the EMR console.

  2. In the left navigation pane, choose EMR Serverless > Spark.

  3. In the top navigation bar, select the region where you want to create the workspace.

    Important

    You cannot change the region of a workspace after it is created.

  4. Click Create Workspace.

  5. Configure the workspace parameters.

    • Region: We recommend that you select the region where your data is stored. Example: China (Hangzhou).

    • Billing method: The Subscription and Pay-as-you-go billing methods are supported. Example: Pay-as-you-go.

    • Workspace name: Enter a name that is 1 to 64 characters in length. The name can contain only Chinese characters, letters, digits, hyphens (-), and underscores (_). Workspace names must be unique within the same Alibaba Cloud account. If you enter the name of an existing workspace, the system prompts you to enter a different name. Example: emr-serverless-spark.

    • Maximum quota: The maximum number of compute units (CUs) that can be concurrently used to process jobs in the workspace. Example: 1000.

    • Workspace directory: The path used to store data files such as task logs, running events, and resources. Select a bucket with OSS-HDFS enabled for compatibility with native Hadoop Distributed File System (HDFS) interfaces. If your scenario does not require HDFS, a standard OSS bucket works. Example: emr-oss-hdfs.

    • DLF for metadata storage: The data catalog used to store and manage your metadata. After you activate DLF, the system selects a default data catalog named after your UID. To use a separate data catalog: (1) Click Create Catalog, enter a Catalog ID, and click OK. (2) Select the catalog from the drop-down list. Example: emr-dlf.

    • Execution role: The RAM role that EMR Serverless Spark uses to run jobs. The default role is AliyunEMRSparkJobRunDefaultRole. This role grants access to resources in other cloud products, such as OSS and DLF. To control permissions more precisely, use a custom execution role. For details, see Execution role. Example: AliyunEMRSparkJobRunDefaultRole.

    • (Optional) Advanced settings > Tags: Tags identify and classify cloud resources. Each workspace supports up to 20 tags. Each tag consists of a key and a value. You can also use tags for cost allocation and fine-grained management of pay-as-you-go resources. You can attach tags when creating the workspace or add and modify them later on the workspace list page. For details, see What is a tag?. Example: Enter a custom tag key and value.
    Note

    The runtime environment of the code is managed and configured by the owner of the environment.

  6. Click Create Workspace.
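The naming and tag constraints in step 5 can be checked locally before you fill in the form. The following is a minimal sketch in Python; the regular expression, the Unicode range used for Chinese characters (CJK Unified Ideographs), and the helper names are illustrative assumptions, not part of any EMR SDK:

```python
import re

# Documented constraints: 1 to 64 characters; only Chinese characters,
# letters, digits, hyphens (-), and underscores (_).
# \u4e00-\u9fff approximates "Chinese characters" here (an assumption).
NAME_PATTERN = re.compile(r"^[\u4e00-\u9fffA-Za-z0-9_-]{1,64}$")

MAX_TAGS = 20  # each workspace supports up to 20 tags


def is_valid_workspace_name(name: str) -> bool:
    """Return True if the name satisfies the documented naming rules."""
    return bool(NAME_PATTERN.match(name))


def validate_tags(tags: dict) -> None:
    """Raise ValueError if more than 20 key-value tags are supplied."""
    if len(tags) > MAX_TAGS:
        raise ValueError(f"too many tags: {len(tags)} > {MAX_TAGS}")


print(is_valid_workspace_name("emr-serverless-spark"))  # True
print(is_valid_workspace_name(""))                      # False: empty name
print(is_valid_workspace_name("bad name!"))             # False: space and "!"
```

Note that this only checks the character and length rules; uniqueness within your Alibaba Cloud account can only be verified by the console itself.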

What's next

After your workspace is ready, start developing jobs. For SparkSQL job development, see Quick start for SparkSQL development.