E-MapReduce:ECS application role used in EMR V3.32.0 or an earlier minor version or EMR V4.5.0 or an earlier minor version

Last Updated: Mar 26, 2026

E-MapReduce (EMR) automatically binds an ECS application role called MetaService to every cluster you create. Jobs and applications running on the cluster use this role to access Alibaba Cloud resources, so you do not need to store an AccessKey pair in configuration files or code. This eliminates the risk of credential exposure.

MetaService supports access to the following services without an AccessKey pair:

Service | Enabled by default
Object Storage Service (OSS) | Yes
Log Service | No (requires additional RAM permissions)
Message Service (MNS) | No (requires additional RAM permissions)

To enable Log Service or MNS access, grant the required permissions to the AliyunEmrEcsDefaultRole role in the RAM console. For more information, see Grant permissions to a RAM role.
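The exact permissions you attach depend on what your jobs do with Log Service. As an illustrative sketch only (the action list and the project name are placeholders, not the officially required set), a custom RAM policy granting read access and write access to a single Log Service project might look like this:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "log:Get*",
        "log:List*",
        "log:PostLogStoreLogs"
      ],
      "Resource": "acs:log:*:*:project/<your-project-name>/*"
    }
  ]
}
```

Attach the policy to the AliyunEmrEcsDefaultRole role and scope the Resource element as narrowly as your workload allows.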

Prerequisites

Before you begin, ensure that you have an EMR cluster of V3.32.0 or an earlier minor version, or of V4.5.0 or an earlier minor version.

How MetaService works

When a job or application calls a supported Alibaba Cloud service, MetaService provides a Security Token Service (STS) temporary credential in the background. No AccessKey pair is needed in your code or configuration.

STS credential rotation: A new STS temporary credential is generated 30 minutes before the current one expires. Both credentials remain valid during the 30-minute overlap window.
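Because credentials rotate automatically, a long-running application should periodically re-fetch them from MetaService instead of caching one credential forever. A minimal sketch in Python (the 600-second cache window and the plain-text response format are assumptions for illustration, not documented guarantees):

```python
import time
import urllib.request

META_BASE = "http://localhost:10011"  # MetaService listens here on every cluster node

def fetch(endpoint: str) -> str:
    """Fetch one plain-text value from the MetaService HTTP API."""
    with urllib.request.urlopen(f"{META_BASE}/{endpoint}") as resp:
        return resp.read().decode().strip()

def needs_refresh(fetched_at: float, now: float, max_age_s: float = 600.0) -> bool:
    """Return True once a cached credential is older than max_age_s seconds.

    600 s is an illustrative cache window, comfortably shorter than the
    30-minute overlap during which old and new credentials both stay valid.
    """
    return now - fetched_at > max_age_s

def current_token(cache: dict) -> str:
    """Return a security token, re-fetching it from MetaService when stale."""
    if "token" not in cache or needs_refresh(cache["fetched_at"], time.time()):
        cache["token"] = fetch("role-security-token")
        cache["fetched_at"] = time.time()
    return cache["token"]
```

Any cache window shorter than the 30-minute overlap is safe: even if your cached credential is the old one, it remains valid until the overlap ends.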

Default permissions

The AliyunEmrEcsDefaultRole role is configured with the policy AliyunEmrECSRolePolicy, which grants the following permissions.

OSS permissions

Permission (Action) | Description
oss:PutObject | Uploads a file or folder
oss:GetObject | Queries a file or folder
oss:ListObjects | Queries files
oss:DeleteObject | Deletes a file
oss:ListBuckets | Queries buckets
oss:AbortMultipartUpload | Terminates a multipart upload event
oss:ListMultipartUploads | Queries all ongoing multipart upload events
oss:RestoreObject | Restores an Archive or Cold Archive object
oss:GetBucketInfo | Queries the information about a bucket
oss:ListObjectVersions | Queries the versions of all objects in a bucket, including delete markers
oss:DeleteObjectVersion | Deletes a specific version of an object
oss:PostDataLakeStorageFileOperation | Accesses OSS-HDFS

Tablestore permissions

Permission (Action) | Description
ots:CreateTable | Creates a table based on the specified table schema
ots:DeleteTable | Deletes a specific table from the current instance
ots:GetRow | Reads data in a single row based on a specific primary key
ots:PutRow | Inserts data into a specific row
ots:UpdateRow | Updates data in a specific row
ots:DeleteRow | Deletes a row of data
ots:GetRange | Reads data within a specific value range of the primary key
ots:BatchWriteRow | Inserts, modifies, or deletes multiple rows of data in one or more tables at a time
ots:BatchGetRow | Reads multiple rows of data from one or more tables at a time
ots:ComputeSplitPointsBySize | Logically splits the data in a table into several shards whose sizes are close to the specified size, and returns the split points between the shards and information about the hosts where the shards reside
ots:StartLocalTransaction | Creates a local transaction based on a specified partition key value and returns the ID of the local transaction
ots:CommitTransaction | Commits a local transaction
ots:AbortTransaction | Aborts a local transaction

Data Lake Formation (DLF) permissions

Permission (Action) | Description
dlf:BatchCreatePartitions | Creates multiple partitions at a time
dlf:BatchCreateTables | Creates multiple tables at a time
dlf:BatchDeletePartitions | Deletes multiple partitions at a time
dlf:BatchDeleteTables | Deletes multiple tables at a time
dlf:BatchGetPartitions | Queries information about multiple partitions at a time
dlf:BatchGetTables | Queries information about multiple tables at a time
dlf:BatchUpdatePartitions | Updates multiple partitions at a time
dlf:BatchUpdateTables | Updates multiple tables at a time
dlf:CreateDatabase | Creates a database
dlf:CreateFunction | Creates a function
dlf:CreatePartition | Creates a partition
dlf:CreateTable | Creates a table
dlf:DeleteDatabase | Deletes a database
dlf:DeleteFunction | Deletes a function
dlf:DeletePartition | Deletes a partition
dlf:DeleteTable | Deletes a table
dlf:GetDatabase | Queries information about a database
dlf:GetFunction | Queries information about a function
dlf:GetPartition | Queries information about a partition
dlf:GetTable | Queries information about a table
dlf:ListCatalogs | Queries catalogs
dlf:ListDatabases | Queries databases
dlf:ListFunctionNames | Queries the names of functions
dlf:ListFunctions | Queries functions
dlf:ListPartitionNames | Queries the names of partitions
dlf:ListPartitions | Queries partitions
dlf:ListPartitionsByExpr | Queries metadata table partitions by conditions
dlf:ListPartitionsByFilter | Queries metadata table partitions by conditions
dlf:ListTableNames | Queries the names of tables
dlf:ListTables | Queries tables
dlf:RenamePartition | Renames a partition
dlf:RenameTable | Renames a table
dlf:UpdateDatabase | Updates a database
dlf:UpdateFunction | Updates a function
dlf:UpdateTable | Updates a table
dlf:UpdateTableColumnStatistics | Updates the statistics of a metadata table
dlf:GetTableColumnStatistics | Queries the statistics of a metadata table
dlf:DeleteTableColumnStatistics | Deletes the statistics of a metadata table
dlf:UpdatePartitionColumnStatistics | Updates the statistics of a partition
dlf:GetPartitionColumnStatistics | Queries the statistics of a partition
dlf:DeletePartitionColumnStatistics | Deletes the statistics of a partition
dlf:BatchGetPartitionColumnStatistics | Queries the statistics of multiple partitions at a time
dlf:CreateLock | Creates a metadata lock
dlf:UnLock | Unlocks a specific metadata lock
dlf:AbortLock | Aborts a metadata lock
dlf:RefreshLock | Refreshes a metadata lock
dlf:GetLock | Queries information about a metadata lock
dlf:GetAsyncTaskStatus | Queries the status of an asynchronous task
dlf:DeltaGetPermissions | Queries permissions
dlf:GetPermissions | Queries information about data permissions
dlf:GetServiceInfo | Queries information about a service
dlf:GetRoles | Queries information about roles in data permissions
dlf:CheckPermissions | Verifies data permissions

Use MetaService

Choose the approach that matches your workload:

Scenario | Approach | What you get
Jobs running on the cluster (Hadoop, Hive, Spark) | Use simplified OSS paths | Access OSS without credentials in the path
Self-managed services or custom applications | Call the MetaService HTTP API | STS temporary credentials for OSS, Log Service, and MNS

Access OSS from cluster jobs

When MetaService is active, use the simplified path format in any cluster job:

oss://<bucket-name>/<object-path>

MetaService handles authentication automatically, so you do not need to embed credentials in the path. The simplified path is also significantly shorter, which makes OSS paths easier to enter and read.

Hadoop — list objects:

# Without MetaService (credentials embedded in path)
hadoop fs -ls oss://ZaH******As1s:Ba23N**************sdaBj2@bucket.oss-cn-hangzhou-internal.aliyuncs.com/a/b/c

# With MetaService (simplified path)
hadoop fs -ls oss://bucket/a/b/c

Hive — create an external table backed by OSS:

-- Without MetaService (credentials embedded in path)
CREATE EXTERNAL TABLE test_table(id INT, name string)
        ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '\t'
        LOCATION 'oss://ZaH******As1s:Ba23N**************sdaBj2@bucket.oss-cn-hangzhou-internal.aliyuncs.com/a/b/c';

-- With MetaService (simplified path)
CREATE EXTERNAL TABLE test_table(id INT, name string)
        ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '\t'
        LOCATION 'oss://bucket/a/b/c';

Spark — read OSS data:

// Without MetaService (credentials embedded in path)
val data = sc.textFile("oss://ZaH******As1s:Ba23N**************sdaBj2@bucket.oss-cn-hangzhou-internal.aliyuncs.com/a/b/c")

// With MetaService (simplified path)
val data = sc.textFile("oss://bucket/a/b/c")

Get STS credentials from self-managed services

MetaService exposes an HTTP API at http://localhost:10011 on each cluster node. Call these endpoints to retrieve the STS temporary credential your application needs to access Alibaba Cloud resources without an AccessKey pair.

Example — get the cluster region:

curl http://localhost:10011/cluster-region

Available endpoints:

Endpoint | Returns
/cluster-region | Region where the cluster resides
/cluster-role-name | Role name
/role-access-key-id | AccessKey ID of the STS credential
/role-access-key-secret | AccessKey secret of the STS credential
/role-security-token | Security token of the STS credential
/cluster-network-type | Network type
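A self-managed service usually needs the three credential endpoints together. The steps can be sketched in Python as follows; `meta_url` and `fetch_sts_credentials` are illustrative helper names (not part of any SDK), and the sketch assumes each endpoint returns its value as plain text:

```python
import urllib.request

META_BASE = "http://localhost:10011"  # MetaService endpoint on every cluster node

def meta_url(endpoint: str, base: str = META_BASE) -> str:
    """Build the full URL for a MetaService endpoint."""
    return f"{base}/{endpoint.lstrip('/')}"

def fetch_sts_credentials() -> dict:
    """Fetch the three parts of the current STS credential from MetaService."""
    creds = {}
    for key, endpoint in [
        ("access_key_id", "role-access-key-id"),
        ("access_key_secret", "role-access-key-secret"),
        ("security_token", "role-security-token"),
    ]:
        with urllib.request.urlopen(meta_url(endpoint)) as resp:
            creds[key] = resp.read().decode().strip()
    return creds
```

Pass the three values to any SDK client that accepts STS credentials, and re-fetch them periodically rather than caching them indefinitely, because the credential rotates as described above.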

Usage notes

Important

Modify or delete the AliyunEmrEcsDefaultRole role with caution. Deleting or misconfiguring this role causes cluster creation failures and job failures. To minimize security risk, follow the principle of least privilege when configuring permissions in the RAM console.