
AnalyticDB: Granular access control for Spark RAM users

Last Updated: Dec 26, 2025

This topic describes how to separate the permission to submit Spark jobs from the permissions required at runtime to access resources by configuring RAM users, RAM roles, and fine-grained permission policies.

Overview

AnalyticDB Spark provides a quick authorization feature for resources within the same account. However, enterprise environments with strict permission controls require custom policies to implement granular access control and avoid the overly broad permissions that the default authorization grants.

This topic demonstrates a best practice for separating permissions to submit Spark jobs from permissions to access resources at runtime. A RAM user (sub-account) submits the job. The Spark application then uses a RAM role to access cloud resources such as OSS.

Configure RAM identities and policies

  1. Log on to the RAM console as a RAM administrator.

  2. Create a RAM user. For this example, use the username adb-spark-ramuser-test.

  3. Create a RAM role for a trusted Alibaba Cloud service.

    • Principal name: Select AnalyticDB for MySQL.

    • Example role name: adb-spark-ramrole-test.

  4. Create custom policies.

    Create two policies: one to manage the AnalyticDB cluster and one to control application access to external resources.

    Policy A: AnalyticDB management permissions

    This policy allows the RAM user to manage AnalyticDB Spark applications and grants the PassRole permission.

    1. Go to Permissions > Policies and click Create Policy.

    2. Select JSON.

    3. Enter the following content. Replace ${AliyunAccount} with your Alibaba Cloud account ID.

    4. For this example, name the policy adb-spark-ramuser-test-adbpolicy.

    Note

    Replace the value of Resource for ram:PassRole with the ARN of the RAM role that you created.

    • To obtain the ARN, go to Identities > Roles. Click the role name adb-spark-ramrole-test. In the Basic Information section, copy the ARN.

    • Record the ARN for use in a later step.

    {
      "Version": "1",
      "Statement": [
        {
          "Action": [
            "adb:Get*",
            "adb:List*",
            "adb:Kill*",
            "adb:Cancel*",
            "adb:Submit*",
            "adb:Allocate*",
            "adb:Describe*",
            "adb:Lock*",
            "adb:Copy*",
            "adb:Query*",
            "adb:Show*",
            "adb:Test*",
            "adb:Export*",
            "adb:Execute*",
            "adb:*Project",
            "adb:*Directory",
            "adb:*ProcessDefinition",
            "adb:*ProcessDefinitions",
            "adb:*ProcessDefinitionAttribute",
            "adb:*ProcessInstance",
            "adb:*ProcessInstances",
            "adb:*ProcessInstanceAttribute",
            "adb:*ProcessInstanceTasks",
            "adb:*TaskDefinition",
            "adb:*TaskDefinitions",
            "adb:*TaskDefinitionCode",
            "adb:*TaskDefinitionAttribute",
            "adb:*TaskInstance",
            "adb:*TaskInstances",
            "adb:*Notebook",
            "adb:*Notebooks",
            "adb:*NotebookKernel",
            "adb:*NotebookAttribute",
            "adb:*NotebookConfiguration",
            "adb:*NotebookParagraph",
            "adb:*NotebookParagraphs",
            "adb:*JupyterInstance",
            "adb:*JupyterInstances",
            "adb:*JupyterSpecifications",
            "adb:*JupyterInstanceAttribute",
            "adb:*Schedule",
            "adb:BindAccount",
            "adb:UnbindAccount",
            "adb:Authentication",
            "adb:Check*",
            "adb:Load*",
            "adb:Stat*",
            "adb:*SparkTemplate*",
            "adb:Download*"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": "ram:ListUserBasicInfos",
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Action": "ram:PassRole",
          "Resource": "acs:ram::${AliyunAccount}:role/adb-spark-ramrole-test",
          "Effect": "Allow"
        }
      ]
    }
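    Substituting `${AliyunAccount}` into the policy template above can be scripted before you paste the result into the console. The following is a minimal sketch; `render_policy` is a hypothetical helper, not an Alibaba Cloud API, and the account ID shown is a placeholder:

    ```python
    import json

    def render_policy(template: str, account_id: str, role_name: str) -> dict:
        """Fill in the ${AliyunAccount} placeholder and verify the result.

        Hypothetical helper for illustration; it only does string substitution
        and a JSON sanity check, not any call to Alibaba Cloud.
        """
        rendered = template.replace("${AliyunAccount}", account_id)
        policy = json.loads(rendered)  # raises ValueError if the JSON is malformed
        # Sanity check: the ram:PassRole statement must point at the intended role ARN.
        expected_arn = f"acs:ram::{account_id}:role/{role_name}"
        pass_role = [s for s in policy["Statement"] if s.get("Action") == "ram:PassRole"]
        assert pass_role and pass_role[0]["Resource"] == expected_arn
        return policy

    # Abbreviated Policy A template; only the PassRole statement is shown.
    template = """{
      "Version": "1",
      "Statement": [
        {"Action": "ram:PassRole",
         "Resource": "acs:ram::${AliyunAccount}:role/adb-spark-ramrole-test",
         "Effect": "Allow"}
      ]
    }"""
    policy = render_policy(template, "123456789012", "adb-spark-ramrole-test")
    ```

    A check like this catches the most common mistake, which is granting PassRole on a role ARN that does not match the role you actually created.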

    Policy B: Data access permissions

    This policy controls the Spark application's access to OSS at runtime.

    1. Configure the allowed buckets and IP CIDR blocks.

    2. For this example, name the policy adb-spark-ramuser-test-datapolicy.

    Replace the following parameters with your actual values:

    • ${bucket-1} and ${bucket-2}: The names of the buckets to access.

    • ${AliyunAccount}: Your Alibaba Cloud account ID.

    • ${IPv4 CIDR Block1} and ${IPv4 CIDR Block2}: The IP CIDR blocks that are allowed to access the buckets. For example, 192.168.1.0/24.

    {
      "Version": "1",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "oss:*",
          "Resource": [
            "acs:oss:oss-*:${AliyunAccount}:${bucket-1}",
            "acs:oss:oss-*:${AliyunAccount}:${bucket-2}",
            "acs:oss:oss-*:${AliyunAccount}:${bucket-1}/*",
            "acs:oss:oss-*:${AliyunAccount}:${bucket-2}/*"
          ],
          "Condition": {
            "IpAddress": {
              "acs:SourceIp": [
                "${IPv4 CIDR Block1}",
                "${IPv4 CIDR Block2}"
              ]
            }
          }
        }
      ]
    }
  5. Grant permissions to the RAM user.

    1. Go to Identities > Users, find the user adb-spark-ramuser-test, and click its name.

    2. Click Permissions > Grant Permission.

    3. Add the adb-spark-ramuser-test-adbpolicy policy.

  6. Grant permissions to the RAM role.

    1. Go to Identities > Roles, find the role adb-spark-ramrole-test, and click its name.

    2. Click Permissions > Grant Permission.

    3. Add the adb-spark-ramuser-test-datapolicy policy.

Configure an AnalyticDB for MySQL account

  1. Log on to the AnalyticDB for MySQL console as an administrator.

  2. Click Clusters and select the target cluster.

  3. Click Accounts > Create Account. Set Account Type to Standard Account.

  4. After the account is created, click Manage RAM Association and attach the RAM user adb-spark-ramuser-test.

  5. Click Permissions > Edit Permissions and grant permissions as needed. For example, you can grant global create, query, and alter permissions.

Submit a job as a RAM user

After you complete the preceding steps, the RAM user adb-spark-ramuser-test has the cluster management permissions defined in Policy A. The following steps show how to use this RAM user to submit a job and securely read and write data in OSS by specifying a RAM role that has the permissions of Policy B.

  1. Log on to the AnalyticDB for MySQL console as the RAM user adb-spark-ramuser-test.

  2. Click Clusters, select the target cluster, and then click Job Development > SQL Development.

  3. Above the editor, select the Spark engine and the corresponding resource group.

  4. Write and submit the job.

    In the SQL statement, configure the spark.adb.roleArn parameter with the ARN that you recorded earlier.

    Note

    Although the current RAM user has permission to submit jobs, the user does not have direct access to OSS. The Spark engine assumes the RAM role (adb-spark-ramrole-test) specified in the parameter and uses the permissions of that role to access data in OSS.

    -- Configure the ARN of the RAM role that the Spark application uses at runtime.
    -- This role has been granted OSS read and write permissions in Policy B.
    set spark.adb.roleArn=acs:ram::${AliyunAccount}:role/adb-spark-ramrole-test;
    
    -- Create a database that points to an OSS path.
    -- Make sure that bucket-1 is in the list of allowed resources in the policy.
    create database if not exists test_db_01 location 'oss://bucket-1/path/to/test_db_01/';
    
    -- Create a table.
    create table if not exists test_db_01.test_tbl_01(id int, name string);
    
    -- Write data. The Spark engine uses the specified role to write to OSS.
    insert into test_db_01.test_tbl_01 values(1, 'a');
    
    -- Query data. The Spark engine uses the specified role to read from OSS.
    select * from test_db_01.test_tbl_01;

    To specify a log path, configure the spark.app.log.rootPath parameter in the same SET statement. For more information, see Spark application configuration parameters.
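    A malformed value for spark.adb.roleArn is a common cause of job failures at this step. As a pre-flight sanity check, the ARN can be validated against the expected `acs:ram::<account-id>:role/<role-name>` shape. This is a hypothetical check for illustration, not part of the product:

    ```python
    import re

    # RAM role ARNs have the shape acs:ram::<account-id>:role/<role-name>.
    ROLE_ARN_RE = re.compile(r"^acs:ram::\d+:role/[\w+=,.@-]+$")

    def check_role_arn(arn: str) -> bool:
        """Hypothetical pre-flight check for the value of spark.adb.roleArn."""
        return bool(ROLE_ARN_RE.match(arn))

    print(check_role_arn("acs:ram::123456789012:role/adb-spark-ramrole-test"))  # True
    print(check_role_arn("adb-spark-ramrole-test"))  # False: role name, not a full ARN
    ```

    Passing only the role name instead of the full ARN is rejected by the check above, which matches the format that the Note in the policy section tells you to copy from the role's Basic Information section.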
  5. Verify the result.
    Click Execute. If the job runs successfully, the granular access control configuration is effective: the RAM user submitted the job, and the Spark application accessed OSS by assuming the role.