Realtime Compute for Apache Flink:Secure access best practices

Last Updated: Mar 26, 2026

Flink jobs connect to upstream and downstream systems — Simple Log Service, OSS, databases — and need credentials to do so. Hardcoding AccessKey pairs directly in your SQL is a common mistake: credentials end up in version control, get shared across teams, and are exposed far beyond what's needed. If an Alibaba Cloud account's AccessKey pair is leaked, every resource in the account is at risk.

This topic shows how to secure access using two controls:

  • Least-privilege RAM users — grant a RAM user only the permissions required for specific upstream or downstream resources, instead of using your Alibaba Cloud account's AccessKey pair, which carries full permissions on everything in the account.

  • Flink variables — store AccessKey pairs as encrypted variables and reference them by name in SQL, so credentials never appear in plaintext.

The walkthrough uses Simple Log Service (SLS) as an example: a RAM user is granted access to a specific Logstore, and its AccessKey pair is stored as Flink variables and referenced in a SQL deployment.

Prerequisites

Before you begin, ensure that you have:

  • An Alibaba Cloud account or RAM administrator access

  • A Flink deployment environment (Realtime Compute for Apache Flink)

  • A Simple Log Service project and Logstore

How it works

  1. Create a RAM user with OpenAPI access — this generates a scoped AccessKey pair.

  2. Attach a custom policy that grants access only to the specific Logstore the job needs.

  3. Store the AccessKey pair as Flink variables so it never appears in plaintext SQL.

  4. Reference the variables in your SQL connector configuration using ${secret_values.<variable_name>}.
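Conceptually, step 4 is a placeholder substitution: at deployment time, Flink swaps each ${secret_values.<variable_name>} reference for the stored value. The sketch below is purely illustrative (Flink performs this internally; the function name and logic here are not part of any Alibaba Cloud API):

```python
import re

def resolve_secrets(sql: str, variables: dict) -> str:
    """Replace ${secret_values.<name>} placeholders with stored values.

    Illustrative only: Realtime Compute for Apache Flink performs this
    substitution internally at deployment time, and the stored values
    stay encrypted and are never shown in the console.
    """
    pattern = re.compile(r"\$\{secret_values\.([A-Za-z0-9_]+)\}")

    def lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"undefined variable: {name}")
        return variables[name]

    return pattern.sub(lookup, sql)

# A connector option referencing a variable named slsak (hypothetical value):
config = "'accessId' = '${secret_values.slsak}'"
print(resolve_secrets(config, {"slsak": "LTAI****"}))
```

The point of the indirection is that the SQL draft only ever contains the variable name; the credential itself lives in Flink's variable store.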

Set up least-privilege access for Simple Log Service

Step 1: Create a RAM user

Log on with your Alibaba Cloud account or as a RAM administrator, then create a RAM user.

In the Access Mode section, select OpenAPI Access. An AccessKey ID and AccessKey secret are automatically generated.

Important

The AccessKey secret is shown only once, at creation time. Copy and store it securely before leaving the page.

Step 2: Create a custom policy for Simple Log Service

  1. In the RAM console, go to Permissions > Policies in the left navigation pane.

  2. On the Policies page, click Create Policy.

  3. On the Create Policy page, click the JSON tab and replace the existing content with one of the following policy templates. Then click Next to edit policy information.

    Replace <Project name> and <Logstore name> with your actual values. For more on writing custom policies, see Examples of using custom policies to grant permissions to a RAM user.

    Grant read-only permissions on a specific Logstore

    {
        "Version": "1",
        "Statement": [
            {
                "Action": "log:ListProject",
                "Resource": "acs:log:*:*:project/*",
                "Effect": "Allow"
            },
            {
                "Action": "log:List*",
                "Resource": "acs:log:*:*:project/<Project name>/logstore/*",
                "Effect": "Allow"
            },
            {
                "Action": [
                    "log:Get*",
                    "log:List*"
                ],
                "Resource": "acs:log:*:*:project/<Project name>/logstore/<Logstore name>",
                "Effect": "Allow"
            }
        ]
    }

    Grant write permissions on a specific Logstore

    {
        "Version": "1",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "log:PostLogStoreLogs"
                ],
                "Resource": [
                    "acs:log:*:*:project/<Project name>/logstore/<Logstore name>"
                ]
            }
        ]
    }

    Note: The logstore keyword in the resource ARN covers both Logstores and Metricstores, so the write policy above also applies to Metricstores.
  4. Enter a name and description for the policy, then click OK.
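If you manage several projects and Logstores, filling in the placeholders by hand is error-prone. The helper below is a hypothetical convenience, not an Alibaba Cloud SDK call: it fills the read-only template from this topic with concrete names, and you paste the output into the JSON tab of the RAM console.

```python
import json

def build_readonly_policy(project: str, logstore: str) -> str:
    """Fill the read-only policy template from this topic with a concrete
    project and Logstore name. Illustrative helper, not an SDK call."""
    policy = {
        "Version": "1",
        "Statement": [
            {
                "Action": "log:ListProject",
                "Resource": "acs:log:*:*:project/*",
                "Effect": "Allow",
            },
            {
                "Action": "log:List*",
                "Resource": f"acs:log:*:*:project/{project}/logstore/*",
                "Effect": "Allow",
            },
            {
                "Action": ["log:Get*", "log:List*"],
                "Resource": f"acs:log:*:*:project/{project}/logstore/{logstore}",
                "Effect": "Allow",
            },
        ],
    }
    return json.dumps(policy, indent=2)

# Example values matching the SQL deployment later in this topic:
print(build_readonly_policy("test", "flinktest"))
```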

Step 3: Attach the policy to the RAM user

Use the custom policy you just created to grant permissions to the RAM user. See Grant permissions to a RAM user for step-by-step instructions.

Step 4: Store the AccessKey pair as Flink variables

Create two variables in Flink — one for the AccessKey ID and one for the AccessKey secret — using the credentials from Step 1. After the variables are created, your SQL references the variable names rather than the credentials directly.

See Manage variables and keys for instructions. In this example, the variables are named slsak (AccessKey ID) and slsaks (AccessKey secret).

Step 5: Use the variables in a Flink deployment

Reference variables in the ${secret_values.<variable_name>} format inside your SQL connector configuration. The following example reads from a Simple Log Service Logstore and writes to a blackhole sink for testing:

CREATE TEMPORARY TABLE sls_input (
   `__source__` STRING METADATA VIRTUAL,
   `__tag__` MAP<VARCHAR, VARCHAR> METADATA VIRTUAL,
   `__topic__` STRING METADATA VIRTUAL,
   deploymentName STRING,
   `level` STRING,
   `location` STRING,
   message STRING,
   thread STRING,
   `time` STRING
) WITH (
  'connector' = 'sls',
  'endpoint' = 'cn-beijing-intranet.log.aliyuncs.com',
  'accessId' = '${secret_values.slsak}',
  'accessKey' = '${secret_values.slsaks}',
  'starttime' = '2024-08-30 15:39:00',
  'project' = 'test',
  'logstore' = 'flinktest'
);

CREATE TEMPORARY TABLE blackhole_sink(
   `__source__` STRING,
   `__topic__` STRING,
   deploymentName STRING,
   `level` STRING,
   `location` STRING,
   message STRING,
   thread STRING,
   `time` STRING,
   receive_time BIGINT
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO blackhole_sink
SELECT `__source__`,
   `__topic__`,
   deploymentName,
   `level`,
   `location`,
   message,
   thread,
   `time`,
   CAST(`__tag__`['__receive_time__'] AS BIGINT) AS receive_time
FROM sls_input;

The accessId and accessKey fields resolve to the actual credentials at runtime — they are never stored in plaintext in your SQL draft.
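As a last line of defense, you could scan a draft for connector options that hold a literal value instead of a ${secret_values.*} reference before submitting it. The checker below is a hypothetical sketch, not a Flink feature; it only inspects the accessId and accessKey options shown in this topic:

```python
import re

# Flag accessId/accessKey options whose value is a literal string rather
# than a ${secret_values.<name>} reference. Hypothetical lint helper,
# not part of Realtime Compute for Apache Flink.
PLAINTEXT_KEY = re.compile(
    r"'(accessId|accessKey)'\s*=\s*'(?!\$\{secret_values\.)[^']+'"
)

def find_plaintext_credentials(sql: str) -> list:
    """Return the names of connector options with plaintext values."""
    return [m.group(1) for m in PLAINTEXT_KEY.finditer(sql)]

safe = "'accessId' = '${secret_values.slsak}'"
leaky = "'accessKey' = 'abcd1234'"
print(find_plaintext_credentials(safe))   # nothing flagged
print(find_plaintext_credentials(leaky))  # the plaintext option is flagged
```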

What's next

  • To connect Flink to Object Storage Service (OSS), create a custom policy that grants read or write permissions on a specific bucket. See RAM policies for the policy templates.

  • For more on developing SQL drafts, see Develop an SQL draft.

  • Flink provides job and project variables to prevent the security risks caused by leaking sensitive data, such as AccessKey pairs and passwords, in plaintext. To use variables in other scenarios or learn more, see Manage variables and keys.