
Realtime Compute for Apache Flink:Concepts

Last Updated: Mar 26, 2026

Realtime Compute for Apache Flink organizes its resources and operations around a set of core concepts. This page defines each concept and explains how they relate, so you can plan your workspace setup, configure namespaces, deploy jobs, and manage access.

Hierarchy

The core concepts form a hierarchy: workspace and namespace at the top, deployment and job beneath them, and the connectors, functions, and catalogs that support data processing at the base.

Consoles

Realtime Compute for Apache Flink provides two consoles with distinct roles.

| Console | Purpose | When to use |
| --- | --- | --- |
| Management Console | Centralized portal for workspace and namespace lifecycle management. Create, release, and reconfigure workspaces; clone namespaces for migration or extension. | Set up infrastructure: create workspaces, adjust compute resources, or duplicate a namespace into a new environment. |
| Development Console | Per-workspace IDE for Flink job development and operations (O&M). Switch between namespaces, write and debug code, monitor running jobs, and manage access control from one place. | Day-to-day development: write SQL or YAML drafts, start deployments, monitor job health, and manage namespace permissions. |

Workspace and namespace

Workspaces and namespaces form the organizational backbone of Realtime Compute for Apache Flink.

| Concept | What it is | Why it exists |
| --- | --- | --- |
| workspace | The basic management unit for namespaces. Each workspace is an independent environment with its own dedicated compute resources and its own independent Development Console. | Isolates compute environments, for example separate workspaces for different business units or products. |
| namespace | The management unit for drafts and deployments within a workspace. Each namespace has its own configurations, drafts, deployments, and permissions, managed independently. | Isolates resources and permissions between tenants or teams within the same workspace, enforcing access boundaries without provisioning a new workspace. |

A workspace contains one or more namespaces. Drafts and deployments always belong to a specific namespace.

Related tasks: Create a workspace | Manage namespaces | Reconfigure resources

Draft and deployment

Drafts and deployments represent two stages of the Flink job lifecycle.

| Concept | What it is | Key constraints |
| --- | --- | --- |
| draft | A SQL or YAML script created and edited in the Development Console. | Drafts can only be created in the console, not through SDKs. |
| deployment | A runnable unit created from a draft or from a user-uploaded JAR or Python package. Deployments isolate environments such as development and production. | Editing a draft after deployment does not affect the running job instance. Deployments can be created and managed through the Development Console or SDKs. |

A draft is your editable source; a deployment is the running version. Changing the draft after you deploy does not affect the live job; you must redeploy to apply changes.

Related tasks: Job development map | Deploy a job
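As a sketch of what a SQL draft can look like, the following reads generated rows from a datagen source and writes them to a print sink. Table names and column definitions are illustrative, using open-source Flink SQL syntax:

```sql
-- Illustrative source: synthetic rows from the datagen connector.
CREATE TEMPORARY TABLE orders_src (
  order_id BIGINT,
  amount   DOUBLE
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5'
);

-- Illustrative sink: writes each row to the job's logs.
CREATE TEMPORARY TABLE orders_sink (
  order_id BIGINT,
  amount   DOUBLE
) WITH (
  'connector' = 'print'
);

-- The statement that turns the draft into a running pipeline once deployed.
INSERT INTO orders_sink
SELECT order_id, amount FROM orders_src;
```

Deploying this draft creates a deployment; editing the script afterward changes only the draft until you redeploy.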

Job

A job is a single running instance of a deployment. A job's streaming or batch mode is deterministic: it is defined at deployment time and fixed for the life of that job.

Resources and queues

Resource

Realtime Compute for Apache Flink bills based on compute units (CUs). One CU provides:

• 1 CPU core

• 4 GiB of memory

• 20 GB of local storage (for logs and checkpoints)

For example, a deployment allocated 4 CUs receives 4 CPU cores, 16 GiB of memory, and 80 GB of local storage.

The number of CUs a deployment consumes depends on three factors: the queries per second (QPS) of the input data streams, the computing complexity of the job, and the distribution of the input data. Plan your CU allocation based on your expected workload scale.

Related tasks: Billing overview | Configure job resources | Tag management

Queue

A queue is a resource partition to which you assign deployments. Use queues to implement resource isolation and management.

Related task: Manage resource queues

Data processing

Connector

Connectors integrate Realtime Compute for Apache Flink with upstream and downstream data stores, enabling reads and writes for data synchronization pipelines. The service provides a set of built-in connectors and also supports custom connectors uploaded as JAR packages.

Reference: Supported connectors
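To illustrate how a connector is declared in SQL, here is a hedged sketch of a Kafka source table using the open-source Flink Kafka connector's options; the topic, broker address, and columns are hypothetical:

```sql
-- Illustrative Kafka source table; the WITH clause selects and
-- configures the connector.
CREATE TEMPORARY TABLE clicks (
  user_id STRING,
  url     STRING,
  ts      TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);
```

Swapping the `connector` option (and its connector-specific settings) is how the same table definition pattern targets a different upstream or downstream store.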

Function

Functions extend the SQL processing logic of Flink jobs. Realtime Compute for Apache Flink provides built-in functions and supports user-defined functions (UDFs) for custom logic.

References: Overview of window functions | Overview of built-in functions | UDFs
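As a sketch of built-in functions at work, the following counts events per minute with the `TUMBLE` window table-valued function and the `COUNT` aggregate, following open-source Flink SQL syntax; the `clicks` table and its `ts` time attribute (which needs a watermark) are hypothetical:

```sql
-- Per-minute event counts over a hypothetical clicks table.
SELECT window_start, window_end, COUNT(*) AS clicks_per_minute
FROM TABLE(
  TUMBLE(TABLE clicks, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

A UDF would slot into the same query wherever a built-in function appears, once registered in the namespace.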

Catalog

A catalog stores the metadata that Flink jobs use to discover and query data: databases, tables, fields, and partitions, whether the underlying data lives in databases or other external systems. Catalog management is a critical part of setting up data pipelines.

Reference: Data Management
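A sketch of how a job addresses tables through a catalog, using open-source Flink SQL syntax; `my_catalog`, `my_database`, and `my_table` are illustrative names for a catalog already registered in the namespace:

```sql
-- Switch the session's default catalog and database, then query by
-- table name alone.
USE CATALOG my_catalog;
USE my_database;
SELECT * FROM my_table;

-- Or address the table fully qualified, without switching context.
SELECT * FROM my_catalog.my_database.my_table;
```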

Access control

Role

A role is a named set of permissions. Assign roles to users to grant consistent access across a namespace. One user can hold multiple roles, and multiple users can share the same role. Any permission change to a role applies to all users assigned that role.

Reference: Grant permissions to a RAM role

Member

An Alibaba Cloud account or RAM user becomes a member when added to a namespace. Members can manage data, drafts, deployments, resources, and functions within that namespace only after being added and granted the appropriate permissions.

Reference: Development console authorization

    What's next