
AnalyticDB: Create and manage resource groups

Last Updated: Mar 28, 2026

Resource groups partition the computing resources of an AnalyticDB for MySQL cluster so that different workloads — online queries, batch Spark jobs, and AI/ML tasks — run in isolation without competing for the same compute capacity.

Prerequisites

Data Warehouse Edition only: Before you begin, make sure your cluster meets the following requirements:

  • The cluster runs in Elastic mode

  • The cluster has 32 cores or more

  • The kernel version is 3.1.3.2 or later

To check or update the kernel version, go to the Configuration Information section on the Cluster Information page in the AnalyticDB for MySQL console.

Billing

Enterprise Edition, Basic Edition, or Data Lakehouse Edition

Interactive and Job resource groups are billed based on AnalyticDB Compute Unit (ACU) usage for elastic resources.

For AI resource groups with the Ray Cluster deployment type:

  • Worker Resource Type: CPU — billed based on ACU usage

  • Worker Resource Type: GPU — billed based on GPU specifications and the number of GPUs

  • Worker Disk Storage — billed based on configured storage size

To check how much elastic resource a resource group consumes, go to Cluster Management > Resource Management > Resource Overview:

  • Enterprise Edition and Basic Edition: Elastic usage = Total Resources minus Reserved Resources

  • Data Lakehouse Edition: Elastic usage = Total Computing Resources minus Reserved Computing Resources

Data Warehouse Edition

Resource group fees are the same as computing resource fees. You are charged only for the resources you use.

Choose a resource group type

AnalyticDB for MySQL supports three resource group types. Pick the one that matches your workload:

| Type | Best for | Execution model | Response time |
| --- | --- | --- | --- |
| Interactive | Online queries with high QPS and low latency | Massively Parallel Processing (MPP) using resident resources | Milliseconds |
| Job | Offline batch processing requiring high throughput | Bulk Synchronous Parallel (BSP) using temporary resources | Seconds to minutes |
| AI | Heterogeneous workloads using GPUs or CPUs | Ray Cluster with configurable Worker Groups | Varies |
Important

The resource group type cannot be changed after creation. Decide before you proceed.

Default resource groups

Every cluster comes with two pre-created resource groups:

  • `user_default` — An Interactive resource group that handles all XIHE queries when no other resource group is assigned

  • `serverless` — A Job resource group for Spark jobs (Spark Jar and Spark SQL), created automatically for new clusters with kernel version 3.2.2.8 or later

You cannot delete these default resource groups.

Create a resource group

Enterprise Edition, Basic Edition, or Data Lakehouse Edition

  1. Log in to the AnalyticDB for MySQL console. In the upper-left corner, select a region. In the left navigation pane, click Clusters, then click the cluster ID.

  2. Go to Cluster Management > Resource Management, click the Resource Groups tab, and click Create Resource Group in the upper-right corner.

  3. Enter a name and select a Job Type. Then configure the properties based on the job type. Click OK.

The resource group becomes available when its status changes to Running.

Parameters that cannot be changed after creation

Before configuring, note the following immutable settings:

| Resource group type | Immutable settings |
| --- | --- |
| All types | Resource Group Name, Task Type |
| Interactive | Engine |
| Job | Min Computing Resources |
| AI | Deployment Type, Worker Group Name |

Interactive resource group parameters

| Parameter | Description |
| --- | --- |
| Engine | XIHE: runs XIHE SQL only. Spark: runs Spark SQL jobs in interactive mode. Cannot be changed after creation. |
| Auto Stop | When the resource group has been idle for a few minutes (no commands executed), running clusters are released automatically. This reduces costs but adds a restart delay on the next query. Available only when Engine is set to Spark. |
| Cluster size | ACUs allocated per cluster. Minimum: 16 ACU for XIHE, 24 ACU for Spark. For the mapping between cluster size and Spark Driver/Executor specifications, see Appendix: Mapping between cluster size and Spark Driver and Spark Executor specifications. |
| Minimum clusters / Maximum clusters | Controls how the resource group scales. See the scaling modes below. Maximum value: 10 clusters. |
| Job resubmission rules | Routes queries that exceed the Query Execution Time Threshold to a Target Resource Group. See Job delivery. Available only when Engine is set to XIHE. |
| Spark configuration | Spark application parameters applied to all jobs in this resource group. To override settings for a specific job, set them in the job submission code. See Spark application configuration parameters. Available only when Engine is set to Spark. |

Scaling modes for Interactive resource groups

Whether you set Minimum clusters and Maximum clusters to the same or different values determines how the resource group scales:

  • Auto-scale mode (Min clusters not equal to Max clusters): AnalyticDB for MySQL dynamically adds or removes clusters based on query load, staying within the configured range. Use this for variable traffic.

  • Fixed-size mode (Min clusters = Max clusters): AnalyticDB for MySQL starts exactly that many clusters and keeps them running. Use this when you need predictable, static resources.

Setting either value to 2 or more enables the Multi-Cluster elastic model.
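
The ACU math behind the two scaling modes can be sketched as follows. This is illustrative only: the formulas follow the scaling rules above, but the specific numbers (a 16 ACU cluster size, a 1–3 cluster range) are assumptions, not defaults.

```python
# Back-of-the-envelope ACU math for an Interactive resource group.
# Total capacity = cluster size (ACU) x number of running clusters.

def acu_range(cluster_size_acu, min_clusters, max_clusters):
    """Return (resident_acu, peak_acu) for the configured scaling range."""
    if not (1 <= min_clusters <= max_clusters <= 10):
        raise ValueError("cluster counts must satisfy 1 <= min <= max <= 10")
    return cluster_size_acu * min_clusters, cluster_size_acu * max_clusters

# Auto-scale mode: min != max, so capacity floats between the two bounds.
resident, peak = acu_range(16, 1, 3)
print(resident, peak)  # 16 48

# Fixed-size mode: min == max, so capacity is constant.
resident, peak = acu_range(24, 2, 2)
print(resident, peak)  # 48 48
```

In auto-scale mode you pay for what is actually running between the two bounds; in fixed-size mode you trade that elasticity for predictable latency, since no cluster cold-starts are needed.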

Job resource group parameters

| Parameter | Description |
| --- | --- |
| Minimum computing resources | Fixed at 0 ACU. Cannot be changed after creation. |
| Maximum computing resources | Maximum ACUs the resource group can use. The console supports up to 1,024 ACU in steps of 8 ACU. To request a higher limit, submit a ticket. |
| Spot instance | When enabled, Spark jobs attempt to use spot instance resources to reduce cost. See Spot instances. |
| Spark configuration | Spark application parameters applied to all jobs in this resource group. To override settings for a specific job, set them in the job submission code. See Spark application configuration parameters. |
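
The group-level versus job-level override behavior described for Spark configuration can be sketched as a plain key-value merge. The parameter names below (`spark.driver.resourceSpec`, `spark.executor.instances`) are common AnalyticDB for MySQL Spark keys, but treat the exact names and values as assumptions to verify against the Spark application configuration reference.

```python
# A minimal sketch: resource-group-level Spark configuration expressed
# as a dict. Every job in the group inherits these defaults.
group_spark_conf = {
    "spark.driver.resourceSpec": "small",
    "spark.executor.resourceSpec": "small",
    "spark.executor.instances": "2",
}

# A setting supplied in the job submission code overrides the
# group-level value, as the parameter table above describes.
job_conf = {**group_spark_conf, "spark.executor.instances": "8"}
print(job_conf["spark.executor.instances"])  # 8
```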

AI resource group parameters

| Parameter | Description |
| --- | --- |
| Deployment mode | Select RayCluster. |
| Head resource specifications | CPU core count for the Head node, which manages Ray metadata, runs the Global Control Store (GCS), and handles task scheduling. It does not execute tasks. Choose from small, m.xlarge, or m.2xlarge, and size it based on the overall Ray Cluster scale. |
| Worker Group Name | A custom name for the Worker Group. One AI resource group can contain multiple Worker Groups with different names. Cannot be changed after creation. |
| Worker resource type | CPU: use for everyday computing, multitasking, and complex logical operations. GPU: use for large-scale data parallel processing, machine learning, and deep learning training. |
| Worker resource specifications | For CPU Workers: small, m.xlarge, or m.2xlarge. See Spark resource specification list for CPU core counts. For GPU Workers: submit a ticket to select a GPU model. |
| Worker disk storage | Disk space for Ray logs, temporary data, and overflow from Ray distributed object storage. Range: 30–2,000 GB. Default: 100 GB. For temporary use only; do not store persistent data here. |
| Minimum workers / Maximum workers | The minimum and maximum number of Workers per Worker Group (minimum value: 1; maximum value: 8). Worker Groups scale independently. If Min Workers does not equal Max Workers, the system dynamically adjusts the number of Workers based on task count. |
| Allocation unit | Number of GPUs per Worker node (for example, 1/3 means each Worker gets one third of a GPU). Required only when Worker Resource Type is GPU. |
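
The Allocation unit arithmetic is easy to get backwards, so here is a small worked sketch: a fractional allocation unit multiplies out to whole GPUs across a Worker Group. The specific numbers (six Workers at 1/3 of a GPU each) are illustrative assumptions.

```python
from fractions import Fraction

# How the Allocation unit translates Workers into physical GPUs:
# an allocation unit of 1/3 means each Worker receives one third
# of a GPU, so six Workers together occupy two GPUs.
def gpus_needed(workers, allocation_unit):
    return workers * allocation_unit

print(gpus_needed(6, Fraction(1, 3)))  # 2
```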

Data Warehouse Edition

  1. Log in to the AnalyticDB for MySQL console. In the upper-left corner, select a region. In the left navigation pane, click Clusters, then click the cluster ID.

  2. In the left navigation pane, click Resource Group Management.

  3. Click Create Resource Group in the upper-right corner.

  4. Configure the following parameters, then click OK.

| Parameter | Description |
| --- | --- |
| Resource Group Name | 2–30 characters, must start with a letter, and can contain only letters, digits, and underscores (_). |
| Query type | The SQL query type for this resource group. Default_Type: the default query type. Batch: for complex queries on large datasets, such as ETL (Extract-Transform-Load) jobs. Intermediate results can be written to disk, so queries do not fail due to data volume, though performance may decrease. Interactive: for real-time analysis with low latency. Execution is memory-based, so queries fail if the data exceeds machine capacity. For details, see Query execution modes. |
| Resource amount | Computing resources to allocate to this resource group. |

Modify a resource group

Enterprise Edition, Basic Edition, or Data Lakehouse Edition

On the Resource Groups page, find the target resource group and click Modify in the Actions column. Adjust the available settings in the panel, then click OK.

The changes take effect when the resource group status returns to Running.

What you can change after creation:

| Resource group type | Modifiable settings |
| --- | --- |
| Interactive (custom) | Auto Stop, Cluster Size, Min Clusters, Max Clusters, Job Delivery Rule, Spark Configuration |
| Job (custom) | Max Computing Resources, Spot Instance, Spark Configuration |
| AI (custom, Ray Cluster) | Head Resource Specification, Worker Resource Type, Worker Resource Specification, Worker Disk Space, Min Workers, Max Workers |
| user_default (Enterprise/Basic Edition) | Job Delivery Rule only |
| user_default (Data Lakehouse Edition) | Reserved Computing Resources, Job Delivery Rule |
| serverless | Not modifiable |

Data Warehouse Edition

On the Resource Group Management page, find the target resource group and click Modify in the Actions column. Adjust Query Type or Resource Amount, then click OK. Changes take effect immediately.

What you can change after creation:

  • Default resource group (`user_default`): Query Type only. The resource amount is calculated automatically as total cluster resources minus resources allocated to other groups.

  • Custom resource groups: Query Type and Resource Amount.

Delete a resource group

Important

Deleting a resource group interrupts all tasks running in it. Default resource groups (user_default and serverless) cannot be deleted.

Before deleting, check for dependencies:

  • If a XIHE SQL script references the resource group you plan to delete, update the script to use a different resource group. Otherwise, those XIHE SQL jobs will fall back to the default resource group.

  • If a Spark job references the resource group, update the job configuration. Otherwise, the Spark job will fail.

To delete a resource group, go to the Resource Groups page, click Delete in the Actions column for the target resource group, and confirm by clicking OK.

Monitor resource usage

This section applies to Enterprise Edition, Basic Edition, and Data Lakehouse Edition clusters.

AnalyticDB for MySQL provides three levels of resource monitoring. For a full list of metrics, see Resource group monitoring.

Cluster-level: reserved and elastic resources

Go to Cluster Management > Resource Management > Resource Overview to see a point-in-time snapshot of resources across all resource groups in the cluster:

  • Enterprise Edition and Basic Edition: View Total Resources and Reserved Resources. Elastic usage = Total minus Reserved.

  • Data Lakehouse Edition: View Total Computing Resources and Reserved Computing Resources. Elastic usage = Total Computing minus Reserved Computing.

Resource group level: compute and load metrics

Go to Cluster Management > Resource Management > Resource Groups, find the target resource group, and click Monitoring. This view shows computing resources in use and load metrics such as the number of running and queued XIHE SQL queries, Spark engines, and connections.

Job level: per-job resource consumption

Go to Cluster Management > Resource Management > Job Usage Statistics to see resource consumption broken down by job — including XIHE BSP jobs, Spark jobs, and SLS/Kafka data synchronization and migration tasks. The view reports total, reserved, elastic, and spot instance resources consumed.

FAQ

My cluster has 32 ACU of reserved resources. Are these shared between the default resource group and custom resource groups?

It depends on your edition.

In Enterprise Edition and Basic Edition, reserved resources can only be assigned to the user_default group. The serverless group and any custom Job or Interactive resource groups can only use elastic resources.

In Data Lakehouse Edition, reserved resources can be distributed across the user_default group, the serverless group, and custom Job and Interactive resource groups. The amount assigned to user_default is determined by its minimum and maximum computing resource settings. The remaining reserved resources — total reserved minus what is assigned to user_default — can then be assigned to other groups. For example, if user_default is assigned 16 ACU out of a 32 ACU reservation, the remaining 16 ACU are available for other groups.

API reference

Use OpenAPI to create, modify, and delete resource groups, and to attach or detach database accounts.
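
As a hedged sketch, the request for the CreateDBResourceGroup operation might be assembled as below. The parameter names (DBClusterId, GroupName, GroupType) mirror the console fields but are assumptions to verify against the operation's API reference, and the cluster ID is hypothetical. This only builds the payload; sending it requires an authenticated Alibaba Cloud SDK or CLI client.

```python
# Sketch: build a CreateDBResourceGroup request payload.
# Parameter names are assumptions modeled on the console fields.
def build_create_resource_group_request(cluster_id, name, group_type):
    allowed = {"Interactive", "Job", "AI"}
    if group_type not in allowed:
        raise ValueError(f"group_type must be one of {sorted(allowed)}")
    return {
        "DBClusterId": cluster_id,   # target cluster
        "GroupName": name,           # 2-30 chars; letters, digits, underscores
        "GroupType": group_type,     # immutable after creation
    }

# "amv-bp1example" is a hypothetical cluster ID for illustration.
req = build_create_resource_group_request("amv-bp1example", "batch_rg", "Job")
print(req["GroupType"])  # Job
```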