MaxCompute enforces quotas at the compute resource, job, and statement level. This page lists the hard limits for each feature area and indicates which limits are adjustable.
Subscription computing resources
The default subscription quota is 2,000 compute units (CUs). To raise this limit, submit a ticket using your Alibaba Cloud account. MaxCompute engineers review the request within three business days and notify you of the result by text message.
Pay-as-you-go computing resources
The table below shows the maximum CUs a single user can consume simultaneously in a given region. The cap prevents one user from exhausting cluster resources and blocking other users' jobs.
These values are the maximum CUs you can obtain, not a guaranteed minimum. MaxCompute may allocate additional CUs to accelerate your queries.
| Country or area | Region | Max CUs (pay-as-you-go) |
|---|---|---|
| Regions in China | China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), China East 2 Finance, China North 2 Ali Gov, and China South 1 Finance | 2,000 |
| Regions in China | China (Chengdu) and China (Hong Kong) | 500 |
| Other countries or areas | Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), US (Silicon Valley), US (Virginia), UK (London), and UAE (Dubai) | 500 |
Subscription Tunnel slots
The default quota for subscription Tunnel slots is 500. To purchase more than 500 slots, submit a ticket.
SQL limits
The table below covers limits that apply to SQL statements, tables, and job execution in MaxCompute. The Scope column indicates where the limit applies. The Adjustable column indicates whether the limit can be raised by submitting a ticket.
| Item | Limit | Scope | Adjustable | Notes |
|---|---|---|---|---|
| Table name length | 128 bytes | Table | No | Names may contain only letters, digits, and underscores (_) and must start with a letter. |
| Comment length | 1,024 bytes | Table | No | |
| Column definitions per table | 1,200 | Table | No | |
| Partitions per table | 60,000 | Table | No | |
| Partition levels per table | 6 | Table | No | |
| Column record length | 8 MB | Table | No | |
| Column data type and position | Unmodifiable | Table | No | |
| View writability | Not writable | Table | No | Views do not support INSERT. |
| Partitions queryable per query | 10,000 | Query | No | |
| SELECT output rows | 10,000 | Query | No | |
| MULTI-INSERT destination tables | 256 | Statement | No | |
| UNION ALL combined tables | 256 | Statement | No | |
| MAPJOIN small tables | 128 | Statement | No | |
| MAPJOIN total memory for small tables | 512 MB | Statement | No | Applies across all small tables in a single statement. |
| ptinsubq (partition in subquery) rows | 1,000 | Statement | No | |
| SQL statement length | 2 MB | Statement | No | Applies when executing SQL via the SDK. |
| IN clause parameters | 1,024 (recommended) | Statement | — | Not a hard limit. Exceeding 1,024 parameters degrades compilation performance. |
| Java user-defined functions (UDFs) | Cannot be abstract or static | Statement | No | |
| jobconf.json file size | 1 MB | Job | No | Tables with large numbers of partitions may exceed this limit. |
| SQL execution plan size | 1 MB | Job | No | Error when exceeded: FAILED: ODPS-0010000:System internal error - The Size of Plan is too large |
| Max job execution duration | 72 hours | Job | No | Default is 24 hours. Run set odps.sql.job.max.time.hours=72; to extend to 72 hours. Jobs exceeding 72 hours are stopped automatically. |
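The IN-clause row above is a recommendation rather than a hard cap, so a practical workaround for long value lists is to split them into batches of at most 1,024 parameters and run each predicate separately (or combine them with OR). A minimal client-side sketch, assuming numeric values; the helper name and query shape are illustrative, not part of MaxCompute:

```python
# Hypothetical helper: split a long IN list into predicates of at most
# 1,024 parameters each, the recommended cap for MaxCompute IN clauses.
def batched_in_clauses(column, values, batch_size=1024):
    """Yield one 'col IN (...)' predicate per batch of values."""
    values = list(values)
    for i in range(0, len(values), batch_size):
        batch = values[i:i + batch_size]
        yield f"{column} IN ({', '.join(str(v) for v in batch)})"

# 2,500 values become three predicates: 1,024 + 1,024 + 452 parameters.
clauses = list(batched_in_clauses("id", range(2500)))
```

Each resulting predicate stays under the recommended parameter count, so compilation performance is not degraded.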
For more information, see SQL.
MapReduce limits
The table below covers limits for MapReduce jobs. Items marked Yes in the Adjustable column can be changed using the listed configuration parameter.
| Item | Value | Scope | Adjustable | Configuration parameter | Default |
|---|---|---|---|---|---|
| Memory per instance (framework + Java Virtual Machine (JVM) heap) | 256 MB–12 GB | Instance | Yes | odps.stage.mapper(reducer).mem and odps.stage.mapper(reducer).jvm.mem | 2,048 MB (framework) + 1,024 MB (JVM) |
| Retries per instance | 3 | Instance | No | — | — |
| Resource repeated reads per instance | 64 | Instance | No | — | — |
| String column length | 8 MB | Instance | No | — | — |
| Worker timeout (no data read/write or heartbeat via context.progress()) | 1–3,600 seconds | Instance | Yes | odps.function.timeout | 600 seconds |
| Resources per job | 256 | Job | No | — | — |
| Inputs per job | 1,024 | Job | No | — | — |
| Input tables per job | 64 | Job | No | — | — |
| Outputs per job | 256 | Job | No | — | — |
| Custom counters per job | 64 | Job | No | — | — |
| Counter group name + counter name total length | 100 characters | Job | No | — | — |
| Total resource bytes per job | 2 GB | Job | No | — | — |
| Map instances per job | 1–100,000 | Job | Yes | odps.stage.mapper.num | Calculated from split size |
| Reduce instances per job | 0–2,000 | Job | Yes | odps.stage.reducer.num | 25% of map instance count |
| Split size | ≥ 1 | Job | Yes | odps.stage.mapper.split.size | 256 MB |
| Supported field types | BIGINT, DOUBLE, STRING, DATETIME, BOOLEAN | Job | No | — | — |
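The defaults in this table interact: the map instance count is derived from the input size and the split size, and the reduce instance count defaults to 25% of the map count. A rough client-side estimate, assuming a simple ceiling division and the documented clamps (this is an illustrative approximation, not the scheduler's exact algorithm):

```python
import math

def estimate_instances(input_bytes, split_size_mb=256):
    """Estimate default map/reduce instance counts for a MapReduce job.

    Mirrors the documented defaults: maps = ceil(input / split size),
    clamped to 1-100,000; reducers = 25% of maps, clamped to 0-2,000.
    """
    split_bytes = split_size_mb * 1024 * 1024
    maps = min(max(math.ceil(input_bytes / split_bytes), 1), 100_000)
    reducers = min(max(round(maps * 0.25), 0), 2_000)
    return maps, reducers

# e.g. a 10 GB input with the default 256 MB split size:
maps, reducers = estimate_instances(10 * 1024**3)  # -> (40, 10)
```

Raising odps.stage.mapper.split.size reduces the map count for the same input, which in turn lowers the default reducer count.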
Additional restrictions:
- Counter group names and counter names cannot contain number signs (#).
- MapReduce cannot read data from Object Storage Service (OSS).
- MapReduce does not support the new data types introduced in MaxCompute V2.0.
Local debug mode limits:
| Item | Default | Max |
|---|---|---|
| Map instances | 2 | 100 |
| Reduce instances | 1 | 100 |
| Download records per input | 100 | 10,000 |
For more information, see MapReduce.
PyODPS limits
The following limits apply when developing PyODPS jobs in MaxCompute through DataWorks.
Each PyODPS node can process a maximum of 50 MB of data and consume a maximum of 1 GB of memory. DataWorks terminates the node if either limit is exceeded. Avoid writing Python data processing code that operates on large datasets directly in PyODPS jobs.
- DataWorks limits CPU utilization and memory usage on the gateway to prevent overload. If the system displays Got killed, memory usage has exceeded the limit and the related processes have been terminated. Local data operations are affected by these limits; SQL and DataFrame tasks (except to_pandas) initiated by PyODPS are not.
- The Python atexit package is not supported. Use the try-finally structure instead.
- options.tunnel.use_instance_tunnel defaults to False in DataWorks. Set it to True to enable InstanceTunnel globally.
- Writing and debugging code in DataWorks has limited efficiency. Install an integrated development environment (IDE) on your local machine to write code.
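For example, enabling InstanceTunnel globally is a one-line configuration change at the top of a PyODPS node (a configuration snippet; it assumes the odps package that PyODPS nodes provide is importable):

```python
from odps import options

# InstanceTunnel is disabled by default in DataWorks; enable it globally
# so instance results are fetched through the Tunnel service.
options.tunnel.use_instance_tunnel = True
```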
Package limitations:
Because packages such as matplotlib are not available in the DataWorks environment, the following restrictions apply:
| Context | Available libraries | Restrictions |
|---|---|---|
| DataFrame UDFs | Pure Python libraries and NumPy | No pandas or other third-party libraries. UDFs must be committed to MaxCompute before they can run. |
| Non-UDF functions | NumPy and pandas (pre-installed) | Third-party packages that contain binary code are not supported. |
| DataFrame plot function | — | Affected by the absence of matplotlib. |
For more information, see PyODPS.
Graph limits
The following limits apply to Graph jobs in MaxCompute.
| Item | Limit | Notes |
|---|---|---|
| Resources per job | 256 | Each table or archive counts as one resource. |
| Total resource bytes per job | 512 MB | |
| Inputs per job | 1,024 | Input tables cannot exceed 64. |
| Outputs per job | 256 | |
| Custom counters per job | 64 | Counter group names and counter names cannot contain #. The combined length of both names cannot exceed 100 characters. |
| Output label length | 256 characters | Labels can contain letters, digits, _, #, ., and -. Labels cannot be null or empty strings. |
| Workers per job | 1,000 (max) | An error is returned if the worker count exceeds this value. |
| CPU per worker | 200 units (default); range: 50–800 | |
| Memory per worker | 4,096 MB (default); range: 256 MB–12 GB | |
| Resource repeated reads per worker | 64 | |
| split_size | 64 MB (default); must be greater than 0 and must not exceed 9223372036854775807 >> 20 | |
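The split_size upper bound is written as a bit shift; in plain numbers it is the largest signed 64-bit integer divided by 2^20:

```python
# Upper bound for split_size: the max signed 64-bit value (2**63 - 1)
# shifted right by 20 bits, i.e. expressed in 2**20-sized units.
max_split = 9223372036854775807 >> 20
print(max_split)  # -> 8796093022207
```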
Sandbox restrictions: GraphLoader, Vertex, and Aggregator run under the Java Sandbox in a cluster environment. The main program of a Graph job is not restricted by the Java Sandbox. For more information, see Java Sandbox.
For more information, see Graph.
Concurrent job limits
MaxCompute limits the number of jobs that can run concurrently within a single project. If you continue to submit jobs when the number of concurrent jobs reaches the upper limit, an error message appears:
com.aliyun.odps.OdpsException: Request rejected by flow control. You have exceeded the limit for the number of tasks you can run concurrently in this project. Please try later

| Region | Max concurrent jobs per project |
|---|---|
| China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), and China (Chengdu) | 2,500 |
| China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), US (Silicon Valley), US (Virginia), UK (London), and UAE (Dubai) | 1,000 |
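When submissions hit this cap, a common client-side pattern is to retry with exponential backoff instead of failing outright. A minimal sketch: the submit callable and RuntimeError stand in for the real SDK call and the OdpsException shown above.

```python
import time

def submit_with_backoff(submit, max_retries=5, base_delay=2.0):
    """Retry a job submission that may be rejected by flow control.

    `submit` is any callable that raises RuntimeError on rejection; in
    real code this would wrap the SDK call that raises OdpsException.
    This retry loop is an illustrative pattern, not part of the SDK.
    """
    for attempt in range(max_retries):
        try:
            return submit()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

Backoff spreads retries out over time, so a burst of rejected jobs does not immediately re-saturate the per-project concurrency limit.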