MaxCompute: Limits

Last Updated: Mar 26, 2026

MaxCompute enforces quotas at the compute resource, job, and statement level. This page lists the hard limits for each feature area and indicates which limits are adjustable.

Subscription computing resources

The default subscription quota is 2,000 compute units (CUs). To raise this limit, submit a ticket using your Alibaba Cloud account. MaxCompute engineers review the request within three business days and notify you of the result by text message.

Pay-as-you-go computing resources

The table below shows the maximum CUs a single user can consume simultaneously in a given region. The cap prevents one user from exhausting cluster resources and blocking other users' jobs.

Important

These values are the maximum CUs you can obtain, not a guaranteed allocation. When idle resources are available, MaxCompute may allocate additional CUs to accelerate your queries.

Country or area | Region | Max CUs (pay-as-you-go)
Regions in China | China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), China East 2 Finance, China North 2 Ali Gov, and China South 1 Finance | 2,000
Regions in China | China (Chengdu) and China (Hong Kong) | 500
Other countries or areas | Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), US (Silicon Valley), US (Virginia), UK (London), and UAE (Dubai) | 500

Subscription Tunnel slots

The default quota for subscription Tunnel slots is 500. To purchase more than 500 slots, submit a ticket.

SQL limits

The table below covers limits that apply to SQL statements, tables, and job execution in MaxCompute. The Scope column indicates where the limit applies. The Adjustable column indicates whether the limit can be raised by submitting a ticket.

Item | Limit | Scope | Adjustable | Notes
Table name length | 128 bytes | Table | No | Names may contain only letters, digits, and underscores (_) and must start with a letter.
Comment length | 1,024 bytes | Table | No |
Column definitions per table | 1,200 | Table | No |
Partitions per table | 60,000 | Table | No |
Partition levels per table | 6 | Table | No |
Column record length | 8 MB | Table | No |
Column data type and position | Unmodifiable | Table | No |
View writability | Not writable | Table | No | Views do not support INSERT.
Partitions queryable per query | 10,000 | Query | No |
SELECT output rows | 10,000 | Query | No |
MULTI-INSERT destination tables | 256 | Statement | No |
UNION ALL combined tables | 256 | Statement | No |
MAPJOIN small tables | 128 | Statement | No |
MAPJOIN total memory for small tables | 512 MB | Statement | No | Applies across all small tables in a single statement.
ptinsubq (partition in subquery) rows | 1,000 | Statement | No |
SQL statement length | 2 MB | Statement | No | Applies when executing SQL via the SDK.
IN clause parameters | 1,024 (recommended) | Statement | No | Not a hard limit. Exceeding 1,024 parameters degrades compilation performance.
Java user-defined functions (UDFs) | Cannot be abstract or static | Statement | No |
jobconf.json file size | 1 MB | Job | No | Tables with large numbers of partitions may exceed this limit.
SQL execution plan size | 1 MB | Job | No | Error when exceeded: FAILED: ODPS-0010000:System internal error - The Size of Plan is too large
Max job execution duration | 72 hours | Job | No | Default is 24 hours. Run set odps.sql.job.max.time.hours=72; to extend to 72 hours. Jobs exceeding 72 hours are stopped automatically.
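The 1,024-parameter guidance for IN clauses can be respected client-side by splitting a long value list into chunks before generating SQL. A minimal Python sketch; the helper name and chunk size are illustrative, not part of MaxCompute:

```python
def chunk_in_clauses(column, values, chunk_size=1024):
    """Split a long value list into OR-joined IN clauses so that no
    single IN clause exceeds the recommended 1,024 parameters."""
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    clauses = [
        "{} IN ({})".format(column, ", ".join(str(v) for v in chunk))
        for chunk in chunks
    ]
    return " OR ".join(clauses)

# 2,500 ids produce three IN clauses of at most 1,024 parameters each
predicate = chunk_in_clauses("id", list(range(2500)))
```

The resulting predicate can be interpolated into a WHERE clause, keeping each IN list under the recommended size at the cost of a slightly longer statement.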

For more information, see SQL.

MapReduce limits

The table below covers limits for MapReduce jobs. Items marked Yes in the Adjustable column can be changed using the listed configuration parameter.

Item | Value | Scope | Adjustable | Configuration parameter | Default
Memory per instance (framework + Java Virtual Machine (JVM) heap) | 256 MB–12 GB | Instance | Yes | odps.stage.mapper(reducer).mem and odps.stage.mapper(reducer).jvm.mem | 2,048 MB (framework) + 1,024 MB (JVM)
Retries per instance | 3 | Instance | No | |
Resource repeated reads per instance | 64 | Instance | No | |
String column length | 8 MB | Instance | No | |
Worker timeout (no data read/write or heartbeat via context.progress()) | 1–3,600 seconds | Instance | Yes | odps.function.timeout | 600 seconds
Resources per job | 256 | Job | No | |
Inputs per job | 1,024 | Job | No | |
Input tables per job | 64 | Job | No | |
Outputs per job | 256 | Job | No | |
Custom counters per job | 64 | Job | No | |
Counter group name + counter name total length | 100 characters | Job | No | |
Total resource bytes per job | 2 GB | Job | No | |
Map instances per job | 1–100,000 | Job | Yes | odps.stage.mapper.num | Calculated from split size
Reduce instances per job | 0–2,000 | Job | Yes | odps.stage.reducer.num | 25% of map instance count
Split size | ≥ 1 | Job | Yes | odps.stage.mapper.split.size | 256 MB
Supported field types | BIGINT, DOUBLE, STRING, DATETIME, BOOLEAN | Job | No | |
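The defaults in the table above imply a rough relationship between input size and instance counts: map instances follow from the split size, and reduce instances default to 25% of the map count within the 0–2,000 range. A Python sketch of that arithmetic; the exact rounding MaxCompute applies is an assumption here, so treat the numbers as estimates:

```python
import math

def estimate_instances(input_bytes, split_size_mb=256):
    """Estimate MapReduce instance counts from the documented defaults.
    Map instances come from the split size (clamped to 1-100,000);
    reduce instances default to 25% of the map count (clamped to 0-2,000)."""
    split_bytes = split_size_mb * 1024 * 1024
    mappers = min(max(math.ceil(input_bytes / split_bytes), 1), 100_000)
    reducers = min(mappers // 4, 2_000)
    return mappers, reducers

# A 10 GB input with the default 256 MB split size -> 40 mappers, 10 reducers
estimate_instances(10 * 1024**3)
```

Raising odps.stage.mapper.split.size reduces the mapper count for very large inputs, which is the usual lever when a job would otherwise hit the 100,000-instance cap.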

Additional restrictions:

  • Counter group names and counter names cannot contain number signs (#).

  • MapReduce cannot read data from Object Storage Service (OSS).

  • MapReduce does not support the new data types introduced in MaxCompute V2.0.

Local debug mode limits:

Item | Default | Max
Map instances | 2 | 100
Reduce instances | 1 | 100
Download records per input | 100 | 10,000

For more information, see MapReduce.

PyODPS limits

The following limits apply when developing PyODPS jobs in MaxCompute through DataWorks.

  • Each PyODPS node can process a maximum of 50 MB of data and consume a maximum of 1 GB of memory. DataWorks terminates the node if either limit is exceeded. Avoid writing Python data processing code that operates on large datasets directly in PyODPS jobs.

  • DataWorks limits CPU utilization and memory usage on the gateway to prevent overload. If the system displays Got killed, memory usage has exceeded the limit and the related processes have been terminated. Local data operations are affected by these limits; SQL and DataFrame tasks (except to_pandas) initiated by PyODPS are not.

  • The Python atexit package is not supported. Use the try-finally structure instead.

  • options.tunnel.use_instance_tunnel defaults to False in DataWorks. Set it to True to enable InstanceTunnel globally.

  • Writing and debugging code directly in DataWorks is inefficient. Install an integrated development environment (IDE) on your local machine to write code instead.
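Because the atexit package is unavailable, cleanup logic that would normally be registered with atexit.register() belongs in a finally block. A minimal sketch; the log list stands in for whatever resource needs releasing:

```python
# Cleanup via try-finally instead of atexit: the finally block runs
# whether the work succeeds or raises, just as an atexit hook would
# run at interpreter shutdown.
log = []

def process():
    log.append("opened")
    try:
        log.append("worked")
        # any exception raised here would still trigger the cleanup below
    finally:
        log.append("closed")

process()
# log is now ["opened", "worked", "closed"]
```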

Package limitations:

Because packages such as matplotlib are not available in the DataWorks environment, the following restrictions apply:

Context | Available libraries | Restrictions
DataFrame UDFs | Pure Python libraries and NumPy | No pandas or other third-party libraries. UDFs must be committed to MaxCompute before they can run.
Non-UDF functions | NumPy and pandas (pre-installed) | Third-party packages that contain binary code are not supported.
DataFrame plot function | - | Affected by the absence of matplotlib.
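A function intended for use inside a DataFrame UDF must therefore stick to pure Python and NumPy. The function below is an illustrative example of that style, not part of any MaxCompute API, and can be unit-tested locally before being committed:

```python
import numpy as np

def normalize(value, mean=0.0, std=1.0):
    """A pure-Python/NumPy function of the kind that can run inside a
    DataFrame UDF: no pandas, no third-party packages with binary code."""
    # Guard against division by zero rather than propagating inf/nan.
    return float((np.float64(value) - mean) / std) if std else 0.0
```

Testing such functions locally first avoids the slower commit-and-run cycle that UDF execution on MaxCompute requires.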

For more information, see PyODPS.

Graph limits

The following limits apply to Graph jobs in MaxCompute.

Item | Limit | Notes
Resources per job | 256 | Each table or archive counts as one resource.
Total resource bytes per job | 512 MB |
Inputs per job | 1,024 | Input tables cannot exceed 64.
Outputs per job | 256 |
Custom counters per job | 64 | Counter group names and counter names cannot contain #. The combined length of both names cannot exceed 100 characters.
Output label length | 256 characters | Labels can contain letters, digits, _, #, ., and -. Labels cannot be null or empty strings.
Workers per job | 1,000 (max) | An error is returned if the worker count exceeds this value.
CPU per worker | 200 units (default); range: 50–800 |
Memory per worker | 4,096 MB (default); range: 256 MB–12 GB |
Resource repeated reads per worker | 64 |
split_size | 64 MB (default); must be > 0 and ≤ 9223372036854775807 >> 20 |
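The per-worker ranges above can be checked client-side before a Graph job is submitted. The helper below is illustrative only (MaxCompute performs the real validation server-side):

```python
def validate_graph_worker(cpu=200, memory_mb=4096, split_size_mb=64):
    """Check Graph worker settings against the documented ranges and
    return a list of human-readable violations (empty when valid)."""
    errors = []
    if not 50 <= cpu <= 800:
        errors.append("CPU must be in [50, 800] units")
    if not 256 <= memory_mb <= 12 * 1024:
        errors.append("memory must be in [256 MB, 12 GB]")
    # Upper bound on split_size is 9223372036854775807 >> 20, i.e. (2^63 - 1) >> 20
    if not 0 < split_size_mb <= (2**63 - 1) >> 20:
        errors.append("split_size must be > 0 and <= (2^63 - 1) >> 20 MB")
    return errors

validate_graph_worker()          # defaults are within every range
validate_graph_worker(cpu=1000)  # reports the CPU violation
```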

Sandbox restrictions: GraphLoader, Vertex, and Aggregator run under the Java Sandbox in a cluster environment. The main program of a Graph job is not restricted by the Java Sandbox. For more information, see Java Sandbox.

For more information, see Graph.

Concurrent job limits

MaxCompute limits the number of jobs that can run concurrently within a single project. If you continue to submit jobs when the number of concurrent jobs reaches the upper limit, an error message appears:

com.aliyun.odps.OdpsException: Request rejected by flow control. You have exceeded the limit for the number of tasks you can run concurrently in this project. Please try later

Region | Max concurrent jobs per project
China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), and China (Chengdu) | 2,500
China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), US (Silicon Valley), US (Virginia), UK (London), and UAE (Dubai) | 1,000
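Submitters that may hit this limit typically retry with backoff when the flow-control error appears. A hedged Python sketch: the helper name is illustrative, and matching on the error message text is an assumption, not a documented MaxCompute SDK contract:

```python
import time

def submit_with_retry(submit, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a job submission rejected by flow control, with exponential
    backoff. `submit` is any callable that raises an exception whose
    message contains 'Request rejected by flow control' when the
    per-project concurrency limit is hit (illustrative assumption)."""
    for attempt in range(max_attempts):
        try:
            return submit()
        except Exception as e:
            if "Request rejected by flow control" not in str(e):
                raise  # unrelated failure: do not retry
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the flow-control error
            sleep(base_delay * 2 ** attempt)
```

Injecting `sleep` as a parameter keeps the backoff testable; in production the default time.sleep applies.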
