    Limits


    Last Updated: Jan 18, 2021

    This topic describes the limits on using various modules in MaxCompute.

For details about the limits that apply to each module, see the following topics:
    • Limits on SQL development
    • Limits on data upload and download
    • Limits on operation commands
    • Limits on MapReduce
    • Limits on security configurations
    • Limits on MaxCompute Graph
    • Limits on MaxCompute Lightning
