Resources are files or programs that MaxCompute jobs need at runtime. To run a UDF (User-Defined Function) or MapReduce job, you upload the compiled code or data file to MaxCompute as a resource, then reference it when creating or running the job. MaxCompute downloads the resource automatically at execution time.
How resources work
- Write your UDF or MapReduce program.
- Package the compiled code (or raw script) and upload it to MaxCompute as a resource.
- Reference the resource when you create or run a job — MaxCompute downloads it at execution time.
For step-by-step instructions, see Resource operations (CLI) or Manage MaxCompute resources (console).
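The steps above can be sketched as a session in the MaxCompute client (odpscmd). The JAR name, class name, and function name here are placeholders, not values from this document:

```sql
-- 1. Upload the compiled JAR as a resource (hypothetical file name).
add jar my_udf.jar;

-- 2. Register a function that references the resource
--    (com.example.Lower and my_lower are illustrative).
create function my_lower as 'com.example.Lower' using 'my_udf.jar';

-- 3. Call the function in SQL; MaxCompute downloads my_udf.jar
--    automatically when the job runs.
select my_lower(col1) from my_table;
```

The same pattern applies to MapReduce jobs: upload the resource once, then reference it by name each time you submit a job.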
Resource types
Each resource has a maximum size of 2,048 MB. The table below shows which type to use based on what you are uploading and how you plan to use it:
| Type | What to upload | Supported formats | Typical use |
|---|---|---|---|
| JAR | Compiled JAR package for direct execution | .jar | Java UDFs, MapReduce jobs |
| File | Files in .zip, .so, or .jar format | .zip, .so, .jar | Shared libraries, auxiliary files |
| Archive | Compressed bundle containing multiple files | .zip, .tgz, .tar.gz, .tar, .jar | Multi-file dependencies |
| Python | Python script for registering a Python UDF | — | Python UDFs |
| Table | Existing MaxCompute table | — | Input data for UDFs and MapReduce |
Tables referenced by MapReduce support only the following field types: BIGINT, DOUBLE, STRING, DATETIME, and BOOLEAN.
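Each type in the table maps to its own variant of the `add` command in the MaxCompute client. A sketch, with every file name, table name, and alias as a placeholder:

```sql
add jar my_udf.jar;              -- JAR resource
add file libfoo.so;              -- File resource (.zip, .so, or .jar)
add archive deps.tar.gz;         -- Archive resource
add py my_udf.py;                -- Python resource
add table my_table as my_res;    -- Table resource, registered under an alias
```

Re-uploading a resource of the same name fails unless you overwrite it, so pick stable names for resources that jobs reference repeatedly.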
Limitations
Access restrictions apply when UDFs and MapReduce read resources. For details, see Limits.
What's next
- Create and use MaxCompute resources — upload resources and reference them in your code
- Resource samples — working code examples for resources
- Java UDFs — Java-specific patterns for UDF resources
- MapReduce — how MapReduce jobs consume resources