MaxCompute supports the pay-as-you-go billing method for SQL, MapReduce, Spark, Mars, and MaxCompute Query Acceleration (MCQA) jobs.
- Pay-as-you-go: You are charged for each job based on the resources you consume after you run a job. This billing method is used for standard SQL jobs, SQL jobs that reference external tables, MapReduce jobs, Spark jobs, Mars jobs, and MCQA jobs.
- Subscription: You purchase a fixed amount of computing resources in advance for a subscription period.
MaxCompute supports SQL, MapReduce, Spark, Mars, MCQA, Graph, and machine learning jobs. You are charged for SQL, MapReduce, and Spark jobs, for Mars jobs since September 1, 2020, and for MCQA jobs since October 1, 2020. You are not charged an additional fee for the use of UDFs, and you are not charged for other types of computing jobs.
Subscription
Resource | Memory | CPU | Price (USD/month) |
---|---|---|---|
1 CU | 4 GB | 1 core | 22.0 |
After you purchase subscription computing resources, you can monitor and manage the resources by using MaxCompute Management. For more information, see Use MaxCompute Management.
If this is the first time you use MaxCompute, we recommend that you select the pay-as-you-go billing method. If you select the subscription billing method, you purchase a specific amount of computing resources. As a new user, you may consume fewer resources than what you purchase. Some resources may remain idle. The pay-as-you-go billing method is more cost-effective because you are charged based on the exact amount of resources that you consume.
Billing for standard SQL jobs
Each time you run a standard SQL job, MaxCompute calculates the fee based on the amount of input data and the complexity of the SQL statements. On the following day, MaxCompute aggregates the fees for all executed SQL jobs into one bill within your Alibaba Cloud account and deducts the fees from the balance of your Alibaba Cloud account.
Fee for a standard SQL job = Amount of input data × SQL complexity × Unit price of SQL jobs
Billable item | Unit price |
---|---|
Standard SQL job | USD 0.0438/GB |
- Amount of input data: the amount of data scanned by an SQL job. Most SQL jobs support partition filtering and column pruning, so this value is usually less than the amount of data in the source table.
  - Partition filtering: If you submit an SQL statement that contains the WHERE clause WHERE ds > 20130101 (ds is the partition key column), you are charged only for the data in the partitions that are read.
  - Column pruning: If you submit the SQL statement SELECT f1,f2,f3 FROM t1;, you are charged only for the data in columns f1, f2, and f3 of table t1. You are not charged for the data in the other columns.
- SQL complexity: The complexity of an SQL job is calculated based on the number of keywords in the SQL statements of the job.
  - Number of SQL keywords = Number of JOIN clauses + Number of GROUP BY clauses + Number of ORDER BY clauses + Number of DISTINCT clauses + Number of window functions + MAX(Number of INSERT statements - 1, 1)
  - Calculation of SQL complexity:
    - If the number of keywords is less than or equal to 3, the complexity is 1.
    - If the number of keywords is greater than or equal to 4 and less than or equal to 6, the complexity is 1.5.
    - If the number of keywords is greater than or equal to 7 and less than or equal to 19, the complexity is 2.
    - If the number of keywords is greater than or equal to 20, the complexity is 4.
Before you run an SQL job, you can estimate its fee by executing the COST SQL command:
COST SQL <SQL statement>;
odps@ $odps_project >COST SQL SELECT DISTINCT total1 FROM
(SELECT id1, COUNT(f1) AS total1 FROM in1 GROUP BY id1) tmp1
ORDER BY total1 DESC LIMIT 100;
Input:1825361100.8 Bytes
Complexity:1.5
The estimated input is 1825361100.8 bytes (about 1.7 GB) and the complexity is 1.5, so the estimated fee is 1.7 × 1.5 × 0.0438 = 0.11 USD.
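As a minimal illustration (not an official MaxCompute tool), the following Python sketch reproduces this estimate by combining the complexity tiers and the unit price listed above; the function names are hypothetical.

```python
# Hypothetical helpers that mirror the billing rules above; not part of any MaxCompute SDK.

def sql_complexity(keyword_count: int) -> float:
    """Map the number of SQL keywords to the SQL complexity factor."""
    if keyword_count <= 3:
        return 1.0
    if keyword_count <= 6:
        return 1.5
    if keyword_count <= 19:
        return 2.0
    return 4.0

def standard_sql_fee(input_bytes: float, keyword_count: int, unit_price_per_gb: float = 0.0438) -> float:
    """Fee = amount of input data (GB) x SQL complexity x unit price (USD/GB)."""
    input_gb = input_bytes / 1024 ** 3
    return input_gb * sql_complexity(keyword_count) * unit_price_per_gb

# The COST SQL example above: DISTINCT + GROUP BY + ORDER BY + MAX(0 - 1, 1) = 4 keywords,
# so the complexity is 1.5, and the input is 1825361100.8 bytes (1.7 GB).
print(round(standard_sql_fee(1825361100.8, keyword_count=4), 2))  # 0.11 (USD)
```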
- The bill is generated before 06:00 the following day.
- You are not charged for failed SQL jobs.
- You are charged for SQL jobs based on the amount of data after compression, in the same way that data is measured for the storage service.
Billing for SQL jobs that reference external tables
Since March 2019, you are charged for MaxCompute SQL jobs that reference external tables based on the pay-as-you-go billing method.
Fee for an SQL job that references external tables = Amount of input data × Unit price
Billable item | Unit price |
---|---|
SQL jobs that reference external tables | USD 0.0044/GB |
- The bill is generated before 06:00 the following day.
- For jobs that involve both internal and external tables, MaxCompute calculates the fees for the internal-table portion and the external-table portion separately.
- You cannot estimate the fees for SQL jobs that reference external tables in advance.
Pay-as-you-go billing for MapReduce jobs
Since December 19, 2017, you are charged for MaxCompute MapReduce jobs based on the pay-as-you-go billing method.
Fees for MapReduce jobs of the day = Number of billable hours × Unit price (USD)
Billable item | Unit price |
---|---|
MapReduce job | USD 0.0690/hour/job |
Number of billable hours of a MapReduce job = Number of job running hours × Number of CPU cores consumed
For example, if a MapReduce job runs for 0.5 hours and consumes 100 CPU cores, the number of billable hours is 50: 100 cores × 0.5 hours = 50 billable hours.
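A minimal Python sketch of this calculation (hypothetical helper name, using the unit price listed above):

```python
# Hypothetical helper that mirrors the MapReduce billing rule above.

def mapreduce_fee(cpu_cores: int, running_hours: float, unit_price_per_hour: float = 0.0690) -> float:
    """Fee = billable hours x unit price, where billable hours = CPU cores x running hours."""
    billable_hours = cpu_cores * running_hours
    return billable_hours * unit_price_per_hour

# The example above: 100 CPU cores for 0.5 hours -> 50 billable hours.
print(round(mapreduce_fee(cpu_cores=100, running_hours=0.5), 2))  # 3.45 (USD)
```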
- The bill is generated before 06:00 the following day.
- You are not charged for failed MapReduce jobs.
- The queuing time of jobs is not counted in the billable hours.
- If you select the subscription billing method for MaxCompute, you can run MapReduce jobs free of charge within the subscription period.
Pay-as-you-go billing for Spark jobs
Fees for Spark jobs of the day = Number of billable hours × Unit price (USD 0.1041/hour/job)
Number of billable hours of a Spark job = MAX[Number of CPU cores × Number of job running hours, ROUND UP(Memory size × Number of job running hours/4)]
- The number of billable hours is calculated based on the number of CPU cores consumed, the memory size, and the number of job running hours.
- One billable hour is equivalent to 1 CPU core and 4 GB of memory.
For example, if a Spark job runs for 1 hour and consumes 2 CPU cores and 5 GB of memory, the number of billable hours is 2: MAX[2 × 1, ROUND UP(5 × 1/4)] = 2. If a Spark job runs for 1 hour and consumes 2 CPU cores and 10 GB of memory, the number of billable hours is 3: MAX[2 × 1, ROUND UP(10 × 1/4)] = 3.
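The following Python sketch mirrors this formula (hypothetical helper names; the same formula and unit price also apply to Mars jobs, described in the next section):

```python
import math

# Hypothetical helpers that mirror the Spark/Mars billing rule above.

def billable_hours(cpu_cores: int, memory_gb: float, running_hours: float) -> float:
    """MAX[CPU cores x hours, ROUND UP(memory x hours / 4)]; 1 billable hour = 1 core + 4 GB."""
    return max(cpu_cores * running_hours, math.ceil(memory_gb * running_hours / 4))

def spark_fee(cpu_cores: int, memory_gb: float, running_hours: float, unit_price_per_hour: float = 0.1041) -> float:
    return billable_hours(cpu_cores, memory_gb, running_hours) * unit_price_per_hour

# The examples above:
print(billable_hours(2, 5, 1))         # MAX[2, ROUND UP(1.25)] = 2
print(billable_hours(2, 10, 1))        # MAX[2, ROUND UP(2.5)]  = 3
print(round(spark_fee(2, 10, 1), 4))   # 3 x 0.1041 = 0.3123 (USD)
```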
- The bill is generated before 06:00 the following day.
- The queuing time of jobs is not counted in the billable hours.
- The fee for the same job may vary based on the amount of specified resources.
- If you select the subscription billing method for MaxCompute, you can run Spark jobs free of charge within the subscription period.
- For any questions about the billing for Spark jobs, submit a ticket.
Pay-as-you-go billing for Mars jobs
Fees for Mars jobs of the day = Number of billable hours × Unit price (USD 0.1041/hour/job)
- The calculation is based on the number of CPU cores and the memory size that are consumed by the job.
- One billable hour is equivalent to 1 CPU core and 4 GB of memory.
- The number of billable hours of a Mars job is calculated based on the following formula: MAX[Number of CPU cores × Number of job running hours, ROUND UP(Memory size × Number of job running hours/4)]. For example, if a Mars job runs for 1 hour and consumes 2 CPU cores and 5 GB of memory, the number of billable hours is 2: MAX[2 × 1, ROUND UP(5 × 1/4)] = 2. If a Mars job runs for 1 hour and consumes 2 CPU cores and 10 GB of memory, the number of billable hours is 3: MAX[2 × 1, ROUND UP(10 × 1/4)] = 3.
After a Mars job is run, MaxCompute calculates the billable hours of the job. On the following day, MaxCompute aggregates the fees for all executed Mars jobs into one bill within your Alibaba Cloud account. Then, MaxCompute deducts the fees from the balance of your Alibaba Cloud account.
- The bill is generated before 06:00 the following day.
- The queuing time of jobs is not counted in the billable hours.
- The fee for the same job may vary based on the amount of specified resources.
- If you select the subscription billing method for MaxCompute, you can run Mars jobs free of charge within the subscription period.
- For any questions about the billing for Mars jobs, submit a ticket.
Pay-as-you-go billing for MCQA jobs
Since October 1, 2020, you are charged for MCQA jobs based on the pay-as-you-go billing method. For more information, see Overview.
Each time you run an MCQA job, MaxCompute calculates the fee based on the amount of input data of the job. On the following day, MaxCompute aggregates the fees for all executed MCQA jobs into one bill within your Alibaba Cloud account.
Fee for an MCQA job = Amount of input data for the MCQA job × Unit price (USD 0.0438/GB)
- MCQA jobs run on dedicated computing resources. Even if you select the subscription billing method for MaxCompute, you are charged for MCQA jobs based on the amount of data that is scanned when the jobs are run.
- MaxCompute calculates the fee based on the amount of data scanned by each MCQA job, with a minimum of 10 MB counted per job. You are also charged for canceled MCQA jobs based on the amount of data scanned. See the sketch after this list.
- The bill is generated before 06:00 the following day.
- No fee is generated if no query is performed.
- By default, MaxCompute performs column-oriented storage and compression on data. MaxCompute calculates the amount of scanned data based on the compressed data.
- When you query a partitioned table, you can use partition filtering conditions to reduce the amount of scanned data and improve query performance.
- MCQA is in public preview in the following regions: China (Hong Kong), Singapore (Singapore), Indonesia (Jakarta), India (Mumbai), and Malaysia (Kuala Lumpur). MCQA will be gradually available in other regions.
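As referenced in the notes above, the following Python sketch illustrates the MCQA fee calculation, assuming that the 10 MB minimum is applied per job before the scanned amount is converted to GB (hypothetical helper name):

```python
# Hypothetical helper that mirrors the MCQA billing rule above.

def mcqa_fee(scanned_bytes: float, unit_price_per_gb: float = 0.0438) -> float:
    """Fee = scanned data (GB) x unit price, with at least 10 MB counted per MCQA job."""
    min_bytes = 10 * 1024 ** 2                      # 10 MB minimum per job (see notes above)
    billable_gb = max(scanned_bytes, min_bytes) / 1024 ** 3
    return billable_gb * unit_price_per_gb

# A 1 MB query is billed as 10 MB; a 2 GB scan is billed as is.
print(round(mcqa_fee(1 * 1024 ** 2), 6))   # ~0.000428 (USD), the 10 MB minimum applies
print(round(mcqa_fee(2 * 1024 ** 3), 4))   # 2 x 0.0438 = 0.0876 (USD)
```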