This topic describes the billing methods for computing jobs in MaxCompute.

MaxCompute supports the following two billing methods for computing jobs:
  • Pay-as-you-go: Expenses are charged after a job is executed, based on the resources that the job consumes, such as the input data size for SQL jobs or the billable hours for MapReduce and Spark jobs. This billing method applies to standard SQL jobs, SQL jobs that process data in external tables, MapReduce jobs, and Spark jobs.
  • Subscription: You purchase and reserve computing resources in advance for a fixed monthly fee.

Currently, MaxCompute supports SQL, user-defined function (UDF), MapReduce, Graph, and machine learning jobs. Expenses have been charged for MapReduce jobs since December 19, 2017, and are also charged for SQL jobs, excluding UDF jobs. There is no immediate plan to charge for other types of computing jobs.

Subscription

You can purchase computing resources in advance, and MaxCompute reserves them for you. The basic unit of such resources is the compute unit (CU). One CU includes 1 CPU core and 4 GB of memory. Resources purchased by using the subscription billing method are reserved for computing jobs, such as SQL, MapReduce, and Spark jobs.
Resource | Memory | CPU        | Price (USD/month)
1 CU     | 4 GB   | 1 CPU core | 22.0
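The monthly fee scales linearly with the number of CUs you reserve. A minimal sketch in Python (the function name is hypothetical; the USD 22.0 rate comes from the table above):

# Sketch: monthly subscription fee for reserved CUs.
CU_PRICE_USD_PER_MONTH = 22.0  # rate from the table above

def subscription_fee(num_cus, months=1):
    # 1 CU = 1 CPU core + 4 GB memory, reserved for the whole period.
    return num_cus * months * CU_PRICE_USD_PER_MONTH

print(subscription_fee(10))  # 10 CUs for one month: 10 x 22.0 = 220.0 USD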

After purchasing computing resources by using the subscription billing method, you can monitor and manage the resources in MaxCompute CU Management. For more information, see MaxCompute Manager.

If you are a new user, we recommend that you select the pay-as-you-go billing method, where expenses are charged after computing jobs are completed based on the input data size. In the initial stage, you are likely to consume only a small amount of resources, so resources purchased with the subscription billing method may sit idle. For new users with light data requirements, pay-as-you-go usually costs less.

Pay-as-you-go for standard SQL jobs

For standard SQL jobs, expenses are charged after the jobs are completed. Each time you run a standard SQL job, MaxCompute calculates the expense based on the input data size and the complexity of the job. The expenses calculated on the current day are billed together on the next day.

The following formula is used to calculate the expense of a standard SQL job:
Expense of a standard SQL job = Input data size × SQL complexity × Unit price for standard SQL computing
The following table lists the unit price for standard SQL computing.
Item                   | Unit price
Standard SQL computing | USD 0.0438 per GB
  • Input data size: the actual size of the data that an SQL statement scans. Most SQL statements support partition filtering and column pruning. Therefore, the input data size is generally much smaller than the size of the source table.
    • Column pruning: For example, if you submit the SQL statement select f1,f2,f3 from t1;, only the data in columns f1, f2, and f3 of table t1 is billed.
    • Partition filtering: For example, if you submit an SQL statement containing the WHERE clause where ds>"20130101", in which ds is the partition key column, only the data in the partitions that are actually read is billed.
  • SQL complexity: MaxCompute counts the keywords in an SQL statement and then calculates the SQL complexity, as shown in the sketch after this list. The calculation details are as follows:
    • Number of keywords = Number of JOIN clauses + Number of GROUP BY clauses + Number of ORDER BY clauses + Number of DISTINCT clauses + Number of window functions + Max(Number of INSERT INTO clauses - 1, 1)
    • The relationship between the number of keywords in SQL statements and the SQL complexity is as follows:
      • If the number of keywords is less than or equal to 3, the SQL complexity is 1.
      • If the number of keywords is greater than 3 but less than or equal to 6, the SQL complexity is 1.5.
      • If the number of keywords is greater than 6 but less than or equal to 19, the SQL complexity is 2.
      • If the number of keywords is greater than or equal to 20, the SQL complexity is 4.
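The keyword formula and the tier rules above translate directly into code. A minimal sketch in Python (the function names are hypothetical; the inputs are the clause counts from the formula above):

# Sketch: SQL complexity from keyword counts, per the rules above.
def keyword_count(joins=0, group_bys=0, order_bys=0, distincts=0,
                  window_functions=0, insert_intos=0):
    # The INSERT INTO term is Max(Number of INSERT INTO clauses - 1, 1).
    return (joins + group_bys + order_bys + distincts
            + window_functions + max(insert_intos - 1, 1))

def sql_complexity(keywords):
    if keywords <= 3:
        return 1.0
    if keywords <= 6:
        return 1.5
    if keywords <= 19:
        return 2.0
    return 4.0

# DISTINCT + GROUP BY + ORDER BY plus max(0 - 1, 1) = 1 gives 4 keywords:
print(sql_complexity(keyword_count(group_bys=1, order_bys=1, distincts=1)))  # 1.5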
You can estimate the input data size and SQL complexity of a statement by using the following command:
cost sql <SQL statement>;
An example is as follows:
odps@ $odps_project >cost sql SELECT DISTINCT total1 FROM
(SELECT id1, COUNT(f1) AS total1 FROM in1 GROUP BY id1) tmp1
ORDER BY total1 DESC LIMIT 100;
Input:1825361100.8 Bytes
Complexity:1.5
In this example, the SQL statement contains one DISTINCT keyword, one GROUP BY keyword, and one ORDER BY keyword. Together with the Max(Number of INSERT INTO clauses - 1, 1) = 1 term, the number of keywords is 4, so the SQL complexity is 1.5. The input data size is about 1.7 GB. Therefore, the actual expense is as follows:
1.7 × 1.5 × 0.0438 ≈ USD 0.11
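The same arithmetic in Python, reproducing the worked example (a sketch; the 1 GB = 1024³ bytes conversion is an assumption that matches the reported 1.7 GB):

# Sketch: expense of the standard SQL job estimated above.
UNIT_PRICE_USD_PER_GB = 0.0438     # standard SQL computing

input_bytes = 1825361100.8         # Input reported by cost sql
complexity = 1.5                   # Complexity reported by cost sql
input_gb = input_bytes / 1024**3   # = 1.7 GB

expense = input_gb * complexity * UNIT_PRICE_USD_PER_GB
print(round(expense, 2))           # 0.11 (USD)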
Note
  • The bill is generated before 06:00 on the next day. After a computing job runs successfully, MaxCompute measures the input data size and calculates the SQL complexity. After the bill is generated, the expense is automatically deducted from your account balance. If the computing job fails, no expense is charged.
  • Similar to the billing for storage, SQL computing jobs are billed based on the size of compressed data.

Pay-as-you-go for SQL jobs that process data in external tables

MaxCompute has charged for SQL jobs that process data in external tables since March 2019. The following formula is used to calculate the expense of such a job:
Expense of an SQL job that processes data in external tables = Input data size × Unit price for SQL computing that involves external tables
The following table lists the unit price for SQL computing that involves external tables.
Item                                        | Unit price
SQL computing that involves external tables | USD 0.0044 per GB
The complexity coefficient for SQL jobs that process data in external tables is fixed at 1, so the expense is simply the input data size multiplied by USD 0.0044 per GB. The bill for the expenses calculated on the current day is generated on the next day, and the expenses are automatically deducted from your account balance after the bill is generated.
Note
  • If a job processes data in both internal and external tables, the two portions are billed separately at their respective unit prices, as shown in the sketch after this note.
  • The expense of an SQL statement that processes data in external tables cannot be estimated in advance.
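A minimal sketch in Python of how the two unit prices combine when one job scans both internal and external tables (the function is hypothetical; each portion is billed at its own rate, as the first note above describes):

# Sketch: internal and external portions of a job are billed separately.
INTERNAL_USD_PER_GB = 0.0438   # standard SQL computing (complexity applies)
EXTERNAL_USD_PER_GB = 0.0044   # external-table SQL computing (complexity = 1)

def sql_job_expense(internal_gb, external_gb, complexity=1.0):
    return (internal_gb * complexity * INTERNAL_USD_PER_GB
            + external_gb * EXTERNAL_USD_PER_GB)

# 10 GB internal at complexity 1.5 plus 100 GB external:
# 10 x 1.5 x 0.0438 + 100 x 0.0044 = 0.657 + 0.44
print(round(sql_job_expense(10, 100, complexity=1.5), 3))  # 1.097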

Pay-as-you-go for MapReduce jobs

MaxCompute has charged for MapReduce jobs since December 19, 2017. The following formula is used to calculate the expense of a MapReduce job:
Expense of a MapReduce job = Total billable hours × Unit price for MapReduce computing
The following table lists the unit price for MapReduce computing.
Item                | Unit price
MapReduce computing | USD 0.0690 per hour per job
The following formula is used to calculate the billable hours of a MapReduce job:
Billable hours of a MapReduce job = Execution time in hours × Number of CPU cores that the job calls

For example, if you consume 100 CPU cores to run a MapReduce job for 0.5 hours, the billable hours are calculated as follows: 100 × 0.5 = 50.
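A minimal sketch of the same calculation in Python (the function name is hypothetical; the unit price comes from the table above):

# Sketch: MapReduce pay-as-you-go expense.
MR_USD_PER_BILLABLE_HOUR = 0.0690

def mapreduce_expense(cpu_cores, execution_hours):
    billable_hours = cpu_cores * execution_hours
    return billable_hours * MR_USD_PER_BILLABLE_HOUR

# Example from above: 100 cores for 0.5 hours -> 50 billable hours.
print(round(mapreduce_expense(100, 0.5), 2))  # 3.45 (USD)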

After a MapReduce job is successfully run, MaxCompute calculates the billable hours of the job. The bill for the expenses calculated on the current day is generated on the next day. The expenses are automatically collected from your account balance after the bill is generated.
Note
  • If the MapReduce job fails, no expense is collected.
  • The billable hours do not include the time waiting for execution.
  • If you have purchased the MaxCompute service by using the subscription billing method, MapReduce jobs run on your reserved resources and no additional expenses are charged.

Pay-as-you-go for Spark jobs

MaxCompute has charged for Spark jobs since July 24, 2019. The following formula is used to calculate the expense of a MaxCompute Spark job:
Expense of a Spark job = Total billable hours × Unit price for Spark computing (USD 0.1041 per hour per job)
The following formula is used to calculate the billable hours of a Spark job:
Billable hours of a Spark job = Max[Number of CPU cores × Execution time in hours, Ceiling(Memory size in GB × Execution time in hours / 4)]
Where:
  • In this formula, the number of CPU cores, the memory size in GB, and the execution time in hours are determined by the resources that you specify when you submit the job and by how long the job runs.
  • One billable hour corresponds to 1 CPU core plus 4 GB of memory for one hour.

For example, if you consume 2 CPU cores and 5 GB of memory to run a Spark job for 1 hour, the billable hours are calculated as follows: Max[2 × 1, Ceiling(5 × 1/4)] = 2. If you consume 2 CPU cores and 10 GB of memory to run a Spark job for 1 hour, the billable hours are calculated as follows: Max[2 × 1, Ceiling(10 × 1/4)] = 3.
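A minimal sketch of the same rule in Python (the function name is hypothetical; math.ceil plays the role of Ceiling in the formula above):

import math

# Sketch: Spark pay-as-you-go billable hours and expense.
SPARK_USD_PER_BILLABLE_HOUR = 0.1041

def spark_expense(cpu_cores, memory_gb, execution_hours):
    billable_hours = max(cpu_cores * execution_hours,
                         math.ceil(memory_gb * execution_hours / 4))
    return billable_hours * SPARK_USD_PER_BILLABLE_HOUR

# Examples from above:
print(round(spark_expense(2, 5, 1), 4))   # 2 billable hours -> 0.2082
print(round(spark_expense(2, 10, 1), 4))  # 3 billable hours -> 0.3123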

After a Spark job is completed, MaxCompute calculates the billable hours of the job. The bill for the expenses calculated on the current day is generated on the next day. The expenses are automatically collected from your account balance after the bill is generated.
Note
  • The billable hours do not include the time waiting for execution.
  • The expense of a job may vary depending on the size of resources specified at the time of job submission.
  • If you have purchased the MaxCompute service by using the subscription billing method, Spark jobs run on your reserved resources and no additional expenses are charged.
  • For any questions about the expenses of Spark jobs, submit a ticket to us.