This topic provides answers to some frequently asked questions about billing.
Which projects incur fees?
Pipelines that run in Machine Learning Designer or Studio incur fees. Specifically, the algorithm components of a pipeline incur fees while they perform computations.
Computing resources used by Data Science Workshop (DSW) instances incur fees. We recommend that you stop instances that you no longer need to prevent unnecessary costs.
Deep Learning Containers (DLC) jobs that run in the public resource group incur fees. Jobs that run in dedicated resource groups do not incur additional fees because you have already paid for these resource groups when you purchased them.
Elastic Algorithm Service (EAS) services that run in the public resource group incur fees regardless of whether the services are called. Services that run in pay-as-you-go dedicated resource groups also incur fees because these resource groups are billed based on usage. Services that run in subscription dedicated resource groups do not incur additional fees because you have already paid for these resource groups when you purchased them.
The following items describe the billing rules. For more information, see Billing of EAS.
You are charged for pay-as-you-go dedicated resource groups in the Running state.
Important: When you scale in or scale out a dedicated resource group that is in the Running state, the group enters an intermediate state, Scaling in or Scaling out. Resources that are used while the group is in this intermediate state are also billed.
You are charged for services that are deployed in the public resource group and are in the Running state.
Important: When you scale out a deployed service, the service enters an intermediate state, Pending. Resources that are used while the service is in this intermediate state are also billed.
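If you manage many services, you can check which EAS services are still in the Running state, and therefore still billed, by calling the EAS OpenAPI instead of browsing the console. The following Python sketch assumes the ListServices operation and the alibabacloud_eas20210701 SDK package; the exact package, request, and field names may differ in your SDK version, so verify them against the EAS API reference.
# Sketch: list EAS services and flag the ones that are still in the Running state.
# Assumption: the EAS OpenAPI ListServices operation and the alibabacloud_eas20210701
# Python SDK package; verify the names against the EAS API reference.
from alibabacloud_eas20210701.client import Client
from alibabacloud_eas20210701 import models as eas_models
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",         # replace with your credentials
    access_key_secret="<your-access-key-secret>",
    endpoint="pai-eas.cn-hangzhou.aliyuncs.com",  # replace with the endpoint of your region
)
client = Client(config)

response = client.list_services(eas_models.ListServicesRequest())
for service in response.body.services:            # field names may differ by SDK version
    if service.status == "Running":
        print(f"{service.service_name} is Running and still incurs fees")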
How do I stop a project that is being billed?
Stop a pipeline that is being billed in Machine Learning Designer or Studio.
Log on to the Machine Learning Platform for AI console.
In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace that you want to manage.
In the left-side navigation pane, choose Model Training > Visualized Modeling (Designer).
On the Visualized Modeling (Designer) page, double-click the name of the pipeline that you want to stop to go to the pipeline details page.
Click the stop button to stop the pipeline that is in the Running state.
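If you prefer to terminate runs programmatically, the PAIFlow OpenAPI provides a termination operation for pipeline runs. The following Python sketch is only an illustration and assumes the TerminateRun operation and the alibabacloud_paiflow20210202 SDK package; confirm the operation, package, and parameter names in the API reference before you rely on it.
# Sketch: terminate a Machine Learning Designer pipeline run through the PAIFlow OpenAPI.
# Assumption: the TerminateRun operation and the alibabacloud_paiflow20210202 SDK package;
# verify the names and the method signature against the PAIFlow API reference.
from alibabacloud_paiflow20210202.client import Client
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="paiflow.cn-hangzhou.aliyuncs.com",  # replace with the endpoint of your region
)
client = Client(config)

# <run-id> identifies the running pipeline; it is shown on the run details page.
client.terminate_run("<run-id>")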
Stop a DSW instance.
Log on to the Machine Learning Platform for AI console.
In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace that you want to manage.
In the left-side navigation pane, choose Model Training > Notebook Service (DSW).
On the Interactive Modeling (DSW) page, find the DSW instance that you want to stop and click Stop in the Actions column.
After the instance is stopped, Stopped is displayed in the Status column, and the system stops charging fees for pay-as-you-go instances. Make sure that all DSW instances that you no longer use are in the Stopped state. Otherwise, the system continues to charge fees for them.
Important: We recommend that you create DSW instances in workspaces. This helps you avoid unnecessary costs that are incurred when a misconfigured DSW instance cannot be stopped.
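You can also stop a DSW instance by calling the PAI-DSW OpenAPI, which is convenient when you want to stop idle instances on a schedule. The following Python sketch assumes the StopInstance operation and the alibabacloud_pai_dsw20220101 SDK package; check the DSW API reference for the exact package, request, and method names in your SDK version.
# Sketch: stop a pay-as-you-go DSW instance so that it no longer incurs fees.
# Assumption: the PAI-DSW OpenAPI StopInstance operation and the
# alibabacloud_pai_dsw20220101 Python SDK package; verify the names and the
# method signature against the DSW API reference.
from alibabacloud_pai_dsw20220101.client import Client
from alibabacloud_pai_dsw20220101 import models as dsw_models
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="pai-dsw.cn-hangzhou.aliyuncs.com",  # replace with the endpoint of your region
)
client = Client(config)

# <instance-id> is shown in the instance list of the DSW console.
client.stop_instance("<instance-id>", dsw_models.StopInstanceRequest())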
Stop an EAS service that is being billed.
Stop a service that runs in a pay-as-you-go dedicated resource group:
To stop the service, reduce the number of nodes in the dedicated resource group to 0. Procedure:
On the details page of a workspace, choose Model Deployment > Model Serving (EAS) in the left-side navigation pane.
On the EAS-Online Model Services page, click the Resource Group tab and click the name of the resource group that you want to manage to go to the details page of the resource group.
On the Nodes tab, find the pay-as-you-go nodes and click Delete in the Actions column.
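The same scale-in can be performed through the EAS OpenAPI. The following Python sketch is a rough illustration that assumes a DeleteResourceInstances operation in the alibabacloud_eas20210701 SDK package; the operation, parameter, and package names are assumptions, so confirm them in the EAS API reference before use.
# Sketch: remove pay-as-you-go nodes from a dedicated resource group so that billing stops.
# Assumption: a DeleteResourceInstances operation in the alibabacloud_eas20210701 SDK;
# the request parameter names below are placeholders and may differ in your SDK version.
from alibabacloud_eas20210701.client import Client
from alibabacloud_eas20210701 import models as eas_models
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="pai-eas.cn-hangzhou.aliyuncs.com",  # replace with the endpoint of your region
)
client = Client(config)

# <cluster-id> is the region cluster, <resource-group-id> is the dedicated resource group,
# and the request lists the node(s) to remove, as shown on the Nodes tab.
request = eas_models.DeleteResourceInstancesRequest(instances="<node-instance-name>")
client.delete_resource_instances("<cluster-id>", "<resource-group-id>", request)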
Stop a service that runs in the public resource group:
On the Inference Service tab of the EAS-Online Model Services page, find the model service that you want to stop and click Stop in the Actions column. The model service is stopped, and the system stops billing the resources used by the service.
Important: Before you stop a model service, make sure that it is no longer required. This prevents unnecessary business losses.
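You can also stop a service in the public resource group by calling the EAS OpenAPI. The following Python sketch assumes the StopService operation in the alibabacloud_eas20210701 SDK package; verify the names and the method signature against the EAS API reference.
# Sketch: stop an EAS service that runs in the public resource group so that its
# resources are no longer billed.
# Assumption: the EAS OpenAPI StopService operation and the alibabacloud_eas20210701
# Python SDK package; verify the names against the EAS API reference.
from alibabacloud_eas20210701.client import Client
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="pai-eas.cn-hangzhou.aliyuncs.com",  # replace with the endpoint of your region
)
client = Client(config)

# <cluster-id> is the region in which the service is deployed, for example cn-hangzhou;
# <service-name> is the name shown on the Inference Service tab.
client.stop_service("<cluster-id>", "<service-name>")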
How do I query deductions and details?
On the Bill Details page, you can set filter conditions on the Billing Details tab to view bill details. For more information, see View bills and usage details. On the Billing Details tab, the Product Detail column indicates the module that generated the expense. The following values are related to Machine Learning Platform for AI:
PaiEasPostpay: fees incurred by pay-as-you-go dedicated resource groups in EAS.
Machine Learning Platform for AI: fees incurred by MaxCompute computing resources used for training in Machine Learning Designer or Studio. To view the billing details, click the Billing Details tab and check the Billing Item column. The mapping between the billable items and expenses is as follows:
Product: Machine Learning Platform for AI
Billable item: Usage
Instance ID: text_analysis, data_analysis, data_manipulation, deep_learning, or default
Expense source: fees incurred by pipeline training performed in Machine Learning Designer or Studio
EAS pre-payment for dedicated machine: fees incurred by subscription dedicated resource groups in EAS.
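If you need to pull these records programmatically, the Billing (BSS) OpenAPI returns the same data that is shown on the Billing Details tab. The following Python sketch assumes the QueryInstanceBill operation and the alibabacloud_bssopenapi20171214 SDK package; the product code value and the response structure depend on your account and bill, so check the Billing API reference for the exact names.
# Sketch: query bill details for a billing cycle through the Billing (BSS) OpenAPI.
# Assumption: the QueryInstanceBill operation and the alibabacloud_bssopenapi20171214
# Python SDK package; verify the names and parameters against the Billing API reference.
from alibabacloud_bssopenapi20171214.client import Client
from alibabacloud_bssopenapi20171214 import models as bss_models
from alibabacloud_tea_openapi import models as open_api_models

config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="business.aliyuncs.com",
)
client = Client(config)

request = bss_models.QueryInstanceBillRequest(
    billing_cycle="<YYYY-MM>",        # the billing cycle to query, for example 2023-01
    product_code="<product-code>",    # the product code shown in your bill
)
response = client.query_instance_bill(request)
print(response.body)                  # inspect the returned bill items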
Why are fees deducted after I stop a billable project?
Fees are not deducted immediately after you stop a project. Instead, fees are deducted after a bill is generated for the project, and a delay exists between the time when the resources are used and the time when the bill is generated. For example, fees incurred by resource usage between 10:00 and 11:00 may be billed and deducted several hours later. Therefore, even if you stop the project at 11:00, you may still receive a bill for the project later. Despite the delay, only the fees that were incurred before you stopped the project are deducted.