
What is Platform for AI (PAI)?

Last Updated: Mar 10, 2026

Platform overview

Platform for AI (PAI) is Alibaba Cloud's one-stop AI development platform. It provides comprehensive services for the end-to-end AI development lifecycle, including data annotation, model development, model training, and model deployment. PAI consists of these core components:

| Core component | Description | Scenario | Quick Start |
| --- | --- | --- | --- |
| Data Annotation (iTAG) | Supports annotation for various data types, including images, text, and videos. Also offers fully managed data annotation outsourcing services. | Data annotation | - |
| Data Science Workshop (DSW) | Provides a cloud-based IDE for AI development. Developers familiar with Notebooks or VSCode can start quickly. | AI model development | DSW Quick Start |
| Machine Learning Designer | Provides over 140 built-in algorithm components for visual model building using a low-code, drag-and-drop interface. | Big data + AI model development | Designer Quick Start |
| Deep Learning Containers (DLC) | Creates distributed or single-node training tasks without manually purchasing machines or configuring runtime environments. Aligns with local training script practices. | Distributed model training | DLC Quick Start |
| Elastic Algorithm Service (EAS) | Deploys trained models as online inference services with simple configurations. | Deploy models as service APIs | EAS Quick Start |
| Model Gallery | Integrates DLC and EAS for zero-code training and deployment of open-source models. | Zero-code training and deployment of open-source models | Model Gallery Quick Start |

See each component's documentation to learn more about its features.
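Among the components above, EAS exposes trained models as online inference services that are invoked over HTTP with a service endpoint and an access token. The sketch below builds such a request with the Python standard library; the endpoint URL, token value, and payload shape are hypothetical placeholders, not values from this document — real values come from the service details page after deployment.

```python
import json
import urllib.request

def build_inference_request(endpoint: str, token: str, payload: dict) -> urllib.request.Request:
    """Build an HTTP POST request for an online inference service.

    The endpoint and token are hypothetical placeholders; copy the real
    values from your deployed service's details page.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=endpoint,
        data=body,
        headers={
            "Authorization": token,                # service access token
            "Content-Type": "application/json",    # JSON request body
        },
        method="POST",
    )

# Hypothetical example values -- replace with your own service's endpoint and token.
req = build_inference_request(
    "http://example-endpoint.example.com/api/predict/my_model",
    "MY_SERVICE_TOKEN",
    {"inputs": [[1.0, 2.0, 3.0]]},
)

# To actually call the deployed service:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

Keeping request construction separate from sending makes the call easy to test locally before pointing it at a live service.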

Benefits

End-to-end AI development

  • Supports the full AI lifecycle, from data annotation, model development, model training, and model optimization to model deployment and AI operations and governance.

  • Offers over 140 optimized, built-in algorithm components.

  • Provides core capabilities such as multiple development modes, deep integration with big data engines, multi-framework compatibility, and custom image support.

Support for multiple open-source frameworks

  • The Flink stream-processing framework.

  • Deep learning frameworks such as TensorFlow, PyTorch, Megatron, and DeepSpeed, deeply optimized based on their upstream open-source versions.

  • Industry-standard open-source frameworks such as Spark, PySpark, and MapReduce.
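The components table notes that DLC aligns with local training script practices. Managed launchers typically achieve this by starting the same script on every worker and passing the cluster topology through environment variables; the sketch below reads that topology with sensible single-node defaults. The variable names (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT) follow the common PyTorch launcher convention — whether a given platform injects exactly these names is an assumption here.

```python
import os

def read_dist_env() -> dict:
    """Read distributed-training topology from environment variables.

    The names follow the PyTorch launcher convention (an assumption,
    not taken from this document). The defaults fall back to a
    single-node, single-process run, so the same script also works
    unchanged on a local machine.
    """
    return {
        "rank": int(os.environ.get("RANK", "0")),
        "world_size": int(os.environ.get("WORLD_SIZE", "1")),
        "master_addr": os.environ.get("MASTER_ADDR", "127.0.0.1"),
        "master_port": int(os.environ.get("MASTER_PORT", "29500")),
    }

def shard_indices(n_samples: int, rank: int, world_size: int) -> range:
    """Give each worker a contiguous slice of the dataset;
    the last worker absorbs the remainder."""
    per_worker = n_samples // world_size
    start = rank * per_worker
    end = n_samples if rank == world_size - 1 else start + per_worker
    return range(start, end)

env = read_dist_env()
my_shard = shard_indices(1000, env["rank"], env["world_size"])
```

Because every value has a local default, the script runs identically on a laptop (one worker, full dataset) and on a managed multi-worker launch.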

Industry-leading AI optimization

  • A high-performance computing (HPC) training framework for sparse training scenarios, supporting tens to hundreds of billions of sparse features, hundreds of billions to trillions of samples, and distributed incremental training on up to 1,000 workers.

  • PAI-Blade accelerates many mainstream models, including ResNet-50 and Transformer+LM.

Flexible product delivery

  • Provides fully managed and semi-managed options on the public cloud.

  • Supports deployment in various formats, including AI High-Performance Computing (HPC) clusters and lightweight offerings.

  • Supports periodic scheduling with DataWorks, which separates production and development environments to ensure data security and isolation.

Billing

| Billing method | Description | Scenarios | Applicable components |
| --- | --- | --- | --- |
| Pay-as-you-go | A billing method based on actual usage. | Ideal for short-term or unpredictable workloads, such as test environments, bursty demand, or initial project phases. | Designer, DSW, DLC, EAS |
| Subscription | A prepaid method for purchasing resources on a monthly or yearly basis. | Suitable for long-term, stable workloads. Committing to a fixed duration costs less than pay-as-you-go. | DSW, DLC, EAS |
| Resource plan | A prepaid method for purchasing quota packages for specific resources. | Best for scenarios requiring large volumes of specific resources. Purchasing resource quota packages provides better pricing. | DSW |
| Savings plan | A prepaid method for purchasing specific discount plans. | Ideal for committing to a specific spending amount over a period to receive lower pay-as-you-go rates. | DSW, EAS |
| Pay-by-inference-duration | A pay-as-you-go model that charges only for actual inference duration during service invocation; service deployment itself incurs no cost. Resources scale elastically with request volume. | Ideal for serverless inference workloads with unpredictable or variable demand, effectively handling high-concurrency requests and dynamic loads. | EAS |
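Choosing between pay-as-you-go and subscription usually comes down to expected utilization: a subscription pays off once the instance runs more than a break-even fraction of the billing period. The sketch below computes that break-even point; the prices used are purely illustrative assumptions, not PAI list prices — check the console for actual pricing.

```python
def breakeven_utilization(hourly_rate: float, monthly_fee: float,
                          hours_per_month: float = 730.0) -> float:
    """Fraction of the month at which a subscription costs the same
    as pay-as-you-go.

    hourly_rate and monthly_fee are hypothetical prices used for
    illustration only; they are not PAI list prices.
    """
    return monthly_fee / (hourly_rate * hours_per_month)

# Illustrative numbers: $2.00/hour pay-as-you-go vs. a $730 monthly subscription.
util = breakeven_utilization(hourly_rate=2.0, monthly_fee=730.0)
```

With these example numbers the break-even utilization is 50%: running the instance more than half of the month favors the subscription, while lighter or bursty use favors pay-as-you-go.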

Get started

See the User guide for new users.

FAQ

Q: Why does a PAI DSW instance fail to start or stop, and how do I release it?

For more information, see DSW FAQ - Instance startup and release.

Q: Why do EAS service calls fail?

For more information, see EAS FAQ - Service invocation.