PAI supports use cases for large language model (LLM) training, fine-tuning, deployment, and data processing.
Get started
Model Gallery offers pre-trained LLMs from the open-source community. You can perform model fine-tuning, distillation, compression, evaluation, and deployment without coding.
- Quick start: Deploy, fine-tune, and evaluate the QwQ-32B model
- Quick start: Train, evaluate, compress, and deploy Qwen2.5-Coder models
- Quick start: Fine-tune, evaluate, compress, and deploy a DistilQwen2 model
- Quick start: Data augmentation and model distillation for LLMs
Advanced usage
If Model Gallery does not include the model you need, or if its training and deployment capabilities do not meet your requirements, use Data Science Workshop (DSW) and Deep Learning Containers (DLC) for model fine-tuning and training, then deploy your model in Elastic Algorithm Service (EAS).
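To make the EAS deployment step concrete, a service is typically described in a JSON file and created with the EAS client. The sketch below is illustrative only: the service name, OSS model path, processor name, and resource figures are all placeholder assumptions, not values from this document.

```json
{
  "name": "my_llm_service",
  "model_path": "oss://my-bucket/models/my-llm/",
  "processor": "pytorch_cpu_1.10",
  "metadata": {
    "instance": 1,
    "cpu": 4,
    "memory": 16000
  }
}
```

A service described this way can then be created with the EAS command-line client (for example, `eascmd create service.json`) or through the PAI console.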
| Stage | Use cases |
| --- | --- |
| Training | Fine-tune and train models in DSW or DLC. |
| Deployment | Deploy trained models as online services in EAS. |
PAI-Lingjun intelligent computing service
PAI-Lingjun intelligent computing service is designed for large-scale deep learning scenarios, providing heterogeneous computing resources and an AI engineering platform.
Data processing
Machine Learning Designer integrates advanced algorithms for processing text, video, and image data to improve training data quality. Text processing algorithms support editing, transforming, deduplicating, and filtering low-quality data samples. Video and image processing algorithms support data cleaning, content filtering, metadata extraction, and caption generation. Built-in data processing templates are available and can be extended through custom development.
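As a rough illustration of what text deduplication and quality filtering involve, the sketch below removes normalized duplicates and drops samples that are too short or mostly punctuation. It is a minimal stand-alone example with made-up thresholds, not the Designer implementation.

```python
import re


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical samples compare equal."""
    return re.sub(r"\s+", " ", text.strip().lower())


def clean_corpus(samples, min_len=10, max_symbol_ratio=0.3):
    """Deduplicate and drop low-quality text samples (illustrative thresholds)."""
    seen = set()
    kept = []
    for sample in samples:
        key = normalize(sample)
        if key in seen:
            continue  # duplicate after normalization
        if len(key) < min_len:
            continue  # too short to be a useful training sample
        symbols = sum(1 for c in sample if not c.isalnum() and not c.isspace())
        if symbols / max(len(sample), 1) > max_symbol_ratio:
            continue  # mostly punctuation or markup: likely noise
        seen.add(key)
        kept.append(sample)
    return kept


corpus = [
    "Hello world, this is a sample.",
    "hello   world, this is a sample.",  # duplicate after normalization
    "!!!???!!!",                          # noise
    "short",                              # too short
]
print(clean_corpus(corpus))  # → ['Hello world, this is a sample.']
```

In practice a production pipeline would add fuzzy deduplication (for example, MinHash) and model-based quality scoring, but the filtering structure is the same.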