
Platform For AI: Model deployment and training

Last Updated: Mar 11, 2026

Select, deploy, and fine-tune pre-trained models from Model Gallery using domain filters, resource configurations, and training parameters.

Select a model

Model Gallery provides models for various business scenarios. Consider these factors when selecting:

  • Domain and task: Filter by application domain and task requirements.

  • Pre-training dataset: Models perform better when pre-training datasets match your use case. Check dataset details on each model's product page.

  • Model size: Larger models typically offer better performance but require more resources for deployment and fine-tuning.

To access Model Gallery:

  1. Navigate to Model Gallery.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces and select your workspace.

    3. In the left-side navigation pane, click QuickStart > Model Gallery.

  2. Select a model matching your requirements.

After selecting a model, you can deploy it, test inference, and debug online. See Deploy a model and Fine-tune a model.

Deploy a model

For a deployment example using Qwen3-0.6B, see Model Gallery Getting Started - Model Deployment.

Fine-tune a model

For a fine-tuning example using Qwen3-0.6B, see Model Gallery Getting Started - Model Fine-tuning.

The fine-tuning configuration includes the following parameters:

Fine-tuning parameters

Training Mode

Supported methods:

  • Supervised fine-tuning (SFT): Fine-tunes LLM parameters using input-output pairs.

  • Direct preference optimization (DPO): Aligns language models with human preferences, achieving the same goal as Reinforcement Learning from Human Feedback (RLHF).

Both methods support full-parameter fine-tuning, LoRA, and QLoRA.
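The two training modes consume differently shaped data: SFT uses input-output pairs, while DPO uses a prompt with a preferred and a rejected response. As a sketch, serialized in the common one-record-per-line (JSONL) layout — the field names below are illustrative, and the exact schema comes from each model's dataset documentation:

```python
import json

# Illustrative record shapes; the exact field names are defined by each
# model's dataset documentation in Model Gallery, not fixed here.
sft_record = {  # SFT: one input-output pair per line
    "instruction": "Translate to French: Good morning.",
    "output": "Bonjour.",
}
dpo_record = {  # DPO: a prompt with a preferred and a rejected response
    "prompt": "Summarize: The cat sat on the mat.",
    "chosen": "A cat sat on a mat.",
    "rejected": "Cats are mammals.",
}

# Training data is typically one JSON object per line (JSONL).
sft_line = json.dumps(sft_record, ensure_ascii=False)
dpo_line = json.dumps(dpo_record, ensure_ascii=False)
print(sft_line)
print(dpo_line)
```

Whichever schema your model specifies, each line must be a complete, self-contained JSON object.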

Job Configuration

Task name

A default name is provided. Modify it as needed.

Maximum running time

The maximum runtime of the task. Tasks exceeding this duration are automatically stopped. Default: Unlimited.

Dataset Configuration

Training dataset

A default training dataset is provided. To use a custom dataset, format your data as described in the model's documentation and upload it using one of these methods:

  • OSS file or directory

Click the selector icon to choose an OSS path. In the Select OSS folder or file dialog box, select an existing file or click Upload file to upload one.

  • Custom Dataset

Use a cloud storage dataset (for example, OSS). Click the selector icon to choose an existing dataset. To create a dataset, see Create and manage datasets.

Validation dataset

Click Add validation dataset. Its configuration is identical to that of the training dataset.
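Before uploading a custom dataset, it can help to sanity-check the JSONL file locally so formatting problems surface before a training job fails. A minimal sketch, assuming an SFT schema with instruction/output fields — the actual required fields come from the model's documentation:

```python
import io
import json

# Illustrative SFT schema; replace with the fields your model's docs require.
REQUIRED_FIELDS = {"instruction", "output"}

def validate_jsonl(stream, required=REQUIRED_FIELDS):
    """Return a list of (line_number, problem) for bad JSON or missing fields."""
    problems = []
    for n, line in enumerate(stream, start=1):
        line = line.strip()
        if not line:  # skip blank lines
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError as e:
            problems.append((n, f"invalid JSON: {e.msg}"))
            continue
        missing = required - set(record)
        if missing:
            problems.append((n, f"missing fields: {sorted(missing)}"))
    return problems

# Example with one valid record, one incomplete record, one broken line.
data = io.StringIO(
    '{"instruction": "Say hi", "output": "Hi"}\n'
    '{"instruction": "No output here"}\n'
    'not json at all\n'
)
print(validate_jsonl(data))
```

An empty result means every line parsed and carried the required fields; anything else lists the offending line numbers.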

Output Configuration

The cloud storage path where trained models and TensorBoard logs are saved.

Note

If a default OSS path is configured in the workspace settings, it is populated automatically. See Manage workspaces for details.
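If you later work with the output path in code (for example, to fetch the trained model), an oss:// URI splits cleanly into a bucket name and an object key. A sketch with placeholder names:

```python
from urllib.parse import urlparse

def split_oss_uri(uri):
    """Split an oss://bucket/key URI into (bucket, key)."""
    parsed = urlparse(uri)
    if parsed.scheme != "oss":
        raise ValueError(f"not an OSS URI: {uri}")
    return parsed.netloc, parsed.path.lstrip("/")

# Placeholder bucket and path, not a real Model Gallery location.
bucket, key = split_oss_uri("oss://example-bucket/model-gallery/outputs/")
print(bucket, key)
```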

Computing Resources

Resource Type

Supports General Computing and Lingjun Intelligent Computing.

Source

  • Public Resources:

    • Billing: Pay-as-you-go.

    • Use case: Suitable for small-scale, non-urgent tasks. May experience queuing delays.

  • Resource Quota: General Computing or Lingjun resources.

    • Billing: Subscription.

Use case: Suitable for high-availability tasks with large workloads.

  • Preemptible Resources:

    • Billing: Pay-as-you-go.

    • Use case: Cost-effective option with discounted pricing.

    • Limits: Resources may be unavailable or reclaimed without guarantee. See Use preemptible jobs.

Hyperparameters

Hyperparameters vary by model. Use the default values or customize them as needed.

Note

Configure hyperparameters according to your specific model's requirements.

Billing

Model Gallery itself is free. You are charged for the EAS and DLC resources used during deployment and training. See Billing for Elastic Algorithm Service (EAS) and Billing for Deep Learning Containers (DLC).

References

Model Gallery FAQ