Platform for AI: AutoML

Last Updated: Mar 06, 2026

PAI provides AutoML, which searches for optimal hyperparameter combinations based on the search policies that you specify. Use AutoML to improve model tuning efficiency.

Concepts

  • Hyperparameter: A parameter that is configured before model training starts and controls how the model is trained. After configuration, hyperparameters remain unchanged during training. In contrast, model parameters are continuously updated and optimized during training.

  • Hyperparameter optimization (HPO): The manual or automatic process of finding optimal hyperparameters. In this topic, HPO refers to the AutoML service that automatically searches for and tunes hyperparameters. HPO helps you obtain optimal hyperparameters and improve model performance efficiently, which allows algorithm developers to focus on modeling.

  • Search space: The range of candidate values for each hyperparameter. AutoML searches for the optimal hyperparameter combination within this range.

  • Experiment: A tuning task created to search the search space for the optimal hyperparameter combination of a model.

  • Trial: A single run that trains and evaluates a model by using a specific hyperparameter combination. An experiment runs multiple trials and compares their results to find the optimal hyperparameter combination. For more information, see Overview.

  • Job type: Resources and environment used for model training in a trial. Valid values: Deep Learning Containers (DLC) and MaxCompute.
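To make these concepts concrete, the following sketch expresses a search space as a plain Python dictionary and enumerates the combinations that an experiment's trials would cover. The dictionary format and hyperparameter names are illustrative assumptions, not the actual PAI AutoML configuration syntax:

```python
from itertools import product

# Hypothetical search space: each hyperparameter maps to its candidate values.
# This is an illustrative sketch, not the actual PAI AutoML configuration format.
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],  # floating-point hyperparameter
    "batch_size": [32, 64, 128],          # integer hyperparameter
}

# Each trial uses one combination from the search space; an experiment
# runs trials over these combinations and keeps the best-performing one.
combinations = [dict(zip(search_space, values))
                for values in product(*search_space.values())]
print(len(combinations))  # 9 combinations (3 x 3)
```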

Background information

In machine learning, hyperparameters are parameters that control model training. You configure them before training starts, and they remain unchanged during model training.

HPO is the process of finding optimal hyperparameters. For a model with multiple hyperparameters, the hyperparameters can be viewed as a multi-dimensional vector. HPO searches the entire range of vector values for the specific value that provides optimal model performance, such as the minimum value of the loss function.

For example, assume that a model has two hyperparameters, A and B. The possible values of A are a, b, and c, and the possible values of B are d and e. The model therefore has six (3 × 2) hyperparameter combinations. HPO finds the combination of A and B that allows the model to achieve optimal performance. A straightforward way to obtain the optimal combination is to train the model on the same dataset with each of the six combinations and then compare the model performance.
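The exhaustive comparison described above is a grid search. A minimal sketch in Python, using a hypothetical lookup table in place of real model training and evaluation:

```python
from itertools import product

# Candidate values of the two hyperparameters from the example above.
values_a = ["a", "b", "c"]
values_b = ["d", "e"]

def evaluate(a, b):
    """Hypothetical stand-in for training and evaluating a model with
    hyperparameters (a, b); returns a metric to maximize, such as accuracy."""
    scores = {("a", "d"): 0.71, ("a", "e"): 0.74,
              ("b", "d"): 0.82, ("b", "e"): 0.78,
              ("c", "d"): 0.69, ("c", "e"): 0.80}
    return scores[(a, b)]

# Grid search: evaluate all six combinations and keep the best one.
best = max(product(values_a, values_b), key=lambda combo: evaluate(*combo))
print(best)  # ('b', 'd') achieves the highest score in this toy example
```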

HPO in AutoML

Hyperparameter fine-tuning is complex because a model can have numerous hyperparameters with various data types and value ranges. For example, a model may have multiple hyperparameters, some of which are integers and others floating-point numbers. Manual hyperparameter tuning requires substantial time and computing resources, which makes an automated system necessary. AutoML's HPO feature helps you automatically fine-tune various hyperparameters.

AutoML enables simple, efficient, and accurate hyperparameter fine-tuning. Benefits:

  • Simplified fine-tuning: Greatly simplifies hyperparameter fine-tuning and saves time by automating the process.

  • Improved model quality: Integrates multiple PAI algorithms to quickly find optimal hyperparameter combinations, helping you train models more accurately and efficiently.

  • Reduced computing resources: Evaluates model performance during training to determine whether to terminate the current trial early and evaluate another hyperparameter combination. This way, AutoML obtains optimal hyperparameters without evaluating all combinations, which saves computing resources.

  • Flexible use of computing power: Allows convenient and flexible use of DLC and MaxCompute resources.
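The early-termination behavior described in the benefits above can be sketched as follows. The trial loop, metric curve, and threshold here are illustrative assumptions rather than the actual AutoML implementation; the point is that trials whose intermediate metric looks poor are stopped before they consume their full training budget:

```python
import random

def train_step(params, step):
    """Hypothetical validation accuracy after one training step. A real trial
    would train a model; here, accuracy grows toward a ceiling determined by
    the quality of the hyperparameter combination."""
    ceiling = 0.5 + 0.4 * params["quality"]
    return ceiling * (1 - 0.5 ** (step + 1))

def run_trial(params, max_steps=10, early_stop_threshold=0.6):
    """Run one trial; terminate early if the intermediate metric is too low
    to justify further training."""
    acc = 0.0
    for step in range(max_steps):
        acc = train_step(params, step)
        if step >= 2 and acc < early_stop_threshold:
            return acc, step + 1  # terminated early, saving compute
    return acc, max_steps

random.seed(0)
trials = [{"quality": random.random()} for _ in range(5)]  # 5 random combinations
results = [run_trial(p) for p in trials]
best_acc = max(acc for acc, _ in results)
total_steps = sum(steps for _, steps in results)
# Poor combinations stop after 3 steps instead of 10, reducing total compute.
print(round(best_acc, 3), total_steps)
```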

Scenarios

AutoML is suitable for all hyperparameter fine-tuning scenarios in machine learning. Common scenarios:

  • Binary classification tasks, such as determining whether a user is a paying user.

  • Regression tasks, such as estimating the payment amount a user makes within seven days.

  • Clustering tasks, such as determining the number of branches of a cosmetic brand in a city.

  • Recommendation tasks, such as fine-tuning ranking and retrieval models, or improving area under curve (AUC) metrics.

  • Deep learning tasks, such as improving the accuracy of image multi-classification and video multi-classification.

Reference

  • Overview

    (Recommended) Describes how AutoML works and the relationship between experiments, trials, and training tasks. Helps you become familiar with the concepts and facilitates configuration.

  • Create an experiment

    Describes how to create an experiment in the PAI console and configure key parameters.

  • Use cases

    Provides use cases of using AutoML to fine-tune hyperparameters.