Platform For AI:MLP Regression (Training)

Last Updated: Jan 03, 2025

Multilayer perceptron (MLP) regression is a neural network-based regression algorithm that you can use to solve nonlinear regression problems. MLP regression maps input features to outputs through multiple hidden layers and can capture complex patterns and relationships. The training process of MLP regression consists of forward propagation, loss calculation, backward propagation, and parameter updates. This process allows the model to learn and optimize its parameters so that it can accurately predict outputs.
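The training cycle described above (forward propagation, loss calculation, backward propagation, and parameter update) can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (ReLU hidden layers, full-batch SGD, MSE loss), not the component's implementation; names such as init_mlp and train_step are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden_sizes, out_dim=1):
    """Create (weight, bias) pairs for the hidden layers and the output layer."""
    sizes = [in_dim] + list(hidden_sizes) + [out_dim]
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward propagation; keeps every activation for the backward pass."""
    acts = [x]
    for i, (w, b) in enumerate(params):
        z = acts[-1] @ w + b
        # ReLU on hidden layers, linear output layer
        acts.append(np.maximum(z, 0.0) if i < len(params) - 1 else z)
    return acts

def train_step(params, x, y, lr=0.01):
    """One iteration: forward propagation, MSE loss, backpropagation, SGD update."""
    acts = forward(params, x)
    loss = float(np.mean((acts[-1] - y) ** 2))   # MSE loss
    grad = 2.0 * (acts[-1] - y) / len(x)         # dLoss / dOutput
    for i in reversed(range(len(params))):
        w, b = params[i]
        if i < len(params) - 1:
            grad = grad * (acts[i + 1] > 0)      # ReLU derivative
        gw, gb = acts[i].T @ grad, grad.sum(axis=0)
        grad = grad @ w.T                        # gradient for the layer below
        params[i] = (w - lr * gw, b - lr * gb)   # SGD parameter update
    return loss

# Fit a toy nonlinear target; the hidden sizes echo the component's
# List of Hidden Layer Size parameter.
x = rng.normal(size=(128, 3))
y = np.sin(x).sum(axis=1, keepdims=True)
params = init_mlp(3, [64, 32])
losses = [train_step(params, x, y, lr=0.01) for _ in range(300)]
```

Because the output layer is linear and the loss is MSE, repeating this step drives the loss down on the toy data; Adam (the component's default optimizer) would replace the plain SGD update in the last line of the loop.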

Supported computing resources

Deep Learning Containers (DLC)

Inputs and outputs

Input ports

  • You can use the Read File Data component to read training data files from Object Storage Service (OSS) paths.

  • You can configure the Train Data Oss path parameter of the MLP Regression (Training) component to select training data files.

Output port

You can save trained models to the path specified by the Output Model Oss Dir parameter of the MLP Regression (Training) component.

Configure the component

On the details page of a pipeline in Machine Learning Designer, add the MLP Regression (Training) component to the pipeline and configure the following parameters.

Field Settings tab

  • Train Data Oss path
    Required: No. Default value: None.
    If no upstream component provides OSS data, you must select a training data file. Example: train_data.csv. The .csv file that you select must contain only numerical values and no header row. The last column stores the target values used for training, and the other columns store features.

  • Output Model Oss Dir
    Required: Yes. Default value: None.
    The OSS path in which the trained model is saved.

  • Pretrained Model Oss Path
    Required: No. Default value: None.
    The path of the pre-trained model. If you leave this parameter empty, no pre-trained model is loaded.
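The training-file layout that Train Data Oss path expects (headerless, all-numeric, target in the last column) can be illustrated with a short script. The file name train_data.csv follows the example above; the sample values are invented for illustration.

```python
import csv

# Two sample rows: three feature columns, the regression target in the last column.
rows = [
    [0.12, 3.4, 5.6, 1.0],
    [0.98, 1.2, 7.7, 0.0],
]
with open("train_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)          # no header row is written

# Reading it back: split the feature columns from the target column.
with open("train_data.csv") as f:
    data = [list(map(float, r)) for r in csv.reader(f)]
features = [r[:-1] for r in data]          # all columns except the last
targets = [r[-1] for r in data]            # last column: training targets
```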

Parameter Settings tab

  • MLP Layer Num
    Required: Yes. Default value: 3.
    The number of MLP layers, excluding the output layer.

  • List of Hidden Layer Size
    Required: Yes. Default value: 64,32,16.
    The number of output channels of each hidden layer. Separate multiple numbers with commas (,). If you enter a single number, it is used as the number of output channels for all hidden layers.

  • List of Dropout Ratio
    Required: Yes. Default value: 0.5.
    The dropout rate of each dropout layer. Separate multiple dropout rates with commas (,). If you enter a single dropout rate, it is applied to all dropout layers.

  • Training Epoch
    Required: Yes. Default value: 100.
    The total number of training epochs.

  • Learning Rate
    Required: Yes. Default value: 0.01.
    The learning rate.

  • Training Batchsize
    Required: Yes. Default value: 32.
    The number of training samples used in each iteration.

  • Model Save Epoch
    Required: Yes. Default value: 10.
    The interval, in epochs, at which the model is saved.

  • Validation Epoch
    Required: Yes. Default value: 5.
    The interval, in epochs, at which the model is evaluated on the validation set.

  • Optimizer Type
    Required: Yes. Default value: Adam.
    The algorithm used to update model parameters, such as weights and biases. Valid values: Adam and SGD.

  • Loss Type
    Required: Yes. Default value: MSE.
    The loss function used to measure the difference between the predicted values and the actual values. Valid values: MSE and L1.
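The broadcast rule shared by List of Hidden Layer Size and List of Dropout Ratio (a comma-separated list gives one value per layer; a single value applies to every layer) can be sketched as follows. expand_per_layer is a hypothetical helper written for illustration, not part of the component.

```python
def expand_per_layer(setting: str, num_layers: int) -> list[float]:
    """Parse a comma-separated per-layer setting such as "64,32,16" or "0.5".

    A single value is broadcast to all layers; otherwise the list length
    must match the number of layers. Values are returned as floats, which
    also covers integer settings such as hidden sizes.
    """
    values = [float(v) for v in setting.split(",")]
    if len(values) == 1:
        values = values * num_layers       # broadcast one value to every layer
    if len(values) != num_layers:
        raise ValueError("expected 1 value or one value per layer")
    return values

print(expand_per_layer("64,32,16", 3))     # → [64.0, 32.0, 16.0]
print(expand_per_layer("0.5", 3))          # → [0.5, 0.5, 0.5]
```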

Execution Tuning tab

  • Select Resource Group
    Required: No. Default value: None.
    Public Resource Group: the instance type (CPU or GPU) and the virtual private cloud (VPC) that you want to use.
    Dedicated resource group: the number of CPU cores, the memory size, the shared memory size, and the number of GPUs that you want to use.

  • Maximum Running Duration (seconds)
    Required: No. Default value: None.
    The maximum period of time for which the component can run. If this period is exceeded, the job is terminated.