
Platform For AI: CreatePipelineRun

Last Updated: Sep 15, 2025

Creates a pipeline job.

Debugging

You can run this operation directly in OpenAPI Explorer, which saves you the trouble of calculating signatures. After the call succeeds, OpenAPI Explorer can automatically generate SDK code samples.

Authorization information

The following table shows the authorization information corresponding to this API operation. You can use the authorization information in the Action element of a RAM policy to grant a RAM user or RAM role the permissions to call this operation. Description of the columns:

  • Operation: the value that you can use in the Action element to specify the operation on a resource.
  • Access level: the access level of each operation. The levels are read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or the RAM role to perform the operation. Take note of the following items:
    • Required resource types are marked with an asterisk (*).
    • If the permissions cannot be granted at the resource level, All Resources is used in the Resource type column of the operation.
  • Condition Key: the condition key that is defined by the cloud service.
  • Associated operation: other operations that the RAM user or RAM role must be authorized to perform before this operation can complete.
Operation | Access level | Resource type | Condition key | Associated operation
paiflow:CreatePipelineRun | create | *All Resources (*) | none | none
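Based on the table above, a RAM policy that grants this operation might look like the following sketch. Because the operation cannot be authorized at the resource level, Resource is set to *:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "paiflow:CreatePipelineRun",
      "Resource": "*"
    }
  ]
}
```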

Request syntax

POST /api/v1/pipelineruns HTTP/1.1

Request parameters

  • body (object, optional): The pipeline job.
    • PipelineId (string, optional): The pipeline ID. You must specify either PipelineId or PipelineManifest. Example: flow-rer7y***
    • Name (string, optional): The name of the pipeline job. If you leave this parameter empty, the system automatically generates a name. Example: testName
    • PipelineManifest (string, optional): The pipeline definition. For more information, see the sample pipeline definition below. You must specify either PipelineId or PipelineManifest. Example: apiVersion: "core/v1"*********
    • Arguments (string, optional): The parameters of the pipeline job. Example:

        arguments:
          parameters:
          - name: "execution_maxcompute"
            value:
              endpoint: "http://service***"
              odpsProject: "pai***"

    • NoConfirmRequired (boolean, optional): Specifies whether to start the pipeline job immediately. Valid values:
      • true (default): creates and starts the pipeline job.
      • false: creates the pipeline job but does not start it.
      Example: true
    • WorkspaceId (string, required): The workspace ID. Example: 84***
    • SourceType (string, optional): The type of the pipeline job source. Valid values: UNKNOWN (default), SDK, DESIGNER, M6. Example: UNKNOWN
    • SourceId (string, optional): The source ID. Example: experiment-ybpy***
    • Options (string, optional): The options used to create the pipeline job, in JSON format. Example: {"mlflow":{"experimentId":"exp-1jdk***"}}
    • Accessibility (string, optional): The accessibility of the pipeline job. Valid values: PUBLIC (default), PRIVATE. Example: PUBLIC
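To illustrate the constraints above (WorkspaceId is required, and either PipelineId or PipelineManifest must be set), here is a minimal sketch in Python. The helper name is hypothetical, and it only assembles the request body; a real call must also be signed, which the Alibaba Cloud SDKs and OpenAPI Explorer handle for you.

```python
import json

def build_create_pipeline_run_body(workspace_id, pipeline_id=None,
                                   pipeline_manifest=None, name=None,
                                   no_confirm_required=True):
    """Hypothetical helper: assembles the JSON body for
    POST /api/v1/pipelineruns. Signing is not handled here."""
    if not workspace_id:
        raise ValueError("WorkspaceId is required")
    # The API requires PipelineId or PipelineManifest.
    if pipeline_id is None and pipeline_manifest is None:
        raise ValueError("Specify PipelineId or PipelineManifest")
    body = {
        "WorkspaceId": workspace_id,
        "NoConfirmRequired": no_confirm_required,
    }
    if pipeline_id is not None:
        body["PipelineId"] = pipeline_id
    if pipeline_manifest is not None:
        body["PipelineManifest"] = pipeline_manifest
    if name is not None:
        body["Name"] = name
    return body

# Example values taken from the parameter table above.
body = build_create_pipeline_run_body("84***", pipeline_id="flow-rer7y***",
                                      name="testName")
print(json.dumps(body))
```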

Sample pipeline definition: the pipeline consists of a Read Table node named data_source and a Data Type Conversion node named type_transform.

apiVersion: "core/v1"
metadata:
  provider: "1557702098******"
  version: "v1"
  identifier: "my_pipeline"
  name: "source-transform"
spec:
  inputs:
    parameters:
    - name: "execution_maxcompute"
      type: "Map"
  pipelines:
  - apiVersion: "core/v1"
    metadata:
      provider: "pai"
      version: "v1"
      identifier: "data_source"
      name: "data_source"
      displayName: "Read-Table-1"
    spec:
      arguments:
        parameters:
        - name: "inputTableName"
          value: "pai_online_project.wumai_data"
        - name: "partition"
          value: "20220101"
        - name: "execution"
          from: "{{inputs.parameters.execution_maxcompute}}"
  - apiVersion: "core/v1"
    metadata:
      provider: "pai"
      version: "v1"
      identifier: "type_transform"
      name: "type_transform"
      displayName: "Data Type Conversion-1"
    spec:
      arguments:
        artifacts:
        - name: "inputTable"
          from: "{{pipelines.data_source.outputs.artifacts.outputTable}}"
        parameters:
        - name: "cols_to_double"
          value: "time,hour,pm2,pm10,so2,co,no2"
        - name: "execution"
          from: "{{inputs.parameters.execution_maxcompute}}"
      dependencies:
      - "data_source"
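The dependencies field above is what orders execution: type_transform declares a dependency on data_source (and also consumes its outputTable artifact). A minimal sketch of how such a dependency list resolves into a run order, with the two node names hard-coded from the sample rather than parsed from the YAML:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each node maps to the nodes listed in its "dependencies" field.
deps = {
    "data_source": [],
    "type_transform": ["data_source"],
}

# Nodes with no unmet dependencies come first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # data_source runs before type_transform
```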

Response parameters

The response is an object with the following parameters:

  • RequestId (string): The request ID. Example: DA869D1B-035A-43B2-ACC1-C56681BD9FAA
  • PipelineRunId (string): The ID of the pipeline job. Example: flow-rbvg***

Examples

Sample success responses

JSON format

{
  "RequestId": "DA869D1B-035A-43B2-ACC1-C56681BD9FAA",
  "PipelineRunId": "flow-rbvg***"
}
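A client typically keeps the returned PipelineRunId to poll or manage the job later. A minimal sketch of extracting it from the sample response above:

```python
import json

# Sample response body, copied verbatim from the example above.
response_text = '''{
  "RequestId": "DA869D1B-035A-43B2-ACC1-C56681BD9FAA",
  "PipelineRunId": "flow-rbvg***"
}'''

resp = json.loads(response_text)
run_id = resp["PipelineRunId"]
print(run_id)  # flow-rbvg***
```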

Error codes

For a list of error codes, see Service error codes.

Change history

Change time | Summary of changes
2024-07-24 | The internal configuration of the API was changed, but calls are not affected.
2022-06-16 | The internal configuration of the API was changed, but calls are not affected.
2022-06-14 | The operation was added.