
Alibaba Cloud DevOps: Pipeline jobs

Last Updated:Sep 10, 2025

A pipeline job consists of multiple steps that share a workspace to complete a specific task. A pipeline job can also call a component to run a specific task. Component tasks support operations such as retries and skips.

Examples

  • The following examples show the configuration for adding multiple steps to a task. The second example additionally specifies a container environment:

    stages:
      build_stage:
        name: Build stage
        jobs:
          build_job: 
            name: Build task
            runsOn: public/cn-beijing
            steps:                                # Use steps to configure task steps
              build_step:                        
                step: JavaBuild                   
                name: Java build                     
                with:                            
                  ......
              upload_step:
                step: ArtifactUpload
                name: Upload build output
                with:
                  ......
    
    stages:
      build_stage:
        name: Build stage
        jobs:
          build_job: 
            name: Build task
            runsOn:
              group: public/ap-southeast-1
              container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
            steps:                                # Use steps to configure task steps
              setup_java_step:
                name: "Set up Java environment"
                step: SetupJava
                with:
                  jdkVersion: "1.8"
                  mavenVersion: "3.5.2"
              command_step:
                name: "Run command"
                step: Command
                with:
                  run: |
                    mvn -B clean package -Dmaven.test.skip=true -Dautoconfig.skip
              upload_artifact_step:
                name: "Upload build output"
                step: ArtifactUpload
                with:
                  uploadType: flowPublic
                  artifact: "Artifacts_${PIPELINE_ID}"
                  filePath:
                    - target/
                    
  • The following example shows the configuration for calling a component in a task:

    stages:
      build_stage:
        name: Build stage
        jobs:
          deploy_job:
            name: Host group deployment task
            component: VMDeploy                # Use component to configure the task
            with:                              
              artifact: $[stages.build_stage.build_job.upload_artifact_step.artifacts.default]
              ......

Detailed explanation

stages.<stage_id>.jobs

Defines a pipeline job. A pipeline job can consist of multiple steps or a call to a component.

stages.<stage_id>.jobs.<job_id>

Required. The unique ID of the pipeline job. The job_id can contain only letters, numbers, and underscores (_), and must start with a letter. The ID can be up to 64 characters long.

stages.<stage_id>.jobs.<job_id>.name

The display name of the pipeline job. If you do not specify this parameter, the value of job_id is used. The name can be up to 64 characters long.

stages.<stage_id>.jobs.<job_id>.runsOn

Optional. The environment in which the pipeline job runs. You can use the public Kubernetes (K8s) cluster environment provided by Apsara DevOps or a private host build cluster. Two environment types are supported: the Specified container environment and the Default VM environment.

  • Specified container environment: Starts a specified container on the build machine to run the build in a single-container environment. The following example shows the syntax:

    jobs:
      my_job:
        name: My task
        runsOn:
          group: public/ap-southeast-1    # The specified container environment currently supports only Apsara DevOps public build clusters.
          container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest    # A public image address that can be accessed from the Internet. For more information about official Apsara DevOps system images, see https://atomgit.com/flow-steps/system_images/blob/main/README_INTL.md

    | Build cluster | YAML identifier | Description |
    | --- | --- | --- |
    | Apsara DevOps Singapore build cluster | group: public/ap-southeast-1 | The public K8s cluster that Apsara DevOps provides in Singapore. This is the default cluster if runsOn is not specified. |
    | Private build cluster | group: private/<private_build_cluster_ID> | A private host cluster that is added to an enterprise through a private build cluster. |

  • Default VM environment: Runs steps directly on the host or VM of the build cluster. The following is an example:

    jobs:
      my_job:
        name: My task
        runsOn:
          group: private/<private_build_cluster_ID>   # Only private build clusters are supported.
          labels: windows, amd64                      # Specifies the operating system and architecture for scheduling. If not specified, the task is randomly scheduled to a machine in the cluster.
          vm: true                                    # Specifies the VM build environment.

    Private build clusters support Linux, Windows, and macOS machines. The following table describes the supported architectures and build environments for each operating system.

    | Operating system | Architecture | labels | Description |
    | --- | --- | --- | --- |
    | Linux | amd64 | linux,amd64 | Supports the Default environment and the Default VM environment. |
    | Linux | arm64 | linux,arm64 | Supports only the Default VM environment. You must set vm: true. |
    | Windows | amd64 | windows,amd64 | Supports only the Default VM environment. You must set vm: true. |
    | Windows | arm64 | windows,arm64 | Supports only the Default VM environment. You must set vm: true. |
    | macOS | amd64 | darwin,amd64 | Supports only the Default VM environment. You must set vm: true. |
    | macOS | arm64 | darwin,arm64 | Supports only the Default VM environment. You must set vm: true. |
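    Per the table above, arm64 machines require the Default VM environment. The following is a minimal sketch of such a job, assuming a private cluster that contains a Linux arm64 machine (the cluster ID is a placeholder, and the job ID is illustrative):

    jobs:
      arm_build_job:
        name: ARM64 build task
        runsOn:
          group: private/<private_build_cluster_ID>   # Placeholder for your private build cluster ID.
          labels: linux, arm64                        # Schedule onto a Linux arm64 machine.
          vm: true                                    # Required: arm64 machines support only the Default VM environment.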

stages.<stage_id>.jobs.<job_id>.runsOn.instanceType

Optional. The build environment specification. Apsara DevOps automatically allocates the DEFAULT specification based on the steps configured in the job. For more information about the default specification, see the documentation at https://www.alibabacloud.com/help/doc-detail/201868.html. You can also specify a build environment specification. Valid values are SMALL_1C2G, MEDIUM_2C4G, LARGE_4C8G, and XLARGE_8C16G.

Example:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
      container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
      instanceType: LARGE_4C8G    # Specifies the build environment specification.

stages.<stage_id>.jobs.<job_id>.timeoutMinutes

Optional. The default timeout period for a job is 240 minutes. You can set the timeout period to a value from 1 to 1,440 minutes.

Example:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
      container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
    timeoutMinutes: 60		# The job times out 60 minutes after it starts.

stages.<stage_id>.jobs.<job_id>.debugPolicy and stages.<stage_id>.jobs.<job_id>.debugRetentionMinutes

Optional. If you specify these parameters, you can retain the job execution environment after the job is complete and log on to the environment to debug the job.

These parameters can be used only with the Specified container environment.

You must specify both parameters or neither.

Valid values for debugPolicy are:

  1. onFailure: The execution environment is retained only if the job fails. The environment is not retained if the job is successful or fails only because of a redline check.

  2. always: The build environment is retained regardless of whether the job succeeds or fails.

debugRetentionMinutes specifies the retention period in minutes. The value must be an integer from 1 to 240.

Example:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
      container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
    debugPolicy: always
    debugRetentionMinutes: 5

stages.<stage_id>.jobs.<job_id>.needs

Optional. By default, all jobs in a stage run in parallel. If dependencies exist between jobs, you can use needs to define the dependencies between jobs in the stage. Note the following:

  • needs supports dependencies between jobs across different stages.

  • Ensure that there is a clear order between dependent jobs. Avoid circular dependencies. For example, do not create a dependency where Job A depends on Job B, Job B depends on Job C, and Job C depends on Job A.

Specify the <job_id> of the dependent job. The following is an example:

jobs:
  test_job:
    name: Test job
  build_job:
    name: Build job
    needs: test_job
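Because needs also supports dependencies across stages, a job can reference a job in an earlier stage by its <job_id>. A sketch under that assumption (the stage and job IDs below are illustrative):

stages:
  test_stage:
    name: Test stage
    jobs:
      test_job:
        name: Test job
  build_stage:
    name: Build stage
    jobs:
      build_job:
        name: Build job
        needs: test_job    # Cross-stage dependency, referenced by job_id.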

stages.<stage_id>.jobs.<job_id>.driven

Optional. The default value is auto, which means the job runs automatically. You can use driven to set the trigger method for the job. The following methods are supported:

  • auto: The job runs automatically.

  • manual: The job must be manually confirmed before it runs.

Example:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
      container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
    driven: manual		# The job must be manually confirmed before it runs.
    

stages.<stage_id>.jobs.<job_id>.condition

Optional. By default, a job runs only after all its preceding dependent jobs are successful. You can use condition to specify the conditions that must be met for the job to run. The condition is specified as a function expression. The following is an example:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
      container: build-steps-public-registry.ap-southeast-1.cr.aliyuncs.com/build-steps/alinux3:latest
    condition: |
      "${CI_COMMIT_REF_NAME}" == "master"		# Run this job when the branch is master.

Relational and logical operators are supported.

| Operator | Description | Example | Example description |
| --- | --- | --- | --- |
| == | Equal to | condition: "${CI_COMMIT_REF_NAME}" == "master" | Runs when the branch is master. |
| != | Not equal to | condition: "${CI_COMMIT_REF_NAME}" != "master" | Runs when the branch is not master. |
| && | And | condition: "${CI_COMMIT_REF_NAME}" == "master" && succeed() | Runs when the branch is master and all preceding jobs are successful. |
| \|\| | Or | condition: "${CI_COMMIT_REF_NAME}" == "master" \|\| "${CI_COMMIT_REF_NAME}" == "develop" | Runs when the branch is master or develop. |
| ! | Not | condition: succeed('job1') && !skipped('job1') | Runs when job1 is successful and not skipped. |
| () | Logical grouping | condition: ("${CI_COMMIT_REF_NAME}" == "master" \|\| "${CI_COMMIT_REF_NAME}" == "develop") && succeed() | Runs when the branch is master or develop, and all preceding jobs are successful. |

A set of built-in functions is provided for use in expressions.

| Function | Description | Example |
| --- | --- | --- |
| startsWith(searchString, searchValue) | Returns true if searchString starts with searchValue. | condition: startsWith('Hello world','He') |
| endsWith(searchString, searchValue) | Returns true if searchString ends with searchValue. | condition: endsWith('Hello world','ld') |
| contains(search, item) | Returns true if search is an array and item is an element in the array. | condition: contains('["aa", "bb", "cc"]', 'aa') |
| weekDay() | Returns the day of the week (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, or Sunday). | condition: weekDay()=="Thursday" |
| timeIn(startTime, endTime) | Determines whether the current time is between startTime and endTime. | condition: timeIn("20:00:00", "22:00:00") |

Note: Function parameters can be existing variables. For example, if you set the pipeline variable TEST_VAR=["aa", "bb", "cc"], you can reference this variable in the function using ${}, as shown in the following example:

jobs:
  job_1:
    name: 1
    condition: contains('${TEST_VAR}', 'aa')

You can use task status functions to retrieve the execution status of preceding dependent jobs. The input parameter for these functions is the <job_id> of a preceding dependent job.

| Function | Description | Example |
| --- | --- | --- |
| always() | Always returns true. | condition: always() |
| succeed() | Returns true if all preceding jobs have a status of "Successful" or "Skipped". | condition: succeed('job_id_1','job_id_2') |
| failed() | Returns true if at least one preceding job has a status of "Failed" or "Canceled". | condition: failed('job_id_1','job_id_2') |
| skipped() | Returns true if at least one preceding job has a status of "Skipped". | condition: skipped('job_id_1','job_id_2') |

Note: If you do not specify an input parameter for a task status function, the function applies to all preceding jobs. For example, succeed() returns true only if all preceding dependent jobs are successful. The input parameter for a task status function must be the <job_id> of a preceding job. If you specify a <job_id> for a job that has no dependency relationship, the function does not run as expected. The following is an example:

jobs:
  job_1:
    name: Job 1
  job_2:
    name: Job 2
  job_3:
    name: Job 3
    needs: 
      - job_1
      - job_2
    condition: succeed('job_1') || succeed('job_2') # Job 3 runs if Job 1 or Job 2 is successful.

stages.<stage_id>.jobs.<job_id>.sourceOption

Optional. By default, all pipeline source files configured in the pipeline are downloaded. When a pipeline has multiple code sources, you can choose whether a job node downloads all source files or only specific source files by specifying the <source_id>.

| Scenario | Description | Example |
| --- | --- | --- |
| Download all source files | Do not specify sourceOption. | Not specified |
| Do not download source files | Specify sourceOption, but leave it empty. | sourceOption: [] |
| Download specified source files | Specify sourceOption and the <source_id>. | sourceOption: [repo_1,repo_2] |
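Combining the scenarios above, the following sketch downloads only one of the configured code sources. The source ID repo_1 is illustrative; use the <source_id> values defined in your own pipeline sources:

jobs:
  my_job:
    name: My task
    runsOn:
      group: public/ap-southeast-1
    sourceOption: [repo_1]    # Download only the source whose ID is repo_1.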

stages.<stage_id>.jobs.<job_id>.steps

A pipeline job can consist of multiple steps. These steps share a workspace to complete a specific task.

For more information, see Pipeline steps.

stages.<stage_id>.jobs.<job_id>.component

A pipeline job can call a component to run a specific task. Component tasks support operations such as retries and skips.

For more information, see Pipeline components.

stages.<stage_id>.jobs.<job_id>.with

When a pipeline job calls a component, use with to specify the required runtime parameters for the component. The following is an example:

jobs:
  deploy_job:
    name: Host group deployment task
    component: VMDeploy             # Specify the component.
    with:                           # Specify the parameters for the component.
      artifact: $[stages.build_stage.build_job.upload_step.artifacts.default]
      machineGroup: <YOUR-MACHINE-GROUP-ID>
      ......
    

For more information, see Pipeline components.

stages.<stage_id>.jobs.<job_id>.plugins

Optional. You can configure plugins to send pipeline job notifications through channels such as DingTalk, email, WeCom, Lark, and webhooks.

For more information, see Pipeline plugins.