
Concurrent tasks

Last Updated: May 11, 2018

Note: Replace your-bucket in the examples with a real bucket.


In this example, a job containing a single task is started. The task starts two instances that run concurrently in two VMs.

Both VMs run the python command specified in the sum task. In the program, the environment variable BATCH_COMPUTE_DAG_INSTANCE_ID provides the InstanceId, which is used to differentiate between the input data.

InstanceId starts from 0.

In each VM, after the task program processes ${InstanceId}-input.txt and writes the result to /home/outputs/${InstanceId}-output.txt, the system automatically uploads the file to the corresponding OSS path: oss://your-bucket/sum/outputs/.
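The task program itself is not listed on this page, but its logic can be sketched in Python as follows. The input directory (/home/inputs below) and the plain-integer output format are assumptions for illustration, not taken from the example package:

```python
import os

def run_task(input_dir="/home/inputs", output_dir="/home/outputs"):
    """Sum the numbers in this instance's input file and write the result.

    The BatchCompute runtime sets BATCH_COMPUTE_DAG_INSTANCE_ID to
    0 or 1, one value per VM, so each instance picks its own file.
    """
    instance_id = os.environ["BATCH_COMPUTE_DAG_INSTANCE_ID"]
    in_path = os.path.join(input_dir, "%s-input.txt" % instance_id)
    out_path = os.path.join(output_dir, "%s-output.txt" % instance_id)

    # Read whitespace-separated integers and sum them.
    with open(in_path) as f:
        total = sum(int(tok) for tok in f.read().split())

    # Write the result; the system uploads this file to OSS afterwards.
    with open(out_path, "w") as f:
        f.write(str(total))
    return total

if __name__ == "__main__":
    run_task()
```

Because each instance derives its file names from its own InstanceId, the two VMs process different inputs with the same program.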

When all programs in the two VMs finish running, both the task and the job end.


Upload data files to OSS

The data files are in the data directory: 0-input.txt and 1-input.txt.

Content of 0-input.txt:

  1 20 45

Content of 1-input.txt:

  5 85 103

Upload 0-input.txt and 1-input.txt to:

  oss://your-bucket/sum/inputs/0-input.txt
  oss://your-bucket/sum/inputs/1-input.txt

Run the following commands to upload the files:

  cd data
  bcs oss upload 0-input.txt oss://your-bucket/sum/inputs/
  bcs oss upload 1-input.txt oss://your-bucket/sum/inputs/
  # Check whether the upload succeeded.
  bcs oss ls oss://your-bucket/sum/inputs/

Submit the job

  bcs sub --file job.cfg

Check the job execution result

The result data is stored in oss://your-bucket/sum/outputs/.

View the data by running the following commands:

  bcs o ls oss://your-bucket/sum/outputs/
  bcs o cat oss://your-bucket/sum/outputs/0-output.txt
  bcs o cat oss://your-bucket/sum/outputs/1-output.txt
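Assuming the task program writes the plain integer sum of its input line, the expected contents of the two output files follow directly from the data files shown earlier. A quick local check of that arithmetic:

```python
# The two input lines from the data files uploaded above.
inputs = {
    "0-output.txt": "1 20 45",
    "1-output.txt": "5 85 103",
}

# Each output file is expected to hold the sum of its input line.
expected = {name: sum(int(tok) for tok in line.split())
            for name, line in inputs.items()}

print(expected)  # {'0-output.txt': 66, '1-output.txt': 193}
```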