
Use the E-HPC console to create a job

Last Updated: Sep 06, 2021

A job is the basis of Elastic High Performance Computing (E-HPC). A job consists of a shell script and executable files. Before you can perform high-performance computing, you must submit a job in the E-HPC console. Jobs are run in a sequence that is determined by the specified queue and the scheduler. In the E-HPC console, you can create a job, stop a job, and view the status of a job. This topic describes how to use the E-HPC console to create a job.

Prerequisites

  • The cluster and cluster nodes are in the Running state.

  • A user is created. For more information, see Create a user.

  • Job files are ready to be imported. E-HPC allows you to import job files by using the following methods:

    • Before you create a job, log on to the cluster and import the job files by using a remote transmission tool, such as rsync or Secure Copy Protocol (SCP), as shown in the sketch after this list.

    • When you create a job, import job files stored in an Object Storage Service (OSS) bucket.

    • When you create a job, import job files stored in your local directory or select newly created job files.
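
    For example, a minimal sketch of the first method, assuming a cluster logon node with the placeholder public IP address 203.0.113.10 and a cluster user named testuser:

      # Copy a local job directory to the cluster logon node over SCP.
      scp -r ./job_files testuser@203.0.113.10:/home/testuser/

      # Or synchronize the directory incrementally with rsync.
      rsync -avz ./job_files/ testuser@203.0.113.10:/home/testuser/job_files/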

Procedure

  1. Log on to the E-HPC console.

  2. In the top navigation bar, select a region.

  3. In the left-side navigation pane, choose Job and Performance Manager > Job.

  4. On the Job page, click Create Job.

  5. In the Create Job panel, set the required parameters.

    Username and Password

    The username and password used to run the job.

    Job Name

    The name of the job. If you want E-HPC to automatically download and decompress the job files, the job files must have the same name as the job.

    Job Template

    The preconfigured template based on which the job is created. For more information, see Create a job template.

    Command Line

    The job execution command to be submitted to the scheduler. You can enter a command or the relative path of a script file, for example, job.pbs in the /home/test directory. Set this parameter based on the following scenarios (a sample script is shown after the list):

    • If the script file is executable, enter its relative path, for example, ./job.pbs.

    • If the script file is not executable, enter the execution command, for example, /opt/mpi/bin/mpirun /home/test/job.pbs. If your scheduler is PBS, add a hyphen (-) before the command, for example, -/opt/mpi/bin/mpirun /home/test/job.pbs.
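
    For reference, a minimal sketch of what a script such as job.pbs might contain, assuming a PBS/Torque scheduler; the application path /home/test/my_app and the resource values are placeholders:

      #!/bin/sh
      #PBS -N testjob
      #PBS -l nodes=2:ppn=4
      #PBS -o /home/test/job.out
      #PBS -e /home/test/job.err
      # Run from the directory in which the job was submitted.
      cd $PBS_O_WORKDIR
      # Launch 8 MPI processes (2 nodes x 4 processes per node).
      /opt/mpi/bin/mpirun -np 8 /home/test/my_app

    To use the relative-path form, make the script executable first, for example, by running chmod +x job.pbs.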

    Queue

    If you added compute nodes to a queue when you created the cluster, you must submit the job to that queue. Otherwise, the job fails. If you did not add compute nodes to a queue, the job is submitted to the default queue of the scheduler.

    Number of Compute Nodes

    The number of compute nodes used to run the job.

    Number of Tasks

    The number of tasks used by each compute node to run the job, that is, the number of processes.
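
    For example, if Number of Compute Nodes is set to 2 and Number of Tasks is set to 4, the job runs 2 × 4 = 8 processes in total.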

    Maximum Memory

    The maximum memory that can be used when a compute node runs the job. If you do not specify this parameter, the memory is unlimited.

    Maximum Running Time

    The maximum running time of the job. If the actual running time exceeds the maximum running time, the job fails. If you do not specify this parameter, the running time is unlimited.

    Thread Quantity

    The number of threads that are used by a task. If you do not specify this parameter, the number of threads is 1.

    GPU Quantity

    The number of GPUs that are used when a compute node runs the job. If you specify this parameter, make sure that the compute node is a GPU-accelerated instance.

    Priority

    The priority of the job. Valid values: 0 to 9. A greater value indicates a higher priority. If you specified that jobs are scheduled based on the job priority when you set the cluster scheduling policy, jobs that have a higher priority are scheduled and run first.

    You can set a high priority for the jobs that need to be run first.

    Enable Job Array

    Specifies whether to enable the job array feature of the scheduler. A job array is a collection of similar, independent jobs. You can use a job array to run the same job script multiple times with different index values.

    Format: X-Y:Z. X is the minimum index value. Y is the maximum index value. Z is the step size. For example, 2-7:2 indicates that three jobs need to be run and their index values are 2, 4, and 6. Default value of Z: 1.
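
    For example, with the job array set to 2-7:2, each of the three jobs can read its own index from the scheduler's array environment variable and select its input accordingly. For PBS-family schedulers, this variable is typically PBS_ARRAY_INDEX or PBS_ARRAYID, depending on the version; check your scheduler's documentation. A minimal sketch, assuming input files named input_2.dat, input_4.dat, and input_6.dat:

      #!/bin/sh
      # Read whichever array index variable the scheduler defines.
      INDEX=${PBS_ARRAY_INDEX:-$PBS_ARRAYID}
      # Each job in the array processes the input file that matches its index.
      ./my_app input_${INDEX}.dat > output_${INDEX}.log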

    Post-Processing Command

    The command that is used to perform subsequent operations on the result of the job, such as packaging the generated job data and uploading it to an OSS bucket.
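
    For example, a post-processing command might look like the following; the paths and the bucket name are placeholders, and ossutil is the Alibaba Cloud OSS command-line tool:

      # Package the job output and upload the archive to an OSS bucket.
      tar -czf /home/test/result.tar.gz /home/test/output && ossutil cp /home/test/result.tar.gz oss://my-bucket/results/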

    Stdout Redirect Path

    The path of the output file to which stdout (standard output) is redirected by using a Linux shell. The path contains the output file name, for example, /home/test/job.out. Cluster users must have write permissions on the path. If you do not specify this parameter, the output file is generated based on the scheduler settings.

    Stderr Redirect Path

    The path of the output file to which stderr (standard error) is redirected by using a Linux shell. The path contains the output file name. Cluster users must have write permissions on the path. If you do not specify this parameter, the output file is generated based on the scheduler settings.

    Variables

    The runtime variables that are passed to the job. The executable file can access them as environment variables.
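
    For example, if you pass a variable named INPUT_FILE (a hypothetical name), the job script can read it like any other environment variable:

      #!/bin/sh
      # INPUT_FILE is supplied through the Variables parameter when the job is created.
      echo "Processing ${INPUT_FILE}"
      ./my_app "${INPUT_FILE}"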

    Use OSS Job file

    You can select job files that are stored in an OSS bucket. If you specify this parameter, E-HPC automatically downloads the job files when the job is run.

    For more information, see Import job files from an OSS bucket to a cluster.

    Edit Job File

    You can create job files or select on-premises job files. Job files can be created based on a template. If multiple job files exist, you can view, edit, or delete them in the file list.

    Decompression

    If you turn on the switch, E-HPC automatically decompresses the job files before the job is run. ZIP files, TAR files, and GZIP files can be automatically decompressed.

    Note

    E-HPC decompresses the job files and saves them to a folder in the /home directory. The folder has the same name as the job files. To execute a script contained in the job files, you must first change to that folder.
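
    For example, a minimal sketch that assumes a job file named testjob.tar.gz containing a script named run.sh (both names are hypothetical), so that the files are extracted to /home/testjob:

      # testjob.tar.gz is extracted to /home/testjob before the job runs.
      cd /home/testjob && ./run.sh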

  6. Click OK. The job is submitted to the cluster. Then, E-HPC runs the job.

Result

After you create and submit a job, you can view it on the Job page.

Click Details in the Actions column. In the Job Details panel, you can view the job details, including the job name, job ID, start time, the time when the job was last updated, and job running information.

Related operations

You can export a job configuration file and import it later to complete the configuration of a new job without setting each parameter again.

  • Export a job configuration file

    1. On the Job page, click Export.

       The job configuration file is downloaded.

  • Import a job configuration file

    1. On the Job page, click Import.

    2. Select a job configuration file from your local directory.