
Elastic High Performance Computing: Submit jobs by using submitter

Last Updated: Apr 23, 2025

If your Elastic High Performance Computing (E-HPC) cluster uses Elastic Compute Service (ECS) instances as nodes, you can use submitter to submit jobs to the cluster on E-HPC Portal. This topic describes how to use submitter to submit jobs on E-HPC Portal.

Prerequisites

The cluster is in the Running state.

Before you begin

Prepare the job file and upload it to your cluster.

E-HPC Portal supports the following upload methods:

  • Use the Data Management module: You can create and edit a job file in the cluster folder, upload a local file to the cluster, or download a file from an Object Storage Service (OSS) bucket to the cluster.

  • Connect to the cluster and create a file: You can click the connect icon in the upper-right corner of E-HPC Portal to connect to the cluster and then run a command to create a job file.
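For the second method, the following is a minimal sketch of creating a job file from the command line after you connect to the cluster. The file name, job name, resource request, and command are illustrative placeholders, not values from a real cluster:

```shell
# Illustrative only: create a minimal PBS job script in the current
# directory. All names and resource values below are placeholders.
cat > job.pbs <<'EOF'
#!/bin/sh
#PBS -N hello_job
#PBS -l nodes=1:ppn=4
echo "Hello from $(hostname)"
EOF
chmod +x job.pbs   # make the script directly executable
```

Because the script is marked executable, it can later be referenced by its relative path (for example, ./job.pbs) in the Command parameter.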

Procedure

  1. Log on to E-HPC Portal.

    For more information, see Log on to E-HPC Portal.

  2. In the top navigation bar, click Task Management.

  3. In the upper part of the page, click submitter.

  4. On the Create Job page, specify the following parameters:

    Note

    If you want to submit the same job repeatedly, you can click Save as Template after you configure the parameters. You can then select the template on the left side of the page to conveniently submit subsequent jobs.

    • Basic parameters

      • Username: If your cluster is a Slurm cluster and you log on to E-HPC Portal as the root user, you can submit jobs as the root user or as a specific regular user.

        Important: If you specify a non-root user, make sure that the user has previously logged on to and submitted jobs in E-HPC Portal.

      • Job Name: The name of the job. If the job files need to be automatically downloaded and decompressed, the decompression directory is named after the job name.

      • Input File: The input file of the job. Enter a command-line flag (for example, -input) and then select an input file (for example, /home/testuser/in.txt).

      • Output File: The output file of the job. Enter a command-line flag (for example, -output) and then enter a path for the output file (for example, /home/testuser/out.txt).

      • Queue: The queue in which you want to run the job. If you added compute nodes to a queue when you created the cluster, you must submit the job to that queue; otherwise, the job fails to run. If you did not add compute nodes to a queue, the job is submitted to the default queue of the scheduler.

      • Command: The job execution command that you want to send to the scheduler. You can enter a text command or the relative path of a script file. E-HPC Portal supports the following methods: online editing, on-premises (local) file, and uploaded file.

        Note: If the script file is directly executable, you can specify a relative path, for example, ./job.pbs. If the script file is not executable, you must enter an execution command, for example, /opt/mpi/bin/mpirun /home/test/job.pbs.

      • Priority: The priority of the job. Valid values: 0 to 9. A larger value indicates a higher priority. If you specified that jobs are scheduled by priority when you set the cluster scheduling policy, jobs with a higher priority are scheduled and run first.

      • Number of Nodes: The number of compute nodes that are used to run the job.

      • Number of Tasks: The number of tasks (that is, processes) that each compute node uses to run the job.

      • Number of Threads: The number of threads that each task uses. This parameter is empty by default, which indicates one thread.

      • Number of GPUs: The number of GPUs that each compute node uses to run the job. Specify this parameter only if the compute nodes are GPU-accelerated instances.

    • Advanced parameters

      • MPI Profile: Specifies whether to enable MPI performance profiling.

      • Maximum Memory: The maximum memory that each compute node can use to run the job. This parameter is empty by default, which indicates that no limit is imposed. Specify the value in the Quantity+Unit format, for example, 1GB or 200MB.

      • Maximum Walltime: The maximum running time of the job. The job fails if it runs beyond the specified period. This parameter is empty by default, which indicates that no limit is imposed. Example: 01:00:00, which indicates one hour.

      • Stdout Path: The path in which job logs are saved. Make sure that you have write permissions on the path. By default, log files are generated based on the scheduler behavior.

      • Environment Variable: The environment variables that you want to add based on your business requirements.

  5. Click Submit.
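For Slurm clusters, the submitter parameters above correspond roughly to sbatch directives. The following sketch writes an equivalent batch script; the job name, queue name, resource values, output path, and application command are illustrative assumptions, not values from a real cluster:

```shell
# Illustrative mapping of submitter parameters to Slurm sbatch directives.
# All names and values below are placeholders.
cat > submit.sh <<'EOF'
#!/bin/sh
#SBATCH --job-name=hello_job            # Job Name
#SBATCH --partition=comp                # Queue
#SBATCH --nodes=2                       # Number of Nodes
#SBATCH --ntasks-per-node=4             # Number of Tasks (per node)
#SBATCH --cpus-per-task=1               # Number of Threads (per task)
#SBATCH --mem=1G                        # Maximum Memory (per node)
#SBATCH --time=01:00:00                 # Maximum Walltime
#SBATCH --output=/home/testuser/out.txt # Stdout Path
srun ./my_app                           # Command (placeholder application)
EOF
# On a real cluster, the script would be submitted with:
#   sbatch submit.sh
```

This mapping can help you verify the effect of each submitter parameter by inspecting the job script that the scheduler receives.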

More operations

You can query submitted jobs on the Task Management page. For more information, see Query jobs.