
Elastic High Performance Computing: Submit jobs

Last Updated:May 27, 2025

This topic describes how to submit jobs by using the E-HPC console, the command-line interface (CLI), or E-HPC Portal.

Prerequisites

  • The cluster and cluster nodes are in the Running state.

  • A cluster user is created. For more information, see Manage users.

  • The job file that you want Elastic High Performance Computing (E-HPC) to run is prepared. This prerequisite is required if you want to submit jobs by using the E-HPC console.

  • The cluster scheduler is a Slurm or OpenPBS scheduler. This prerequisite is required if you want to submit jobs by using the CLI.

Use the E-HPC console

  1. Go to the Cluster Details page.

    1. Log on to the E-HPC console.

    2. In the left part of the top navigation bar, select a region.

    3. In the left-side navigation pane, click Cluster.

    4. On the Cluster List page, find the cluster that you want to manage and click the cluster ID.

  2. In the left-side navigation pane, click Job Management.

  3. Click Create Job and configure the following parameters:

    • Basic Settings

      Job Name (Required)

      The job name.

      Scheduler Queue (Required)

      The queue in which the job is run.

      Start Job Array (Optional)

      Specifies whether to enable the job array feature of the scheduler. You can configure a job array to customize the job execution rule:

      • Minimum Value: the starting index of the jobs in the job array. Job indexes increase from this value.

      • Maximum Value: the ending index of the jobs in the job array. Job indexes stop increasing at this value.

      • Step Size: the increment between job indexes. For example, a step size of 2 specifies that job indexes increase by 2. Default value: 1.

      Job Priority (Optional)

      The execution priority of the job. The value is an integer greater than 0, and its valid range varies depending on the scheduler. A greater value indicates a higher priority.

      Note: If the cluster scheduling policy schedules jobs by priority, jobs with a higher priority are scheduled and run first. Set a high priority for the jobs that you want to run first.

      Run Command (Required)

      The command that is sent to the scheduler to run the job. The value can be a command file, such as job.pbs in /home/test, or a text command:

      • To specify a command file, use a relative path, for example, ./job.pbs.

      • To specify a text command, for example, if you do not have the execute permission on a command file, prefix the command with two hyphens (--). Example: --/opt/mpi/bin/mpirun /home/test/job.pbs.

      Nodes (Optional)

      The number of compute nodes that are used to run the job.

      Requested CPUs (Optional) and Requested Memory (Optional)

      The vCPU and memory resources that are used on each node to run the job.

      Note: Specify these parameters based on the actual requirements of the job. If the resources that you specify are insufficient, the job may fail to run.

    • Advanced Settings

      Stdout Path (Optional) and Stderr Path (Optional)

      The paths of the output files to which stdout (standard output) and stderr (standard error) are redirected by using the Linux shell. Each path includes the output file name.

      Cluster users must have write permissions on the paths. By default, output files are generated based on the scheduler settings.

      Environment Variables (Optional)

      The runtime variables that are passed to the job. The job can read them as environment variables in the executable file.

  4. Click Confirm Create.
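The Environment Variables parameter exports the specified variables into the job's runtime environment, so the job script or executable can read them directly. A minimal sketch, assuming a hypothetical variable named INPUT_DIR was set for the job:

```shell
#!/bin/sh
# Hypothetical job script: INPUT_DIR is assumed to have been set in the
# job's Environment Variables parameter; it is not a predefined variable.
INPUT_DIR=${INPUT_DIR:-/home/test}   # fall back to a default if the variable is unset
echo "reading input from ${INPUT_DIR}"
```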

Use the CLI

  1. Connect to the logon node of the cluster. For more information, see Connect to a cluster.

  2. Write a script and use the script to submit jobs to the scheduler. The operation varies based on the scheduler used. Examples:

    Note

    In auto scaling scenarios, E-HPC does not support memory-based scaling. We recommend that you specify the number of required vCPUs when you submit a job.

    PBS

    1. Run the following command to create a job script file named jobscript.pbs.

      vim jobscript.pbs

      The following code snippet shows sample content of the jobscript.pbs file. For information about the PBS CLI, visit the PBS official website.

      #!/bin/sh
      #PBS -l ncpus=4,mem=1gb    # The computing resources required to run the job.
      #PBS -l walltime=00:10:00  # The estimated duration of the job.
      #PBS -o test_pbs.log       # The output file of stdout.
      #PBS -j oe                 # Merges stderr into the preceding stdout output file.
      cd $HOME
      ./test.py -i test.data
    2. Run the following command to submit your job:

      qsub jobscript.pbs
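The Start Job Array settings in the console map to the scheduler's array options. A hedged sketch for PBS (the script name array_job.pbs and the input-file naming are hypothetical; PBS sets PBS_ARRAY_INDEX for each sub-job):

```shell
#!/bin/sh
#PBS -J 1-9:2                    # Job array: minimum value 1, maximum value 9, step size 2.
INDEX=${PBS_ARRAY_INDEX:-1}      # Set by PBS for each sub-job; defaults to 1 for local runs.
INPUT_FILE="test_${INDEX}.data"  # Each sub-job processes its own input file.
echo "processing ${INPUT_FILE}"
```

Submit the script with qsub array_job.pbs; qstat then lists one entry per sub-job.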

    SLURM

    1. Run the following command to create a job script file named jobscript.slurm.

      vim jobscript.slurm

      The following code snippet shows sample content of the jobscript.slurm file. For information about the Slurm CLI, visit the SLURM official website.

      #!/bin/sh
      #SBATCH --job-name=slurm-quickstart  # The job name.
      #SBATCH --output=test_slurm.log      # The output file of stdout.
      #SBATCH --nodes=1                    # The number of nodes.
      #SBATCH --ntasks=1                   # The number of tasks.
      #SBATCH --cpus-per-task=1            # The number of vCPUs required for each task.
      #SBATCH --time=00:10:00              # The estimated duration of the job.
      #SBATCH --mem-per-cpu=1024           # The memory (in MB) that is allocated to each vCPU.
      cd $HOME
      ./test.py test.data
    2. Run the following command to submit your job:

      sbatch jobscript.slurm
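Slurm provides the console's job array feature through the --array option, and each sub-job receives SLURM_ARRAY_TASK_ID. A minimal sketch (the input-file naming is hypothetical):

```shell
#!/bin/sh
#SBATCH --array=1-9:2               # Job array: minimum value 1, maximum value 9, step size 2.
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}   # Set by Slurm for each sub-job; defaults to 1 for local runs.
INPUT_FILE="test_${TASK_ID}.data"   # Each sub-job processes its own input file.
echo "processing ${INPUT_FILE}"
```

Submit the script with sbatch; squeue then lists one entry per sub-job.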

Use E-HPC Portal

For information about using E-HPC Portal, see Submit jobs by using submitter.

Reference

You can also call the CreateJob API operation to submit a job.