Elastic High Performance Computing: Use the CLI to submit a job

Last Updated: Nov 28, 2023

Elastic High Performance Computing (E-HPC) supports schedulers such as PBS, SLURM, and SGE. This topic describes how to use the CLI to submit a job to each of these schedulers.

Prerequisites

  1. A user is created. For more information, see Create a user.

    The user is used to log on to the cluster, compile software, and submit jobs. The following settings are used in this example:

    • Username: testuser

    • User group: sudo permission group

    Important

    We recommend that you do not use the root user to submit jobs. This prevents cluster data from being damaged by improper operations in the job script.

  2. Prepare the data file and the executable program file in the relevant directories.

    The following paths are used in this example:

    $HOME/test.py           # The executable program file.
    $HOME/test.data         # The job data file.

    Sample job execution command:

    ./test.py -i test.data
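
    Before you submit the job through a scheduler, you can run the program once on the logon node to verify that it works. This optional check is a minimal sketch; it assumes that test.py begins with an interpreter line such as #!/usr/bin/env python and that it accepts the -i option, as shown in the sample command above.

    chmod +x $HOME/test.py               # Make the program executable.
    $HOME/test.py -i $HOME/test.data     # Run the program once outside the scheduler.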

Connect to a cluster

Connect to a cluster by using an E-HPC client or the E-HPC console. In this example, the username is testuser. After you connect to the cluster, you are automatically placed in the /home/testuser directory.

  • Use an E-HPC client to log on to a cluster

    The scheduler of the cluster must be PBS. Make sure that you have downloaded and installed an E-HPC client and deployed the environment required for the client. For more information, see Deploy an environment for an E-HPC client.

    1. Start and log on to your E-HPC client.

    2. In the left-side navigation pane, click Session Management.

    3. In the upper-right corner of the Session Management page, click Terminal to open the Terminal window.

  • Use the E-HPC console to log on to a cluster

    1. Log on to the E-HPC console.

    2. In the upper-left corner of the top navigation bar, select a region.

    3. In the left-side navigation pane, click Cluster.

    4. On the Cluster page, find the cluster and click Connect.

    5. In the Connect panel, enter a username and a password, and click Connect via SSH.
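
After you connect, you can verify that you are logged on as the intended user and that the scheduler CLI is available. The following commands are an optional check; which of qsub or sbatch is present depends on the scheduler type of your cluster.

    whoami                           # Expected output: testuser
    pwd                              # Expected output: /home/testuser
    which qsub sbatch 2>/dev/null    # Shows the scheduler CLI that is installed.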

Submit a job

Create a job script file based on the scheduler type, copy the corresponding job script into the file, and then submit the job. Examples:

Note

In auto scaling scenarios, E-HPC does not support memory-based scaling. We recommend that you specify the number of required vCPUs when you submit a job.

PBS

  1. Create a job script file named jobscript.pbs.

    vim jobscript.pbs

    The following section provides an example of the content of the jobscript.pbs file. For information about the PBS CLI, visit the PBS official website.

    #!/bin/sh
    #PBS -l ncpus=4,mem=1gb     # The computing resources required to run the job.
    #PBS -l walltime=00:10:00   # The estimated duration of the job.
    #PBS -o test_pbs.log        # The output file of stdout.
    #PBS -j oe                  # Redirects stderr and stdout to the preceding output file.
    cd $HOME
    ./test.py -i test.data
  2. Submit the job.

    qsub jobscript.pbs
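
  3. Optional. Query the job status.

    After qsub returns the job ID, you can track the job by using the standard PBS commands. The following commands are an illustrative sketch; replace <job_id> with the ID that qsub printed.

    qstat                    # List queued and running jobs.
    qstat -f <job_id>        # Show the full status of a specific job.
    cat $HOME/test_pbs.log   # View the job output after the job is complete.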

SLURM

  1. Create a job script file named jobscript.slurm.

    vim jobscript.slurm

    The following section provides an example of the content of the jobscript.slurm file. For information about the SLURM CLI, visit the SLURM official website.

    #!/bin/sh
    #SBATCH --job-name=slurm-quickstart   # The job name.
    #SBATCH --output=test_slurm.log       # The output file of stdout.
    #SBATCH --nodes=1                     # The number of nodes.
    #SBATCH --ntasks=1                    # The number of tasks.
    #SBATCH --cpus-per-task=1             # The number of vCPUs required for each task.
    #SBATCH --time=00:10:00               # The estimated duration of the job.
    #SBATCH --mem-per-cpu=1024            # The memory that is allocated to each vCPU, in MB.
    cd $HOME
    ./test.py -i test.data
  2. Submit the job.

    sbatch jobscript.slurm
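
  3. Optional. Query the job status.

    After sbatch prints the job ID, you can query the job by using the standard SLURM commands. The following commands are an illustrative sketch; replace <job_id> with the ID that sbatch printed.

    squeue                        # List pending and running jobs.
    scontrol show job <job_id>    # Show the details of a specific job.
    cat $HOME/test_slurm.log      # View the job output after the job is complete.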

SGE

  1. Create a job script file named jobscript.sh.

    vim jobscript.sh

    The following example shows the content of jobscript.sh. For more information about the SGE CLI, visit the SGE official website.

    #!/bin/bash
    #$ -cwd                # Run the job from the current working directory.
    #$ -N test1            # The job name.
    #$ -q all.q            # The queue of the job.
    #$ -pe smp 2           # The number of vCPUs required to run the job.
    #$ -l vf=1g            # The memory size required to run the job.
    #$ -o /home/testuser   # The stdout log path.
    #$ -e /home/testuser   # The stderr log path.
    cd $HOME
    ./test.py -i test.data
    Important

    During auto scaling, the E-HPC cluster selects ECS instance types based on the number of vCPUs that the job requires. When you write an SGE job script, you must use -pe smp to specify the number of vCPUs required by the job.

  2. Submit the job.

    qsub -V jobscript.sh
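
  3. Optional. Query the job status.

    After you submit the job, you can check its status by using the standard SGE commands. Because -o and -e point to the /home/testuser directory, SGE names the log files after the job, for example test1.o<job_id> and test1.e<job_id>. Replace <job_id> with the ID that qsub printed.

    qstat                                                # List pending and running jobs.
    qstat -j <job_id>                                    # Show the details of a specific job.
    ls /home/testuser/test1.o* /home/testuser/test1.e*   # Locate the job log files.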