
Job submission

Last Updated: Aug 08, 2018

Prerequisites

To submit jobs to an E-HPC cluster, make sure that the following prerequisites are met:

Warning: We do not recommend using the root account for job submission, because a mistake in the job script may corrupt data on the E-HPC cluster.

Submit a job

Conventions

Assume that the job files are located at the following paths:

  $HOME/test.py # Job executable program
  $HOME/test.data # Job data

The command line to run the job is:

  test.py -i test.data

Job scheduling

E-HPC currently supports the following two job schedulers:

PBS
  1. $ cat > test.pbs
  2. #!/bin/sh
  3. #PBS -l ncpus=4,mem=1gb
  4. #PBS -l walltime=00:10:00
  5. #PBS -o test_pbs.log
  6. #PBS -j oe
  7. cd $HOME
  8. ./test.py -i test.data
  9. $ qsub test.pbs

In the PBS job scheduling script, test.pbs:

  • Line 3 requests the computing resources needed for the job: 4 CPU cores and 1 GB of memory.
  • Line 4 sets the estimated wall-clock time for the job: 10 minutes.
  • Line 5 specifies test_pbs.log as the file for standard output (stdout).
  • Line 6 merges standard error (stderr) into the stdout file specified on the previous line.
  • Lines 7 and 8 contain the commands that actually run the job.
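When you submit many similar jobs, it can be convenient to generate these directives programmatically instead of editing the script by hand. The following is a minimal sketch; the helper name build_pbs_script is our own, not part of PBS:

```python
def build_pbs_script(command, ncpus=4, mem="1gb", walltime="00:10:00",
                     logfile="test_pbs.log"):
    """Return the text of a PBS job script equivalent to test.pbs."""
    lines = [
        "#!/bin/sh",
        "#PBS -l ncpus=%d,mem=%s" % (ncpus, mem),  # requested resources
        "#PBS -l walltime=%s" % walltime,          # estimated run time
        "#PBS -o %s" % logfile,                    # stdout file
        "#PBS -j oe",                              # merge stderr into stdout
        "cd $HOME",
        command,
    ]
    return "\n".join(lines) + "\n"


print(build_pbs_script("./test.py -i test.data"))
```

Write the returned text to test.pbs and submit it with qsub test.pbs as shown above.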

For PBS combination examples, see GROMACS (Example 2) and LAMMPS (PBS job submission).

For more information, see the official PBS User Guide.

SLURM
  $ cat > test.slurm
  #!/bin/sh
  #SBATCH --job-name=slurm-quickstart
  #SBATCH --output=test_slurm.log
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=1
  #SBATCH --time=00:10:00
  #SBATCH --mem-per-cpu=1024
  cd $HOME
  ./test.py -i test.data
  $ sbatch test.slurm

For SLURM combination examples, see Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Weather Research and Forecasting model (WRF).
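As with PBS, the #SBATCH directives can be generated from parameters. The following is a minimal sketch; build_slurm_script is a hypothetical helper of our own, and note that SLURM interprets --mem-per-cpu in megabytes, so 1024 requests 1 GB per CPU:

```python
def build_slurm_script(command, job_name="slurm-quickstart",
                       logfile="test_slurm.log", nodes=1, ntasks=1,
                       cpus_per_task=1, time="00:10:00", mem_per_cpu=1024):
    """Return the text of a SLURM job script equivalent to test.slurm."""
    directives = [
        ("--job-name", job_name),
        ("--output", logfile),             # stdout/stderr file
        ("--nodes", nodes),                # number of nodes
        ("--ntasks", ntasks),              # number of tasks
        ("--cpus-per-task", cpus_per_task),
        ("--time", time),                  # wall-clock limit
        ("--mem-per-cpu", mem_per_cpu),    # memory per CPU, in MB
    ]
    lines = ["#!/bin/sh"]
    lines += ["#SBATCH %s=%s" % (key, value) for key, value in directives]
    lines += ["cd $HOME", command]
    return "\n".join(lines) + "\n"


print(build_slurm_script("./test.py -i test.data"))
```

Write the returned text to test.slurm and submit it with sbatch test.slurm as shown above.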
