
Use LAMMPS to perform high-performance computing

Last Updated: May 19, 2022

This topic uses LAMMPS as an example to show how to perform high-performance computing by using an Elastic High Performance Computing (E-HPC) cluster.

Background information

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics program. It has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems.

Before you begin

Before the test, prepare an example input file on your computer. This topic uses an example file that contains the following LAMMPS parameters. You can also prepare a different file based on your needs.

# 3d Lennard-Jones melt

variable        x index 1
variable        y index 1
variable        z index 1

variable        xx equal 20*$x
variable        yy equal 20*$y
variable        zz equal 20*$z

units           lj
atom_style      atomic

lattice         fcc 0.8442
region          box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box      1 box
create_atoms    1 box
mass            1 1.0

velocity        all create 1.44 87287 loop geom

pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5

neighbor        0.3 bin
neigh_modify    delay 0 every 20 check no

fix             1 all nve
dump            1 all xyz 100 /home/lammps/
run             10000

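As a quick sanity check on the input script above: with x = y = z = 1, the simulation box spans 20 lattice cells per side, and an fcc lattice contributes 4 atoms per conventional cell, so create_atoms fills the box with 20 × 20 × 20 × 4 = 32,000 atoms. The following sketch reproduces that arithmetic (the variable names mirror the script; this is an illustration, not part of the job):

```python
# Sanity check for the LAMMPS input script: compute the number of atoms
# that `create_atoms` fills into the fcc box defined by the script.
x = y = z = 1                            # the `variable ... index 1` lines
xx, yy, zz = 20 * x, 20 * y, 20 * z      # box size in lattice cells
atoms_per_fcc_cell = 4                   # fcc conventional cell holds 4 atoms
n_atoms = xx * yy * zz * atoms_per_fcc_cell
print(n_atoms)  # 32000
```

Scaling the x, y, or z index variables scales the atom count accordingly, which is useful for sizing the job to the number of vCPUs on your compute nodes.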

Procedure

  1. Log on to the E-HPC console.

  2. Create a cluster named LAMMPS.

    For more information, see Create a cluster. Set the following parameters:

    • Compute Node: Select an instance type that has no less than 32 vCPUs, for example, ecs.c7.8xlarge.

    • Scheduler: Select pbs.

    • Other Software: Install lammps-mpich 31Mar17, lammps-openmpi 31Mar17, mpich 3.2, openmpi 1.10.7, and vmd 1.9.3.


    If you use an existing cluster, you can install the preceding software in it. For more information, see Install software.

  3. Create a sudo user named lammps.

    For more information, see Create a user.

  4. Upload a job file.

    1. In the left-side navigation pane of the E-HPC console, click Job.

    2. Select a cluster from the Cluster drop-down list. Then, click Create Job.

    3. On the Create Job page, choose Create File > Open Local File.

    4. In the local directory of your computer, find the file, and click Open.

  5. Create a job script and submit the job.

    1. On the Create Job page, choose Create File > Template > pbs demo.

    2. Configure the job, as shown in the following figure. Then, click OK to submit the job.


      The following sample script shows how to configure the job file:

      #PBS -l select=1:ncpus=32:mpiprocs=32
      # In this example, one compute node is selected for the test. The node uses 32 vCPUs and 32 Message Passing Interface (MPI) processes to perform high-performance computing. In an actual test, configure the number of CPU cores based on your node configurations. Make sure that each compute node has no less than 32 vCPUs. 
      #PBS -j oe
      export MODULEPATH=/opt/ehpcmodulefiles/   # The environment variables on which the module command depends.
      module load lammps-openmpi/31Mar17
      module load openmpi/1.10.7
      echo "run at the beginning"
      mpirun lmp -in ./
  6. View the job result.

    1. On the Cluster page of the E-HPC console, find LAMMPS, and click Connect.

    2. In the Connect panel, set Cluster User to lammps, and specify the password and port number. Then, click Connect via SSH.

    3. Run the following command to view the result file generated for the LAMMPS job:

      [lammps@login0 ~]$ ls
      lammps.o0  lammps.pbs  log.lammps
      [lammps@login0 ~]$ cat lammps.o0

      If you do not specify a standard output path for the job, the result file is generated based on the default behavior of the scheduler. By default, the result file is saved in the /home/lammps directory. In this example, the result file is lammps.o0.
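      The log.lammps file records thermodynamic output as a whitespace-separated table between a header line that starts with "Step" and a closing "Loop time" line. The following minimal sketch parses that section, assuming the default thermo header for this example (Step Temp E_pair E_mol TotEng Press); the sample data is illustrative, and a custom thermo_style would produce different columns:

      ```python
      # Minimal sketch: extract the thermo table from a LAMMPS log.
      # Assumes the default header line starting with "Step" and a
      # terminating "Loop time" line; adjust for custom thermo_style.
      def parse_thermo(text):
          rows, header, in_table = [], None, False
          for line in text.splitlines():
              cols = line.split()
              if not in_table and cols[:1] == ["Step"]:
                  header, in_table = cols, True
                  continue
              if in_table:
                  if line.startswith("Loop time"):
                      break
                  rows.append(dict(zip(header, map(float, cols))))
          return rows

      # Illustrative excerpt in the shape of this example's output.
      sample = """Step Temp E_pair E_mol TotEng Press
      0 1.44 -6.7733681 0 -4.6134356 -5.0197073
      100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
      Loop time of 2.3 on 32 procs
      """
      rows = parse_thermo(sample)
      print(len(rows), rows[0]["Temp"])  # 2 1.44
      ```

      This kind of post-processing is optional; you can also inspect the file directly with cat or less on the logon node, as shown above.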

      The following figure shows the test result.