
LAMMPS

Last Updated: Jul 31, 2018

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code. It can be used to model solid-state materials (metals and semiconductors), soft matter (biological macromolecules and polymers), and coarse-grained or mesoscopic systems.

For more information, see the official LAMMPS website.

The 3d Lennard-Jones melt example
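
For reference, a 3d Lennard-Jones melt input deck looks roughly like the sketch below. This is a minimal illustration based on the standard LAMMPS melt example, not the in.intel.lj file used in the procedure that follows, which differs in system size and parameters.

     $ cat > in.lj <<'EOF'
     # 3d Lennard-Jones melt (illustrative sketch only, not the bundled in.intel.lj)
     units           lj
     atom_style      atomic
     lattice         fcc 0.8442
     region          box block 0 10 0 10 0 10
     create_box      1 box
     create_atoms    1 box
     mass            1 1.0
     velocity        all create 3.0 87287
     pair_style      lj/cut 2.5
     pair_coeff      1 1 1.0 1.0 2.5
     fix             1 all nve
     thermo          100
     run             1000
     EOF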

Prerequisites

  • Install the LAMMPS software package when you create the cluster.

  • When creating the cluster, also select the MPI library that LAMMPS depends on.

Procedure

  1. Run module avail to check whether LAMMPS has been installed:

     $ export MODULEPATH=/opt/ehpcmodulefiles/
     $ module avail
     ------------------------------ /opt/ehpcmodulefiles -------------------------------------
     lammps/31Mar17-mpich  lammps/31Mar17-openmpi  mpich/3.2.2  openmpi/1.10.7
  2. Run module load to load LAMMPS and the matching MPI library:

     $ module load lammps/31Mar17-mpich
     $ module load mpich
     $ which lmp
     /opt/lammps/31Mar17-mpich/lmp
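
     Before submitting a job, make sure your working directory contains the example input file in.intel.lj. The commands below are only a sketch: the source path of the bundled example is an assumption and depends on your installation.

     $ mkdir -p ~/lammps-test && cd ~/lammps-test
     $ cp /opt/lammps/31Mar17-mpich/examples/intel/in.intel.lj .   # assumed location of the bundled example
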
  3. Go to the directory that contains the example input file and submit the job using any of the following methods:

  • Submit jobs using the command line:

     $ srun --mpi=pmi2 -N 2 -n 4 lmp -in in.intel.lj
     LAMMPS (31 Mar 2017)
     Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
     Created orthogonal box = (0 0 0) to (134.368 67.1838 67.1838)
       2 by 1 by 2 MPI processor grid
     Created 512000 atoms
     ... ...
  • Submit jobs using a job script:

     $ cat job.sh                  # job script content
     #!/usr/bin/env bash
     mpirun lmp -in ./in.intel.lj
     $ sbatch -N 2 -n 4 ./job.sh   # submit the job
     Submitted batch job 235
     $ squeue                      # view the job
     JOBID  PARTITION  NAME    USER  ST  TIME  NODES  NODELIST(REASON)
     235    comp       job.sh  user  R   0:03  2      s[02-03]
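
     For longer runs it is often more convenient to keep the resource requests and environment setup inside the script itself. The following is only a sketch of such a script; the job name, output file name, and module versions are assumptions that must match your cluster.

     $ cat job_full.sh
     #!/usr/bin/env bash
     #SBATCH --job-name=lammps-lj        # job name (example value)
     #SBATCH --nodes=2                   # number of nodes
     #SBATCH --ntasks=4                  # total number of MPI ranks
     #SBATCH --output=lammps_%j.log      # output file, %j expands to the job ID
     export MODULEPATH=/opt/ehpcmodulefiles/
     module load lammps/31Mar17-mpich mpich
     mpirun lmp -in ./in.intel.lj
     $ sbatch ./job_full.sh
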
  • Submit jobs after the resource allocation:

     $ salloc -N 2 mpirun -n 4 lmp -in in.intel.lj
     salloc: Granted job allocation 236
     salloc: Waiting for resource configuration
     salloc: Nodes s[02-03] are ready for job
     LAMMPS (31 Mar 2017)
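
     salloc can also be used interactively: first obtain an allocation, then launch the run with srun from inside it. This variant is not part of the original example and is only a sketch.

     $ salloc -N 2 -n 4                       # request an interactive allocation
     $ srun --mpi=pmi2 lmp -in in.intel.lj    # run inside the allocation
     $ exit                                   # release the allocation
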
  • Submit PBS jobs (GPU acceleration):

     $ cat > lammps_single_node.pbs
     #!/bin/sh
     #PBS -l ncpus=28,mem=12gb
     #PBS -l walltime=00:10:00
     #PBS -o lammps_pbs.log
     #PBS -j oe
     cd /opt/lammps/31Mar17-openmpi/src
     /opt/openmpi/bin/mpirun -np 28 /opt/lammps/31Mar17-openmpi/bin/lmp_mpi -sf gpu -pk gpu 2 -in ./in.intel.lj -v m 0.1
     $ qsub lammps_single_node.pbs
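
     Once qsub has accepted the job, its progress can be checked with qstat, and the output ends up in the file named by the #PBS -o directive above. The two commands below are only a usage sketch.

     $ qstat                # list queued and running PBS jobs
     $ cat lammps_pbs.log   # inspect the output after the job finishes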