Last Updated: Oct 12, 2017

GROMACS overview


GROMACS (GROningen MAchine for Chemical Simulations) is a general-purpose software package for simulating the molecular dynamics of systems with up to millions of particles, based on Newtonian equations of motion. GROMACS is mainly used for biochemical molecules, such as proteins, lipids, and nucleic acids, which have many complex bonded interactions. Because GROMACS is also highly efficient at the typical simulation workloads, such as computing non-bonded interactions, many researchers use it to study non-biological systems, such as polymers.

GROMACS supports all the common algorithms used in modern molecular dynamics, and its code is maintained by developers around the world. For more information, visit the official GROMACS website.


For the following examples, you must install the GROMACS software package during cluster creation.


Note: To run the gromacs-gpu example, you must select a GPU-series instance type for the compute nodes during cluster creation. Otherwise, the gromacs-gpu software used in the second example cannot run.

You must also select the MPI library that GROMACS depends on during cluster creation.
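Before running the GPU example, it can save a failed job to confirm that the node actually exposes NVIDIA GPUs. The following is a hedged sketch (not part of the original steps); it assumes the NVIDIA driver ships the standard nvidia-smi tool on GPU instances, and falls back to a message elsewhere.

```shell
#!/bin/sh
# Sketch: check whether this node can run gromacs-gpu at all.
# nvidia-smi is assumed to exist only on GPU-series instances.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L    # lists each GPU, e.g. "GPU 0: ..."
else
    echo "no nvidia-smi found: select a GPU-series instance for gromacs-gpu"
fi
```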


Operation examples

Note: To perform these operation examples, you must first carry out the preparations described in Submit jobs.

GROMACS Example 1: Lysozyme in water

This example will guide you through the process of setting up a simulation system containing a protein (lysozyme) in a box of water, with ions.

Link to the official tutorial:

Download address

a. Serial version
   $ ./

b. Parallel version
   $ ./

GROMACS Example 2: Water molecule motion

In this example, we simulate the motion of a large number of water molecules in a given space at a given temperature. This example requires a GPU-accelerated instance.


  • Set the environment variables and run module avail to check that the GROMACS software has been installed.

    $ export MODULEPATH=/opt/ehpcmodulefiles/   # environment variables needed by the module command
    $ module avail
    ------------------------------ /opt/ehpcmodulefiles -------------------------------------
    gromacs-gpu/2016.3    openmpi/1.10.7
  • Run module load to load GROMACS and OpenMPI.

    $ module load openmpi
    $ module load gromacs-gpu
    $ which gmx_mpi
    /opt/gromacs-gpu/2016.3/bin/gmx_mpi
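The two module steps above can be collected into a single setup script, shown here as a sketch. The paths are the ones printed by module avail; the guard around module is an added precaution for shells where the module command is not initialized.

```shell
#!/bin/sh
# Sketch: one-shot environment setup for the GROMACS GPU example.
export MODULEPATH=/opt/ehpcmodulefiles/
if command -v module >/dev/null 2>&1; then
    module load openmpi        # MPI library GROMACS depends on
    module load gromacs-gpu    # GPU build of GROMACS
fi
echo "MODULEPATH=$MODULEPATH"
```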
  • Download the water example.

Here, we assume that the current working directory is the current user's $HOME directory.

    $ pwd
    /home/<current_user_name>
    $ wget
    $ tar xzvf water_GMX50_bare.tar.gz
  • Submit the PBS job to run the water example.

    • High-configuration compute node (32 or more CPU cores, two GPU cards) PBS job script

      $ cat > gromacs_single_node.pbs
      #!/bin/sh
      #PBS -l ncpus=32,mem=4gb
      #PBS -l walltime=00:20:00
      #PBS -o gromacs_gpu_pbs.log
      #PBS -j oe
      cd $HOME/water-cut1.0_GMX50_bare/1536
      /opt/gromacs-gpu/2016.3/bin/gmx_mpi grompp -f pme.mdp -c conf.gro -p -o topol_pme.tpr
      /opt/openmpi/1.10.7/bin/mpirun -np 4 /opt/gromacs-gpu/2016.3/bin/gmx_mpi mdrun -ntomp 8 -resethway -noconfout -nsteps 8000 -v -pin on -nb gpu -gpu_id 0011 -s topol_pme.tpr
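In the high-configuration script, the mpirun line starts 4 MPI ranks with 8 OpenMP threads each, which exactly fills the 32 cores requested by ncpus=32 (and -gpu_id 0011 maps the first two ranks to GPU 0 and the other two to GPU 1). A quick sketch of that bookkeeping, with the values taken from the script above:

```shell
#!/bin/sh
# Sketch: verify MPI ranks x OpenMP threads matches the cores requested
# in the PBS resource line, so no core is oversubscribed or left idle.
NP=4        # mpirun -np
NTOMP=8     # gmx_mpi mdrun -ntomp
NCPUS=32    # #PBS -l ncpus
if [ $((NP * NTOMP)) -eq "$NCPUS" ]; then
    echo "OK: ${NP} ranks x ${NTOMP} threads = ${NCPUS} cores"
else
    echo "mismatch: $((NP * NTOMP)) threads for ${NCPUS} cores"
fi
```

The same arithmetic explains the low-configuration script: 1 rank x 4 threads for ncpus=4.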
    • Low-configuration compute node PBS job script

      $ cat > gromacs_single_node.pbs
      #!/bin/sh
      #PBS -l ncpus=4,mem=4gb
      #PBS -l walltime=00:20:00
      #PBS -o gromacs_gpu_pbs.log
      #PBS -j oe
      cd $HOME/water-cut1.0_GMX50_bare/1536
      /opt/gromacs-gpu/2016.3/bin/gmx_mpi grompp -f pme.mdp -c conf.gro -p -o topol_pme.tpr
      /opt/openmpi/1.10.7/bin/mpirun -np 1 /opt/gromacs-gpu/2016.3/bin/gmx_mpi mdrun -ntomp 4 -resethway -noconfout -nsteps 8000 -v -pin on -nb gpu -s topol_pme.tpr
  • Submit the job using the PBS job script.

    $ qsub gromacs_single_node.pbs
    1.iZ2zedptfv8e8dc9c2zt0tZ
    $ qstat
                                                                Req'd  Req'd   Elap
    Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
    --------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
    1.iZ2zedptfv8e8 mingying workq    gromacs_si  20775   1   4    4gb 00:20 R 00:03
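In the qstat output, the "S" column (second to last) is the job state; R means the job is running. If you want to check the state from a script, a small awk sketch works; here it is fed a sample line mirroring the output above, but in practice you would pipe qstat itself.

```shell
#!/bin/sh
# Sketch: pull the job state ("S" column, second-to-last field) from a
# qstat job line. The sample line mirrors the qstat output above.
line="1.iZ2zedptfv8e8 mingying workq gromacs_si 20775 1 4 4gb 00:20 R 00:03"
state=$(echo "$line" | awk '{print $(NF-1)}')
echo "state=$state"    # R = running, Q = queued, E = exiting
```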