
Use GROMACS to perform high-performance computing

Last Updated: Sep 03, 2021

This topic uses GROMACS as an example to show how to perform high-performance computing by using an Elastic High Performance Computing (E-HPC) cluster.

Background information

GROningen MAchine for Chemical Simulations (GROMACS) is a full-featured software package for molecular dynamics. It simulates the Newtonian equations of motion for systems that contain up to millions of particles.

GROMACS is primarily designed for biochemical molecules such as proteins, lipids, and nucleic acids that have many complex bonded interactions. Because it is also fast at computing the non-bonded interactions that typically dominate simulations, many researchers in different industries use GROMACS to study non-biological systems such as polymers.

GROMACS supports the common algorithms used in molecular dynamics (MD) and can use graphics processing units (GPUs) to accelerate critical computing tasks. For more information, visit the GROMACS official website.
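
For orientation, a typical GROMACS run has two steps: preprocess the input files into a portable binary run file (.tpr) with gmx grompp, and then run the simulation with gmx mdrun. The following is a minimal, generic sketch; the file names are placeholders, and the cluster-specific commands used in this example appear in the job script later in this topic.

    # Preprocess: combine run parameters (.mdp), coordinates (.gro), and topology (.top)
    # into a portable binary run input file (.tpr).
    gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

    # Run the simulation. "-nb gpu" offloads non-bonded interactions to a GPU, and
    # "-ntomp" sets the number of OpenMP threads per MPI process.
    gmx mdrun -s topol.tpr -nb gpu -ntomp 8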


Submit a job

  1. Log on to the E-HPC console.

  2. Create a cluster named gromacs-test.

    For more information, see Create a cluster. Set the following parameters:

    • Other Software: Install gromacs-gpu 2018.1, openmpi 3.0.0, cuda-toolkit 9.0, and vmd 1.9.3.

    • Compute Node: Select a GPU instance, for example, ecs.gn5-c8g1.2xlarge.

    • VNC: Turn on the VNC switch so that you can remotely log on to the E-HPC cloud desktop or application from the E-HPC console.

  3. Create a sudo user named gmx.test.

    For more information, see Create a user.

  4. Download and decompress the file of Example 2.

    1. On the Cluster page of the E-HPC console, find gromacs-test. Click Connect.

    2. In the Connect panel, specify a username, password, and port number for gromacs-test. Then, click Connect via SSH.

    3. Run the following command to download and decompress the file of Example 2:

      cd /home/gmx.test
      # Download the sample package.
      wget https://public-ehpc-package.oss-cn-hangzhou.aliyuncs.com/water_GMX50_bare.tar.gz
      # Decompress the package.
      tar xzvf water_GMX50_bare.tar.gz
      # Grant the gmx.test user and the users group ownership of the extracted directory.
      chown -R gmx.test water-cut1.0_GMX50_bare
      chgrp -R users water-cut1.0_GMX50_bare
  5. On the Cluster page, find gromacs-test, and click Job.

  6. On the Job page, click Create Job.

  7. On the Create Job page, choose Create File > Template > pbs demo.

  8. Configure the job file as shown in the following sample. Then, click OK to submit the job.

    The following sample shows how to configure the job file:

    Note

    In this example, a job is submitted by the user named gmx.test. This job is run on compute9, a compute node that contains eight CPU cores and one P100 GPU card. You can modify the cluster configurations based on the actual scenario.

    #!/bin/sh
    #PBS -j oe
    #PBS -l select=1:ncpus=8:mpiprocs=4
    #PBS -q workq
    
    export MODULEPATH=/opt/ehpcmodulefiles/   # The environment variables on which the module command depends.
    module load gromacs-gpu/2018.1
    module load openmpi/3.0.0
    module load cuda-toolkit/9.0
    export OMP_NUM_THREADS=1
    
    cd /home/gmx.test/water-cut1.0_GMX50_bare/0096
    /opt/gromacs-gpu/2018.1/bin/gmx_mpi grompp -f pme.mdp -c conf.gro -p topol.top -o topol_pme.tpr   # Preprocess the job file to generate a TPR input file.
    
    mpirun -np 1 -host compute9 /opt/gromacs-gpu/2018.1/bin/gmx_mpi mdrun -ntomp 8 -nsteps 400000 -pin on -nb gpu -s topol_pme.tpr   # In this case, -ntomp specifies the number of OpenMP threads associated with each process, and -nsteps specifies the number of iteration steps in the simulation.
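
    You can also submit the same job from the command line instead of the console. The following is a minimal sketch; it assumes that you save the preceding script on the logon node as a file named job.pbs (an assumed name) and that the PBS-compatible scheduler commands of the cluster are available there:

    # Submit the job script to the scheduler. The command prints the job ID.
    qsub job.pbs

    # Check the job status: "Q" means queued, "R" means running.
    qstat

    # Because of "#PBS -j oe", stdout and stderr are merged into a single file named
    # <job name>.o<job ID> in the directory from which the job was submitted.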

View the computing performance and the results of the job

  1. In the left-side navigation pane, choose Job and Performance Management > Job.

  2. On the Job page, find the job and click Details on the right side to view the details of the job.

  3. View the computing performance of the job. You can also check the performance summary in the GROMACS log file, as shown in the sketch after this procedure.

    1. In the left-side navigation pane of the E-HPC console, choose Job and Performance Management > E-HPC Tune.

    2. Find gromacs-test and click Node in the Operation column.

    3. Select a job and a node. Complete the metric configuration to view the node performance.

    4. Click the ProcView tab to view details about the top five processes with the highest CPU utilization at the current time.

    5. Click the node process that you want to profile. Set a duration and frequency for the profiling, start real-time profiling for the GROMACS job, and then obtain the flame graph of hot functions.

  4. Use VNC to view the result of the job.

    1. On the Cluster page, find gromacs-test. Choose More > VNC.

    2. In the cloud desktop of E-HPC, click Connect.

    3. In the Input Password dialog box, enter the password and click OK.

    4. In the Virtualization Service dialog box of the cloud desktop, choose Application > System Tools > Terminal.

    5. In the terminal window of the cloud desktop, run /opt/vmd/1.9.3/vmd to open Visual Molecular Dynamics (VMD).

    6. Load the molecular structure and trajectory files into VMD to view the simulation results. For a command-line sketch of this step, see the example after this procedure.

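To cross-check the metrics shown in E-HPC Tune, you can also read the performance summary that GROMACS appends to the end of the mdrun log file, which reports the simulation throughput in ns/day. This is a minimal sketch, assuming the default log file name md.log in the job's working directory:

    cd /home/gmx.test/water-cut1.0_GMX50_bare/0096
    # Show the timing summary that mdrun writes at the end of its log file.
    tail -n 30 md.log
    # The "Performance:" line reports the throughput in ns/day and the cost in hour/ns.
    grep -B 1 "^Performance:" md.log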
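
As an alternative to loading the files through the VMD menus in step 6, you can pass them to VMD when you start it from the terminal. This is a sketch only: confout.gro is the default final-structure file written by mdrun, but whether a trajectory file (for example, traj.trr or traj_comp.xtc) is written, and under which name, depends on the output settings in pme.mdp, so adjust the file names to match what the run produced.

    cd /home/gmx.test/water-cut1.0_GMX50_bare/0096
    # "-f" loads the listed files into the same VMD molecule:
    # the final structure first, then the trajectory frames.
    /opt/vmd/1.9.3/vmd -f confout.gro traj_comp.xtc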