Elastic High Performance Computing: Use Amber for molecular dynamics simulation

Last Updated: Apr 29, 2025

This topic uses Assisted Model Building with Energy Refinement (Amber) as an example to describe how to simulate molecular dynamics in an Elastic High Performance Computing (E-HPC) cluster.

Background information

Amber is an open source software system that is widely used to simulate molecular dynamics in biochemistry. Amber consists of various collaborative programs that simulate the dynamics of biological macromolecules such as proteins, nucleic acids, and sugars. Powered by a core dynamics engine written in Fortran 90, Amber supports multi-CPU parallel computing and GPU acceleration, and is suitable for heavy molecular dynamics simulation workloads.

Preparations

Create an Elastic Compute Service (ECS) instance equipped with NVIDIA GPU cards, such as an instance type in the gn7i GPU-accelerated compute-optimized instance family. For more information, see Create an instance on the Custom Launch tab and Elastic GPU Service instance families (gn, vgn, and sgn series).

The following items list the type and OS of the ECS instance used in this topic:

  • Instance type: ecs.gn7i-c8g1.2xlarge

  • OS: Alibaba Cloud Linux 3

Note

We recommend that you deploy the instance in the same region as the cluster to simplify subsequent operations.

Step 1: Install and configure Amber

  1. Connect to the ECS instance. For more information, see Connect to a Linux instance by using a password or key.

  2. Run the following command to install the necessary Object Storage Service (OSS) tool:

    sudo yum install -y unzip
    sudo -v ; curl https://gosspublic.alicdn.com/ossutil/install.sh | sudo bash
    ossutil

    Run the following command to modify the .ossutilconfig file. You can configure the OSS endpoint, AccessKey ID, and AccessKey secret in the file. For more information, see Configure ossutil.

    vi ~/.ossutilconfig
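
    The following is a minimal example of the .ossutilconfig file content. The endpoint and the AccessKey values are placeholders that you must replace with your own, and the exact fields can vary with your ossutil version:

    [Credentials]
    language=EN
    endpoint=oss-cn-hangzhou.aliyuncs.com
    accessKeyID=<yourAccessKeyID>
    accessKeySecret=<yourAccessKeySecret>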
  3. Run the following command to install the required development tools and libraries:

    sudo yum install -y gcc gcc-c++ make automake tcsh gcc-gfortran which flex bison patch bc libXt-devel libXext-devel perl perl-ExtUtils-MakeMaker util-linux wget bzip2 bzip2-devel zlib-devel tar
  4. Run the following commands to install the unrar tool:

    sudo wget https://www.rarlab.com/rar/rarlinux-x64-612.tar.gz --no-check-certificate
    sudo tar -zxvf rarlinux-x64-612.tar.gz
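
    The archive is typically extracted into a rar directory that contains the rar and unrar executables. If you want unrar to be available in your PATH, you can copy the executables, for example:

    sudo cp rar/rar rar/unrar /usr/local/bin/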
  5. Run the following command to install CMake:

    cd /usr/local/
    sudo wget https://github.com/Kitware/CMake/releases/download/v3.18.1/cmake-3.18.1-Linux-x86_64.sh
    sudo chmod +x cmake-3.18.1-Linux-x86_64.sh
    sudo ./cmake-3.18.1-Linux-x86_64.sh
    export PATH=$PATH:/usr/local/cmake-3.18.1-Linux-x86_64/bin
    sudo yum install -y gcc gcc-c++ make automake
  6. Upgrade GCC.

    1. Run the following command to install the CentOS Software Collections (SCL) repository released by Red Hat:

      sudo yum install centos-release-scl-rh -y
    2. Run the following command to modify the CentOS-SCLo-scl-rh.repo file:

      sudo vi /etc/yum.repos.d/CentOS-SCLo-scl-rh.repo

      Replace the file content with the following code:

      # CentOS-SCLo-rh.repo
      #
      # Please see http://wiki.centos.org/SpecialInterestGroup/SCLo for more
      # information
      
      [centos-sclo-rh]
      name=CentOS-7 - SCLo rh
      baseurl=http://mirrors.cloud.aliyuncs.com/centos/7/sclo/$basearch/rh/
      gpgcheck=0
      enabled=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-SCLo
      
      [centos-sclo-rh-testing]
      name=CentOS-7 - SCLo rh Testing
      baseurl=http://mirrors.cloud.aliyuncs.com/centos/7/sclo/$basearch/rh/
      gpgcheck=0
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-SCLo
      
      [centos-sclo-rh-source]
      name=CentOS-7 - SCLo rh Sources
      baseurl=http://vault.centos.org/centos/7/sclo/Source/rh/
      gpgcheck=1
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-SCLo
      
      [centos-sclo-rh-debuginfo]
      name=CentOS-7 - SCLo rh Debuginfo
      baseurl=http://debuginfo.centos.org/centos/7/sclo/$basearch/
      gpgcheck=1
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-SCLo
    3. Run the following command to install Developer Toolset 7:

      sudo yum install devtoolset-7 -y
      Note

      Select an appropriate version of Developer Toolset for the best environment compatibility and optimal performance:

      • Devtoolset-3 is designed for GCC 4.x.x and used in early project stages.

      • Devtoolset-4 is integrated with GCC 5.x.x to facilitate software stack development in mid-project stages.

      • Devtoolset-6 matches GCC 6.x.x and boasts more modern language support.

      • Devtoolset-7 is designed for GCC 7.x.x and incorporates state-of-the-art compilation techniques and optimization policies.

    4. Run the following command to configure the environment variable:

      source /opt/rh/devtoolset-7/enable
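
      After Developer Toolset 7 is enabled, you can optionally confirm that the newer compiler is active in the current shell. For example, the following command should report a GCC 7.x version:

      gcc --version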
  7. Install the NVIDIA GPU driver and SDK.

    Important

    Because the NVIDIA GPU is tightly coupled with the CUDA software platform, we recommend that you separately install the NVIDIA GPU driver and deploy the CUDA package to ensure optimal performance and reduce compatibility issues.

    1. Run the following command to install the NVIDIA driver. For more information, see Install CUDA.

      sudo wget https://us.download.nvidia.com/tesla/418.226.00/nvidia-driver-local-repo-rhel7-418.226.00-1.0-1.x86_64.rpm
      sudo rpm -i nvidia-driver-local-repo-rhel7-418.226.00-1.0-1.x86_64.rpm
      sudo yum clean all
      sudo yum install cuda-drivers
      sudo reboot
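
      After the instance restarts, you can optionally check that the driver is loaded, for example:

      nvidia-smi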
    2. Run the following commands to install the CUDA Toolkit (NVIDIA SDK):

      sudo wget https://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run
      sudo chmod +x cuda_10.1.243_418.87.00_linux.run
      sudo ./cuda_10.1.243_418.87.00_linux.run --toolkit --samples --silent
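
      You can optionally verify the CUDA Toolkit installation. Assuming the default installation path, the following command reports the CUDA compiler version:

      /usr/local/cuda/bin/nvcc --version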
  8. Compile Open MPI.

    1. Run the following command to download the Open MPI 3.0.0 source code package:

      curl -O -L https://download.open-mpi.org/release/open-mpi/v3.0/openmpi-3.0.0.tar.gz
    2. Run the following commands one by one to configure OpenMPI to support CUDA:

      export PATH=/usr/local/cuda/bin/:$PATH
      sudo tar zxvf openmpi-3.0.0.tar.gz
      cd openmpi-3.0.0/
      sudo ./configure --prefix=/opt/openmpi/ --with-cuda=/usr/local/cuda/
    3. Run the following command to compile Open MPI:

      sudo make -j 8 all
      sudo make -j 8 install
    4. Run the following command to configure the environment variable:

      export PATH=/usr/local/cuda/bin/:/opt/openmpi/bin:$PATH
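
      You can optionally confirm that Open MPI was built with CUDA support. The following check is one common way to do this, and the value in the output should be true:

      ompi_info --parsable --all | grep mpi_built_with_cuda_support:value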
  9. Configure Amber.

    1. Go to the Amber official website, download the AmberTools24.tar.bz2 and Amber24.tar.bz2 files, and decompress them in the /opt/ directory.
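
      For example, if you upload the two archives to the /opt/ directory, you can run the following commands to decompress them. Both archives typically extract into the amber24_src directory:

      cd /opt/
      sudo tar -xjf AmberTools24.tar.bz2
      sudo tar -xjf Amber24.tar.bz2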

    2. Modify the Amber compilation parameters in the run_cmake script:

      cd /opt/amber24_src/build/
      sudo vim run_cmake

      In the cmake command in the script, set -DMPI=TRUE to enable Open MPI support and -DCUDA=TRUE to enable CUDA support.
    3. Run the following commands in sequence to compile and install Amber:

      sudo ./run_cmake
      sudo make install
    4. Run the following commands in sequence to test Amber:

      source /opt/amber24/amber.sh
      make test.serial
  10. Package the compiled OpenMPI and Amber software.

    1. Create a directory.

      cd
      mkdir amber-openmpi-package
      cd amber-openmpi-package
    2. Run the cp command to copy the compiled OpenMPI and Amber executables to the amber-openmpi-package directory.
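
      For example, assuming the installation paths used earlier in this topic (/opt/openmpi for Open MPI and /opt/amber24 for Amber), you can run the following commands:

      cp -r /opt/openmpi ~/amber-openmpi-package/
      cp -r /opt/amber24 ~/amber-openmpi-package/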

    3. Run the following commands to compress the directory into a package:

      cd
      tar -czf amber-openmpi-package.tar.gz amber-openmpi-package/
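
      If you want to transfer the package through OSS, you can upload it by using the ossutil tool that you configured earlier. The bucket name in the following example is a placeholder:

      ossutil cp amber-openmpi-package.tar.gz oss://<your-bucket-name>/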

Step 2: Create a custom image

For more information, see Create a custom image from an instance.

Step 3: Create an E-HPC cluster by using a custom image

  1. Create an E-HPC cluster. For more information, see Create a standard cluster.

    In this example, the following configurations are used for the cluster:

    • Series: Standard Edition

    • Deployment Mode: Public cloud cluster

    • Cluster Type: SLURM

    • Nodes: One management node and two compute nodes. The following items describe the nodes:

      • Management node: ecs.c7.xlarge, with 4 vCPUs and 32 GiB of memory

        Note

        Adjust the instance specifications of the management node based on your business requirements.

      • Compute node: ecs.gn6i-c24g1.12xlarge, with 48 vCPUs and 186 GiB of memory

    • Image: The custom image created in this topic, for the management node, logon node, and compute nodes

  2. Create a cluster user. For more information, see Manage users.

    The cluster user is used to log on to the cluster, compile software, and submit jobs. In this example, the following configurations are used to create the user:

    • Username: testuser

    • User group: ordinary permissions

  3. (Optional) Enable auto scaling for the cluster. For more information, see Configure auto scaling of nodes.

    Note

    You can use the auto scaling feature to automatically add or remove compute nodes based on the real-time load of the cluster. This helps improve resource management efficiency.

Step 4: Create a visualized MOE job

  1. Log on to E-HPC Portal as testuser.

    For more information, see Log on to E-HPC Portal.

  2. Click the connection icon in the upper-right corner to connect to the cluster by using Workbench.

  3. Decompress the compiled OpenMPI and Amber software into the /opt directory for all nodes to access and use.
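
    If you uploaded the package to OSS, you can first download it to the node, assuming ossutil is available and configured on the node. The bucket name is a placeholder:

    ossutil cp oss://<your-bucket-name>/amber-openmpi-package.tar.gz .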

    sudo tar -xzf amber-openmpi-package.tar.gz -C /opt
  4. Install Molecular Operating Environment (MOE).

    Note

    MOE is a commercial software system that you need to purchase and install on your own according to the MOE official user guide.

  5. Configure environment variables.

    1. Run the following command to create the deploy_amber_openmpi.sh executable file:

      vim deploy_amber_openmpi.sh
    2. Copy the following code, paste it in the file, and save and close the file:

      #!/bin/bash
      
      # Configure environment variables
      export MOE=/opt/moe_2019.0102
      export LD_LIBRARY_PATH=$MOE/lib:$LD_LIBRARY_PATH
      export PATH=$MOE/bin-lnx64:/usr/local/cuda/bin/:/opt/openmpi/bin:$PATH
      export AMBERHOME=/opt/amber24
    3. Run the following commands one by one to load the environment variables. Source the script so that the exported variables take effect in the current shell:

      chmod +x deploy_amber_openmpi.sh
      source deploy_amber_openmpi.sh
  6. Create a visualized MOE job.

    1. In the top navigation bar of E-HPC Portal, click Task Management.

    2. In the upper part of the page, click moe.

    3. On the Create Job page, specify the parameters. The following items describe the parameters:

      • Session Name: The name of the session generated by running moe to access the cluster.

      • SpecNode: The node on which MOE runs.

        • If you select Queue, a node is automatically assigned from a queue.

        • If you select Node, only the localhost node is supported, which is the current node.

      • Cores per Node: The number of vCPUs that are used to run MOE.

      • Resolution: The resolution of the graphical interface.

      • Application Startup Command: Keep the default value moe.

    4. Click Submit.

  7. Go to the graphical interface.

    1. In the top navigation bar of E-HPC Portal, click Session Management.

    2. Find the MOE session by session name and click the session name.

    3. Perform operations on the graphical interface.

Step 5: Submit a computing job

Note

This section uses a test package test.rar and the job execution script run.sh as an example to show you how to submit a job. Replace them with your actual job files.

  1. Log on to E-HPC Portal and click the connection icon in the upper-right corner to connect to the cluster by using Workbench.

  2. Run the following command to decompress test.rar:

    unrar x test.rar
  3. Run the following commands to open the run.sh file and specify two vCPUs for each job:

    cd test/
    vim run.sh


  4. Run the following command to submit a job:

    ./run.sh -qsys slurm -submit

Step 6: View the job result

Run one of the following commands to query the job details:

  • The Slurm sacct command:

    sacct
  • The MOE command:

    ./run.sh -qsys slurm -status