Elastic High Performance Computing: Overview

Last Updated: Jan 04, 2024

Business applications

Use LAMMPS to perform a manufacturing simulation

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics program. It provides potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. This topic describes how to run LAMMPS in an Elastic High Performance Computing (E-HPC) cluster to perform a manufacturing simulation based on the 3d Lennard-Jones melt model and visualize the simulation results.
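
A minimal end-to-end sketch of this workflow follows. It assumes a PBS-style scheduler (qsub), an MPI-enabled LAMMPS binary named lmp_mpi on the PATH, and a home directory shared by all compute nodes; the working directory, file names, and node sizing are placeholders.

# Sketch: submit a LAMMPS 3d Lennard-Jones melt job on an E-HPC cluster.
# Assumptions: PBS-style scheduler (qsub), MPI-enabled binary "lmp_mpi",
# and a shared home directory visible to all compute nodes.
import pathlib
import subprocess

workdir = pathlib.Path.home() / "lammps-melt"
workdir.mkdir(parents=True, exist_ok=True)

# Standard 3d Lennard-Jones melt input shipped with LAMMPS, plus a dump
# command so that the trajectory can be visualized after the run.
(workdir / "in.melt").write_text("""\
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 20 0 20 0 20
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 3.0 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    every 20 delay 0 check no
fix             1 all nve
dump            1 all atom 50 melt.lammpstrj
run             250
""")

# Example job script: 2 nodes x 4 MPI ranks each is only a placeholder sizing.
(workdir / "lammps.pbs").write_text("""\
#!/bin/bash
#PBS -N lammps-melt
#PBS -l select=2:ncpus=4:mpiprocs=4
cd $PBS_O_WORKDIR
mpirun -np 8 lmp_mpi -in in.melt
""")

subprocess.run(["qsub", "lammps.pbs"], cwd=workdir, check=True)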

Use Intel oneAPI to compile and run LAMMPS

E-HPC clusters integrate the Intel oneAPI toolkits. The toolkits can be used together with high-performance computing (HPC) software to quickly build applications that run across different architectures. This topic describes how to use Intel oneAPI to compile and run LAMMPS in an E-HPC cluster.
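
A build sketch under common assumptions: oneAPI installed under /opt/intel/oneapi, the LAMMPS sources under ~/lammps, and the classic make target intel_cpu_intelmpi that ships with LAMMPS. Paths and the parallel build width are placeholders.

# Sketch: compile LAMMPS with the Intel oneAPI compilers and Intel MPI.
# The oneAPI install path and the LAMMPS source path are assumptions.
import subprocess

build_cmd = """
source /opt/intel/oneapi/setvars.sh   # load the Intel compilers, MPI, and MKL
cd ~/lammps/src
make -j 8 intel_cpu_intelmpi          # builds the lmp_intel_cpu_intelmpi binary
"""
subprocess.run(["bash", "-lc", build_cmd], check=True)

# The binary can then be launched through Intel MPI, for example:
#   mpirun -np 8 ./lmp_intel_cpu_intelmpi -in in.melt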

Use GROMACS to perform a molecular dynamics simulation

GROningen MAchine for Chemical Simulations (GROMACS) is a versatile software package that performs molecular dynamics by simulating the Newtonian equations of motion for systems that contain up to millions of particles. GROMACS is primarily designed for biochemical molecules such as proteins, lipids, and nucleic acids that have many complex bonded interactions. This topic uses GROMACS as an example to describe how to perform a molecular dynamics simulation by using an E-HPC cluster.
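
A run sketch, assuming an MPI-enabled GROMACS build (gmx_mpi binary) and prepared inputs md.mdp, conf.gro, and topol.top; the file names and rank count are placeholders.

# Sketch: pre-process inputs and launch a GROMACS MD run on an E-HPC cluster.
import subprocess

# Combine run parameters, coordinates, and topology into a portable .tpr file.
subprocess.run(
    ["gmx_mpi", "grompp", "-f", "md.mdp", "-c", "conf.gro",
     "-p", "topol.top", "-o", "topol.tpr"],
    check=True,
)

# Run the MD engine across MPI ranks; 16 ranks is only an example sizing.
subprocess.run(
    ["mpirun", "-np", "16",
     "gmx_mpi", "mdrun", "-s", "topol.tpr", "-deffnm", "md"],
    check=True,
)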

Use WRF to perform high-performance computing

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system that is designed for both atmospheric research and operational forecasting applications. WRF uses a software architecture that supports parallel computation and system extensibility, and it serves a wide range of meteorological applications. This topic describes how to run WRF in an E-HPC cluster to perform a meteorological simulation.
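
A launch sketch, assuming WRF was compiled with MPI (dmpar), the WPS pre-processing steps have already produced the met_em input files, and the case directory shown is a placeholder, as are the rank counts.

# Sketch: generate initial/boundary conditions and run the WRF solver.
import subprocess

rundir = "/home/testuser/WRF/run"   # placeholder case directory

# real.exe builds the wrfinput/wrfbdy files from the WPS output, then wrf.exe
# integrates the model forward in time.
subprocess.run(["mpirun", "-np", "4", "./real.exe"], cwd=rundir, check=True)
subprocess.run(["mpirun", "-np", "32", "./wrf.exe"], cwd=rundir, check=True)

# For MPI builds, per-rank progress and errors are written to rsl.out.0000
# and rsl.error.0000 in the run directory.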

Use BWA, GATK, and SAMtools to perform gene computing

This topic describes how to run Burrows-Wheeler Aligner (BWA), Genome Analysis Toolkit (GATK), and SAMtools in an E-HPC cluster to perform gene sequencing analysis. In a typical workflow, you use BWA to build indexes and generate alignments, use SAMtools to sort the alignments, and then use GATK to mark duplicates, recalibrate base quality scores, and call variants.
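
A pipeline sketch of the steps named above, assuming bwa, samtools, and gatk (GATK4) are installed on the cluster; the reference, sample, and known-sites file names are placeholders.

# Sketch: alignment, sorting, duplicate marking, BQSR, and variant calling.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Index the reference for BWA and for GATK.
run("bwa index ref.fa")
run("samtools faidx ref.fa")
run("gatk CreateSequenceDictionary -R ref.fa")

# Align paired-end reads (with a read group so GATK accepts the BAM) and sort.
run(r"bwa mem -t 8 -R '@RG\tID:s1\tSM:s1\tPL:ILLUMINA' ref.fa "
    "sample_1.fq.gz sample_2.fq.gz > sample.sam")
run("samtools sort -@ 8 -o sample.sorted.bam sample.sam")
run("samtools index sample.sorted.bam")

# Mark duplicates, recalibrate base quality scores, and call variants.
run("gatk MarkDuplicates -I sample.sorted.bam -O sample.dedup.bam -M dup_metrics.txt")
run("gatk BaseRecalibrator -I sample.dedup.bam -R ref.fa "
    "--known-sites known_sites.vcf -O recal.table")
run("gatk ApplyBQSR -I sample.dedup.bam -R ref.fa "
    "--bqsr-recal-file recal.table -O sample.recal.bam")
run("gatk HaplotypeCaller -I sample.recal.bam -R ref.fa -O sample.vcf.gz")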

Use OpenFOAM to perform a hydrodynamics simulation

Open Source Field Operation and Manipulation (OpenFOAM) is a C++ toolbox for developing customized numerical solvers. It also includes pre-processing and post-processing utilities for solving continuum mechanics problems, including computational fluid dynamics (CFD). This topic describes how to run OpenFOAM in an E-HPC cluster to perform a hydrodynamics simulation.
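
A run sketch for the classic lid-driven cavity case. The OpenFOAM environment script path and the case directory are placeholders, and it is assumed that decomposeParDict is set up for four subdomains.

# Sketch: mesh, decompose, and solve an OpenFOAM case in parallel.
import subprocess

cmds = """
source /opt/openfoam/etc/bashrc      # load the OpenFOAM environment (path varies)
cd ~/cavity                          # placeholder case directory
blockMesh                            # generate the mesh
decomposePar                         # split the domain for parallel execution
mpirun -np 4 icoFoam -parallel       # run the incompressible laminar solver
reconstructPar                       # merge the results for post-processing
"""
subprocess.run(["bash", "-lc", cmds], check=True)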

Use AutoDock Vina to screen potential drugs

Molecular docking is a powerful approach for virtual drug screening. AutoDock Vina is one of the fastest and most widely used open-source molecular docking engines. This topic uses AutoDock Vina as an example to describe how to screen potential drugs by using an E-HPC cluster.
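
A screening sketch that docks every ligand in a directory against one receptor, assuming the vina binary is on the PATH; the receptor file, ligand library, and search-box coordinates are placeholders.

# Sketch: dock a library of ligands against a fixed receptor with AutoDock Vina.
import glob
import subprocess

for ligand in glob.glob("ligands/*.pdbqt"):
    out = ligand.replace(".pdbqt", "_out.pdbqt")
    subprocess.run(
        ["vina",
         "--receptor", "receptor.pdbqt",
         "--ligand", ligand,
         # Placeholder search box; take real values from the binding site.
         "--center_x", "10", "--center_y", "12", "--center_z", "15",
         "--size_x", "20", "--size_y", "20", "--size_z", "20",
         "--exhaustiveness", "8",
         "--out", out],
        check=True,
    )
# On a cluster, each ligand (or batch of ligands) is typically submitted as a
# separate scheduler job so the library is screened in parallel.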

Use Schrodinger to calculate molecular structure

Schrodinger provides a physics-based computational platform that integrates differentiated solutions for predictive modeling and data analysis to facilitate the exploration of chemical space. The platform is used for drug discovery and for materials science in fields such as aerospace, energy, semiconductors, and electronic displays. This topic uses Schrodinger as an example to describe how to calculate molecular structures by using an E-HPC cluster.

Cluster performance testing

Use HPL to test the FLOPS of an E-HPC cluster

High-Performance Linpack (HPL) is a benchmark that is used to test the floating-point operations per second (FLOPS) of high-performance computing clusters. HPL evaluates the floating-point computing power of a cluster by solving a dense system of N linear equations in N unknowns by using Gaussian elimination with partial pivoting. This topic describes how to use HPL to test the FLOPS of an E-HPC cluster.
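
A small sketch that relates the HPL result to the theoretical peak of a cluster. The node count, core count, frequency, FLOPs per cycle, and the example Rmax value are placeholders to be replaced with the figures of your instance type and your HPL output.

# Sketch: compute the theoretical peak (Rpeak) and the HPL efficiency.
nodes = 4
cores_per_node = 32
freq_ghz = 2.5
flops_per_cycle = 32      # e.g. double precision with two AVX-512 FMA units

# Theoretical peak performance in GFLOPS.
rpeak = nodes * cores_per_node * freq_ghz * flops_per_cycle
print(f"Rpeak = {rpeak:.0f} GFLOPS")

# HPL reports Rmax; the benchmark derives it from the problem size N and the
# wall time T as (2/3 * N**3 + 2 * N**2) / T / 1e9 GFLOPS.
rmax = 9500.0             # placeholder value read from the HPL output
print(f"Efficiency = {rmax / rpeak:.1%}")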

Use IMB and an MPI library to test the communication performance of an E-HPC cluster

Intel MPI Benchmarks (IMB) is a software package that measures the performance of point-to-point and global communication operations in an HPC cluster for various message sizes. Message Passing Interface (MPI) is a standardized and portable message-passing standard for parallel computing that supports multiple programming languages and provides high performance, concurrency, portability, and scalability. This topic describes how to use IMB and an MPI library to test the communication performance of an E-HPC cluster.
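
A sketch of a two-node latency and bandwidth test, assuming Intel MPI is loaded, the IMB-MPI1 binary is on the PATH, and a hostfile that lists two compute nodes exists in the working directory.

# Sketch: run the IMB PingPong benchmark across two nodes.
import subprocess

subprocess.run(
    ["mpirun",
     "-np", "2",          # two ranks in total
     "-ppn", "1",         # one rank per node, so messages cross the network
     "-f", "hostfile",    # file listing the two compute nodes
     "IMB-MPI1", "PingPong"],
    check=True,
)
# The output reports latency and bandwidth for a range of message sizes, which
# indicates the point-to-point communication performance of the cluster network.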

Test the performance of an SCC cluster

Instances in the Super Computing Cluster (SCC) instance family have no virtualization overhead and are ideal for applications that require high parallelization, high bandwidth, and low latency, such as HPC, artificial intelligence (AI), and machine learning workloads. This topic describes how to create an SCC-based cluster and test its performance.

Hybrid cloud cluster

Deploy a hybrid cloud cluster in master mode

You can deploy a hybrid cloud cluster in master mode. In this mode, the Open Grid Scheduler (SGE) scheduler, management nodes, and compute nodes are deployed on the cloud, and the on-premises cluster provides additional compute nodes.

Deploy a hybrid cloud cluster in proxy mode

You can deploy a hybrid cloud cluster in proxy mode. In this mode, the SGE scheduler and compute nodes are deployed on the cloud, and the existing on-premises cluster retains its management nodes and compute nodes.

Cluster configuration

Use scheduler plug-ins

E-HPC provides scheduler plug-ins in addition to mainstream schedulers. If the types or versions of the existing schedulers do not meet your business requirements, you can use a scheduler plug-in to build a custom scheduler, and then connect the scheduler to the E-HPC console.

Disable HT for compute nodes

Each compute node in an E-HPC cluster is an Elastic Compute Service (ECS) instance. By default, Hyper-Threading (HT) is enabled for each ECS instance. In some HPC scenarios, you can disable HT to improve the performance of instances.
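
A quick verification sketch: after HT is disabled, lscpu on a compute node should report one thread per core (the lscpu field name is standard on Linux; running the check on every node is left to the reader).

# Sketch: verify whether Hyper-Threading is active on a compute node.
import subprocess

out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if line.startswith("Thread(s) per core"):
        threads_per_core = int(line.split(":")[1])
        print("HT enabled" if threads_per_core > 1 else "HT disabled")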

Use a post-processing script to mount ossfs on an E-HPC cluster

After a post-processing script is configured for an E-HPC cluster, the script is automatically run on nodes that are later added to the cluster, which meets the pre-deployment requirements of compute nodes. You can use a post-processing script to perform custom operations on these nodes, for example, to mount an Object Storage Service (OSS) bucket by using ossfs or to deploy a software environment.
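
A sketch of what such a post-processing script might do, namely mount an OSS bucket with ossfs. The bucket name, AccessKey pair, endpoint, and mount point are placeholders, and ossfs must already be installed in the node image.

# Sketch: mount an OSS bucket on a newly added compute node by using ossfs.
import os
import pathlib
import subprocess

bucket = "my-ehpc-data"
endpoint = "http://oss-cn-hangzhou-internal.aliyuncs.com"   # placeholder region
mount_point = "/mnt/oss"

# ossfs reads credentials from /etc/passwd-ossfs, one "bucket:AK_ID:AK_SECRET"
# entry per line; the file must not be world-readable.
cred = pathlib.Path("/etc/passwd-ossfs")
cred.write_text(f"{bucket}:<AccessKeyId>:<AccessKeySecret>\n")
cred.chmod(0o640)

os.makedirs(mount_point, exist_ok=True)
subprocess.run(["ossfs", bucket, mount_point, f"-ourl={endpoint}"], check=True)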

Configure auto scaling

If you need to submit jobs at any time, use an E-HPC cluster to perform large-scale computing for several hours, and then release the nodes, you can configure different auto scaling policies for different job types.

Connect an AD domain to an E-HPC cluster

You can connect an Active Directory (AD) domain to the LDAP service of an E-HPC cluster to reduce the O&M cost of managing domain accounts. This topic describes how to set up the connection.