Business applications
| Best practices | Description |
| --- | --- |
| Run LAMMPS in an E-HPC cluster | Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics program. It provides potentials for solid-state materials (metals and semiconductors), soft matter (biomolecules and polymers), and coarse-grained or mesoscopic systems. The topic describes how to run LAMMPS in an Elastic High Performance Computing (E-HPC) cluster to perform a manufacturing simulation based on the 3D Lennard-Jones melt model (see the Lennard-Jones sketch after this table) and how to visualize the simulation result. |
| Use Intel oneAPI to compile and run LAMMPS | E-HPC clusters integrate the Intel oneAPI toolkits, which can be used together with high-performance computing (HPC) software to quickly build applications that run across different architectures. The topic describes how to use Intel oneAPI to compile and run LAMMPS in an E-HPC cluster. |
| Run GROMACS in an E-HPC cluster | GROningen MAchine for Chemical Simulations (GROMACS) is a versatile software package for molecular dynamics that simulates the Newtonian equations of motion for systems with up to millions of particles. GROMACS is primarily designed for biochemical molecules, such as proteins, lipids, and nucleic acids, that have many complex bonded interactions. The topic uses GROMACS as an example to describe how to perform a molecular dynamics simulation in an E-HPC cluster. |
| Run WRF in an E-HPC cluster | Weather Research and Forecasting Model (WRF) is a next-generation mesoscale numerical weather prediction system that is designed for both atmospheric research and operational forecasting applications. WRF uses a software architecture that supports parallel computation and system extensibility, and it can be used in a wide range of meteorological applications. The topic describes how to run WRF in an E-HPC cluster for meteorological simulation. |
| Perform gene sequencing by using BWA, GATK, and SAMtools | The topic describes how to run Burrows-Wheeler Aligner (BWA), Genome Analysis Toolkit (GATK), and SAMtools in an E-HPC cluster to perform gene sequencing. In a typical workflow, you use BWA to build indexes and generate alignments, use SAMtools to sort the alignments, and then use GATK to remove duplicates, recalibrate base quality scores, and discover variants (see the alignment sketch after this table). |
| Run OpenFOAM in an E-HPC cluster | Open Source Field Operation and Manipulation (OpenFOAM) is a C++ toolbox for developing customized numerical solvers and pre- and post-processing utilities for continuum mechanics problems, including computational fluid dynamics (CFD). The topic describes how to run OpenFOAM in an E-HPC cluster to perform a hydrodynamics simulation. |
| Screen potential drugs by using AutoDock Vina | Molecular docking is a powerful approach for virtual drug screening, and AutoDock Vina is one of the fastest and most widely used open-source docking engines. The topic uses AutoDock Vina as an example to describe how to screen potential drugs by using an E-HPC cluster. |
| Calculate molecular structures by using Schrodinger | Schrodinger provides a physics-based computational platform that integrates differentiated solutions for predictive modeling and data analysis to facilitate the exploration of chemical space. The platform can be used for drug discovery and for materials science in fields such as aerospace, energy, semiconductors, and electronic displays. The topic uses Schrodinger as an example to describe how to calculate molecular structures by using an E-HPC cluster. |
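The 3D Lennard-Jones melt model mentioned in the LAMMPS row describes particles that interact through the Lennard-Jones pair potential V(r) = 4ε[(σ/r)^12 − (σ/r)^6]. As a minimal illustration (not part of the E-HPC workflow itself), the following Python sketch evaluates that potential in reduced units, the convention that LAMMPS calls `lj` units:

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon sets the well depth and sigma the zero-crossing distance;
    reduced units (epsilon = sigma = 1) match the LAMMPS `lj` convention.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

if __name__ == "__main__":
    # The potential minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
    r_min = 2 ** (1 / 6)
    print(f"V({r_min:.4f}) = {lennard_jones(r_min):.4f}")  # prints -1.0000
```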
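Similarly, the gene-sequencing row outlines a BWA-to-SAMtools pipeline. The sketch below drives the first steps of such a pipeline from Python via subprocess; the file names are hypothetical placeholders, and it assumes that bwa and samtools are already installed on the cluster nodes:

```python
import subprocess

REF = "reference.fa"  # hypothetical reference genome
READS = "sample.fq"   # hypothetical sequencing reads

# 1. Build BWA indexes for the reference genome.
subprocess.run(["bwa", "index", REF], check=True)

# 2. Align the reads against the reference with BWA-MEM.
with open("aligned.sam", "wb") as sam:
    subprocess.run(["bwa", "mem", REF, READS], stdout=sam, check=True)

# 3. Sort the alignments with SAMtools (recent SAMtools versions
#    accept SAM input directly and emit a sorted BAM file).
subprocess.run(["samtools", "sort", "-o", "sorted.bam", "aligned.sam"], check=True)
```

Duplicate removal, base quality score recalibration, and variant discovery would then follow with GATK, as the topic describes.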
Test and improve E-HPC cluster performance
| Best practices | Description |
| --- | --- |
| Use HPL to test the FLOPS of an E-HPC cluster | High-Performance Linpack (HPL) is a benchmark that measures the floating-point computing power of high-performance computing clusters in floating-point operations per second (FLOPS). The benchmark solves a dense system of linear equations of order N by using Gaussian elimination with partial pivoting. The topic describes how to use HPL to test the FLOPS of an E-HPC cluster (see the peak-FLOPS sketch after this table). |
| Use IMB and an MPI library to test the communication performance of an E-HPC cluster | Intel MPI Benchmarks (IMB) is a software suite that measures the performance of point-to-point and global communication operations in an HPC cluster for various message sizes. Message Passing Interface (MPI) is a standardized and portable message-passing standard for parallel computing that supports multiple programming languages and provides high performance, concurrency, portability, and scalability. The topic describes how to use IMB and an MPI library to test the communication performance of an E-HPC cluster (see the ping-pong sketch after this table). |
| Create and test the performance of an SCC-based cluster | Instances in the Super Computing Cluster (SCC) instance family have no virtualization overhead and are ideal for applications that require high parallelization, high bandwidth, and low latency, such as artificial intelligence (AI) and machine learning workloads. The topic describes how to create an SCC-based cluster and test its performance. |
| Improve cluster performance by disabling HT for compute nodes | Each compute node in an E-HPC cluster is an Elastic Compute Service (ECS) instance. By default, Hyper-Threading (HT) is enabled for each ECS instance. In some HPC scenarios, you can disable HT to improve the performance of instances. |
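HPL results are usually judged against the cluster's theoretical peak, Rpeak = nodes × cores per node × clock frequency × FLOPs per core per cycle. The Python sketch below performs that arithmetic; every number in it is a hypothetical placeholder, not a measurement from a real E-HPC cluster:

```python
def theoretical_peak_gflops(nodes, cores_per_node, ghz, flops_per_cycle):
    """Rpeak in GFLOPS = nodes * cores * clock (GHz) * FLOPs per core per cycle."""
    return nodes * cores_per_node * ghz * flops_per_cycle

# Hypothetical example: 4 nodes, 32 cores each, 2.5 GHz, and
# 32 double-precision FLOPs per cycle (e.g. AVX-512 with FMA).
rpeak = theoretical_peak_gflops(nodes=4, cores_per_node=32, ghz=2.5, flops_per_cycle=32)
rmax = 8200.0  # placeholder: the GFLOPS value reported by an HPL run
print(f"Rpeak = {rpeak:.0f} GFLOPS, efficiency = {rmax / rpeak:.1%}")
```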
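IMB's PingPong test measures point-to-point latency and bandwidth between two ranks. As a rough analogue (this assumes the mpi4py package is installed, and it is not the IMB tool itself), the sketch below times a repeated round-trip message exchange:

```python
# Run with: mpirun -np 2 python pingpong.py  (assumes mpi4py is installed)
import time

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1 << 20, dtype=np.uint8)  # 1 MiB message
reps = 100

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each repetition moves the message twice (one round trip).
    one_way = elapsed / (2 * reps)
    print(f"one-way time: {one_way * 1e6:.1f} us, "
          f"bandwidth: {buf.nbytes / one_way / 1e9:.2f} GB/s")
```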
Create and deploy a hybrid cloud cluster
| Best practices | Description |
| --- | --- |
| Deploy a master hybrid cloud cluster | You can deploy a hybrid cloud cluster in master mode: the Open Grid Scheduler (SGE) scheduler, management nodes, and compute nodes are deployed on the cloud, and an on-premises cluster provides additional compute nodes. |
| Deploy a proxy hybrid cloud cluster | You can deploy a hybrid cloud cluster in proxy mode: the SGE scheduler and compute nodes are deployed on the cloud, and an existing on-premises cluster retains its compute nodes and management nodes. |
Others
| Best practices | Description |
| --- | --- |
| Use a post-processing script to deploy scaled-out compute nodes | After a post-processing script is configured for an E-HPC cluster, the script is automatically executed on nodes that are subsequently added to the cluster, which meets the pre-deployment requirements of compute nodes. You can use a post-processing script to perform custom operations on new compute nodes, such as mounting an Object Storage Service (OSS) bucket by using ossfs or deploying a software environment (see the sketch after this table). |
| Connect an AD domain to an E-HPC cluster | You can connect an Active Directory (AD) domain to the LDAP service of an E-HPC cluster to reduce the O&M costs of domain accounts. The topic describes how to set up this connection. |
| Use a scheduler plug-in to build a custom scheduler | E-HPC provides scheduler plug-ins in addition to mainstream schedulers. If the types or versions of the existing schedulers do not meet your business requirements, you can use a scheduler plug-in to build a custom scheduler and then connect it to the E-HPC console. |
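A post-processing script is simply an executable that runs on each newly added node. As a hedged illustration of the ossfs example above (the bucket name, mount point, and endpoint are hypothetical placeholders, and ossfs must already be installed on the node image), a minimal script might look like this:

```python
#!/usr/bin/env python3
"""Hypothetical E-HPC post-processing script: mount an OSS bucket via ossfs.

All names below (bucket, mount point, endpoint) are placeholders; replace
them with your own values. Assumes ossfs is installed on the node image and
that credentials are present in /etc/passwd-ossfs in the usual
bucket:AccessKeyId:AccessKeySecret format.
"""
import pathlib
import subprocess

BUCKET = "my-ehpc-bucket"    # placeholder bucket name
MOUNT_POINT = "/mnt/oss"     # placeholder mount point
ENDPOINT = "http://oss-cn-hangzhou-internal.aliyuncs.com"  # placeholder endpoint

# Create the mount point, then mount the bucket:
# ossfs <bucket> <mountpoint> -ourl=<endpoint>
pathlib.Path(MOUNT_POINT).mkdir(parents=True, exist_ok=True)
subprocess.run(["ossfs", BUCKET, MOUNT_POINT, f"-ourl={ENDPOINT}"], check=True)
```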