Create an E-HPC cluster

Last Updated: Nov 06, 2017

Logon management

To access the E-HPC console, log on with your Alibaba Cloud account. If you do not have an account yet, click Free Account to sign up.

Click E-HPC > Clusters, select a region (such as US East 1), and click Create Cluster.

Before you begin, learn about regions and zones.

Note: When creating, managing, or using E-HPC clusters, do not use the ECS console to adjust individual cluster nodes unless absolutely necessary. Perform all operations on the E-HPC cluster management platform.

Hardware configuration

Zone

In the cluster creation window, select a zone, or use the default zone allocated by the system. To guarantee efficient network communication between E-HPC nodes, deploy all nodes in the same zone of the same region. For more information, see Regions and zones.
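If you prefer to check zone availability programmatically, the following minimal sketch lists the zones of a region with the Alibaba Cloud Python SDK (aliyun-python-sdk-core and aliyun-python-sdk-ecs). The credentials and region ID are placeholders.

    import json

    from aliyunsdkcore.client import AcsClient
    from aliyunsdkecs.request.v20140526.DescribeZonesRequest import DescribeZonesRequest

    # Placeholders: substitute your own AccessKey pair and region ID.
    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    resp = json.loads(client.do_action_with_exception(DescribeZonesRequest()))
    for zone in resp['Zones']['Zone']:
        print(zone['ZoneId'])   # pick one zone for every E-HPC node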

VPCs, VSwitches, and security groups

Next, select the VPC to use. A VPC is an independent, isolated network environment. To use a VPC, you must first create it along with its supporting VSwitch. For more information about VPCs, see VPC overview. Click Create VPC or Create Subnet (VSwitch); VPCs and VSwitches can be created together on the same interface. For more information, see Creating a VPC and VSwitch (or click Tutorial). Follow the instructions to create the required VPC and VSwitch.
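If you script your network setup instead of using the console, a minimal sketch with the VPC Python SDK (aliyun-python-sdk-vpc) looks like the following. The CIDR blocks, zone, names, and credentials are placeholder assumptions.

    import json

    from aliyunsdkcore.client import AcsClient
    from aliyunsdkvpc.request.v20160428.CreateVpcRequest import CreateVpcRequest
    from aliyunsdkvpc.request.v20160428.CreateVSwitchRequest import CreateVSwitchRequest

    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    # Create the VPC first.
    vpc_req = CreateVpcRequest()
    vpc_req.add_query_param('CidrBlock', '192.168.0.0/16')
    vpc_req.add_query_param('VpcName', 'ehpc-vpc')
    vpc = json.loads(client.do_action_with_exception(vpc_req))

    # Then create a VSwitch inside it, in the same zone you chose for the cluster.
    vsw_req = CreateVSwitchRequest()
    vsw_req.add_query_param('VpcId', vpc['VpcId'])
    vsw_req.add_query_param('ZoneId', 'us-east-1a')
    vsw_req.add_query_param('CidrBlock', '192.168.0.0/24')
    vsw = json.loads(client.do_action_with_exception(vsw_req))

    print(vpc['VpcId'], vsw['VSwitchId'])

A newly created VPC can take a few seconds to become available, so production code would retry the VSwitch call rather than assume immediate success.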

Note: When creating VPCs and VSwitches, you must select the same region and zone as the E-HPC cluster.

After creating the VPC, click Refresh, and then select the newly created VPC and VSwitch by name from the drop-down menus as the network components of the E-HPC cluster.

Next, choose whether to create a security group. Security groups set network access control for the cluster. If you have no special requirements, you can create a security group using the default rules provided by the E-HPC control system. If you want the new E-HPC cluster to communicate with existing clusters that already have security groups, select the relevant security groups by name from the drop-down list. For more information, see Security groups.
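If you plan to share one security group across several clusters, you can also create it yourself beforehand. This is a hedged sketch with the ECS Python SDK (aliyun-python-sdk-ecs); the VPC ID and group name are placeholders.

    import json

    from aliyunsdkcore.client import AcsClient
    from aliyunsdkecs.request.v20140526.CreateSecurityGroupRequest import CreateSecurityGroupRequest

    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    req = CreateSecurityGroupRequest()
    req.add_query_param('VpcId', '<vpc-id>')   # the cluster's VPC
    req.add_query_param('SecurityGroupName', 'ehpc-shared-sg')
    resp = json.loads(client.do_action_with_exception(req))
    print(resp['SecurityGroupId'])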

Billing method and high availability

Currently, the only supported billing method is Pay-As-You-Go: you are billed by the hour based on the duration of use, in a post-payment mode. The final price is shown at the lower right of the page.

High availability means that the cluster supports HA (High Availability) and eliminates single points of failure (SPOF). Both the cluster head node and the domain account management node support HA configurations. For example, the PBS cluster head node and the NIS domain account management node each have a master and a slave node; when the master node fails, the system automatically switches over to the slave node.

When HA is enabled, two additional control nodes (the slave nodes) are automatically added.

Node configuration

E-HPC clusters consist of the following nodes:

  • Control nodes, consisting of two independent nodes
    • Job scheduling node
    • Domain account management node
  • Computing node
  • Logon node

Generally, the job scheduling node only handles job scheduling, and the domain account management node only handles account information; neither participates in job computation. Therefore, in principle, you can select low-configuration enterprise-level instances (for example, an sn1ne instance with 4 CPU cores) for the control nodes while still guaranteeing availability. The hardware configuration of the computing nodes is the key factor that determines cluster performance. The logon node is generally configured as a development environment: it must provide all cluster users with the resources and testing environment needed for software development and debugging. We therefore recommend an instance with the same configuration as the computing nodes, or one with a higher memory ratio. For more information on each instance model, see Recommended configurations.

After enabling HA, the system automatically allocates four control node instances (master/slave scheduling nodes and master/slave domain account management nodes). If you disable HA, only two instances (no slave nodes) are allocated. If you select the GPU Series filter on the instance list, only GPU instance options are displayed for the Computing Node and Logon Node. You can specify the number of computing node instances to create. By default, only one logon node instance is created.

Shared storage

Next, create shared storage. All user data, user management data, and shared job data reside in shared storage, where every cluster node can access them. Currently, shared storage is provided by NAS; to use NAS, you also need a supporting mount point and remote directory. See NAS terminology.

Click Create NAS Instance or Create Mount Point, and then read the instructions in Create a file system and mount point (or click Tutorial) to create a NAS instance and mount point.
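For scripted setups, a rough sketch with the NAS Python SDK (aliyun-python-sdk-nas) follows. Parameter values such as the StorageType and the default access group name are assumptions; check the NAS API reference for your region before use.

    import json

    from aliyunsdkcore.client import AcsClient
    from aliyunsdknas.request.v20170626.CreateFileSystemRequest import CreateFileSystemRequest
    from aliyunsdknas.request.v20170626.CreateMountTargetRequest import CreateMountTargetRequest

    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    # Create an NFS file system.
    fs_req = CreateFileSystemRequest()
    fs_req.add_query_param('ProtocolType', 'NFS')
    fs_req.add_query_param('StorageType', 'Capacity')   # assumption; verify available types
    fs = json.loads(client.do_action_with_exception(fs_req))

    # Add a VPC mount point in the cluster's VPC and VSwitch.
    mt_req = CreateMountTargetRequest()
    mt_req.add_query_param('FileSystemId', fs['FileSystemId'])
    mt_req.add_query_param('NetworkType', 'Vpc')
    mt_req.add_query_param('VpcId', '<vpc-id>')          # same VPC as the cluster
    mt_req.add_query_param('VSwitchId', '<vswitch-id>')  # same VSwitch as the cluster
    mt_req.add_query_param('AccessGroupName', 'DEFAULT_VPC_GROUP_NAME')
    mt = json.loads(client.do_action_with_exception(mt_req))
    print(mt['MountTargetDomain'])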

Note: We recommend that you select the same region and zone as the E-HPC cluster when creating a NAS instance. You must select the same VPC and VSwitch as the E-HPC cluster when adding a NAS mount point.

Now, go back to the E-HPC cluster creation page, click NAS Instance and Mount Point, and select the newly created NAS instance and mount point by ID. Enter the Remote Directory. The final mount path is a combination of the mount point and the remote directory: mount point:/remote directory. For more information, see NAS mount convention.
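For example, with hypothetical values, a mount point of 0c1234abcd-efg.us-east-1.nas.aliyuncs.com combined with the remote directory /ehpc_data yields the mount path 0c1234abcd-efg.us-east-1.nas.aliyuncs.com:/ehpc_data.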

Note: NAS remote directories must be created in advance. If you have no special requirements, you can leave this field blank and use the NAS root directory.

Click Next to proceed to software configuration.

Software configuration

Operating system

If you have no special requirements, we recommend that you use the CentOS 7.x series.

Product version

The current E-HPC product version is 1.0.0.

Scheduler and domain account service

Because of the intrinsic functions and features of these software modules, running multiple schedulers or multiple domain account management services side by side leads to conflicts or even data corruption. Therefore, during cluster creation, do not select conflicting schedulers or account management services. We recommend that you use PBS + NIS.

Other software stacks

E-HPC provides various PaaS platform software, benchmarks, and applications for use in HPC. You can choose to preload these resources based on your actual needs. For instructions on how to use HPC application software, read the Best practices part of the product documentation. The list of application software in these documents may not be complete; the available software packages are shown on the interface during cluster creation.

Note: When you choose to preload HPC application software, you must also select the supporting software package indicated by the suffix of the application package name (such as mpich or openmpi). If you select software with the suffix “-gpu”, make sure that the computing nodes use GPU Series instances. Otherwise, cluster creation may fail or the software may not run properly.

Other basic settings

These settings include the cluster name and the password.

Note: Keep your passwords safe.

Finally, check the configuration list and click Confirm to create the E-HPC cluster.
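The console is the primary path, but if you script cluster creation, a hedged sketch with the E-HPC Python SDK (aliyun-python-sdk-ehpc) is shown below. The parameter names (OsTag, SchedulerType, AccountType, the EcsOrder.* keys, and so on) follow the E-HPC API reference; treat them as assumptions to verify, and all values as placeholders. It ties together the VPC, VSwitch, NAS, and scheduler choices made above.

    import json

    from aliyunsdkcore.client import AcsClient
    from aliyunsdkehpc.request.v20180412.CreateClusterRequest import CreateClusterRequest

    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    req = CreateClusterRequest()
    req.add_query_param('Name', 'my-ehpc-cluster')
    req.add_query_param('OsTag', 'CentOS_7.2_64')      # CentOS 7.x, as recommended
    req.add_query_param('SchedulerType', 'pbs')        # PBS + NIS, as recommended
    req.add_query_param('AccountType', 'nis')
    req.add_query_param('VpcId', '<vpc-id>')
    req.add_query_param('VSwitchId', '<vswitch-id>')
    req.add_query_param('VolumeId', '<nas-file-system-id>')
    req.add_query_param('VolumeMountpoint', '<mount-target-domain>')
    req.add_query_param('Password', '<root-password>')
    # Control, computing, and logon nodes (with HA, control nodes double to 4).
    req.add_query_param('EcsOrder.Manager.Count', '2')
    req.add_query_param('EcsOrder.Manager.InstanceType', 'ecs.sn1ne.xlarge')
    req.add_query_param('EcsOrder.Compute.Count', '4')
    req.add_query_param('EcsOrder.Compute.InstanceType', 'ecs.sn1ne.2xlarge')
    req.add_query_param('EcsOrder.Login.Count', '1')
    req.add_query_param('EcsOrder.Login.InstanceType', 'ecs.sn1ne.2xlarge')

    resp = json.loads(client.do_action_with_exception(req))
    print(resp['ClusterId'])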

Query creation status

After about 20 minutes, you can return to the E-HPC console's cluster overview page to view the status of the newly created cluster.

If all the nodes of the cluster are in the Normal status, the creation is completed. Now you can log on to the cluster and perform operations. For operation instructions, see Cluster use.
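If you created the cluster through the API, you can poll its status the same way. A small sketch follows; the status values ('creating', 'running', and so on) and the response layout are assumptions based on the DescribeCluster API reference, so verify them before relying on this.

    import json
    import time

    from aliyunsdkcore.client import AcsClient
    from aliyunsdkehpc.request.v20180412.DescribeClusterRequest import DescribeClusterRequest

    client = AcsClient('<access-key-id>', '<access-key-secret>', 'us-east-1')

    req = DescribeClusterRequest()
    req.add_query_param('ClusterId', '<cluster-id>')

    # Creation takes roughly 20 minutes, so poll at a relaxed interval.
    while True:
        info = json.loads(client.do_action_with_exception(req))['ClusterInfo']
        print('cluster status:', info['Status'])
        if info['Status'] != 'creating':
            break
        time.sleep(60)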
