
Use STREAM to test the memory bandwidth performance of an E-HPC cluster

Last Updated: Sep 06, 2021

This topic describes how to use STREAM to test the memory bandwidth performance of an Elastic High Performance Computing (E-HPC) cluster.

Background information

STREAM is a benchmark that measures sustainable memory bandwidth. It is also a general-purpose tool that you can use to evaluate the memory performance of servers. STREAM has good spatial locality and is friendly to the translation lookaside buffer (TLB) and the cache. It measures memory bandwidth with four vector kernels: Copy (c[i] = a[i]), Scale (b[i] = s*c[i]), Add (c[i] = a[i] + b[i]), and Triad (a[i] = b[i] + s*c[i]).

Procedure

  1. Log on to the E-HPC console.

  2. Create a cluster named STREAM.test.

    For more information, see Create a cluster. Set the following parameters:

    • Compute Node: Select an instance type that provides at least four vCPUs, for example, ecs.c7.xlarge.

    • Other Software: Install stream 2018.

      Note

      You can also install the preceding software in an existing cluster. For more information, see Install software.

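      After you connect to the cluster in a later step, you can run the following commands to confirm that the stream/2018 module is installed. This optional check uses the module path that the sample job script in this topic uses:

      export MODULEPATH=/opt/ehpcmodulefiles/    # The module path used by the sample job script in this topic.
      module avail stream                        # List the available STREAM modules, for example, stream/2018.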
  3. Create a sudo user named streamtest.

    For more information, see Create a user.

  4. Recompile STREAM to set the parameters of the software.

    1. On the Cluster page, find STREAM.test, and click Connect.

    2. In the Connect panel, enter the username and password of the root user and a port number. Then, click Connect via SSH.

    3. Run the following command to recompile STREAM:

      cd /opt/stream/2018/; gcc stream.c -O3 -fopenmp -DSTREAM_ARRAY_SIZE=1024*1024*1024 -DNTIMES=20 -mcmodel=medium -o stream.1g.20   # The -DSTREAM_ARRAY_SIZE parameter is used to specify the amount of data that STREAM handles at a time. The -DNTIMES parameter is used to specify the number of iterations.
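
      The three arrays that STREAM allocates must fit in the memory of the compute node. The following optional commands estimate the memory footprint of the preceding build. The calculation assumes double-precision (8-byte) array elements, which is the STREAM default:

      ARRAY_SIZE=$((1024*1024*1024))                                # Matches -DSTREAM_ARRAY_SIZE in the compile command.
      FOOTPRINT_GIB=$((ARRAY_SIZE * 8 * 3 / 1024 / 1024 / 1024))    # Three arrays of 8-byte elements.
      echo "Approximate STREAM memory footprint: ${FOOTPRINT_GIB} GiB"
      free -g                                                       # Compare the footprint with the available memory. If the footprint is too large, reduce -DSTREAM_ARRAY_SIZE and recompile.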
  5. Create a job script and submit the job.

    1. In the left-side navigation pane, click Job.

    2. Select STREAM.test from the Cluster drop-down list. Then, click Create Job.

    3. On the Create Job page, choose Edit Job File > Create File > Template > pbs demo.

    4. Configure the job file. Then, click OK to submit the job.

      The following sample script shows how to configure the job file:

      #!/bin/sh
      #PBS -j oe
      #PBS -l select=1:ncpus=4
      # In this example, one compute node that uses four vCPUs is selected for the test. Set the number of nodes and vCPUs based on your needs. 
      export MODULEPATH=/opt/ehpcmodulefiles/
      module load stream/2018
      echo "run at the beginning"
      OMP_NUM_THREADS=1 /opt/stream/2018/stream.1g.20 > stream-1-thread.log
      OMP_NUM_THREADS=2 /opt/stream/2018/stream.1g.20 > stream-2-thread.log
      OMP_NUM_THREADS=3 /opt/stream/2018/stream.1g.20 > stream-3-thread.log
      OMP_NUM_THREADS=4 /opt/stream/2018/stream.1g.20 > stream-4-thread.log
      
      #OMP_NUM_THREADS=<N> /opt/stream/2018/stream.1g.20 > stream-<N>-thread.log
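
      You can also save the preceding script as a file on the logon node and submit it from the command line by using PBS. The file name stream.pbs in the following example is an assumption:

      qsub stream.pbs    # Submit the job as the streamtest user. The script name stream.pbs is assumed.
      qstat              # Query the status of the job until the job is completed.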
  6. View the job result.

    1. On the Cluster page, find STREAM.test, and click Connect.

    2. In the Connect panel, specify a username, password, and port number. Then, click Connect via SSH.

    3. Run the following command to view the job result:

      cat /home/streamtest/stream-2-thread.log

      The log shows the best sustained bandwidth and the timing statistics of the Copy, Scale, Add, and Triad kernels.
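
      To compare the bandwidth across thread counts, you can also summarize the Triad results from all logs. The log paths follow the naming convention that is used in the preceding job script:

      grep "Triad:" /home/streamtest/stream-*-thread.log    # Print the Triad line from each log. STREAM reports the best sustained rate in MB/s.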