Kenan, Assistant Engineer
DevOps and Alibaba Cloud Docker (V) - performance tests

Posted: Dec 28, 2016 15:20
Preface
In the previous articles, we focused on the continuous delivery process from the code to the release. But many complex systems have to undergo a performance test before they are launched online. Through performance testing, including capacity planning, system bottleneck detection, reliability checks and high-load stress testing, the continuous delivery process of services can be better guaranteed.
I still remember the first time I heard the term "performance test," in my second year of college. When I entered the laboratory, the teacher put a book titled LoadRunner on the desk and asked me to test the performance of the school's curriculum site with that tool.
For a long time, my understanding of performance testing was limited to using a tool like LoadRunner and drawing some indicator charts as the final deliverables. In fact, performance testing is a collective term for a family of tests, of which stress testing is one of the most common. Performance tests in the broad sense also include load testing, concurrency testing, and reliability testing.
In this article, I will focus on stress testing, load testing and concurrency testing. For most applications, these three tests are already sufficient to meet the basic requirements.
Categories of performance tests


Stress testing, load testing, and concurrency testing are three easily confused concepts. Many performance testing tools provide all three capabilities at once, which leads many developers to believe that performance testing equals stress testing.
Load testing
The goal of a load test is to determine and ensure that the system still functions well when pushed beyond the maximum expected workload. A load test also evaluates performance characteristics such as response time, transaction processing rate, and other time-related indicators. In high-concurrency scenarios, the most common way to probe a system's capacity in extreme situations is to keep increasing the load until some system resource, such as CPU or network bandwidth, is exhausted.
Usually the environment of the system under test should match the online environment. If that is not achievable, an estimation by simulated fitting also works. A simple example: if the online machine is a 64C64G ECS but only 2C2G, 4C4G, and 8C8G ECSs are available, we can run the load test at each of those configurations and extrapolate from the resulting load curve.
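The extrapolation described above can be sketched as a simple least-squares fit. The throughput numbers below are hypothetical placeholders; in practice you would substitute your own measurements from the smaller instances:

```python
# Estimate the capacity of a large instance from load tests on small ones.
# The rps figures are illustrative, not real measurements.
cores = [2, 4, 8]
rps = [1800, 3500, 7100]  # measured requests/second at each instance size

# Least-squares fit of rps = a * cores + b
n = len(cores)
mean_x = sum(cores) / n
mean_y = sum(rps) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(cores, rps)) / \
    sum((x - mean_x) ** 2 for x in cores)
b = mean_y - a * mean_x

# Extrapolate to the 64-core production instance.
estimate_64c = a * 64 + b
print(round(estimate_64c))
```

Linear extrapolation is only a first-order estimate; real systems often scale sub-linearly, so the fitted curve should be sanity-checked against at least one larger measurement when possible.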
Stress testing
A stress test, also called a strength test, checks how the system responds to a test scenario within a preset load range, such as a fully loaded CPU or saturated memory. Unlike a load test, a stress test does not try to exceed the expected upper limit; it focuses on ensuring availability within the capacity plan. So usually the load test is carried out first, and the stress testing scheme is then defined according to the load test results.
Concurrency testing
Concurrency testing simulates concurrent user access to check for business-logic lock leaks, memory leaks, or resource deadlocks when multiple users concurrently access the same application, the same module, or even the same record. This kind of test tends to uncover problems hidden in the system's actual operation.
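As a minimal sketch of the kind of race a concurrency test probes, the snippet below hammers a shared counter from several threads (the counter and thread counts are illustrative). The lock makes the read-modify-write atomic; removing it reintroduces exactly the class of lost-update bug such tests aim to expose:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment the shared counter; the lock prevents lost updates."""
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 8 threads x 10_000 increments = 80_000
```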
Performance test in microservice scenarios
 
Microservice architecture has increasingly become developers' first choice for system splitting and reconstruction. Although microservices bring significant advantages in development model and iteration speed, they also bring many challenges. What changes when a microservice system undergoes a performance test?
In a monolithic application, performance testing is very simple: you only need to set the test rules and the corresponding test endpoints, and the rest of the work can be left to the test tool. But once the monolith is split into multiple microservices, if we keep testing in the same black-box way, from the outermost endpoint of the system, the weakest-link ("short board") effect applies: the microservice with the lowest carrying capacity sets the baseline for the whole system's results. Facing the challenges of microservice architecture, performance tests should be more fine-grained, combining top-down black-box tests with performance tests for every individual microservice. In addition, a distributed tracing tool such as Zipkin can help identify the bottleneck sub-services in the system. The specific implementation is beyond the scope of this article.
Performance tests on Alibaba Cloud Docker
A variety of performance testing tools are available, such as the well-known Apache ab, JMeter, and Gatling. In this article we use Tsung, a distributed performance testing tool written in Erlang that supports the HTTP/HTTPS, UDP, MQTT, and WebSocket protocols. It supports distributed load generation and report generation, which makes it a good fit for stress-testing containers.
Tsung itself is a master-slave system built on Erlang/OTP. The master node can both generate load and distribute tasks, while the slave nodes receive tasks and generate load. Tsung is easy to use: a single XML configuration file is all you need to run it. Let's look at a simple configuration file:
 


You only need to remember the meanings of a few key fields.
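As a rough summary (a sketch of the structure, not the full schema), the key elements of a Tsung configuration file are the following; the complete sample.xml used later in this article fills them in:

```xml
<tsung loglevel="notice" version="1.0">
    <!-- clients: the nodes that generate load (master and slaves) -->
    <clients>...</clients>
    <!-- servers: the system under test (host, port, protocol) -->
    <servers>...</servers>
    <!-- load: test phases; arrivalphase controls how fast users arrive -->
    <load>...</load>
    <!-- options: protocol-level settings, such as user agents for HTTP -->
    <options>...</options>
    <!-- sessions: the request scenarios each simulated user executes -->
    <sessions>...</sessions>
</tsung>
```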


A simple performance test
Next we'll perform a simple load test, running only one Tsung master node and testing the load of one specific URL.
The compose file is as follows:
tsung-single:
   image: "registry.cn-hangzhou.aliyuncs.com/ringtail/tsung:v1.0"
   volumes:
       - '/root/sample.xml:/root/.tsung/tsung.xml'
       - '/var/lib/docker/tsung:/root/.tsung/log'
   labels:
       aliyun.routing.port_8091: tsung
   command: single
The configuration in sample.xml is as follows:
<tsung loglevel="notice" version="1.0">
   <clients>
       <client host="localhost" use_controller_vm="true"/>
   </clients>
   <servers>
       <server host="tested domain name or address" port="80" type="tcp"/>
   </servers>
   <load duration="5" unit="minute">
       <arrivalphase phase="1" duration="5" unit="minute">
           <users interarrival="0.1" unit="second"/>
       </arrivalphase>
   </load>
   <options>
       <option type="ts_http" name="user_agent">
           <user_agent probability="80">
               Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050513 Galeon/1.3.21
           </user_agent>
           <user_agent probability="20">
               Mozilla/5.0 (Windows; U; Windows NT 5.2; fr-FR; rv:1.7.8) Gecko/20050511 Firefox/1.0.4
           </user_agent>
       </option>
   </options>
   <sessions>
       <session name="http-example" probability="100" type="ts_http">
           <request>
               <http url="/" method="GET" version="1.1"/>
           </request>
       </session>
   </sessions>
</tsung>
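In the load section above, interarrival="0.1" means one new user arrives every 0.1 seconds, i.e. 10 users per second; over the 5-minute arrival phase this schedules about 3,000 simulated users. A quick check of the arithmetic:

```python
# Derive the user load implied by the <load> section of sample.xml.
interarrival_s = 0.1       # one new user every 0.1 seconds
phase_duration_s = 5 * 60  # 5-minute arrival phase

users_per_second = 1 / interarrival_s
total_users = users_per_second * phase_duration_s
print(int(users_per_second), int(total_users))  # 10 3000
```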
After the application is deployed, click the exposed routing address to open the web console provided by Tsung. The number of active nodes is 1, meaning there is currently only the master node.
 
Click the menu bar on the top right corner to view more detailed charts and data.
 
 
Distributed performance tests
Testing with only one master node may not generate enough stress in some scenarios, so we may need distributed performance testing to unleash the "prehistoric powers". For Tsung, distributed stress testing is a piece of cake: you only need to enable password-less SSH login from the master node to the slave nodes and install Tsung on the slaves. On the configuration side, distribution is accomplished by adding one line per slave node under clients on the master.
<clients>
   <client host="localhost" use_controller_vm="true"/>
   <client host="tsung-slave" use_controller_vm="false"/>
</clients>
To achieve password-less SSH login between the master node and the slave nodes, an SSH key pair must be generated:
   # Run this locally. The empty quotes mean no passphrase.
   ssh-keygen -t rsa -P ""

After running the command, you will find that an SSH key pair has been generated: id_rsa and id_rsa.pub. We mount the generated private key into the master node as a volume and pass the public key to the slave nodes as an environment variable. The compose file is as follows:

tsung-master:
   image: "registry.cn-hangzhou.aliyuncs.com/ringtail/tsung:v1.0"
   volumes:
       - '/mnt/acs_mnt/ossfs/cs-volume/sample.xml:/root/.tsung/tsung.xml'
       - '/var/lib/docker/tsung:/root/.tsung/log'
       - '/mnt/acs_mnt/ossfs/cs-volume/id_rsa:/root/.ssh/id_rsa'
   environment:
       - DISABLE_HOST_CHECK=true
   labels:
       aliyun.routing.port_8091: tsung
   command: "master"
   links:
       - "tsung-slave:tsung-slave"  
tsung-slave:
   image: "registry.cn-hangzhou.aliyuncs.com/ringtail/tsung:v1.0"
   command: "slave"
   environment:
       - AUTHORIZED_KEYS=<PUBLIC KEY> #cat ~/.ssh/id_rsa.pub

Finally, we add the hostname of the slave node to the sample.xml file.

<tsung loglevel="notice" version="1.0">
   <clients>
       <client host="localhost" use_controller_vm="true"/>
       <client host="tsung-slave" use_controller_vm="false"/>
   </clients>
   <servers>
       <server host="hiluo.cn" port="80" type="tcp"/>
   </servers>
   <load duration="5" unit="minute">
       <arrivalphase phase="1" duration="5" unit="minute">
           <users interarrival="0.1" unit="second"/>
       </arrivalphase>
   </load>
   <options>
       <option type="ts_http" name="user_agent">
           <user_agent probability="80">
               Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050513 Galeon/1.3.21
           </user_agent>
           <user_agent probability="20">
               Mozilla/5.0 (Windows; U; Windows NT 5.2; fr-FR; rv:1.7.8) Gecko/20050511 Firefox/1.0.4
           </user_agent>
       </option>
   </options>
   <sessions>
       <session name="http-example" probability="100" type="ts_http">
           <request>
               <http url="/" method="GET" version="1.1"/>
           </request>
       </session>
   </sessions>
</tsung>
After deploying and running the test, we can see that the number of active nodes has changed to 2, producing higher stress.




 
In this article, we only discussed the most basic usage of Tsung: single-node load testing and distributed load testing. Alibaba Cloud Docker provides many capabilities for microservice scenarios. In the next article, we will analyze performance testing in microservice scenarios and provide more targeted performance testing and tuning schemes.

Conclusion
Performance testing is not an end in itself; its purpose is to locate problems before the application goes online and to optimize performance. A common procedure is to run a load test first, define a stress-testing scenario based on the load test results, and then perform capacity assumptions and capacity planning. For some systems, concurrency testing is also performed.

Common system tuning methods include the USE method, the random-variation method, and the ad-hoc checklist method. I will not go into the details in this article, but please remember that the purpose of performance testing is to locate problems, optimize systems, and plan capacity, rather than simply running a tool to collect numbers detached from any business scenario. We should focus on using performance testing to provide data for decision-making in concrete scenarios.

References and materials
"Things you should know before performance tests" (URL)
"Things you should know after performance tests" (URL)
Systems Performance: Enterprise and the Cloud, Brendan Gregg
Password-less SSH login (GitHub, URL)
Tsung Dockerfile (GitHub, URL)