Handle Large Traffic with Load Balancer
Clouder at a Glance
  • Video Course Duration
    46 Minutes
  • Available Languages
  • Course Type
USD 10.00
Increased traffic often results in delayed responses from web servers or even a halt in service. The essence of load balancing lies in "sharing": when massive traffic is detected, it is distributed across multiple servers to improve the website's external service capability and avoid the impact of a single point of failure. In this online course, we teach the basics of load balancing, its principles and scenarios, and how to use the cloud platform's load balancing features.
Recommended For
Junior Developers and Cloud Beginners
Exam Overview
  • Certification:
    Apsara Clouder - Cloud Computing: Handle Large Traffic with Load Balancer
  • Exam Type:
  • Available Languages:
  • Exam Duration:
    30 Minutes
  • No. of Exam Attempts:
    2 Times
Handle Large Traffic with Load Balancer
Learn to use Alibaba Cloud's SLB to help your website handle large bursts of traffic.
  • Introduction to load balancing
  • Alibaba Cloud SLB components
  • Alibaba Cloud SLB architecture
  • Use SLB for disaster tolerance
  • Alibaba Cloud SLB security
  • Configure an SLB with 4 backend servers in 2 different zones (console demo)
Handle Large Traffic with Load Balancer Overview
In this course, let's talk about load balancing first. Traditionally, we need a web server to provide and deliver services to our customers. We might wish for one very powerful web server that can do everything we want, such as providing any service and serving as many customers as possible. However, with only one web server, there are two major concerns.

The first is that a single server always has a capacity limit. If your business is booming and many new users come to visit your website, one day your website will reach that limit and deliver a very unsatisfying experience to your users. Also, with only one web server, a single point of failure may occur. For example, a power outage or a network connection issue may take your server down, and your customers will be completely cut off from your service. These are the problems you may suffer when you have only one web server, even a very powerful one.

Now you may be wondering how to extend the capability of your web server. Usually, you add one more server as the business keeps growing. However, once you add a server and turn the setup into a multi-server cluster, how can end-users know which server they need to access? For a better user experience, end-users should not feel the complexity of the backend setup. So what kind of service or device can we put between the end-users and the backend servers? The answer is a load balancing device or piece of software. We place it in the middle so that it accepts requests from end-users and uses a specific mechanism or algorithm to distribute them to the backend servers in order to balance the load. That is why it is called a load balancer. It not only solves the problems of single point of failure and the service capacity limit, but also brings a consistently satisfying experience to end-users.
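To make the distribution mechanism mentioned above concrete, here is a minimal sketch of round-robin scheduling, one of the common balancing algorithms. The server addresses and class name are hypothetical, purely for illustration:

```python
from itertools import cycle

# Hypothetical pool of backend servers sitting behind the load balancer.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

class RoundRobinBalancer:
    """Distributes incoming requests evenly across backend servers."""

    def __init__(self, backends):
        self._backends = cycle(backends)

    def pick_backend(self):
        # Each call returns the next server in rotation,
        # so no single server absorbs all the traffic.
        return next(self._backends)

balancer = RoundRobinBalancer(BACKENDS)
for request_id in range(6):
    print(f"request {request_id} -> {balancer.pick_backend()}")
```

Real load balancers offer other strategies as well, such as weighted round-robin or least-connections, but the idea is the same: the client only ever talks to the balancer, never to an individual backend.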
Hello, everyone! Welcome back to the course Handle Large Traffic with Load Balancer. In the previous lesson, we learned the basic concept of load balancing. Now let's look at how Alibaba Cloud provides the load balancing service through Server Load Balancer (SLB). Alibaba Cloud SLB is a traffic distribution control service that distributes incoming traffic among multiple Elastic Compute Service (ECS) instances based on configured forwarding rules.

There are four major features of SLB. The first one is high availability: SLB is available in multiple regions, and each region contains at least two availability zones, so by default an SLB is deployed with a master instance in one zone and a slave instance in another. Second, with SLB you obtain scalability, because you can scale the backend servers behind the SLB without end-users ever noticing any turbulence. Third, the SLB service costs less compared to purchasing physical load balancing hardware. The last feature is security: because SLB is one of the Alibaba Cloud network services, it can by default leverage most of the security products and features provided to protect your business. Regarding DDoS attacks, SLB has a basic DDoS protection capability.

Again, let's use this picture to review the basic SLB architecture. Alibaba Cloud SLB sits between the end-users and the backend servers, which are ECS instances. Regarding the components, SLB consists of three major parts. The first one is the instance: as a cloud user, you create an SLB instance in a region. For every load balancing instance, you need to create one or more listeners in it. For the backend, you need to tell the load balancer what your backend servers are and how many servers you want to put behind the SLB. These are the three major components you need to consider when you create and configure an SLB.
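The relationship between the three components described above (instance, listeners, backend servers) could be modeled roughly as follows. This is a conceptual sketch only; the class names and fields are assumptions for illustration, not the actual SLB API:

```python
from dataclasses import dataclass, field

@dataclass
class Listener:
    # A listener accepts connections on a frontend port/protocol
    # and forwards them to a port on the backend servers.
    protocol: str        # e.g. "TCP" or "HTTP"
    frontend_port: int
    backend_port: int

@dataclass
class SLBInstance:
    # An SLB instance lives in one region and groups its listeners
    # with the backend ECS servers they forward traffic to.
    region: str
    listeners: list = field(default_factory=list)
    backend_servers: list = field(default_factory=list)

# Assemble the three components: instance, listeners, backends.
slb = SLBInstance(region="cn-hangzhou")
slb.listeners.append(Listener("HTTP", 80, 8080))
slb.backend_servers.extend(["ecs-web-1", "ecs-web-2"])
print(slb)
```

In the actual console demo later in the course, the same three steps appear in order: create the instance, configure its listeners, then register the backend ECS servers.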
There are two kinds of SLB instances based on their connectivity. The first one is called the Public Network SLB. As the name suggests, this is an SLB bound to a public IP address, and its billing mode is Pay-As-You-Go: you pay for the instance rental and for the public traffic, including any extra volume you use. So far, Pay-As-You-Go is the only billing mode available for the Public Network SLB. The Private Network SLB, in contrast, is totally free. As the name suggests, it can only be used in a private network environment, and it is never assigned a public IP address. We can use both public and private SLBs together to enhance the architecture. As the picture shows, a public SLB serves user requests coming in from the Internet, while internally private SLBs forward different traffic or requests to different sets of backend servers. This two-layer architecture is more scalable and elastic.
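The two-layer setup described above can be sketched as a simple routing table, where the public SLB fans requests out to service-specific private pools. The path prefixes and server names are hypothetical, not a real configuration:

```python
# Hypothetical two-layer routing: a public SLB fronts the Internet,
# while private SLBs fan requests out to service-specific backends.
PRIVATE_SLB_POOLS = {
    "/api":    ["ecs-api-1", "ecs-api-2"],
    "/static": ["ecs-static-1", "ecs-static-2"],
}

def route(path):
    """Public SLB picks the private SLB pool by URL prefix."""
    for prefix, pool in PRIVATE_SLB_POOLS.items():
        if path.startswith(prefix):
            # A real private SLB would then balance within the pool;
            # here we just return the pool to show the fan-out.
            return pool
    return []

print(route("/api/users"))  # -> ["ecs-api-1", "ecs-api-2"]
```

Because the private layer is free and has no public IP, adding more internal pools this way scales the backend without exposing anything new to the Internet.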
Recommended Certifications