Fourteen Evolutions of Taobao's Architecture for Tens of Millions of Concurrent Users
Date: Oct 27, 2022
Related Tags: Local Public Cloud, Elastic High Performance Computing
Abstract: Using Taobao as an example, this article introduces how a server-side architecture evolves from serving hundreds of concurrent users to serving tens of millions, and lists the related technologies encountered at each stage of the evolution, so that readers can gain an overall picture of architecture evolution. The article closes with a summary of architectural design principles.
★Special note: This article uses Taobao as an example only to illustrate problems that may be encountered during such an evolution; it is not Taobao's real technology evolution path
Basic Concepts
Before introducing the architecture, and so that no reader is lost by the concepts used in architecture design, here are the most basic ones:
Distributed: multiple modules of a system are deployed on different servers; such a system can be called a distributed system. For example, Tomcat and the database are deployed on different servers, or two Tomcats with the same function are deployed on different servers.
High availability: when some nodes in a system fail, other nodes can take over and continue to provide service; such a system can be considered highly available.
Cluster: software for a specific domain is deployed on multiple servers and provides one type of service as a whole; this whole is called a cluster. For example, Zookeeper's Master and Slave are each deployed on multiple servers and together provide a centralized configuration service. In a typical cluster, clients can connect to any node to obtain the service, and when one node goes offline, the other nodes automatically take over and continue serving, which means the cluster also offers high availability.
Load balancing: when requests sent to a system are distributed evenly across multiple nodes in some way, so that every node handles an even share of the request load, the system can be considered load balanced.
Forward proxy and reverse proxy: when a system wants to access the external network, it forwards the request through a proxy server; from the external network's point of view, the access is initiated by the proxy server, which is then acting as a forward proxy. When an external request enters the system, the proxy server forwards it to a server inside the system; to the outside caller, only the proxy server is visible, and the proxy server is then acting as a reverse proxy. Simply put, a forward proxy is the proxy server accessing the external network on the system's behalf, and a reverse proxy is an external request to the system being forwarded to an internal server through the proxy server.
Architecture Evolution
3.1 Single-Machine Architecture
Take Taobao as an example. In the early days of the website, the numbers of applications and users were small, and Tomcat and the database could be deployed on the same server. When a browser sends a request to www.taobao.com, the DNS server (Domain Name System) first resolves the domain name to the actual IP address 10.102.4.1, and the browser then accesses the Tomcat instance at that IP.
★As the number of users increases, Tomcat and the database compete for resources, and the performance of a single machine is not enough to support the business
3.2 The First Evolution: Deploying Tomcat and the Database Separately
Tomcat and the database each get exclusive use of their own server's resources, which significantly improves their respective performance.
★As the number of users grows, concurrent reads and writes to the database become the bottleneck
3.3 The Second Evolution: Introducing Local and Distributed Caches
Add a local cache on the Tomcat server (or within the same JVM), and add a distributed cache alongside it, caching hot product information or the HTML pages of hot products. Caching intercepts most requests before they reach the database, greatly reducing the pressure on it. The technologies involved include memcached as the local cache and Redis as the distributed cache, along with issues such as cache consistency, cache penetration and breakdown, cache avalanche, and the concentrated expiry of hot data.
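To make the read path concrete, below is a minimal cache-aside sketch. All names here are hypothetical, and plain in-memory maps stand in for the real local cache and for Redis; only the lookup order and the miss handling are the point.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch with two tiers. The local tier lives inside the
// Tomcat JVM; the distributed tier would be Redis (or memcached) in practice.
public class ProductCache {
    private final Map<Long, String> localCache = new ConcurrentHashMap<>();       // in-JVM tier
    private final Map<Long, String> distributedCache = new ConcurrentHashMap<>(); // stand-in for Redis
    private final Function<Long, String> database; // stand-in for the real product query

    public ProductCache(Function<Long, String> database) {
        this.database = database;
    }

    public String getProduct(long productId) {
        // 1. Check the local cache first: fastest, but per-JVM and possibly stale.
        String value = localCache.get(productId);
        if (value != null) return value;

        // 2. Fall back to the distributed cache shared by all Tomcat instances.
        value = distributedCache.get(productId);
        if (value == null) {
            // 3. Miss on both tiers: read the database and populate the caches.
            //    Production code must also guard against cache breakdown
            //    (e.g., per-key locking) and penetration (e.g., caching empty results).
            value = database.apply(productId);
            distributedCache.put(productId, value);
        }
        localCache.put(productId, value);
        return value;
    }
}
```

The ordering is the essence of this stage: the local tier shields the distributed tier, which in turn shields the database.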
★The cache absorbs most access requests. As the number of users grows, the concurrency pressure falls mainly on the single-machine Tomcat, whose responses gradually slow down
3.4 The Third Evolution: Introducing a Reverse Proxy for Load Balancing
Deploy Tomcat on multiple servers and use reverse proxy software (Nginx) to distribute requests evenly among them. Assume here that one Tomcat supports up to 100 concurrent requests and one Nginx up to 50,000; in theory, Nginx distributing requests across 500 Tomcats can withstand 50,000 concurrent requests. The technologies involved include Nginx and HAProxy, both reverse proxies working at layer 7 of the network and mainly supporting the HTTP protocol, as well as issues such as session sharing and file upload and download.
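As a toy illustration of the distribution step, the sketch below implements the round-robin policy that Nginx applies by default to a group of backends; the backend addresses are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy round-robin selector of the kind a reverse proxy applies to its backends.
public class RoundRobinSelector {
    private final List<String> backends;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinSelector(List<String> backends) {
        this.backends = List.copyOf(backends);
    }

    public String next() {
        // Each request advances the counter, spreading load evenly across Tomcats.
        int index = (int) (counter.getAndIncrement() % backends.size());
        return backends.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSelector lb = new RoundRobinSelector(
                List.of("10.102.4.1:8080", "10.102.4.2:8080", "10.102.4.3:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.next());
        }
    }
}
```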
★The reverse proxy greatly increases the concurrency the application servers can support, but more concurrency also means more requests penetrating through to the database, and the single-machine database eventually becomes the bottleneck
3.5 The Fourth Evolution: Database Read-Write Separation
Split database access into read databases and a write database. There can be multiple read databases, kept up to date with the write database through a synchronization mechanism. For scenarios that need to query data immediately after it is written, an extra copy can be written to the cache, from which the latest data is then read.
The technologies involved include Mycat, a piece of database middleware that can organize a database's read-write separation and its sharding into sub-databases and sub-tables; clients access the underlying databases through it. Data synchronization and data consistency issues are also involved.
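The routing decision itself is straightforward; middleware such as Mycat mainly adds the transparency and synchronization machinery around it. A hypothetical application-layer sketch, with the DataSource instances assumed to be configured elsewhere:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import javax.sql.DataSource;

// Hypothetical read-write splitting at the application layer.
public class ReadWriteRouter {
    private final DataSource writeDb;        // the single primary that accepts writes
    private final List<DataSource> readDbs;  // replicas fed by the sync mechanism

    public ReadWriteRouter(DataSource writeDb, List<DataSource> readDbs) {
        this.writeDb = writeDb;
        this.readDbs = List.copyOf(readDbs);
    }

    public Connection forWrite() throws SQLException {
        return writeDb.getConnection();
    }

    public Connection forRead() throws SQLException {
        // Pick any replica. Replication lag means a just-written row may not be
        // visible here yet, which is why the article suggests also writing the
        // latest data into the cache.
        int i = ThreadLocalRandom.current().nextInt(readDbs.size());
        return readDbs.get(i).getConnection();
    }
}
```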
★The number of business lines gradually grows, with large differences in traffic between them, and the different businesses compete directly for the same database, hurting each other's performance

3.6 The Fifth Evolution: Splitting the Database by Business
Storing the data of different businesses in different databases reduces resource competition between them, and more servers can be deployed for businesses with heavy traffic. As a consequence, cross-business tables can no longer be joined directly for analysis, which must be solved by other means; that is not the focus of this article, and interested readers can research solutions themselves.
★As the number of users grows, the single-machine write database gradually reaches its performance bottleneck
3.7 The Sixth Evolution: Splitting Large Tables into Small Tables
For example, comment data can be hashed by product ID and routed to the corresponding table for storage; for payment records, a table can be created per hour, each hourly table further split into small tables, and the user ID or record number used to route the data. As long as the amount of table data operated on in real time is small enough, and requests can be distributed evenly across the small tables on multiple servers, the database can improve performance through horizontal scaling. The aforementioned Mycat also supports access control when large tables are split into small tables this way.
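A minimal sketch of the routing rules just described, with assumed shard counts and table-naming schemes; a real deployment would keep such rules in middleware like Mycat rather than hard-coding them.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical table-routing rules: comments sharded by product-ID hash,
// payment records split per hour and sub-split by user ID.
public class TableRouter {
    private static final int COMMENT_SHARDS = 64;    // assumed shard count
    private static final int PAYMENT_SUBSHARDS = 16; // assumed sub-tables per hour

    // comment_00 .. comment_63, chosen by a stable hash of the product ID
    public static String commentTable(long productId) {
        int shard = (Long.hashCode(productId) & 0x7fffffff) % COMMENT_SHARDS;
        return String.format("comment_%02d", shard);
    }

    // e.g. payment_2022102713_07: one table per hour, sub-split by user ID
    public static String paymentTable(LocalDateTime time, long userId) {
        String hour = time.format(DateTimeFormatter.ofPattern("yyyyMMddHH"));
        int subShard = (int) (userId % PAYMENT_SUBSHARDS); // assumes non-negative IDs
        return String.format("payment_%s_%02d", hour, subShard);
    }
}
```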
This approach significantly increases the difficulty of database operation and maintenance and places higher demands on DBAs. At this point the database can already be called a distributed database, but it is only a logical database as a whole: different parts of it are implemented by different components. For instance, shard management and request distribution are handled by Mycat, SQL parsing by the individual standalone databases, read-write separation possibly by gateways and message queues, the aggregation of query results possibly by the database interface layer, and so on. This architecture is in fact one class of implementation of the MPP (massively parallel processing) architecture.
There are currently many MPP databases, both open source and commercial. Popular open-source ones include Greenplum, TiDB, PostgreSQL XC, and HAWQ; commercial ones include NTU's GBase, Ruifan Technology's Snowball DB, and Huawei's LibrA. Different MPP databases have different focuses: TiDB, for example, targets distributed OLTP scenarios, while Greenplum targets distributed OLAP scenarios.
These MPP databases generally provide SQL support like PostgreSQL, Oracle, and MySQL: a query is parsed into a distributed execution plan and distributed to the machines for parallel execution, and the database itself finally aggregates and returns the results. They also provide capabilities such as permission management, sharding, transactions, and data replicas, and most support clusters of more than 100 nodes, greatly reducing the cost of database operation and maintenance while letting the database scale horizontally.
★Both the database and Tomcat can now scale horizontally, and the supported concurrency increases greatly. As the number of users grows, the single-machine Nginx eventually becomes the bottleneck
3.8 The Seventh Evolution: Using LVS or F5 to Load Balance Multiple Nginx Instances
Since the bottleneck is Nginx itself, multiple Nginx instances cannot be load balanced by simply adding another layer of Nginx. LVS and F5 are load-balancing solutions that work at layer 4 of the network. LVS is software running in the operating system's kernel mode that can forward TCP requests and higher-level protocols, so it supports a richer set of protocols than Nginx and delivers much higher performance; it can be assumed that a single LVS instance can forward several hundred thousand concurrent requests. F5 is load-balancing hardware with capabilities similar to LVS and performance higher still, but it is expensive. Since LVS is single-machine software, if the server hosting it goes down, the entire back-end system becomes inaccessible, so a standby node is required.
The keepalived software can be used to provide a virtual IP bound to multiple LVS servers. When a browser accesses the virtual IP, the router redirects the traffic to the real LVS server; when the primary LVS server goes down, keepalived automatically updates the router's routing table and redirects the virtual IP to another healthy LVS server, achieving high availability for the LVS layer.
Note that the arrows from the Nginx layer to the Tomcat layer in the figure above do not mean that every Nginx instance forwards requests to every Tomcat. In practice, several Nginx instances, made highly available among themselves via keepalived, may sit in front of one subset of Tomcats, while other Nginx instances front a different subset, multiplying the number of Tomcats that can be brought online.
★Since the LVS server is also a single machine, as concurrency grows into the hundreds of thousands it eventually reaches its bottleneck. By this time the number of users has reached tens of millions or even hundreds of millions, and because users are spread across regions at different distances from the server room, their access latency differs noticeably
3.9 The Eighth Evolution: Load Balancing Across Computer Rooms via DNS Round-Robin
On the DNS server, a single domain name can be configured to map to multiple IP addresses, each corresponding to a virtual IP in a different computer room. When a user visits www.taobao.com, the DNS server uses round-robin or another policy to select an IP for the user to access, achieving load balancing across computer rooms. The system can now scale horizontally at the computer-room level: concurrency from tens of millions up to hundreds of millions can be handled by adding computer rooms, and the volume of concurrent requests at the system's entrance is no longer a problem.
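From the client side, DNS round-robin simply appears as one domain resolving to several addresses. A small sketch using the standard JDK lookup API; what it prints depends on the live DNS configuration of the queried domain.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Shows all addresses a domain resolves to; with DNS round-robin, each
// address would front a different computer room's virtual IP.
public class DnsLookup {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress[] addresses = InetAddress.getAllByName("www.taobao.com");
        for (InetAddress address : addresses) {
            System.out.println(address.getHostAddress());
        }
    }
}
```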
★As data grows richer and the business develops, the needs for retrieval and analysis become more and more varied, and the database alone can no longer satisfy them all
3.10 The Ninth Evolution: Introducing NoSQL Databases and Search Engines
When the data in a database reaches a certain scale, the database becomes unsuitable for complex queries and often serves only ordinary query scenarios well. In statistical reporting scenarios, results may never come back when the data volume is large, and running complex queries slows other queries down. For scenarios such as full-text retrieval and variable data structures, the database is inherently unsuitable. Solutions suited to each specific scenario therefore need to be introduced.
For example, massive file storage can be handled by the distributed file system HDFS; key-value data by solutions such as HBase and Redis; full-text retrieval by search engines such as Elasticsearch; and multi-dimensional analysis by solutions such as Kylin or Druid.
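Taking the full-text retrieval case as an example, the sketch below sends a standard match query to Elasticsearch's REST search API, assuming a hypothetical products index on a local node.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Full-text search against a hypothetical "products" index via the
// Elasticsearch REST API, using the JDK 11+ built-in HTTP client.
public class ProductSearch {
    public static void main(String[] args) throws Exception {
        String query = "{\"query\":{\"match\":{\"title\":\"wireless headphones\"}}}";
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:9200/products/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // ranked hits as JSON
    }
}
```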
Of course, introducing more components also increases the system's complexity: the data held by different components needs to be synchronized, consistency problems have to be considered, and more operation and maintenance tooling is needed to manage these components.
★Introducing more components satisfies the rich requirements and greatly expands the business dimensions, but as a result a single application ends up containing too much business code, and upgrading and iterating the business becomes difficult
3.11 The Tenth Evolution: Splitting a Large Application into Smaller Applications
Divide the application code along business lines, so that each application's responsibilities are clearer and applications can be upgraded and iterated independently. Applications may then need some configuration in common, which can be handled by the distributed configuration center Zookeeper.
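A minimal sketch of reading one shared setting from Zookeeper, assuming a made-up znode layout and the Apache ZooKeeper client library on the classpath; real code would also handle reconnects and re-register watches.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

// Each split-out application reads shared settings from a config znode
// instead of keeping its own copy.
public class SharedConfig {
    public static void main(String[] args) throws Exception {
        // Connection string and session timeout are illustrative values.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000,
                (WatchedEvent event) -> System.out.println("zk event: " + event));

        // "/config/payment-gateway-url" is an assumed znode path; passing
        // true registers a watch so changes can be picked up later.
        byte[] data = zk.getData("/config/payment-gateway-url", true, null);
        System.out.println("shared config: " + new String(data));
        zk.close();
    }
}
```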
★Common modules are shared between different applications, and managing them separately per application leaves multiple copies of the same code, so upgrading a common function forces all applications' code to be upgraded
3.12 The Eleventh Evolution: Extracting Reused Functions into Microservices
When functions such as user management, orders, payment, and authentication exist in multiple applications, their code can be extracted into separate, independently managed services: these are the so-called microservices. Applications and services access the common services via methods such as HTTP, TCP, or RPC requests, and each individual service can be maintained by a separate team. In addition, capabilities such as service governance, rate limiting, circuit breaking, and downgrading can be implemented through frameworks like Dubbo and Spring Cloud, improving the services' stability and availability.
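A bare-bones sketch of an application consuming a shared user-management service over HTTP with the JDK's built-in client. The service host and path are hypothetical; in a Dubbo or Spring Cloud setup, the framework would wrap a call like this with discovery, rate limiting, circuit breaking, and downgrade logic.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Calls a hypothetical user-management microservice over plain HTTP.
public class UserServiceClient {
    private final HttpClient http = HttpClient.newHttpClient();

    public String getUser(long userId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://user-service/api/users/" + userId))
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```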
★Different services expose different interface access methods, so application code has to adapt to several access methods in order to use them. Moreover, since applications access services and services also access one another, the call chain becomes very complicated and the logic chaotic
3.13 The Twelfth Evolution: Introducing an Enterprise Service Bus (ESB) to Mask the Access Differences of Service Interfaces
Access-protocol conversion is performed uniformly by the ESB: applications access back-end services uniformly through the ESB, and services call each other through the ESB as well, reducing the system's coupling. This architecture, in which a single application is split into several, common services are extracted and managed separately, and an enterprise message bus relieves the coupling between services, is the so-called SOA (service-oriented architecture). SOA is easy to confuse with the microservice architecture because their outward forms are very similar.
In my personal understanding, microservice architecture refers more to the idea of extracting a system's common services to operate and maintain them independently, whereas SOA refers to the architectural idea of splitting out services and unifying access to service interfaces; the SOA architecture thus subsumes the idea of microservices.
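A toy sketch of the uniform-access idea behind the ESB: callers use one call shape, and per-protocol adapters (all names hypothetical) absorb the differences between HTTP, TCP, and RPC interfaces.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One uniform call shape; each adapter hides a service's actual protocol.
interface ServiceAdapter {
    String invoke(String operation, String payload);
}

public class MiniServiceBus {
    private final Map<String, ServiceAdapter> adapters = new ConcurrentHashMap<>();

    public void register(String serviceName, ServiceAdapter adapter) {
        adapters.put(serviceName, adapter);
    }

    // Applications and services call each other only through this method,
    // so no caller needs to know the callee's wire protocol.
    public String call(String serviceName, String operation, String payload) {
        ServiceAdapter adapter = adapters.get(serviceName);
        if (adapter == null) {
            throw new IllegalArgumentException("unknown service: " + serviceName);
        }
        return adapter.invoke(operation, payload);
    }
}
```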
★As the business keeps developing, the number of applications and services keeps growing and their deployment becomes complicated. Deploying multiple services on the same server requires solving conflicts between their runtime environments; and in scenarios such as big promotions that need dynamic scaling, horizontally scaling a service means preparing the runtime environment and deploying the service on every newly added server, making operation and maintenance very difficult
3.14 The Thirteenth Evolution: Introducing Containerization for Runtime Environment Isolation and Dynamic Service Management
Currently the most popular containerization technology is Docker, and the most popular container management service is Kubernetes (K8S). Applications and services can be packaged as Docker images, which K8S can dynamically distribute and deploy. A Docker image can be understood as a minimal operating system able to run your application or service: it contains the service's code, with the runtime environment configured according to actual needs. After this whole "operating system" is packaged as an image, it can be distributed to whichever machines need the service, and the service is started simply by starting the image, which makes deployment and operation and maintenance simple.
Before a big promotion, servers can be carved out of the existing machine cluster to start Docker images and boost the service's performance; after the promotion, the images can be shut down without affecting other services on those machines (before Section 3.14, running a service on a newly added machine required modifying the machine's system configuration to suit the service, which would break the runtime environments required by the machine's other services).
★Containerization solves dynamic scaling, but the machines still have to be managed by the company itself. Outside of big promotions, large amounts of machine resources must sit idle just to cope with them; the machines themselves and their operation and maintenance are expensive, and resource utilization is low
3.15 The Fourteenth Evolution: Hosting the System on a Cloud Platform
The system can be deployed on a public cloud, using the cloud's massive machine resources to solve the problem of dynamic hardware resources. During a big promotion, additional resources can be applied for temporarily on the cloud platform and services deployed rapidly with Docker and K8S, then released after the promotion. This achieves true pay-as-you-go, greatly improving resource utilization while sharply reducing operation and maintenance costs.
A so-called cloud platform abstracts massive machine resources into a single resource pool through unified resource management. On top of it, hardware resources (CPU, memory, network, and so on) can be applied for dynamically on demand; common operating systems and common technical components (such as the Hadoop stack and MPP databases) are provided for users, and even fully developed applications are offered, so that users can meet needs such as audio and video transcoding, mail services, or personal blogs without knowing what technology the application uses. A cloud platform involves the following concepts:
IaaS: Infrastructure as a Service. Corresponds to pooling machine resources into a single whole, as above, with hardware-level resources applied for dynamically;
PaaS: Platform as a Service. Corresponds to providing the commonly used technical components mentioned above to ease system development and maintenance;
SaaS: Software as a Service. Corresponds to providing fully developed applications or services as above, paid for according to functional or performance requirements.
★At this point, the problems above, from handling highly concurrent access down to the service architecture and system implementation, each have their own solutions. At the same time, be aware that the introduction above deliberately omits practical issues such as cross-computer-room data synchronization and the implementation of distributed transactions; these will be discussed separately another time