Hyperconvergence and Edge Computing
Date: Nov 1, 2022
With the rise of edge data centers in the 5G era, the hyperconverged market is gaining new room to grow. Gartner predicts that by 2023, more than 35% of hyperconverged products will be used in edge computing. Let's take a look at hyperconvergence and edge computing together!
01
The origin and meaning of hyperconvergence
With the development of cloud and virtualization technology, new ideas and solutions have emerged for building network services. VSAN was introduced at VMware's 2013 conference; its main idea is to install flash and hard disks in the hosts of a virtualized cluster to build a storage layer. VSAN hosts are configured with enough disk slots and storage controllers to form a scalable distributed storage architecture, yielding an easy-to-manage shared storage pool. The concept of hyperconverged architecture grew out of VSAN.
Hyper-converged architecture is a new generation of horizontally scalable, software-defined architecture. It consists of general-purpose hardware units that integrate CPU, memory, storage, network, and a virtualization software platform, with no fixed central node. Its core ideas are linear horizontal scaling, the integration of computing power and storage capacity, and the use of high-speed server-side flash as the storage medium. Hyper-converged architecture breaks the traditional isolation boundaries between servers, networks, and storage, merging them into a unified HCI form.
02
What is hyperconvergence?
The "super" of hyper-convergence refers to "virtualization", and the English of "hyper-converged" is Hyper-Converged. It can be seen that although the hyper-converged architecture has a word "super", it is not a mysterious concept, nor does it mean "super", but corresponds to Hyper in English Hypervisor, which means virtualization, corresponding to Virtualization is a computing architecture.
The core change in hyper-converged architecture is storage, and the early promoters of the concept were storage start-ups with Internet backgrounds. The bottom layer is a standardized x86 hardware platform; the upper layer uses a software-defined approach to integrate computing, storage, network, and other resources, which both simplifies deployment and improves operations and maintenance efficiency. In other words, hyperconvergence takes its inspiration from how large-scale data centers are built with software-defined technologies, and combines virtualization technology with enterprise IT scenarios to give enterprises a scalable IT infrastructure.
"Hyper-converged architecture" means that in the same set of unit equipment (x86 server) not only resources and technologies such as computing, network, storage, and server virtualization, but also cache acceleration, data deduplication, online data compression, and backup software are included. , snapshot technology and other elements, and multiple nodes can be aggregated through the network to achieve modular seamless scale-out and form a unified resource pool.
Hyperconvergence can be understood as a kind of middleware that sits on top of the hardware and beneath the operating system. It is a software-defined data center (SDDC) architecture and an emerging data center infrastructure solution. The core of hyper-converged technology is to use a distributed file system (such as NDFS) to replace traditional SAN and NAS storage and the expensive storage networks built with SAN switches, and to tightly integrate virtualized computing and storage into one platform. The industry generally agrees that a software-defined distributed storage layer plus virtualized computing is the minimal set for a hyperconverged architecture, which typically includes the following core components:
(1) Distributed storage based on x86 servers. On top of server virtualization, local storage resources are virtualized by deploying storage virtual appliances and then aggregated into resource pools through the cluster to provide storage services to virtual machines.
(2) High-speed network. Hyperconvergence uses 10 Gigabit Ethernet to provide scalable and highly available network channels for distributed computing and storage clusters.
(3) Unified management platform. Virtualized computing and storage are managed on the same platform, and administrators perform performance and capacity monitoring, troubleshooting, and other O&M work under the same platform.
In short, hyper-converged architecture pools computing, storage, and network resources into one shared resource pool, shares storage across the cluster, and achieves high availability through storage redundancy.
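To make the resource-pooling idea concrete, here is a minimal, purely illustrative Python sketch (the Node and HciCluster classes and all numbers are hypothetical, not any vendor's API). It shows how each x86 node contributes CPU and local disks to a single shared pool, how usable capacity depends on the replication factor used for redundancy, and how the pool grows as nodes are added.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One x86 unit contributing compute and local storage to the pool."""
    cpu_cores: int
    memory_gb: int
    local_disk_tb: float

@dataclass
class HciCluster:
    """Aggregates node resources into one shared pool.

    replication_factor models storage redundancy: each block is kept on
    that many nodes, so usable capacity is raw capacity divided by it.
    """
    replication_factor: int = 2
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scale-out: compute and capacity grow with every node added.
        self.nodes.append(node)

    @property
    def total_cpu_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def raw_storage_tb(self) -> float:
        return sum(n.local_disk_tb for n in self.nodes)

    @property
    def usable_storage_tb(self) -> float:
        return self.raw_storage_tb / self.replication_factor

if __name__ == "__main__":
    cluster = HciCluster(replication_factor=2)
    for _ in range(3):  # a small three-node starting cluster
        cluster.add_node(Node(cpu_cores=32, memory_gb=256, local_disk_tb=12.0))
    print(cluster.total_cpu_cores)    # 96 cores pooled
    print(cluster.usable_storage_tb)  # 18.0 TB usable out of 36 TB raw
```

Real products add data locality, caching tiers, and rebuild logic on top of this, but the basic pooling and redundancy arithmetic is essentially what the sketch shows.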
Do you understand hyperconvergence?
03
"Burger" and Hyperconvergence
I've said a lot above, but it may still feel abstract. Let's use the everyday "hamburger" to explain hyperconvergence!
We know that in the past, computing, storage, and networking in the data center were separate; hyperconverged systems break down that separation. In hyper-convergence, computing and storage are combined into one and tied together in a software-defined way, forming the hyper-converged appliance we commonly see.
If we map that onto a hamburger, it looks like this:
In a hamburger, the top and bottom buns are computing and storage, and the middle layer is the software-defined platform and its services. The three work closely together as a whole, bringing many benefits to enterprises and operations staff:
Rapid deployment
For a normal meal, washing the rice, cooking it, washing the vegetables, and stir-frying are all essential steps, and you need to master a whole range of skills. To make a burger, two slices of bread plus vegetables, cheese, and a slice of luncheon meat can be done in about ten minutes, without even turning on the stove. A hyper-converged system is the same: it takes only about ten minutes from power-on to VM provisioning, so going live quickly is not a dream.
Easy to expand
There is no appetite a burger can't satisfy, and if there is, there's always the Big Mac!
Hyper-convergence can start small and, as business volume grows, be expanded on demand, with performance increasing linearly with scale!
Simple operation and maintenance
Burgers are easy to make and easy to clean up afterwards; there are no dishes to wash, which saves time and effort. Likewise, hyperconvergence manages all hardware devices, resources, and functions from a single system interface. When a fault occurs, there is no need to troubleshoot device by device, and the system stays simple and clear, greatly reducing the workload of operations and maintenance staff.
At this point you may be curious: why are the software-defined layer and its services treated as the filling, rather than computing and storage?
There is a reason for this arrangement: the filling is the soul of the burger and directly determines whether it tastes good. Likewise, hyper-converged computing and storage are general-purpose hardware built on the standard x86 architecture, while the software-defined layer and its services are where vendors compete on strength.
04
Advantages and disadvantages of hyperconvergence
Advantages of hyperconvergence
Traditional IT architecture is split into many systems and teams, such as storage, server, and network. The storage team purchases, expands, and supports storage devices, maintains the storage systems, and deals with storage vendors; the server and network teams do the same for their domains. A converged system combines two or more of these systems into one through prefabrication. VCE (a joint venture of VMware, Cisco, and EMC) and HP offer converged systems of this kind that integrate storage, computing, and networking, with support for hypervisors such as Hyper-V and KVM. A converged system, however, can only combine separate components, while a hyper-converged system goes further with a modular design in which the various components can also be expanded.
Disadvantages of hyperconvergence
You cannot upgrade the components inside a system module yourself, for example by adding CPUs or hard disks; you can only add another module. If storage is running short but CPU is sufficient, you still have to buy a whole module to add capacity, leaving the extra CPU idle. Building your own infrastructure from individual components does not have this problem. Integrating old and new systems can also be awkward organizationally: if the server department buys a hyper-converged system, it is effectively spending its own budget on storage, while the storage department may be unwilling to manage compute resources just because a hyper-converged system was purchased.
05
Edge computing calls for hyperconvergence
Edge computing refers to a new computing model in which computation is performed at the edge of the network. The data processed falls into two parts: downlink data from cloud services and uplink data from the Internet of Everything. The "edge" here is a relative concept: it refers to any computing, storage, and network resources along the path between the data source and the cloud computing center.
The massive data generated in the era of the Internet of Everything is pushing processing power forward toward the edge. The most urgent needs are processing power and storage resources, and the demand is very large. We also need data analysis tools, tools for pushing software and data to the edge, ways to coordinate the edge with a centralized cloud, and sometimes even machine learning at the edge itself. All of this clearly points to the need for more robust infrastructure capabilities at the edge.
We believe that hyperconvergence is suitable for edge computing for three reasons:
• It fits the role of the new edge node
– Edge computing moves computation close to the data; data-processing and analytics applications will be deployed at the edge, and hyperconvergence can give edge nodes stronger performance and more storage space.
– HCI is fully virtualized and broadly compatible with application software; with reasonable storage capacity, computing power, and elastic scaling, it can meet the needs of a wide range of systems.
• It copes with the constrained edge environment
– Operating conditions at the edge are worse than in an IDC: space, power, and ease of maintenance are all limited.
– A streamlined HCI node is smaller, takes up less floor space, consumes less power, needs less cooling, and offers modular rapid deployment and remote management.
• Significantly improved price/performance ratio
– Edge computing involves large-scale, high-volume procurement, so price/performance has to be considered.
– HCI hardware is getting stronger and prices are falling at the same time.
At the same time, compared with traditional systems, hyper-converged architecture is "prefabricated" modular infrastructure. It saves a great deal of project-by-project design and deployment work and creates a consistent, standardized paradigm and environment suited to application workloads. Many edge workloads today run on Linux or in VMs, so migration to hyperconverged infrastructure is largely seamless.
Hyper-converged products for edge computing still look different from ordinary hyper-converged products. Today hyperconvergence is mostly deployed centrally, often in large enterprise IDCs or cloud service provider IDCs. For edge scenarios, a more reasonable form is a streamlined, ruggedized hardware platform: small, low-power, highly maintainable, and secure, with additional features not found on standard products, such as GPS, encryption, or self-destruction. At the same time it must offer solid system compatibility, a reasonably complete OS platform, and the ability to run most basic data analysis platforms.
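As a rough way to see the contrast, here is a hypothetical sketch in Python (the NodeProfile class and every value in it are illustrative assumptions, not product specifications) comparing a conventional data-center HCI node with the streamlined, hardened edge form described above.

```python
from dataclasses import dataclass, field

@dataclass
class NodeProfile:
    """Illustrative hardware/feature profile; all values are made-up examples."""
    form_factor: str
    typical_power_w: int
    remote_management: bool
    extra_features: list = field(default_factory=list)

# A conventional hyper-converged node as deployed in a central IDC.
datacenter_node = NodeProfile(
    form_factor="2U rack server",
    typical_power_w=800,
    remote_management=True,
)

# A streamlined, ruggedized edge node: smaller, lower power, and carrying the
# extra protections mentioned above (GPS, encryption, self-destruction).
edge_node = NodeProfile(
    form_factor="short-depth, fanless 1U box",
    typical_power_w=150,
    remote_management=True,
    extra_features=["GPS", "disk encryption", "self-destruction"],
)

print(datacenter_node)
print(edge_node)
```

The point is not the specific numbers but the direction of the trade-off: the edge profile trades raw capacity for a smaller footprint, lower power and cooling needs, and hardening features, while keeping the same remote management and software stack.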
06
Outlook
Gartner predicts that by 2022, 75% of enterprises will have an edge computing strategy well under way. Over the next two years, as edge computing becomes widespread, we believe hyperconverged infrastructure can help meet many of the new challenges in this field. Rigid requirements such as small size, low power consumption, minimal deployment, and easy management and maintenance match its characteristics well, which strongly supports the development of edge computing.