Function Compute: Flexible Instance Specifications
At the recently concluded 2022 Hangzhou Yunqi Conference, Alibaba Cloud announced a comprehensive price cut for Function Compute (FC): the unit price of vCPU drops by 11%, and other independently billed items drop by as much as 37.5%. This across-the-board price cut makes Serverless more affordable: users pay as they go, paying only for what they use, and can adopt the Serverless architecture at a lower cost. But what technical upgrade inside Function Compute made this cost reduction possible? This article reveals the key technology behind the scenes: flexible instance specifications.
Pain point: inflexible specifications waste money
As users deploy increasingly diverse workloads to Function Compute, such as CPU-intensive and I/O-intensive jobs, some of which also need larger disks, a limitation became apparent: Function Compute could only allocate computing power in proportion to memory size. To satisfy the largest single resource requirement, users had to choose a specification bigger than they actually needed, which works against precise cost control. The small number of available specification tiers and the large gaps between them reduced flexibility even further.
To solve these pain points, Function Compute now lets users select each specification independently. This removes the fixed CPU-to-memory ratio and offers very fine-grained steps for setting function specifications, so a function uses exactly as much as it needs, improving resource utilization and reducing user costs.
How do flexible specifications reduce user costs?
The following concrete cases illustrate the advantages of flexible specification selection.
Match function specifications to actual resource usage
The figure below shows vCPU and memory usage while the function runs. The function's vCPU usage stays below 1.5 cores and its memory usage below 6 GB, so the waste is obvious: the user is paying for resources that are never used.
Before this feature was introduced, Function Compute offered memory specifications of 4 GB, 8 GB, 16 GB, and 32 GB. Because the function's peak memory usage exceeds 5 GB, it had to be configured with 8 GB of memory to run reliably, and that memory tier came with 4 vCPU cores.
Now we can set the function specification to 1.5 vCPU cores and 6 GB of memory, significantly improving resource utilization while cutting the cost to about 44% of the original.
• Before adjustment: 4-core, 8 GB function; cost per second = 4 cores × 0.000127 yuan/(core·s) + 8 GB × 0.0000127 yuan/(GB·s) = 0.0006096 yuan
• After adjustment: 1.5-core, 6 GB function; cost per second = 1.5 cores × 0.000127 yuan/(core·s) + 6 GB × 0.0000127 yuan/(GB·s) = 0.0002667 yuan
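The arithmetic above can be double-checked with a short script. The unit prices are the ones quoted in this article; actual Function Compute pricing may vary by region and over time:

```python
# Unit prices quoted in this article; actual FC pricing may differ.
VCPU_PRICE = 0.000127   # yuan per core-second
MEM_PRICE = 0.0000127   # yuan per GB-second

def cost_per_second(vcpu_cores: float, mem_gb: float) -> float:
    """Per-second cost of a function instance with the given specification."""
    return vcpu_cores * VCPU_PRICE + mem_gb * MEM_PRICE

before = cost_per_second(4, 8)     # 4-core, 8 GB: 0.0006096 yuan/s
after = cost_per_second(1.5, 6)    # 1.5-core, 6 GB: 0.0002667 yuan/s

print(f"cost ratio: {after / before:.2%}")  # prints "cost ratio: 43.75%"
```

The ratio 0.4375 is where the "about 44% of the original" figure comes from.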
Tip: watch the instance metrics on the monitoring page. If a function cannot make full use of its vCPU or memory, consider lowering its specification to reduce costs.
More fine-grained GPU computing power
In some algorithm scenarios, we deploy models to the cloud as online inference services. Suppose a model needs 1.8 GB of GPU memory. With the traditional approach of purchasing a GPU-equipped ECS instance, the smallest available GPU memory is 8 GB, so running a single online inference service leaves GPU utilization below 25%, a huge waste. Running multiple inference services on one machine improves utilization, but it requires building extra request scheduling, balancing the allocation of other resource types, and adds development and operations complexity, with almost no elasticity.
Compared with holding an entire GPU card, Function Compute can allocate just 2 GB of GPU memory to the function. This squeezes much more out of the resources while also providing strong elasticity, saving both resource costs and operations costs.
Flexible specification selection is now available to all users. You can modify the CPU, memory, disk, and GPU resources on the function configuration page of the Function Compute console, subject to the following rules:
• vCPU: 0.05 to 16 cores, in multiples of 0.05 cores
• Memory: 128 MB to 32 GB, in multiples of 64 MB
• GPU memory: 2 GB to 16 GB, in multiples of 1 GB
• Disk: 512 MB or 10 GB; Function Compute provides 512 MB of free disk space for each instance
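The rules above can be expressed as a small validation helper. This is a hypothetical sketch for illustration, not part of any Function Compute SDK:

```python
def is_multiple_of(value: float, step: float) -> bool:
    """Check divisibility with a tolerance to avoid floating-point artifacts."""
    ratio = value / step
    return abs(ratio - round(ratio)) < 1e-9

def validate_spec(vcpu: float, mem_mb: int,
                  gpu_gb: int = 0, disk_mb: int = 512) -> bool:
    """Validate a function specification against the rules listed above."""
    if not (0.05 <= vcpu <= 16 and is_multiple_of(vcpu, 0.05)):
        return False
    if not (128 <= mem_mb <= 32 * 1024 and mem_mb % 64 == 0):
        return False
    if gpu_gb and not (2 <= gpu_gb <= 16):  # whole GB in [2, 16]
        return False
    if disk_mb not in (512, 10 * 1024):     # only 512 MB or 10 GB offered
        return False
    return True

print(validate_spec(1.5, 6144))   # True  (1.5 cores, 6 GB memory)
print(validate_spec(0.07, 128))   # False (vCPU not a multiple of 0.05)
```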
If you use the Serverless Devs tool, you can achieve the same effect by configuring the memorySize, cpu, and diskSize attributes of the function field. memorySize and diskSize are in MB, and cpu is in cores.
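For example, the relevant fields in a Serverless Devs configuration might look like the fragment below. The cpu, memorySize, and diskSize field names come from this article; the surrounding layout and the function name are illustrative assumptions and may differ in your project:

```yaml
# Fragment of a Serverless Devs function configuration (layout illustrative)
function:
  name: my-function        # hypothetical function name
  runtime: python3
  handler: index.handler
  cpu: 1.5                 # cores, a multiple of 0.05
  memorySize: 6144         # MB, a multiple of 64 (here 6 GB)
  diskSize: 512            # MB, either 512 or 10240
```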
Behind the scenes: how it is implemented
The cases above show how practical and valuable this feature is. It is not a complex feature, so why is it only being introduced now?
To answer that question, we need to understand how the Function Compute resource management system works.
High-density deployment
In the past, Function Compute used Docker containers as the runtime environment for functions. To guarantee security in a multi-tenant environment, each user was assigned a dedicated virtual machine: even a small 128 MB function required a 2-core, 4 GB VM, and the platform bore the cost of the wasted resources. Because VM specifications were fixed, the only way to keep platform resource utilization reasonable was to let users set just the memory size, with the CPU-to-memory ratio pinned to that of the underlying VM.
Now, Function Compute runs user functions in secure containers on elastic bare-metal servers, packing 2,000+ function instances onto a single machine. This high-density deployment reduces platform costs.
Even with high-density deployment, independent specification configuration could not be offered directly. Arbitrary specifications mean many possible CPU-to-memory ratios, which increases scheduling complexity and can hurt platform resource utilization. For example, after only a few function instances land on a bare-metal server, its CPU may be fully used while memory utilization is still low, and no new function instance can be created on that machine.
To solve this, Function Compute builds a resource profile for each function and keeps statistics for every resource dimension. When scheduling function instances, it picks the machine where the instance's resource needs are most complementary to what is already running there.
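The idea of resource complementarity can be sketched as a simple scoring heuristic: place each new instance on the machine whose CPU and memory utilization stay most balanced after placement. This is a toy illustration of the concept, not Function Compute's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    cpu_total: float
    mem_total: float
    cpu_used: float = 0.0
    mem_used: float = 0.0

    def fits(self, cpu: float, mem: float) -> bool:
        return (self.cpu_used + cpu <= self.cpu_total and
                self.mem_used + mem <= self.mem_total)

    def imbalance_after(self, cpu: float, mem: float) -> float:
        """How lopsided CPU vs. memory utilization would be after placement."""
        cpu_util = (self.cpu_used + cpu) / self.cpu_total
        mem_util = (self.mem_used + mem) / self.mem_total
        return abs(cpu_util - mem_util)

def pick_machine(machines, cpu, mem):
    """Choose the machine whose utilization stays most balanced (complementary)."""
    candidates = [m for m in machines if m.fits(cpu, mem)]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m.imbalance_after(cpu, mem))

# One CPU-hot machine and one memory-hot machine.
a = Machine(cpu_total=16, mem_total=64, cpu_used=12, mem_used=16)  # CPU-hot
b = Machine(cpu_total=16, mem_total=64, cpu_used=4, mem_used=48)   # memory-hot

# A memory-heavy instance (1 core, 16 GB) complements the CPU-hot machine.
print(pick_machine([a, b], cpu=1, mem=16) is a)  # prints "True"
```

A memory-heavy instance lands on the CPU-hot machine (and vice versa), so both dimensions of each machine fill up together instead of one resource stranding the other.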
Built on years of technical accumulation and efficient scheduling strategies, Function Compute has continuously improved the utilization of its resource pools. That is what makes it possible to offer independent specification configuration, giving users a more flexible experience and an easier way to enjoy the dividends of Serverless technology.
Knowledge Base Team