Alibaba Cloud SAE Releases 5 New Features

For microservices, is open-source self-hosting really the fastest, most economical, and most stable route? Is complexity truly the "Achilles' heel" of Kubernetes? Must every enterprise cross the narrow bridge of K8s to containerize its applications? Serverless is often seen as a niche technology, used mostly in non-core scenarios with simple logic such as mini programs, ETL jobs, and scheduled backups. Are Java microservices really out of reach for Serverless?

At the 2021 Yunqi Conference, Ding Yu (Shutong), an Alibaba researcher and general manager of the cloud-native application platform at Alibaba Cloud Intelligence, unveiled the new positioning and five new features of Serverless App Engine (SAE), answering the questions above.

From dedicated to general-purpose: SAE is a natural fit for running enterprise core businesses at scale

Unlike FaaS-style Serverless, SAE is "application-centric": it provides an application-oriented UI and API without changing the programming model or deployment method, so customers keep the same development and deployment experience they have on traditional servers. It also supports local development, debugging, and monitoring, greatly lowering the barrier to adopting Serverless and enabling online enterprise applications to migrate smoothly with zero code changes.

Because of this, SAE has pushed Serverless from a dedicated tool to a general-purpose platform, breaking the boundaries of where Serverless can land. Serverless is no longer the exclusive domain of front-end full-stack development and mini programs; backend microservices, SaaS services, IoT applications, and more can all be built on it, making it a natural fit for running enterprise core businesses at scale.

From complex to simple: SAE enables zero-threshold containerization for enterprises

Unlike open-source self-built microservices, SAE provides a complete, out-of-the-box set of microservice governance capabilities proven in the Double 11 shopping festival. Customers do not need to worry about framework selection, data isolation, distributed transactions, circuit-breaker design, or rate limiting and degradation, nor about limited community maintenance and the cost of secondary customization.

Spring Cloud and Dubbo applications can be migrated seamlessly with zero code changes. On top of the open-source capabilities, SAE adds advanced features such as lossless startup and shutdown (graceful online/offline), service authentication, and full-link grayscale release.
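
To make concrete what "rate limiting and degradation" and "circuit breaking" involve when teams build them by hand, here is a minimal, self-contained Java sketch of a token-bucket rate limiter and a consecutive-failure circuit breaker. It is purely illustrative of the governance primitives SAE ships as managed features; none of these class names come from SAE or any framework.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch only: a hand-rolled token-bucket rate limiter and a
// consecutive-failure circuit breaker, i.e. the kind of governance logic an
// out-of-the-box platform takes off the application's hands.
public class GovernanceSketch {

    /** Token bucket: allow at most `capacity` calls per refill window. */
    static class TokenBucket {
        private final int capacity;
        private final AtomicInteger tokens;

        TokenBucket(int capacity) {
            this.capacity = capacity;
            this.tokens = new AtomicInteger(capacity);
        }

        boolean tryAcquire() {
            // Take a token if one is available; otherwise reject (degrade).
            return tokens.getAndUpdate(t -> t > 0 ? t - 1 : t) > 0;
        }

        void refill() {
            tokens.set(capacity); // would be called once per window by a scheduler
        }
    }

    /** Circuit breaker: stop calling downstream after `threshold` consecutive failures. */
    static class CircuitBreaker {
        private final int threshold;
        private int consecutiveFailures = 0;

        CircuitBreaker(int threshold) { this.threshold = threshold; }

        synchronized boolean allowRequest() { return consecutiveFailures < threshold; }
        synchronized void recordSuccess()   { consecutiveFailures = 0; }
        synchronized void recordFailure()   { consecutiveFailures++; }
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(2);
        CircuitBreaker breaker = new CircuitBreaker(3);

        for (int i = 1; i <= 4; i++) {
            if (!limiter.tryAcquire() || !breaker.allowRequest()) {
                System.out.println("request " + i + ": degraded (rejected)");
                continue;
            }
            System.out.println("request " + i + ": forwarded to downstream service");
        }
    }
}
```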

SAE also shields users from the technical details of K8s, enabling zero-threshold containerization of enterprise applications so that they embrace K8s without even noticing it:

• Automatic image building: besides container images, applications can be deployed as WAR/JAR packages or PHP zip packages, lowering the barrier of producing Docker images.

• Network and storage abstraction: SAE hides the complex adaptation of K8s network and storage plug-ins, allocates each application instance an IP address reachable within the VPC, and persists data to the storage system.

• Managed operations: K8s operation, maintenance, and upgrades are handled by the platform, so users no longer worry about the stability risks of K8s version upgrades.

• Built-in observability and elasticity: SAE integrates the K8s monitoring components and elasticity controllers behind the scenes, offering end-to-end observability through the console and flexible, diverse elasticity policy configurations.

Users keep their existing packaging and deployment methods while directly enjoying the technology dividend of K8s.

Five new features highlight new advantages of Serverless and extend its boundaries

• Resilience 2.0: the industry's first hybrid elasticity strategy, supporting a mix of scheduled (timer-based) and metric-based policies. On top of open-source K8s capabilities, it adds business metrics such as TCP connection count and SLB QPS/RT as elasticity triggers, and supports advanced settings such as scaling step size and cooldown time (a conceptual sketch of combining such policies follows this list).

• Java cold start 40% faster: based on the enhanced AppCDS startup acceleration technology in Alibaba Dragonwell 11, SAE saves the cache generated during an application's first startup and launches subsequent instances directly from that cache. Compared with standard OpenJDK, cold start time is cut by 40% (see the AppCDS sketch after this list).

• Extreme deployment efficiency of 15 seconds: with a full-link upgrade of the underlying stack, secure sandbox container 2.0, image acceleration, and more, SAE provides an end-to-end deployment experience of 15 seconds.

• One-stop PHP application hosting: SAE supports direct deployment from a PHP zip package, with selectable PHP runtime environments and built-in application monitoring, delivering a one-stop PHP application hosting experience.

• A richer developer tool chain: in addition to developer tools such as Cloud Toolkit, the CLI, and VS Code, SAE adds support for Terraform and Serverless Devs. Built on resource orchestration, an SAE application and the cloud resources it depends on can be deployed in one click, making environment setup easier.
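
To illustrate the idea behind the hybrid elasticity strategy in the Resilience 2.0 item above, here is a hypothetical Java sketch that combines a scheduled baseline with a metric-driven target, takes the larger of the two, and applies a scaling step size and cooldown. It is a conceptual model only; the method names and thresholds are invented for the example and do not reflect SAE's actual controller or API.

```java
import java.time.LocalTime;

// Hypothetical sketch of a hybrid elasticity decision: combine a scheduled
// baseline with a metric-driven target, then apply step-size and cooldown
// limits. Illustrative only; not SAE's real controller or API.
public class HybridScalerSketch {

    // Scheduled strategy: keep a higher baseline during business hours.
    static int desiredBySchedule(LocalTime now) {
        boolean businessHours = !now.isBefore(LocalTime.of(9, 0))
                             && now.isBefore(LocalTime.of(21, 0));
        return businessHours ? 10 : 2;
    }

    // Metric strategy: scale proportionally to observed QPS per instance
    // (targetQpsPerInstance is an assumed tuning parameter).
    static int desiredByMetric(double totalQps, double targetQpsPerInstance) {
        return (int) Math.ceil(totalQps / targetQpsPerInstance);
    }

    // Hybrid decision: take the larger of the two targets, move at most
    // `maxStep` instances per adjustment, and skip scaling during cooldown.
    static int nextReplicas(int current, int scheduled, int metric,
                            int maxStep, boolean inCooldown) {
        if (inCooldown) {
            return current;
        }
        int target = Math.max(scheduled, metric);
        int delta = Math.max(-maxStep, Math.min(maxStep, target - current));
        return current + delta;
    }

    public static void main(String[] args) {
        int current = 4;
        int scheduled = desiredBySchedule(LocalTime.of(10, 30)); // 10
        int metric = desiredByMetric(1500.0, 100.0);             // 15
        System.out.println("next replicas: "
                + nextReplicas(current, scheduled, metric, 3, false)); // 7
    }
}
```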
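
For background on the caching mechanism behind the Java cold-start item above, the sketch below walks through the standard OpenJDK AppCDS flow on JDK 11: record the classes the application loads, dump them into a shared archive, and start later instances from the archive. Dragonwell 11 and SAE automate an enhanced version of this; the flags shown are plain OpenJDK ones, and the class is just a stand-in entry point.

```java
// Background on the caching idea: standard OpenJDK application class-data
// sharing (AppCDS). On JDK 11 the manual flow looks roughly like this
// (Dragonwell 11 / SAE automate an enhanced version of it):
//
//   1. Record which classes the app loads on its first run:
//        java -XX:DumpLoadedClassList=classes.lst -jar app.jar
//   2. Dump those classes into a shared archive:
//        java -Xshare:dump -XX:SharedClassListFile=classes.lst \
//             -XX:SharedArchiveFile=app.jsa -cp app.jar
//   3. Start subsequent instances from the archive instead of re-loading
//      and re-verifying every class:
//        java -Xshare:on -XX:SharedArchiveFile=app.jsa -jar app.jar
//
// The class below is only a stand-in for "app.jar": it prints how long the
// JVM took to reach main(), which is the figure the archive shortens.
import java.lang.management.ManagementFactory;

public class ColdStartProbe {
    public static void main(String[] args) {
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.println("JVM reached main() after " + uptimeMs + " ms");
    }
}
```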

Four best practices that set an example for All on Serverless

Low-threshold microservice architecture transformation

Faster, more economical, and more stable than open-source self-built microservices. As business grows rapidly, many enterprises face the challenge of moving from a monolith to a microservice architecture, or find that their self-built microservices cannot meet their stability and diversification needs. With SAE's full set of out-of-the-box microservice capabilities, learning and research costs drop, and the stability proven in "Double 11" lets these enterprises complete the microservice transformation quickly and bring new businesses online fast. This is SAE's most widely used scenario and arguably the best Serverless practice in the microservice field.

One-click start/stop of development and test environments

Medium and large enterprises run multiple environments, and their development, test, and staging environments are typically not needed 24/7. Yet the application instances are kept running all the time, so idle waste is high; in some enterprises CPU utilization approaches zero, making the demand for cost reduction obvious. With SAE's one-click start/stop capability, these enterprises can release resources on demand; the development and test environments alone can save two thirds of the machine cost, which is significant (for example, an environment that only runs during an eight-hour working day consumes one third of the instance hours of one left on around the clock). Next, SAE will use K8s orchestration capabilities to orchestrate applications and their resource dependencies, so a full environment can be initialized or cloned in one click.

Full-link grayscale release

More powerful than the grayscale capability provided by open-source K8s Ingress. Drawing on the scenario characteristics of PaaS-layer customers, SAE not only implements the layer-7 traffic grayscale of K8s Ingress, but also achieves full-link grayscale at the interface and method level, from front-end traffic through multiple cascaded microservices.

Deployment, operation, and maintenance are also more convenient than with the original scheme. Previously, customers had to deploy applications into two namespaces and maintain two complete environments to run production and grayscale releases side by side, which meant high hardware costs and cumbersome deployment and operations. With SAE, customers deploy a single environment and configure grayscale rules that route specific traffic to specific instances, cascading level by level. This both limits the blast radius and saves hardware costs.
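
As a rough illustration of what such a rule does at each hop (a conceptual sketch, not SAE's implementation or API), the snippet below routes requests that carry a gray tag to instances labeled with the same tag and sends everything else to the baseline group; repeating this selection at every downstream call is what makes the grayscale "full link".

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of full-link grayscale routing: a request carries a gray
// tag, and at every hop the caller prefers downstream instances labeled with
// that tag, falling back to baseline instances. Illustrative only.
public class GrayRoutingSketch {

    static class Instance {
        final String address;
        final String grayTag; // null => baseline instance
        Instance(String address, String grayTag) { this.address = address; this.grayTag = grayTag; }
        @Override public String toString() {
            return address + (grayTag == null ? " (baseline)" : " (" + grayTag + ")");
        }
    }

    // Pick the candidate instances for one hop: requests carrying a gray tag
    // prefer instances labeled with that tag; everything else (and tagged
    // traffic with no matching instance) falls back to the baseline group.
    static List<Instance> selectInstances(String requestGrayTag, List<Instance> all) {
        List<Instance> tagged = new ArrayList<>();
        List<Instance> baseline = new ArrayList<>();
        for (Instance i : all) {
            if (i.grayTag == null) baseline.add(i);
            else if (i.grayTag.equals(requestGrayTag)) tagged.add(i);
        }
        return tagged.isEmpty() ? baseline : tagged;
    }

    public static void main(String[] args) {
        List<Instance> orderService = List.of(
                new Instance("10.0.0.11", null),
                new Instance("10.0.0.12", "gray-v2"));

        // The tag travels with the request from hop to hop, so the same
        // selection repeats at each downstream service ("full link").
        System.out.println(selectInstances("gray-v2", orderService)); // gray instance only
        System.out.println(selectInstances(null, orderService));      // baseline instance only
    }
}
```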

Use SAE as an elastic resource pool to optimize resource utilization

Most customers run fully on SAE, while a smaller number keep the steady, always-on portion of a workload on ECS and use SAE as an elastic resource pool, deploying the two in a hybrid manner.

The setup only requires that the ECS instances and SAE instances of the same application are attached to the backend of the same SLB, with an appropriate weight ratio; microservice applications must also register with the same registry. In addition, the customer's existing release system is reused to keep the SAE and ECS instances on the same version for every release, and the customer's existing monitoring system is reused by sending SAE monitoring data to it via OpenAPI alongside the ECS monitoring data. When a traffic peak arrives, the elasticity module scales all elastic instances out on SAE, greatly improving scale-out efficiency and reducing costs.
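
As a simple illustration of what the SLB weight ratio means for this hybrid deployment (a conceptual sketch with made-up weights, not SLB's actual algorithm), the snippet below splits requests between the steady ECS group and the elastic SAE group in proportion to their weights.

```java
import java.util.concurrent.ThreadLocalRandom;

// Conceptual sketch of weight-based traffic splitting between the steady ECS
// group and the elastic SAE group behind one load balancer. Not SLB's actual
// implementation; the weights are arbitrary example values.
public class WeightedSplitSketch {

    static String pickBackend(int ecsWeight, int saeWeight) {
        int roll = ThreadLocalRandom.current().nextInt(ecsWeight + saeWeight);
        return roll < ecsWeight ? "ecs-group" : "sae-group";
    }

    public static void main(String[] args) {
        int ecs = 0, sae = 0;
        // With weights 70:30, roughly 70% of requests land on the ECS group
        // carrying the steady baseline and 30% on the elastic SAE group.
        for (int i = 0; i < 10_000; i++) {
            if (pickBackend(70, 30).equals("ecs-group")) ecs++; else sae++;
        }
        System.out.printf("ecs=%d sae=%d%n", ecs, sae);
    }
}
```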

This hybrid scheme is also suitable as an intermediate, transitional architecture when migrating from ECS to SAE, further improving stability during the migration.

SAE's five new features and four best practices break the boundaries of Serverless adoption and make All on Serverless possible. Zero-threshold containerization of applications unifies container, Serverless, and PaaS, combining advanced technology, optimized resource utilization, and an unchanged development and operations experience.
