How to develop a standard cloud-native application
IDC expects that by 2024, driven by the adoption of technologies such as microservices, containers, dynamic orchestration, and DevOps, cloud-native applications will grow from 10% of new production-grade applications in 2020 to 60%, with microservice workloads accounting for more than 80% inside the enterprise. These four technologies are the core of the cloud-native era, and of them, microservices are probably what developers are most enthusiastic about. Judging from recent years, microservice frameworks in the Java world are maturing, and their integration with cloud native is getting ever closer; data from EDAS shows that Spring Cloud plus Kubernetes has basically become the mainstream combination for microservice architectures. Yet another figure makes me curious: at present, fewer than 8% of developers have production experience with microservices in cloud-native scenarios. Why is that? I see two main reasons:
First, the learning curve of microservices itself is steep. It is hard enough to master a framework well enough for production practice, and on top of that developers are immediately confronted with the many complex concepts of cloud native and the technical challenge of building complex environments. Together, these two factors create a fear of the unknown, so in teams with less strategic determination the final results are often unsatisfactory.
For this, I suggest following our team's products and the related courses. The cloud-native team offers a large number of courses at Alibaba Cloud University, and many products provide good tooling. We also have extensive experience with digital transformation in many fields, including landing microservices inside R&D teams; interested readers are welcome to leave us a message, and we can have an in-depth exchange.
Second, developers have long held a misconception that there is no difference between cloud-native applications and the applications they normally develop: after all, it is still writing code, releasing, deploying, and troubleshooting. But there really is a difference. What is it? Heroku summarized it for us as twelve factors, known in the community as the twelve-factor app. Only applications that satisfy these twelve factors can properly be called cloud-native applications.
The Twelve-Factor App
All twelve factors are listed here; the ones highlighted in orange relate to today's content. Let me briefly walk through each one:
Factor 1: Codebase. One codebase, deployed in many places; conversely, all deployments come from that one codebase, and it must stay one codebase. When is it not? For example, when we patch something directly in the production environment and forget to sync it back. In other words, this factor constrains our R&D process and guarantees that the code stays consistent across environments.
Factor 2: Dependencies. Explicitly declare dependencies. This is easy to understand: anyone writing Java uses tools such as Ant, Maven, or Gradle to declare them. Two points are easily overlooked, though. The first is depending on a snapshot version, which, for lack of effective tracking, eventually causes an online failure. The second is implicit dependencies: the runtime's dependencies on system software and the execution engine should also be treated as part of the application.
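As a sketch of explicit declaration, a Maven fragment might pin an exact released version rather than a snapshot. The starter shown here is a real Alibaba Cloud artifact, but treat the coordinates and version as illustrative for your own build:

```xml
<!-- Pin exact release versions; avoid -SNAPSHOT dependencies in anything
     that can reach production, since snapshot contents change without trace. -->
<dependencies>
  <dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    <version>2021.0.5.0</version> <!-- a fixed release, not ...-SNAPSHOT -->
  </dependency>
</dependencies>
```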
Factor 3: Config. The conventional understanding is separating code from configuration; strictly speaking, the entire runtime environment (the image) should be isolated from configuration. What does that mean? The core question is whether the configuration a service depends on can be flexibly replaced through conventional operations methods at startup and at runtime: changing environment variables, changing startup parameters, or configuring the service through a distributed configuration service.
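A minimal sketch of this factor in plain Java: resolve a setting from the startup parameters or environment rather than baking it into the artifact. The key names and fallback order here are assumptions for illustration, not a fixed convention:

```java
public class ConfigResolver {
    // Look a key up first as a JVM startup parameter (-Dkey=value),
    // then as an environment variable (dots mapped to underscores,
    // upper-cased), and only then fall back to a default. This lets
    // operations swap the value without rebuilding the image.
    public static String resolve(String key, String defaultValue) {
        String fromProperty = System.getProperty(key);
        if (fromProperty != null) {
            return fromProperty;
        }
        String fromEnv = System.getenv(key.replace('.', '_').toUpperCase());
        if (fromEnv != null) {
            return fromEnv;
        }
        return defaultValue;
    }
}
```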
Factor 4: Backing Services. All backend services, meaning everything reached through a network call, should be treated from the same perspective: as attached resources. This has two implications. The first concerns the interface and method of access, such as a URI. The second is that, since they are resources, we must consider their availability, and because all backing services are resources, their availability should be treated equally.
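Treating a backing service as an attached resource often reduces to locating it by a URL held in config, so swapping a local database for a managed one is only a config change. A small sketch, with a hypothetical `mysql://` resource URL:

```java
import java.net.URI;

public class BackingService {
    // Parse an attached-resource URL into the pieces a client needs.
    // The URL itself would come from config (factor 3), never from code.
    public static String hostOf(String resourceUrl) {
        return URI.create(resourceUrl).getHost();
    }

    public static int portOf(String resourceUrl) {
        return URI.create(resourceUrl).getPort();
    }
}
```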
Factor 5: Build, Release, Run. These should be strictly separated because the capabilities we care about differ in each stage: Build focuses on the produced artifact, Release should focus on strategy, and Run should focus on traffic management, keeping traffic as lossless as possible.
Factor 6: Stateless processes. Statelessness at the pure process level is mainly about what data the process depends on at startup. Its purpose is to enable fast scaling, even automatic elastic scaling; statelessness plus intervention-free startup is the key step towards elasticity.
Factor 7: Port Binding. All exposed services should be exposed through ports. By contrast, applications that expose themselves through Unix sockets or IPC greatly increase the complexity of operating services at scale.
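A port-bound service in its simplest form, using the JDK's built-in HTTP server. The `PORT` environment variable, the `/health` path, and the default of 8080 are assumptions for the sketch, not requirements of the factor:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PortBoundService {
    // Export the service purely via a TCP port; whatever sits in front
    // (a Kubernetes Service, a load balancer) only needs that port.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        // PORT is a hypothetical env var; port 0 would mean "any free port".
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        start(port);
    }
}
```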
Factor 8: Concurrency. The idea here is that when service capacity needs to flex, Scale Out (more processes) is recommended over Scale Up. Scale Out lets the application make the most of system resources across various machine specs, while Scale Up often brings extra system overhead and, for JVM-like runtimes, more complex GC behavior.
Factor 9: Disposability. Take a longer view of the fast-startup scenario: referring back to the Run phase in Factor 5, it should span from a fresh environment being ready to the process actually serving traffic, which includes environment preparation, package pulling, and everything through process startup. On the graceful-termination side, it is easy to overlook draining background work such as messages, scheduled tasks, and thread pools.
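The easily forgotten part, draining background thread pools on termination, can be sketched with a JVM shutdown hook. The 30-second grace period is an assumed value; real deployments should match the platform's termination grace period:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    // Stop accepting new work, then wait for in-flight background tasks
    // to finish; returns false if the deadline passed and tasks were
    // force-cancelled instead.
    public static boolean drain(ExecutorService pool, long timeoutSeconds) {
        pool.shutdown();                 // reject new tasks
        try {
            if (pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                return true;             // everything in flight completed
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdownNow();              // interrupt the stragglers
        return false;
    }

    // Wire the drain to SIGTERM so graceful termination covers the pool.
    public static void register(ExecutorService pool) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> drain(pool, 30)));
    }
}
```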
Factor 10: Dev/prod parity. Easy to understand, hard to implement. The prerequisite is to eliminate all manual operations on the environment. Cloud native has a corresponding concept called "immutable infrastructure": at runtime, the environment strictly follows the configuration declared in code, and if it needs to change, it must be declared again.
Factor 11: Logs. Treat logs as event streams. Concretely, print all logs to standard output instead of writing them to files via configuration, and use a dedicated centralized log service to collect, aggregate, clean, and query them. This is more than an operations convention: it also decouples the application from the local disk, and an application in a cloud-native scenario must be prepared for disk data to disappear at any time.
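In its barest form, logging as an event stream is just one structured line per event on stdout, with shipping and aggregation left to a collector outside the process. A sketch, with the line format being an arbitrary choice for illustration:

```java
import java.time.Instant;

public class StdoutLog {
    // One event per line: timestamp, level, event name. The application
    // never touches the disk; a collector tails stdout and ships it.
    static String line(Instant ts, String level, String event) {
        return ts + " " + level + " " + event;
    }

    public static void log(String level, String event) {
        System.out.println(line(Instant.now(), level, event));
    }
}
```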
Factor 12: Admin processes. These are maintenance tasks such as clearing logs or caches and correcting data. They should be regarded as part of the business, not separated from the product as a whole, and they must follow the previous eleven factors as well: one codebase, explicitly declared dependencies, code decoupled from configuration, statelessness, and so on.
Building the environment with conventional open-source tools
First, we need to prepare a microservice environment. At minimum we need a component for service registration and discovery in the microservice scenario, such as Nacos. Of course, as the business grows more complex, more and more components are needed: APM components, log service components, and so on.
Then we create a code project locally in an IDE and start development; in Java, Maven handles dependency management.
Third, we commit the code to a Git repository. All environment changes should be strictly separated along the Build/Release/Run process, and Jenkins achieves this well today. Creating a pipeline in Jenkins involves several tasks: program build, image build, image upload, and finally initiating the corresponding workload change against K8s.
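The tasks above could be sketched as a declarative Jenkins pipeline. The registry address, image name, and deployment name below are placeholders, not values from any real project:

```groovy
// Hypothetical declarative pipeline mirroring the Build/Release/Run split.
pipeline {
    agent any
    stages {
        stage('Build')         { steps { sh 'mvn -B clean package' } }
        stage('Image build')   { steps { sh 'docker build -t registry.example.com/demo-app:${BUILD_NUMBER} .' } }
        stage('Image upload')  { steps { sh 'docker push registry.example.com/demo-app:${BUILD_NUMBER}' } }
        stage('Deploy to K8s') { steps { sh 'kubectl set image deployment/demo-app app=registry.example.com/demo-app:${BUILD_NUMBER}' } }
    }
}
```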
EDAS Core turns the four open-source steps into one
The EDAS team provides a more convenient way: EDAS Core, a lightweight distribution of the Alibaba Cloud product EDAS that includes EDAS's application lifecycle management and microservice governance capabilities and ships with a webshell. Its minimal installation needs only 4C8G; it is light and simple, can be brought up on a laptop or installed on any standard K8s cluster, and is free to use for development and testing. With it, the four steps above become one:
Step 1: Download the EDAS Core installation package and extract it.
Step 2: Confirm the cluster's kubeconfig file and put it in the corresponding location.
Step 3: Enter the installation directory and run the installation script. The whole process takes about 7-8 minutes.
With EDAS Core, you do not need to prepare a complex Dockerfile and image: just select the corresponding environment step by step, upload a regular package, and set the corresponding rule parameters. The setup of microservice components such as Nacos, and the configuration of their address parameters, is also handled by default.
At the same time, EDAS provides an official Jenkins plug-in for integrating with Jenkins: supply the EDAS application ID and the address of the deployment package in the plug-in, and the pipeline hookup is complete.
Pain points in microservice development
Environment construction is only the first step. In microservice development, one pain point is how a self-developed application joins an existing large cluster in the ecosystem; another is how two applications shipping a new capability together can be jointly debugged by two developers, precisely, according to certain traffic rules, without affecting anyone else. As shown below:
To solve these two problems, we recommend the "End Cloud Interconnection" feature in Alibaba Cloud Toolkit. With this solution, you only need to provide a jump host whose SSH port is reachable, and both problems are solved:
Meanwhile, for the precise joint-debugging scenario, you can create a swimlane group in EDAS, specify the entry application, and set traffic rules in the cloud, then have both developers join the swimlane group; all traffic matching the rules is then routed precisely to their nodes.
Beyond that, Cloud Toolkit can deploy to the environment directly from the IDE with one click, and provides a one-click webshell into the pod to ease troubleshooting and diagnosis. This capability will also land in the commercial edition of EDAS in the near future.
Once all applications are developed, we still have to prepare many new environments: test, stress-test, staging, production, and so on. Preparing them becomes more painful for operations staff as the number of applications grows. For this scenario, EDAS Core provides a one-click function to move all applications to the cloud.
One last Easter egg: a free edition launched in the Chengdu region
To dispel the worry some readers have about the cost of the commercial edition after moving to the cloud, at the end of August EDAS launched a new offering in the Chengdu region. In it, EDAS unbundles three capabilities, application control, microservice governance, and APM monitoring, so you can mix and match flexibly according to your needs. You are welcome to try it and give feedback.
Knowledge Base Team