DevOps Practice: Linking CodePipeline with Containers

Introduction: After launching its container service, Alibaba Cloud developed the continuous delivery tool CodePipeline, which provides continuous delivery wizard templates for multiple languages. Developers can quickly fill in a template to set up continuous integration and achieve continuous delivery across platforms and environments.

At the 41st Yunqi TechDay salon, Liusheng, a senior development engineer at Alibaba Cloud, presented the DevOps practice of linking CodePipeline with containers. This article is compiled from that talk.
The developer community wants Docker-based continuous delivery to be simpler, safer, and more efficient, and CodePipeline is Alibaba Cloud's answer to that need.

The developer's "last mile" problem on the cloud



The "last mile" of cloud computing refers to the final step of bringing now-mature cloud technology smoothly into enterprises. According to RightScale's 2017 survey of the cloud computing market, only about 1% of enterprises neither use cloud technology nor plan to adopt a cloud platform, which shows how widespread cloud computing has become. As the trend toward the cloud grows, more and more developers are moving their work onto it. The last mile problem for developers on the cloud is how to get from code submission to service release and operation, covering configuration management, integration, testing, deployment, and technical operations.
How can this step be done?
I believe developers and operators need the ability to extend into each other's roles: developers can participate in operations work, operators can participate in development and design, and both sides can feed information back to each other quickly. This makes the value delivered to users much more prominent. The technical value of solving the last mile well for developers on the cloud lies in:
1. Increase delivery frequency;
2. Reduce the delivery failure rate;
3. Shorten the delivery cycle;
4. Reduce the mean time to repair;
5. Focus effort on development activities that create value.
A simple example: when we had just begun testing the product, we provided only two build environments, Java and Node.js. Some users told us they were using PHP and asked whether we could provide a PHP build environment as soon as possible. Our development priority at the time was adding overseas build nodes and integrating open-source code hosting, but because we received this user feedback promptly, we raised the PHP environment to the highest priority, recognizing that what users ask for is the most valuable. From development and testing through verification to launch took about three working days, after which users could already use the feature.

Jenkins and container technology



Jenkins has been in the open source world for many years and has been thoroughly tested in production. It has a huge community, an extensive knowledge base, and more than 1,000 plug-ins, so it can integrate an end-to-end continuous delivery tool chain. It also provides extension interfaces: if you want to do an integration yourself, you can extend it through those interfaces, and the build stage exposes interfaces as well, so if the community does not provide a plug-in you can develop your own. In a 2016 Java development tool survey, Jenkins accounted for 60% of usage; in the 2016 Chinese developer white paper survey it accounted for 70%; it holds about 70% of the CI market; and it was named the most popular continuous integration tool in a 2017 technology trend forecast.

Jenkins uses a master-plus-slave architecture: the master is responsible for job scheduling, and the slaves are responsible for job execution. Slave nodes can be labelled and grouped by purpose. Do not mix build jobs onto nodes used for testing, because data pollution may occur: with differing dependency versions, an old dependency may be upgraded to a new one while packaging, and the build job may fail.
A typical development flow involves the source code repository, packaging and build, deployment for testing, and redeployment for verification, with any problems in the environment communicated and fed back promptly; all of this can be coordinated through the Jenkins master. Each part is a job: jobs with dependencies can be executed sequentially, and independent jobs can run in parallel, which shortens the overall build cycle as much as possible. The basic configuration of a Jenkins job includes basic information, source code management, build triggers, build and deployment steps, and post-build actions, chiefly notifications by email, DingTalk, or other communication tools.
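The job structure described above can be sketched as a declarative Jenkinsfile. This is a minimal illustration, not the speaker's actual configuration: the repository URL, agent label, and email address are placeholders.

```groovy
// Hypothetical Jenkinsfile sketch: a labelled build agent, an SCM trigger,
// sequential stages, and a post-build notification, mirroring the job
// configuration sections described in the text.
pipeline {
    agent { label 'build' }              // run only on slaves labelled 'build'
    triggers {
        pollSCM('H/5 * * * *')           // poll source control roughly every 5 minutes
    }
    stages {
        stage('Checkout') {
            steps { git url: 'https://example.com/demo/app.git' }
        }
        stage('Build') {
            steps { sh 'mvn -B package' } // package the application
        }
        stage('Deploy to test') {
            steps { sh './deploy.sh test' } // placeholder deployment script
        }
    }
    post {
        failure {
            // post-build action: notify the team when the build fails
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

Independent stages could instead be wrapped in a `parallel` block, matching the point above about running jobs without dependencies concurrently.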
DevOps

Among DevOps tools, container technology features prominently, with Docker chief among it alongside configuration-management tools such as Chef, and adoption of container technology has passed 60%.

Docker packages software into standard units that can be used for development, deployment, and delivery. Architecturally, a container mainly shares the host's OS kernel, whereas a virtual machine requires a hypervisor installed on the physical environment: the hypervisor virtualizes CPU and memory, and each VM runs its own operating system with its own kernel. Compared with VMs, containers therefore have strong advantages in resource footprint, startup speed, concurrency, and resource utilization: they start as fast as possible, use resources efficiently, and impose little performance overhead, which fits the DevOps goal of shortening the delivery cycle.
Delivery environment
In addition, containers package a standard delivery environment. Sometimes a program developed in a 64-bit environment is run on a 32-bit machine and hits compatibility problems, or there are dependency problems; containers solve these by packaging everything into a Docker image. A Docker image uses a union file system and is built up one layer at a time: at the bottom is a base image, such as the Ubuntu image that Docker itself provides, on top of which you install your dependencies, and then your own application. Once packaged this way, no matter where you deploy the application in the future, there will be no compatibility or dependency problems.
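The layering described above can be illustrated with a minimal Dockerfile. The image name and artifact path here are hypothetical; each instruction adds one read-only layer on top of the base image.

```dockerfile
# Base image layer (an official Ubuntu image)
FROM ubuntu:16.04

# Dependency layer: install the runtime the application needs
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-8-jre

# Application layer: copy the packaged artifact into the image
COPY target/app.jar /opt/app/app.jar

# Default command when a container is started from this image
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because the dependencies travel inside the image, the same artifact runs identically on any host with a Docker engine.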

If your application is more complex, for example it also depends on a database service, then the components must start in a certain order and the database must provide parameters to the front-end service. Such orchestration can be done with Docker Compose, whose advantage is that it simplifies complex applications.
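As a sketch of that kind of orchestration, the following hypothetical docker-compose.yml declares a web service that depends on a database: `depends_on` controls start order, and the database connection settings are passed to the front-end service through environment variables. All image names and values are placeholders.

```yaml
version: '2'
services:
  web:
    image: registry.example.com/demo/web:1.0
    ports:
      - "80:8080"          # expose the service port on the host
    environment:
      - DB_HOST=db         # the database's service name resolves inside the network
      - DB_PORT=3306
    depends_on:
      - db                 # start the database before the web service
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=example   # placeholder credential
```

A single `docker-compose up` then brings the whole application up in the right order.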

If you want to actually deploy the service online for users, the application needs to run in a cluster. The cluster provides networking, storage, scheduling, and orchestration, and also gives the service high availability: you can run multiple replicas there, so that the failure of a single node does not bring the whole service down.
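In a Swarm-style cluster, the replica count can be declared in the orchestration template itself. This is an illustrative Compose file format v3 fragment (image name is a placeholder), not a configuration from the talk:

```yaml
version: '3'
services:
  web:
    image: registry.example.com/demo/web:1.0
    ports:
      - "80:8080"
    deploy:
      replicas: 3              # run three copies across the cluster
      restart_policy:
        condition: on-failure  # reschedule a replica if its node fails
```

With three replicas behind the cluster's routing layer, losing one node leaves the service running on the remaining two.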
Linking CodePipeline with container technology

The picture shows the process a developer follows from submitting code to deploying into a test or pre-release environment. First there must be a source code repository, which can be private or public: Alibaba Cloud has its own Alibaba Cloud Code product, GitHub is available on the public network, and GitLab is widely used for private hosting; Jenkins has plug-in support for all of them. The repository feeds the Jenkins CI server and CD server. CI needs a source code trigger: after you submit code, a trigger configured to your needs monitors the action and automatically starts the job. Once triggered, the job builds and packages the application, places it inside a Docker image via a Dockerfile, and pushes it to your private Docker image registry. The next step is deployment: using your own Docker Compose orchestration template, you deploy the service, specifying how many replicas to run and which ports the service exposes. All of this can be done on public or private cloud, and we recommend using a container cluster for application deployment.
Developers who build a DevOps pipeline themselves may encounter several pain points:
•Daily operation and maintenance. The most important aspect of daily operations is data: you need to back up useful data promptly, and regularly clean up useless data. In a private cloud environment, backup may depend on highly available storage, and you may need to write your own scripts to clean up data on a schedule.
•Upgrades. If you build it yourself, the Jenkins master iterates at a steady pace, and its plug-ins have many vulnerabilities that require upgrades, so you have to monitor these things yourself and update them promptly, including the JDK and other open-source tools you depend on.
•Security risks. These open-source tools have many vulnerabilities that you must patch yourself.
•Integration work. Some users run Alibaba Cloud products but build the Jenkins server themselves; if they use other Alibaba Cloud services, they must integrate them on their own, for example writing their own clients to call the Container Service APIs.
CodePipeline and container technology
CodePipeline's solution is as follows:
•A SaaS continuous delivery engine: no operation and maintenance, usable out of the box; resources are used on demand and created dynamically.
•Full compatibility with Jenkins plug-ins: Alibaba Cloud provides security-hardened Jenkins plug-ins and continues to open up more according to developer demand.
•Seamless integration with Alibaba Cloud's product ecosystem: build artifacts are distributed securely to the customer's private OSS, continuous deployment to ECS and Container Service is supported, and the plug-ins we developed already carry the service credentials, so these services can be invoked freely within a CodePipeline workflow.
•Multi-dimensional security policies: the containerized build environment is destroyed after each use, and build artifacts are stored in a private repository.
•Full language support: the open beta supports Java, Node.js, PHP, Python, and more, with Go, C++, and others already supported as well.
•Multi-dimensional deployment: cross-region deployment is supported (classic and VPC networks), as is hybrid deployment across local and remote clusters (advanced).
•Guided interaction: wizard templates embody best practices for each language. For users with internal private cloud environments, we also have a solution that connects to the user's private cloud to run builds; these templates help users get started.

The first topic is the build environment. A slave can be a virtual machine or a container. We do not use virtual machines because they cannot be created dynamically, and when many users build on the same virtual machine, data isolation is weak, data pollution can occur, and it is not very secure. Instead we use containers in an ephemeral-slave mode. Our slave node resource pool is a Swarm container cluster: when a build task arrives, we create a container from the cluster and dedicate it exclusively to that task; the task is built and executed inside the container, and the data it produces is uploaded to the user's own private repository. When the job finishes, the container is destroyed, so when no job is running there are no slave nodes in the CI cluster; we create them only when a job needs to be built. This maximizes resource utilization: one benefit is security, the other is elasticity, since resources are allocated on demand.

Build node types include overseas and domestic nodes, our consideration being that network problems should not hurt build efficiency. When you build a system yourself you classify nodes the same way, with build environments for different languages, plus source code management and trigger management. We currently handle two parts, build and deployment: one is building and publishing images, and the other is deploying the containerized application service.
For image building, users need their own image repository. You specify the repository name, the image version number, the repository address and credentials, and the Dockerfile (by default the Dockerfile at the root of the Git repository); after packaging, the image is uploaded into the repository.

The packaged application image is then deployed to the cluster. This requires the cluster's access endpoint and certificate, the application name, and the orchestration template.

Finally there is the release strategy, of which there are two. Standard release means the new application directly replaces the old one: the old containers are deleted first, then the new ones are deployed. Blue-green release means the old service keeps running while the new service comes up alongside it; after it is up, a routing weight directs all traffic to the old service, with the new service receiving 0. After you have verified the new service, you adjust the routing weight to shift traffic over. This is how the container service is upgraded, keeping the gap between versions as small as possible.

