Kenan, Assistant Engineer

Be a cloud image pro - Packer

Posted: Apr 18, 2017, 14:42
1. Infrastructure-as-Code for DevOps
What is DevOps? According to Wikipedia, DevOps (combination of Development and Operations) is a culture, movement or convention that emphasizes communication and cooperation between software developers (Dev) and IT operation and maintenance technicians (Ops). Through automated "software delivery" and "architecture change" procedures, software building, testing and releasing can become more swift, frequent and reliable.
In an organization that lacks DevOps capabilities, there is a "gap" between the development and operations units – for example, operators want better reliability and security, developers expect the infrastructure to respond faster, and business users need more features released to end users more quickly. This information gap is where most issues occur. Introducing DevOps can have a far-reaching impact on product delivery, testing, feature development and maintenance (including hot patching, which used to be rare but is now commonplace).
DevOps contains four core parts: culture, automation, measurement and sharing, and this article focuses on automation. The goal is to automate the entire delivery process as much as possible and, of course, to automate infrastructure management, which means that your infrastructure is no longer managed manually or through one-off script execution. In a traditional IT environment, the underlying hardware and software environment settings, a key part of the infrastructure, are difficult to automate. With the rise of cloud computing, however, it has become more and more common to manage and provision computing infrastructure (bare-metal servers, virtual servers, processes and so on) and its configuration through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. Managing infrastructure through code in this way is called Infrastructure-as-Code, abbreviated as IaC.
Compared with traditional approaches, IaC has the following advantages:
• Self-service – When the infrastructure is defined by code, the entire process can be automated. Developers can release the product on their own when needed and don’t have to wait for the O&M personnel to do the release.
• Fast and secure – Since the entire deployment process is automated, a computer executes it faster and with fewer errors than a human operator.
• Documented – In the traditional mode, the infrastructure state exists only in the minds of individual system administrators, while IaC stores it in the form of source code that anyone can read; the source code itself serves as documentation.
• Version management – You can also manage your IaC code using version control tools, which means that your infrastructure change history can be traced. As a result, when a problem arises, you can quickly locate and diagnose the problem through searching the history data.
• Verifiable – The infrastructure state is managed in the form of code, so any change can be verified through code review or an early operational test (see the workflow sketch after this list).
• Reusable – You can also encapsulate your code into modules. As a result, when different requirements emerge, you can assemble different modules to complete your work, instead of making a fresh start every time change is required.
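
To make these advantages concrete, here is a minimal sketch of such a workflow, assuming the image template file alicloud.json built later in section 3.4 is kept under version control: the change is committed (traceable history), validated (verifiable), and then built automatically (self-service), with no manual hand-offs in between.
# git add alicloud.json
# git commit -m "Add Redis image template"
# packer validate alicloud.json
# packer build alicloud.json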

2. Image production – a cornerstone of IaC
More and more people are recognizing the advantages of IaC, which has encouraged the emergence of a wide variety of IaC tools. They can be divided into the following four categories:
• Ad hoc scripts
• Configuration management tools
• Service template tools
• Orchestration tools
Writing ad hoc scripts in general-purpose languages to automate the infrastructure is the most direct approach. However, it is only suitable for simple, one-off tasks; when this approach is applied to complex, long-lived projects, you will find maintaining these scripts a nightmare. As a result, configuration management tools such as Chef, Puppet and Ansible emerged. They define their own syntax rules, built on general-purpose languages, for installing and managing software on servers. Code written with these tools looks similar to ad hoc scripts, but the tools enforce structured, consistent, predictable and documented code with clear parameter naming, and they make it possible to manage a large number of servers remotely. With the rise of virtualization and cloud computing, service template tools such as Packer and Docker came into being. Behind these template tools is the image. Instead of booting a large number of servers and then repeatedly running the same configuration management code on each one to install software and set up the system, the image approach captures a complete state snapshot of a server, with a verified operating system, software, files and configuration, and then quickly creates servers, databases and other resources from that image through Terraform or other orchestration tools. This greatly improves the efficiency of infrastructure creation and management. For this reason, image technology is an indispensable part of mainstream cloud platforms, and as the first step in creating infrastructure, image production naturally constitutes the cornerstone of IaC.
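
To make the contrast concrete, below is a minimal, hypothetical example of the ad hoc approach: a short shell script that an administrator logs in and runs by hand on every new server. It works for a one-off task, but keeping dozens of servers consistent this way quickly becomes unmanageable; with the image approach, the same steps run once at image build time (as the Packer example in section 3.4 shows) and every new server starts from the finished snapshot.
#!/bin/bash
# Hypothetical ad hoc provisioning script, run manually on each new server
yum install -y redis
systemctl enable redis
systemctl start redis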

3. Packer – a cutting-edge tool for image production
An image is a static unit that encompasses a pre-configured operating system and preinstalled software; from an image you can quickly create new running virtual machine instances. Different platforms support different image formats: for example, AWS EC2 uses AMIs, VMware uses VMDK/VMX, and Alibaba Cloud ECS supports the RAW and VHD formats. The various cloud platforms provide numerous base images for users. However, as cloud platform users mature, demand for personalized images has grown ever stronger. At the same time, for commercial reasons, users also want their own systems, including personalized images, to be able to migrate between different cloud platforms. Although the major cloud platforms provide web console tools for manually creating custom images and expose APIs for creating them through automated scripts, some limitations remain, and these solutions struggle to fully meet users' needs. Packer was born to deal with these problems.
3.1 What is Packer?
Packer is a lightweight open-source tool for creating images that are consistent across platforms using a single template file. It is capable of running within popular mainstream operating systems such as Windows, Linux and Mac OS, enabling efficient creation of images for multiple platforms such as AWS, Azure and Alibaba Cloud in parallel. It does not aim to replace Puppet/Chef or other configuration management tools. In fact, during image production, Packer can utilize Chef, Puppet or other tools to install image-required software. It's very easy to create images automatically for various platforms using Packer.
Using Packer to create images boasts the following advantages:
• Swift infrastructure deployment: Packer images allow you to start machines and complete their configuration within a few seconds, instead of minutes or even hours. This benefits not only production but also development, because development virtual machines can likewise start within a few seconds instead of waiting for a lengthy configuration to complete.
• Portability: Packer can create the same image for multiple platforms, so you can perform production in Alibaba Cloud, phased tests in private clouds like OpenStack, and development in desktop virtualization solutions such as VMware or VirtualBox. The same machine image is run in each environment, securing ultimate portability.
• Improved stability: Packer installs and configures all the software during image building. Errors in these scripts will be captured at an early stage, instead of several minutes after the machine is started.
• Better testability: After the machine image is built up, you can quickly start the machine image and verify whether the image functions properly through smoke tests.
• Sound scalability: Packer employs the plug-in mechanism that facilitates feature extensions as needed. Besides, plug-ins also enable the integration with many popular technologies and tools.
• HashiCorp ecosystem: Packer is developed by HashiCorp and fits naturally alongside its other tools, such as Terraform and Vagrant.
3.2 Packer composition and principles
Packer is comprised of three components: the builder, the provisioner and the post-processor. The three components can be combined flexibly via JSON-format template files (a structural skeleton is shown after the following list) for parallel, automated creation of image files that are consistent across platforms. A task that generates an image for a single platform is called a build, and the result of a single build task is called an artifact. Multiple build tasks can run in parallel.
• The builder creates an image for a single platform. It reads its configuration from the template and uses it to launch a machine and generate the image; during a build, the builder is the component that creates the actual image. Common builders include VirtualBox, Alibaba Cloud ECS and Amazon EC2. New builders can be added to Packer as plug-ins.
• The provisioner installs and configures the software on the running machine that the builder creates. Provisioners mainly function to include useful software into the image. Common provisioners include shell scripts, Chef, and Puppet.
• The post-processor defines how to process the new image or artifact after provisioning, for example compressing the artifact or uploading it to a remote location.
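The way the three components fit together in a template can be sketched as follows. This is a structural skeleton only, not a buildable template: the builder omits required settings such as credentials (the full Alibaba Cloud example appears in section 3.4), and the compress post-processor is shown purely to illustrate where post-processing is declared.
{
  "builders": [{
    "type": "alicloud-ecs"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["echo 'install and configure software here'"]
  }],
  "post-processors": [{
    "type": "compress"
  }]
}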
The principles that Packer follows for creating images are not complicated. Take the image creation process of Alibaba Cloud for example:

• 1. The Alibaba Cloud builder reads the configuration file and submits requests to create the corresponding ECS instance and configure the network and security rules based on the configuration file definitions using the APIs Alibaba Cloud provides.
• 2. The Alibaba Cloud control system creates the ECS instance and configures the network and security rules according to the API requests.
• 3. The provisioner reads the configuration file, connects to the ECS instance via remote protocols such as SSH or WINRM, and installs and configures the software following the requirements in the template files.
• 4. When the required software has been installed and configured on the ECS instance, the builder issues a request to capture the instance state and create an image.
• 5. The Alibaba Cloud control system creates an image based on the request. If any errors occur during or after the creation process, the builder retains or cleans up the corresponding temporary resources according to the settings in the configuration template.
• 6. If a post-processor has been configured, it can initiate further processing on the image generated in the previous step. For example, the packer-post-processor-alicloud-import post-processor can import the local image into an ECS image system.
Having understood the principles and advantages of Packer, we can now learn how to use it to create our own images in practice.
3.3 Install Packer
First, download the Packer installer for your operating system from the Packer official website (https://www.packer.io/downloads.html). This article takes Mac OS X x64 as an example: download the Packer installer from the corresponding link on the official website (https://releases.hashicorp.com/packer/0.12.3/packer_0.12.3_darwin_amd64.zip).

Open the terminal and navigate to the download directory, then execute the following commands. If output like the following is displayed, Packer has been installed:
#unzip packer_0.12.3_darwin_amd64.zip
#sudo mv packer /usr/local/bin/
#packer
usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    push        push a template and supporting files to a Packer build service
    validate    check that a template is valid
    version     Prints the Packer version

Because the Alibaba Cloud Packer plug-in has not yet been merged upstream, you also have to download the Alibaba Cloud Packer plug-in (https://github.com/alibaba/packer-provider/releases/download/V1.1/packer-builder-alicloud-ecs_darwin-amd64.tgz) from the Alibaba Cloud open-source tools site (https://github.com/alibaba/opstools), and then execute the following commands. If output like the following is displayed, the plug-in has been installed:
#tar -xvf  packer-builder-alicloud-ecs_darwin-amd64.tgz
#sudo mv bin/packer-builder-alicloud-ecs /usr/local/bin/
# ls /usr/local/bin | grep packer

packer   packer-builder-alicloud-ecs   packer-post-processor-alicloud-import

3.4 Packer example
Next we will explain how to use Packer through a simple example of creating an image containing a Redis database.
• 1. First, open your favorite text editor and enter the following content. Replace the <<...>> parts with your own values, and save the file as a template file named alicloud.json (a quick way to check the template before building is sketched after the file contents):
{
  "builders": [{
    "type":"alicloud-ecs",
    "access_key":"<<Your Access Key>>",
    "secret_key":"<<Your Secret Key>>",
    "region":"cn-beijing",
    "image_name":"packer_basic",
    "source_image":"centos_7_2_64_40G_base_20170222.vhd",
    "ssh_username":"root",
    "instance_type":"ecs.n1.tiny",
    "io_optimized":"true"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "yum install redis.x86_64 -y"
    ]
  }]
}
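
Before building, you can optionally ask Packer to check the template for syntax and configuration errors; a sketch of this step follows (the exact success message may vary by Packer version):
# packer validate alicloud.json
Template validated successfully.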

• 2. Open the command line terminal and enter the following commands:
# packer build alicloud.json
alicloud-ecs output will be in this color.

==> alicloud-ecs: Prevalidating alicloud image name...
    alicloud-ecs: Found image ID: centos_7_2_64_40G_base_20170222.vhd
==> alicloud-ecs: Start create alicloud vpc
==> alicloud-ecs: Start creating vswitch...
==> alicloud-ecs: Allocated alicloud eip 47.93.55.94
==> alicloud-ecs: Connected to SSH!
==> alicloud-ecs: Provisioning with shell script: /var/folders/
3q/w38xx_js6cl6k5mwkrqsnw7w0000gn/T/packer-shell077409543
    alicloud-ecs: Loaded plugins: fastestmirror
    alicloud-ecs: Determining fastest mirrors
    .......................
    alicloud-ecs: Installed:
    alicloud-ecs: redis.x86_64 0:3.2.3-1.el7
    ......................
    alicloud-ecs: Complete!
==> alicloud-ecs: Start delete alicloud image snapshots
==> alicloud-ecs: Clean the created VPC
Build 'alicloud-ecs' finished.

==> Builds finished. The artifacts of successful builds are:
--> alicloud-ecs: Alicloud images were created:

cn-beijing: m-2zehwk9as9ed3yna7pq0

• 3. After roughly 10 minutes or more, you will be able to see the newly created image packer_basic in the custom image list.

4. Development of Alibaba Cloud Packer plug-ins
Alibaba Cloud started late in Packer plug-in development. Although a PR has been submitted upstream, it has not yet been merged into the official release, so for the time being the functionality is only available as separately installed plug-ins. Currently, creating new custom images from a base image and importing locally built KVM images into the ECS image list are both supported. There is still little Chinese-language documentation on the Packer plug-ins; if you need more detailed material for practice, you can refer to the following two blog posts. We also welcome readers to raise requests or contribute ideas in the official repository of the Alibaba Cloud Packer plug-ins.
• Packer practices – create a Chef server image (https://yq.aliyun.com/articles/72043)
• Create a local image of Alibaba Cloud using Packer (https://yq.aliyun.com/articles/72218)

5. Analysis of Packer application scenarios
Application Scenario 1:
The officially provided base images cover only a limited number of operating system versions. Some VPC users may request an image for a specific operating system version, yet the ticket-based service cycle is too long to be acceptable. If users choose to produce the base image on their own, it is not only technically demanding, but the procedure for uploading the image from a local machine to the image list is also complicated, so it is very hard for users to build a base image that works properly. Packer's post-processor, however, can automatically upload a local image to the image list. With the help of some basic templates, users only need to modify the source ISO in the template file to create a base image for a specific version. This greatly lowers the barrier to image production.
Application Scenario 2:
There are many third-party images in the image marketplace. What goes into an ISV's image production is opaque to both Alibaba Cloud and users; we cannot clearly see whether an image contains hidden or insecure content. To ensure security, we have no option but to run a large number of security scans, which prolongs the ISV's image release cycle, and scanning binary files still cannot eliminate all security risks. By contrast, if an image is made from Packer template files, we can clearly see the commands executed in the scripts, and the security review is much more streamlined than scanning binaries. In addition, it is also convenient for ISVs to manage image versions and build images for multiple platforms, improving image production efficiency, since ISVs no longer need to make the images manually over and over again.
Application Scenario 3:
In the Auto Scaling service, how conveniently and efficiently images can be generated plays a vital role. When monitoring shows that the workload has reached the threshold, a new instance must be generated from an image, and once the number of automatically scaled applications reaches a certain level, creating images manually becomes unacceptable. In particular, when the application needs to be upgraded, the traditional way of upgrading it directly on the running instances is slow and affects online users' experience, and rolling back in case of errors is difficult, making long downtime more likely. With Packer, combined with Jenkins, Terraform or other tools, the application upgrade can instead be carried out as an image upgrade at the code layer, and the process of generating updated instances is fully automated. When Jenkins detects a code commit, it can trigger Packer to create a new image from the updated code based on the template. Jenkins then triggers Terraform to create new ECS instances, add them to the scaling group, and remove the old instances from the scaling group, thereby updating the application. If an error is detected in the new code, the old instances can be re-added to the scaling group and the new ones removed to achieve a rollback (a minimal sketch of such a pipeline follows).
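A minimal sketch of such a pipeline, using hypothetical file names (app_image.json for the application's Packer template and a Terraform configuration that references the newly built image), might be a Jenkins build step like the following:
#!/bin/bash
# Hypothetical Jenkins build step, triggered on each code commit (file names are illustrative)
set -e
packer build app_image.json   # bake a new image containing the updated application code
terraform apply               # create ECS instances from the new image and swap them into the scaling group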

6. Outlook
Admittedly, any new mechanism brings a corresponding learning curve, and for Packer it is not trivial to write a usable image template file. However, thanks to the popularity of open source, many commonly used templates are available on GitHub; for example, the following repository (https://github.com/chef/bento) contains a large number of templates. Of course, because Alibaba Cloud only recently started to support Packer plug-ins, we also welcome everybody to contribute their expertise to the official Alibaba Cloud Packer plug-in repository (https://github.com/alibaba/packer-provider).
References:
1. DevOps, Wikipedia: https://zh.wikipedia.org/wiki/DevOps
2. Terraform: Up & Running, O'Reilly
3. Packer documentation: https://www.packer.io/docs/