
Embracing Terraform for Application Insights Resource

With Terraform, you can easily deploy and manage your applications to make full use of your Application Insights resource.

Building a Docker Enterprise 2.1 Cluster Using Terraform

Learn how you can use Terraform to fully automate the build of a 3-node Docker Enterprise 2.1 cluster on Alibaba Cloud.

In this article, we will show how you can use Terraform to fully automate the build of a 3-node Docker Enterprise 2.1 cluster on Alibaba Cloud. If you are in the process of planning or setting up an Enterprise Docker cluster for your containers on any of the public cloud platforms, this article and the templates will be useful.

Even if you are not building a Docker Enterprise cluster, you may still find this article useful if you'd like to automate your infrastructure build in Alibaba Cloud.

As my objective was to show what is possible using IaC (Infrastructure as Code) to automate builds in Alibaba Cloud, I thought why not do that using the most popular platforms, and hence decided to:

  1. Build a Containers-as-a-Service platform using Docker Enterprise Edition 2.1 and,
  2. Build that platform using Terraform.

Docker Enterprise Edition

Docker Enterprise 2.1 is a Containers-as-a-Service (CaaS) platform that enables a secure software supply chain and deploys diverse applications for high availability across disparate infrastructure, both on-premises and in the cloud. It is a secure, scalable, and supported container platform for building and orchestrating applications across multi-tenant Linux, Windows Server 2016, and IBM Z environments.

One thing that I have always loved about Docker is its simplicity and customer centricity. That's exactly what they have delivered with the release of Enterprise 2.1 too. With Docker EE 2.1, you now have freedom of choice, as it can:

  1. Reliably support both Windows and Linux containers.
  2. Be hosted on any cloud platform or in an on-premises data center.
  3. Use both Docker Swarm and Kubernetes orchestration interchangeably.

So, if you are an Enterprise customer looking to embark on a project to either migrate your legacy applications to containers or embrace DevOps for the development of new applications, I'd strongly recommend Docker EE 2.1 as your CaaS platform.

You can start small and scale your cluster as you grow your container base. You can start utilizing the much simpler Docker Swarm for initial orchestration and switch to Kubernetes later if you really need it.

A Docker EE 2.1 cluster also comes with components such as:

  1. Docker UCP (Universal Control Plane) - which gives you a single pane of glass across your cluster.
  2. Docker Trusted Registry (DTR) - to securely host your container images.
  3. Enterprise security features - such as encrypted communication, application isolation, vulnerability scanning for images and much more.

Terraform

Terraform is one of my favorite orchestration/IaC tools out there. I just love the power and flexibility that Terraform offers for deploying new services to any public cloud platform. You just define what you need and ask Terraform to go and build it. It is that simple.

I chose Terraform for this automation as it is pretty much platform agnostic. Though you can't use the same templates for every cloud service provider, it is quite easy to customize them for a different provider once they have been developed for a specific cloud platform.
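
To give a feel for what "define what you need" looks like in practice, below is a minimal, hypothetical main.tf for Alibaba Cloud. The image ID, instance type and resource names are placeholders, and a real template would also declare a VPC, vSwitch and other networking pieces.

provider "alicloud" {
  # Credentials can also be supplied through environment variables.
  access_key = var.access_key
  secret_key = var.secret_key
  region     = "ap-southeast-1"
}

variable "access_key" {}
variable "secret_key" {}

resource "alicloud_security_group" "demo" {
  name = "terraform-demo-sg"
}

# One ECS instance, described declaratively.
resource "alicloud_instance" "demo" {
  instance_name   = "terraform-demo"
  image_id        = "ubuntu_18_04_64_20G_alibase_20190624.vhd" # placeholder image ID
  instance_type   = "ecs.t5-lc1m2.small"                       # placeholder instance type
  security_groups = [alicloud_security_group.demo.id]
}

Running terraform init, terraform plan and terraform apply against a file like this is all it takes for Terraform to go and build it.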

To learn more about Terraform, visit the HashiCorp website or read MVP Alberto Roura's Tech Share article, which gives a good summary of Terraform.

Building the Docker Enterprise Cluster

For this demo, I chose to build a small 3-node Docker Enterprise 2.1 cluster consisting of:

  1. One Alibaba Cloud ECS Linux server that hosts both Docker UCP and Docker Trusted Registry (DTR). The same node will also be configured as the Docker Swarm manager and Kubernetes master.
  2. One Linux host which will automatically be joined as a worker node in the Docker Swarm created by the UCP host.
  3. One Windows host which will automatically be joined as a worker node in the Docker Swarm created by the UCP host.

If you would like to get on with the cluster build right away, go to my GitHub repository and follow the instructions there.

Once you have the prerequisites ready, you can get the cluster built in less than 30 minutes.
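
For reference, the three nodes above boil down to three resource blocks along these lines. This is a hypothetical, heavily trimmed sketch with placeholder variables and instance types; the actual templates in the repository do much more, such as setting up the networking, installing Docker EE and joining the workers to the swarm.

variable "linux_image_id" {}    # e.g. a supported Linux image ID
variable "windows_image_id" {}  # e.g. a Windows Server 2016 image ID
variable "security_group_id" {}

# UCP/DTR node, which also acts as the Swarm manager and Kubernetes master.
resource "alicloud_instance" "ucp" {
  instance_name   = "docker-ee-ucp"
  image_id        = var.linux_image_id
  instance_type   = "ecs.sn2.medium"  # placeholder instance type
  security_groups = [var.security_group_id]
}

# Linux worker node.
resource "alicloud_instance" "worker_linux" {
  instance_name   = "docker-ee-worker-linux"
  image_id        = var.linux_image_id
  instance_type   = "ecs.sn2.medium"
  security_groups = [var.security_group_id]
}

# Windows worker node.
resource "alicloud_instance" "worker_windows" {
  instance_name   = "docker-ee-worker-windows"
  image_id        = var.windows_image_id
  instance_type   = "ecs.sn2.medium"
  security_groups = [var.security_group_id]
}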

Related Blogs

Run Bolt with Docker and Terraform with Alibaba Cloud

Find the necessary steps to set up Bolt on Alibaba Cloud using a DevOps approach. Bolt is a modern CMS customized for developers.

In this tutorial, I will show you how to set up Bolt on Alibaba Cloud ECS. We will be doing this based on a DevOps approach.

About Bolt

Bolt is a modern CMS built on top of Silex; some say it's a "WordPress made right from the beginning". I would say it is a good CMS for developers, as it has great foundations. Currently it is at version 3, which uses the aforementioned Silex, but from v4 it will use Symfony 4, as SensioLabs is stopping Silex's development. Great news anyway: a great CMS is going to get even better.

About Terraform

Terraform has come a long way since it was first released back in 2014. If you don't know what Terraform is, you should definitely learn about it. Terraform is infrastructure-as-code software developed by HashiCorp. It allows users to define data center infrastructure in a very high-level configuration language, HCL in this case, from which you can create a detailed execution plan to build the infrastructure on a given service provider. It enables you to safely and predictably create, change, and improve infrastructure, and the files can be committed to a git repository to be versioned. It is an open source tool that codifies APIs into declarative configuration files (*.tf) that can be shared amongst team members, treated as code, edited and reviewed.

It basically creates infrastructure following a config file. You can think of it as the "Docker for cloud services". But instead of a Dockerfile, you have a main.tf.
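
To make the Dockerfile analogy concrete, a hypothetical main.tf can be as small as the following; the region and bucket name are placeholders, and any other Alicloud resource could take the bucket's place.

# Like a Dockerfile, main.tf declares the desired end state - here a single OSS bucket -
# and Terraform works out the API calls needed to get there.
provider "alicloud" {
  region = "ap-southeast-1"  # placeholder region
}

resource "alicloud_oss_bucket" "assets" {
  bucket = "bolt-demo-assets"  # placeholder bucket name; must be globally unique
  acl    = "private"
}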

Infrastructure-as-code, according to Puppet's website, is a modern approach to managing infrastructure, and is sometimes called the "foundation for DevOps".

"Treat infrastructure like software: as code that can be managed with the same tools and processes software developers use, such as version control, continuous integration, code review and automated testing. These let you make infrastructure changes more easily, rapidly, safely and reliably.

Infrastructure as code is the prerequisite for common DevOps practices such as version control, code review, continuous integration and automated testing. These practices get you to continuous delivery of quality software that pleases your customers."

Source: https://puppet.com/solutions/infrastructure-as-code

Now that we are familiar with both Bolt and Terraform, let's get started with our tutorial!

Install Terraform

It is very easy to install Terraform. All you need is Homebrew. If you do not have Homebrew already installed on your computer, please find install instructions here.

Run the below command in your terminal to install Terraform.

brew install terraform

To verify the Terraform installation, type the following command.

terraform version

Setting Up Load Balancers by Using Terraform

In this tutorial, I will show you how to set up a CMS, in this case Bolt, on Alibaba Cloud using a Load Balancer and RDS with 3 ECS instances attached. We will be doing this based on a DevOps approach using Terraform and the official Alibaba Cloud (Alicloud) provider.

If you heard of the term "Load Balancer" but don't have a clear idea of the concept, sit tight, as I'm going to develop (pun intended) it a bit more.

What is Load Balancing?

Load balancing is a means to distribute workload across different resources. Let's say you own a very busy website; having a single server dealing with all queries will overload it. Instead, you can have an additional server to help cope with the requests.

The most common approach is to clone the web hosting server and put it behind a load balancer. The load balancer is just another server that distributes the load, sending each visitor's request to one server or another. Using load balancers also increases redundancy, so it's handy for keeping your data safe too.

How Does a Load Balancer Distribute Load?

There are different scheduling methods to do this, and the most popular is Round Robin (RR), as it is very simple and effective. Another is a similar approach called Weighted Round Robin (WRR), which is a fine-tuned version of RR.

Round-Robin Balancing (RR)

You might have heard the term Round Robin from sporting events, such as soccer tournaments. The technique's name comes from the original term meaning "signing petitions in circular order so that the leaders could not be identified". This leads to the current meaning in computing, where the load balancer rotates through the attached servers, one at a time.

The biggest advantage is its simplicity, and load is distributed evenly across all servers in a network. RR has one notable downside, however: the algorithm doesn't take into account how different the servers are from each other in terms of capacity. That's why there is another version of it called Weighted Round-Robin.

Weighted Round-Robin (WRR)

This algorithm is based on standard Round-Robin, but with the difference of "keeping in mind" how different the resources are. In WRR, resources are given priorities (weights) in the queue based on their capacity. For example, a 100 GB server would be given a larger weight than a 20 GB server. This approach gives the network admin more control over which servers should be used first and which ones later. WRR is better suited than RR for complex networks, such as a hybrid cloud environment.

Weighted Least-Connections (WLC)

Similar to WRR, WLC is an approach that assigns different weights to the servers in a network. But unlike RR and WRR, WLC is dynamic: this scheduling algorithm sends requests to the server with the fewest active connections in a weighted resource list. This is handy when, apart from assigning a performance weight to each server, you want to control how busy, network-wise, a resource can get. The downside of this approach is that it requires more computation to work effectively.

Setting Up Terraform

With the official Alibaba Cloud (Alicloud) Terraform provider, we can choose between Weighted Round-Robin (WRR) and Weighted Least-Connections (WLC). It is completely up to you which one you use. In the example I provide, I have used WRR, but for no specific reason.
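
As a rough illustration, the listener's scheduler is where that choice is made. The snippet below is a hypothetical sketch using the Alicloud provider's SLB resources; exact argument names vary between provider versions, and it assumes the backend ECS instances are defined elsewhere in the template as alicloud_instance.web.

# Internet-facing SLB whose HTTP listener balances with Weighted Round-Robin.
resource "alicloud_slb" "web" {
  name                 = "bolt-slb"
  internet             = true
  internet_charge_type = "PayByTraffic"
}

resource "alicloud_slb_listener" "http" {
  load_balancer_id = alicloud_slb.web.id
  frontend_port    = 80
  backend_port     = 80
  protocol         = "http"
  bandwidth        = 10
  scheduler        = "wrr"  # switch to "wlc" for Weighted Least-Connections
}

# Attach the ECS instances (assumed to be defined elsewhere) to the SLB.
resource "alicloud_slb_attachment" "web" {
  load_balancer_id = alicloud_slb.web.id
  instance_ids     = alicloud_instance.web.*.id
}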

How to Deploy Apps Effortlessly with Packer and Terraform

With Packer and Terraform, you can easily create a full DevOps deployment to maintain release cycles and infrastructure updates for your applications on Alibaba Cloud.

Alibaba Cloud published a very neat white paper about DevOps that is very interesting to read. It shows how "DevOps is a model that goes beyond simple implementation of agile principles to manage the infrastructure. John Willis and Damon Edwards defined DevOps using the term CAMS: Culture, Automation, Measurement, and Sharing. DevOps seeks to promote collaboration between the development and operations teams".

This means, roughly, that there is a new role or mindset in a team that aims to connect software development and infrastructure management. This role requires knowledge of both worlds and takes advantage of the cloud paradigm, which nowadays grows in importance. But DevOps practices are not limited to large enterprises. As developers, we can easily incorporate DevOps into our daily tasks. With this tutorial you will see how easy it is to orchestrate a whole deployment with just a couple of config files. We will be running our application on an Alibaba Cloud Elastic Compute Service (ECS) instance.

What Is Packer?

Packer is an open-source DevOps tool made by HashiCorp to create machine images from a single JSON config file, which helps in keeping track of changes in the long run. The software is cross-platform and can create multiple images in parallel.

If you have Homebrew, just type brew install packer to install it.

It basically creates ready-to-use images containing the operating system and some extra software for your applications, like creating your own distribution. Imagine you want Debian but with a custom PHP application of yours built in by default. Well, with Packer this is very easy to do, and in this how-to we will create one.

What Is Terraform?

When deploying, we have two big tasks to complete. One is to pack the actual application into a suitable environment, creating an image. The other big task is to create the underlying infrastructure where the application is going to live, that is, the actual server to host it.

For this, Terraform, made by HashiCorp, the same company behind Packer, came into existence as a very interesting and powerful tool. Based on the same principles as Packer, Terraform lets you build infrastructure in Alibaba Cloud using just a single config file, in the TF format this time, which also helps with versioning and gives a clear understanding of how all the bits work beneath your application.
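
To make the division of labor concrete, here is a minimal, hypothetical sketch of a Terraform template that launches an ECS instance from an image Packer has already built; the region, instance type and names are placeholders.

provider "alicloud" {
  region = "eu-central-1"  # placeholder region
}

variable "packer_image_id" {
  description = "Custom image ID printed by Packer at the end of its build"
}

resource "alicloud_security_group" "app" {
  name = "packer-terraform-demo-sg"
}

# The instance boots from the Packer-built image, so the OS and your application
# are already baked in and nothing needs to be installed at deploy time.
resource "alicloud_instance" "app" {
  instance_name   = "packer-terraform-demo"
  image_id        = var.packer_image_id
  instance_type   = "ecs.t5-lc1m1.small"  # placeholder instance type
  security_groups = [alicloud_security_group.app.id]
}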

To install Terraform and the Alibaba Cloud Official provider, please follow the instructions in this other article.

Create a VPN-secured VPC with Packer and Terraform

In this tutorial, we will deploy a Debian 9 machine running OpenVPN, using Packer to build the image and Terraform to deploy the infrastructure.

Securing a web application in terms of access management can be tricky, as there are multiple acceptable ways to do it.

We can use Security Groups to limit the available ports for a given instance while allowing a specific company IP unrestricted access, but that way we are giving reachability to anyone connected to that network, which in most cases is not ideal. Another problem with limiting access to a given IP is that it can lock out an engineer trying to fix a problem from outside during after-hours. Another way to approach access management would be to set up a key server with a very limited set of authorized users registered. With a key server, we have the problem of needing to set it up on every instance we want secured, which in some situations is not very feasible due to network size or other factors.
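
For context, the Security Group approach mentioned above typically looks something like the hypothetical snippet below, which opens SSH only to a single company IP; the names and the CIDR are placeholders.

# A security group that accepts SSH only from one office IP.
resource "alicloud_security_group" "restricted" {
  name        = "office-only-ssh"
  description = "SSH reachable only from the company IP"
}

resource "alicloud_security_group_rule" "ssh_from_office" {
  security_group_id = alicloud_security_group.restricted.id
  type              = "ingress"
  ip_protocol       = "tcp"
  port_range        = "22/22"
  cidr_ip           = "203.0.113.10/32"  # placeholder company IP
  policy            = "accept"
}

As described above, this keeps the instance locked down but also locks out anyone who is not at that IP, which is exactly the limitation the VPN approach avoids.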

Today we are going to approach this using a VPN, giving an authorized user a tunnel into a VPC and making it feel as if their devices were directly connected to the network.

What Is a VPN?

VPN stands for Virtual Private Network. Usually VPNs are used in corporate environments to protect data transmission between branches located remotely in different cities or even countries. A VPN makes every computer connected to it operate as if they were all on the same local network, making routing and maintenance very easy for IT teams, as they can build an entire intranet with many critical machines completely isolated from the Internet.

Do I Need a VPN?

The short answer is yes. Focusing on the use case of this tutorial, we will benefit from a VPN connection because our computers, as IT engineers, are going to have access to resources on machines that don't even have public IPs. What's more, a VPN provides extra security, as data that travels through it is encrypted and private.

Related Courses

How To Achieve Automated Cloud Resource Orchestration With Terraform

This course introduces the concepts related to resource configuration and orchestration automation, and the installation and configuration of the popular resource automation tool Terraform. Through hands-on demos, you will learn how to use Terraform to achieve automatic configuration and orchestration of application resources on the Alibaba Cloud platform.

Provision Services in Alibaba Cloud with Terraform

In this online course, you will learn Terraform basics and how to quickly set up ECS, VPC, and Security Group resources automatically with a simple Terraform plan execution.

How To Achieve Automated Cloud Resource Orchestration With Terraform

This course is associated with How to Achieve Automated Cloud Resource Orchestration with Terraform. You must purchase the certification package before you are able to complete all lessons for a certificate.

Related Market Products

How to Achieve Automated Cloud Resource Orchestration with Terraform

This course aims to show how to achieve automated cloud resource configuration and orchestration with the Terraform tool.

Persona Building of Housing Resource

Understand the basic data and business of second-hand housing transactions and how to deploy on the Alibaba Cloud Big Data platform.

Related Documentation

Deploy Container Service clusters by using Terraform

This document introduces how to use Terraform to deploy an Alibaba Cloud Container Service cluster in a Virtual Private Cloud (VPC) environment and deploy a sample WordPress application in the cluster. It provides a solution for building Alibaba Cloud infrastructure so that you can use code to automatically create, orchestrate, and manage services in Container Service.

Install and configure Terraform

You must install and configure Terraform before you can use its simple template language to define, preview, and deploy cloud infrastructure.

Related Products

Container Service for Kubernetes (ACK)

Container Service for Kubernetes (ACK) is a fully managed service. ACK is integrated with services such as virtualization, storage, networking, and security, providing users with high-performance, scalable Kubernetes environments for containerized applications.

Dataphin (Coming Soon)

Using Dataphin's integration service, users can unify an organization's data assets from different computing and storage environments and use its warehousing service to automate data warehouse design and development. With Dataphin's distilling service, users can also create rich profiles around uniquely identifying business entities such as customers and products.
