By Jeremy Pedersen
A lot has already been written about DevOps, so I'll try my best not to duplicate what's been said before.
Before we dive in, there is one important point I should clarify, though: DevOps is not a tool and it is not a set of tools. DevOps is a methodology, a way of working. The original idea is a simple one: bring Development and Operations together. Traditionally Development's goal is to roll out new features as fast as possible, and Operations' goal is to limit downtime and ensure stability. DevOps is about bringing these goals into closer alignment.
Of course, a big part of that is using the right tools, and there are many many "DevOps" tools out there.
This blog post will focus on just six of them: Terraform, Ansible, Docker, Kubernetes, GitLab, and Jenkins.
This isn't a how-to, and it's not an in-depth tutorial. The goal is simply to introduce what each tool does, explain its strengths and weaknesses, and show how it ties into the "DevOps philosophy".
Terraform is an infrastructure as code tool. Terraform lets you write code - in a language called HashiCorp Configuration Language (HCL) - which describes your infrastructure.
With HCL, you can describe your desired infrastructure: servers, networks, storage, and so on. When run, the terraform command-line tool reads your HCL code, compares it against a state file that records the current state of your deployed infrastructure, and then makes any necessary changes.
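To make this concrete, here is a minimal sketch of what HCL looks like. This is illustrative only: the provider, region, instance type, and image ID are assumptions, and a real configuration would need additional arguments (networking, security groups, and so on) that vary by provider.

```hcl
# Illustrative only: declares one cloud server.
# Provider name, region, and attribute values are assumptions.
provider "alicloud" {
  region = "us-west-1"
}

resource "alicloud_instance" "web" {
  instance_name = "web-server"
  instance_type = "ecs.g6.large"   # size of the server
  image_id      = "ubuntu_20_04_x64_20G_alibase_20220524.vhd"
}
```

Running `terraform plan` shows what would change, and `terraform apply` makes it happen. Notice that the code describes the *desired end state*, not the steps to get there.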
Almost all major cloud providers are Terraform-compatible: they supply something called a Terraform provider, which is a module that the terraform tool uses to call a given cloud provider's API to create, configure, or destroy infrastructure resources.
Terraform is a very powerful DevOps tool because it allows teams to treat their infrastructure the same way they treat their source code: as something that can be versioned, checked into a central repository, reviewed, changed incrementally, or even rolled back.
This means that Operations teams can enjoy the same flexibility and scalability as Development teams.
Terraform also makes it very easy to recreate your infrastructure: simply make a copy of your Terraform code and spin up a completely identical environment! This is very valuable for testing, and it enables infrastructure to be incrementally improved in the same way that source code is.
Ansible is also an infrastructure as code tool, but unlike Terraform it focuses on setting up (provisioning) servers. Terraform has some limited ability to run tasks on servers it creates, usually by executing a PowerShell script or Bash shell script at boot time, but it isn't designed for this task.
Ansible, on the other hand, has limited ability to set up infrastructure such as servers (it can do this on many providers, but it isn't its core strength), instead focusing on executing Ansible playbooks (lists of configuration steps) against a target machine. It connects to the target over SSH or Windows Remote Management.
Many people ask "should I choose Terraform or Ansible?" but they are actually complementary. In general, choose Terraform for infrastructure configuration, and choose Ansible for server configuration.
They can and do work well together. In fact, I've written a script for setting up GitLab on Alibaba Cloud that uses both heavily.
With Ansible, manual steps can be automated, and the Ansible playbook effectively documents the steps you took to set up a system. This makes deploying software reliably much easier and more repeatable, which is a key part of DevOps.
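A short sketch of what a playbook looks like may help. Everything here is an assumption for illustration (the host group name, the package, the use of apt): a playbook is just a YAML list of named steps run against target machines.

```yaml
# Illustrative playbook: host group, package, and module choices
# are assumptions, not from the original article.
- name: Set up a basic web server
  hosts: webservers      # a group defined in your Ansible inventory
  become: true           # run tasks with elevated privileges
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Make sure nginx is running
      service:
        name: nginx
        state: started
        enabled: true
```

Run it with `ansible-playbook site.yml`. Because each task describes a desired state ("nginx is present", "nginx is started"), re-running the playbook is safe: steps that are already satisfied are skipped.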
Docker is the tool that made containers mainstream. If you haven't heard of containers by now, you must be living under a rock! I don't want to spend a lot of time talking about what a container is, so I won't: I'll just say that a container is a lightweight alternative to a full-blown Virtual Machine. Containers include some software (libraries, a runtime environment, and your code) but not a complete operating system. Because they do not need to run a full operating system and a "pretend" set of virtual hardware for that operating system to run on, they are a lot lighter than Virtual Machines: they use less memory and run faster.
What Docker (and the concept of a container) allows you to do is package your code along with all the configuration and dependencies it needs to run. Any host that can run Docker can then run your code, exactly as it ran on your machine. No more worrying about dependencies or versions or any of that.
Docker containers are configured using Dockerfiles which are text files explaining the software + configuration steps that are needed to create a container image. You can then distribute this image, and it can be downloaded on any machine that runs Docker. That's it!
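As a sketch, a Dockerfile for a small Python application might look like the following. The base image, file names, and start command are assumptions for illustration.

```dockerfile
# Illustrative Dockerfile: base image and app layout are assumptions.
FROM python:3.10-slim                  # start from a small base image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]               # command the container runs on start
```

You would build the image with `docker build -t myapp .` and run it anywhere Docker is installed with `docker run myapp`.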
Docker containers give developers a consistent environment in which to run and test their code, and ensure that the code will run the same way in production as it did on the developer's machine, meaning more reliability and fewer failures for Operations. This enables a DevOps workflow.
Docker is a great tool, but it doesn't manage any of the outside work of getting containers to talk to each other, deciding how many containers should be running, scheduling containers to run across multiple hosts, or any of that fancy stuff. For those tasks, you need a container orchestration tool, and Kubernetes is the #1 tool out there.
Kubernetes handles a lot of the 'heavy lifting' of using containers in a production environment: it helps you run containers across multiple hosts, achieve failover and redundancy, handles load balancing, and more.
Because Kubernetes is open source and can run on any cloud provider, it is also a great way to achieve a homogeneous environment across multiple providers. For instance, Alibaba Cloud's Kubernetes Service actually allows you to manage Kubernetes clusters that are hosted on other clouds!
Much like Terraform, Ansible, and Docker, Kubernetes keeps track of configuration via human-readable YAML files. You can use the same YAML files on different Kubernetes clusters to make sure applications are set up and run in exactly the same way, everywhere.
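For a feel of what those YAML files look like, here is an illustrative Deployment (the image name, replica count, and port are assumptions). It tells Kubernetes to keep three copies of a container running:

```yaml
# Illustrative Kubernetes Deployment: image, replicas, and port
# are assumptions for the sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:1.0   # a container image you have published
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply -f deployment.yaml`. If a container crashes or a host fails, Kubernetes notices the actual state no longer matches this desired state and starts replacements automatically.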
Are you seeing a pattern yet? A big part of DevOps (and modern development in general) is having configuration files that describe the desired state of your system in a human-readable format. This way, your code is the description of your environment.
Great, so now you've got all this code floating around. Where do you store it? How do you version it? How do you let multiple people work on it? Enter GitLab. GitLab is an open source project that lets you host your own Git repositories, and it allows you to build workflows that are triggered (started) by uploading new code.
This way, you can do things like automatically run terraform when you change your Terraform HCL code, or automatically build a new Docker image when you update a Dockerfile. GitLab can also hook into other tools that help you to test code quality or run other checks.
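In GitLab, those triggered workflows are defined in a file called `.gitlab-ci.yml` in the repository. Here is an illustrative sketch of the two automations just mentioned; the stage names, commands, and file paths are assumptions:

```yaml
# Illustrative .gitlab-ci.yml: job names, commands, and paths
# are assumptions for the sketch.
stages:
  - validate
  - build

terraform-validate:
  stage: validate
  script:
    - terraform init -backend=false
    - terraform validate        # runs on every push of HCL code

docker-build:
  stage: build
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .
  rules:
    - changes:
        - Dockerfile            # only runs when the Dockerfile changes
```

Every push to the repository kicks off this pipeline, so developers find out within minutes whether their change validates and builds.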
This is important because a big part of DevOps is making sure developers get feedback about their code as fast as possible, which allows them to catch and fix mistakes before code goes into production.
GitLab is often used to achieve something called "CI/CD" (Continuous Integration / Continuous Deployment), in which changes to code are tested as soon as the code is completed, and deployed as soon as the code passes its tests.
Jenkins is a complement to GitLab. Jenkins is a CI (Continuous Integration) tool that can run tasks in response to a webhook (a request to a particular URL), a code update (in Git, GitLab, or somewhere else), or on a schedule (much like a cron job).
Actually, GitLab includes a component called GitLab Runner which does a lot of the same things as Jenkins. So why include Jenkins in this list? Jenkins is more full-featured than GitLab Runner and can be extended via plugins.
Like GitLab, Jenkins helps fill an important role in the DevOps ecosystem: triggering code tests, code builds, and deployments.
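Jenkins pipelines are typically described in a Jenkinsfile checked into the repository. The sketch below uses Jenkins's declarative pipeline syntax; the trigger schedule, stage names, and shell commands are assumptions for illustration:

```groovy
// Illustrative declarative Jenkinsfile: trigger, stages, and
// commands are assumptions for the sketch.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // check for new commits every few minutes
    }
    stages {
        stage('Test') {
            steps {
                sh 'make test'            // run the project's test suite
            }
        }
        stage('Build') {
            steps {
                sh 'docker build -t myapp .'   // package the app as an image
            }
        }
    }
}
```

Like the GitLab pipeline config, the Jenkinsfile lives alongside the code it builds, so the build process itself is versioned and reviewable.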
Achieving a DevOps work style takes more than just tools, but choosing the right tools is a big part. Code storage, testing, building, and deployment all need to be considered.
With DevOps, the more automation, the better. Choosing the right tools can take you a long way.
Don't stop here! There's so much more to learn about DevOps and CI/CD. Check out Alibaba Cloud's DevOps Learning Path to get an in-depth feel for DevOps and CI/CD.