Docker and its importance to system administrators

Date: Oct 31, 2022

Abstract: Since Docker went live in early 2013, it has had a wonderful love-hate relationship with programmers and sysadmins. While some of the seasoned developers I've spoken with strongly dislike containerization (more on that later), why do so many large companies, including eBay, Twitter, Spotify, and Lyft, use Docker in their production environments?




What exactly does Docker do?



Have you ever used VMware, VirtualBox, Parallels, or any other virtualization software? Docker is similar to those tools (though without the fancy GUI): it creates an isolated environment with an operating system in which you can bundle a selected web application and its dependencies.

Aren't virtual machines slow, though?



Virtualization drives the cloud computing revolution, and I like to call Docker the last step of virtualization because, in effect, it is the layer that runs the business logic you've written.

However, real virtual machines are generally slow, and what Docker does doesn't fully qualify as virtualization.

Instead, Docker shares many of the host system's resources directly: it uses runc (maintained by the Open Container Initiative) to build a thin layer of abstraction on top of the kernel's support for separate process and device namespaces. Because there is no extra virtualization layer between a Docker container and the host kernel, containers can deliver essentially the same performance as the host.

A fully virtualized system is allocated its own resources and shares only minimally with the host, which gives stronger isolation but also makes the system far more resource-hungry. With Docker, resource isolation is weaker, but containers are lightweight and need far fewer resources.

If you're running a system that demands complete isolation with guaranteed resources (e.g., a game server), then a KVM- or OpenVZ-based virtual machine is probably the way to go. However, if you just want to isolate individual processes and run them on a decently sized host without breaking the bank, then Docker is for you.

If you want to learn more about the performance of containerized systems, check out IBM's research paper "An Updated Performance Comparison of Virtual Machines and Linux Containers", which compares virtual machines and containers in detail.

Can't I just deploy my app directly to a cloud server?



You can, if you don't care about things like infrastructure, environment consistency, scalability, or availability.

Imagine this: you manage 12 Java services and deploy each of them on a separate server running Ubuntu and Java 8, across development, QA, staging, and production environments. Even if your application isn't heavily used, that's at least 48 servers to manage (12 services × 4 environments).

Now suppose your team decides to upgrade the runtime to Java 11. You would have to log in and manually update 48 servers. Even with tools like Chef or Puppet, that's a lot of work.



What's an easier solution?



Docker lets you create a snapshot of the desired operating system with only the required dependencies installed, so you can skip all the "bloatware". You can start from a minimal Linux install (I recommend Alpine Linux, but for this article I'll stick with Ubuntu) and put just Java 8 on top of it.

When an update is required, simply switch the Dockerfile of your Java image to Java 11, build it, and push it to a container registry such as Docker Hub or Amazon ECR. Then you only need to point your application container's base-image tag at the new snapshot and redeploy.

The following example outlines the key points of building a Docker image on a minimal Ubuntu 18.04 base.
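A minimal sketch of such a base-image Dockerfile. The Oracle JDK archive name and install path are assumptions; Oracle requires accepting a license to download the JDK, so the tarball is assumed to already be in the build context:

```dockerfile
FROM ubuntu:18.04

# Keep the image lean: no recommended packages, clean apt caches afterwards.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Copy in the locally downloaded Oracle JDK 8u191 archive (name is an assumption).
COPY jdk-8u191-linux-x64.tar.gz /opt/
RUN tar -xzf /opt/jdk-8u191-linux-x64.tar.gz -C /opt && \
    rm /opt/jdk-8u191-linux-x64.tar.gz

ENV JAVA_HOME=/opt/jdk1.8.0_191
ENV PATH="${JAVA_HOME}/bin:${PATH}"
```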

I will build this image with the tag oracle-jdk-ubuntu-18.04:1.8.0_191, push it to the Docker Hub account damian, and then use it as the base for another container that runs my service.
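A sketch of that service image, built on the snapshot above; the jar name and port are assumptions for illustration:

```dockerfile
# Base image is the JDK snapshot pushed to Docker Hub earlier.
FROM damian/oracle-jdk-ubuntu-18.04:1.8.0_191

# Copy the built service jar into the image (name/path are assumptions).
COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/my-service.jar"]
```

The base image itself would be built and published with something like `docker build -t damian/oracle-jdk-ubuntu-18.04:1.8.0_191 .` followed by `docker push damian/oracle-jdk-ubuntu-18.04:1.8.0_191`.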

Now, if I need to move my service to Java 11, all I have to do is publish a new snapshot with a compatible JRE installed and update the tag in the FROM line of the service's Dockerfile so the container uses the new base image. Done! From then on, every service picks up the Ubuntu and Java updates.
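The upgrade amounts to a one-line change in the service Dockerfile; a sketch, assuming the new Java 11 snapshot has been pushed under a hypothetical 11.0.1 tag:

```dockerfile
# Only the FROM line changes; the rest of the service Dockerfile stays the same.
FROM damian/oracle-jdk-ubuntu-18.04:11.0.1

COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/my-service.jar"]
```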

But what does this mean for my development?

This is a good question.

I've recently started using Docker for unit testing. Suppose you have thousands of test cases (if that's the case, trust me, I feel your pain) that all connect to a database, where each test class needs a fresh copy of the database and each test case performs CRUD operations on the data.

Generally speaking, after each test you would reset the database with something like Redgate's Flyway, but that means your tests have to run sequentially, which takes a lot of time (I've seen unit test suites take 20 minutes to complete for exactly this reason).

With Docker, you can easily create an image of your database (I recommend using TestContainers), run one database instance per test class in its own container, and then run the entire test suite in parallel. Because the test classes run in parallel against separate databases, they can all execute on the same host at the same time and finish quickly (assuming your CPU can handle it).

I also use Docker when coding in Go. Rather than installing Go directly on my development machine, I prefer an approach similar to Konstantin Darutkin's: maintaining a Dockerfile that installs Go and my dependencies and is configured to live-reload the project whenever a source file changes.

This way my project and its Dockerfile are both under version control, and if I ever want to wipe or reformat my dev machine, I just reinstall Docker and pick up where I left off.
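A minimal sketch of such a dev image, assuming the third-party `air` file watcher (any reloader works; the module path and port are assumptions):

```dockerfile
FROM golang:1.19

WORKDIR /app

# 'air' rebuilds and restarts the binary whenever a watched source file changes.
RUN go install github.com/cosmtrek/air@latest

# Source code is bind-mounted at run time rather than copied in, e.g.:
#   docker run --rm -v "$PWD":/app -p 8080:8080 my-go-dev
CMD ["air"]
```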



Summary



If you're running a startup and haven't decided on your new tech stack, or if you're an established service provider thinking about containerizing your production and non-production environments but are wary of sailing in "untested" waters, consider the following points.

1. Consistency

You may have the best group of developers in the industry, but across different operating systems everyone has their own preferred way of setting things up. If you keep a well-configured local environment in Docker, a new developer only has to install Docker, spin up a container with the application, and start it.

2. Debugging

You can easily isolate and eliminate issues across the whole team's environments without needing to know how each team member's machine is set up. A good example: when we had to fix some time-sync issues on our servers by migrating from ntpd to Chrony, we only had to update the base image (without even notifying the development team).

3. Automation

Currently, most CI/CD tools, including Jenkins, CircleCI, and Travis CI, integrate fully with Docker, so you can easily promote a build from one environment to the next.

4. Cloud support

Understanding what's running on your servers requires continuous monitoring and control of your containers. As Datadog, a cloud monitoring company, put it:

The short lifespan and increased density of containers have important implications for infrastructure monitoring. They represent an order of magnitude increase in the number of things that need to be monitored individually.

Self-managed orchestration tools (for example, Docker Swarm and Kubernetes) and vendor-managed ones (for example, AWS's Elastic Container Service and Google Kubernetes Engine) solve this problem by monitoring, scheduling, and managing clusters of containers.

With its widespread use and tight integration with cloud service providers such as AWS and Google Cloud, Docker is quickly becoming the smart choice for handling your new or existing applications.
