Docker (Container vs Virtual Machine)

Gioacchino Lonardo
3 min read · Feb 20, 2018


Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud. [ref]

History: One App — One Physical Server

Historically, one application ran on one physical server; later this evolved into several applications on a single physical server. The problems of this approach: long deployment times and high costs (a physical server is expensive).

We were wasting resources, because we allocated a server exclusively to an application that did not use everything available. Scaling was difficult, both because physical servers cost money and sometimes because of space constraints.
There were problems migrating from one server to another: the application depended heavily on the server it ran on, so it was hard to move (it might not run at all on the new server).
We also had vendor lock-in. There was no assurance that a service running on an IBM server would also run on an HP one, so we could end up tied to a single manufacturer just to ensure that things kept working.

Containers vs. Virtual Machines

Hypervisor (left) — Container (right)

Virtual Machine: one physical server can host multiple applications, each running in its own virtual machine.

The benefits are better resource usage and easier scaling. Virtual machines are also a more consolidated technology: they support guest operating systems other than Linux and support live migration.

The limitation is that each VM still requires CPU, storage, RAM, and an entire guest operating system of its own, which means wasted resources. Replicating the guest operating system so many times is a waste of resources.

Container-based virtualization uses the kernel of the host's operating system to run multiple guest instances. This is also known as operating-system-level virtualization: the kernel allows multiple isolated user-space instances to exist.

Container

The first examples of containers appeared in 2008, and in 2013 they became very popular with the Docker project. Each guest instance is called a container, and each container has its own: root filesystem, processes, memory, devices, and network ports.
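The per-container isolation listed above can be seen directly from the Docker CLI. A minimal sketch, assuming Docker is installed and the public `alpine` image can be pulled (neither is given in the article):

```shell
# Each container gets its own root filesystem, process tree, and hostname.
docker run --rm alpine ls /         # lists alpine's root fs, not the host's
docker run --rm alpine ps           # the container's own (tiny) process tree
docker run --rm alpine hostname     # a container-specific hostname
```

Running the same commands directly on the host gives different answers, which is exactly the isolation the paragraph describes.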

Container-based virtualization uses the kernel of the host operating system to run guest instances, which is why it is also known as operating-system-level virtualization. Note that, compared to the hypervisor approach, the figure lacks some layers. There is no longer an intermediate hypervisor layer (you could say that the role of the hypervisor is played by the host OS kernel, although that is not exactly correct as a definition). Above all, the guest OS is no longer replicated for every virtual instance we dedicate to an application. The host OS kernel allows multiple isolated instances (containers) to coexist in user space.
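The kernel feature behind these isolated user-space instances is namespaces. A minimal sketch using `unshare` from util-linux (this assumes a Linux host that allows unprivileged user namespaces; the hostname `demo-container` is just an illustrative name):

```shell
# Start a shell in new user + UTS namespaces and change its hostname.
# The change is visible only inside the namespace, not on the host.
unshare -r --uts sh -c 'hostname demo-container; hostname'
# Back outside, the host's hostname is untouched:
hostname
```

This is the same mechanism a container runtime uses, just driven by hand: one kernel, multiple isolated views of system state.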

An early example of this type of technology, so not a very recent one, is Linux Containers (LXC), which debuted in 2008 and allows exactly what was described above. In fact, Docker itself was born as a tool to manage Linux containers, based on LXC technology.

Later, the Docker project wrote its own container-management library called libcontainer. It no longer uses the LXC approach, although the functionality has remained the same.

This technology is the latest frontier of virtualization and became famous thanks to the Docker project.



Written by Gioacchino Lonardo

Computer Engineer, AI enthusiast, biker, guitar&bass player
