Containerisation
Warning
Work in progress.
Virtual machines are completely isolated, virtualised systems - hardware, operating system, everything. Containerisation is built around "containers", which are like virtual machines except that they're lightning fast and very lightweight on resources - but they're not fully isolated.
If you run a Linux distribution inside a virtual machine, you get an entire kernel running and interacting with the virtualised hardware. Your virtualised hardware is also private to your virtual machine, but the underlying physical hardware is not.
With containerisation you create containers, which do not have a fully isolated, virtualised system. Instead, a container shares the kernel of the system it's running on (which can be Windows, Linux, etc.). It also uses a "container engine" so that the same container can be executed on any system.
When we explore containers, you'll see that you can have a container based on Linux but run it on Windows.
Let's visualise the relationship between a virtual machine and a containerised application:
You'll notice I've kept all the virtualisation layers: the hypervisor, virtual machines, etc. That's how you're going to see things in the wild, because an EC2 Instance is a virtual machine, and inside that VM's OS you're going to see the container engine, and so on.
Now look at the arrow going from bottom to top. Notice how, on the left, the virtual machine running the "container engine" has its kernel and libraries used by the container engine. Each container, in turn, shares that same kernel and those libraries. That's the lack of (true) isolation between the containers. It isn't a bad thing; it's simply a technical fact.
Looking at the two virtual machines to the right, we can see a Windows Server VM and an Ubuntu Linux VM. Each has its own, truly and fully isolated, kernel and libraries.
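You can see this kernel sharing for yourself. Here's a quick sketch, assuming you have Docker installed on a Linux host and the ubuntu:22.04 image already pulled (both are my assumptions, not requirements from this material): it prints the kernel version reported by the host and by a container.

```python
import subprocess

def run(command: list[str]) -> str:
    """Run a command and return its stdout, stripped of trailing whitespace."""
    return subprocess.run(command, capture_output=True, text=True, check=True).stdout.strip()

# Kernel version as seen by the host itself.
host_kernel = run(["uname", "-r"])

# Kernel version as seen from inside an Ubuntu container. The container
# brings Ubuntu's libraries and tools with it, but no kernel of its own.
container_kernel = run(["docker", "run", "--rm", "ubuntu:22.04", "uname", "-r"])

print(f"Host kernel:      {host_kernel}")
print(f"Container kernel: {container_kernel}")
```

On a Linux host both lines print the same version string, because the container is borrowing the host's kernel rather than booting one of its own. (On Windows or macOS the container instead reports the kernel of the small Linux VM that Docker Desktop quietly runs for you - which is exactly the "Linux container on Windows" trick mentioned earlier.)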
So what's the whole point, then?
"Containers sit on top of a physical [or virtual] server and its host OS—for example, Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Containers are thus exceptionally “light”—they are only megabytes in size and take just seconds to start, versus gigabytes and minutes for a VM." - Doug Jones @ netapp.com
Because containers share the same kernel, file systems, networking stack, etc., they're very fast to start. As Mr Jones says: "seconds versus minutes." They're also very small: "megabytes versus gigabytes."
A container doesn't generally contain an entire operating system (although it sort of does - typically a distribution's userland without the kernel), and that's why it shares the underlying kernel. A virtual machine, by comparison, does have a completely isolated operating system with its own kernel, libraries, software, ..., everything. That's why containers are considered "lightweight", and it's also why they start up very quickly.
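To make "seconds versus minutes" concrete, here's a rough sketch along the same lines as before - it assumes Docker is installed and the alpine:3.19 image has already been pulled, so we're timing the container's start-up rather than an image download:

```python
import subprocess
import time

start = time.perf_counter()
# Create a container, start it, run `true` (do nothing), then remove it.
subprocess.run(["docker", "run", "--rm", "alpine:3.19", "true"], check=True)
elapsed = time.perf_counter() - start

print(f"Container created, started, ran, and was removed in {elapsed:.2f}s")
```

On a warm Docker daemon that typically comes back in around a second, and the alpine image itself is only a few megabytes. Booting a full virtual machine - with its own kernel, init system, services and so on - is measured in tens of seconds to minutes, and its disk image in gigabytes.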
You're going to be hearing about containers a lot. They're an excellent way of deploying applications when used correctly. Virtual machines are "heavier" and slower to deploy, but they're more isolated from each other, granting much stronger security guarantees (but remember, nothing is ever truly secure).
You will work with containers in your career, but right now I simply want you to understand them from a high level.
Key Points
Warning
Work in progress.