Anyone who has used virtual machines (VMs) and Docker containers knows that Docker images tend to be much smaller than VM images and start orders of magnitude faster. But why is that? What is the difference between a VM and a Docker container?
TL;DR
- A VM executes an entire operating system. You can run a Windows guest on a Linux host, a Linux guest on a Windows host, a DOS guest on a Linux host, etc. It takes a long time to boot, though.
- Docker uses the host kernel, so you can’t switch operating systems. To run Linux containers, you need Linux or a Linux VM. To run Windows containers, you need Windows. There is no such thing as MacOS containers, because Apple.
- A VM loads an entire disk image, sector by sector. Docker loads a layered virtual file system, which contains only the bits actually needed by the containerized processes.
- Processes inside a VM are invisible to the host OS. Processes inside a Docker container are regular host OS processes, sandboxed using kernel features such as cgroups.
Kernel vs Userland
Before we dive into other details, let’s get the concept of a “kernel” out of the way. The kernel is a program that has direct control of the entire machine[1] and provides services like access to hardware, security, scheduling of other programs, memory management, and the like. All other programs, called userland programs, run under the control of the kernel and don’t have direct access to all resources. They interact with the kernel through a binary interface, a.k.a. system calls.
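As a quick illustration on a Linux machine, the kernel publishes its own identity to userland through the /proc virtual filesystem. (The strace line assumes strace is installed, so it is shown commented out as a sketch.)

```shell
# Every userland action that touches hardware goes through a system call;
# even reading a file is a request to the kernel. The kernel also exposes
# information to userland through the /proc virtual filesystem:
cat /proc/version   # kernel build string, reported by the kernel itself
# If strace is installed, you can watch the system calls a program makes:
# strace -e trace=openat,read cat /proc/version
```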
In Linux, the kernel code is publicly available, and people are free to combine it with any set of userland programs they see fit[2]. Different flavors of Linux such as Ubuntu, Debian, Alpine, etc. share the same kernel*, but provide a different set of userland programs that come with it.
In Windows and MacOS, the kernel is proprietary and much more tightly coupled with the userland utilities. Still, one can consider Windows 10 and Windows Server 2016 to be different “distros” of the same kernel. Unlike with Linux, you cannot take the Windows 10 kernel and build your own userland distribution around it. First, many details of the kernel are proprietary, and second, doing so would violate the licensing terms (but then check out the COSMOS project).
* Different Linux distros may use different versions of the Linux kernel, but the API between the Linux kernel and userland is very stable.
Virtual Machines
A virtual machine runs a complete operating system, kernel and userland utilities included, on an isolated portion of the host machine’s hardware. The only requirements are that the guest OS is compatible with the host hardware, and that the host machine has enough resources to run it. You can run DOS, Windows XP, Ubuntu, FreeBSD, or OS/2 on a Windows 11, Windows XP, or Ubuntu host.
Running any OS on any host is great, but it comes with a number of downsides:
- Virtual machine images are usually rather large, since they must contain the entire OS.
- Starting a virtual machine takes as long as booting a real hardware machine, or even longer, since virtualization adds performance overhead.
- Virtual machine images are hard to manipulate. Adding a single file to the image requires rebuilding the entire image.
Docker
Docker is a technology that runs a set of guest userland programs on the host machine. There are no kernel bits in Docker images. All processes running under Docker use the host OS kernel.
Docker was initially developed for Linux. All versions of the Linux kernel are rather similar, which is what allows one, for example, to “run Debian on Ubuntu” using Docker. In reality, when doing so, we run the Debian userland on the Ubuntu kernel. For example, the recently released Debian 13 “Trixie” uses Linux kernel version 6.12[3].
Let’s explore what happens when I run it in a Docker container on my Ubuntu 22.04 machine:
```shell
ikriv@jupiter:~$ head -n 1 /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"    # userland file containing the name of the OS
ikriv@jupiter:~$ uname -r
6.8.0-60-generic                    # Ubuntu's version of the Linux kernel
ikriv@jupiter:~$ sudo docker run -it debian:13 bash   # run Debian 13 "Trixie" in Docker
root@bd7a6335bd65:/# head -n 1 /etc/os-release
PRETTY_NAME="Debian GNU/Linux 13 (trixie)"   # userland file containing the name of the OS
root@bd7a6335bd65:/# uname -r
6.8.0-60-generic                    # still kernel 6.8, not 6.12 like it would be
                                    # on an actual Debian 13 Trixie machine
```
This means that any Debian programs relying on new kernel features not found in Linux kernel 6.8 would fail when run on Ubuntu 22.04, but apparently such cases are rather rare.
So, Docker technology on its own does not run a virtual machine: all containers use the host kernel. Every process running in a Docker container is a true host OS process, albeit in a sandboxed environment built on a kernel technology called cgroups[4].
Running Linux Containers on Windows and MacOS
If Docker containers use the host kernel, how can we run Linux images on Windows? This is where it gets tricky. The thing is, on Windows Docker actually uses… a virtual machine, namely WSL (Windows Subsystem for Linux), with a Linux kernel built by Microsoft[5]. However, a single virtual machine runs all Docker containers, so startup is still fast.
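One way to spot the hidden VM is to ask uname from inside a container. The docker line is a sketch (it requires Docker Desktop), so it is shown commented out, with typical rather than guaranteed output:

```shell
# uname -r reports the kernel actually in use. On a Linux host, containers
# see the host's kernel; on Docker Desktop they see the utility VM's kernel.
uname -r   # on a Linux host: the host kernel, e.g. "6.8.0-60-generic"
# On Windows or MacOS with Docker Desktop (sketch; requires Docker Desktop):
# docker run --rm alpine uname -r
# typically prints a "-microsoft-standard-WSL2" kernel on Windows,
# or a "-linuxkit" kernel on MacOS -- neither is the host OS kernel.
```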
A similar setup exists on MacOS[6].
Running Windows Containers
Windows containers cannot be run on Linux[7].
To run Windows containers on Windows, you need a version of Windows that supports containerization, i.e. a server or a Pro version[8].
There is no such thing as MacOS containers, allegedly because the MacOS kernel does not support containerization, and because of licensing concerns[9].
Running ARM containers on x64 CPU and vice versa
Besides operating system and kernel differences, there are also hardware differences. Some modern computers use x64 processors, a.k.a. amd64, while others use ARM, a.k.a. arm64. Surprisingly, this is less of an issue than differing kernel APIs. Docker can use a hardware emulator called QEMU. Running ARM images on x64 and vice versa is possible (and, I believe, works out of the box on MacOS), but it may require special setup. Without it, the attempt fails:
```shell
ikriv@jupiter:~$ sudo docker run -it --rm --platform linux/arm64 alpine
exec /bin/sh: exec format error
```
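On Linux, the usual fix is to register QEMU handlers with the kernel’s binfmt_misc facility, so that foreign-architecture binaries are emulated transparently. A sketch (the tonistiigi/binfmt image comes from the Docker buildx project; the docker commands require Docker and root, so they are shown commented out):

```shell
# binfmt_misc is the kernel facility that lets QEMU transparently run
# foreign-architecture binaries. Registered handlers, if any, live here:
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null || echo "binfmt_misc not mounted"
# A common way to register QEMU handlers for all architectures
# (sketch; requires Docker and root):
# docker run --privileged --rm tonistiigi/binfmt --install all
# After that, the failing command above succeeds:
# docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64
```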
So, again, just like with the kernels, Docker on its own is unable to bridge the gap between CPU architectures, but it can be paired with other tools to do so.
Conclusion
Modern software is complex and the borders between concepts are sometimes fuzzy. Unlike virtual machines, Docker container technology on its own cannot run programs from one OS on another OS. However, in practice the situation is more nuanced: Linux family OSes are “close enough”, so usually there is no problem running a Debian container on Ubuntu. Other cases require special consideration. E.g. when running Linux containers on Windows or MacOS, Docker does use a virtual machine, albeit one shared by all containers. Running Windows containers on Linux or Mac is currently impossible: no such workaround exists, and Docker on its own cannot bridge the OS gap.