Introducing Docker: A Deep Technical Overview

Amit Patil · Published in Clairvoyant Blog · 7 min read · Dec 7, 2023

Exploring the architecture of Docker that powers modern containerization.

Docker, a powerful containerization platform, has transformed the way we develop, deploy, and manage applications. Its user-friendly interface and robust ecosystem make it a top choice for many developers. But what goes on under the hood? How does Docker achieve the magic of packaging applications into lightweight, portable containers? In this blog, we’ll explore Docker’s technical architecture at a deeper level.

The Docker Ecosystem

Before we dive into Docker’s technical concepts, let’s establish a basic understanding of its ecosystem. Docker leverages container technology to encapsulate applications and their dependencies. This encapsulation provides consistency and portability, making it easier to move applications across different environments, from development to testing and production.

Understanding Images and Containers

Image — An image is a standalone, executable package that has everything your app needs to function properly, including your code, runtime, libraries, and other dependencies. It is read-only and is brought to life by running it as a container.

The image also contains other configurations for the container, such as environment variables, a default command to run, and other metadata.
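For example, a minimal, purely illustrative Dockerfile bakes an environment variable and a default command into the image metadata; the image name demo-image is just a placeholder:

$ cat > Dockerfile <<'EOF'
FROM alpine
ENV APP_ENV=production
CMD ["sh", "-c", "echo running in $APP_ENV mode"]
EOF
$ docker build -t demo-image .
$ docker run --rm demo-image    # prints "running in production mode" using the baked-in ENV and CMD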

Container — A container is a sandboxed process running on a host machine that is isolated from all other processes running on that host machine.

  • Is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI (see the example after this list).
  • Can be run on local machines, virtual machines, or deployed to the cloud.
  • Is portable (and can be run on any OS).
  • Is isolated from other containers and runs its own software, binaries, configurations, etc.
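As a rough sketch of that lifecycle with the CLI (the image and container names are only illustrative):

$ docker create --name web nginx    # create a container from an image, without starting it
$ docker start web                  # start the container
$ docker stop web                   # stop it gracefully
$ docker rm web                     # delete it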

Architectural Diagram:

Fig: Docker architecture (source: official Docker docs)

Docker Daemon: The Heart of Container Management

At the core of Docker’s technical architecture, we have the Docker Daemon. This critical component serves as the heart of container management, orchestrating the creation, execution, and supervision of containers.

Responsibilities of the Docker Daemon:

  • Container Management: The Docker Daemon is responsible for overseeing the complete lifecycle of containers. It starts and stops containers, ensuring they run as expected.
  • Image Handling: Docker Daemon manages container images, acting as the intermediary between the user and the image repository. It retrieves, stores, and shares images from a central registry, such as Docker Hub or private registries.
  • Communication: It serves as the communication bridge between the Docker Client and the container runtime, relaying instructions from the client to the runtime and vice versa.
  • Resource Allocation: Docker Daemon manages resource allocation for containers, ensuring they have access to the necessary CPU, memory, and storage resources.
  • Decentralized Approach: While Docker Daemon is crucial, it’s worth noting that it can be distributed across multiple hosts in a Docker Swarm cluster for high availability and load balancing. This ensures that your containerized applications remain resilient and scalable.

In essence, the Docker Daemon is the powerhouse that executes your containerized applications, taking care of everything from initialization to resource management.
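As a quick illustration of that client/daemon split, you can talk to the daemon’s REST API directly over its Unix socket, which is what the CLI does on your behalf (the exact response fields vary by Docker version):

$ curl --unix-socket /var/run/docker.sock http://localhost/version           # same information as 'docker version'
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json   # running containers, like 'docker ps'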

Docker Client: Your Control Hub

While the Docker Daemon handles the technical aspects of container management, the Docker Client serves as your control hub, enabling users to interact with Docker through a user-friendly interface.

Key Functions of the Docker Client:

  • Command Interface: The Docker Client provides a command-line interface (CLI) that allows users to interact with the Docker Daemon. Users issue commands to create, run, inspect, and manage containers.
  • User Interface: Beyond the CLI, Docker also offers graphical user interfaces (GUIs) that simplify container management, making it accessible to users who prefer a visual approach.
  • Communication: The Docker Client communicates with the Docker Daemon to convey user commands and receive information about container status and events.
  • Multi-Platform Support: Docker Client is available for various platforms, including Windows, macOS, and Linux (various distributions), making Docker accessible to a wide user base.
  • Remote Management: It allows users to connect to remote Docker Daemons, which is particularly valuable in scenarios where containers run on different hosts.

The Docker Client is the user’s primary gateway to Docker, providing a straightforward means to interact with containers and manage them. Whether you’re a developer, system administrator, or operations team member, the Docker Client is your tool of choice for container orchestration.
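For example, pointing the client at a remote daemon can be done with a flag, an environment variable, or a named context; the host names below are placeholders:

$ docker -H ssh://user@remote-host ps                                    # one-off command against a remote daemon over SSH
$ export DOCKER_HOST=ssh://user@remote-host                              # or set it for the whole shell session
$ docker context create remote --docker "host=ssh://user@remote-host"    # or save it as a reusable context
$ docker context use remote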

Understanding the roles of both the Docker Daemon and Docker Client is key to mastering Docker’s technical architecture, as they work in tandem to make containerization accessible and efficient.

What Is a Docker Registry?

A Docker registry is a centralized repository for storing and sharing Docker container images. It acts as a library where Docker images are stored, organized, and made accessible to users and systems.

Docker Hub: Docker Hub is one of the most popular public Docker registries. It hosts a vast number of publicly available Docker images, making it a valuable resource for the Docker community.
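A typical interaction with a registry looks like this; the repository name my-user/my-app is just a placeholder:

$ docker pull nginx:latest                      # download an image from Docker Hub
$ docker tag nginx:latest my-user/my-app:1.0    # re-tag it under your own repository
$ docker login                                  # authenticate against the registry
$ docker push my-user/my-app:1.0                # upload the tagged image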

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses technologies called namespaces and cgroups to provide the isolated workspace called a container. When you run a container, Docker creates a set of namespaces for that container.
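You can see those namespaces and cgroups on the host itself. A hedged sketch, assuming a container named web is running and you have root access:

$ docker run -d --name web nginx
$ PID=$(docker inspect -f '{{.State.Pid}}' web)    # PID of the container's main process on the host
$ sudo ls -l /proc/$PID/ns                         # the namespaces (pid, net, mnt, uts, ipc, ...) created for it
$ cat /proc/$PID/cgroup                            # the cgroup used to account for and limit its resources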

How Docker Internally Uses containerd


Docker originally created containerd, which has since found a home in the CNCF (Cloud Native Computing Foundation), the organization behind many cloud-native projects, including Kubernetes. It’s worth noting that Docker remains an independent project that relies on containerd as its runtime.

Docker assumes the role of the conductor in the container performance, instructing containerd to create containers from images, while the host operating system provides crucial underpinnings like namespaces and cgroups.

Runc is a container runtime that adheres to the Open Container Initiative (OCI) standards. It effectively implements the OCI specification, ensuring compatibility and consistency in the containerization ecosystem. By complying with these industry standards, runc facilitates the creation and management of containers while promoting interoperability among different container runtimes and tools.

Runc works behind the scenes as a low-level library and CLI, configuring the environment for containers, overseeing their execution, and ensuring their isolation and security, adding yet another layer to the orchestration of containerized applications.
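One rough way to see this layering on a host with a running container is to look at the processes; exact process names differ between versions (newer releases, for example, use a containerd-shim-runc-v2 process per container):

$ ps -e -o pid,ppid,cmd | grep -E 'dockerd|containerd' | grep -v grep
# dockerd talks to containerd over /run/containerd/containerd.sock;
# containerd starts a shim process per container, and the shim invokes runc
# to actually create the container from the OCI bundle.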

Fig: The left image shows the status of Docker and containerd running on a Docker client machine, and the right image shows how Docker internally uses containerd and runc (reference).

The section above covers most of Docker’s architecture and the way it works.

Now let’s see how to install Docker Engine as a test use case.

Docker can be installed on most Linux distributions.

Below are the steps to install it on Ubuntu Linux using the convenience script.

$ curl -fsSL https://test.docker.com -o test-docker.sh
$ sudo sh test-docker.sh
root@ubuntu-01:~# curl -fsSL https://test.docker.com -o test-docker.sh
root@ubuntu-01:~# sh test-docker.sh
# Executing docker install script, commit: e5543d473431b782227f8908005543bb4389b8de
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
debconf: delaying package configuration, since apt-utils is not installed
+ sh -c install -m 0755 -d /etc/apt/keyrings
+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
+ sh -c chmod a+r /etc/apt/keyrings/docker.gpg
+ sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy test" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null
debconf: delaying package configuration, since apt-utils is not installed

================================================================================

To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.


To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
to root access on the host. Refer to the 'Docker daemon attack surface'
documentation for details: https://docs.docker.com/go/attack-surface/

================================================================================

root@ubuntu-01:~# systemctl status docker
○ docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead)
TriggeredBy: ○ docker.socket
Docs: https://docs.docker.com

root@ubuntu-01:~# systemctl start docker

root@ubuntu-01:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-10-19 07:06:57 UTC; 2s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 1841 (dockerd)
Tasks: 8
Memory: 25.1M
CPU: 91ms
CGroup: /system.slice/docker.service
└─1841 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Oct 19 07:06:57 ubuntu-01 systemd[1]: Starting Docker Application Container Engine...
Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.403652627Z" level=info msg="Starting up"
Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.425938298Z" level=info msg="Loading containers: start."
Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.473280163Z" level=info msg="Loading containers: done."
Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.483569917Z" level=info msg="Docker daemon" commit=1a79695 graphdriver>Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.484064453Z" level=info msg="Daemon has completed initialization"
Oct 19 07:06:57 ubuntu-01 dockerd[1841]: time="2023-10-19T07:06:57.511997780Z" level=info msg="API listen on /run/docker.sock"
Oct 19 07:06:57 ubuntu-01 systemd[1]: Started Docker Application Container Engine.

Please refer to the official docs to perform the post-installation steps. By default, Docker can only be managed by the root user or by users with sudo access, so if non-sudo users need to manage Docker, the docs explain how to configure that as per your needs.
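As a hedged sketch of those post-installation steps (check the official docs for your distribution before running them):

$ sudo groupadd docker              # create the docker group if it does not already exist
$ sudo usermod -aG docker $USER     # add your user to the docker group
$ newgrp docker                     # re-evaluate group membership without logging out
$ docker run hello-world            # verify that the non-root user can now talk to the daemon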

In our next blog, we’ll explore how to create images, run containers, and learn how to manage them.

Thank You!!
