Glossary

What is Docker Architecture and How Does it Work?

Docker is an open-source software development platform that is used by developers to build, package and run applications inside containers. This article will look at some key terms around Docker architecture that can help you understand how it’s used, and go over how to think about Docker in the context of application security. 

What is Docker Container Approach and its Advantages?

To understand Docker, first we need a crash course on containers. Containers take an application and its dependencies and package them together into a single portable unit. This allows the application to behave consistently across different environments. While containers share the host's operating system kernel, they are isolated from one another. This is similar to Virtual Machines (VMs), but where a VM virtualizes an entire hardware stack, including its own guest operating system, containers virtualize at the operating system level and are therefore much more lightweight. Containers are usually dedicated to one specific task and then networked together. 

Docker is a software development platform that makes it easy for developers to create and deploy apps inside containers. Docker containers can be deployed anywhere and will behave the same without compatibility issues, making apps much easier to develop, test and maintain. In addition, Docker saves on resources compared to virtual machines, as it avoids the overhead of virtualizing an entire hardware server. Docker is a great fit for DevOps environments, as its flexibility and consistency across environments suit a continuous deployment and testing model of software delivery. 

What is Docker Architecture?

Docker's architecture is made up of several components, and understanding them will help you see how Docker works and functions as part of a DevOps environment. Let's look at some of the main elements: 

Docker Client

This is how users interact with Docker. The client provides a Command Line Interface (CLI) that makes it easy to issue commands to the Docker Daemon, which continually listens for and responds to API requests. The Docker Client is mainly used for pulling images from a registry and running them on a Docker Host. 
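A typical interaction through the Docker Client looks like the following sketch; the image, container name and port mapping are illustrative, and each command is relayed by the CLI to the Docker Daemon, which does the actual work:

```shell
# Pull an image from a registry (Docker Hub by default)
docker pull nginx:1.25

# Run a container from that image, mapping host port 8080 to container port 80
docker run --detach --name web -p 8080:80 nginx:1.25

# List running containers, then ask the daemon to stop this one
docker ps
docker stop web
```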

Docker Host

This is where the Docker Daemon, images, containers, networking and storage reside. When container images are requested by the Docker Client, they are built by the Docker Daemon using the instructions in a build file, which can include additional steps such as preloading extra components. There are multiple objects which can be used to create an application, including: 

  • Images: These are the read-only templates used to build containers. They can be customized to change an existing configuration, shared across teams, or distributed more widely using public registries. Docker images also provide strong version control, as an application can easily be rolled back to a previous image version where necessary. 
  • Networking: Developers can choose between Docker's default networks and user-defined networks, depending on their business context. Three networks are available by default after installing Docker — none, host and bridge. The user-defined network drivers are bridge, overlay and macvlan, the last of which gives containers their own MAC addresses so they appear as physical devices on the network. 
  • Storage: For persistent data which you don't lose when a container is not running, Docker offers four options. Data volumes sit on the host file system; data volume containers are independent of the application container itself, so they can be shared; directory mounts (bind mounts) map any directory on the host machine into a container; and storage plugins can be used to connect to various external storage platforms. 
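The build file mentioned above is typically a Dockerfile. A minimal sketch, in which the base image, file names and start command are illustrative:

```dockerfile
# Start from a read-only base image pulled from a registry
FROM python:3.12-slim

# Preload additional components: install dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY . .

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Running `docker build -t my-app .` asks the Docker Daemon to execute each instruction in turn, producing a new image layer per step.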

Docker Registry

Public and private registries are places where you can store and download container images. While Docker Hub, operated by Docker, is the most popular public registry, developers also use registries such as Google Container Registry, Quay.io and Artifactory, as well as private registries from which images can be pulled across an organization. 
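Pushing an image follows the same pattern regardless of which registry is used. In this sketch, the registry host (`registry.example.com`) and repository path are illustrative placeholders:

```shell
# Tag a local image with the registry's address and repository path
docker tag my-app:1.0 registry.example.com/team/my-app:1.0

# Authenticate against the registry, then upload the image
docker login registry.example.com
docker push registry.example.com/team/my-app:1.0

# Colleagues anywhere in the organization can now pull the same image
docker pull registry.example.com/team/my-app:1.0
```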

Kubernetes vs Docker

The truth is, Kubernetes vs Docker is not the right question, as both are usually necessary, and they work well together. While Docker creates and runs containers, Kubernetes is an orchestration platform that allows organizations to manage multiple Docker containers across clusters of machines. 

While Docker may be used to build and package an application into a container in the first place, Kubernetes steps in to deploy the containers and scale them across a cluster, to fulfill tasks such as ensuring the application can seamlessly manage the volume of traffic, recovering automatically from any system failures, and making updates continuously without the business experiencing downtime. 
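As a sketch of how the two fit together, a Kubernetes Deployment manifest describes the desired state for a set of Docker containers, and Kubernetes keeps the cluster matching it. The names, image reference and replica count below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # Kubernetes keeps three copies running across the cluster
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/team/my-app:1.0   # a Docker image built and pushed earlier
        ports:
        - containerPort: 80
```

If a node fails or a container becomes unhealthy, Kubernetes automatically reschedules or replaces the affected containers to restore the declared replica count.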

  • Primary Role: Docker is a containerization platform (creates and runs containers), while Kubernetes is a container orchestration platform (manages and scales containers in production). 
  • Scope: Docker is responsible for individual containers, ensuring they run consistently; Kubernetes manages multiple containers across clusters of machines. 
  • Deployment Focus: Docker focuses on packaging applications and dependencies; Kubernetes focuses on orchestrating how those applications are deployed, managed, and scaled. 
  • Scaling: Docker scales containers manually or with basic Docker tools; Kubernetes automatically scales containers based on traffic or load. 
  • Networking: Docker provides basic networking between containers; Kubernetes manages complex networking (service discovery, load balancing) across clusters. 
  • Availability: Docker primarily handles containers on a single host; Kubernetes ensures high availability by distributing containers across multiple hosts (nodes). 
  • Container Health: Docker allows manual management of container states; Kubernetes automatically restarts or replaces unhealthy containers. 

What Do I Need to Know About Docker and Application Security?

Like any technology, Docker can open your organization up to risk if best practices for application security aren't maintained. For example, a developer may pull a base image that contains vulnerabilities, or write a customization that exposes an unnecessary port to the web. Images pulled from public registries may also carry vulnerable dependencies, such as out-of-date libraries, or even malicious packages. 

Scanning Docker files at the earliest possible stage of the Software Development Lifecycle (SDLC), and continuously throughout application development, is key to avoiding costly rework and reducing the likelihood of a cyber attack. 

Checkmarx One is a complete application security platform that allows teams to shift left on application vulnerabilities and coding errors, and shift everywhere from code to cloud, to detect vulnerabilities wherever they might be.

When it comes to Docker, developers can scan the code packaged inside Docker containers to identify vulnerabilities such as SQL injection and cross-site scripting (XSS), as well as insecure general coding practices. Checkmarx One also includes Infrastructure as Code (IaC) scanning, analyzing configuration files to uncover vulnerable base images or containers with excessive privileges. Especially relevant when using public registries, developers can use the software composition analysis tools within the Checkmarx platform to identify vulnerabilities in open-source libraries and in any dependencies included in a Docker image. 
Looking for an application security solution that covers all your bases? Learn more about Checkmarx One by requesting a demo.