Docker and Containers Explained: Images, Layers, and Orchestration
Containers have fundamentally changed how software is built, deployed, and run. Docker made containers accessible to every developer. Understanding how containers work under the hood makes you better at writing Dockerfiles, debugging container issues, and designing container-native architectures.
Containers vs Virtual Machines
Virtual machines virtualise hardware — each VM runs a complete OS kernel, consuming gigabytes of RAM and taking minutes to start. Containers virtualise the OS userspace — they share the host kernel but run in isolated namespaces, consuming megabytes and starting in milliseconds. Docker containers are fast, lightweight, and portable precisely because they don't carry the overhead of a full OS.
Docker Images and Layers
A Docker image is a read-only stack of layers. Each instruction in a Dockerfile creates a new layer — a set of filesystem changes. Layers are cached and reused across images and builds. When you change one line in a Dockerfile, only that layer and everything above it are rebuilt. This is why layer ordering matters: put rarely-changing instructions (installing OS packages) before frequently-changing ones (copying application code).
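A minimal sketch of this ordering, assuming a Node.js app with a server.js entry point (file names are illustrative):

```dockerfile
# Rarely changes: base image and working directory
FROM node:20.11-alpine
WORKDIR /app

# Changes only when dependencies change; cached otherwise
COPY package.json package-lock.json ./
RUN npm ci

# Changes on every commit: only this layer and later ones are rebuilt
COPY . .
CMD ["node", "server.js"]
```

Editing server.js invalidates only the final COPY layer, so npm ci is served from cache on the next build.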
Writing Good Dockerfiles
Key best practices:
- Use specific base image tags: FROM node:20.11-alpine, not FROM node:latest. latest changes unpredictably.
- Minimise layers: chain related RUN commands with &&.
- Copy package files first: COPY package.json ./ then RUN npm install before COPY . ., so the expensive install step is cached.
- Use .dockerignore: exclude node_modules, .git, and test files from the build context.
- Multi-stage builds: use a build stage with all your build tools, then copy only the output into a minimal runtime image.
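Putting the last practice together, a multi-stage build might look like this sketch, assuming a build script that compiles into dist/ (stage name, paths, and entry point are illustrative):

```dockerfile
# Build stage: full toolchain, including dev dependencies
FROM node:20.11-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies and compiled output only
FROM node:20.11-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

The final image never contains compilers, dev dependencies, or source files, which shrinks it and reduces the attack surface.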
Volumes and Data Persistence
Container filesystems are ephemeral — data written inside a container is lost when the container is removed. Volumes provide persistent storage that outlives containers. Named volumes (docker volume create my-data) are managed by Docker and stored on the host filesystem. Bind mounts map a host directory into the container — perfect for development where you want live code reloading.
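In Compose terms, the two approaches look like this sketch (service, path, and volume names are illustrative):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - pg-data:/var/lib/postgresql/data   # named volume: survives container removal
  app:
    build: .
    volumes:
      - ./src:/app/src                     # bind mount: live code reloading in development

volumes:
  pg-data:                                 # managed by Docker on the host
```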
Container Networking
Docker creates isolated networks for containers. Containers on the same network can reach each other by container name as the hostname — no need to hard-code IP addresses. Docker Compose automatically creates a network for all services in a docker-compose.yml file. The default bridge network works for development; production deployments on Kubernetes use its own networking model instead.
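A sketch of name-based discovery in Compose (service names and credentials are illustrative): the app reaches Postgres at the hostname db because Compose puts both services on the same network.

```yaml
services:
  app:
    build: .
    environment:
      # "db" resolves to the postgres container on the shared Compose network
      DATABASE_URL: postgres://app:${DB_PASSWORD}@db:5432/appdb
  db:
    image: postgres:16
```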
Environment Variables and Secrets
Never bake credentials into images. Pass configuration via environment variables (-e DATABASE_URL=... or in docker-compose.yml env_file). For production, use a secrets manager (AWS Secrets Manager, HashiCorp Vault, Kubernetes Secrets). Use our Base64 tool to decode Kubernetes secret values. Our Password Generator can create strong secrets for service accounts.
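One way to keep secrets out of both the image and version control is an env_file, as in this sketch (file and variable names are illustrative):

```yaml
services:
  app:
    build: .
    env_file:
      - .env                # holds DATABASE_URL etc.; listed in .gitignore, never committed
    environment:
      NODE_ENV: production  # non-secret config can live directly in the Compose file
```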
Docker Compose for Local Development
Docker Compose defines and runs multi-container applications with a single docker-compose.yml file. One command (docker compose up) starts your entire stack: application, database, cache, queue, and any other services. This eliminates "works on my machine" by giving every developer an identical local environment. Use our JSON to YAML converter when building Compose files from JSON configs.
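A sketch of such a stack (images, ports, and the dev password are illustrative), started with a single docker compose up:

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"         # host:container
    depends_on:             # start order: db and cache come up first
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only-password
  cache:
    image: redis:7
```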
From Docker to Kubernetes
Docker is for running individual containers. Kubernetes is for orchestrating thousands of containers across multiple machines. Kubernetes handles scheduling (which node runs which container), self-healing (restarting failed containers), scaling (adding more replicas under load), service discovery, and rolling deployments. Most Kubernetes deployments still use Docker (or containerd) as the container runtime underneath.
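For comparison, a minimal Kubernetes Deployment (names, image, and port are illustrative) asking for three replicas that Kubernetes will schedule and self-heal:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2
          ports:
            - containerPort: 3000
```

Declaring the desired state (three replicas) rather than imperative steps is what lets Kubernetes restart failed containers and roll out new versions automatically.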