Prerequisites for Deployment
Before you can deploy a Docker container, you need a few things in place. This section covers the essential software and concepts; meeting these prerequisites will make the deployment process much smoother.

Your Checklist
- ✔ Docker Installed: The Docker Engine must be installed on your local machine for building images and on the target deployment server (see the verification commands after this list).
- ✔ A Docker Image: You need a container image to deploy. This can be an image you've pulled from a registry like Docker Hub, or one you've built yourself from a Dockerfile.
- ✔ Deployment Environment: You need a place to run your container. This could be your local machine, a virtual private server (VPS), or a cloud platform like AWS, GCP, or Azure.
- ✔ Application Code: The source code and all dependencies for your application, ready to be containerized.
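Quick Verification
You can confirm Docker is installed and that its daemon is running before going any further:
docker --version   # prints the installed Docker client version
docker info        # fails with an error if the Docker daemon is not running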
Building a Docker Image
An image is a blueprint for your container. It contains your application code, a runtime, libraries, and other dependencies. You define this blueprint in a special file called a `Dockerfile`. This file lists the step-by-step instructions Docker follows to assemble your image.
Example `Dockerfile` for a Node.js App
Create a file named `Dockerfile` (no extension) in your project's root directory and add the following content. This example creates a simple environment for a Node.js application.
# Use an official Node.js runtime as a parent image
FROM node:18-alpine
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Your app binds to port 3000, so expose it
EXPOSE 3000
# Define the command to run your app
CMD [ "node", "server.js" ]
Build Command
Once your `Dockerfile` is ready, navigate to its directory in your terminal and run the `docker build` command:
docker build -t your-app-name .
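After the build completes, the image appears in your local image list. You can also add an explicit version tag (the `1.0` here is just an example) to keep track of what you deploy:
docker build -t your-app-name:1.0 .
docker images your-app-name   # lists local images with that repository name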
Running vs. Deploying a Container
While you use `docker run` to start a container, "deployment" implies a more formal and robust process. Deployment is about running containers in a production-ready, scalable, and manageable way: with restart policies, published ports, externalized configuration, persistent data, and monitoring, all covered in the sections below.
Deploying a Single Container
The most basic form of deployment is running a container on a server. You can use the `docker run` command with specific flags to manage how it behaves. The `-d` flag runs the container in "detached" mode (in the background), and `--name` gives it a recognizable name.
Basic Deployment Command
This command starts a container from the `your-app-name` image we built earlier. It runs in the background and will be named `my-running-app`.
docker run -d --name my-running-app your-app-name
Auto-Restart Policy
For production, you want your container to restart automatically if it crashes or the server reboots. Use the `--restart` flag; `unless-stopped` restarts the container in every case except when you stop it manually (the other accepted values are `no`, `on-failure`, and `always`).
docker run -d --restart unless-stopped --name my-running-app your-app-name
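To confirm which restart policy a running container has, query it with `docker inspect` and a Go template:
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' my-running-app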
Exposing Ports
For your application to be accessible from outside the Docker host (i.e., from the internet or your local network), you need to map a port on the host machine to a port inside the container. This is done with the `-p` or `--publish` flag.
Port Mapping Syntax
The format is `-p [HOST_PORT]:[CONTAINER_PORT]`. This command maps port 8080 on the host to port 3000 inside the container (which our example Node.js app uses).
docker run -d -p 8080:3000 --name my-web-app your-app-name
Now, you can access your application by navigating to `http://[YOUR_SERVER_IP]:8080` in a web browser.
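You can double-check the mapping on a running container with `docker port`:
docker port my-web-app   # prints something like: 3000/tcp -> 0.0.0.0:8080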
Multi-Container Deployments with Docker Compose
Most real-world applications aren't just one service; they often require a database, a cache, or other backend services. Docker Compose is a tool for defining and running multi-container Docker applications. You use a YAML file to configure your application's services.
Example `docker-compose.yml`
This file defines two services: `webapp` (our application) and `db` (a PostgreSQL database). Compose handles networking between them automatically.
version: '3.8'
services:
  webapp:
    build: .
    ports:
      - "8080:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/mydatabase
    depends_on:
      - db
  db:
    image: postgres:14-alpine
    restart: always
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydatabase
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Deployment with Compose
To deploy this entire stack, navigate to the directory containing your `docker-compose.yml` file and run:
docker-compose up -d
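Compose provides matching commands for inspecting and tearing down the stack (with the Compose V2 plugin, the same commands are spelled `docker compose ...`):
docker-compose ps        # list this stack's services and their state
docker-compose logs -f   # follow the combined logs of all services
docker-compose down      # stop and remove the stack's containers and network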
Managing Environment Variables
You should never hard-code configuration like API keys or database passwords into your Docker image. Environment variables are the standard way to supply this configuration to your container at runtime. Here are the most common methods.
Method 1: The `-e` Flag
You can pass variables one by one directly on the command line. This is good for a small number of variables.
docker run -d -p 8080:3000 -e "API_KEY=12345abcdef" -e "NODE_ENV=production" your-app-name
Method 2: Using an `--env-file`
For many variables, it's cleaner to put them in a file (e.g., `production.env`) and pass that file to the container.
# Contents of production.env
API_KEY=12345abcdef
NODE_ENV=production
docker run -d -p 8080:3000 --env-file ./production.env your-app-name
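To verify the variables actually reached the container, run `env` inside it; this assumes you started the container with `--name my-running-app`:
docker exec my-running-app env | grep API_KEY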
Method 3: In `docker-compose.yml`
Docker Compose has built-in support for environment variables, either directly in the file or from an external `.env` file.
services:
  webapp:
    build: .
    ports:
      - "8080:3000"
    environment:
      - API_KEY=12345abcdef
      - NODE_ENV=production
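Compose can also load variables from an external file via the `env_file` key, keeping secrets out of the Compose file itself. A minimal sketch, assuming the `production.env` file from Method 2 sits next to `docker-compose.yml`:
services:
  webapp:
    build: .
    env_file:
      - ./production.env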
Persisting Data
By default, any data created inside a container is lost when the container is removed. To persist data (like a database's files or user uploads), you must store it on the Docker host. There are two main ways to do this: volumes and bind mounts.
Docker Volumes
Volumes are the preferred mechanism for persisting data. They are created and managed by Docker and stored in a dedicated area on the host filesystem (typically under `/var/lib/docker/volumes/` on Linux), which makes them easier to back up, migrate, and share between containers than bind mounts.
# -v [VOLUME_NAME]:[CONTAINER_PATH]
docker run -d -v my-app-data:/app/data your-app-name
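Docker manages volumes directly, so you can list, inspect, and delete them from the CLI; `docker volume inspect` shows where the data actually lives on the host:
docker volume ls                    # list all volumes
docker volume inspect my-app-data   # show the volume's mount point on the host
docker volume rm my-app-data        # delete the volume and its data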
Bind Mounts
Bind mounts map a specific file or directory from your host machine into the container. This is useful for development when you want to reflect source code changes into a container instantly.
# -v [HOST_PATH]:[CONTAINER_PATH]
docker run -d -v /path/on/host:/app/data your-app-name
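For example, during development you can mount your project directory over the image's application directory so code changes appear without rebuilding. A sketch, assuming the `WORKDIR /usr/src/app` from the Dockerfile above; the extra anonymous volume prevents the mount from hiding the `node_modules` installed in the image:
docker run -d -p 8080:3000 \
  -v "$(pwd)":/usr/src/app \
  -v /usr/src/app/node_modules \
  your-app-name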
Deploying to the Cloud
For production applications that require high availability and scalability, cloud platforms offer managed services to run Docker containers. These services handle the underlying infrastructure, orchestration, and scaling for you. Here are some popular options.
Amazon Web Services (AWS)
- ECS (Elastic Container Service): AWS's proprietary, simplified container orchestrator.
- EKS (Elastic Kubernetes Service): A managed Kubernetes service for complex, large-scale deployments.
- App Runner: The simplest way to deploy from a container image, with auto-scaling built-in.
Google Cloud Platform (GCP)
- Cloud Run: A fully serverless platform to run containers that scales to zero.
- GKE (Google Kubernetes Engine): A robust, managed Kubernetes service.
- Compute Engine: You can provision a virtual machine and run Docker on it manually.
Microsoft Azure
- ACI (Azure Container Instances): A serverless way to run single containers without orchestration.
- AKS (Azure Kubernetes Service): Azure's managed Kubernetes offering.
- App Service: A PaaS offering that can run containers with built-in CI/CD pipelines.
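Whichever platform you choose, the usual first step is pushing your image to a registry the platform can pull from. A minimal sketch using Docker Hub, where `your-dockerhub-user` is a placeholder for your own account:
docker login
docker tag your-app-name your-dockerhub-user/your-app-name:1.0
docker push your-dockerhub-user/your-app-name:1.0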
Common Deployment Issues & Fixes
Deployments don't always go smoothly. This section covers some of the most common issues you might encounter and how to diagnose and fix them. The key is often to check the container logs for error messages.
Issue: Container Exits Immediately After Starting
Cause: The main process inside the container finished or crashed. A container only runs as long as its main command (`CMD` or `ENTRYPOINT`) is running.
Fix: Check the logs with `docker logs [CONTAINER_NAME]`. This will usually show an application error. Ensure your `CMD` is a long-running process, not a command that exits quickly.
Issue: Image Not Found
Cause: Docker cannot find the image you specified, either locally or in a remote registry.
Fix: Check for typos in the image name. Run `docker images` to see your local images. If it's a remote image, ensure you are logged in (`docker login`) and have the correct image name and tag.
Issue: Port Is Already Allocated
Cause: The host port you are trying to map is already in use by another application, or you are trying to use a privileged port (< 1024) without sufficient permissions.
Fix: Stop the other process using the port, or choose a different host port (e.g., `-p 8081:3000`). If using a privileged port, you may need to run the command with `sudo` (though this is not always recommended).
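On most Linux and macOS systems you can identify the process holding a port with `lsof` (the port number here is just an example):
sudo lsof -i :8080   # shows the process currently bound to port 8080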
Monitoring & Managing Containers
Once your containers are deployed, you need to be able to manage their lifecycle and monitor their health. The Docker CLI provides several commands for this purpose. Here are the most essential ones to know.
List Running Containers
Use `docker ps` to see all containers that are currently running.
docker ps -a # Use -a to see all containers, including stopped ones
View Container Logs
The `docker logs` command is crucial for debugging. It shows the standard output of the main process running inside the container.
docker logs [CONTAINER_NAME_OR_ID] -f # Use -f to follow the log output in real-time
Check Resource Usage
The `docker stats` command provides a live stream of resource usage statistics (CPU, memory, network I/O) for your running containers.
docker stats
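By default `docker stats` streams updates continuously; add `--no-stream` to take a single snapshot, which is useful in scripts:
docker stats --no-stream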
Stopping and Removing Containers
To stop a running container, use `docker stop`. To remove it, use `docker rm`.
docker stop [CONTAINER_NAME_OR_ID]
docker rm [CONTAINER_NAME_OR_ID]
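A stopped container must be removed before its name can be reused. To stop and remove in one step, force-remove it:
docker rm -f my-running-app   # sends SIGKILL, then removes the container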