Deployment with Docker
Imagine you just finished a web application with Python 3.10 and PostgreSQL 14. Everything runs smoothly on your laptop. But once moved to the production server... boom! Errors appear everywhere.
It turns out the server still uses Python 3.6 and PostgreSQL 10. You're stuck tweaking configurations just to make the server environment match your local setup. And the classic complaint emerges:
"But it works on my laptop!"
This isn't a new problem. Environment inconsistencies like this have become daily drama for developers. Not to mention when one server is forced to run various applications with different versions and requirements. Dependency conflicts become unavoidable.
And this is where Docker emerges as a solution.
1. Introduction to Containerization and Docker
What is Containerization?
Containerization is a method for packaging applications along with all their dependencies into a single unit called a container. Unlike Virtual Machines (VMs) that require separate operating systems for each instance, containers simply share the kernel from the host OS, making them lighter, faster, and more efficient.
- Container:
  - Focuses on application isolation, not the OS
  - Lightweight (measured in MB), fast to start, and resource-efficient
  - Suitable for running many applications side by side without conflicts
- Virtual Machine (VM):
  - Runs a complete OS on top of the host through a hypervisor
  - Large (measured in GB), slow to start, and resource-hungry because each VM carries its own OS
What is Docker?
Docker is a popular platform for creating, running, and managing containers. With Docker, you can ensure that applications working locally will run the same way on any server without needing to reconfigure the environment from scratch.
The workflow in Docker follows this sequence: Dockerfile → Image → Container.
- Dockerfile: A text file containing instructions for building an image, like a recipe or source code.
- Docker Image: The result of a Dockerfile. Read-only in nature, it contains everything the application needs. Think of it as a blueprint.
- Docker Container: The runtime version of an image. This is where the application actually runs. One image can be used to create many containers.
2. Understanding Docker Architecture
Docker has a modular and efficient architecture, enabling isolated and consistent container management. Here are its main components:
The diagram above illustrates Docker's basic architecture and the workflow
between main components: Client, Docker Host, and Docker Registry.
- Client is the user interface that runs commands like docker build, docker pull, and docker run
- These commands are sent to the Docker Daemon running inside the Docker Host, which is the machine where Docker is executed
- Inside the Docker Host, there's management of images and containers. Docker Daemon also interacts with Docker Registry to pull or push images
Based on the architecture above, let's study Docker components in more detail, including additional components that support the Docker ecosystem:
Docker Engine
Docker Engine is the core of the Docker platform running inside the Docker Host (physical or virtual machine). Its main components include:
- Docker Daemon (dockerd): Background process that manages images, containers, networks, and volumes
- Docker Client (docker CLI): Command-line interface for users, e.g. docker run, docker build
- REST API: Connector between the CLI and the daemon for executing commands
Docker Image
Docker Image is a read-only blueprint containing all components needed to run an application, including code, dependencies, and instructions from Dockerfile.
- Created using the docker build command
- Stored locally or in registries like Docker Hub
Docker Container
Docker Container is a runtime instance of an image. Containers run applications lightly and in isolation, while still sharing the kernel with the host system.
- Similar to applications run from an installer
- Can be created, run, stopped, and deleted flexibly
Docker Registry
Docker Registry is a service for storing and distributing Docker images.
- Can be public (Docker Hub) or private (GitLab, ECR, etc.)
- Used with the docker pull and docker push commands
Docker Networking
Docker Networking manages communication between containers and with external networks.
- Bridge (default): Internal network within one host between containers
- Host: Container directly uses host network, without isolation
- Overlay: Connects containers across different hosts (especially in Docker Swarm)
Docker Volume
Docker Volume is used to store data persistently, even when containers are deleted.
- Managed directly by Docker in special host directories
- Safer and more stable than bind mounts, ideal for production environments
Docker Compose
Docker Compose is a tool for organizing and running multi-container applications easily.
- Uses a docker-compose.yml file to define services, networks, volumes, and other dependencies declaratively
3. Getting Started with Docker
Docker Installation
Docker can be installed on almost all major operating systems:
- Windows & macOS: Use Docker Desktop, which provides both a GUI and a CLI
- Linux: Install Docker Engine from the official repositories for your distribution
Managing Docker Images and Docker Containers
Here are basic commands for managing Docker images and Docker Containers.
- Docker Image

```bash
# View image list
docker image ls

# Download an image from a registry
docker image pull <image_name>:<tag>

# Delete an image
docker rmi <image>
```
- Docker Container

```bash
# Create a container
docker container create --name <container_name> <image_name>:<tag>

# Run a container
docker container start <container_name>

# Stop a container
docker container stop <container_name>

# Delete a container (make sure it is stopped first)
docker container rm <container_name>

# View container list
docker container ls        # running only
docker container ls -a     # all (including stopped)

# View logs
docker container logs <container_name>
docker container logs -f <container_name>   # follow in real time

# Enter a container (exec)
docker container exec -it <container_name> bash

# Port forwarding (publish a container port on the host)
docker container create --name web --publish <host_port>:<container_port> <image_name>:<tag>

# Environment variables
docker container create --name <container_name> \
  --publish <host_port>:<container_port> \
  --env <VAR1>=<value1> \
  --env <VAR2>=<value2> \
  <image_name>:<tag>

# View resource statistics
docker container stats

# Limit resources
docker container create --name <container_name> \
  --memory <memory_amount> \
  --cpus <cpu_amount> \
  <image_name>:<tag>
```
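Values passed with --env surface inside the container as ordinary environment variables. A minimal Python sketch of how an application might read them (DB_HOST and DB_PORT are hypothetical names chosen for illustration, with fallbacks so it also runs outside Docker):

```python
import os

# --env DB_HOST=db --env DB_PORT=5432 would surface here;
# DB_HOST / DB_PORT are illustrative names, not required by Docker.
db_host = os.environ.get("DB_HOST", "localhost")   # fallback for local runs
db_port = int(os.environ.get("DB_PORT", "5432"))

print(f"Connecting to {db_host}:{db_port}")
```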
- Inspection and Cleanup (Inspect & Prune)

```bash
# View object details in JSON format
docker inspect <object_name>

# Remove stopped containers
docker container prune

# Remove unused images
docker image prune

# Remove all unused resources
docker system prune
```
Dockerfile and Building Images
Dockerfile is a script containing sequential instructions for building a Docker image. With Dockerfile, we can automatically and consistently package applications along with all their dependencies into a container.
- Basic Dockerfile Template

```dockerfile
# Use base image
FROM python:3.12

# Set working directory
WORKDIR /app

# Copy & install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the rest of the application files
COPY . .

# Expose port 8000
EXPOSE 8000

# Default command when the container runs
CMD ["python", "app.py"]
```
- Important Dockerfile Instructions

| Instruction | Function |
| --- | --- |
| FROM | Specify the base image |
| WORKDIR | Set the working directory in the container |
| COPY | Copy files from the host into the image |
| RUN | Execute a command during the build |
| EXPOSE | Document the application port |
| CMD | Default command when the container runs |
- Build & Run

```bash
# Build an image from the Dockerfile
docker build -t <image_name> .

# Run a container from the image
docker run -d --name <container_name> -p <host_port>:<container_port> <image_name>

# View container logs
docker logs <container_name>
```
Build Optimization with .dockerignore
The .dockerignore file is used to exclude unnecessary files when building
images, thereby improving security, reducing image size, and speeding up build
time.
- How to Use
  - Create a file named .dockerignore in the project root directory (the same location as the Dockerfile)
  - Fill the file with the names of files/folders you want to ignore. The syntax is similar to .gitignore, using # for comments and ! for exceptions
- Example .dockerignore content

```plaintext
# Git & dependencies
.git
node_modules/
__pycache__/

# Log files and environment
*.log
.env

# Operating system & IDE files
.DS_Store
.vscode/

# Docker files themselves
Dockerfile
.dockerignore
docker-compose.yml
```
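To make the rule semantics concrete, here is a toy matcher for .dockerignore-style patterns, written as a sketch with Python's fnmatch. It is an illustrative simplification, not Docker's exact matching algorithm:

```python
import fnmatch

# '#' starts a comment, '!' re-includes a previously ignored path,
# and the last matching rule wins. Simplified for illustration only.

def parse(lines):
    rules = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                          # skip blanks and comments
        if line.startswith("!"):
            rules.append((line[1:], False))   # exception: keep the file
        else:
            rules.append((line, True))        # ignore the file
    return rules

def is_ignored(path, rules):
    ignored = False
    for pattern, ignore in rules:
        if fnmatch.fnmatch(path, pattern):
            ignored = ignore                  # later rules override earlier ones
    return ignored

rules = parse(["# logs", "*.log", "!keep.log", ".env"])
print(is_ignored("debug.log", rules))  # True  (ignored)
print(is_ignored("keep.log", rules))   # False (re-included by !keep.log)
```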
Docker Networking Management
Docker Networking allows containers to communicate with each other and connect to external networks. By default, containers are isolated and can only communicate if they're on the same network.
- Types of Networks in Docker

1. Bridge (Default): Docker's default virtual network for communication between containers on the same network.

```bash
# Create a bridge network
docker network create --driver bridge <network_name>

# Run containers connected directly to the network
docker run -d --name <container1> --network <network_name> <image1>
docker run -d --name <container2> --network <network_name> <image2>
```

2. Host: The container shares the network stack directly with the host (Linux only).

```bash
# Use the host network (Linux)
docker run --rm --network host <image_name>
```

3. None: The container has no network access at all.

```bash
# Run with no network
docker run --rm --network none <image_name>
```
- Basic Docker Networking Commands

```bash
# View network list
docker network ls

# Create a network
docker network create <network_name>

# Delete a network
docker network rm <network_name>
```
- Container Management in a Network

```bash
# Connect a container at creation time
docker container create --name <container_name> \
  --network <network_name> \
  <image_name>:<tag>

# Connect an existing container
docker network connect <network_name> <container_name>

# Disconnect a container from a network
docker network disconnect <network_name> <container_name>
```
- Inter-Container Communication (containers on the same network can reach each other by container name)

```bash
# Container A can access Container B at:
# http://container-b:port
```
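Docker's embedded DNS is what makes http://container-b:port work. The round trip can be sketched with only the Python standard library; here a local test server stands in for container B so the sketch runs anywhere, while inside container A you would replace "localhost" with the peer's container name:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for "container B": a tiny HTTP server on a free local port.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from container B")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("localhost", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Container A" side: on a shared Docker network this host would be
# the container name (e.g. "container-b") resolved by Docker's DNS.
host, port = "localhost", server.server_address[1]
with urllib.request.urlopen(f"http://{host}:{port}/") as resp:
    body = resp.read().decode()

print(body)  # hello from container B
server.shutdown()
```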
Docker Volume and Persistent Storage Management
Data inside containers is temporary. If a container is deleted, the data is also lost. To store data permanently, use:
Bind Mount
Bind mount is a method for sharing files or folders from the host system into a container. Data is managed directly by the host file system, with file or folder locations specified explicitly. This method is useful when you want to access or update files from outside the container.
- Common Bind Mount Commands:

```bash
# Use a bind mount in a container
docker run --rm \
  --mount type=bind,source=<host_path>,destination=<container_path> \
  <image_name>
```
Docker Volume (Recommended)
Docker volume is a storage area managed directly by Docker, which is safer, more portable, and independent of the host file structure. Volumes provide consistency across environments, support easy backup and restore, and can be managed through Docker CLI commands.
- Common Docker Volume Commands:

```bash
# Create a volume
docker volume create <volume_name>

# Use the volume in a container (source is the volume name, not a host path)
docker run --rm \
  --mount type=volume,source=<volume_name>,destination=<container_path> \
  <image_name>

# View volume list
docker volume ls

# Delete a volume
docker volume rm <volume_name>
```
Volume Backup & Restore
Volume backup and restore is done by running a temporary container that mounts both the target volume and a local folder serving as the backup or restore location. A backup stores a copy of the volume's data for later use; a restore writes data back into a volume from a previously created backup.
- Volume Backup & Restore Commands:

```bash
# Back up a volume
docker container run --rm --name <backup_container_name> \
  --mount type=bind,source=<host_backup_path>,destination=/backup \
  --mount type=volume,source=<volume_name>,destination=/data \
  <image_name> tar cvf /backup/<backup_file_name>.tar.gz /data

# Restore a volume
docker container run --rm --name <restore_container_name> \
  --mount type=bind,source=<host_backup_path>,destination=/backup \
  --mount type=volume,source=<restore_volume_name>,destination=/data \
  <image_name> tar xvf /backup/<backup_file_name>.tar.gz -C /data --strip-components 1
```
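The same archive-then-extract pattern can be sketched in plain Python with the standard tarfile module. The temp directories here are hypothetical stand-ins for the container's /data (volume) and /backup (bind mount):

```python
import tarfile
import tempfile
from pathlib import Path

# Stand-ins for the volume (/data) and the bind-mounted backup folder (/backup).
data_dir = Path(tempfile.mkdtemp())
backup_dir = Path(tempfile.mkdtemp())
restore_dir = Path(tempfile.mkdtemp())

# Pretend the volume holds some application data.
(data_dir / "app.db").write_text("important records")

# Backup: archive everything under the "volume" directory.
archive = backup_dir / "backup.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(data_dir, arcname="data")   # like: tar cvf /backup/backup.tar.gz /data

# Restore: extract into a fresh "volume", stripping the leading folder.
with tarfile.open(archive, "r:gz") as tar:
    for member in tar.getmembers():
        member.name = member.name.removeprefix("data/")  # like: --strip-components 1
        if member.name and member.name != "data":
            tar.extract(member, restore_dir)

print((restore_dir / "app.db").read_text())  # important records
```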
Docker Compose Management
Docker Compose is a tool for defining and running multi-container Docker applications using a docker-compose.yml configuration file. It is very useful for managing interdependent services, such as a web application and its database.
- docker-compose.yml Template

```yaml
# Compose version
version: '3.8'

# Application services
services:
  # Web app
  web:
    build: .
    ports:
      - '8000:8000'
    depends_on:
      - database
    restart: always

  # Database
  database:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: always

# Volume for data
volumes:
  db_data:
```
- Common Docker Compose Commands

```bash
# Run all services (in the background)
docker compose up -d

# Stop and remove all containers
docker compose down

# View container status
docker compose ps

# Rebuild images before running
docker compose up --build
```
4. Hands-On Practices with Docker
Here's a complete and sequential tutorial for learning Docker from basics to deploying to Docker Hub with GitLab CI/CD integration. We'll create a simple web application using Python Flask.
Prerequisites
- Docker installed on your system
- Docker Hub account
- GitLab account
- Git installed
Step 1: Creating Basic Dockerfile and Image
In this first step, we'll learn basic Docker concepts by creating a simple Flask application and packaging it into a Docker container. This is the foundation for understanding how Docker works in packaging and running applications.
1. Create the project directory:

```bash
mkdir docker-tutorial
cd docker-tutorial
```

2. Create the app.py file (a simple Flask application):
```python
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

3. Create the requirements.txt file:
```plaintext
flask==2.3.3
```

4. Create the Dockerfile:
```dockerfile
# Use the official Python image as the base image
FROM python:3.12

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file first to take advantage of layer caching
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose the port used by Flask
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]
```

5. Create a templates/ folder and the templates/index.html file:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Hello Docker</title>
    <style>
      body {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        background: linear-gradient(to right, #00b4db, #0083b0);
        color: #fff;
        text-align: center;
        padding-top: 100px;
        margin: 0;
      }
      h1 {
        font-size: 48px;
        margin-bottom: 20px;
      }
      p {
        font-size: 20px;
        opacity: 0.9;
      }
      .card {
        background: rgba(255, 255, 255, 0.1);
        border-radius: 15px;
        padding: 30px;
        max-width: 500px;
        margin: auto;
        box-shadow: 0 10px 25px rgba(0, 0, 0, 0.2);
      }
    </style>
  </head>
  <body>
    <div class="card">
      <h1>Hello World!</h1>
      <p>Getting Started with Your App using Docker!</p>
    </div>
  </body>
</html>
```

6. Build the Docker image:
```bash
docker build -t my-flask-app .
```

7. Run the container:
```bash
docker run -d -p 5000:5000 --name flask-container my-flask-app
```

8. Open a browser and go to http://localhost:5000/ to view the application.
9. Stop and remove the container:
```bash
docker stop flask-container
docker rm flask-container
```

Step 2: Using Docker Volume
Docker Volume is a feature that allows us to store data persistently outside containers. In this step, we'll try creating a simple volume and using it when running containers.
1. Create a Docker volume:

```bash
docker volume create flask-data
```

2. Run the container with the volume mounted:

```bash
docker run -d -p 5000:5000 --name flask-container -v flask-data:/app/data my-flask-app
```

3. Verify the volume:

```bash
docker volume inspect flask-data
```

4. Expected output if successful:
```json
[
    {
        "CreatedAt": "2025-07-27T12:57:02+07:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/flask-data/_data",
        "Name": "flask-data",
        "Options": null,
        "Scope": "local"
    }
]
```

5. Stop and remove the container:

```bash
docker stop flask-container
docker rm flask-container
```
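To see the volume doing its job, the Flask app could persist a visit counter under /app/data, the path where flask-data is mounted above. A minimal sketch, with the data directory made configurable (via a hypothetical DATA_DIR variable) so it also runs outside a container:

```python
import os
from pathlib import Path

# /app/data is where the flask-data volume is mounted inside the container;
# fall back to a local "data" folder when running without Docker.
DATA_DIR = Path(os.environ.get("DATA_DIR", "data"))
DATA_DIR.mkdir(parents=True, exist_ok=True)
COUNTER_FILE = DATA_DIR / "visits.txt"

def record_visit() -> int:
    """Increment and persist a visit counter. The count survives container
    restarts because the file lives on the volume, not in the container layer."""
    count = int(COUNTER_FILE.read_text()) if COUNTER_FILE.exists() else 0
    count += 1
    COUNTER_FILE.write_text(str(count))
    return count
```

Deleting and recreating the container leaves the counter intact, because the file lives in the volume, which Docker manages independently of any container.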
Step 3: Using Docker Network
Docker Network allows containers to communicate with each other safely and in isolation. In this step, we'll try creating a simple custom network and running containers within it.
1. Create a Docker network:

```bash
docker network create flask-network
```

2. Run the container on the new network:

```bash
docker run -d -p 5000:5000 --name flask-container --network flask-network my-flask-app
```

3. Verify the network:

```bash
docker network inspect flask-network
```

4. Expected output if successful:
```json
[
    {
        "Name": "flask-network",
        "Id": "ce89c2ab75c528d162f651a135beabd4dbd43c8a7d9344ac8d702ba55d4873fb",
        "Created": "2025-07-27T12:57:56.113078894+07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9d1028daf54597527bc6b72a17129d0bfd03173566e398b87dad3abb69a01250": {
                "Name": "flask-container",
                "EndpointID": "6dfca38bba5f7e359da06c6f8190a209fec96d5e6fd5c3750a0125dc72c3eaef",
                "MacAddress": "be:2d:bf:9a:74:0e",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```

5. Stop and remove the container:

```bash
docker stop flask-container
docker rm flask-container
```
Step 4: Using Docker Compose
After successfully running applications using manual commands (build image, run container, connect volume and network), it's time to simplify everything using Docker Compose.
Docker Compose allows us to define the entire application configuration (services, volumes, networks) in one YAML file. This greatly helps with team collaboration and deployment automation.
1. Create docker-compose.yml file:
```yaml
version: '3.8' # (optional) Ignored in Compose V2; can be removed

services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - flask-data:/app/data # Volume name
    networks:
      - flask-net # Network name

volumes:
  flask-data: # Volume definition

networks:
  flask-net: # Network definition
    driver: bridge # Uses the bridge driver
```

2. Run the application with Docker Compose:
```bash
docker-compose up -d
```

3. To stop it:
```bash
docker-compose down
```

Step 5: Publish Image to Docker Hub
Docker Hub is a public registry for storing and sharing Docker images. In this step, we'll publish the image we created to Docker Hub so it can be accessed and used by others from anywhere. This is very useful for application distribution and team collaboration.
1. Log in to Docker Hub from the terminal:

```bash
docker login
```

2. Tag the image in username/repository:tag format:

```bash
docker tag my-flask-app <username>/my-flask-app:latest
```

3. Push the image to Docker Hub:

```bash
docker push <username>/my-flask-app:latest
```

4. Open https://hub.docker.com/repositories/ to confirm the image was published and appears in your account's repository list.
Step 6: Test Pull Image from Docker Hub and Run on Server (Optional)
In this step, we'll test the process of pulling images from Docker Hub and running them in different environments (can be on server or local). This demonstrates one of Docker's advantages: portability - the same image can run anywhere consistently.
Note: This step does not have to be run on a server; it can also be done locally. Just make sure there are no containers or images with the same name.
If they already exist and are still active:
```bash
docker stop flask-app                       # Stop the container
docker rm flask-app                         # Remove the container
docker rmi <username>/my-flask-app:latest   # (optional) Remove the old image
```

1. (If on a server) Log in to the server:
```bash
ssh <user>@<server_ip>
```

2. Pull the image from Docker Hub:
```bash
docker pull <username>/my-flask-app:latest
```

Replace <username> with the Docker Hub account name you pushed to earlier.
3. Run a container from that image:

```bash
docker run -d --name <container_name> -p 5000:5000 <username>/my-flask-app:latest
```

The -d option runs the container in the background; -p maps a host port to a container port.
4. Access application in browser:
Open http://localhost:5000 (local) or http://<server-ip>:5000 (server).
If everything worked, the application is now running successfully on the server.
Step 7: Automate Build & Push with GitLab
After successfully building images locally, uploading them to Docker Hub, and pulling images on the server, the next step is to automate the build and push image process using GitLab CI/CD. With GitLab pipeline, this process will run automatically every time you push to the repository.
GitLab CI allows us to run Docker-based CI/CD processes through the
.gitlab-ci.yml file. For the Continuous Delivery stage, simply do docker pull
on the server and restart the container so changes can be applied.
1. Create .gitlab-ci.yml file in project root:
This file will instruct GitLab CI/CD to build and push Docker images.
```yaml
# Use the Docker image & enable Docker-in-Docker
image: docker:latest
services:
  - docker:dind

# Variables for Docker configuration
variables:
  DOCKER_DRIVER: overlay2
  IMAGE_NAME: $CI_REGISTRY_USER/my-flask-app

before_script:
  # Log in to Docker Hub using the variables set in GitLab CI/CD settings
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin

build:
  stage: build
  script:
    # Build & push the Docker image
    # Uses the tag latest_v2 because index.html will be updated (previously latest)
    - docker build -t $IMAGE_NAME:latest_v2 .
    - docker push $IMAGE_NAME:latest_v2
```

2. Create a new project in GitLab via New Project → Create blank project:
After the project is created, initialize Git in local folder and connect to
GitLab.
```bash
git init
git remote add origin https://gitlab.com/username/your-repo.git
```

3. Set environment variables in GitLab:
- Go to your GitLab project → Settings → CI/CD → click Expand on the Variables section
- Add two variables:
  - CI_REGISTRY_USER → your Docker Hub username
  - CI_REGISTRY_PASSWORD → your Docker Hub password/token
- For each variable, set the options as follows:
  - Mask variable (check if you don't want the value to appear in logs)
  - Protect variable (leave unchecked if you want it usable on all branches)
4. Change index.html content to see changes when pulling image:
Edit index.html file (or other HTML files) so the changes are visible when
pulled from server.
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Hello Docker</title>
    <style>
      body {
        font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
        background: linear-gradient(to right, #00b4db, #0083b0);
        color: #fff;
        text-align: center;
        padding: 50px 20px;
        margin: 0;
      }
      h1 {
        font-size: 48px;
        margin-bottom: 20px;
      }
      p {
        font-size: 20px;
        opacity: 0.9;
        margin-bottom: 30px;
      }
      .card {
        background: rgba(255, 255, 255, 0.1);
        border-radius: 15px;
        padding: 30px;
        max-width: 500px;
        margin: 20px auto;
        box-shadow: 0 10px 25px rgba(0, 0, 0, 0.2);
        transition: transform 0.3s ease;
      }
      .card:hover {
        transform: translateY(-5px);
      }
      .card-container {
        display: flex;
        flex-wrap: wrap;
        justify-content: center;
        gap: 20px;
        max-width: 1100px;
        margin: 0 auto;
      }
      .card h2 {
        margin-top: 0;
        color: #fff;
      }
      .card p {
        margin-bottom: 0;
      }
    </style>
  </head>
  <body>
    <div class="card-container">
      <div class="card">
        <h1>Hello World!</h1>
        <p>Getting Started with Your App using Docker!</p>
      </div>
      <div class="card">
        <h2>Docker Features</h2>
        <p>Containerization made easy</p>
        <p>Lightweight and portable</p>
        <p>CI/CD integration ready</p>
      </div>
      <div class="card">
        <h2>Next Steps</h2>
        <p>Build your Docker image</p>
        <p>Push to container registry</p>
        <p>Deploy anywhere!</p>
      </div>
    </div>
  </body>
</html>
```

5. Push the code and run the pipeline:
```bash
git add .
git commit -m "Initial commit with Docker setup"
git push -u origin main
```

6. Verify the pipeline:
- Open the CI/CD → Pipelines menu in GitLab
- Click the latest pipeline, then open the job named build
- In the log (terminal output), make sure the following steps complete without errors:
  - docker login → successfully logged in to Docker Hub
  - docker build → image built successfully
  - docker push → image pushed to Docker Hub
- Finally, open your Docker Hub account and check if the image has appeared or been successfully updated
Step 8: Deploy Latest Changes with docker pull
After the GitLab pipeline successfully builds and pushes the new image, simply
do docker pull and restart the container to apply updates.
The steps are similar to Step 6; just use the new latest_v2 tag:
```bash
# (Optional) Stop and remove the old container
docker stop flask-app
docker rm flask-app

# Pull the latest image version
docker pull <username>/my-flask-app:latest_v2

# Restart the container
docker run -d --name flask-app -p 5000:5000 <username>/my-flask-app:latest_v2
```

The application will now display the latest version, matching the last index.html update in GitLab.
Final Project Structure
```plaintext
docker-tutorial/
├── app.py
├── Dockerfile
├── docker-compose.yml
├── .gitlab-ci.yml
├── requirements.txt
└── templates/
    └── index.html
```

References
- [FREE] Deployment with Docker | Online Course | Indonesia by Ruby Abdullah
- Tutorial Docker Dasar (Bahasa Indonesia) by Programmer Zaman Now
Thank you for reading!
Keep Learning and Keep Growing.