
This guide assumes you are deploying a web application to a Linux VPS from scratch, using Docker Swarm for orchestration, Nginx as a reverse proxy, and GitHub Actions to automate every deployment.
If you have ever pushed code and then manually SSH'd into your server to restart things, this guide is for you. By the end, you will have a pipeline that builds, versions, deploys, and cleans up your app automatically — with zero downtime — every time you push to main.
The stack: GitHub Actions, Docker Swarm, Nginx, and a VPS (any provider: DigitalOcean, Hetzner, Linode, etc.).
Prerequisites:

- A Linux VPS with Docker installed (run `docker --version` to verify)
- Docker Swarm initialized on the VPS (`docker swarm init`, covered in the next section)

By the end of this guide, every `git push` to `main` will trigger a GitHub Actions workflow that SSHs into your VPS, pulls the latest code, builds a Docker image tagged with the commit SHA, performs a rolling update across 3 Swarm replicas, and deletes old images to keep disk usage under control. Nginx sits in front of your app as the public-facing reverse proxy.
Docker Swarm turns your single VPS into an orchestration node that can manage multiple container replicas. You must initialize it before deploying a stack.
SSH into your VPS and run:
```shell
$ docker swarm init --advertise-addr <YOUR_VPS_PUBLIC_IP>
```
Replace `<YOUR_VPS_PUBLIC_IP>` with the actual IP address of your server.
You will see output like:
```
Swarm initialized: current node (abc123...) is now a manager.
```
Note: If your VPS has multiple network interfaces (common on cloud providers), Docker may prompt you to specify `--advertise-addr` explicitly. Use your public-facing IP, not a private one like `10.x.x.x`.
## docker-compose.yml for Your Swarm Stack

This file defines your services, replica count, update behavior, health checks, and networking. Save it as `docker-compose.yml` in the root of your project.
```yaml
version: '3.8'

services:
  web-app:
    build: .  # Note: 'docker stack deploy' ignores 'build' — the image is built separately (see the CI workflow)
    # The stack always runs the most recent build tagged as 'latest'
    image: my-app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1      # Update one replica at a time
        order: start-first  # Start the new container before stopping the old one
        delay: 10s          # Wait 10 seconds between each replica update
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 20s  # Give the app 20 seconds to boot before health checks begin
    networks:
      - app-net

  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro  # Mount Nginx config as read-only
    networks:
      - app-net

networks:
  app-net:
    driver: overlay  # Overlay network is required for Swarm services
```
Three things to understand here:
- `order: start-first` means the new container must pass its health check before the old one is killed. This is what gives you zero-downtime deployments.
- `start_period: 20s` prevents Swarm from marking a slow-booting app as unhealthy on startup.
- The `overlay` network driver is mandatory for Docker Swarm — it allows containers on different nodes to communicate.

Tip: Replace `http://localhost:3000/api/health` with a real health endpoint in your app. It should return HTTP 200 when the service is ready.
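Swarm decides health purely from the exit code of the healthcheck command. You can see the behavior of `curl -f` locally with a sketch like the following (this assumes `curl` is installed; port 1 on localhost is used only because nothing listens there):

```shell
# curl -f exits non-zero when the request fails or the server returns an
# HTTP error status; that exit code is exactly what Swarm's healthcheck reads.
rc=0
curl -sf --max-time 2 http://127.0.0.1:1/ > /dev/null 2>&1 || rc=$?
if [ "$rc" -eq 0 ]; then
  echo "healthy"
else
  echo "unhealthy (curl exit code $rc)"
fi
```

If your real endpoint returns HTTP 500 while the app is booting, `curl -f` fails the same way, which is what keeps traffic away from that replica.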
Create a folder called nginx/conf.d/ at the root of your project, next to package.json. Then create the file default.conf inside it. Your project structure should look like this:
```
my-nextjs-app/
├── app/              ← Next.js app directory
├── public/
├── node_modules/
├── package.json
├── Dockerfile
├── docker-compose.yml
├── .github/
│   └── workflows/
│       └── deploy.yml
└── nginx/            ← create this folder
    └── conf.d/       ← and this one
        └── default.conf  ← and this file
```
You are not modifying any system Nginx installation. This folder is mounted into the Nginx container at runtime by Docker — the system Nginx (if installed via apt) is completely separate and irrelevant to this setup.
Note: If system Nginx is already running and listening on port 80, the Docker container will fail to start with `bind: address already in use`. Stop it first: `sudo systemctl stop nginx && sudo systemctl disable nginx`.
Add the following to nginx/conf.d/default.conf:
```nginx
upstream nextjs_app {
    server web-app:3000;  # Swarm's internal DNS resolves 'web-app' and load-balances across healthy replicas
}

server {
    listen 80;

    # Matches requests for example.com and www.example.com.
    # Before SSL is set up, you can temporarily use _ here to match any hostname.
    server_name example.com www.example.com;

    location / {
        proxy_pass http://nextjs_app;
        proxy_http_version 1.1;

        # Required for Next.js WebSocket support (hot reload, real-time features)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
A few things specific to Next.js here:
- `web-app:3000` — Next.js runs on port 3000 by default. Docker Swarm's internal DNS resolves the service name `web-app` to a virtual IP that load-balances across the healthy replicas, so Nginx never talks to a booting or crashed container.
- The `Upgrade` and `Connection` headers are required for Next.js because it uses WebSockets for hot reloading and certain real-time features.

Your GitHub Actions workflow needs to SSH into your VPS. Never hardcode credentials in your workflow file.
Go to your GitHub repository, then Settings > Secrets and variables > Actions > New repository secret, and add:

| Secret Name | Value |
|---|---|
| `VPS_IP` | Your VPS public IP address |
| `VPS_USER` | Your SSH username (e.g., `root` or `ubuntu`) |
| `SSH_PRIVATE_KEY` | The full content of your private key file (`~/.ssh/id_rsa`) |
Create .github/workflows/deploy.yml in your repository:
```yaml
name: Production Deploy

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_IP }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/my-project
            git pull origin main

            # Build with two tags: 'latest' (for Swarm) and the commit SHA (for version tracking)
            docker build -t my-app:latest -t my-app:${{ github.sha }} .

            # Perform a rolling update using the Swarm stack
            docker stack deploy -c docker-compose.yml my_stack

            # Keep only the 3 most recent images of 'my-app' — delete the rest
            OLD_IMAGES=$(docker images "my-app" --format "{{.ID}}" | sed '1,3d')
            if [ -n "$OLD_IMAGES" ]; then
              docker rmi $OLD_IMAGES
            fi

            # Remove unused networks and build cache
            docker system prune -f
```
Walk through what this does on each push:
- `git pull origin main` — syncs the latest code to the VPS.
- `docker build -t my-app:latest -t my-app:${{ github.sha }}` — builds the image once and applies two tags: `latest` (used by Swarm) and the 40-character commit SHA. The SHA tag is your permanent version snapshot, useful for rollbacks.
- `docker stack deploy` — triggers the rolling update defined in `docker-compose.yml`.
- `docker images` lists all `my-app` image IDs by recency. `sed '1,3d'` skips the first 3 (the most recent) and pipes the rest to `docker rmi`. This keeps disk usage bounded.
- `docker system prune -f` — removes dangling build cache, stopped containers, and unused networks.

Tip: `${{ github.sha }}` is a built-in GitHub Actions variable. You do not need to define it — it is automatically set to the full commit hash of the triggering push.
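The retention logic is plain text filtering, so you can sanity-check it without Docker. Here the image list is simulated with `printf` (the IDs are made up); `docker images` prints the newest image first, so `sed '1,3d'` drops the three newest lines and emits only the deletion candidates:

```shell
# Simulated output of: docker images "my-app" --format "{{.ID}}"
# (Docker lists the newest image first; these IDs are invented.)
printf 'e5f6a7b8\nd4e5f6a7\nc3d4e5f6\nb2c3d4e5\na1b2c3d4\n' | sed '1,3d'
# Prints only the two oldest IDs:
#   b2c3d4e5
#   a1b2c3d4
```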
Before GitHub Actions can run `docker stack deploy`, the project directory must exist on the VPS. Do this once manually:
```shell
$ ssh your-user@your-vps-ip
$ git clone https://github.com/your-username/your-repo.git ~/my-project
$ cd ~/my-project
$ docker build -t my-app:latest .
$ docker stack deploy -c docker-compose.yml my_stack
```
After this, every subsequent push to main will be handled automatically by the workflow.
Running on HTTP is fine for testing but unacceptable in production. This step installs Certbot on your VPS, issues a free SSL certificate from Let's Encrypt, and configures it to renew automatically.
Certbot runs on the VPS directly — not inside Docker. It writes certificate files to /etc/letsencrypt/ on the host, and you mount that directory into the Nginx container.
Install Certbot on the VPS:
```shell
$ sudo apt update
$ sudo apt install certbot -y
```
Temporarily stop the Nginx container so Certbot can use port 80 for domain verification:
```shell
$ docker service scale my_stack_nginx=0
```
Issue the certificate:
```shell
$ sudo certbot certonly --standalone -d example.com -d www.example.com
```
Certbot will verify you own the domain by temporarily serving a file on port 80, then write your certificate files to /etc/letsencrypt/live/example.com/.
Bring Nginx back up:
```shell
$ docker service scale my_stack_nginx=1
```
Update docker-compose.yml to mount the certificates and expose port 443:
```yaml
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"  # Expose HTTPS port
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro          # Mount certs from host (read-only)
      - /var/lib/letsencrypt:/var/lib/letsencrypt:ro  # Mount Certbot data
    networks:
      - app-net
```
Update nginx/conf.d/default.conf to handle both HTTP redirect and HTTPS:
```nginx
upstream nextjs_app {
    server web-app:3000;
}

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # Certificate paths written by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Recommended SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://nextjs_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
```
Redeploy the stack to apply the new config:
```shell
$ docker stack deploy -c docker-compose.yml my_stack
```
Set up auto-renewal with a cron job:
Let's Encrypt certificates expire every 90 days. Certbot can renew them automatically, but it needs port 80 free to do so. The trick is to scale Nginx down, renew, then scale it back up.
Open the crontab editor:
```shell
$ sudo crontab -e
```
Add this line at the bottom:
```
# Run at 3:00 AM on the 1st of every month
0 3 1 * * docker service scale my_stack_nginx=0 && certbot renew --quiet && docker service scale my_stack_nginx=1
```
This runs on the 1st of every month at 3 AM. Certbot only renews if the certificate is within 30 days of expiring, so monthly is frequent enough. Note that this one-liner scales Nginx down on every run, even in months when nothing needs renewing, so expect a few seconds of downtime each month. To avoid that, `certbot renew` also accepts `--pre-hook` and `--post-hook` flags whose commands run only when a renewal is actually attempted; you can move the two `docker service scale` commands into those hooks.
Tip: After setting up the cron job, do a dry run to confirm renewal works without actually issuing a new certificate:
```shell
$ sudo certbot renew --dry-run
```
Note: Make sure your domain's DNS A record points to your VPS IP before running Certbot. If the domain does not resolve to your server, the Let's Encrypt challenge will fail.
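If you want to check expiry yourself rather than trust the cron job blindly, `openssl x509 -checkend` tells you whether a certificate remains valid for a given number of seconds. A sketch, assuming Certbot wrote the certificate to the standard `example.com` live path:

```shell
# Exit 0 if the certificate is still valid for at least 30 more days
# (2592000 seconds); non-zero if it expires sooner.
sudo openssl x509 -checkend 2592000 -noout \
  -in /etc/letsencrypt/live/example.com/fullchain.pem \
  && echo "no renewal needed yet" \
  || echo "expires within 30 days"
```

`sudo` is needed because `/etc/letsencrypt/live/` is readable only by root.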
After pushing to main, check the Actions tab on GitHub. Click the latest workflow run and expand the "Deploy via SSH" step to see the live output.
On your VPS, verify the stack is running correctly:
```shell
# List all services in the stack
$ docker stack services my_stack

# Watch replica status in real time
$ docker service ps my_stack_web-app

# Check container logs
$ docker service logs my_stack_web-app --tail 50
```
Expected output from `docker stack services my_stack`:

```
ID          NAME              MODE        REPLICAS  IMAGE
abc123xyz   my_stack_web-app  replicated  3/3       my-app:latest
def456uvw   my_stack_nginx    replicated  1/1       nginx:latest
```
Here is what each column means, using the Nginx line as an example:
| Column | Value | Meaning |
|---|---|---|
| ID | `def456uvw` | Docker's internal short ID for this service |
| NAME | `my_stack_nginx` | Stack name + service name, auto-combined by Docker |
| MODE | `replicated` | Running a fixed number of copies (vs `global`, which runs one per node) |
| REPLICAS | `1/1` | 1 running out of 1 desired — Nginx is healthy |
| IMAGE | `nginx:latest` | The official Docker Hub image, not your system Nginx |
`3/3` under REPLICAS for `web-app` means all three instances are healthy and running. If you see `2/3` or `1/3`, check the logs for the failing replica.
Nginx intentionally runs as 1/1 — redundancy lives in your web-app replicas, not in Nginx itself.
Visit http://example.com in a browser. You should see your application served through Nginx. Once SSL is configured in Step 7, visiting http://example.com will automatically redirect to https://example.com.
Error: `docker stack deploy` fails with `network not found` or an overlay network error
Cause: Docker Swarm is not initialized, or the overlay network driver is unavailable.
Fix: Run `docker swarm init` on the VPS. Confirm with `docker info | grep Swarm`.
Error: Health check fails and replicas stay at `0/3`
Cause: The app is not responding at the URL specified in the `healthcheck.test` command, or `start_period` is too short for your app's boot time.
Fix: SSH into the VPS and run `curl -f http://localhost:3000/api/health` manually. If it fails, your health endpoint is broken or the app is not starting. Increase `start_period` if the app just needs more time to boot.
Error: `docker rmi` fails with `image is being used by a running container`
Cause: The cleanup script is trying to delete an image that is still in use (for example, the `latest` tag backing the running replicas).
Fix: This is usually safe to ignore: Docker prints an error for the in-use image but still removes the others, and the rest of the cleanup proceeds. If you want to guarantee a cleanup failure never fails the deploy step, append `|| true` to the `docker rmi` line in the workflow script.
Error: `git pull` on the VPS fails with `Permission denied` or `Host key verification failed`
Cause: The VPS does not have SSH access to your GitHub repository, or the host key was not accepted.
Fix: Generate an SSH key on the VPS (`ssh-keygen`) and add the public key to your GitHub account under Settings > SSH and GPG keys. Then run `ssh -T git@github.com` from the VPS to confirm access.
Error: GitHub Actions step hangs at "Deploy via SSH" and eventually times out
Cause: The SSH key secret is malformed (extra newlines or spaces) or the VPS firewall blocks port 22.
Fix: Re-copy the private key including the `-----BEGIN ... KEY-----` header and footer. Verify the VPS firewall allows inbound SSH: `ufw status` or `iptables -L`.
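A quick way to verify the pasted secret is intact: save its exact value to a scratch file and ask `ssh-keygen` to derive the public key from it. This sketch assumes OpenSSH is installed; `/tmp/deploy_key_check` is just a hypothetical scratch path:

```shell
# Save the exact secret value to /tmp/deploy_key_check first, then:
chmod 600 /tmp/deploy_key_check   # ssh-keygen refuses keys with loose permissions
ssh-keygen -y -f /tmp/deploy_key_check
# Prints the matching public key if the private key is well-formed;
# errors out if the copied secret lost characters or gained stray newlines.
```

Delete the scratch file afterwards so the private key does not linger on disk.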
You now have a fully automated CI/CD pipeline that deploys your Next.js app to Docker Swarm with zero downtime on every push to main. Each deployment is versioned by commit SHA, Nginx handles all incoming traffic over HTTPS, replicas roll out one at a time with health checks guarding the update, SSL certificates renew automatically, and old images are cleaned up to keep your VPS healthy.
From here, you might explore:
- Run `docker service update --image my-app:<previous-sha> my_stack_web-app` to instantly revert to any previous build.
- Add `resources.limits.memory: 512M` under the `deploy` section of `docker-compose.yml` to prevent a single container from consuming all available VPS memory.
- Add a `logging` block to each service in `docker-compose.yml` with `max-size: "10m"` and `max-file: "3"` to prevent logs from filling your disk.

| Term | What It Does |
|---|---|
| `github.sha` | Unique 40-character commit ID used to tag each build as a version snapshot |
| Replicas | Multiple running copies of your container for redundancy |
| Healthcheck | Prevents Swarm from routing traffic to a container that is still booting |
| `start-first` | Ensures the new replica is healthy before the old one is terminated |
| `sed '1,3d'` | Skips the first 3 lines (newest images) and passes the rest to `docker rmi` |
| `docker system prune -f` | Removes build cache, dangling images, and unused networks |
| `certbot certonly --standalone` | Issues an SSL certificate using port 80 for domain verification |
| `certbot renew` | Renews certificates that are within 30 days of expiry |
| `return 301 https://...` | Redirects all HTTP traffic permanently to HTTPS |
| `X-Forwarded-Proto` | Tells Next.js the original request was HTTPS, even though Nginx proxies it over HTTP internally |