| name | flask-docker-deployment |
| description | Set up Docker deployment for Flask applications with Gunicorn, automated versioning, and container registry publishing |
Flask Docker Deployment Pattern
This skill helps you containerize Flask applications using Docker with Gunicorn for production, automated version management, and seamless container registry publishing.
When to Use This Skill
Use this skill when:
- You have a Flask application ready to deploy
- You want production-grade containerization with Gunicorn
- You need automated version management for builds
- You're publishing to a container registry (Docker Hub, GHCR, ECR, etc.)
- You want a repeatable, idempotent deployment pipeline
What This Skill Creates
- Dockerfile - Production-ready container with security best practices (a multi-stage variant is covered in the Advanced section)
- build-publish.sh - Automated build script with version management
- VERSION file - Auto-incrementing version tracking (gitignored)
- .gitignore - Entry for VERSION file
- Optional .dockerignore - Exclude unnecessary files from build context
Prerequisites
Before using this skill, ensure:
- Flask application is working locally
- requirements.txt exists with all dependencies (including gunicorn, which the Dockerfile CMD relies on)
- Docker is installed and running
- You're authenticated to your container registry (if publishing)
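To verify these prerequisites quickly, the commands below are a minimal sketch; they assume requirements.txt sits in the project root and show GHCR as the example registry (adjust for yours):
# Confirm the Docker daemon is reachable
docker info > /dev/null && echo "Docker OK"
# Confirm gunicorn is listed as a dependency (required by the Dockerfile CMD)
grep -i gunicorn requirements.txt || echo "Add gunicorn to requirements.txt"
# Confirm you are logged in to your registry (GHCR shown as an example)
docker login ghcr.io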
Step 1: Gather Project Information
IMPORTANT: Before creating files, ask the user these questions:
"What is your Flask application entry point?"
- Format: {module_name}:{app_variable}
- Example: hyperopt_daemon:app or api_server:create_app()
"What port does your Flask app use?"
- Default: 5000
- Example: 5678, 8080, 3000
"What is your container registry URL?"
- Examples:
  - GitHub: ghcr.io/{org}/{project}
  - Docker Hub: docker.io/{user}/{project}
  - AWS ECR: {account}.dkr.ecr.{region}.amazonaws.com/{project}
"Do you have private Git dependencies?" (yes/no)
- If yes: Will need GitHub Personal Access Token (CR_PAT)
- If no: Can skip git installation step
"How many Gunicorn workers do you want?"
- Default: 4
- Recommendation: 2-4 × CPU cores (see the sketch after this list for deriving a value from the host's core count)
- Note: For background job workers, use 1
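If you want to derive the worker count from the host's CPU count instead of hard-coding it, here is a minimal shell sketch using the common 2 × cores + 1 heuristic (nproc is assumed to be available, i.e. a Linux host):
# Suggest a Gunicorn worker count from the number of CPU cores (2 x cores + 1)
CORES=$(nproc)
WORKERS=$((2 * CORES + 1))
echo "Suggested --workers value: $WORKERS"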
Step 2: Create Dockerfile
Create Dockerfile in the project root:
FROM python:3.11-slim
# Build argument for GitHub Personal Access Token (if needed for private deps);
# use ARG (not ENV) so the token is not persisted as an environment variable in the final image
ARG CR_PAT
# Install git if you have private GitHub dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy requirements and install dependencies
COPY requirements.txt .
# Configure git to use PAT for GitHub access (if private deps)
RUN git config --global url."https://${CR_PAT}@github.com/".insteadOf "https://github.com/" \
&& pip install --no-cache-dir -r requirements.txt \
&& git config --global --unset url."https://${CR_PAT}@github.com/".insteadOf
# Copy application code
COPY . .
# Create non-root user for security
RUN useradd --create-home --shell /bin/bash appuser
RUN chown -R appuser:appuser /app
USER appuser
# Expose the application port
EXPOSE {port}
# Set environment variables
ENV PYTHONPATH=/app
ENV PORT={port}
# Run with gunicorn for production
CMD ["gunicorn", "--bind", "0.0.0.0:{port}", "--workers", "{workers}", "{module}:{app}"]
CRITICAL Replacements:
- {port} → Application port (e.g., 5678)
- {workers} → Number of workers (e.g., 4, or 1 for background jobs)
- {module} → Python module name (e.g., hyperopt_daemon)
- {app} → App variable name (e.g., app or create_app())
If NO private dependencies, remove these lines:
# Remove ARG CR_PAT, the git installation step, and the git config commands
Simplified version without private deps:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN useradd --create-home --shell /bin/bash appuser
RUN chown -R appuser:appuser /app
USER appuser
EXPOSE {port}
ENV PYTHONPATH=/app
ENV PORT={port}
CMD ["gunicorn", "--bind", "0.0.0.0:{port}", "--workers", "{workers}", "{module}:{app}"]
Step 3: Create build-publish.sh Script
Create build-publish.sh in the project root:
#!/bin/sh
# Abort on any failure so VERSION is not bumped after a failed build or push
set -e
# VERSION file path
VERSION_FILE="VERSION"
# Parse command line arguments
NO_CACHE=""
if [ "$1" = "--no-cache" ]; then
NO_CACHE="--no-cache"
echo "Building with --no-cache flag"
fi
# Check if VERSION file exists; if not, create it with version 0 so the first build is version 1
if [ ! -f "$VERSION_FILE" ]; then
echo "0" > "$VERSION_FILE"
echo "Created VERSION file with initial version 0"
fi
# Read current version from file
CURRENT_VERSION=$(cat "$VERSION_FILE" 2>/dev/null)
# Validate that the version is a number
if ! echo "$CURRENT_VERSION" | grep -qE '^[0-9]+$'; then
echo "Error: Invalid version format in $VERSION_FILE. Expected a number, got: $CURRENT_VERSION"
exit 1
fi
# Increment version
VERSION=$((CURRENT_VERSION + 1))
echo "Building version $VERSION (incrementing from $CURRENT_VERSION)"
# Build the image with optional --no-cache flag
docker build $NO_CACHE --build-arg CR_PAT="$CR_PAT" --platform linux/amd64 -t {registry_url}:$VERSION .
# Tag the same image as latest
docker tag {registry_url}:$VERSION {registry_url}:latest
# Push both tags
docker push {registry_url}:$VERSION
docker push {registry_url}:latest
# Update the VERSION file with the new version
echo "$VERSION" > "$VERSION_FILE"
echo "Updated $VERSION_FILE to version $VERSION"
CRITICAL Replacements:
- {registry_url} → Full container registry URL (e.g., ghcr.io/mazza-vc/hyperopt-server)
If NO private dependencies, remove --build-arg CR_PAT="$CR_PAT":
docker build $NO_CACHE --platform linux/amd64 -t {registry_url}:$VERSION .
Make the script executable:
chmod +x build-publish.sh
Step 4: Update .gitignore
Add VERSION file to .gitignore:
# Version file (used by build system, not tracked)
VERSION
This prevents the VERSION file from being committed since it's auto-generated and incremented by the build script.
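If you prefer to script this step, the following sketch appends the entry only when it is not already present (it assumes .gitignore lives in the project root and creates it if missing):
# Add VERSION to .gitignore only if it is not already listed
grep -qxF "VERSION" .gitignore 2>/dev/null || printf "\n# Version file (used by build system, not tracked)\nVERSION\n" >> .gitignore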
Step 5: Create .dockerignore (Optional but Recommended)
Create .dockerignore to exclude unnecessary files from Docker build context:
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
.venv/
ENV/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/
# IDEs
.vscode/
.idea/
*.swp
*.swo
*~
# Git
.git/
.gitignore
# CI/CD
.github/
# Documentation
*.md
docs/
# Build artifacts
VERSION
*.log
# OS
.DS_Store
Thumbs.db
Step 6: Usage Instructions
Building and Publishing
Standard build (increments version, uses cache):
./build-publish.sh
Fresh build (no cache, pulls latest dependencies):
./build-publish.sh --no-cache
Environment Variables
For builds with private dependencies, set CR_PAT before building:
export CR_PAT=ghp_your_github_personal_access_token
./build-publish.sh
For running the container, set application-specific variables:
docker run -p {port}:{port} \
-e DATABASE_PASSWORD=secret \
-e API_KEY=key123 \
{registry_url}:latest
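For more than a couple of variables, Docker's --env-file flag keeps secrets out of your shell history. A minimal sketch; the .env filename and variable names below are illustrative:
# Contents of .env (one KEY=value pair per line):
#   DATABASE_PASSWORD=secret
#   API_KEY=key123
docker run -p {port}:{port} --env-file .env {registry_url}:latest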
Local Testing
Test the container locally before publishing:
# Build without pushing (add --build-arg CR_PAT="$CR_PAT" if you have private dependencies)
docker build --platform linux/amd64 -t {project}:test .
# Run locally
docker run -p {port}:{port} {project}:test
# Test the endpoint
curl http://localhost:{port}/health
Design Principles
This pattern follows these principles:
Security:
- Non-root user - Container runs as unprivileged user
- Minimal base image - python:3.11-slim reduces attack surface
- Build-time secrets - CR_PAT is passed only as a build ARG, so it is not set in the final image's environment (build args can still surface in image history; BuildKit secret mounts are stricter if you need that)
- Explicit permissions - chown ensures correct file ownership
Reliability:
- Gunicorn workers - Production-grade WSGI server with process management
- Platform specification - --platform linux/amd64 ensures compatibility
- Version tracking - Auto-incrementing versions for rollback capability
- Immutable builds - Each pushed version tag is never reused, so older builds remain available
Performance:
- Layer caching - Dependencies cached separately from code
- No-cache option - Force fresh builds when needed
- Slim base image - Faster pulls and smaller storage
- Multi-worker - Concurrent request handling
DevOps:
- Automated versioning - No manual version management
- Dual tagging - Both version and latest tags for flexibility
- Repeatable builds - Safe to run multiple times; each run publishes the next version
- Simple CLI - Single script handles build and publish
Common Patterns
Pattern 1: Standard Web API
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:create_app()"]
- Multiple workers for concurrent requests
- Factory pattern with create_app()
Pattern 2: Background Job Worker
CMD ["gunicorn", "--bind", "0.0.0.0:5678", "--workers", "1", "daemon:app"]
- Single worker to avoid job conflicts
- Direct app instance
Pattern 3: High-Traffic API
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "--workers", "8", "--timeout", "120", "api:app"]
- More workers for higher concurrency
- Increased timeout for long-running requests
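Because the worker count is baked into the image's CMD, changing these patterns normally means rebuilding. For quick experiments you can override CMD at run time instead; a sketch, with {module}, {app}, and {port} as placeholders to substitute:
# Override the image's CMD to try a different worker count without rebuilding
docker run -p {port}:{port} {registry_url}:latest \
  gunicorn --bind 0.0.0.0:{port} --workers 2 {module}:{app}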
Integration with Other Skills
flask-smorest-api Skill
Create the API first, then dockerize:
1. User: "Set up Flask API server"
2. [flask-smorest-api skill runs]
3. User: "Now dockerize it"
4. [flask-docker-deployment skill runs]
postgres-setup Skill
For database-dependent apps:
# Add psycopg2-binary to requirements.txt
# Set database env vars in docker run:
docker run -e DB_HOST=db.example.com -e DB_PASSWORD=secret ...
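For local testing against Postgres, a user-defined Docker network lets the app reach the database by container name. A minimal sketch; the network name, container names, and credentials are placeholders:
# Create a shared network and start Postgres on it
docker network create appnet
docker run -d --name db --network appnet -e POSTGRES_PASSWORD=secret postgres:16
# Run the app on the same network, pointing DB_HOST at the Postgres container name
docker run --network appnet -p {port}:{port} -e DB_HOST=db -e DB_PASSWORD=secret {registry_url}:latest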
Container Registry Setup
GitHub Container Registry (GHCR)
Login:
echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
Registry URL format:
ghcr.io/{org}/{project}
Docker Hub
Login:
docker login docker.io
Registry URL format:
docker.io/{username}/{project}
AWS ECR
Login:
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin \
{account}.dkr.ecr.us-east-1.amazonaws.com
Registry URL format:
{account}.dkr.ecr.{region}.amazonaws.com/{project}
Troubleshooting
Build fails with "permission denied"
chmod +x build-publish.sh
Private dependency installation fails
# Verify CR_PAT is set
echo $CR_PAT
# Test GitHub access
curl -H "Authorization: token $CR_PAT" https://api.github.com/user
Container won't start
# Check logs
docker logs {container_id}
# Run interactively to debug
docker run -it {registry_url}:latest /bin/bash
Version file conflicts
# If the VERSION file gets corrupted, delete it and rebuild (numbering restarts from 1,
# which may overwrite tags that already exist in the registry)
rm VERSION
./build-publish.sh
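To avoid reusing version numbers that already exist in the registry, you can instead re-seed the file with the last version you know was published. A sketch; 7 stands in for your last published version:
# Re-seed VERSION with the last published version number (example value)
echo "7" > VERSION
./build-publish.sh   # next run builds and pushes version 8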
Example: Complete Workflow
User: "Dockerize my Flask hyperopt server"
Claude asks:
- Entry point? → hyperopt_daemon:app
- Port? → 5678
- Registry? → ghcr.io/mazza-vc/hyperopt-server
- Private deps? → yes (arcana-core)
- Workers? → 1 (background job processor)
Claude creates:
- Dockerfile with gunicorn, 1 worker, port 5678
- build-publish.sh with GHCR registry URL
- Adds VERSION to .gitignore
- Creates .dockerignore
User runs:
export CR_PAT=ghp_abc123
./build-publish.sh
Result:
- ✅ Builds ghcr.io/mazza-vc/hyperopt-server:1
- ✅ Tags it as ghcr.io/mazza-vc/hyperopt-server:latest
- ✅ Pushes both tags
- ✅ Updates VERSION to 1
Subsequent builds:
./build-publish.sh # Builds version 2
./build-publish.sh # Builds version 3
./build-publish.sh --no-cache # Builds version 4 (fresh)
Best Practices
- Use --no-cache strategically - Only when dependencies updated or debugging
- Test locally first - Build and run locally before pushing
- Keep VERSION in .gitignore - Let build system manage it
- Use explicit versions - Don't rely only on the latest tag for production
- Document env vars - List all required environment variables in README
- Health checks - Add a /health endpoint for container orchestration
- Logging - Configure logging to stdout for container logs
- Resource limits - Set memory/CPU limits in production deployment (see the sketch after this list)
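As a concrete example of the last two points, docker run can enforce resource limits and poll the /health endpoint directly. A sketch with illustrative limit values; it uses Python's standard library for the health probe because the slim image does not ship curl:
# Run with memory/CPU limits and a container-level health check against /health
docker run -d -p {port}:{port} \
  --memory 512m --cpus 1.0 \
  --health-cmd "python -c \"import urllib.request; urllib.request.urlopen('http://localhost:{port}/health')\"" \
  --health-interval 30s --health-retries 3 \
  {registry_url}:latest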
Advanced: Multi-Stage Builds
For smaller images, use multi-stage builds:
# Build stage
FROM python:3.11-slim as builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Runtime stage
FROM python:3.11-slim
WORKDIR /app
RUN useradd --create-home appuser
# Copy the installed packages into the non-root user's home;
# packages left under /root/.local would not be readable once we switch to appuser
COPY --from=builder /root/.local /home/appuser/.local
COPY . .
RUN chown -R appuser:appuser /app /home/appuser/.local
ENV PATH=/home/appuser/.local/bin:$PATH
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
This pattern:
- Installs dependencies in builder stage
- Copies only installed packages to runtime
- Results in smaller final image
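To confirm the size reduction, build both variants under different tags and compare them with docker images. A sketch that assumes you keep the multi-stage variant in a separate file named Dockerfile.multistage (an illustrative name):
# Build the single-stage and multi-stage Dockerfiles under different tags, then compare sizes
docker build -f Dockerfile -t {project}:single .
docker build -f Dockerfile.multistage -t {project}:multi .
docker images | grep {project}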