Docker Deployment Guide
This guide covers Docker deployment for GitPulse, including configuration, troubleshooting, and production considerations.
🐳 Overview
GitPulse uses Docker and Docker Compose for containerized deployment with the following services:
- Web Application (Port 8000): Django web server with Gunicorn
- Worker (Background): Django-Q worker for background tasks
- PostgreSQL 17 (Port 5432): Primary database
- MongoDB 7.0 (Port 27017): Analytics and metrics storage
- Ollama (Port 11435): Local LLM service with automatic model initialization (mapped from internal port 11434)
Production-Ready: The application uses Gunicorn WSGI server and includes health checks for all services.
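The service topology above might be expressed in `docker-compose.yml` roughly as follows. This is an abridged, illustrative sketch: the real file also defines environment files, restart policies, and health checks, and the `postgres_data` volume name is an assumption (only `ollama_data` is documented below).

```yaml
# Abridged sketch of docker-compose.yml matching the services above.
services:
  web:
    build: .
    command: ./start.sh
    ports: ["8000:8000"]
    depends_on: [postgres, mongodb, ollama]
  worker:
    build: .
    command: ./start-worker.sh
    depends_on: [postgres, mongodb]
  postgres:
    image: postgres:17
    ports: ["5432:5432"]
    volumes: [postgres_data:/var/lib/postgresql/data]
  mongodb:
    image: mongo:7.0
    ports: ["27017:27017"]
  ollama:
    build:
      context: .
      dockerfile: Dockerfile.ollama
    ports: ["11435:11434"]   # host 11435 -> container 11434
    volumes: [ollama_data:/root/.ollama]

volumes:
  postgres_data:
  ollama_data:
```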
📋 Prerequisites
- Docker Desktop installed and running
- At least 8GB of available RAM (16GB recommended)
- At least 10GB of available disk space
- Git installed
🚀 Quick Start
Step 1: Clone Repository
git clone https://github.com/assiadialeb/gitpulse.git
cd gitpulse
Step 2: Environment Configuration
Create a .env file at the project root:
cp env.example .env
Edit the .env file according to your needs:
# Django Settings
DEBUG=True
SECRET_KEY=your-secret-key-here
ALLOWED_HOSTS=localhost,127.0.0.1
# Database Settings
POSTGRES_DB=gitpulse
POSTGRES_USER=gitpulse
POSTGRES_PASSWORD=gitpulse_password
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
# MongoDB Settings
MONGODB_HOST=mongodb
MONGODB_PORT=27017
MONGODB_NAME=gitpulse
# Ollama Settings
OLLAMA_HOST=ollama
OLLAMA_PORT=11434
# GitHub OAuth (optional for development)
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
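On the Django side, these variables are typically read from the environment when building settings. A minimal sketch, assuming the variable names from the `.env` above (the defaults and the function name are illustrative, not taken from the repository):

```python
import os

def postgres_settings():
    """Build Django's default DATABASES entry from the .env variables above."""
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "gitpulse"),
        "USER": os.environ.get("POSTGRES_USER", "gitpulse"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "postgres"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }

# In settings.py this would be used as: DATABASES = {"default": postgres_settings()}
```

Note that `POSTGRES_HOST=postgres` refers to the Compose service name, which Docker's internal DNS resolves inside the network; `localhost` would not work from within the `web` container.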
Step 3: Start All Services
docker-compose up -d
This command will:

- Build the Docker image for the application
- Start PostgreSQL 17 (Django database)
- Start MongoDB 7.0 (analytics database)
- Start Ollama with automatic model initialization (AI for commit classification)
- Start the Django application
Note: The first startup may take 5-10 minutes as Ollama automatically downloads the gemma3:4b model (~3.3GB).
Step 4: Verify Services
docker-compose ps
You should see all services with "Up" status.
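If you script this verification (for example in CI), a small polling helper can wait for a service to answer before proceeding. This is an illustrative sketch; the URLs in the commented examples are the endpoints from this guide:

```shell
#!/bin/sh
# wait_for CMD [TIMEOUT] -- retry CMD every 2 seconds until it succeeds
# or TIMEOUT seconds (default 60) have elapsed.
wait_for() {
  cmd=$1; timeout=${2:-60}; elapsed=0
  until sh -c "$cmd" >/dev/null 2>&1; do
    elapsed=$((elapsed + 2))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 2
  done
  return 0
}

# Example usage against this guide's endpoints:
# wait_for "curl -sf http://localhost:8000" 120
# wait_for "curl -sf http://localhost:11435" 300   # Ollama may still be pulling the model
```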
Step 5: Access the Application
- Web UI: http://localhost:8000
- Admin: http://localhost:8000/admin
- Ollama API: http://localhost:11435
🔧 Ollama Configuration
The Ollama service is configured with automatic initialization:
- Automatic Model Download: The `gemma3:4b` model is automatically downloaded on first startup
- Persistent Storage: Models are stored in a Docker volume (`ollama_data`)
- Health Checks: Automatic health monitoring with curl-based checks
- Custom Image: Uses a custom Docker image based on Ubuntu 22.04 with Ollama pre-installed
Ollama Initialization Process
- Container Startup: The container starts with a custom entrypoint script
- Server Launch: Ollama server starts in the background
- Model Check: The system checks if the `gemma3:4b` model exists
- Automatic Download: If not present, downloads the model (~3.3GB)
- Ready State: Service becomes healthy and ready for use
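The model-check step can be illustrated with a small shell function. This is a sketch of the logic only, not the actual entrypoint script shipped in the repository:

```shell
#!/bin/sh
# ensure_model MODEL -- pull MODEL only if `ollama list` does not already show it.
ensure_model() {
  model=$1
  if ollama list 2>/dev/null | grep -q "^${model}"; then
    echo "Model ${model} already exists!"
  else
    echo "Pulling ${model}..."
    ollama pull "${model}"
  fi
}

# ensure_model "gemma3:4b"
```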
Logs Example
🚀 Starting Ollama server...
🚀 Starting Ollama initialization...
⏳ Waiting for Ollama to be ready...
✅ Model gemma3:4b already exists!
🎉 Ollama initialization complete!
🌐 Ollama is ready at http://localhost:11434
📁 Configuration Files
Docker Compose (docker-compose.yml)
- Python 3.12: Latest Python version for optimal performance
- PostgreSQL 17: Latest stable PostgreSQL version
- MongoDB 7.0: Latest MongoDB version
- Custom Ollama Image: Built from `Dockerfile.ollama`
Custom Ollama Dockerfile (Dockerfile.ollama)
- Base Image: Ubuntu 22.04
- Ollama Installation: Official installation script
- Custom Scripts: Automatic initialization and entrypoint management
- Dependencies: curl, wget for health checks
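Based on the description above, `Dockerfile.ollama` might look roughly like this. The entrypoint filename is an assumption; only the base image, dependencies, and official install script are taken from the text:

```dockerfile
# Illustrative sketch of Dockerfile.ollama; the actual file may differ.
FROM ubuntu:22.04

# curl and wget are needed for the install script and health checks
RUN apt-get update && apt-get install -y curl wget \
    && rm -rf /var/lib/apt/lists/*

# Official Ollama installation script
RUN curl -fsSL https://ollama.com/install.sh | sh

# Custom entrypoint handling server launch and model initialization
# (filename is hypothetical)
COPY ollama-entrypoint.sh /usr/local/bin/ollama-entrypoint.sh
RUN chmod +x /usr/local/bin/ollama-entrypoint.sh

EXPOSE 11434
ENTRYPOINT ["/usr/local/bin/ollama-entrypoint.sh"]
```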
Startup Scripts
- `start.sh`: Production web server startup with Gunicorn
  - Runs database migrations automatically
  - Collects static files automatically
  - Starts Gunicorn WSGI server on port 8000
  - Optimized for production use
- `start-worker.sh`: Background worker startup
  - Starts Django-Q cluster for background tasks
  - Handles indexing and analytics processing
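For orientation, `start.sh` likely follows this shape. A sketch only, assuming the steps listed above; the `gitpulse.wsgi` module path and worker count are assumptions:

```shell
#!/bin/sh
# Illustrative sketch of start.sh; the repository's actual script may differ.
set -e

python manage.py migrate --noinput
python manage.py collectstatic --noinput

# Bind Gunicorn on port 8000 as described above.
exec gunicorn gitpulse.wsgi:application --bind 0.0.0.0:8000 --workers 3
```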
🛠️ Development Workflow
Starting Services
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# View specific service logs
docker-compose logs -f web
docker-compose logs -f ollama
Stopping Services
# Stop all services
docker-compose down
# Stop and remove volumes (⚠️ This will delete all data)
docker-compose down -v
Rebuilding Images
# Rebuild all images
docker-compose build
# Rebuild specific service
docker-compose build web
docker-compose build ollama
🔍 Troubleshooting
Common Issues
- Port Conflicts
  - Ensure ports 8000, 5432, 27017, and 11435 are available
  - Stop any local PostgreSQL, MongoDB, or Ollama instances
- Memory Issues
  - Ensure Docker has at least 8GB RAM allocated
  - Monitor memory usage: `docker stats`
- Ollama Model Download
  - First startup may take 5-10 minutes to download the model
  - Check logs: `docker-compose logs ollama`
  - The model is cached in a volume for subsequent starts
- Database Connection Issues
  - Wait for PostgreSQL to fully start (check health status)
  - Verify environment variables in the `.env` file
Health Checks
All services include health checks:
# Check service status
docker-compose ps
# Check health status
docker-compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}"
Logs and Debugging
# View all logs
docker-compose logs
# Follow logs in real-time
docker-compose logs -f
# View specific service logs
docker-compose logs web
docker-compose logs worker
docker-compose logs postgres
docker-compose logs mongodb
docker-compose logs ollama
⚡ Performance Optimization
Resource Allocation
- RAM: Minimum 8GB, recommended 16GB
- CPU: 4+ cores recommended
- Storage: SSD recommended for better I/O performance
Docker Settings
- Memory: Allocate at least 8GB to Docker Desktop
- Swap: 2GB minimum
- Disk Image Size: 60GB+ recommended
🚀 Production Considerations
Static Files Configuration
In production, Django requires static files to be collected and served efficiently:
- Automatic Collection: Static files are automatically collected during container startup via `start.sh`
- Web Server Configuration: Configure your web server (Nginx, Apache) to serve static files directly
- Volume Mounting: The production Docker Compose mounts a dedicated volume for static files
# Static files are collected automatically on startup
# Manual collection is also available if needed:
docker-compose exec web python manage.py collectstatic --noinput
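If Nginx fronts the stack, a server block along these lines serves the collected files directly and proxies everything else to Gunicorn. The `/app/staticfiles/` path is an assumption; it should match the `STATIC_ROOT` volume mounted into the Nginx container:

```nginx
server {
    listen 80;

    location /static/ {
        alias /app/staticfiles/;   # assumed STATIC_ROOT volume mount
        expires 30d;
    }

    location / {
        proxy_pass http://localhost:8000;  # Gunicorn from the web service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```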
Security
- Change default passwords in production
- Use environment variables for sensitive data
- Consider using Docker secrets for production deployments
Monitoring
- Set up log aggregation
- Monitor resource usage
- Configure alerts for service failures
Backup
- Regular PostgreSQL backups
- MongoDB data backup
- Ollama model volume backup
📊 Resource Usage
Typical Memory Usage
- Ollama (gemma3:4b): ~4-6GB RAM
- PostgreSQL: ~512MB-1GB RAM
- MongoDB: ~512MB-1GB RAM
- Django Web: ~256MB-512MB RAM
- Django Worker: ~256MB-512MB RAM
- System overhead: ~1-2GB RAM
Total: ~7-12GB RAM under normal load
Disk Usage
- Base images: ~2-3GB
- Ollama model: ~3.3GB
- Application code: ~100MB
- Database data: Varies based on usage
- Logs: Varies based on usage
🔄 Updates and Maintenance
Updating the Application
# Pull latest changes
git pull origin main
# Rebuild and restart (static files collected automatically)
docker-compose down
docker-compose up -d --build
Updating Dependencies
# Rebuild with no cache
docker-compose build --no-cache
# Restart services
docker-compose up -d
Cleaning Up
# Remove unused images
docker image prune
# Remove unused volumes
docker volume prune
# Remove unused networks
docker network prune
# Remove everything unused
docker system prune -a
📚 Additional Resources
- Installation Guide - Detailed installation instructions
- Configuration Guide - Environment and application configuration
- Troubleshooting Guide - Common issues and solutions
- Production Guide - Production deployment best practices