How to Build Docker Images and Deploy to Google Cloud Run - A Production-Ready Approach
Learn how to build Docker images and deploy them to Google Cloud Run using a structured, environment-based approach with Docker Compose and automated deployment scripts.
Deploying applications to Google Cloud Run requires a well-structured approach to building, pushing, and deploying Docker images. In this comprehensive guide, we’ll explore a production-ready deployment strategy that uses environment-specific configurations, Docker Compose, and automated deployment scripts.
This approach, refined through real-world production deployments, provides consistency, reliability, and scalability for your Cloud Run applications.
Understanding the Deployment Architecture
Our deployment strategy consists of three main phases:
- Build Phase: Create Docker images using Docker Compose
- Push Phase: Upload images to Google Artifact Registry (the successor to the now-deprecated Container Registry)
- Deploy Phase: Deploy the containerized application to Google Cloud Run
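Stripped to its essentials, and using the placeholder names from the environment files later in this guide, the pipeline reduces to three commands (the full script below wraps them in validation and safety checks):

# 1. Build the image with Docker Compose
docker compose --file .docker/web/Dockerfile.compose.yml \
    --env-file .docker/web/.env.production build

# 2. Push the image to the registry
docker compose --file .docker/web/Dockerfile.compose.yml \
    --env-file .docker/web/.env.production push

# 3. Deploy the pushed image to Cloud Run
gcloud run deploy your-app-production \
    --image=gcr.io/your-gcp-project-id/your-app:latest \
    --region=us-central1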
This approach provides several advantages:
- Environment Isolation: Separate configurations for different environments
- Consistency: Reproducible builds across different environments
- Automation: Scripted deployment process reduces human error
- Flexibility: Easy to modify configurations without changing core deployment logic
Project Structure
Let’s start by setting up the proper project structure:
.docker/
├── docker-build.sh             # Main deployment script
└── web/                        # Project-specific configurations
    ├── .env.production         # Production environment variables
    ├── .env.staging            # Staging environment variables
    └── Dockerfile.compose.yml  # Docker Compose configuration
Step 1: Create Environment Configuration Files
First, create environment-specific configuration files that contain all necessary variables for your deployment.
Production Environment File (.env.production)
# .docker/web/.env.production
# Google Cloud Configuration
PROJECT=your-gcp-project-id
INSTANCE=your-app-production
REGION=us-central1
# Docker Image Configuration
# (gcr.io paths are now served through Artifact Registry's gcr.io domain support;
# new projects can use REGION-docker.pkg.dev/PROJECT/REPO/IMAGE paths instead)
IMAGE_NAME=gcr.io/your-gcp-project-id/your-app
DOCKERFILE_PATH=./Dockerfile
# Application Configuration
NODE_ENV=production
PORT=8080
# Database Configuration (if needed)
DATABASE_URL=your-production-database-url
# Additional environment variables
API_KEY=your-production-api-key
Staging Environment File (.env.staging)
# .docker/web/.env.staging
# Google Cloud Configuration
PROJECT=your-gcp-project-id
INSTANCE=your-app-staging
REGION=us-central1
# Docker Image Configuration
IMAGE_NAME=gcr.io/your-gcp-project-id/your-app-staging
DOCKERFILE_PATH=./Dockerfile
# Application Configuration
NODE_ENV=staging
PORT=8080
# Database Configuration (if needed)
DATABASE_URL=your-staging-database-url
# Additional environment variables
API_KEY=your-staging-api-key

Since these files contain credentials, keep them out of version control (add them to .gitignore); the Security Best Practices section below covers Secret Manager for production secrets.
Step 2: Create Docker Compose Configuration
Create a Docker Compose file that defines how to build your application:
# .docker/web/Dockerfile.compose.yml
version: '3.8'

services:
  web:
    build:
      context: ../../                 # Build context relative to the compose file
      dockerfile: ${DOCKERFILE_PATH}
      args:
        - NODE_ENV=${NODE_ENV}
        - PORT=${PORT}
    image: ${IMAGE_NAME}:latest
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - DATABASE_URL=${DATABASE_URL}
      - API_KEY=${API_KEY}
    ports:
      - "${PORT}:${PORT}"
Step 3: Create the Main Dockerfile
Create a Dockerfile in your project root:
# Dockerfile
# Use the official Node.js runtime as the base image
FROM node:18-alpine

# Set the working directory
WORKDIR /app

# Copy package manifests first so dependency installation stays cached
COPY package*.json ./

# Install production dependencies
# (build tooling must live in "dependencies", since dev dependencies are omitted)
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Build the application (if needed)
RUN npm run build

# Expose port
EXPOSE 8080

# Create a non-root user and hand over the app directory
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 && \
    chown -R nextjs:nodejs /app
USER nextjs

# Health check for local runs (Cloud Run ignores Docker HEALTHCHECK and uses its own probes);
# wget is used because node:18-alpine does not ship with curl
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost:8080/health || exit 1

# Start the application
CMD ["npm", "start"]
Step 4: Create the Deployment Script
Now, let’s create the main deployment script that orchestrates the entire process:
#!/bin/bash
# .docker/docker-build.sh
# Get the directory where this script is located
BASE_DIR="$(dirname "$(realpath "$0")")"
# Validate input arguments
if [[ $# -lt 2 ]]; then
    echo "Usage: $0 <project> <environment>"
    echo "Projects: web, api, worker"
    echo "Environments: production, staging, development"
    exit 1
fi
# Arguments
proj_opt=$1 # Project (web, api, etc.)
env_opt=$2 # Environment (production, staging, development)
# Validate project exists
if [[ ! -d "$BASE_DIR/$proj_opt" ]]; then
    echo "Error: Project directory '$proj_opt' not found"
    exit 1
fi

# Validate environment file exists
if [[ ! -f "$BASE_DIR/$proj_opt/.env.$env_opt" ]]; then
    echo "Error: Environment file '.env.$env_opt' not found for project '$proj_opt'"
    exit 1
fi
# Load environment variables
source "$BASE_DIR/$proj_opt/.env.$env_opt"
# Display configuration
echo "========================================="
echo "Deployment Configuration"
echo "========================================="
echo "Project: $proj_opt"
echo "Environment: $env_opt"
echo "GCP Project: $PROJECT"
echo "Instance: $INSTANCE"
echo "Region: $REGION"
echo "Image: $IMAGE_NAME:latest"
echo "========================================="
# Confirm deployment
read -p "Do you want to proceed with this deployment? (y/N): " -n 1 -r
echo

if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Deployment cancelled."
    exit 1
fi
# Build phase
echo "🔨 Building $proj_opt for $env_opt..."
docker compose --file "$BASE_DIR/$proj_opt/Dockerfile.compose.yml" \
    --env-file "$BASE_DIR/$proj_opt/.env.$env_opt" \
    config

if [[ $? -ne 0 ]]; then
    echo "❌ Docker Compose configuration validation failed"
    exit 1
fi

docker compose --file "$BASE_DIR/$proj_opt/Dockerfile.compose.yml" \
    --env-file "$BASE_DIR/$proj_opt/.env.$env_opt" \
    build

if [[ $? -ne 0 ]]; then
    echo "❌ Docker build failed"
    exit 1
fi
echo "✅ Build completed successfully"
# Push phase
echo "📤 Pushing $proj_opt to registry..."
docker compose --file "$BASE_DIR/$proj_opt/Dockerfile.compose.yml" \
    --env-file "$BASE_DIR/$proj_opt/.env.$env_opt" \
    push

if [[ $? -ne 0 ]]; then
    echo "❌ Docker push failed"
    exit 1
fi
echo "✅ Push completed successfully"
# Deploy phase
echo "🚀 Deploying $proj_opt to Cloud Run..."
# Construct the deployment command
cmd="gcloud run deploy $INSTANCE \
    --project=$PROJECT \
    --image=$IMAGE_NAME:latest \
    --region=$REGION \
    --timeout=30 \
    --cpu=1 \
    --memory=512Mi \
    --min-instances=0 \
    --max-instances=10 \
    --concurrency=50 \
    --platform=managed \
    --allow-unauthenticated \
    --ingress=all \
    --set-env-vars=NODE_ENV=$NODE_ENV"

# Append optional environment variables to the comma-separated list
# (values that themselves contain commas need gcloud's alternate-delimiter
# syntax; see `gcloud topic escaping`)
if [[ -n "$DATABASE_URL" ]]; then
    cmd="$cmd,DATABASE_URL=$DATABASE_URL"
fi

if [[ -n "$API_KEY" ]]; then
    cmd="$cmd,API_KEY=$API_KEY"
fi
echo "Executing deployment command..."
echo "$cmd"
eval "$cmd"

if [[ $? -ne 0 ]]; then
    echo "❌ Cloud Run deployment failed"
    exit 1
fi
echo "✅ Deployment completed successfully"
# Get service URL
SERVICE_URL=$(gcloud run services describe $INSTANCE \
    --project=$PROJECT \
    --region=$REGION \
    --format="value(status.url)")
echo "========================================="
echo "Deployment Summary"
echo "========================================="
echo "Service URL: $SERVICE_URL"
echo "Project: $proj_opt"
echo "Environment: $env_opt"
echo "Instance: $INSTANCE"
echo "Region: $REGION"
echo "========================================="
exit 0
Step 5: Set Up Authentication
Before running the deployment script, ensure you have proper authentication:
# Authenticate with Google Cloud
gcloud auth login
# Set your default project
gcloud config set project your-gcp-project-id
# Configure Docker to use gcloud as credential helper
gcloud auth configure-docker
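If you push to Artifact Registry hosts rather than the gcr.io domain, register the credential helper for the regional registry as well (us-central1 here is an example):

# Configure Docker for a regional Artifact Registry host
gcloud auth configure-docker us-central1-docker.pkg.dev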
Step 6: Running the Deployment
Make the script executable and run it:
# Make script executable
chmod +x .docker/docker-build.sh
# Deploy to staging
./.docker/docker-build.sh web staging
# Deploy to production
./.docker/docker-build.sh web production
Advanced Configuration Options
Custom Cloud Run Settings
You can customize Cloud Run settings by modifying the deployment command in your script:
# Enhanced deployment command with more options
cmd="gcloud run deploy $INSTANCE \
    --project=$PROJECT \
    --image=$IMAGE_NAME:latest \
    --region=$REGION \
    --timeout=300 \
    --cpu=2 \
    --memory=1Gi \
    --min-instances=1 \
    --max-instances=100 \
    --concurrency=80 \
    --platform=managed \
    --ingress=internal-and-cloud-load-balancing \
    --vpc-connector=my-vpc-connector \
    --service-account=my-service-account@project.iam.gserviceaccount.com \
    --set-env-vars=NODE_ENV=$NODE_ENV \
    --labels=environment=$env_opt,project=$proj_opt"
Multi-Stage Dockerfile
For more complex applications, use multi-stage builds:
# Multi-stage Dockerfile

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine AS production
WORKDIR /app

# Copy package files and install production dependencies only
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Copy the built application from the builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 && \
    chown -R nextjs:nodejs /app
USER nextjs

EXPOSE 8080

# wget instead of curl, which the Alpine base image does not include
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost:8080/health || exit 1

CMD ["node", "dist/server.js"]
Environment-Specific Dockerfiles
For different environments, you might need different Dockerfiles:
# .docker/web/Dockerfile.compose.yml with environment-specific Dockerfiles
version: '3.8'

services:
  web:
    build:
      context: ../../
      dockerfile: ${DOCKERFILE_PATH:-./Dockerfile.${NODE_ENV}}
      args:
        - NODE_ENV=${NODE_ENV}
        - BUILD_ENV=${NODE_ENV}
    image: ${IMAGE_NAME}:latest
    environment:
      - NODE_ENV=${NODE_ENV}
Monitoring and Logging
Add Health Check Endpoint
Ensure your application has a health check endpoint:
// health.js - Express.js example
const express = require('express');
const app = express();

app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    version: process.env.npm_package_version || 'unknown'
  });
});

app.listen(process.env.PORT || 8080);
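With the app running locally, a quick request confirms the endpoint responds (the output shown is illustrative):

curl http://localhost:8080/health
# {"status":"healthy","timestamp":"2025-01-01T12:00:00.000Z","version":"1.0.0"}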
Cloud Run Logging
Configure structured logging for better observability:
// logger.js
const winston = require('winston');

// Cloud Logging reads the special "severity" field to classify log levels;
// winston's JSON output uses "level" by default, so map it across.
const severity = winston.format((info) => {
  info.severity = info.level.toUpperCase();
  return info;
});

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    severity(),
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console()
  ]
});

module.exports = logger;
Security Best Practices
Use Service Accounts
Create dedicated service accounts for your Cloud Run services:
# Create service account
gcloud iam service-accounts create cloud-run-service \
    --display-name="Cloud Run Service Account"

# Grant necessary permissions
gcloud projects add-iam-policy-binding your-project-id \
    --member="serviceAccount:cloud-run-service@your-project-id.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"
Secure Environment Variables
For sensitive data, use Google Secret Manager:
# Create the secret
gcloud secrets create api-key --data-file=api-key.txt

# Grant access to the service account
gcloud secrets add-iam-policy-binding api-key \
    --member="serviceAccount:cloud-run-service@your-project-id.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

# Deploy with the secret exposed as an environment variable
gcloud run deploy $INSTANCE \
    --set-secrets=API_KEY=api-key:latest
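While debugging, you can read the stored value back to confirm the secret was created correctly (avoid printing secrets in CI logs):

gcloud secrets versions access latest --secret=api-key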
CI/CD Integration
GitHub Actions Example
# .github/workflows/deploy.yml
name: Deploy to Cloud Run

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_SA_KEY }}'

      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      - name: 'Configure Docker'
        run: gcloud auth configure-docker

      - name: 'Deploy to Cloud Run'
        run: |
          chmod +x ./.docker/docker-build.sh
          # Pipe a "y" through stdin because the script asks for interactive confirmation
          echo "y" | ./.docker/docker-build.sh web production
Troubleshooting Common Issues
Build Failures
Docker build context too large:
# Add a .dockerignore file:
node_modules
.git
.env*
*.log
Memory or disk pressure during build:

# Reclaim disk space from dangling images, stopped containers, and build cache
docker system prune -f

If builds are killed for lack of RAM, raise the memory allocation in Docker Desktop's settings rather than via a CLI flag.
Deployment Failures
Authentication issues:
# Re-authenticate
gcloud auth login
gcloud auth configure-docker
Permission errors:
# Check IAM permissions
gcloud projects get-iam-policy your-project-id
Runtime Issues
Container startup failures:
# Check logs
gcloud logging read "resource.type=cloud_run_revision" --limit=50
Memory or CPU limits:
# Increase resources
gcloud run services update $INSTANCE \
    --region=$REGION \
    --memory=1Gi \
    --cpu=2
Cost Optimization
Resource Right-Sizing
Monitor and adjust resources based on actual usage. Cloud Run's built-in metrics (request count, latency, instance count, CPU and memory utilization) are available in Cloud Monitoring's Metrics Explorer in the Cloud Console; once you know the service's real footprint, right-size it:

# Scale resource limits to match observed usage
gcloud run services update your-app-production \
    --region=us-central1 \
    --memory=256Mi \
    --cpu=1
Efficient Docker Images
- Use Alpine Linux base images
- Multi-stage builds to reduce image size
- Remove unnecessary dependencies
- Use .dockerignore effectively
Conclusion
This production-ready approach to building and deploying Docker images to Google Cloud Run provides:
- Consistency: Environment-specific configurations ensure consistent deployments
- Automation: Scripted processes reduce manual errors
- Scalability: Easy to extend for multiple projects and environments
- Security: Best practices for authentication and secret management
- Monitoring: Built-in health checks and logging
Key benefits of this approach:
- Environment Isolation: Clear separation between staging and production
- Reproducible Builds: Consistent Docker images across environments
- Easy Rollbacks: Image tags and Cloud Run's immutable revisions allow quick rollbacks (see the command below)
- Scalable Architecture: Supports multiple projects and services
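On the rollback point: every Cloud Run deployment creates a new revision, so reverting is a single traffic-shift command (the revision name below is a placeholder):

gcloud run services update-traffic your-app-production \
    --region=us-central1 \
    --to-revisions=your-app-production-00007-abc=100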
By following this guide, you’ll have a robust deployment pipeline that can handle production workloads while maintaining security and operational best practices.
Remember to regularly review and update your deployment configurations, monitor resource usage, and implement proper backup and disaster recovery procedures for production environments.
Additional Resources
- Google Cloud Run Documentation
- Docker Compose Documentation
- Dockerfile Best Practices
- Google Cloud Build
- Container Security Best Practices
For enterprise deployments, consider implementing additional features like automated testing, security scanning, and advanced monitoring to ensure production readiness and compliance with your organization’s requirements.