PropTech · 2026-02

PropTech Startup: CI/CD from Zero to 8-Minute Deploys on GCP

A two-year-old property management SaaS had a Django monolith deployed manually via SSH to a single GCP VM. The CTO was the only person who knew the deployment process. Any absence created a production risk. We built a full CI/CD pipeline with zero-downtime deploys in 10 days.

Deploy Time: 55 minutes (manual SSH) → 8 minutes (automated)
Deploy Frequency: 1–2/week (gated by CTO availability) → multiple times daily (any engineer)
Incidents: ~1 bad deploy/month with manual rollback → automatic rollback on health check failure
Cost Impact: $600/month saved (VM retired; Cloud Run costs ~$90/month at their traffic)

The Challenge

The deployment process lived entirely in the CTO's head: SSH into the VM, activate virtualenv, git pull, run migrations, restart gunicorn with systemctl. No tests ran before deploy. No rollback procedure existed. When a bad deploy happened - and they did, roughly once a month - the fix was another manual SSH session. The company had just onboarded three enterprise landlord clients managing 6,000 units, meaning downtime now had contractual consequences.
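Roughly, the process that lived in the CTO's head looked like this (host and paths are illustrative):

```shell
ssh deploy@prod-vm                     # one person had the keys
cd /srv/app && source venv/bin/activate
git pull origin main                   # no tests run first
python manage.py migrate               # migrations and deploy coupled
sudo systemctl restart gunicorn        # brief downtime on every deploy
# no rollback step existed; a bad deploy meant another SSH session
```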

The Approach

We containerised the Django application, moved it from a single GCP Compute Engine VM to Cloud Run, and built a GitHub Actions pipeline. The target was a deploy process any engineer could trigger by merging a PR, with automatic rollback on health check failure. Total timeline: 10 working days.

The Implementation

Django containerisation and static asset handling

We wrote a multi-stage Dockerfile: a build stage that installs dependencies and runs collectstatic, and a slim runtime stage on python:3.11-slim. Static assets were moved to a GCS bucket behind a CDN URL, removing them from the container entirely. Image size went from an undefined "whatever is on the VM" baseline to 340 MB.

Docker · Google Cloud Storage · GCP Artifact Registry
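A minimal sketch of that kind of Dockerfile — the settings module and gunicorn invocation are illustrative, not the client's actual names:

```dockerfile
# Build stage: install dependencies into a venv and run collectstatic
FROM python:3.11 AS build
WORKDIR /app
RUN python -m venv /venv
ENV PATH="/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Static output is synced to the GCS bucket in CI, not baked into the image
RUN DJANGO_SETTINGS_MODULE=config.settings python manage.py collectstatic --noinput

# Slim runtime stage: only the venv and app code ship
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /venv /venv
COPY --from=build /app /app
ENV PATH="/venv/bin:$PATH" PORT=8080
CMD ["gunicorn", "config.wsgi:application", "--bind", ":8080"]
```

The build stage carries compilers and build caches; the runtime stage carries neither, which is where most of the size reduction comes from.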

GitHub Actions pipeline with migration safety

On push to main, the pipeline:

  • runs the Django test suite (89 tests, 43 seconds)
  • builds the image and pushes it to Artifact Registry, tagged with the commit SHA
  • runs database migrations as a separate Cloud Run Job (not the application container)
  • deploys the new image to Cloud Run only after migrations succeed

GitHub Actions · Cloud Run Jobs · GCP Artifact Registry
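A sketch of such a workflow — service names, regions, and the $PROJECT placeholder are illustrative, and Cloud Build is one of several ways to build the image:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: {python-version: "3.11"}
      - run: pip install -r requirements.txt
      - run: python manage.py test      # gate everything on the test suite

      - uses: google-github-actions/auth@v2
        with: {credentials_json: "${{ secrets.GCP_SA_KEY }}"}

      # Build and push, tagged with the commit SHA
      - run: |
          gcloud builds submit \
            --tag europe-docker.pkg.dev/$PROJECT/app/web:${{ github.sha }}

      # Migrations run as a Cloud Run Job, separate from the app container
      - run: |
          gcloud run jobs update migrate \
            --image europe-docker.pkg.dev/$PROJECT/app/web:${{ github.sha }} \
            --command python --args manage.py,migrate,--noinput
          gcloud run jobs execute migrate --wait

      # Deploy only after migrations succeed
      - run: |
          gcloud run deploy web \
            --image europe-docker.pkg.dev/$PROJECT/app/web:${{ github.sha }} \
            --region europe-west1
```

`gcloud run jobs execute --wait` blocks until the migration job finishes, so a failed migration fails the workflow before the deploy step ever runs.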

Cloud Run zero-downtime deployment

Cloud Run handles rolling deploys natively: a new revision receives 0% of traffic until health checks pass, then traffic shifts over gradually. We configured a custom /health endpoint that checks DB connectivity and returns 200 only when the service is fully ready. Failed deploys roll back automatically within 90 seconds.

Google Cloud Run · Cloud SQL PostgreSQL · Cloud Load Balancing
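The /health endpoint amounts to a readiness probe: answer 200 only if the database answers. A framework-agnostic sketch of the logic — in a Django view the `check_db` callable would be something like a `SELECT 1` against `django.db.connections["default"]`:

```python
def health(check_db) -> tuple[int, dict]:
    """Readiness probe: 200 when the DB answers, 503 otherwise.

    check_db is any zero-argument callable that raises on failure,
    e.g. a cursor executing "SELECT 1" in a Django view (illustrative).
    """
    try:
        check_db()
    except Exception as exc:
        # A non-200 here keeps Cloud Run traffic on the previous revision
        return 503, {"status": "unhealthy", "error": str(exc)}
    return 200, {"status": "ok"}
```

Checking real dependencies (rather than returning a static 200) is what makes the automatic rollback meaningful: a revision that boots but cannot reach the database never receives traffic.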

Secrets migration from .env files

The production .env file had been emailed between founders and lived in a shared Dropbox folder. We moved all 14 secrets to Google Secret Manager and configured Cloud Run to inject them at runtime. The .env file was deleted. The Dropbox folder was deleted. Rotation was set to quarterly.

Google Secret Manager · Cloud Run · IAM
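The migration is mostly a few gcloud commands per secret. A sketch, with secret, service, and service-account names all illustrative:

```shell
# Create the secret and load the current value (one-off, per secret)
gcloud secrets create DJANGO_SECRET_KEY --replication-policy=automatic
printf '%s' "$VALUE" | gcloud secrets versions add DJANGO_SECRET_KEY --data-file=-

# Grant the Cloud Run runtime service account read access
gcloud secrets add-iam-policy-binding DJANGO_SECRET_KEY \
  --member="serviceAccount:web-runtime@PROJECT.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Inject at runtime as an environment variable on the service
gcloud run services update web \
  --set-secrets=DJANGO_SECRET_KEY=DJANGO_SECRET_KEY:latest
```

Because Cloud Run resolves the secret at deploy time, rotating a value is just adding a new version and redeploying — no file ever needs to leave Secret Manager.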

Key Takeaways

  • Cloud Run is the right default for Django/Flask/FastAPI on GCP - no Kubernetes overhead, native zero-downtime, pay-per-request pricing
  • Running migrations as a separate Cloud Run Job before the application deploy is the correct pattern - application startup should never run migrations
  • A secrets file shared in Dropbox is the most common credential exposure vector in early-stage startups - Secret Manager migration is a one-day fix
  • Removing the single-person dependency on deployment is the highest-value organisational change in this engagement

Facing Similar Challenges?

Book a free 30-minute audit and I will tell you what I see.
