
RKSSH LLP · Client Work

Results We've Delivered

Real projects, accurate metrics. Client names changed to protect confidentiality.

26 Projects · 18 Industries

SaaS · 2024-10

B2B SaaS: Self-Service Developer Platform Reduces Platform Team Load 80%

A 120-engineer B2B SaaS company had a 4-person platform team fielding 60+ Slack requests per week from product engineers for environment provisioning, secrets access, and deployment debugging. We built an internal developer platform (IDP) that eliminated 80% of those requests.

New service setup: 2–3 platform team hours → 8 minutes, self-service
Staging environment: 3-day wait → on-demand in 7 minutes
80% reduction in platform team interruptions
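For illustration, the golden-path pattern at the heart of an IDP can be sketched as a template renderer: an engineer requests a service by name and the platform emits the Kubernetes objects the platform team used to hand-craft. All names, labels, and defaults below are hypothetical, not the client's actual setup.

```python
import re

def render_service(name: str, team: str, replicas: int = 2) -> dict:
    """Render a Deployment spec from a hypothetical golden-path template."""
    # Validate the requested name against DNS-label-style rules.
    if not re.fullmatch(r"[a-z][a-z0-9-]*", name):
        raise ValueError(f"invalid service name: {name!r}")
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {
            "name": name,
            "namespace": f"team-{team}",  # one namespace per team (assumption)
            "labels": {"app": name, "team": team, "managed-by": "idp"},
        },
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
        },
    }

manifest = render_service("billing-api", team="payments")
print(manifest["metadata"]["namespace"])  # team-payments
```

The real platform wraps this kind of templating behind a CLI or portal; the point is that every request that used to be a Slack message becomes a deterministic render.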
Healthtech · 2024-08

Healthtech: Monolith to Microservices Without Stopping Delivery

A 250K-patient digital health platform was operating a 6-year-old Rails monolith. Feature velocity had dropped 70% in 18 months as the codebase grew too large for the team to modify safely. We extracted four critical services over 12 weeks while the product team continued shipping.

Deploy time: monolith deploy 22 minutes, blocks all teams → per-service deploy 4–6 minutes, independent
Deploy freq: 3/week (all teams combined, serialised) → 15+/week (teams deploy independently)
4× feature velocity restored, zero patient data incidents
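A strangler-fig extraction like this hinges on edge routing: extracted paths go to the new services, everything else stays on the monolith until it is carved away. A minimal sketch of that routing rule, with made-up path prefixes and service names:

```python
# Longest-prefix routing table: extracted paths map to new services.
# Prefixes and service names are illustrative, not the client's real routes.
ROUTES = {
    "/api/appointments": "appointments-svc",
    "/api/messaging": "messaging-svc",
}

def upstream(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return "monolith"  # default: untouched paths stay on the monolith

print(upstream("/api/appointments/42"))  # appointments-svc
print(upstream("/api/billing"))          # monolith
```

Because the default is always the monolith, extraction can proceed one route at a time while the product team keeps shipping.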
PropTech · 2026-02

PropTech Startup: CI/CD from Zero to 8-Minute Deploys on GCP

A two-year-old property management SaaS had a Django monolith deployed manually via SSH to a single GCP VM. The CTO was the only person who knew the deployment process. Any absence created a production risk. We built a full CI/CD pipeline with zero-downtime deploys in 10 days.

Deploy time: 55 minutes (manual SSH) → 8 minutes (automated)
Deploy freq: 1–2/week (gated by CTO availability) → multiple times daily (any engineer)
Deploy time: 55 min manual → 8 min automated
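Zero-downtime deploys rest on a health gate: the new release only takes traffic after passing consecutive health checks, otherwise the pipeline rolls back automatically. A sketch of that decision logic, with a hypothetical threshold:

```python
def rollout_decision(checks: list[bool], required: int = 3) -> str:
    """Return 'promote' once `required` consecutive checks pass, else 'rollback'.

    `checks` is the ordered result of post-deploy health probes; the
    threshold of 3 is illustrative, not the client's actual setting.
    """
    streak = 0
    for ok in checks:
        streak = streak + 1 if ok else 0  # a failure resets the streak
        if streak >= required:
            return "promote"
    return "rollback"

print(rollout_decision([True, True, True]))         # promote
print(rollout_decision([True, False, True, True]))  # rollback
```

In the pipeline itself this runs as a post-deploy step, so no human (CTO included) has to be in the loop.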
HR Tech · 2026-01

HR Tech Platform: Jenkins Migration, 3.5-Hour CI to 18 Minutes

A workforce management SaaS had been running Jenkins on a dedicated EC2 instance since 2019. Build times had grown to 3.5 hours for the main application. Engineers had stopped running the full suite locally. We migrated to GitHub Actions with parallelised test execution and cut CI time by 91%.

CI + deploy: 3.5 hours CI + 25 min deploy → 18 minutes CI + 7 min deploy
Deploy freq: 3–4/week (CI queue backed up) → 10+/week
CI time: 3.5 hours → 18 minutes
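The core of a parallelised test migration is sharding: split the suite across workers by historical runtime so wall-clock time approaches total time divided by worker count. A sketch of the greedy longest-first assignment, with invented timings:

```python
def shard_tests(timings: dict[str, float], workers: int) -> list[list[str]]:
    """Assign each test to the currently least-loaded shard, longest first.

    `timings` maps test name to historical duration in seconds; the
    example values below are made up for illustration.
    """
    shards: list[list[str]] = [[] for _ in range(workers)]
    load = [0.0] * workers
    for test, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
        i = load.index(min(load))  # pick the least-loaded shard
        shards[i].append(test)
        load[i] += secs
    return shards

timings = {"test_auth": 300, "test_api": 240, "test_ui": 180, "test_db": 120}
print(shard_tests(timings, workers=2))
# [['test_auth', 'test_db'], ['test_api', 'test_ui']]
```

In GitHub Actions this maps naturally onto a matrix job, with each matrix entry running one shard.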
Web3 · 2025-12

Web3 Protocol: 99.95% Uptime Infrastructure for a DeFi API

A DeFi data aggregator serving 400 trading firms through a REST and WebSocket API had experienced three outages in four months, each lasting 40–90 minutes. Their infrastructure was two EC2 instances behind a load balancer with no auto-recovery. We rebuilt it on EKS with multi-AZ deployment, circuit breakers, and automated failover.

Deploy time: manual, ~45 minutes, risky → blue-green, 6 minutes, instant rollback
Deploy freq: weekly (fear of deploying) → daily
Zero outages in 6 months post-migration
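The circuit breakers mentioned above follow a standard pattern: after a run of consecutive failures the circuit opens and callers fail fast instead of piling onto a struggling upstream. A minimal sketch (half-open probing and timeouts omitted for brevity; the threshold is illustrative):

```python
class CircuitBreaker:
    """Fail fast after `threshold` consecutive failures."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0  # any success resets the count
        return result

cb = CircuitBreaker(threshold=2)
for _ in range(2):
    try:
        cb.call(lambda: 1 / 0)  # simulate a failing upstream
    except ZeroDivisionError:
        pass
print(cb.open)  # True
```

In production this sits in front of each downstream dependency, so one degraded service cannot cascade into a full outage.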
LegalTech · 2025-11

LegalTech SaaS: ISO 27001 Certification for a Contract Intelligence Platform

A contract intelligence platform processing confidential legal documents for law firms had lost two enterprise deals because they could not demonstrate ISO 27001 certification. Certification was achievable (their AWS setup was reasonably secure), but the ISMS documentation, risk register, and control evidence were missing entirely. We got them certified in 7 months.

Outcome: ISO 27001 certified, two enterprise deals closed ($1.1M ARR unblocked)
MarTech · 2025-10

MarTech Platform: Docker Compose in Production to EKS in 4 Weeks

A marketing automation platform with 3,000 B2B customers was running its entire production stack on docker-compose on two EC2 instances. As they signed larger enterprise customers, the architecture became the primary obstacle to uptime guarantees and SOC2 discussions. We migrated to EKS in 4 weeks.

Deploy time: docker-compose up on EC2 (8 min, manual) → GitHub Actions + ArgoCD (5 min, automated)
Deploy freq: 2–3/week (manual, scary) → daily
Production on Kubernetes, 99.9% uptime SLA now achievable
AI / ML · 2025-09

AI SaaS: GPU Inference Infrastructure for a Document Processing Platform

A document intelligence startup was serving OCR and NLP models from a single A100 instance via a Flask app in a tmux session. P99 latency was 8.4 seconds, the instance cost $14,000/month, and a single bad request could crash the model server. We rebuilt the inference layer on EKS with vLLM, request batching, and autoscaling, cutting latency by 78% and cost by 60%.

Deploy time: manual restart in tmux (8 min, risky) → rolling deploy with warmup (6 min, automated)
Deploy freq: deploys avoided (fear of downtime) → weekly model updates
P99 latency: 8.4s → 1.8s; cost: $14K → $5.6K/month
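Request batching is what turns a per-request GPU call into one forward pass per batch. A sketch of the grouping step only (the timeout-based flush a real micro-batcher also needs is omitted; the batch size is illustrative):

```python
def batched(requests: list[str], max_batch: int = 8) -> list[list[str]]:
    """Split a request stream into GPU-sized batches of at most `max_batch`."""
    return [requests[i:i + max_batch] for i in range(0, len(requests), max_batch)]

# 19 queued documents with a hypothetical max batch size of 8:
queue = [f"doc-{i}" for i in range(19)]
print([len(b) for b in batched(queue, max_batch=8)])  # [8, 8, 3]
```

Serving frameworks such as vLLM do this continuously at the token level; the sketch just shows why throughput scales with batch size while per-request overhead does not.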
TravelTech · 2025-08

TravelTech Startup: Manual GCP Setup to Full Terraform IaC in 3 Weeks

A flight price prediction startup had built their GCP infrastructure entirely through the console over 18 months. Spinning up a new environment took 2 days of manual configuration. The infrastructure had never been documented. A new CTO joining from Airbnb made IaC a day-one priority. We wrote the Terraform that described what they had, then cleaned it up.

Environment setup: 2 days manual, error-prone → 47 minutes, reproducible
Full IaC coverage, new environment in 47 minutes
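The "write Terraform that describes what exists" step can be bootstrapped by generating `terraform import` commands from an inventory of console-created resources. A sketch with hypothetical resource addresses and IDs, not the client's real infrastructure:

```python
# Hypothetical inventory: (Terraform resource address, cloud resource ID).
INVENTORY = [
    ("google_compute_instance.api",
     "projects/demo/zones/europe-west1-b/instances/api-1"),
    ("google_sql_database_instance.main",
     "demo:europe-west1:main-db"),
]

def import_commands(inventory) -> list[str]:
    """Emit one `terraform import` command per inventoried resource."""
    return [f"terraform import {addr} {res_id}" for addr, res_id in inventory]

for cmd in import_commands(INVENTORY):
    print(cmd)
```

Once every resource is imported, `terraform plan` showing no changes becomes the proof that the code matches reality, and refactoring can start from there.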
SaaS · 2025-06

Cybersecurity SaaS: Zero Trust Network Architecture on EKS

A threat intelligence platform needed to meet enterprise customer requirements for network segmentation, mTLS between all services, and zero standing access to production systems. Their flat EKS network allowed any pod to talk to any other pod. We implemented a full Zero Trust architecture using Istio and AWS IAM roles for service accounts in 5 weeks.

Outcome: two enterprise contracts unblocked ($800K ARR)
Full mTLS, zero standing access, enterprise security review passed
ClimateTech · 2025-05

ClimateTech: Real-Time Carbon Data Pipeline on GCP Handling 40M Events/Day

A carbon accounting platform was ingesting IoT sensor data from 12,000 industrial facilities via batch CSV uploads processed nightly. Data latency meant enterprise customers could not act on emissions data until 24 hours after the fact. We redesigned the pipeline as a real-time streaming architecture on GCP, reducing data latency from 24 hours to under 90 seconds.

Outcome: three enterprise renewals secured ($1.8M ARR protected)
Data latency: 24 hours → 90 seconds; 40M events/day
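The move from nightly batch to streaming boils down to windowed aggregation: events are bucketed into fixed windows as they arrive instead of being summed once a day. A sketch using a 90-second tumbling window (matching the latency target above); the event shape and values are made up:

```python
from collections import defaultdict

def windowed_totals(events, window_s: int = 90):
    """Sum emissions per (facility, tumbling window).

    `events` is an iterable of (facility_id, timestamp_s, co2_kg) tuples;
    each event lands in the window containing its timestamp.
    """
    totals = defaultdict(float)
    for facility, ts, kg in events:
        window_start = (ts // window_s) * window_s  # floor to window boundary
        totals[(facility, window_start)] += kg
    return dict(totals)

events = [("plant-a", 10, 1.5), ("plant-a", 80, 2.0), ("plant-a", 95, 0.5)]
print(windowed_totals(events))
# {('plant-a', 0): 3.5, ('plant-a', 90): 0.5}
```

On GCP the same logic typically runs as a Dataflow (Apache Beam) pipeline fed by Pub/Sub, which adds the watermarking and late-data handling this sketch leaves out.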