Sample report — redacted

What a free audit report looks like

This is a representative example of a findings report. Company name, team size, and identifiable details have been changed. The finding format, severity ratings, effort estimates, and recommendation depth are exactly what you receive.

Company: [redacted]
Team size: ~20 engineers
Cloud: AWS
Findings: 6 (1 critical, 2 high, 2 medium, 1 low)
Critical · F-001 · Security

Secrets committed to repository history

AWS access keys found in 3 commits on the main branch (commits abc123, def456, ghi789). The keys appear to belong to a development IAM user. Even if the keys are rotated, they remain recoverable from history, and the pattern indicates credentials may be handled informally across the team.

Recommendation (effort: 2–4 hours)

Rotate the affected credentials immediately. Run git-secrets or truffleHog against full history to find any others. Add pre-commit hooks to block future commits containing credential patterns. Consider moving to IRSA (IAM Roles for Service Accounts) to eliminate long-lived credentials entirely.
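As an illustration, a pre-commit hook along these lines blocks the most common case. This is a minimal sketch: the regex only matches AWS access key IDs (`AKIA` plus 16 uppercase letters or digits); git-secrets ships a much fuller ruleset.

```shell
#!/bin/sh
# .git/hooks/pre-commit — reject commits whose staged diff contains a
# string shaped like an AWS access key ID. Minimal sketch; prefer
# git-secrets or trufflehog for production use.
if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}'; then
  echo "pre-commit: possible AWS access key in staged changes; aborting." >&2
  exit 1   # non-zero exit aborts the commit
fi
```

Pair this with a one-off history scan (`trufflehog git file://.` or `git secrets --scan-history`) so keys already in old commits are found and rotated too.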

High · F-002 · Kubernetes

No resource limits set on any Kubernetes workloads

Reviewed 14 Deployments across 3 namespaces. None have resource requests or limits defined. In a shared cluster, a single misbehaving pod can exhaust node memory and cause cascading evictions across unrelated services. We observed this exact pattern in the staging environment during the audit call.

Recommendation (effort: 4–8 hours)

Set resource requests and limits on all Deployments. Start with p95 observed CPU and memory from the last 30 days of Prometheus metrics. Apply LimitRange objects per namespace to prevent future Deployments from skipping this. Enable VPA (Vertical Pod Autoscaler) in recommendation mode to get ongoing suggestions.
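A per-namespace LimitRange could look roughly like this. The namespace name and all the figures below are placeholders; substitute the p95 values from your own metrics.

```yaml
# Hypothetical values — replace with p95 figures from your Prometheus data.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
  namespace: staging          # apply one per namespace
spec:
  limits:
    - type: Container
      defaultRequest:         # used when a container sets no requests
        cpu: 100m
        memory: 256Mi
      default:                # used when a container sets no limits
        cpu: 500m
        memory: 512Mi
```

Defaults from a LimitRange only apply to containers that specify nothing; explicit per-Deployment requests and limits still take precedence, so tuning individual workloads remains possible.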

High · F-003 · CI/CD

Database migrations run manually by an engineer on every deploy

Current deploy runbook requires an engineer to SSH into the bastion, connect to RDS, and run migration scripts manually before starting the new application version. This introduces a 15–20 minute manual step on every deploy, creates a window where the old application is running against a partially migrated schema, and has resulted in 2 rollback incidents in the last 6 months.

Recommendation (effort: 1–2 days)

Run migrations as a pre-deploy job in the pipeline. Use a Kubernetes Job (or ECS Task) that runs the migration tool, waits for success, and only then triggers the rolling deployment. Implement backward-compatible migration patterns (expand-contract) so rollbacks do not require reverse migrations.
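As a sketch, the pre-deploy step can be a Kubernetes Job like the one below. The image, command, and secret names are placeholders for whatever your stack uses.

```yaml
# Hypothetical names — adjust image, command, and secret to your stack.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 0              # fail fast; let the pipeline decide on retry
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app:COMMIT_SHA   # same image as the deploy
          command: ["./migrate", "up"]                 # placeholder migration tool
          envFrom:
            - secretRef:
                name: app-db-credentials               # placeholder secret
```

The pipeline then gates the rollout on `kubectl wait --for=condition=complete job/db-migrate --timeout=300s` before applying the new Deployment, which removes both the manual SSH step and the partially-migrated window.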

Medium · F-004 · Monitoring

CloudWatch alarms exist but route to an unmonitored SNS topic

14 CloudWatch alarms are configured for RDS, ALB, and ECS. All route to an SNS topic that was created 18 months ago. The topic has no active subscriptions; the engineer whose email address was originally subscribed has since left the company. In effect, no alerts are reaching anyone.

Recommendation (effort: 2–3 hours)

Add an active subscription to the SNS topic (PagerDuty endpoint or current on-call email). Audit all existing alarms for relevance — several reference metrics for services that no longer exist. Consider migrating to Grafana Alerting for more flexible routing and alert silencing.
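Re-attaching a subscriber is a single CLI call; the topic ARN and endpoint below are placeholders for your own values.

```shell
# Placeholder ARN and endpoint — substitute your own, then confirm the
# subscription from the receiving inbox or PagerDuty integration.
aws sns subscribe \
  --topic-arn arn:aws:sns:eu-west-1:123456789012:ops-alarms \
  --protocol email \
  --notification-endpoint oncall@example.com

# Verify the topic now has a confirmed subscription.
aws sns list-subscriptions-by-topic \
  --topic-arn arn:aws:sns:eu-west-1:123456789012:ops-alarms
```

`aws cloudwatch describe-alarms` is a quick way to enumerate the 14 alarms while auditing which ones still reference live services.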

Medium · F-005 · CI/CD

ECS task definitions use :latest tags for container images

All 6 ECS task definitions reference Docker images tagged :latest. This means a failed push that corrupts the latest image will cause the next deploy to pull a broken image. It also makes it impossible to know which code version is running in production without cross-referencing the ECR push timestamp.

Recommendation (effort: 4–6 hours)

Tag images with the git commit SHA in the CI pipeline and reference that specific tag in the ECS task definition. This makes rollbacks a single parameter change and makes the running version auditable without access to ECR.
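The tagging step in CI can look roughly like this; the registry URL is a placeholder, and the build/push commands are shown as comments since they need Docker and ECR credentials.

```shell
# Placeholder registry — substitute your ECR repository URL.
REGISTRY="123456789012.dkr.ecr.eu-west-1.amazonaws.com/app"

# Use the current commit SHA as an immutable, auditable tag
# (falls back to a stub when run outside a git repository).
TAG="$(git rev-parse --short=12 HEAD 2>/dev/null || echo dev-local)"
IMAGE="${REGISTRY}:${TAG}"

# In the pipeline:
#   docker build -t "${IMAGE}" .
#   docker push "${IMAGE}"
# then render the ECS task definition against ${IMAGE}, not :latest.
echo "image: ${IMAGE}"
```

A rollback then becomes re-registering the task definition revision that points at the previous SHA, with no guesswork about what that image contains.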

Low · F-006 · CI/CD

No staging environment — developers test against production data

There is no staging environment. Feature branches are deployed directly to production for final testing. This was mentioned as a known problem. The risk is that every bug caught in final testing is caught in front of real users.

Recommendation (effort: 2–5 days)

Provision a staging environment using Terraform workspaces or separate tfvars. Use RDS read replicas or anonymised database snapshots for realistic data without exposing PII. Connect it to a staging channel in Slack for deploy notifications.
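With the workspace approach, standing up staging from the existing configuration is roughly the following; `staging.tfvars` is a hypothetical per-environment variable file holding smaller instance sizes, the staging domain, and so on.

```shell
# Assumes the production Terraform configuration lives in this directory;
# staging.tfvars is a hypothetical per-environment variable file.
terraform workspace new staging      # one-time: create the workspace
terraform workspace select staging
terraform plan -var-file=staging.tfvars -out=staging.plan
terraform apply staging.plan
```

Each workspace keeps its own state file, so staging resources can be created and destroyed without touching production state.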

Get your own findings report

30 minutes. We review your actual setup and deliver a report like this within 24 hours.

Book the free audit