Kubernetes Secrets are not secrets. They are base64-encoded strings stored in etcd. Anyone with cluster read access can read them, and anyone who commits `kubectl get secret -o yaml` output to a repository has just leaked credentials.
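To see why base64 offers no protection, decode one. The secret value below is illustrative, and the `kubectl` command in the comment assumes a secret named `db-creds`:

```shell
# base64 is an encoding, not encryption - decoding needs no key.
# In a real cluster: kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d
echo 'c3VwZXItc2VjcmV0LXBhc3N3b3Jk' | base64 -d   # prints: super-secret-password
```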
This is the most common security issue we find in startup infrastructure audits. Here is how to fix it.
## What You Should Be Doing Instead
The pattern used by serious engineering teams:
- Secrets live in an external secret store - Vault, AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault
- A Kubernetes operator syncs them into native K8s Secrets at runtime
- Applications read from Kubernetes Secrets (no code changes required)
- Rotation happens in the external store - Kubernetes Secrets update automatically
This gives you: encryption at rest, access audit logs, automatic rotation, and a single source of truth across services.
## Option 1: HashiCorp Vault

**Best for:** Teams with compliance requirements (SOC2, ISO 27001, HIPAA), multi-cloud setups, or a need for dynamic secrets.
Vault is the most powerful option. Its killer feature is dynamic secrets - rather than storing a database password, Vault generates a temporary credential on demand with a short TTL. When the TTL expires, the credential is revoked. Even if someone captures it, it stops working in minutes.
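A minimal sketch of dynamic secrets using Vault's database secrets engine. The connection URL, admin credentials, role name, and SQL are illustrative, and the commands assume a running Vault with a reachable Postgres:

```bash
# Enable the database secrets engine and point it at Postgres
vault secrets enable database
vault write database/config/app-postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db:5432/app" \
    allowed_roles="readonly" \
    username="vault-admin" \
    password="vault-admin-password"

# Define how Vault mints short-lived credentials
vault write database/roles/readonly \
    db_name=app-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=15m \
    max_ttl=1h

# Each read returns a fresh username/password that Vault revokes at TTL expiry
vault read database/creds/readonly
```

Every credential handed out this way is unique per consumer, which also makes leaked credentials traceable to the pod that requested them.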
### Basic Vault + Kubernetes setup
```bash
# Install Vault via Helm
helm repo add hashicorp https://helm.releases.hashicorp.com
helm upgrade --install vault hashicorp/vault \
  --namespace vault \
  --create-namespace \
  --set server.ha.enabled=true \
  --set server.ha.replicas=3
```
Enable Kubernetes auth so pods can authenticate:
```bash
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"

# Policy for your API service
vault policy write api-policy - <<EOF
path "secret/data/api/*" {
  capabilities = ["read"]
}
EOF

# Role binding: pods in the production namespace with the api-sa service account get api-policy
vault write auth/kubernetes/role/api \
    bound_service_account_names=api-sa \
    bound_service_account_namespaces=production \
    policies=api-policy \
    ttl=1h
```
Store a secret:
```bash
vault kv put secret/api/database \
    url="postgres://user:pass@db:5432/app" \
    password="your-db-password"
```
Use the Vault Agent Sidecar to inject secrets into pods without any application changes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "api"
        vault.hashicorp.com/agent-inject-secret-database: "secret/data/api/database"
        vault.hashicorp.com/agent-inject-template-database: |
          {{- with secret "secret/data/api/database" -}}
          DATABASE_URL={{ .Data.data.url }}
          {{- end }}
    spec:
      serviceAccountName: api-sa
      containers:
        - name: api
          image: api:latest
          # The agent sidecar renders the template to /vault/secrets/database;
          # the application reads or sources that file at startup.
```
**Vault pros:** Audit logs on every secret access, dynamic secrets, fine-grained policies, works across clouds.

**Vault cons:** Operational complexity - you are now running a critical-path HA service, and Vault must be available for pods to start.
## Option 2: External Secrets Operator (ESO)

**Best for:** Teams already using AWS, GCP, or Azure secret stores, or teams that want to avoid running Vault.
ESO is a Kubernetes operator that syncs secrets from external providers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, Vault, and many others) into Kubernetes Secrets. Your application reads a normal K8s Secret - ESO handles the sync.
```bash
helm repo add external-secrets https://charts.external-secrets.io
helm upgrade --install external-secrets external-secrets/external-secrets \
  --namespace external-secrets \
  --create-namespace
```
Connect it to AWS Secrets Manager:
```yaml
# ClusterSecretStore - cluster-wide connection to AWS
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
```
Create an ExternalSecret that syncs into a K8s Secret:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: api-database
  namespace: production
spec:
  refreshInterval: 1h            # how often to sync
  secretStoreRef:
    name: aws-secrets
    kind: ClusterSecretStore
  target:
    name: api-database           # name of the K8s Secret to create
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL    # key in the K8s Secret
      remoteRef:
        key: production/api/database  # path in AWS Secrets Manager
        property: url
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: production/api/database
        property: password
```
Your deployment reads the K8s Secret normally:
```yaml
containers:
  - name: api
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: api-database
            key: DATABASE_URL
```
**ESO pros:** Simple, uses your existing cloud secret store, no new critical infrastructure to run.

**ESO cons:** Secrets still end up as K8s Secrets (and therefore in etcd). Mitigate by enabling etcd encryption at rest.
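On self-managed clusters, that mitigation is an `EncryptionConfiguration` file passed to kube-apiserver via `--encryption-provider-config`. A minimal sketch - the key name is illustrative, and the placeholder must be replaced with your own randomly generated key:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts new/updated Secrets with this key
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # e.g. head -c 32 /dev/urandom | base64
      # identity allows reading Secrets written before encryption was enabled
      - identity: {}
```

On managed clusters you typically cannot touch the apiserver flags directly; the EKS/KMS approach shown later in this post is the managed equivalent.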
## Option 3: Infisical

**Best for:** Small teams who want a great developer experience and don't have time to run Vault.
Infisical is a newer open-source secrets manager with a polished UI, CLI, and native Kubernetes integration. It works like ESO but with its own hosted (or self-hosted) backend.
```bash
helm repo add infisical-helm-charts https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/
helm upgrade --install infisical-agent infisical-helm-charts/infisical-agent \
  --namespace infisical \
  --create-namespace
```
Create a secret sync:
```yaml
apiVersion: secrets.infisical.com/v1alpha1
kind: InfisicalSecret
metadata:
  name: api-secrets
  namespace: production
spec:
  hostAPI: https://app.infisical.com/api
  resyncInterval: 60
  authentication:
    universalAuth:
      credentialsRef:
        name: infisical-agent-credentials
        namespace: production
  managedSecretReference:
    secretName: api-secrets
    secretNamespace: production
  infisicalProject:
    slug: your-project
  infisicalEnvironment: production
```
**Infisical pros:** Best developer experience - UI, CLI, per-developer environments, secret versioning, access requests. Quick to set up.

**Infisical cons:** Newer, smaller community. The self-hosted version requires more ops effort to maintain.
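The developer-experience point is easiest to see in the CLI: `infisical run` fetches a project's secrets and injects them into a child process's environment, so nothing lands in a `.env` file. A sketch, assuming you have already run `infisical login` and initialised the project locally; the wrapped command is illustrative:

```bash
# Inject production secrets as environment variables for one process
infisical run --env=production -- node server.js
```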
## The One Thing You Must Do Right Now
If you are doing none of the above, at minimum: enable etcd encryption at rest. This encrypts K8s Secrets in etcd storage. On EKS:
```hcl
# In your EKS Terraform config
resource "aws_eks_cluster" "main" {
  encryption_config {
    provider {
      key_arn = aws_kms_key.eks.arn
    }
    resources = ["secrets"]
  }
}
```
And stop putting raw secrets in your CI/CD environment variables. Use GitHub Secrets (encrypted at rest, masked in logs) as a minimum.
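As a sketch of the minimum CI/CD pattern - the workflow name, script path, and secret name here are illustrative - the point is that the value lives in the repository's encrypted secret store and GitHub masks it in job logs:

```yaml
# .github/workflows/deploy.yml (illustrative)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run migrations
        env:
          # Injected from the repo's encrypted Actions secrets, never committed
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: ./scripts/migrate.sh
```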
## Decision Framework
| Team size | Compliance requirement | Recommendation |
|---|---|---|
| <10 engineers | None | Infisical (hosted) + ESO |
| 10–50 engineers | SOC2 / HIPAA | ESO + AWS Secrets Manager |
| 50+ engineers | ISO 27001 / SOC2 Type II | HashiCorp Vault HA |
| Any size | Air-gapped / regulated | HashiCorp Vault on-prem |
The worst approach in 2026: hardcoded secrets in environment variables, `.env` files committed to repos, or K8s Secrets with etcd encryption disabled. All three are automatic findings in SOC2 and ISO 27001 audits.
Found secrets in your .env files or K8s configs? Book a free audit - we will assess your secrets posture and give you a prioritised fix plan.