CI/CD · March 1, 2026 · 6 min read

How to Set Up a CI/CD Pipeline for Your Startup in 2026 (Step-by-Step)

A practical guide to CI/CD pipeline setup for startups. Which tools to pick, how to structure your pipeline, what a working GitHub Actions config looks like, and what to avoid.

Your startup is past the prototype stage. You have a real product, real users, and a small but growing engineering team. And deployments still look like this: someone SSHes into a server, runs a script, crosses their fingers, and watches logs until they are satisfied nothing broke.

That process does not scale. Here is how to replace it.

What You Actually Need from a CI/CD Pipeline

Before picking tools, get clear on what you need the pipeline to do:

  1. Run your tests on every pull request - no PR merges unless tests pass
  2. Build a Docker image - your application should be containerized
  3. Push the image to a registry - AWS ECR, Google Artifact Registry, or Docker Hub
  4. Deploy to staging automatically - every merge to main triggers a staging deploy
  5. Deploy to production on demand or automatically - depending on your risk tolerance

That is the minimum viable pipeline. Everything else is optimization.
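
In GitHub Actions terms (one option among the CI tools discussed below), those five steps map onto a workflow skeleton roughly like this. Job names and the `./scripts/*.sh` commands are placeholders, not a real implementation:

```yaml
# Skeleton only: script paths and job names are placeholders
name: pipeline
on:
  pull_request:           # step 1: tests gate every PR
  push:
    branches: [main]      # steps 2-4: build, push, deploy on merge
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test.sh            # placeholder test command
  build-and-deploy-staging:
    needs: test                           # never build broken code
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build_and_push.sh  # placeholder: docker build + push
      - run: ./scripts/deploy_staging.sh  # placeholder: staging deploy
  deploy-production:
    needs: build-and-deploy-staging
    if: github.ref == 'refs/heads/main'
    environment: production               # step 5: optional manual approval gate
    runs-on: ubuntu-latest
    steps:
      - run: ./scripts/deploy_prod.sh     # placeholder: production deploy
```

The `environment: production` line is what gives you the "on demand" option: configure a required reviewer on that environment and production deploys wait for a click.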

Choosing Your CI Tool

In 2026, the right answer for most startups is GitHub Actions - unless you are already on GitLab, in which case use GitLab CI.

Here is why GitHub Actions wins for greenfield startups:

  • It is where your code already lives
  • The YAML syntax is readable and well-documented
  • The marketplace has pre-built actions for almost everything
  • Free tier covers most small teams

When to consider alternatives:

  • GitLab CI - if you are already on GitLab or need built-in container registry
  • CircleCI - if you need more granular build caching or have complex parallelism needs
  • Buildkite - if you have compliance requirements that prevent using cloud CI

Do not migrate your code host just to use a different CI tool. It is not worth it.

A Real Pipeline, Not a Tutorial Toy

Here is a GitHub Actions pipeline that actually works in production:

```yaml
name: Deploy

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.us-east-1.amazonaws.com
  IMAGE_NAME: myapp

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Run lint
        run: npm run lint

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push Docker image
        env:
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $REGISTRY/$IMAGE_NAME:$IMAGE_TAG .
          docker tag $REGISTRY/$IMAGE_NAME:$IMAGE_TAG $REGISTRY/$IMAGE_NAME:latest
          docker push $REGISTRY/$IMAGE_NAME:$IMAGE_TAG
          docker push $REGISTRY/$IMAGE_NAME:latest
      - name: Deploy to staging
        run: |
          # ArgoCD or kubectl apply here
          kubectl set image deployment/myapp \
            myapp=$REGISTRY/$IMAGE_NAME:$IMAGE_TAG \
            --namespace=staging
```

A few things this pipeline does correctly that most tutorials miss:

It separates test and build into two jobs. If tests fail, the build job never runs. You are not building Docker images for broken code.

It uses `npm ci`, not `npm install`. `npm ci` is deterministic and faster in CI environments: it installs exactly what the lockfile specifies and fails if the lockfile is out of sync with `package.json`.

It tags images with both `${{ github.sha }}` and `latest`. The SHA tag gives you a precise audit trail of which commit is running in production. The `latest` tag is useful for debugging.

The Environment You Cannot Skip: Staging

Every startup we audit that does not have a staging environment has the same problem: they test in production.

Staging does not need to be fancy. It needs to be:

  • An environment that runs the same Docker image as production
  • A database with representative data (anonymized production data is ideal)
  • Automatically updated when code merges to main
  • Accessible to engineers for testing before they push to production

If staging costs money you do not have, run it on smaller instances or only keep it running during business hours.
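
One way to implement the business-hours option is a scheduled workflow that scales staging down in the evening and back up in the morning. A sketch, assuming `kubectl` is configured with staging cluster credentials; the cron times, namespace, and replica counts are placeholders:

```yaml
# Scales staging to zero outside business hours (times are UTC; adjust to your team)
name: staging-office-hours
on:
  schedule:
    - cron: "0 19 * * 1-5"   # 19:00 UTC weekdays: scale down
    - cron: "0 7 * * 1-5"    # 07:00 UTC weekdays: scale up
jobs:
  scale:
    runs-on: ubuntu-latest
    steps:
      - name: Scale staging deployment
        run: |
          # Pick replica count based on which schedule fired
          if [ "$(date -u +%H)" -ge 19 ]; then REPLICAS=0; else REPLICAS=2; fi
          kubectl scale deployment/myapp --replicas=$REPLICAS --namespace=staging
```

Scheduled workflows only run on the default branch, so this belongs in `main`.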

Secrets Management

The most common mistake in startup pipelines: hardcoded credentials.

Do not put secrets in your code. Do not put secrets in your Dockerfile. Do not put secrets in your repository at all.

In GitHub Actions, store credentials as repository secrets (Settings → Secrets and variables → Actions). In your application at runtime, use AWS Secrets Manager, GCP Secret Manager, or HashiCorp Vault, depending on your cloud.

The pipeline should inject secrets at deploy time, not build time. Your Docker image should not contain any credentials.
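
As a sketch of deploy-time injection: the deploy step reads a GitHub Secret and writes it into a Kubernetes Secret, so the credential reaches the cluster without ever touching the image. The secret and deployment names here are hypothetical:

```yaml
- name: Inject runtime secrets at deploy time
  run: |
    # Credential lives in GitHub Secrets, lands in the cluster, never in the image.
    # The dry-run | apply pattern makes this idempotent across deploys.
    kubectl create secret generic myapp-secrets \
      --from-literal=DATABASE_URL="${{ secrets.DATABASE_URL }}" \
      --namespace=staging \
      --dry-run=client -o yaml | kubectl apply -f -
```

The application then reads `DATABASE_URL` from its environment via the Deployment's `envFrom: secretRef`, and the Docker image stays generic across environments.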

How Long Should a Pipeline Take?

A well-optimized pipeline for a typical Node.js or Python application should complete in:

  • Tests: 1–3 minutes
  • Docker build: 2–4 minutes (with layer caching)
  • Deploy to staging: 1–2 minutes
  • Total: 5–8 minutes

If your pipeline takes 20+ minutes, you have one of these problems:

  1. No dependency caching (the most common issue)
  2. Tests are slow and not parallelized
  3. Docker image is too large (you are not using multi-stage builds)
  4. You are installing dependencies inside the Docker build without caching

Each of these is fixable.
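
For the caching problems specifically, the quickest win on GitHub-hosted runners is usually `docker/build-push-action` with the GitHub Actions cache backend. A sketch, swapping in for the plain `docker build` steps; the `type=gha` cache assumes GitHub-hosted runners:

```yaml
- uses: docker/setup-buildx-action@v3
- name: Build and push with layer caching
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    cache-from: type=gha          # reuse layers from previous runs
    cache-to: type=gha,mode=max   # cache all intermediate layers, not just the final ones
```

On a typical Node.js app this turns the dependency-install layer into a cache hit on every build where the lockfile has not changed.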

What Comes After the Pipeline?

Once your pipeline is running, the next priorities in order:

  1. Rollback capability - you should be able to revert a bad deploy in under 5 minutes
  2. Deploy notifications - Slack or PagerDuty alert when a deploy starts, succeeds, or fails
  3. Canary deployments - for high-traffic services, deploy to 10% of traffic first
  4. Feature flags - decouple deploying from releasing
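
For the rollback item, if you are deploying to Kubernetes as in the pipeline above, a manually triggered workflow wrapping `kubectl rollout undo` is a minimal starting point. Deployment and namespace names are placeholders:

```yaml
name: rollback
on:
  workflow_dispatch:   # run manually from the Actions tab
jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Roll back to the previous revision
        run: |
          kubectl rollout undo deployment/myapp --namespace=production
          # Block until the rollback finishes, so the workflow status reflects reality
          kubectl rollout status deployment/myapp --namespace=production
```

This comfortably beats the 5-minute target, since it reuses the previous ReplicaSet's already-pulled image rather than rebuilding anything.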

None of those are required for your first production pipeline. Get the basics working first, then add sophistication as your team grows.

The Most Important Thing

The goal of a CI/CD pipeline is not automation for its own sake. The goal is to make your engineers more confident about deploying. When deploying is safe, fast, and automatic, teams deploy more often. When teams deploy more often, they ship smaller changes. When changes are smaller, bugs are easier to find and fix.

The pipeline is not the destination. Faster shipping velocity is.


Building your first pipeline and hitting issues? Book a free audit - we will take a look and tell you exactly what to fix.

RK
RKSSH LLP
DevOps Engineer · rkssh.com

I help funded startups fix their CI/CD pipelines and Kubernetes infrastructure. If this post was useful and you want to talk through your specific situation, book a free 30-minute audit.
