For a long time, my deployment process was "SSH into the server and git pull." It worked until it didn't. One bad deploy on a Friday afternoon, a frantic rollback, and I decided to set up a proper CI/CD pipeline. Here's what I built using GitHub Actions and what I learned along the way.

The Pipeline Structure

My pipeline has four stages: lint, test, build, and deploy. They run in that order, and each stage only runs if the previous one passes. The idea is to fail fast. If the code doesn't even pass linting, there's no point running the full test suite.

name: CI/CD Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: 'npm'
      - run: npm ci
      - run: npm run lint

  test:
    needs: lint
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_DB: test_db
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: 'npm'
      - run: npm ci
      - run: npm test
        env:
          DATABASE_URL: postgres://test:test@localhost:5432/test_db

A few things worth noting. The services block spins up a real PostgreSQL container for integration tests. No mocking the database, no SQLite substitution. Tests run against the same database engine as production. The health check ensures the database is actually ready before tests start.

Caching Saves Minutes

The cache: 'npm' option in the setup-node action caches npm's package download cache between runs. Without it, every pipeline run downloads all dependencies from scratch; with it, subsequent runs reuse cached packages and only download what changed. On a project with 500+ dependencies, this cut my install step from 45 seconds to about 8 seconds.

For larger projects, I also cache the build output when possible. If only tests changed, there's no reason to rebuild the application. But be careful with build caches. A stale cache can cause subtle bugs that only appear in CI. When in doubt, do a clean build.
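As a sketch, build-output caching can use the actions/cache action; the dist/ path and the key scheme below are assumptions about the project layout, not part of my actual pipeline:

```yaml
# Sketch: cache the build output between runs. Assumes the build writes
# to dist/ and that the lockfile plus the source tree determine the
# output -- adjust the path and key for your project.
- uses: actions/cache@v3
  with:
    path: dist
    key: build-${{ runner.os }}-${{ hashFiles('package-lock.json', 'src/**') }}
```

The key includes a hash of the inputs, so any change to the lockfile or source invalidates the cache and forces a clean build.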

The Deploy Stage

Deploy only runs on pushes to main, not on pull requests. I don't want PRs accidentally deploying anything. My deploy step builds a Docker image, pushes it to the container registry, and then triggers a deployment to the server.

  deploy:
    needs: [test]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push Docker image
        run: |
          docker build -t registry/myapp:${{ github.sha }} .
          docker tag registry/myapp:${{ github.sha }} registry/myapp:latest
          docker push registry/myapp:${{ github.sha }}
          docker push registry/myapp:latest
      - name: Deploy
        run: |
          ssh deploy@server 'cd /opt/myapp && docker compose pull && docker compose up -d'

The image is tagged with the git SHA, so every build is uniquely identified. I also tag it as latest for convenience, but the SHA tag is what makes rollbacks possible. Need to go back to the previous version? Just point the deploy at the previous SHA.
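A manual rollback can then be sketched as a separate workflow_dispatch workflow. Everything here mirrors the deploy step above; the workflow name, server address, and paths are placeholders, and it assumes the SHA-tagged image was pushed to the registry:

```yaml
# Hypothetical rollback workflow: redeploy a previously pushed SHA-tagged image.
name: Rollback
on:
  workflow_dispatch:
    inputs:
      sha:
        description: 'Git SHA of the image to roll back to'
        required: true
jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - name: Redeploy previous image
        run: |
          ssh deploy@server "cd /opt/myapp \
            && docker pull registry/myapp:${{ github.event.inputs.sha }} \
            && docker tag registry/myapp:${{ github.event.inputs.sha }} registry/myapp:latest \
            && docker compose up -d"
```

Re-pointing latest at the old SHA means the normal compose pull path picks up the rolled-back image on the server.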

Branch Protection Rules

The pipeline is only useful if it's enforced. I set up branch protection rules on main that require the lint and test jobs to pass before merging. No exceptions. Even for "quick fixes" or "just a typo." The moment you allow bypasses, the pipeline becomes optional, and optional pipelines don't get maintained.

I also require at least one code review approval. The combination of automated checks and human review catches most issues before they reach production.

Lessons Learned

Keep the pipeline fast. My target is under 5 minutes from push to deploy. If the pipeline takes 20 minutes, developers will find ways to bypass it or batch up changes into larger, riskier deployments. Speed matters for adoption.

Run the same checks locally. I have a make check command that runs linting and tests locally, identical to what CI runs. Developers should be able to catch failures before pushing, not after waiting for CI.
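A minimal sketch of that Makefile target, assuming the same npm scripts the CI jobs invoke:

```makefile
# Run the same checks CI runs, in the same order.
check:
	npm run lint
	npm test

.PHONY: check
```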

Don't put secrets in the workflow file. Use GitHub's encrypted secrets for API keys, deploy credentials, and registry passwords. I've seen workflow files with hardcoded passwords committed to public repos. Don't be that person.
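For example, registry credentials can be read from the secrets context at the point of use; the secret names below are placeholders, defined under the repository's Settings > Secrets rather than in the workflow file:

```yaml
# Reference encrypted secrets instead of hardcoding credentials.
# REGISTRY_USER and REGISTRY_PASSWORD are repository secrets; GitHub
# masks their values in logs.
- name: Log in to registry
  run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry -u "${{ secrets.REGISTRY_USER }}" --password-stdin
```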

Monitor your deploys. The pipeline doesn't end when the code hits the server. I added a final step that hits a health check endpoint and sends a Slack notification with the deploy status. If the health check fails, I know immediately instead of finding out from users.
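A sketch of that final step, with an assumed health endpoint and a Slack incoming-webhook URL stored as a secret:

```yaml
# Post-deploy verification (sketch). The health URL and SLACK_WEBHOOK_URL
# secret are assumptions -- substitute your own endpoint and webhook.
- name: Verify deploy
  run: curl --fail --retry 5 --retry-delay 10 https://myapp.example.com/healthz
- name: Notify Slack
  if: always()
  run: |
    curl -X POST -H 'Content-Type: application/json' \
      -d "{\"text\": \"Deploy ${{ github.sha }} finished: ${{ job.status }}\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"
```

The if: always() condition makes the notification fire whether the health check passed or failed, so a broken deploy still produces an alert.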

Start simple. My first pipeline was just lint and test. I added the deploy stage a week later, and caching a week after that. Don't try to build the perfect pipeline on day one. Get the basics running, then iterate.

A CI/CD pipeline is the best investment you can make in a project's long-term health. The setup cost is a few hours. The payoff is every deploy for the life of the project being safer, faster, and reproducible.