Setting Up a Postgres, Go Gin, and React Project on Kubernetes

I spent the past few days setting up a Kubernetes side project, Finance Dashboard, and this post is the curtain call for it. The core configuration lives in a single file: finance-dashboard.yaml. Most of the documentation on how to set it up is in the README of Finance Dashboard, so instead I’ll briefly cover the issues I ran into and the fixes that stuck.

Finance Dashboard (finance-dashboard)

The moving parts

  • Backend (Go/Gin) + Postgres + Frontend (nginx serving a static React build)
  • One namespace: finance
  • Frontend proxies to the backend via /api
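
To make the layout concrete, here is the kind of manifest that pins everything to the finance namespace; this is a minimal sketch rather than a copy of finance-dashboard.yaml:

```yaml
# Everything in the project lives in a single namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: finance
```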

Problems I hit (and fixes that stuck)

  • DNS flakiness and upstream resolution

    • Problem: nginx tried to resolve finance-backend before the Service had endpoints → 502s and crash loops.
    • Fix: Use a ConfigMap-mounted nginx config and avoid nginx -t at start. Added a tiny init wait where needed. Later I eliminated DNS altogether by co-locating containers.
  • “Pod can’t reach Postgres” whack‑a‑mole

    • Problem: One-directional connectivity, or pg_isready passing while the app still failed.
    • Fixes:
      • Simplified to a single Pod (finance-stack) running Postgres + backend (+ frontend). The backend connects to Postgres via 127.0.0.1, so there’s zero CNI/DNS drama (a sketch of this layout follows the list).
      • Explicit ?sslmode=disable in DATABASE_URL for local clusters.
  • Readiness/Liveness probes fighting real life

    • Problem: /health required DB; probes flipped endpoints to “not ready,” then nginx upstream broke.
    • Fix: TCP readiness on port 8080 (backend is ready once it’s listening) and longer liveness delays. The app itself handles DB retries/logging.
  • Frontend proxy rewriting

    • Problem: /api/* requests were getting mangled (trailing slash on proxy_pass).
    • Fix: proxy_pass http://finance-backend:8080; (no trailing slash), plus config delivered via ConfigMap (see the nginx sketch after this list):
      • / → static files
      • /api/* → backend
      • (Optional) /health → backend /health for quick checks
  • Seeding and demo data

    • Problem: Empty UI during demos, or duplicate categories after multiple runs.
    • Fixes:
      • Categories are unique on (name, type) with a dedupe step.
      • Added a -seed-demo flag to load realistic transactions/budgets only if none exist.
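
Here’s a rough sketch of what the co-located finance-stack Deployment ends up looking like. Image names, credentials, and probe timings are illustrative assumptions, not copied from finance-dashboard.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: finance-stack
  namespace: finance
spec:
  replicas: 1
  selector:
    matchLabels:
      app: finance-stack
  template:
    metadata:
      labels:
        app: finance-stack
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_DB
              value: finance
            - name: POSTGRES_USER
              value: finance
            - name: POSTGRES_PASSWORD
              value: finance           # demo-only; use a Secret on a real cluster
          ports:
            - containerPort: 5432
        - name: backend
          image: finance-backend:local   # hypothetical image tag
          args: ["-seed-demo"]           # loads demo data only if none exists
          env:
            - name: DATABASE_URL
              # 127.0.0.1 works because Postgres shares the Pod's network namespace;
              # sslmode=disable avoids TLS handshakes on a local cluster
              value: postgres://finance:finance@127.0.0.1:5432/finance?sslmode=disable
          ports:
            - containerPort: 8080
          readinessProbe:
            tcpSocket:
              port: 8080               # ready as soon as Gin is listening;
            initialDelaySeconds: 5     # the app handles its own DB retries
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 60    # generous delay so slow DB startup
            periodSeconds: 20          # doesn't trigger restarts
        - name: frontend
          image: finance-frontend:local  # hypothetical image tag: nginx + React build
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d   # replaces the default server config
      volumes:
        - name: nginx-conf
          configMap:
            name: finance-frontend-nginx     # defined in the nginx sketch
```

Because all three containers share the Pod’s network namespace, 127.0.0.1 reaches both Postgres and the backend without any Service lookup or DNS involvement.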
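
And here is a sketch of the ConfigMap-mounted nginx config, in the co-located form where the proxy targets 127.0.0.1 rather than a Service name. The ConfigMap and file names are my own; the routing rules are the ones described above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: finance-frontend-nginx
  namespace: finance
data:
  default.conf: |
    server {
      listen 80;

      # / -> static React build
      location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
      }

      # /api/* -> backend, passed through unmodified because proxy_pass
      # has no URI part (the trailing-slash variant would strip /api)
      location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
      }

      # optional: expose the backend health check for quick curl tests
      location /health {
        proxy_pass http://127.0.0.1:8080/health;
      }
    }
```

If the frontend is ever split back out into its own Deployment, the same config works with proxy_pass http://finance-backend:8080; (still no trailing slash), which is the form mentioned in the fix above.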

The final pattern I used

  • One Deployment, three containers (Postgres, backend, frontend/nginx). One Service for the frontend (ClusterIP), one for the backend (ClusterIP).
  • Frontend proxies /api to 127.0.0.1:8080 when co-located, or to the backend Service if split out later.
  • Port-forward the frontend Service in dev (kubectl port-forward -n finance svc/finance-frontend 8081:80) and open http://localhost:8081.
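
For reference, the two ClusterIP Services are about as small as Kubernetes objects get; the selector below assumes the app: finance-stack label from the Deployment sketch above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: finance-frontend
  namespace: finance
spec:
  type: ClusterIP
  selector:
    app: finance-stack
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: finance-backend
  namespace: finance
spec:
  type: ClusterIP
  selector:
    app: finance-stack
  ports:
    - port: 8080
      targetPort: 8080
```

ClusterIP is enough here because nothing outside the cluster talks to these Services directly; the port-forward above covers local access.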

The payoff

Once I stopped pretending this needed to be “production distributed” for a local Minikube demo, everything became boring and reliable. Co-locating Postgres + backend + nginx in one Pod made the finance-dashboard.yaml simpler, removed all the networking guesswork, and let me focus on the UI and data instead of the cluster.

That’s the configuration I’ll keep for now. When I move to a real cluster, I can split the Services and Deployments out again, but now I know exactly which knobs matter. After this project, my finances have never looked better. Thank you, Finance Dashboard!