VersionGate — Zero Downtime, One Push

Deploying without the fear of breaking things.

Every deployment is a small gamble. You push your code, something restarts, and for a few seconds — sometimes longer — your users hit a dead server. A 502. A flash of nothing.

It's not a big deal until it is. And the solutions that exist are either too expensive or too complex for a project that just needs to stay up.

VersionGate is the answer I built for myself: zero-downtime deployments on your own VPS, triggered by a single git push.

Zero Downtime, Actually

The core idea: never deploy over a running container. Deploy beside it, verify it's healthy, then switch traffic. If anything goes wrong, the old one is still live. Users notice nothing.

Two slots — Blue and Green. One is always live. The other receives the next build.

Each project has two container slots: BLUE on basePort and GREEN on basePort+1. Every deploy targets the idle one. A snapshot mid-rotation looks like this:

  • BLUE — port 3100, container myapp-blue, status ROLLED_BACK
  • GREEN (live) — port 3101, container myapp-green, status ACTIVE

nginx upstream → localhost:3101 (green)
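
The slot arithmetic is simple enough to sketch. Here is an illustrative TypeScript version; the names (`Project`, `deployTarget`) are mine for this post, not VersionGate's actual internals:

```typescript
// Two fixed slots per project, ports derived from basePort, and the
// idle slot as the deploy target. Names here are illustrative.

type Slot = "blue" | "green";

interface Project {
  name: string;
  basePort: number; // BLUE listens here; GREEN on basePort + 1
  live: Slot;       // slot Nginx currently points at
}

// Port for a given slot: BLUE = basePort, GREEN = basePort + 1.
function portFor(p: Project, slot: Slot): number {
  return slot === "blue" ? p.basePort : p.basePort + 1;
}

// Every deploy targets the idle slot, never the live one.
function deployTarget(p: Project): { slot: Slot; port: number } {
  const slot: Slot = p.live === "blue" ? "green" : "blue";
  return { slot, port: portFor(p, slot) };
}

const myapp: Project = { name: "myapp", basePort: 3100, live: "green" };
// GREEN is live on 3101, so the next build lands in BLUE on 3100.
const target = deployTarget(myapp);
```

With GREEN live on 3101, the next deploy picks BLUE on 3100, and the rotation continues back and forth from there.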

The Stack

VersionGate is a Bun server that runs on your VPS alongside your application. It owns four things:

  • Docker — builds and runs your application containers in isolated slots
  • Nginx — the traffic gate; VersionGate rewrites its upstream config and reloads it without dropping connections
  • PostgreSQL — every deployment is a record with a state: DEPLOYING, ACTIVE, or ROLLED_BACK
  • GitHub Webhooks — your repository notifies VersionGate on every push; no polling, no manual triggers
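
The Nginx piece is mostly templating: render a new upstream block for the verified slot, write it out, and reload. A minimal sketch, assuming an upstream named `<project>_backend` (the real config layout and file paths may differ):

```typescript
// Render an Nginx upstream block pointing at the freshly verified slot.
// The upstream naming convention here is an assumption for illustration.
function renderUpstream(project: string, port: number): string {
  return [
    `upstream ${project}_backend {`,
    `    server 127.0.0.1:${port};`,
    `}`,
  ].join("\n");
}

// In the real engine this would be written to the config directory and
// followed by `nginx -s reload`, which re-reads configuration without
// dropping in-flight connections.
const conf = renderUpstream("myapp", 3101);
```

The reload is what makes the switch invisible: Nginx spins up new workers on the new config while old workers finish their open connections.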

You don't interact with any of these directly. VersionGate manages them through a dashboard and a REST API.

The Deploy Lock

Before anything touches your server, VersionGate acquires a deploy lock per project. If a deploy is already in progress and a second webhook arrives, it waits. No race conditions, no two builds colliding on the same container slot.

The lock is released when the deployment reaches a terminal state — either ACTIVE on success or FAILED on error. The live container is untouched throughout.

If the server restarts mid-deploy, VersionGate scans for any deployment stuck in DEPLOYING on startup and cleans up orphaned containers automatically.
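
A single-process version of that lock can be sketched as a per-project promise chain. This is one plausible implementation, not VersionGate's actual code:

```typescript
// Per-project deploy lock: a promise chain per project serializes deploys,
// so a second webhook waits for the first to reach a terminal state.
const tails = new Map<string, Promise<unknown>>();

function withDeployLock<T>(project: string, job: () => Promise<T>): Promise<T> {
  const prev = tails.get(project) ?? Promise.resolve();
  // Run only after the previous deploy finished; swallow the predecessor's
  // error so a FAILED deploy still releases the lock for the next one.
  const run = prev.catch(() => {}).then(job);
  // The stored tail never rejects, so the chain can't get stuck.
  tails.set(project, run.catch(() => {}));
  return run;
}
```

Each project gets its own chain, so two different projects can deploy in parallel while deploys of the same project queue up behind each other.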

Health Check Before the Switch

After the new container starts, VersionGate doesn't immediately switch Nginx. It polls the container's health endpoint — waiting for consecutive healthy responses before it considers the build safe.

  • If the container never responds — deploy fails, nothing switches
  • If the container crashes mid-check — deploy fails, nothing switches
  • Only on a clean pass does Nginx get the new upstream

This is the only moment that matters. Everything before it is preparation. Everything after it is cleanup. The health check is the gate.
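
Here is a hedged sketch of that gate, assuming a `check` callback that probes the container's health endpoint; the thresholds are illustrative defaults, not VersionGate's actual numbers:

```typescript
// Poll a health check until it returns `needed` consecutive healthy
// responses, or give up after `maxAttempts` polls.
async function waitHealthy(
  check: () => Promise<boolean>, // e.g. fetch(`http://localhost:${port}/health`)
  needed = 3,                    // consecutive successes required
  maxAttempts = 30,
  delayMs = 1000,
): Promise<boolean> {
  let streak = 0;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const ok = await check().catch(() => false); // a crash counts as unhealthy
    streak = ok ? streak + 1 : 0;                // any failure resets the streak
    if (streak >= needed) return true;           // gate passed: safe to switch
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false; // never stabilized: deploy fails, Nginx is untouched
}
```

Requiring a streak rather than a single success filters out containers that answer once and then crash.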

Auto-Dockerfile, No Config Required

Not every project has a Dockerfile. VersionGate scans your repository and detects the runtime automatically — Node.js, Bun, or Python — then generates the container environment for you.

You push code. VersionGate figures out how to run it.
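
The detection step can be sketched as a lookup over marker files in the repository root. The exact file list here is my assumption; VersionGate's real detector may check more signals:

```typescript
// Detect the project runtime from marker files. Order matters: a Bun
// lockfile wins over a plain package.json.
type Runtime = "bun" | "node" | "python" | null;

function detectRuntime(files: string[]): Runtime {
  const has = (f: string) => files.includes(f);
  if (has("bun.lockb") || has("bun.lock")) return "bun";
  if (has("package.json")) return "node";
  if (has("requirements.txt") || has("pyproject.toml")) return "python";
  return null; // no known runtime: fall back to a user-supplied Dockerfile
}
```

From the detected runtime, generating a Dockerfile is a matter of picking the matching base image and start command.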


What This Replaces

Before VersionGate, deploying a personal project on a VPS was a ritual. SSH in, pull, rebuild, restart, check logs, hope.

deployment workflow

  1. SSH — connect to your VPS
  2. git — pull latest from origin
  3. docker — build and tag the new image
  4. docker — stop the running container
  5. nginx — update upstream config and reload
  6. curl — manually check if it's alive

Closing

Zero downtime doesn't have to mean a cloud bill or a Kubernetes cluster. It can mean a VPS, a git push, and an engine that handles the rest.

That's what VersionGate is.

Open for Collaborations

If you're building something in the DevOps, infra, or open-source space and want to work together — I'm interested. Drop a message and I'll get back to you.