
I've been hosting projects on AWS and GCP for years. But for side projects the cost and operational overhead are a bit much.
I started moving things to cheap VPS hosts. A Hetzner box, a few scripts, docker compose up. It works. But the scripts
accumulate. The compose file grows. Every deploy is a slightly different sequence of commands you half-remember.
I looked at the tools that exist for this. Kamal needs Ruby installed. Coolify gives you a whole web UI and dashboard when all I want is a terminal. Dokku is solid but oriented around buildpacks and git push. Each one solves the problem, but none of them fit the way I actually want to work.
What I actually wanted
I write Rust. I deploy to a VPS. I want Postgres, automatic HTTPS, and Tailscale locking down everything except ports 80 and 443. I don't want to write Dockerfiles, and I definitely don't want to run a Docker daemon on the server.
More than that, I wanted something that an AI coding agent could drive. I've been working with Claude Code daily for
months now. CLIs are the best interface between a human and an AI agent. They're structured, predictable, and the agent
can read --help and figure out what to do. Markdown instructions drift. A CLI with --json output and deterministic
commands doesn't.
I built perc to encode how I deploy Rust apps to a VPS.
Three commands
The CLI has a command to initialise and secure a VPS from a fresh Ubuntu install. It locks the box down so that only ports 80 and 443 are publicly reachable and joins your tailnet for everything else.
Once the VPS is initialised the entire workflow becomes:
```shell
perc new myapp    # scaffold a Rust + Axum project
perc dev          # local dev with Postgres, file watching, auto-restart
perc deploy push  # cross-compile, build image, ship via SSH, start on the VPS
```
perc new gives you a ready-to-run Axum app with a perc.toml config file. perc dev reads that config and spins up
whatever services you've declared (Postgres, S3-compatible storage, Restate for durable execution) as containers, then
runs your app with file watching. perc deploy push cross-compiles to a static Linux binary, builds a minimal OCI image
in pure Rust, pipes it over SSH to Podman on the server, and starts it behind Caddy with automatic Let's Encrypt
certificates.
No Docker daemon needed. The OCI image is built entirely in Rust using tar and sha2. It gets piped straight to
podman load over SSH. No registry, no intermediate steps.
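To make the "tar and sha2" part concrete: an OCI image layer is content-addressed by the SHA-256 digest of its tarball. perc computes this in Rust; the sketch below derives the same kind of digest with standard tools, using throwaway file names, purely to illustrate the idea.

```shell
# Illustrative only: an OCI layer is addressed by the SHA-256 of its
# tarball. perc does this in Rust with the tar and sha2 crates; here
# the equivalent digest is computed with standard command-line tools.
mkdir -p rootfs
printf 'hello\n' > rootfs/app
tar -cf layer.tar -C rootfs .
sha256sum layer.tar | awk '{print "sha256:" $1}'
```

Because an image is just tarballs plus JSON manifests, it can be streamed straight into `podman load` on the other end of an SSH pipe; nothing in the format requires a registry.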
The opinions baked in
perc is opinionated in ways I think pay for themselves:
Tailscale is mandatory. When you bootstrap a server with perc deploy init, it joins your Tailscale network and
locks down SSH to the tailnet. Public internet only sees ports 80 and 443. Everything else, including the monitoring
dashboard and database, is only reachable over Tailscale. You manage servers by name, not IP address.
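Conceptually, the lockdown amounts to something like the following, expressed with stock `tailscale` and `ufw` commands. The exact steps perc runs are internal to the tool; this is just the shape of the policy.

```shell
# Rough equivalent of "only 80/443 public, everything else tailnet-only".
# perc's actual init sequence may differ; these are standard commands.
tailscale up --ssh            # join the tailnet; SSH goes over Tailscale
ufw default deny incoming
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow in on tailscale0    # tailnet traffic allowed on any port
ufw enable
```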
Podman and Quadlet, not Docker. Containers are managed as systemd units. systemctl status myapp and
journalctl -u myapp work exactly as you'd expect. No daemon running in the background. No Docker Desktop licensing
questions.
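For readers who haven't met Quadlet: you drop a declarative `.container` file under `/etc/containers/systemd/` and systemd generates a service from it at boot. A hypothetical unit for an app called myapp might look like this; the exact keys and values perc emits are an assumption on my part.

```ini
# /etc/containers/systemd/myapp.container — hypothetical sketch of a
# Quadlet unit of the kind perc generates (exact contents assumed).
[Unit]
Description=myapp web service

[Container]
Image=localhost/myapp:latest
PublishPort=127.0.0.1:3000:3000
EnvironmentFile=/etc/myapp/secrets.env

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

From there `systemctl start myapp` and `journalctl -u myapp` behave like any other unit, which is the whole point.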
Single-file config. Everything lives in perc.toml. Databases, secrets, storage, domains, deploy targets. One file,
TOML (because Rust devs live in TOML), and toml_edit round-trips it without destroying your comments.
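To give a sense of the shape, here is a hypothetical `perc.toml`. The section and key names are my illustration, not perc's actual schema; only the `[restate]` table is something the tool documents.

```toml
# Hypothetical perc.toml — section and key names are illustrative,
# not perc's real schema ([restate] is the only documented table).
name = "myapp"

[services]
postgres = true

[deploy]
host = "myvps"                  # Tailscale machine name, not an IP
domain = "myapp.example.com"    # triggers automatic TLS via Caddy

[restate]
enabled = true
```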
--json on every command. This matters more than it sounds. When Claude Code runs perc deploy status --json, it
gets structured data it can reason about. When it runs perc deploy push --json, it gets machine-readable success or
failure with error codes. Every command is idempotent, so an agent can retry without breaking things.
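For illustration, this is the kind of payload an agent might get back from a status call. The field names here are invented, not perc's actual output format:

```json
{
  "status": "running",
  "app": "myapp",
  "healthy": true,
  "last_deploy": "2025-01-12T09:30:00Z"
}
```

Structured output like this is something an agent can branch on directly, instead of parsing human-oriented log lines.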
Packaging opinions for agents
The deeper motivation here isn't "I wanted a deploy tool." It's that I wanted to package my deployment practices into something more controlled than markdown instructions.
I can write a CLAUDE.md that says "deploy by running these commands in this order." That works until it doesn't. The
instructions go stale when the workflow changes. The agent might skip a step or misinterpret an instruction. And there's
no verification that the agent did the right thing.
A CLI encodes the workflow into something the agent can't misinterpret. perc deploy push either succeeds or it
doesn't. The tool handles cross-compilation, image building, SSH transport, Caddyfile generation, health checks. The
agent doesn't need to know how any of that works. It just needs to know the verb.
This is, I think, an underexplored pattern. Instead of giving AI agents long documents describing how to do something, encode the workflow into a tool with a clean CLI surface. Let the tool be opinionated so the agent doesn't have to make judgement calls about infrastructure.
What's in the box
Beyond the core scaffold/dev/deploy loop, perc handles the things you'd otherwise script by hand. perc deploy db
provisions Postgres with an isolated database and user per app. Secrets live on the VPS, never in version
control. Domains trigger automatic TLS via Caddy. A companion tool,
perc-stats, gives you a monitoring dashboard accessible over Tailscale
only. And if you need durable workflows, a [restate] section in perc.toml handles deploying Restate and registering
your worker.
Multiple apps can share a single VPS. Each gets its own port, database, and Caddy block.
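Two co-hosted apps imply a Caddy config along these lines; the domains and ports below are illustrative, and the actual file perc generates may be structured differently.

```text
# Hypothetical Caddyfile for two apps sharing one VPS
# (domains and ports are illustrative).
app1.example.com {
	reverse_proxy localhost:3001
}

app2.example.com {
	reverse_proxy localhost:3002
}
```

Caddy provisions and renews a Let's Encrypt certificate for each named host automatically, so adding an app is adding a block.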
Where it's at
This is early. Version 0.1, with all the rough edges that implies. APIs will change, features will break, and there are
things I haven't built yet (rollback, multi-arch support, perc check for convention linting).
I'm using it for my own projects and it's working well for that. If you're a Rust developer deploying to a VPS and the existing options feel like too much ceremony, it might be worth a look. But I'd read the source code before trusting it with anything you care about.
```shell
cargo install perc
```
The code is on GitHub and the project site is at perc.daz.is.