Run your dev stack like it's one app.
stackplay starts every service in your repo, gives you one window to drive them all, and exposes a JSON CLI your AI agent can drive too. No yaml until you want one.
- Clone & go. Detects 86 frameworks, installs missing deps, kills stale ports — first run, first time.
- One TUI for the whole stack. Process grid, log search across services, error filter, restart, inspect. Five tabs collapsed into one window.
- Same surface for your agent. Every command speaks `--json`. Claude, Cursor, Codex drive the same daemon you do.
cd into any repo and run stackplay
01 · Clone & go
Clone the repo. Run one command. You're done.
stackplay starts from an empty directory. It reads your project, asks once, and saves the answer. No Procfile, no docker-compose.yml for local dev, no five-step README.
first run

```
$ git clone github.com/acme/monorepo && cd monorepo
$ stackplay
→ detected: turborepo · 4 procs (web, api, worker, db)
→ node_modules missing — install? [Y]
→ port :3000 held by pid 38291 — kill it? [Y]
→ saved stackplay.yaml · daemon up · ready in 4.1s
```
- Auto-detection. 86 frameworks across 14 ecosystems — Next.js, Vite, Remix, Rails, Django, FastAPI, Go, Cargo, Turborepo, Nx, pnpm workspaces.
- Self-healing first run. Stale `node_modules`? Reinstalled. Port 3000 held by yesterday's zombie? Found, surfaced, killed.
- Daemon-backed. Close the terminal, reattach with `stackplay`. Processes survive the UI (see the sketch after this list).
- No yaml until you want one. First run saves a `stackplay.yaml` you can check in for a stable team contract — or ignore.
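
To make the daemon-backed part concrete, here's a rough terminal sketch. The commands (`stackplay` to reattach, `stackplay ps` for status) are the ones above; the output shown is illustrative, not the exact format:

```
# come back later in a fresh terminal; the daemon kept everything running
$ cd monorepo && stackplay      # reattach the TUI to the running daemon
$ stackplay ps                  # or check status without opening the UI
web     running  :3000  up 2h14m    # illustrative output, not the real format
api     running  :4000  up 2h14m
worker  running         up 2h14m
db      running         up 2h14m
```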
02 · One window for the whole stack
Search every service at once. Filter for errors. Trace requests across processes.
The TUI gives you one window with every service in it. / searches across all procs. ⊞ filters by level. Click to copy, click to restart, structured timestamps. The mockup at the top of this page is live — try the search box, the filter chips, the restart button.
```
$ stackplay trace "checkout-7f2a" --json
12:04:31.244 web INFO POST /api/checkout — request checkout-7f2a
12:04:31.249 api INFO POST /v1/checkout — auth ok, validating cart
12:04:31.312 worker INFO billing.charge → enqueued (job 9214)
12:04:31.388 worker WARN billing.charge → retry 1/3 — stripe timeout
12:04:31.612 worker OK billing.charge → done in 312ms
12:04:31.628 api OK POST /v1/checkout 200 in 384ms
12:04:31.638 web OK POST /api/checkout 200 in 394ms
```
- Global search. `/error` hits every process simultaneously, grouped by proc with match counts. No more switching tabs to find which service broke.
- Cross-process trace. `stackplay trace "payment"` merge-sorts logs from every process into a single timeline so you can follow one request end-to-end.
- Error spotlight. When a proc crashes, the actual error is extracted and surfaced in `stackplay ps` — the `panic:`, the stack trace, the `Cannot find module`. Not buried under 500 lines of HMR noise.
- Level & field filters. `--level error`, `--where "duration>1s"`, regex search, structured field filters. Cut through the noise without grepping (a rough sketch follows this list).
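
To make the filters concrete, here's a rough terminal sketch. The flags are the ones listed above; which subcommand each flag attaches to is an assumption here, not documented behavior:

```
# only error-level lines from one proc (assuming `logs` accepts --level)
$ stackplay logs api --level error --no-follow

# structured field filter: requests slower than 1s (assuming `logs` accepts --where)
$ stackplay logs api --where "duration>1s" --no-follow

# search every proc at once; using a regex pattern assumes `search` takes regex
$ stackplay search "timeout|ECONNREFUSED" --json
```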
03 · The same daemon, machine-readable
Your AI agent uses the same surface you do — just with --json.
Every inspection command speaks JSON. Every mutation returns a structured result. stackplay watch streams typed lifecycle events over NDJSON so a crash shows up as proc.stopped with an exit code, not an indefinite hang. Install one skill — Claude Code, Cursor, Codex, opencode, Aider, Windsurf all drive the same daemon.
- `stackplay ps --json` — status, ports, errors, uptime
- `stackplay health --json` — ok/issues verdict for the whole stack
- `stackplay logs api --no-follow --json` — bounded read that won't hang a loop
- `stackplay search "error" --json` — search every process at once
- `stackplay watch --ndjson` — typed lifecycle + log events for agents
- `stackplay restart api --dry-run --json` — preview a mutation, get a structured result
- `stackplay mark api investigating` — agent-visible process annotation
- `stackplay describe ps --json` — machine-readable command metadata
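
As a sketch of how an agent or script might chain these, assuming `jq` is on hand: the commands and flags are the ones listed above, but the JSON field names (`.ok`, `.level`, `.event`, `.proc`) are guesses at the output shape, not a documented schema.

```
# gate work on overall stack health (`.ok` is an assumed field name)
$ stackplay health --json | jq -e '.ok' >/dev/null || stackplay ps --json

# bounded log read that won't hang a loop (assumes an array of entries with a .level field)
$ stackplay logs api --no-follow --json | jq 'map(select(.level == "error"))'

# react to lifecycle events; proc.stopped is documented above, .event/.proc are assumed names
$ stackplay watch --ndjson | jq -r 'select(.event == "proc.stopped") | .proc'
```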
Install the agent skill once — it works in every coding agent you have.
vs. the alternatives
stackplay overlaps with a lot of tools but isn't quite any of them.
| | stackplay | foreman / overmind | pm2 | docker-compose | tmuxinator |
|---|---|---|---|---|---|
| Zero config (auto-detect) | ✓ | — | — | — | — |
| Unified TUI for all procs | ✓ | — | — | — | tabs |
| Cross-process log search | ✓ | — | — | — | — |
| Daemon (survives UI) | ✓ | — | ✓ | ✓ | — |
| JSON CLI (agent-driveable) | ✓ | — | partial | partial | — |
| Push lifecycle events | ✓ | — | — | — | — |
| Native processes (no Docker) | ✓ | ✓ | ✓ | — | ✓ |
Configuration is optional
stackplay auto-detects most projects, but you can check in a stackplay.yaml for a stable, shared process contract on your team.
stackplay.yaml

```yaml
procs:
  api:
    shell: "npm run dev"
    cwd: "./services/api"
    env:
      PORT: "4000"
    autorestart: true
    ports: [4000]
  web:
    shell: "npm run dev"
    cwd: "./apps/web"
    deps: [api]
    ports: [3000]

settings:
  scrollback: 20000
  theme: midnight

hooks:
  pre-start:
    api: "scripts/check-env.sh"
  on-fail:
    api: "scripts/capture-failure.sh"
```
Prefer Homebrew?
macOS & Linux on arm64 / amd64. Both install methods drop `stackplay` and the `sp` shorthand on your `$PATH`. Source and signed releases are on GitHub. Windows is on the roadmap.
Frequently asked
How is this different from foreman or overmind?
Procfile tools assume you've already written a Procfile. stackplay starts from an empty directory: it detects what you have, asks once, and saves the answer. It also ships a TUI, JSON-first inspection commands, and a daemon — none of which are in Procfile-runner territory.
Does it replace docker-compose?
For dev, yes — native processes start faster and integrate better with your editor and debugger. For production, no — Compose and Kubernetes are still the right tools. stackplay is local-first.
Do I have to use it with an AI agent?
No. stackplay works as a TUI-first process manager whether you've ever touched an AI agent or not. The agent angle is that the same daemon is also driveable by Claude Code, Cursor, or Codex if you want — same surface, machine-readable.
Windows support?
Planned. The current production targets are macOS and Linux on arm64 and amd64; a Windows daemon is on the roadmap but not in the current release.
Is it open source?
Yes — MIT licensed. Source on GitHub, contributions welcome.
Close the tabs. Open the daemon.