Run your dev stack like it's one app.

stackplay starts every service in your repo, gives you one window to drive them all, and exposes a JSON CLI your AI agent can use too. No YAML until you want it.

Then cd into any repo and run stackplay:

$ stackplay ~/acme-monorepo                                        v0.1.0
web   up 2m14s · port 3000 · mem 124M · cpu 1%
↻ Restart   ■ Stop   ⊞ Filter   ⎘ Copy
CONNECTED  h/l:select  Tab/Enter:logs  /:search  s:start  x:stop  r:restart  q:quit

macOS & Linux on arm64 / amd64 · Homebrew · MIT · source and latest release on GitHub

01 · Clone & go

Clone the repo. Run one command. You're done.

stackplay starts from an empty directory. It reads your project, asks once, and saves the answer. No Procfile, no docker-compose.yml for local dev, no five-step README.

first run
$ git clone github.com/acme/monorepo && cd monorepo
$ stackplay
→ detected: turborepo · 4 procs (web, api, worker, db)
node_modules missing — install? [Y]
port :3000 held by pid 38291 — kill it? [Y]
→ saved stackplay.yaml · daemon up · ready in 4.1s

02 · One window for the whole stack

Search every service at once. Filter for errors. Trace requests across processes.

The TUI gives you one window with every service in it. / searches across all procs; filter chips narrow by level. Click to copy, click to restart, structured timestamps. The mockup at the top of this page is live: try the search box, the filter chips, the restart button.

stackplay trace "checkout-7f2a"
$ stackplay trace "checkout-7f2a"
12:04:31.244 web    INFO POST /api/checkout — request checkout-7f2a
12:04:31.249 api    INFO POST /v1/checkout — auth ok, validating cart
12:04:31.312 worker INFO billing.charge → enqueued (job 9214)
12:04:31.388 worker WARN billing.charge → retry 1/3 — stripe timeout
12:04:31.612 worker OK   billing.charge → done in 312ms
12:04:31.628 api    OK   POST /v1/checkout 200 in 384ms
12:04:31.638 web    OK   POST /api/checkout 200 in 394ms
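The --json variant of the same trace returns structured events an agent can post-process. A minimal consumer sketch in Python (the field names ts, proc, level, and msg are illustrative assumptions, not a documented schema):

```python
import json

# Hypothetical output of `stackplay trace "checkout-7f2a" --json`;
# the field names here are assumptions for illustration only.
raw = """[
  {"ts": "12:04:31.244", "proc": "web",    "level": "INFO", "msg": "POST /api/checkout"},
  {"ts": "12:04:31.388", "proc": "worker", "level": "WARN", "msg": "billing.charge retry 1/3 (stripe timeout)"},
  {"ts": "12:04:31.638", "proc": "web",    "level": "OK",   "msg": "POST /api/checkout 200 in 394ms"}
]"""

events = json.loads(raw)

# Surface anything WARN-or-worse so a caller can react to a degraded trace.
warnings = [e for e in events if e["level"] in ("WARN", "ERROR")]
for e in warnings:
    print(f"{e['ts']} {e['proc']}: {e['msg']}")
# → 12:04:31.388 worker: billing.charge retry 1/3 (stripe timeout)
```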

03 · The same daemon, machine-readable

Your AI agent uses the same surface you do — just with --json.

Every inspection command speaks JSON. Every mutation returns a structured result. stackplay watch streams typed lifecycle events over NDJSON so a crash shows up as proc.stopped with an exit code, not an indefinite hang. Install one skill — Claude Code, Cursor, Codex, opencode, Aider, Windsurf all drive the same daemon.
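Because watch emits one JSON object per line, a consumer is a few lines of code. A sketch under assumptions: the proc.stopped event and exit code come from the description above, but the full event shape shown here is hypothetical.

```python
import json

# Two hypothetical NDJSON lines from `stackplay watch`; only the
# proc.stopped event and its exit code are described above — the
# remaining fields are made up for illustration.
stream = [
    '{"event": "proc.started", "proc": "api", "ts": "12:04:29.001"}',
    '{"event": "proc.stopped", "proc": "worker", "exit_code": 137, "ts": "12:04:31.700"}',
]

def on_line(line):
    ev = json.loads(line)
    if ev["event"] == "proc.stopped":
        # A crash arrives as a typed event, not a hang: act on the exit code.
        return f"{ev['proc']} exited with code {ev['exit_code']}"
    return None

alerts = [msg for line in stream if (msg := on_line(line))]
print(alerts)  # ['worker exited with code 137']
```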

Install the agent skill once; it works in every coding agent you have.

vs. the alternatives

stackplay overlaps with a lot of tools but isn't quite any of them.

                              stackplay   foreman /   pm2       docker-    tmuxinator
                                          overmind              compose
Zero config (auto-detect)         ✓
Unified TUI for all procs         ✓                                        tabs
Cross-process log search          ✓
Daemon (survives UI)              ✓
JSON CLI (agent-driveable)        ✓                   partial   partial
Push lifecycle events             ✓
Native processes (no Docker)      ✓

Configuration is optional

stackplay auto-detects most projects, but you can check in a stackplay.yaml for a stable, shared process contract on your team.

stackplay.yaml
procs:
  api:
    shell: "npm run dev"
    cwd: "./services/api"
    env:
      PORT: "4000"
    autorestart: true
    ports: [4000]

  web:
    shell: "npm run dev"
    cwd: "./apps/web"
    deps: [api]
    ports: [3000]

settings:
  scrollback: 20000
  theme: midnight

hooks:
  pre-start:
    api: "scripts/check-env.sh"
  on-fail:
    api: "scripts/capture-failure.sh"
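A pre-start hook only has to signal failure to be useful. As a sketch of the logic a script like scripts/check-env.sh might implement, assuming the usual hook contract that a non-zero exit aborts the proc's start (the required variable names below are made up):

```python
# Sketch of the check a pre-start hook such as scripts/check-env.sh might
# perform, assuming a non-zero exit aborts the start. The required variable
# names are illustrative, not part of stackplay.
def missing_env(required, env):
    """Return the names that are unset or empty; an empty list means OK."""
    return [name for name in required if not env.get(name)]

print(missing_env(["DATABASE_URL", "STRIPE_KEY"],
                  {"DATABASE_URL": "postgres://localhost/acme_dev"}))
# → ['STRIPE_KEY']
```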

Prefer Homebrew?

macOS & Linux on arm64 / amd64. Both methods drop stackplay and the sp shorthand on your $PATH. Source and signed releases on GitHub. Windows is on the roadmap.

Frequently asked

How is this different from foreman or overmind?

Procfile tools assume you've already written a Procfile. stackplay starts from an empty directory: it detects what you have, asks once, and saves the answer. It also ships a TUI, JSON-first inspection commands, and a daemon — none of which are in Procfile-runner territory.

Does it replace docker-compose?

For dev, yes — native processes start faster and integrate better with your editor and debugger. For production, no — Compose and Kubernetes are still the right tools. stackplay is local-first.

Do I have to use it with an AI agent?

No. stackplay works as a TUI-first process manager whether you've ever touched an AI agent or not. The agent angle is that the same daemon is also driveable by Claude Code, Cursor, or Codex if you want — same surface, machine-readable.

Windows support?

Planned. The current production targets are macOS and Linux on arm64 and amd64; a Windows daemon is on the roadmap but not in the current release.

Is it open source?

Yes — MIT licensed. Source on GitHub, contributions welcome.

Close the tabs. Open the daemon.