Open Source · Node.js & Python

The Interactive CLI
for AI Agents

Agents can now launch, interact with, and manage long-running, stateful TUI sessions in parallel

$ npx clrun echo hello world
Available on npm & PyPI

One CLI. Two runtimes.

Same commands, same YAML output, same agent skills — choose the runtime that fits your stack.

$ npm install -g clrun

Or use without installing:

$ npx clrun <command>
  • Uses node-pty for native PTY management
  • Works with npm, npx, yarn, pnpm
  • Zero-config — just run npx clrun

Both versions produce identical YAML output, use the same .clrun/ state format, and install the same agent skills.
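The Python runtime mirrors this flow. A sketch, assuming the PyPI package is published under the same name (`clrun`, per "Available on npm & PyPI" above) and using pipx as the no-install analogue of npx:

$ pip install clrun

Or use without installing:

$ pipx run clrun <command>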

Built for the age of
autonomous coding

Every feature designed to give AI agents reliable, observable, deterministic control over interactive command execution.

Persistent Interactive Execution
Full PTY sessions that persist independently of your agent process. Commands stay alive, buffers stay intact, state stays consistent.
Deterministic Input Queues
Every input is queued with explicit priority ordering. Higher-priority inputs send first; equal priorities follow FIFO order. No ambiguity.
Priority + Override Control
Queue inputs with priority levels for ordered multi-step flows. Use override to cancel all pending inputs and force immediate execution.
Crash Recovery
Automatic detection of orphaned sessions on restart. Dead processes are marked as detached. Buffer logs remain readable. No data loss.
Project-Scoped Execution
All state lives in .clrun/ at your project root. No global state, no home directory pollution. Every project is isolated.
Structured YAML Output
Every command returns structured YAML. Every session, queue entry, and event is machine-readable. Built for programmatic consumption.
AI-Native CLI Workflows
Designed from the ground up for AI coding agents. Available as both npm and pip packages. Built-in skill files teach agents how to use the tool without configuration.
Agent Observability
Append-only buffer logs and structured event ledger provide full visibility into execution history. Every action is auditable.
Git-Trackable Execution History
Optionally commit .clrun/ledger/ to git. Enable AI agents to reason about past execution across sessions and team members.
Multi-Agent Safe Design
Runtime locking prevents state corruption. File-based communication enables multiple agents to interact with the same execution substrate.
Infrastructure-Grade Architecture
Clean separation of concerns. Lock manager, queue engine, PTY manager, buffer system, and ledger all operate independently.
Real-Time Session Monitoring
Tail and head commands provide instant access to terminal output. Status command shows live session states and queue depths.
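A minimal sketch of the queue semantics above, using the input command shown elsewhere on this page (the session ID is a placeholder):

$ clrun input <id> "confirm" --priority 1
$ clrun input <id> "urgent fix" --priority 5   # higher priority: sends first
$ clrun input <id> "abort" --override          # cancels all pending inputs, executes immediately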
Dynamic remote CLIs

Dynamic remote CLIs with SCP

CLRUN supports dynamic remote CLIs via SCP. Connect to any SCP server and drive its workflow as an interactive terminal. SCP servers expose state and CLI metadata (hints, next steps, options); CLRUN connects and loads the flow into a virtual terminal. Use the same clrun <id>, clrun key <id>, and clrun tail semantics.

Connect and drive a remote workflow
$ clrun scp http://localhost:8000
$ clrun <id> "1"  # option index or action name
$ clrun tail <id> --lines 50

Learn more about SCP and the standardized CLI endpoint in the SCP repo docs.

Agent-Native Design

Designed for
high-context agents

Most CLI tools return a string and an exit code. clrun returns structured context — what happened, what went wrong, and exactly what to do next. Every response is designed to keep the agent in full control.

01
Every error is a lesson
No blind failures. Every error response includes the reason, available alternatives, and the exact command to recover. An agent should never be stuck wondering what went wrong.
02
Every response is a menu
Hints aren't suggestions — they're the complete set of valid next actions. The agent always knows its options without needing to read documentation mid-flow.
03
Runtime assertions catch bugs before agents see them
Output is validated at the boundary. ANSI escape codes, shell prompt leaks, and formatting artifacts are caught and stripped with warnings — never silently passed through.
04
Warnings correct, they don't block
When clrun detects suspicious input — like a likely shell-expanded variable — it executes the command AND returns a warning explaining what probably happened and how to fix it.
05
Output is pure signal
Shell prompts, echoed commands, ANSI codes, and stale buffer content are stripped. The output field contains only what the command actually produced. Nothing else.
06
State is always recoverable
Sessions suspend and auto-restore. Crashes are detected. Buffers persist. An agent can pick up exactly where it left off, even across process restarts.
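A sketch of the recovery loop after a crash or restart, using only the commands documented on this page (the session ID is a placeholder):

$ clrun status                  # orphaned sessions are detected and marked detached
$ clrun tail <id> --lines 50    # buffer logs survive the crash and stay readable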

See it in action

Real responses from clrun. Every error teaches. Every success guides.

Rich Error Context
---
error: "Session not found: a1b2c3d4-..."
hints:
  list_sessions: clrun status
  start_new: clrun <command>
  active_sessions: f5e6d7c8-...
  note: Found 1 active session(s). Use one of the IDs above.

When an agent uses a stale session ID, clrun doesn't just fail — it tells the agent exactly what's alive and how to recover.

Input Validation Warnings
---
terminal_id: f5e6d7c8-...
input: ""
warnings:
  - >-
    Input is empty. If you intended to send a shell
    variable like $MY_VAR, use single quotes:
    clrun <id> 'echo $MY_VAR'
hints:
  send_more: clrun <id> '<next command>'

Detects when a shell variable was likely expanded to empty by the calling agent's shell, and tells it how to fix the quoting.

Contextual Next Actions
---
terminal_id: f5e6d7c8-...
command: npm init
output: "package name: (my-project)"
status: running
hints:
  send_input: clrun <id> '<response>'
  override: clrun input <id> '<text>' --override
  kill: clrun kill <id>
  note: >-
    Use single quotes for shell variables:
    clrun <id> 'echo $VAR'

Every response includes hints — the exact commands the agent should consider next, formatted as copy-pasteable strings.

The difference between a tool that works with AI agents and a tool designed for AI agents is context. clrun never leaves an agent guessing.

Four commands.
Complete control.

A minimal, powerful API surface designed for machine consumption. Every response is structured YAML.

1
Execute
$ clrun run "npm init"

Spawn an interactive PTY session. Get back a terminal ID.

2
Interact
$ clrun input <id> "yes" --priority 5

Queue deterministic input with priority control.

3
Observe
$ clrun tail <id> --lines 50

Read terminal output. Monitor execution in real time.

4
Verify
$ clrun status

Check session states, exit codes, and queue depth.
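Put together, a typical loop looks like this sketch (`<id>` stands in for the terminal ID returned by run):

$ clrun run "npm init"            # spawn the session, note the returned ID
$ clrun tail <id> --lines 20      # observe the "package name:" prompt
$ clrun input <id> "my-project"   # answer it
$ clrun status                    # verify session state and queue depth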

Zero-Configuration AI Compatibility

Built-in agent skills

On first run — whether via npm or pip — clrun automatically installs structured skill files into your project. Point your AI agent at .clrun/skills/ and it knows everything.

Core Skill
clrun-skill.md
Complete CLI reference written for AI consumption. Covers all commands, queue behavior, session lifecycle, and best practices.
Claude Code
claude-code-skill.md
Optimized integration instructions for Claude Code. Covers workflow patterns, prompt detection, override strategies, and error handling.
OpenClaw
openclaw-skill.md
Structured skill reference for OpenClaw agents. Full command reference, workflow patterns, and agent-specific guidance.

$ clrun run "any command"
→ Skills auto-installed to .clrun/skills/ on first run

Coming Soon

Where this is going

clrun is the foundation. What comes next transforms individual agent execution into team-wide infrastructure.

Team Dashboard

Real-time visibility into team-wide agent execution.

Shared Execution Fabric

Collaborative execution substrate across machines.

Enterprise Audit Layer

Compliance-grade audit trails for regulated environments.

Cloud-Backed Execution

Persistent execution that survives machine restarts.

Team Observability

Unified view of all agent actions across your organization.