# Shell Sessions
`sb.shell()` opens a long-lived bash session inside the sandbox. The
shell process persists across commands, so `cd`, `export`, shell
variables, and background jobs all survive between calls.

This is the right primitive for AI agents that run many small
commands in sequence — `cd /repo && git diff && pytest && grep ...`
— because it eliminates the per-command fork cost.
## Why this is fast
A normal `sb.exec("ls")` does this every call:

1. HTTP round-trip from your client to the runner.
2. Vsock dial from the runner to the guest agent.
3. The agent forks `sh -c "ls"`. The shell forks `ls`. `ls` runs (microseconds).
4. The agent reads stdout, sends it back over vsock.
5. HTTP response back to your client.
Steps 1, 2, 3, and 5 cost 5–30 ms each (depending on whether you’re on bare metal or nested KVM). For 100 commands you pay that overhead 100 times.
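To see how this compounds, a quick back-of-the-envelope using the figures above (the step count and the 5–30 ms bounds come from the text; the script itself is just arithmetic):

```python
# Figures from the text: steps 1, 2, 3, and 5 each cost roughly 5-30 ms.
per_step_ms = (5, 30)   # lower / upper bound per step
overhead_steps = 4      # HTTP out, vsock dial, fork, HTTP back

low = overhead_steps * per_step_ms[0]
high = overhead_steps * per_step_ms[1]
print(f"one exec:  {low}-{high} ms of overhead")   # 20-120 ms
print(f"100 execs: {100 * low}-{100 * high} ms")   # 2000-12000 ms
```

Even at the optimistic end, 100 small commands spend two full seconds doing nothing but plumbing.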
Inside `sb.shell()`:

- One WebSocket connection for the whole session.
- One persistent vsock connection between the runner and the agent.
- One long-running `bash` process inside the VM. Commands stream in over its stdin; output streams out over its stdout/stderr.
- The agent injects a sentinel after each command so it knows where one command’s output ends and the next begins. Exit codes ride along on the sentinel.
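The sentinel mechanism can be sketched in plain Python. Note this is illustrative, not isorun's actual wire format: the sentinel string and helper names are invented here.

```python
import secrets

# A random nonce so real command output cannot accidentally (or maliciously)
# contain the sentinel. Hypothetical format, not isorun's real one.
SENTINEL = f"__ISORUN_DONE_{secrets.token_hex(8)}__"

def wrap_command(cmd: str) -> str:
    # After the user's command, echo the sentinel plus bash's $? so the
    # reader can find both the output boundary and the exit code.
    return f'{cmd}\necho "{SENTINEL}:$?"\n'

def split_output(buffer: str):
    # Returns (command_output, exit_code, remainder) once the sentinel
    # appears in the stream, or None while the command is still running.
    marker = buffer.find(SENTINEL + ":")
    if marker == -1:
        return None
    head = buffer[:marker]
    tail = buffer[marker + len(SENTINEL) + 1:]
    code_str, _, rest = tail.partition("\n")
    return head, int(code_str), rest
```

The agent writes `wrap_command(...)` to bash's stdin and runs `split_output` over the accumulating stdout; everything before the sentinel belongs to the current command.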
Result: 0.13 ms per command on bare metal, where a separate exec would have been ~7 ms. On nested KVM the gap is ~30–60× because the fork tax is much higher.
## Use case: agent loop
```python
from isorun import Sandbox

with Sandbox("python:3.12") as sb:
    sb.create()
    with sb.shell() as sh:
        sh.run("git clone https://github.com/me/repo")
        sh.run("cd repo")                           # cwd persists!
        sh.run("pip install -r requirements.txt")
        sh.run("export OPENAI_KEY=sk-...")          # env persists!

        # The agent runs 100 small commands. Total wall: ~13 ms.
        for path in source_files:
            result = sh.run(f"black --check {path}")
            if result.exit_code != 0:
                fix = llm_fix(path, result.stdout)
                sh.run(f"echo {fix!r} > {path}")
                sh.run(f"black {path}")
```

```bash
# Open a shell session over WebSocket
wscat -c wss://api.isorun.ai/v1/runs/$RID/shell \
  -H "Authorization: Bearer $ISORUN_API_KEY"
```
```
# Wait for {"type": "shell_ready"}, then send:
{"type": "shell_run", "id": "r1", "command": "cd /tmp"}
# → {"type": "shell_exit", "id": "r1", "code": 0}

{"type": "shell_run", "id": "r2", "command": "pwd"}
# → {"type": "shell_out", "id": "r2", "data": "/tmp\n"}
# → {"type": "shell_exit", "id": "r2", "code": 0}
```

## What persists across commands in the same session
- cwd (`cd` works)
- environment variables (`export FOO=bar`)
- shell variables (`X=42; echo $X`)
- shell functions and aliases
- background jobs (`./long-running &` keeps running)
- shell history (in-memory)
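This persistence is ordinary bash behavior, and you can reproduce it with any long-lived bash process. A minimal demo using plain `subprocess` rather than the isorun SDK:

```python
import subprocess

# One long-lived bash process, fed several "commands" over stdin, the
# same shape as a shell session. cwd and variables persist between lines.
bash = subprocess.Popen(
    ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
out, _ = bash.communicate('cd /tmp\nX=42\necho "$PWD $X"\n')
print(out)  # → "/tmp 42" on Linux
```

What isorun adds on top is the transport (WebSocket + vsock), the sentinel framing, and per-command exit codes; the state-sharing itself is just one bash process.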
What doesn’t persist: anything that replaces or terminates the shell
itself (e.g. `exec` to replace the shell, or `exit`). If your command
kills the shell, the session ends and the client receives a
`shell_closed` frame.
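What "the session ends" looks like can also be seen with a plain long-lived bash (again `subprocess`, not the SDK): once `exit` runs, the output stream simply hits EOF, which is the condition isorun reports as `shell_closed`.

```python
import subprocess

# `exit 3` terminates bash itself; the command after it never runs.
bash = subprocess.Popen(
    ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)
bash.stdin.write("echo before\nexit 3\necho never\n")
bash.stdin.close()
out = bash.stdout.read()   # "before\n": nothing after exit is executed
code = bash.wait()         # 3: the shell's own exit status
```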
## Streaming output
The shell streams stdout and stderr separately and incrementally —
you don’t have to wait for the command to exit to see partial
output. Inside the SDK this is exposed as `result.stdout` and
`result.stderr` strings; the underlying WebSocket frames are
`shell_out` and `shell_err` chunks that arrive in real time.
For agent loops that need to react to output mid-command, hold the
underlying WebSocket directly (the SDK exposes the raw `Shell` object).
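A client-side sketch of folding those frames into a result. The frame names match the protocol shown earlier; the collector function itself is illustrative, not the SDK's implementation.

```python
import json

def collect_result(frames):
    """Fold a stream of shell frames for one command into
    (stdout, stderr, exit_code).

    `frames` is an iterable of JSON strings as they arrive over the
    WebSocket; a real client would also react to partial chunks here.
    """
    stdout, stderr, code = [], [], None
    for raw in frames:
        frame = json.loads(raw)
        if frame["type"] == "shell_out":
            stdout.append(frame["data"])      # partial stdout, incremental
        elif frame["type"] == "shell_err":
            stderr.append(frame["data"])      # partial stderr, interleaved
        elif frame["type"] == "shell_exit":
            code = frame["code"]              # command finished
            break
        elif frame["type"] == "shell_closed":
            raise RuntimeError("shell process died")
    return "".join(stdout), "".join(stderr), code
```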
## Limits
| Constraint | Value |
|---|---|
| Sessions per sandbox | unbounded — open as many as you want; each is one `bash` process |
| Session timeout | sandbox lifetime (or until `sh.close()`) |
| Per-command overhead | ~0.13 ms (bare metal) / ~0.5 ms (nested KVM) |
| Bash death detection | `shell_closed` frame surfaces immediately |
## When to use exec vs shell
| Use `sb.exec()` | Use `sb.shell()` |
|---|---|
| One-off command per sandbox | Many commands in sequence |
| You want command isolation | You want shared cwd / env / state |
| Don’t care about per-call latency | Latency matters (agent loop, REPL) |
| Stateless tools | Stateful workflows |