# Python SDK
## Install
```bash
pip install isorun
```

The shell-session feature requires the optional `websockets` package:

```bash
pip install isorun websockets
```

## Authentication
Set `ISORUN_API_KEY` in your environment, or pass `api_key=...` to
the constructor.

```bash
export ISORUN_API_KEY=isorun_live_<region>_<id>_<hmac>
```

## The five-minute tour
```python
from isorun import Sandbox

with Sandbox("python:3.12") as sb:
    # 1. Run code
    out = sb.exec("python3 -c 'print(2 + 2)'")
    print(out.stdout)  # 4

    # 2. Long-lived shell session — state persists across commands
    with sb.shell() as sh:
        sh.run("cd /tmp && export FOO=bar")
        print(sh.run("echo $FOO").stdout)  # bar

    # 3. Fork the running sandbox into independent children
    children = sb.fork(count=5)
    for c in children:
        c.exec("python3 -c 'print(\"hello from a fork\")'")
        c.destroy()

    # 4. Expose an in-sandbox server as a public URL
    sb.exec("nohup setsid python3 -m http.server 8000 >/dev/null 2>&1 &")
    print(sb.url(port=8000))  # → https://run<id>.isorun.ai/sb/p/8000/

    # 5. Hibernate to free runner resources, resume later
    sb.hibernate()
    sb.resume()
```

The `with` block destroys the sandbox on exit. If you don’t use it,
call `sb.create()` and `sb.destroy()` yourself.
## API reference
### `Sandbox(image, **opts)`
```python
sb = Sandbox(
    image="python:3.12",        # any OCI image
    vcpus=1,                    # virtual CPUs
    mem_mib=1024,               # memory in MiB
    disk_mib=10240,             # scratch disk in MiB
    timeout=30,                 # default exec timeout (seconds)
    sandbox_timeout=300,        # auto-destroy after N seconds idle (-1 to disable)
    network_profile="default",  # or "locked-down" / "internet"
    allow=None,                 # explicit egress allow list
    deny=None,                  # explicit egress deny list
    env={"KEY": "value"},       # base env vars
    credentials={               # per-service secrets injected via proxy
        "OPENAI_API_KEY": "sk-...",
        "ANTHROPIC_API_KEY": "sk-ant-...",
    },
    api_url="https://api.isorun.ai",
    api_key=None,               # default: ISORUN_API_KEY env var
)
```

### `sb.create() → SandboxInfo`

Boots a fresh microVM. Returns a `SandboxInfo` with `id`, `image`,
`vcpus`, `mem_mib`, `disk_mib`, `create_ms`. Sets `sb.id` for use in
later calls.
### `sb.exec(command, *, timeout=None) → ExecResult`
One-shot command execution. Returns an `ExecResult` with
`exit_code`, `stdout`, `stderr`, and an `ok` property
(`exit_code == 0`).

```python
result = sb.exec("ls /etc")
print(result.exit_code, result.stdout, result.stderr)
```

For multiple commands in sequence, use `sb.shell()` instead — it’s
30-60× faster per command on nested KVM.
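The documented `ExecResult` shape maps onto a plain dataclass. This is an illustrative sketch, not the SDK's actual class; only the field names and the `ok` rule come from the reference above:

```python
from dataclasses import dataclass

@dataclass
class ExecResult:
    exit_code: int
    stdout: str
    stderr: str

    @property
    def ok(self) -> bool:
        # Convenience flag documented above: success iff exit code 0
        return self.exit_code == 0

r = ExecResult(exit_code=0, stdout="4\n", stderr="")
print(r.ok)  # True
```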
### `sb.exec_stream(command, *, on_stdout=None, on_stderr=None, timeout=None) → ExecResult`
Same as `exec()`, but invokes the callbacks for each chunk of stdout
and stderr as it arrives. Use this when you need to react to partial
output before the command exits.
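Independent of the SDK, the callback pattern can be sketched locally with `subprocess`; the point is that the callback fires per line of output while the process is still running, instead of buffering everything until exit:

```python
import subprocess
import sys

def exec_stream_local(argv, on_stdout):
    """Local sketch of the streaming pattern (not the SDK's implementation)."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        on_stdout(line)   # react to partial output immediately
    return proc.wait()    # final exit code

chunks = []
code = exec_stream_local(
    [sys.executable, "-c", "print('step 1'); print('step 2')"],
    on_stdout=chunks.append,
)
print(code, chunks)  # 0 ['step 1\n', 'step 2\n']
```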
```python
sb.exec_stream(
    "python3 long_training_loop.py",
    on_stdout=lambda chunk: print(chunk, end=""),
)
```

### `sb.shell() → Shell`
Opens a long-lived bash session. Use it as a context manager. The shell
process persists across multiple `run()` calls.
```python
with sb.shell() as sh:
    sh.run("cd /repo")
    sh.run("git pull")
    result = sh.run("pytest -x")
    if result.exit_code != 0:
        # Inspect partial state
        sh.run("git status")
```

`sh.run(command)` returns an `ExecResult` with `exit_code`, `stdout`,
`stderr`. Closing the context manager (or calling `sh.close()`)
terminates the bash process.
If your command kills the bash itself (e.g. `exit 42`), the next
`run()` returns an `ExecResult` with the bash’s exit code, and
subsequent calls raise `IsorunError("Shell is closed")`.
See the Shell Sessions page for the protocol details and benchmarks.
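To make the "state persists across commands" behavior concrete, here is a toy local equivalent that drives one persistent `bash` process and uses a sentinel line to delimit each command's output. It assumes only a `bash` binary on `PATH`; `MiniShell` is an invented name, and the real SDK protocol (see the Shell Sessions page) differs:

```python
import subprocess

class MiniShell:
    """Toy persistent shell: one bash process, many commands (sketch only).

    Captures stdout only; stderr passes through to the parent process.
    Commands whose output contains the sentinel string would confuse it.
    """
    SENTINEL = "__END__"

    def __init__(self):
        self.proc = subprocess.Popen(
            ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )

    def run(self, command):
        # After each command, print a sentinel plus its exit code so we
        # know where the command's output ends.
        self.proc.stdin.write(f"{command}\nprintf '%s %s\\n' {self.SENTINEL} $?\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.startswith(self.SENTINEL):
                exit_code = int(line.split()[1])
                return exit_code, "".join(lines)
            lines.append(line)

    def close(self):
        self.proc.stdin.close()  # EOF makes bash exit
        self.proc.wait()

sh = MiniShell()
sh.run("cd /tmp && export FOO=bar")   # state survives into the next call
code, out = sh.run("echo $FOO")
print(code, out.strip())  # 0 bar
sh.close()
```

Because `cd` and `export` mutate the single long-lived bash process, later commands observe them, which is exactly the property `sb.shell()` provides inside the sandbox.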
### `sb.fork(count=1) → list[Sandbox]`
Clones a running sandbox into N children. Each child is an
independent `Sandbox` object with its own ID. Filesystem state and
memory state at the moment of the fork are inherited.
```python
parent = Sandbox("python:3.12")
parent.create()
parent.exec("apt-get update && apt-get install -y build-essential")

# 5 children inherit the apt state, no re-install needed.
children = parent.fork(count=5)
for c in children:
    c.exec("python3 -c 'import platform; print(platform.python_version())'")
    c.destroy()

parent.destroy()
```

The runner snapshots the parent once, then restores each child via a
hardlinked `mem.snap` (cross-fork page cache sharing) and a reflinked
scratch disk (instant CoW). Per-child cost is ~16 ms after the one-time
snapshot.
See the Forking page for the full mechanism and what survives the fork.
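The hardlink half of that mechanism is plain POSIX and easy to demonstrate locally; the reflink half needs filesystem support (e.g. XFS or btrfs) and is omitted here. A sketch, with invented file names:

```python
import os
import tempfile

def hardlink_demo():
    """Show why per-child cost is small: two directory entries,
    one inode, zero bytes copied."""
    d = tempfile.mkdtemp()
    snap = os.path.join(d, "mem.snap")
    with open(snap, "wb") as f:
        f.write(b"\0" * 4096)          # stand-in for the parent's snapshot

    child = os.path.join(d, "child0.mem.snap")
    os.link(snap, child)               # instant: no data is copied

    return os.stat(snap).st_ino, os.stat(child).st_ino, os.stat(snap).st_nlink

parent_ino, child_ino, nlink = hardlink_demo()
print(parent_ino == child_ino, nlink)  # True 2
```

Because every child's `mem.snap` name resolves to the same inode, the kernel page cache holds one copy of the snapshot no matter how many children are restored from it.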
### `sb.url(port) → str`
Returns the public HTTPS URL that proxies to a port your in-sandbox code is listening on. The URL is anonymous — anyone with it can reach the in-sandbox server.
```python
sb.exec("nohup setsid jupyter notebook --port 8888 --no-browser \
    --ip 0.0.0.0 --NotebookApp.token='' >/dev/null 2>&1 &")
print(sb.url(port=8888))
# → https://run<id>.isorun.ai/sb/p/8888/
```

WebSocket upgrades pass through end-to-end, so HMR, Jupyter kernel
WebSockets, Gradio queues etc. all work without extra config.
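The URL shape, as inferred from the examples on this page (the `run<id>` host prefix and `/sb/p/<port>/` path are taken from those examples; `public_url` and the sample ID are illustrative):

```python
def public_url(run_id: str, port: int) -> str:
    # Host and path shape copied from the documented examples
    return f"https://run{run_id}.isorun.ai/sb/p/{port}/"

print(public_url("abc123", 8000))  # https://runabc123.isorun.ai/sb/p/8000/
```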
See the Public URLs page for the protocol details and security model.
### `sb.hibernate() → None`
Pauses the VM, snapshots it to disk, and frees the live Firecracker
process, TAP device, and cgroup. The runID stays valid; call
`sb.resume()` later. The sandbox costs nothing on the runner side
while hibernated.
```python
sb.hibernate()  # ~470 ms

# … hours later …
sb.resume()     # ~21 ms — faster than a cold create
```

What survives hibernation: filesystem state, memory state (in-memory
caches, JIT bytecode, model weights), running processes (at the same
PIDs), env vars, cwd. ESTABLISHED TCP connections from before the
hibernation are reset on resume.
### `sb.resume() → None`
Restores a hibernated sandbox in ~21 ms. Same runID, same state.
### `sb.destroy() → DestroyStats`
Frees the sandbox permanently, including any hibernation snapshot on disk. Returns measured resource usage:
```python
stats = sb.destroy()
print(stats.cpu_ms)           # CPU time used (per-second billed)
print(stats.mem_peak_bytes)   # peak RSS
print(stats.disk_used_bytes)  # CoW scratch usage
print(stats.uptime_ms)        # wall clock from create to destroy
```

## File operations

```python
sb.upload_file("local.txt", "/remote/path/file.txt")
data = sb.download("/remote/path/file.txt")
sb.write_text("/remote/path/hello.txt", "hi\n")
files = sb.ls("/remote/path/")
```

## Snapshots (named, persistent)
For checkpoint/restore across sandbox lifetimes (vs `fork()`, which
clones a running sandbox into immediate children, and `hibernate()`,
which keeps the same runID):
```python
snap_id = sb.checkpoint()  # named snapshot stored on the runner
new_sb = Sandbox.restore(snap_id)
```

## Async client
`AsyncSandbox` mirrors the sync API, with awaitable methods. Use it
inside an `asyncio` event loop.
```python
import asyncio
from isorun import AsyncSandbox

async def main():
    async with AsyncSandbox("python:3.12") as sb:
        result = await sb.exec("python3 -c 'print(42)'")
        print(result.stdout)

asyncio.run(main())
```

## Errors
All SDK exceptions inherit from `IsorunError`:

| Exception | When |
|---|---|
| `IsorunError` | Generic — bad request, server error, validation failure |
| `IsorunTimeoutError` | Exec or operation exceeded its timeout |
| `IsorunConnectionError` | Network failure to the API |
`IsorunError.code` carries the HTTP status code when applicable.
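A minimal sketch of the hierarchy as documented. Only the three class names and the `code` attribute come from this page; the constructor signature shown is an assumption:

```python
class IsorunError(Exception):
    """Base class for all SDK errors; `code` is the HTTP status when known.
    Constructor signature is illustrative, not the SDK's actual one."""
    def __init__(self, message, code=None):
        super().__init__(message)
        self.code = code

class IsorunTimeoutError(IsorunError):
    pass

class IsorunConnectionError(IsorunError):
    pass

# Catching the base class covers every SDK failure mode:
try:
    raise IsorunTimeoutError("exec exceeded 30 s", code=408)
except IsorunError as e:
    print(type(e).__name__, e.code)  # IsorunTimeoutError 408
```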