Public URLs

sb.url(port) returns a public HTTPS URL that proxies to whatever your sandbox is listening on at that port. Start a vite dev server, a jupyter notebook, a gradio app, or any HTTP server inside the sandbox — and access it from a browser anywhere on the internet.

Use case: dev server preview

from isorun import Sandbox

with Sandbox("node:22") as sb:
    sb.create()
    sb.exec("git clone https://github.com/me/my-app && cd my-app && npm install")
    # Start vite in the background. Use nohup + setsid + redirect so
    # the exec call returns immediately and the server keeps running.
    sb.exec("cd my-app && nohup setsid npm run dev >/dev/null 2>&1 &")
    # Get a public URL anyone can open in a browser.
    url = sb.url(port=5173)
    print(url)
    # → https://run<id>.isorun.ai/sb/p/5173/
    # Hand it to the user. Vite HMR over WebSocket works automatically.
    notify_user(url)

URL format

https://<runID>.isorun.ai/sb/p/<port>/<path>?<query>
Component   Meaning
<runID>     Sandbox ID returned by sb.id (16 hex chars)
<port>      TCP port your in-sandbox server is listening on
<path>      Forwarded verbatim to the in-sandbox server
<query>     Forwarded verbatim

So https://run0123456789abcdef.isorun.ai/sb/p/8000/foo/bar?q=1 becomes a request to localhost:8000/foo/bar?q=1 inside the sandbox.
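Assembling such a URL by hand follows directly from the format above. A minimal sketch (`build_url` is an illustrative helper, not part of the SDK — in practice sb.url(port) does this for you):

```python
def build_url(run_id: str, port: int, path: str = "/", query: str = "") -> str:
    # Mirror the proxy's scheme: https://<runID>.isorun.ai/sb/p/<port>/<path>?<query>
    base = f"https://{run_id}.isorun.ai/sb/p/{port}{path}"
    return f"{base}?{query}" if query else base

print(build_url("run0123456789abcdef", 8000, "/foo/bar", "q=1"))
# → https://run0123456789abcdef.isorun.ai/sb/p/8000/foo/bar?q=1
```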

What works through the proxy

  • All HTTP methods. GET, POST, PUT, DELETE, PATCH — anything.
  • Streaming responses. SSE, chunked encoding, etc. pass through without buffering.
  • WebSocket upgrades. Vite HMR, Jupyter kernel websockets, gradio’s queue WS, langchain streaming — all work via raw TCP hijack tunneling on the runner and Cloudflare’s native WS pass-through on the worker.
  • Cookies. Forwarded both directions.
  • Large request and response bodies. No size cap from the proxy.

Authentication model

The URL is anonymous — anyone who has it can hit it. The 16-hex-character sandbox ID carries 64 bits of entropy, so it’s effectively unguessable unless you share it.

This matches the e2b model. If you need stronger auth in front of your in-sandbox service, run it behind a reverse proxy with auth inside the sandbox itself, or ship a per-port allowlist via your own gateway.
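The lightest version of in-sandbox auth is a token check wrapped around the app itself. A minimal sketch as WSGI middleware — SECRET, the wrapper, and the choice of a custom X-Auth-Token header are all illustrative (a custom header sidesteps any question of how Authorization headers interact with the proxy):

```python
SECRET = "replace-with-a-long-random-token"  # illustrative value

def require_token(app):
    # Reject any request that does not carry our shared token in
    # the X-Auth-Token header before it reaches the real app.
    def wrapped(environ, start_response):
        if environ.get("HTTP_X_AUTH_TOKEN") != SECRET:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"unauthorized"]
        return app(environ, start_response)
    return wrapped

def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

protected = require_token(hello)  # serve this with any WSGI server in the sandbox
```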

What gets stripped before the request reaches your in-sandbox server:

  • The runner’s Authorization: Bearer ORIGIN_API_KEY header — your in-sandbox server never sees runner credentials.

What gets added:

  • X-Forwarded-Host: origin-<server>.isorun.ai
  • X-Forwarded-Proto: https
  • X-Real-IP: <client IP>
  • Cloudflare’s CF-Ray, CF-Connecting-IP, etc.
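Given those injected headers, an in-sandbox server can recover the real client address instead of the proxy hop’s. A sketch — the precedence order here is an assumption, not something the proxy mandates:

```python
def client_ip(headers: dict) -> str:
    # The proxy sets X-Real-IP; Cloudflare also adds CF-Connecting-IP.
    # Prefer X-Real-IP, fall back to CF-Connecting-IP, else "unknown".
    return headers.get("X-Real-IP") or headers.get("CF-Connecting-IP") or "unknown"

print(client_ip({"X-Real-IP": "203.0.113.7"}))
# → 203.0.113.7
```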

Performance

End-to-end latency from a client anywhere on the internet through the full chain (CF colo → cloudflared tunnel → runner reverse proxy → guest TCP) is ~120 ms per request on a shared TCP connection, or ~180 ms with a fresh TLS handshake. Most of that is network — the proxy layers themselves add only a few milliseconds.

For traffic that needs the lowest possible latency, hold a persistent connection to the URL: HTTP keep-alive, WebSocket, or streaming SSE.
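To see the keep-alive difference yourself, time repeated requests over one session versus fresh connections. A sketch — the timing helper is generic, and the commented usage assumes the third-party requests library against your own sandbox URL:

```python
import time

def mean_latency(fn, n: int = 20) -> float:
    # Average wall-clock seconds per call across n invocations of fn.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Usage against a real sandbox URL (network code, shown as comments):
#   import requests
#   s = requests.Session()                            # keep-alive: one TCP/TLS setup
#   shared = mean_latency(lambda: s.get(url))         # shared-connection latency
#   fresh = mean_latency(lambda: requests.get(url))   # fresh TLS handshake each time
```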

Limits

Constraint            Value
Ports per sandbox     any TCP port your code listens on
Method support        all HTTP methods, including PATCH and custom verbs
WebSocket support     yes, end-to-end
Auth                  none — sandbox ID is the credential
Per-request overhead  ~5 ms (worker + reverse proxy) on top of network RTT