Credential Injection

The agent inside a sandbox needs to call third-party APIs (OpenAI, Anthropic, GitHub, HuggingFace, etc.). The naive way is to set OPENAI_API_KEY=sk-... as a sandbox env var — but then the agent’s process can read it from os.environ, dump it to a file, exfiltrate it via DNS, or include it in an LLM prompt by accident.

Credential injection solves this by holding the real secret on the host and never letting it cross into the guest VM. The host runs a reverse proxy that watches outbound HTTPS connections from the sandbox; when one matches a known service URL, it inserts the matching credential into the request before forwarding upstream.
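The substitution step can be sketched as a pure function: given the headers of an outbound request, swap the guest's placeholder for the real secret when the destination host is known. This is illustrative only — `PLACEHOLDER`, `REAL_SECRETS`, and `rewrite_outbound` are hypothetical names, not isorun internals.

```python
# Sketch of the host-side substitution: the guest only ever holds a
# placeholder; the proxy swaps in the real secret before forwarding upstream.
PLACEHOLDER = "sk-isorun-proxy-managed"

REAL_SECRETS = {
    "api.openai.com": "sk-real-openai-key",  # held on the host, never in the guest
}

def rewrite_outbound(host: str, headers: dict) -> dict:
    """Replace the placeholder with the real secret for a known host."""
    secret = REAL_SECRETS.get(host)
    if secret is None:
        return headers  # unknown host: forward unchanged (or block, per policy)
    return {
        k: v.replace(PLACEHOLDER, secret) if isinstance(v, str) else v
        for k, v in headers.items()
    }

guest_headers = {"Authorization": f"Bearer {PLACEHOLDER}"}
print(rewrite_outbound("api.openai.com", guest_headers))
# {'Authorization': 'Bearer sk-real-openai-key'}
```

Because the rewrite happens on the host side of the vsock/TLS boundary, nothing the guest does — dumping memory, tracing its own syscalls — can recover the real value.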

Usage

```python
from isorun import Sandbox

with Sandbox(
    "python",
    credentials={
        "OPENAI_API_KEY": "sk-real-openai-key",
        "ANTHROPIC_API_KEY": "sk-ant-real-anthropic-key",
        "GITHUB_TOKEN": "ghp_real-github-token",
    },
) as sb:
    # The agent's environment has placeholder values:
    sb.exec("env | grep -E 'OPENAI|ANTHROPIC|GITHUB'")
    # OPENAI_API_KEY=sk-isorun-proxy-managed
    # OPENAI_BASE_URL=http://10.0.0.1:9090/proxy/openai
    # …
    # And the agent's HTTPS calls Just Work:
    sb.exec("python3 -c 'import openai; print(openai.models.list())'")
```

The agent never sees the real secret. If it tries to print os.environ["OPENAI_API_KEY"], it gets the placeholder. If it exfiltrates /proc/self/environ, same. The only thing it can do with the placeholder is make HTTPS calls through the proxy — which is exactly what you wanted.

Service detection

The proxy maps each env var name to a known service:

Env var             Service hostname
OPENAI_API_KEY      api.openai.com
ANTHROPIC_API_KEY   api.anthropic.com
GOOGLE_API_KEY      generativelanguage.googleapis.com
GITHUB_TOKEN        api.github.com
HUGGINGFACE_TOKEN   api-inference.huggingface.co

For each known env var the proxy injects the right header (Authorization: Bearer ..., x-api-key: ..., etc.) on requests that hit the matching hostname. SDKs that already use the *_BASE_URL env var convention (which is most of them) pick up the proxy automatically.
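One way to picture the mapping is a table from env var to (hostname, header builder). The header styles shown (Bearer for OpenAI and GitHub, `x-api-key` for Anthropic) match those services' public API conventions, but `SERVICES` and `headers_for` are a sketch, not isorun's internal data structure.

```python
# Illustrative env-var -> (hostname, header builder) mapping used to decide
# which header to inject on requests to a given upstream host.
SERVICES = {
    "OPENAI_API_KEY": ("api.openai.com", lambda k: {"Authorization": f"Bearer {k}"}),
    "ANTHROPIC_API_KEY": ("api.anthropic.com", lambda k: {"x-api-key": k}),
    "GITHUB_TOKEN": ("api.github.com", lambda k: {"Authorization": f"Bearer {k}"}),
}

def headers_for(host: str, credentials: dict) -> dict:
    """Build the injection headers for an outbound request to `host`."""
    for env_var, (service_host, build) in SERVICES.items():
        if host == service_host and env_var in credentials:
            return build(credentials[env_var])
    return {}  # no matching credential: inject nothing

print(headers_for("api.anthropic.com", {"ANTHROPIC_API_KEY": "sk-ant-xyz"}))
# {'x-api-key': 'sk-ant-xyz'}
```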

Per-endpoint filtering

For tighter control — allow POST /v1/chat/completions but reject DELETE /v1/admin/... — use Endpoint Rules to specify which methods + paths the proxy is willing to forward for each credential.
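A minimal sketch of how such rules might match, assuming each credential carries an allow-list of (method, path glob) pairs — `RULES` and `allowed` are hypothetical names, not the Endpoint Rules API itself:

```python
import fnmatch

# Hypothetical rule shape: per credential, an allow-list of (method, path glob).
# Anything not on the list is refused before the secret is ever injected.
RULES = {
    "OPENAI_API_KEY": [
        ("POST", "/v1/chat/completions"),
        ("GET", "/v1/models*"),
    ],
}

def allowed(env_var: str, method: str, path: str) -> bool:
    """True if the proxy is willing to forward this request for this credential."""
    return any(
        method == m and fnmatch.fnmatch(path, pattern)
        for m, pattern in RULES.get(env_var, [])
    )

print(allowed("OPENAI_API_KEY", "POST", "/v1/chat/completions"))  # True
print(allowed("OPENAI_API_KEY", "DELETE", "/v1/admin/keys"))      # False
```

Rejecting before injection matters: a blocked request leaves the proxy with the placeholder still in place, so nothing sensitive reaches the wire.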

Combining with network profiles

Credential injection composes cleanly with network profiles. The profile blocks all egress except the API hosts you trust; credential injection ensures the agent’s calls to those hosts go out with the right key, without ever putting the key in the guest’s address space.
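One convenient consequence: the egress allow-list can be derived directly from the credential set, so the network profile and the injection table never drift apart. The sketch below assumes a `SERVICE_HOSTS` mapping mirroring the table above; the derivation is illustrative, not isorun's API.

```python
# Derive the minimal egress allow-list from the credentials handed to the
# sandbox: exactly the hostnames of the credentialed APIs, nothing else.
SERVICE_HOSTS = {
    "OPENAI_API_KEY": "api.openai.com",
    "ANTHROPIC_API_KEY": "api.anthropic.com",
    "GITHUB_TOKEN": "api.github.com",
}

def egress_allowlist(credentials: dict) -> set:
    """Hosts the network profile should permit: only the credentialed APIs."""
    return {SERVICE_HOSTS[name] for name in credentials if name in SERVICE_HOSTS}

print(egress_allowlist({"OPENAI_API_KEY": "sk-...", "GITHUB_TOKEN": "ghp_..."}))
```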