# LangChain
```bash
pip install isorun[langchain]
```

The integration ships two tools:
- `IsorunCodeInterpreterTool` — runs Python code inside a Firecracker microVM. Drop-in replacement for `E2BCodeInterpreterTool`.
- `IsorunShellTool` — runs arbitrary shell commands.
Both tools cache one sandbox per tool instance and reset the auto-destroy timer on every call so the sandbox stays alive across multiple agent turns.
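The caching behavior described above can be pictured with a small stand-in sketch in plain Python (no isorun dependency; `_FakeSandbox` and `CachingTool` are illustrative names, not isorun's real classes):

```python
import time

class _FakeSandbox:
    """Stand-in for the real microVM sandbox (illustration only)."""
    def __init__(self, timeout=600):
        self.deadline = time.monotonic() + timeout  # auto-destroy deadline

    def reset_timer(self, timeout=600):
        # Push the auto-destroy deadline back into the future.
        self.deadline = time.monotonic() + timeout

class CachingTool:
    """Sketch of the one-sandbox-per-tool-instance pattern."""
    def __init__(self):
        self._sandbox = None

    def run(self, code):
        if self._sandbox is None:       # first call: create and cache
            self._sandbox = _FakeSandbox()
        self._sandbox.reset_timer()     # every call: keep the sandbox alive
        return f"ran: {code}"

tool = CachingTool()
tool.run("print(1)")
first = tool._sandbox
tool.run("print(2)")
assert tool._sandbox is first  # same sandbox reused across agent turns
```

Because the sandbox and its timer live on the tool instance, two separate tool instances get two separate sandboxes, which is why the examples below reuse a single instance across calls.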
## With LangGraph
```python
from isorun.integrations.langchain import IsorunCodeInterpreterTool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

with IsorunCodeInterpreterTool() as tool:
    agent = create_react_agent(
        ChatOpenAI(model="gpt-4o"),
        tools=[tool],
    )
    result = agent.invoke({
        "messages": [("user", "Compute the first 100 prime numbers and print them.")]
    })
    print(result["messages"][-1].content)
```

## With LangChain agents (legacy AgentExecutor)
```python
from isorun.integrations.langchain import IsorunCodeInterpreterTool, IsorunShellTool
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

code = IsorunCodeInterpreterTool()
shell = IsorunShellTool()
tools = [code, shell]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You write and run code to solve user tasks."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_tools_agent(ChatOpenAI(model="gpt-4o"), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

try:
    result = executor.invoke({"input": "Install requests, fetch example.com, return the title"})
    print(result["output"])
finally:
    code.close()
    shell.close()
```

## Drop-in for users coming from e2b
The package exports `E2BCodeInterpreterTool` as an alias of `IsorunCodeInterpreterTool`. Existing e2b code keeps working with one import change:
```python
# Before
from langchain_e2b import E2BCodeInterpreterTool

# After — just change the package
from isorun.integrations.langchain import E2BCodeInterpreterTool
```

## Custom image
Pass any OCI image to the tool constructor. The first boot pulls the image and builds a golden snapshot (~30 s); every subsequent invocation gets the standard 27 ms cold boot.
```python
tool = IsorunCodeInterpreterTool(
    image="python:3.12",       # default
    sandbox_timeout=600,       # auto-destroy after 10 min idle
    allow=["api.openai.com"],  # egress allow list
)
```

## What you get for free
- KVM hardware isolation (the agent’s code can’t escape the VM)
- Per-second billing — pay only for the seconds the tool actually runs
- 27 ms cold start vs e2b’s ~150 ms
- `sb.fork()`, `sb.shell()`, `sb.url(port)`, `sb.hibernate()`, etc. are all available on the underlying `IsorunCodeInterpreterTool._sandbox` if you need them