# CrewAI
```shell
pip install isorun[crewai]
```

The integration ships two CrewAI-native tools:

- `IsorunCodeInterpreterTool` — runs Python in a Firecracker microVM
- `IsorunShellTool` (alias `IsorunSandboxTool`) — runs shell commands
Both tools follow the standard CrewAI `BaseTool` shape and cache one
sandbox per tool instance.
## Example crew
```python
from crewai import Agent, Task, Crew

from isorun.integrations.crewai import IsorunCodeInterpreterTool, IsorunShellTool

code = IsorunCodeInterpreterTool()
shell = IsorunShellTool()

researcher = Agent(
    role="Data scientist",
    goal="Analyze the user's data and produce findings.",
    backstory="You're an experienced analyst who runs everything in a sandbox.",
    tools=[code, shell],
)

task = Task(
    description=(
        "Download the latest IMDB top-100 dataset, compute the average rating "
        "by decade, and report the decade with the highest average."
    ),
    expected_output="A short paragraph with the decade and the average rating.",
    agent=researcher,
)

try:
    Crew(agents=[researcher], tasks=[task]).kickoff()
finally:
    code.close()
    shell.close()
```

## Tool lifecycle
Each tool instance owns one sandbox. The sandbox is created lazily on
the first call and destroyed when you call `tool.close()` (or use the
tool as a context manager). The tool resets the sandbox's idle timer
on every call so it stays alive across multiple agent turns.
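The lifecycle described above (lazy creation on first call, idle-timer reset on every call, teardown on `close()` or context-manager exit) can be modelled with a minimal sketch. The `_Sandbox` and `LazySandboxTool` classes below are illustrative stand-ins, not isorun's actual internals:

```python
import time


class _Sandbox:
    """Illustrative stand-in for a Firecracker-backed sandbox."""

    def __init__(self):
        self.alive = True
        self.last_used = time.monotonic()

    def destroy(self):
        self.alive = False


class LazySandboxTool:
    """Sketch of the tool lifecycle: lazy create, idle reset, explicit close."""

    def __init__(self):
        self._sandbox = None  # no sandbox until the first call

    def run(self, command):
        if self._sandbox is None:
            self._sandbox = _Sandbox()  # created lazily on first call
        self._sandbox.last_used = time.monotonic()  # reset the idle timer
        return f"ran: {command}"

    def close(self):
        if self._sandbox is not None:
            self._sandbox.destroy()
            self._sandbox = None

    # context-manager support, so `with ...:` closes the sandbox for you
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()


with LazySandboxTool() as tool:
    tool.run("echo hi")  # sandbox is created here, on first use
# on exit from the `with` block, close() destroys the sandbox
```

The `try`/`finally` in the example crew above plays the same role as the `with` block here: whichever way the run ends, the sandbox is torn down.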
If your crew has multiple agents that need to share state (files, installed packages, in-memory caches), give them the same tool instance. If they need isolation, give each agent its own tool instance.
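The shared-vs-isolated choice can be sketched without CrewAI or isorun at all. `SketchShellTool` below is a hypothetical stand-in whose dict of files models the one-sandbox-per-instance filesystem; the point is only that two agents holding the same instance see the same state, while separate instances do not:

```python
class SketchShellTool:
    """Illustrative stand-in: one sandbox (here, a dict of files) per instance."""

    def __init__(self):
        self._files = {}  # stands in for the sandbox filesystem

    def write(self, path, data):
        self._files[path] = data

    def read(self, path):
        return self._files.get(path)


# Shared state: give both agents the same tool instance.
shared = SketchShellTool()
agent_a_tools = [shared]
agent_b_tools = [shared]
agent_a_tools[0].write("/tmp/data.csv", "a,b\n1,2")
agent_b_tools[0].read("/tmp/data.csv")  # agent B sees agent A's file

# Isolation: give each agent its own instance.
only_a = SketchShellTool()
only_b = SketchShellTool()
only_a.write("/tmp/data.csv", "a,b\n1,2")
only_b.read("/tmp/data.csv")  # None: separate sandboxes, separate files
```

With the real tools the same rule applies: reuse one `IsorunShellTool` across agents to share files and installed packages, or construct one per agent for isolation.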