See how AI coding assistants really work. Load a repo, enter a prompt, and watch the context window fill up — system prompt, CLAUDE.md, /commands, tool calls, and all.
Files —
File Viewer
Agentic Loop
Network
Click a file in the tree to view its contents.
Or try a preset prompt below to watch the agentic loop in action.
Network Log
0 requests
#  Method  URL  Status  Size  Time
Where's My Context?
1x
Works with OpenAI, OpenRouter, or any OpenAI-compatible API. Key stays in your browser.
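As a sketch of what "any OpenAI-compatible API" means in practice: the request below targets the standard /chat/completions route with a Bearer key. The base URL, model name, and helper function here are illustrative placeholders, not the app's actual code.

```javascript
// Build (but don't send) a request to an OpenAI-compatible chat endpoint.
// The key is held client-side and sent only to the API host you configure.
function buildChatRequest(baseUrl, apiKey, messages) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`, // same header shape for OpenAI and OpenRouter
      },
      body: JSON.stringify({ model: "gpt-4o-mini", messages }), // model name is a placeholder
    },
  };
}

const req = buildChatRequest("https://api.openai.com/v1", "sk-...", [
  { role: "user", content: "Hello" },
]);
// fetch(req.url, req.options) would then issue the call from the browser.
```

Swapping providers is just a matter of changing the base URL, which is why the same client code works against OpenRouter or a self-hosted compatible server.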
Loading...
Custom sources
Override these if the default sources are blocked (e.g., by an enterprise proxy). The library URL must serve an ES module exporting CreateMLCEngine; the model URL should point to the directory containing the compiled model files.
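One way a loader might validate such a library override (a sketch only; the app's actual loading code is not shown here, and `loadEngineFrom` is a hypothetical helper):

```javascript
// Dynamically import the user-supplied library URL and confirm it exposes
// the expected CreateMLCEngine export before using it.
async function loadEngineFrom(libraryUrl) {
  const mod = await import(libraryUrl); // must be served as an ES module
  if (typeof mod.CreateMLCEngine !== "function") {
    throw new Error("Module does not export CreateMLCEngine");
  }
  return mod.CreateMLCEngine;
}
```

Validating the export up front gives a clear error message instead of a confusing failure deeper in model initialization.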
Runs entirely in your browser via WebGPU. Output quality is limited; the demo illustrates the process, not production-grade results.
Model Settings
Full context window (estimated): 0 / 200,000 tokens (0%)
Used tokens breakdown
System Prompt — —
AGENTS.md — —
Commands — —
Tool Definitions — —
Tool Results — —
Your Prompt — —
Model Response — —
Token counts are estimated (~4 chars per token). Actual counts vary by model tokenizer.
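The ~4-characters-per-token heuristic can be expressed directly; the helper names below are illustrative, not the app's internals:

```javascript
// Rough token estimate: ~4 characters per token. Real tokenizers
// (tiktoken, SentencePiece) differ per model, so this is a ballpark only.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Sum the estimated tokens across context sections against a window limit.
function contextUsage(sections, limit = 200000) {
  const used = sections.reduce((sum, s) => sum + estimateTokens(s), 0);
  return { used, limit, percent: ((used / limit) * 100).toFixed(1) };
}

// estimateTokens("hello world") → 3  (11 chars / 4, rounded up)
```

For English prose the heuristic is usually within ~20% of the real count; code and non-Latin scripts drift further, which is why the UI labels every count as an estimate.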
How AI Assistants Work
1. Assemble context — system prompt + CLAUDE.md, any invoked /commands, tool schemas
2. Read your prompt — added to the context window
3. Call the model — sends everything to the LLM
4. Execute tools — the model requests file reads, grep searches, edits
5. Loop — tool results go back to the model for the next turn
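The steps above can be sketched as a single loop. `callModel` and `runTool` are stand-ins: a real assistant would hit an LLM API and a sandboxed tool runner, and the message shapes here are simplified assumptions.

```javascript
// Minimal agentic loop: call the model, execute any tool it requests,
// append the result to the context, and repeat until it answers directly.
function agenticLoop(context, callModel, runTool, maxTurns = 10) {
  for (let turn = 0; turn < maxTurns; turn++) {
    const response = callModel(context);                  // send full context to the LLM
    context.push({ role: "assistant", ...response });     // response joins the context
    if (!response.toolCall) return response.text;         // no tool needed: final answer
    const result = runTool(response.toolCall);            // e.g. read a file, run grep
    context.push({ role: "tool", content: result });      // result feeds the next turn
  }
  throw new Error("Max turns reached without a final answer");
}
```

The cap on turns matters in practice: without it, a model that keeps requesting tools would loop forever while the context window fills up.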