
Dude, Where's My Context?

See how AI coding assistants really work. Load a repo, enter a prompt, and watch the context window fill up — system prompt, CLAUDE.md, /commands, tool calls, and all.

Files
File Viewer
Agentic Loop
Network
  Click a file in the tree to view its contents.

  Or try a preset prompt below to watch the agentic loop in action.
Model Settings
Full context window (estimated): 0 / 200,000 tokens (0%)
Token usage breakdown
System Prompt
AGENTS.md
Commands
Tool Definitions
Tool Results
Your Prompt
Model Response
Token counts are estimated (~4 chars per token). Actual counts vary by model tokenizer.
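The ~4-characters-per-token heuristic described above can be sketched as follows. This is an illustrative sketch, not the simulator's actual code; the function names and the per-section breakdown are assumptions.

```python
# Rough token estimate used by many context-window visualizers:
# ~4 characters per token. Real tokenizers (BPE) vary by model.
CONTEXT_WINDOW = 200_000  # token budget shown in the simulator

def estimate_tokens(text: str) -> int:
    """Estimate token count at ~4 characters per token."""
    return len(text) // 4

def window_usage(sections: dict[str, str]) -> dict[str, int]:
    """Per-section token estimates, e.g. system prompt vs. tool results."""
    return {name: estimate_tokens(body) for name, body in sections.items()}

usage = window_usage({
    "system_prompt": "x" * 4_000,   # ~1,000 tokens
    "your_prompt": "Fix the failing test in utils.py",
})
total = sum(usage.values())
print(f"{total} / {CONTEXT_WINDOW} tokens ({100 * total / CONTEXT_WINDOW:.1f}%)")
```

A real assistant would use the model's own tokenizer for exact counts; the character heuristic is only good enough for a progress bar.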

How AI Assistants Work

  1. Assemble context — system prompt + CLAUDE.md, any invoked /commands, tool schemas
  2. Read your prompt — added to the context window
  3. Call the model — sends everything to the LLM
  4. Execute tools — model requests file reads, grep, edits
  5. Loop — tool results go back to the model for the next turn
  6. Respond — final answer with all context consumed