AI coding agents face a significant challenge: context loss during conversation compaction. As sessions progress and conversation history grows, agents must compress older messages to stay within finite context windows. This process often discards critical structural information about codebases—function signatures, dependency chains, and architectural decisions disappear.
The Compaction Problem
Every AI agent grapples with the tension between a finite context window and a codebase far too large to fit inside it. When compaction occurs without a persistent structural model, the agent loses track of code relationships it has already analyzed. The result is inefficient behavior: agents re-read files, repeat analysis, and lose architectural understanding they had already built up.
What Goes Wrong in Practice
A concrete example illustrates this issue: during a 45-minute refactoring session, an agent traces a complete call chain from API layer through service classes to database. It understands entry points, internal utilities, and shared features. Then compaction hits. The agent discards this architectural work and must re-read files from scratch on the next request, asking "questions it already answered" and potentially making conflicting changes.
Code Graphs as a Solution
Code graphs provide persistent external memory by representing codebases as structured relationships between functions, classes, modules, and their connections. Through tools like Supermodel's MCP server, agents can query for:
- Functions within modules
- File dependencies
- Call chains for features
- Type definitions and usage patterns
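The queries above can be sketched against a toy in-memory graph. The `CodeGraph` class, its edge kinds, and the qualified names below are illustrative assumptions for this sketch, not Supermodel's actual schema or MCP API:

```python
from collections import defaultdict

class CodeGraph:
    """Toy code graph: nodes are qualified names, edges are typed
    relationships such as "contains" or "calls" (hypothetical schema)."""

    def __init__(self):
        self.edges = defaultdict(list)  # (source, kind) -> [targets]

    def add(self, src, kind, dst):
        self.edges[(src, kind)].append(dst)

    def functions_in(self, module):
        # "Functions within modules"
        return self.edges[(module, "contains")]

    def call_chain(self, entry):
        # "Call chains for features": follow "calls" edges from an entry point.
        chain, stack, seen = [], [entry], set()
        while stack:
            fn = stack.pop()
            if fn in seen:
                continue
            seen.add(fn)
            chain.append(fn)
            stack.extend(self.edges[(fn, "calls")])
        return chain

g = CodeGraph()
g.add("api.users", "contains", "api.users.get_user")
g.add("api.users.get_user", "calls", "services.user.fetch")
g.add("services.user.fetch", "calls", "db.query")
print(g.functions_in("api.users"))         # ['api.users.get_user']
print(g.call_chain("api.users.get_user"))  # API layer -> service -> database
```

Because the graph persists outside the conversation, these answers survive compaction: the agent can re-ask `call_chain` instead of re-reading the files that produced it.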
The key difference from plain text search: graph queries give you structure and relationships, not just text matches.
Beyond Compaction: Broader Applications
Code graphs enable several advanced capabilities:
Dead Code Detection: Identify unused functions and classes without reading entire codebases.
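A minimal sketch of that idea, assuming a flat call graph keyed by function name (the `calls` dict and entry-point list below are hypothetical inputs, not any particular tool's output): compute reachability from the entry points, then flag everything left over as dead.

```python
def dead_functions(calls, entry_points):
    """Return functions unreachable from any entry point.

    `calls` maps each function to the functions it calls
    (a hypothetical flat call graph for illustration).
    """
    reachable, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn in reachable:
            continue
        reachable.add(fn)
        stack.extend(calls.get(fn, []))
    return set(calls) - reachable

calls = {
    "main": ["helper"],
    "helper": [],
    "old_report": ["helper"],  # nothing reaches old_report from main
}
print(dead_functions(calls, ["main"]))  # {'old_report'}
```

Note that `old_report` is flagged even though it calls live code; liveness depends on incoming reachability, not outgoing edges.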
Impact Analysis: Determine which modules depend on utilities before modifications to prevent unintended ripple effects.
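One way to sketch impact analysis, assuming a hypothetical module-to-imports map rather than a real dependency extractor: invert the edges, then transitively walk the reverse graph from the utility being changed.

```python
def dependents_of(target, deps):
    """Transitively find every module that depends on `target`.

    `deps` maps each module to the modules it imports
    (hypothetical data standing in for extracted dependencies).
    """
    # Invert the dependency edges so we can ask "who imports me?".
    rdeps = {}
    for mod, imports in deps.items():
        for imp in imports:
            rdeps.setdefault(imp, set()).add(mod)

    impacted, stack = set(), [target]
    while stack:
        mod = stack.pop()
        for dep in rdeps.get(mod, ()):
            if dep not in impacted:
                impacted.add(dep)
                stack.append(dep)
    return impacted

deps = {
    "api.users": ["services.user"],
    "services.user": ["utils.dates"],
    "services.billing": ["utils.dates"],
}
print(dependents_of("utils.dates", deps))  # all three other modules are impacted
```

Changing `utils.dates` ripples up through both services and into the API layer, which is exactly the blast radius an agent should check before editing.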
Test Coverage Analysis: Trace which functions each test exercises directly from call graphs.
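That trace is again a reachability query over the call graph. In this sketch, `call_graph` and the test-to-entry-point mapping are assumed inputs that a code graph tool could supply:

```python
def coverage_map(call_graph, tests):
    """For each test, list every function its call graph reaches.

    `call_graph` maps functions to their callees; `tests` maps test
    names to the function each test invokes directly (both hypothetical).
    """
    def reach(fn):
        seen, stack = set(), [fn]
        while stack:
            f = stack.pop()
            if f in seen:
                continue
            seen.add(f)
            stack.extend(call_graph.get(f, []))
        return seen

    return {test: sorted(reach(entry)) for test, entry in tests.items()}

call_graph = {"get_user": ["fetch"], "fetch": ["query"], "query": []}
tests = {"test_get_user": "get_user", "test_fetch": "fetch"}
print(coverage_map(call_graph, tests))
# test_get_user covers get_user, fetch, query; test_fetch covers fetch, query
```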
Codebase Evaluation: Assess domain structure, dependency health, and module coupling quickly.
Documentation Generation: Ground documentation in actual code structure rather than potentially outdated comments.
Developer Onboarding: Provide new team members and agents with structural maps for faster orientation.
Why This Matters Now
As agents tackle increasingly complex multi-file tasks, the compaction problem intensifies. While simple bug fixes may survive context compression, large refactors across many files expose the limitations of purely conversation-based context. Code graphs represent essential infrastructure for serious AI-assisted development.