# Context Intelligence
Pilot’s context intelligence layer transforms raw AI sessions into structured, efficient
workflows. It auto-detects your project’s .agent/ directory and uses it to dramatically
reduce token waste while maintaining full project awareness.
## The Challenge
Every AI coding session starts from scratch. Without context engineering, the AI loads entire codebases, burns through tokens, and loses track of decisions between sessions.
| Issue | Impact |
|---|---|
| Loads entire codebase on start | ~150,000 tokens consumed immediately |
| Short productive sessions | Exhausted after 5-7 exchanges |
| No persistent memory | Repeated explanations every session |
| Context waste | 92% of loaded context goes unused |
## How Context Intelligence Solves It
The context engine combines structured documentation loading, knowledge persistence, and session management, reducing token usage by 12x while extending productive session length by 4x.
| Metric | Without Context Engine | With Context Engine | Improvement |
|---|---|---|---|
| Token usage per session | ~150,000 | ~12,000 | 12x reduction |
| Productive exchanges | 5-7 | 20+ | 4x longer |
| Context efficiency | 8% | 92% | 11.5x better |
| Knowledge persistence | None | Graph-based | Decisions survive sessions |
| Session continuity | Start over | Resume from markers | Zero ramp-up time |
## How It Works
When Pilot detects a .agent/ directory in your project, it automatically prefixes
every execution with context initialization:
```go
// internal/executor/runner.go — BuildPrompt()
if _, err := os.Stat(filepath.Join(task.ProjectPath, ".agent")); err == nil {
	sb.WriteString("Start my Navigator session.\n\n")
}
```

This activates the context engine to:

- Load the index — Only `DEVELOPMENT-README.md` (~2k tokens), not the entire codebase
- Lazy-load on demand — Additional docs loaded only when referenced
- Capture decisions — Non-obvious choices stored in the knowledge graph
- Preserve context — Session markers enable resume without ramp-up
## Key Capabilities
### Lazy Loading Architecture
Instead of dumping everything into context at once, the context engine uses a tiered loading strategy:
| Tier | What Loads | When | Token Cost |
|---|---|---|---|
| Always | DEVELOPMENT-README.md (index) | Session start | ~2,000 |
| On demand | Task docs, architecture docs | Referenced in conversation | ~3,000 |
| Rare | Full system docs, SOPs | Architecture changes | ~5,000 |
| Never | Archived tasks, old context | N/A | 0 |
Total per session: ~12k tokens vs 50k+ loading everything.
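The tiered strategy can be sketched in Go as a simple budget calculation. This is an illustration only: the tier names and per-tier token costs come from the table above, while the `tier` type and `budget` helper are hypothetical, not Pilot's actual API.

```go
package main

import "fmt"

// tier describes one level of the lazy-loading strategy.
type tier struct {
	name      string
	docs      string
	tokenCost int // approximate tokens loaded when the tier activates
}

// tiers mirrors the loading table: only the index is loaded eagerly,
// everything else waits until it is referenced.
var tiers = []tier{
	{"always", "DEVELOPMENT-README.md (index)", 2000},
	{"on-demand", "task docs, architecture docs", 3000},
	{"rare", "full system docs, SOPs", 5000},
}

// budget sums the token cost of only the tiers a session touches,
// rather than paying for the whole codebase up front.
func budget(active ...int) int {
	total := 0
	for _, i := range active {
		total += tiers[i].tokenCost
	}
	return total
}

func main() {
	// A typical session loads the index plus some on-demand docs.
	fmt.Println(budget(0, 1)) // index + on-demand tier
}
```

The point of the sketch: cost scales with what the session actually references, which is how the per-session total stays far below loading everything.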
### Knowledge Graph
The context engine maintains a persistent knowledge graph across sessions:
- Decisions — Why certain approaches were chosen ("Use JWT over sessions for stateless scaling")
- Patterns — Reusable solutions discovered during development
- Pitfalls — Problems to avoid ("GitHub API rate limits during CI — use conditional requests")
- Dependencies — Relationships between components and files
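As a rough sketch of what one graph node might look like in Go — the `Entry` type and its fields are hypothetical, not Pilot's schema; only the four node kinds and the example summary come from the docs:

```go
package main

import "fmt"

// EntryKind labels the four node types the knowledge graph stores.
type EntryKind string

const (
	Decision   EntryKind = "decision"
	Pattern    EntryKind = "pattern"
	Pitfall    EntryKind = "pitfall"
	Dependency EntryKind = "dependency"
)

// Entry is one persisted node: what was learned, and which
// components it relates to.
type Entry struct {
	Kind    EntryKind
	Summary string
	Related []string // linked files or components
}

func main() {
	e := Entry{
		Kind:    Decision,
		Summary: "Use JWT over sessions for stateless scaling",
		Related: []string{"internal/auth"},
	}
	fmt.Printf("[%s] %s\n", e.Kind, e.Summary)
}
```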
### Session Markers
Context save points that preserve state before breaks, risky changes, or context compaction:
- Save progress — Create checkpoints before destructive operations
- Resume work — Continue from where you left off with full context
- Share context — Transfer knowledge between sessions via `.agent/.context-markers/`
### Workflow Enforcement
The context engine enforces structured execution with mandatory workflow checks:
```text
WORKFLOW CHECK
Loop trigger: [YES/NO]
Complexity: [0.X]
Mode: [LOOP/TASK/DIRECT]
```

This routes tasks through the appropriate execution mode — loop mode for iterative work, task mode for planned features, or direct mode for simple changes.
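The routing decision could be approximated as follows. The complexity threshold is invented for illustration; only the loop trigger, the complexity score, and the three mode names come from the workflow check.

```go
package main

import "fmt"

// routeMode picks an execution mode from the workflow-check inputs:
// a loop trigger always wins, otherwise complexity decides between
// a planned task and a direct change.
func routeMode(loopTrigger bool, complexity float64) string {
	switch {
	case loopTrigger:
		return "LOOP" // iterative work
	case complexity >= 0.5: // threshold assumed for this sketch
		return "TASK" // planned feature
	default:
		return "DIRECT" // simple change
	}
}

func main() {
	fmt.Println(routeMode(false, 0.3)) // DIRECT
	fmt.Println(routeMode(false, 0.7)) // TASK
	fmt.Println(routeMode(true, 0.9))  // LOOP
}
```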
## Backend Support
Context intelligence works across all three execution backends:
| Backend | Context Engine | Knowledge Graph | Session Markers |
|---|---|---|---|
| Claude Code | Full support | Full support | Full support |
| Qwen Code | Full support | Full support | Full support |
| OpenCode | Full support | Full support | Full support |
The .agent/ directory is backend-agnostic — it provides the same structured context
regardless of which execution engine processes the task.
## In the Execution Report
After every task, Pilot shows context intelligence status in the execution report:
```text
📊 EXECUTION REPORT
───────────────────────
🧭 Context: Active
   Mode: nav-task
📈 Phases:
   Research   45s (20%)
   Implement  2m  (54%)
   Verify     57s (26%)
💰 Tokens:
   Input:  45k
   Output: 12k
   Cost:   ~$0.82
```

If context intelligence is not detected:

```text
⚠️ Context: not found (running without codebase context)
```