
Context Intelligence

Pilot’s context intelligence layer transforms raw AI sessions into structured, efficient workflows. It auto-detects your project’s .agent/ directory and uses it to dramatically reduce token waste while maintaining full project awareness.

The Challenge

Every AI coding session starts from scratch. Without context engineering, the AI loads entire codebases, burns through tokens, and loses track of decisions between sessions.

| Issue | Impact |
| --- | --- |
| Loads entire codebase on start | ~150,000 tokens consumed immediately |
| Short productive sessions | Exhausted after 5-7 exchanges |
| No persistent memory | Repeated explanations every session |
| Context waste | 92% of loaded context goes unused |

How Context Intelligence Solves It

The context engine combines structured documentation loading, knowledge persistence, and session management, reducing token usage by 12x while extending productive session length by 4x.

| Metric | Without Context Engine | With Context Engine | Improvement |
| --- | --- | --- | --- |
| Token usage per session | ~150,000 | ~12,000 | 12x reduction |
| Productive exchanges | 5-7 | 20+ | 4x longer |
| Context efficiency | 8% | 92% | 11.5x better |
| Knowledge persistence | None | Graph-based | Decisions survive sessions |
| Session continuity | Start over | Resume from markers | Zero ramp-up time |

How It Works

When Pilot detects a .agent/ directory in your project, it automatically prefixes every execution with context initialization:

```go
// internal/executor/runner.go — BuildPrompt()
if _, err := os.Stat(filepath.Join(task.ProjectPath, ".agent")); err == nil {
	sb.WriteString("Start my Navigator session.\n\n")
}
```

This activates the context engine to:

  1. Load the index — Only DEVELOPMENT-README.md (~2k tokens), not the entire codebase
  2. Lazy-load on demand — Additional docs loaded only when referenced
  3. Capture decisions — Non-obvious choices stored in the knowledge graph
  4. Preserve context — Session markers enable resume without ramp-up

Key Capabilities

Lazy Loading Architecture

Instead of dumping everything into context at once, the context engine uses a tiered loading strategy:

| Tier | What Loads | When | Token Cost |
| --- | --- | --- | --- |
| Always | DEVELOPMENT-README.md (index) | Session start | ~2,000 |
| On demand | Task docs, architecture docs | Referenced in conversation | ~3,000 |
| Rare | Full system docs, SOPs | Architecture changes | ~5,000 |
| Never | Archived tasks, old context | N/A | 0 |

Total per session: ~12k tokens, versus 50k+ when everything is loaded up front.

Knowledge Graph

The context engine maintains a persistent knowledge graph across sessions:

  • Decisions — Why certain approaches were chosen ("Use JWT over sessions for stateless scaling")
  • Patterns — Reusable solutions discovered during development
  • Pitfalls — Problems to avoid ("GitHub API rate limits during CI — use conditional requests")
  • Dependencies — Relationships between components and files
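The four record types above can be sketched as a small schema. The type and field names here are illustrative assumptions; the engine's actual graph format is internal:

```go
package main

// EntryKind mirrors the four record types in the knowledge graph.
// Hypothetical schema for illustration only.
type EntryKind string

const (
	Decision   EntryKind = "decision"
	Pattern    EntryKind = "pattern"
	Pitfall    EntryKind = "pitfall"
	Dependency EntryKind = "dependency"
)

// Entry is one node in the graph, persisted across sessions.
type Entry struct {
	Kind    EntryKind
	Summary string   // e.g. "Use JWT over sessions for stateless scaling"
	Relates []string // files or components this entry links to
}

// byKind filters the graph for one record type.
func byKind(entries []Entry, k EntryKind) []Entry {
	var out []Entry
	for _, e := range entries {
		if e.Kind == k {
			out = append(out, e)
		}
	}
	return out
}
```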

Session Markers

Context save points that preserve state before breaks, risky changes, or context compaction:

  • Save progress — Create checkpoints before destructive operations
  • Resume work — Continue from where you left off with full context
  • Share context — Transfer knowledge between sessions via .agent/.context-markers/

Workflow Enforcement

The context engine enforces structured execution with mandatory workflow checks:

```
WORKFLOW CHECK
Loop trigger: [YES/NO]
Complexity: [0.X]
Mode: [LOOP/TASK/DIRECT]
```

This routes tasks through the appropriate execution mode — loop mode for iterative work, task mode for planned features, or direct mode for simple changes.
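The routing step can be sketched as a small dispatcher over the workflow-check fields. The 0.5 complexity threshold and the function name are illustrative assumptions, not the engine's actual rules:

```go
package main

// selectMode routes a task based on the workflow check: loop mode
// for iterative work, task mode for planned features, direct mode
// for simple changes. Threshold is an illustrative assumption.
func selectMode(complexity float64, loopTrigger bool) string {
	switch {
	case loopTrigger:
		return "LOOP"
	case complexity >= 0.5:
		return "TASK"
	default:
		return "DIRECT"
	}
}
```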

Backend Support

Context intelligence works across all three execution backends:

| Backend | Context Engine | Knowledge Graph | Session Markers |
| --- | --- | --- | --- |
| Claude Code | Full support | Full support | Full support |
| Qwen Code | Full support | Full support | Full support |
| OpenCode | Full support | Full support | Full support |

The .agent/ directory is backend-agnostic — it provides the same structured context regardless of which execution engine processes the task.

In the Execution Report

After every task, Pilot shows context intelligence status in the execution report:

```
📊 EXECUTION REPORT
───────────────────────
🧭 Context: Active
   Mode: nav-task
📈 Phases: Research 45s (20%)   Implement 2m (54%)   Verify 57s (26%)
💰 Tokens: Input: 45k   Output: 12k   Cost: ~$0.82
```

If context intelligence is not detected:

```
⚠️ Context: not found (running without codebase context)
```