Configuration
Pilot uses a YAML configuration file at ~/.pilot/config.yaml. All settings support environment variable expansion with ${VAR_NAME} syntax.
```bash
# Interactive setup wizard — creates config.yaml
pilot setup

# Validate config and dependencies
pilot doctor
```

Authentication
Pilot delegates AI execution to Claude Code, which manages its own authentication. Pilot itself does not require an Anthropic API key to function.
| Component | Auth Method | Required |
|---|---|---|
| Claude Code (execution engine) | Claude subscription login or ANTHROPIC_API_KEY | Yes — one or the other |
| GitHub | GITHUB_TOKEN env var or adapters.github.token in config | Yes, if using GitHub |
| GitLab | GITLAB_TOKEN env var or adapters.gitlab.token in config | Yes, if using GitLab |
| Telegram | TELEGRAM_BOT_TOKEN env var or config | No |
| LLM classifier (smart intent routing) | ANTHROPIC_API_KEY env var | No — falls back to keyword matching |
| Epic decomposition (Haiku subtask parser) | ANTHROPIC_API_KEY env var | No — falls back to regex parser |
| Discord | DISCORD_BOT_TOKEN env var or adapters.discord.bot_token in config | Yes, if using Discord |
| Plane | PLANE_API_KEY env var or adapters.plane.api_key in config | Yes, if using Plane |
Most users don’t need ANTHROPIC_API_KEY at all. If you’re logged into Claude Code with a Claude subscription, Pilot works out of the box. The API key is only needed for optional internal features (LLM classifier, structured subtask parsing) that have simpler fallbacks.
Environment Variables
| Variable | Description | Required |
|---|---|---|
GITHUB_TOKEN | GitHub personal access token (repo + workflow scopes) | If using GitHub |
GITLAB_TOKEN | GitLab personal or project access token | If using GitLab |
TELEGRAM_BOT_TOKEN | Telegram bot token for chat interface | No |
ANTHROPIC_API_KEY | Enables LLM intent classifier and structured subtask parsing | No |
OPENAI_API_KEY | Enables Whisper voice transcription in Telegram | No |
SLACK_BOT_TOKEN | Slack bot token for notifications | No |
DISCORD_BOT_TOKEN | Discord bot token for chat interface | No |
PLANE_API_KEY | Plane.so API key for work item management | No |
Use ${VAR_NAME} in any string field to reference environment variables. Pilot expands them at load time. Paths starting with ~ are expanded to your home directory.
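Both expansions can be combined in a single field. As an illustration (the repository path shown here is hypothetical):

```yaml
adapters:
  github:
    token: ${GITHUB_TOKEN}         # replaced with the env var's value at load time
    project_path: "~/code/my-repo" # ~ expands to your home directory
```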
GitHub
Connects Pilot to your GitHub repository for issue polling and PR management.
```yaml
adapters:
  github:
    enabled: true
    token: ${GITHUB_TOKEN}
    repo: "owner/repo"                  # owner/repo format
    project_path: "/path/to/local/repo" # must match repo
    webhook_secret: ""                  # for HMAC verification (webhooks mode)
    pilot_label: "pilot"                # label that triggers Pilot
    polling:
      enabled: true                     # poll for issues (vs webhooks)
      interval: 30s
      label: "pilot"
    stale_label_cleanup:
      enabled: true                     # auto-remove stale pilot-in-progress labels
      interval: 30m
      threshold: 1h                     # label age before cleanup
```

| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable GitHub adapter |
token | string | — | PAT or GitHub App token |
repo | string | — | Repository in owner/repo format |
project_path | string | — | Local filesystem path to the repo |
webhook_secret | string | — | HMAC secret for webhook verification |
pilot_label | string | "pilot" | Label that marks issues for Pilot |
polling.enabled | bool | false | Enable issue polling (alternative to webhooks) |
polling.interval | duration | 30s | How often to poll for new issues |
polling.label | string | "pilot" | Label to filter when polling |
stale_label_cleanup.enabled | bool | true | Auto-remove stale pilot-in-progress labels |
stale_label_cleanup.interval | duration | 30m | Cleanup check interval |
stale_label_cleanup.threshold | duration | 1h | How old a label must be to be considered stale |
Project Board
Sync task status to a GitHub Projects V2 board automatically.
```yaml
adapters:
  github:
    project_board:
      enabled: true
      project_number: 3        # project number from the URL
      status_field: "Status"   # single-select field name
      statuses:
        in_progress: "In Dev"
        review: "Ready for Review"
        done: "Done"
        failed: "Blocked"
```

| Field | Type | Default | Description |
|---|---|---|---|
project_board.enabled | bool | false | Enable GitHub Projects V2 board sync |
project_board.project_number | int | — | Project number (from the project URL) |
project_board.status_field | string | "Status" | Name of the single-select status field |
project_board.statuses.in_progress | string | — | Column name for tasks being worked on |
project_board.statuses.review | string | — | Column name for tasks in review |
project_board.statuses.done | string | — | Column name for completed tasks |
project_board.statuses.failed | string | — | Column name for failed/blocked tasks |
Telegram
Chat interface for sending tasks, receiving notifications, and approving actions.
```yaml
adapters:
  telegram:
    enabled: true
    bot_token: ${TELEGRAM_BOT_TOKEN}
    chat_id: "123456789"
    allowed_ids:
      - 123456789                    # your Telegram user/chat ID
    project_path: "/path/to/default/repo"
    polling: true                    # enable inbound message polling
    plain_text_mode: true            # plain text instead of Markdown
    transcription:
      backend: "auto"                # "whisper-api" or "auto"
      openai_api_key: ${OPENAI_API_KEY}
    rate_limit:
      enabled: true
      messages_per_minute: 20
      tasks_per_hour: 10
      burst_size: 5
    llm_classifier:
      enabled: false
      api_key: ${ANTHROPIC_API_KEY}  # falls back to env var
      timeout_seconds: 2
      history_size: 10
      history_ttl: 30m
```

| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Telegram adapter |
bot_token | string | — | Telegram bot token from @BotFather |
chat_id | string | — | Default chat ID for outbound notifications |
allowed_ids | []int64 | — | User/chat IDs authorized to send tasks |
project_path | string | — | Default project path for tasks sent via Telegram |
polling | bool | false | Enable inbound message polling |
plain_text_mode | bool | true | Use plain text instead of Markdown formatting |
transcription.backend | string | "auto" | Voice transcription backend: whisper-api or auto |
transcription.openai_api_key | string | — | OpenAI API key for Whisper transcription |
rate_limit.enabled | bool | true | Enable per-user rate limiting |
rate_limit.messages_per_minute | int | 20 | Max messages per minute per user |
rate_limit.tasks_per_hour | int | 10 | Max task executions per hour per user |
rate_limit.burst_size | int | 5 | Burst allowance above rate limit |
llm_classifier.enabled | bool | false | Enable LLM-based intent classification |
llm_classifier.api_key | string | — | Anthropic API key (falls back to ANTHROPIC_API_KEY) |
llm_classifier.timeout_seconds | int | 2 | Classification timeout in seconds |
llm_classifier.history_size | int | 10 | Messages to keep per chat for context |
llm_classifier.history_ttl | duration | 30m | TTL for conversation history |
Always set allowed_ids to restrict who can trigger Pilot via Telegram. Without it, anyone who discovers your bot can submit tasks.
Executor
Controls which execution backend (Claude Code, Qwen Code, or OpenCode) Pilot uses to run tasks.
```yaml
executor:
  type: "claude-code"      # "claude-code", "qwen-code", or "opencode"
  auto_create_pr: true
  direct_commit: false     # commit directly to branch without PR
  detect_ephemeral: true   # detect and skip ephemeral changes
  skip_self_review: false  # skip self-review step
  use_worktree: false      # execute tasks in isolated git worktrees
  claude_code:
    command: "claude"      # path to claude CLI
    extra_args: []         # additional CLI arguments
    planning_timeout: 2m   # max time for epic planning before fallback to direct mode
  model_routing:
    enabled: false
    trivial: "claude-haiku"
    simple: "claude-sonnet-4-6"
    medium: "claude-sonnet-4-6"
    complex: "claude-opus-4-6"
  effort_routing:
    enabled: false
    trivial: "low"
    simple: "medium"
    medium: "high"
    complex: "max"
  timeout:
    default: "30m"
    trivial: "5m"
    simple: "10m"
    medium: "30m"
    complex: "60m"
  decompose:
    enabled: false
    min_complexity: "complex"  # only decompose complex tasks
    max_subtasks: 5            # 2-10 range
    min_description_words: 50  # skip short descriptions
```

| Field | Type | Default | Description |
|---|---|---|---|
type | string | "claude-code" | Backend type: claude-code, qwen-code, or opencode |
auto_create_pr | bool | true | Automatically create PRs after execution |
direct_commit | bool | false | Commit directly without PR |
detect_ephemeral | bool | true | Detect and skip ephemeral file changes |
skip_self_review | bool | false | Skip the self-review step |
use_worktree | bool | false | Execute tasks in isolated git worktrees (enables execution with uncommitted changes) |
claude_code.command | string | "claude" | Path to the Claude CLI binary |
claude_code.extra_args | []string | [] | Additional CLI arguments |
claude_code.planning_timeout | duration | 2m | Maximum time for epic planning before fallback to direct execution. Affects Slack /plan and Telegram planning commands. |
qwen_code.command | string | "qwen" | Path to the Qwen Code CLI binary |
qwen_code.use_session_resume | bool | false | Reuse sessions for self-review |
model_routing.enabled | bool | false | Route tasks to different models by complexity |
model_routing.trivial | string | "claude-haiku" | Model for trivial tasks |
model_routing.simple | string | "claude-sonnet-4-6" | Model for simple tasks |
model_routing.medium | string | "claude-sonnet-4-6" | Model for medium tasks |
model_routing.complex | string | "claude-opus-4-6" | Model for complex tasks |
effort_routing.enabled | bool | false | Route thinking effort by complexity |
effort_routing.trivial | string | "low" | Effort for trivial tasks |
effort_routing.simple | string | "medium" | Effort for simple tasks |
effort_routing.medium | string | "high" | Effort for medium tasks |
effort_routing.complex | string | "max" | Effort for complex tasks |
timeout.default | duration | 30m | Default execution timeout |
timeout.trivial | duration | 5m | Timeout for trivial tasks |
timeout.simple | duration | 10m | Timeout for simple tasks |
timeout.medium | duration | 30m | Timeout for medium tasks |
timeout.complex | duration | 60m | Timeout for complex tasks |
decompose.enabled | bool | false | Auto-decompose complex tasks into subtasks |
decompose.min_complexity | string | "complex" | Minimum complexity to trigger decomposition |
decompose.max_subtasks | int | 5 | Maximum subtasks created (2-10) |
decompose.min_description_words | int | 50 | Minimum description words to decompose |
```yaml
executor:
  type: "claude-code"
  claude_code:
    command: "claude"
    extra_args: ["--verbose"]
```

```yaml
executor:
  type: "opencode"
  opencode:
    server_url: "http://127.0.0.1:4096"
    model: "anthropic/claude-sonnet-4-6"
    provider: "anthropic"
    auto_start_server: false
    server_command: "opencode serve"
```

OpenCode backend fields
| Field | Type | Default | Description |
|---|---|---|---|
opencode.server_url | string | "http://127.0.0.1:4096" | OpenCode server URL |
opencode.model | string | "anthropic/claude-sonnet-4-6" | Model identifier in provider/model format |
opencode.provider | string | "anthropic" | LLM provider name |
opencode.auto_start_server | bool | true | Auto-start OpenCode server if not running |
opencode.server_command | string | "opencode serve" | Command to start the server |
Hooks
Claude Code hooks provide inline quality gates during task execution.
```yaml
executor:
  hooks:
    enabled: false            # Enable Claude Code hooks
    run_tests_on_stop: true   # Stop gate: run tests before completion
    block_destructive: true   # PreToolUse: block destructive Bash commands
    lint_on_save: false       # PostToolUse: run linter after file edits
```

| Field | Type | Default | Description |
|---|---|---|---|
hooks.enabled | bool | false | Enable Claude Code hooks |
hooks.run_tests_on_stop | bool | true | Stop hook runs tests before completion |
hooks.block_destructive | bool | true | PreToolUse hook blocks dangerous commands |
hooks.lint_on_save | bool | false | PostToolUse hook runs linter after file changes |
Stagnation Monitor
Detects when tasks are stuck in loops or making no progress.
```yaml
executor:
  stagnation:
    enabled: false
    warn_timeout: 10m
    pause_timeout: 20m
    abort_timeout: 30m
    warn_at_iteration: 8
    pause_at_iteration: 12
    abort_at_iteration: 15
    state_history_size: 5
    identical_states_threshold: 3
    grace_period: 30s
    commit_partial_work: true
```

| Field | Type | Default | Description |
|---|---|---|---|
stagnation.enabled | bool | false | Enable stagnation detection |
stagnation.warn_timeout | duration | 10m | Time before warning alert |
stagnation.pause_timeout | duration | 20m | Time before pausing execution |
stagnation.abort_timeout | duration | 30m | Time before aborting task |
stagnation.warn_at_iteration | int | 8 | Iteration count before warning |
stagnation.pause_at_iteration | int | 12 | Iteration count before pausing |
stagnation.abort_at_iteration | int | 15 | Iteration count before aborting |
stagnation.state_history_size | int | 5 | Size of state history for loop detection |
stagnation.identical_states_threshold | int | 3 | Identical states needed to detect loop |
stagnation.grace_period | duration | 30s | Grace period after intervention |
stagnation.commit_partial_work | bool | true | Commit partial progress before aborting |
Claude Code SDK Features
Advanced Claude Code backend features for session management and structured output.
```yaml
executor:
  claude_code:
    use_session_resume: false     # Reuse session for self-review
    use_from_pr: false            # Resume PR context for CI fixes
    use_structured_output: false  # Machine-readable classifier output
```

| Field | Type | Default | Description |
|---|---|---|---|
claude_code.use_session_resume | bool | false | Reuse session for self-review (~40% token savings) |
claude_code.use_from_pr | bool | false | Resume PR context for autopilot CI fixes |
claude_code.use_structured_output | bool | false | Enable structured JSON output for classifiers |
Navigator Auto-Init
Automatically initialize Navigator documentation structure for projects.
```yaml
executor:
  navigator:
    auto_init: true
    templates_path: ""  # Optional custom templates path
```

| Field | Type | Default | Description |
|---|---|---|---|
navigator.auto_init | bool | true | Auto-create .agent/ on first task execution |
navigator.templates_path | string | "" | Override plugin templates location |
Worktree Isolation
Enable worktree isolation to execute tasks in separate git worktrees, preventing conflicts with uncommitted changes:
```yaml
executor:
  use_worktree: true
```

When enabled:
- Each task runs in an isolated temporary worktree (/tmp/pilot-worktree-*)
- Your original repository remains untouched during execution
- Context config (.agent/) is copied to the worktree
- Changes are pushed from the worktree branch to remote
- Automatic cleanup removes worktrees after task completion
- Orphaned worktrees are cleaned up on Pilot startup
Benefits:
- Execute tasks with uncommitted local changes
- No risk of merge conflicts with your work in progress
- Parallel development: code while Pilot works
- Clean execution environment for reliable automation
Performance impact: Minimal overhead (~100-200ms per worktree creation)
See the Worktree Isolation concept guide for detailed information on how it works, cleanup processes, and troubleshooting.
Autopilot
Autonomous PR lifecycle management — review, CI monitoring, merge, and release.
orchestrator:
autopilot:
enabled: false
environment: "stage" # dev, stage, prod
approval_source: "telegram" # telegram, slack, github-review
auto_review: true # self-review before PR
auto_merge: true # merge after CI passes
merge_method: "squash" # squash, merge, rebase
ci_wait_timeout: 30m
dev_ci_timeout: 5m # shorter timeout for dev env
ci_poll_interval: 30s
required_checks:
- test
- lint
auto_create_issues: true # create fix issues on CI failure
issue_labels:
- pilot
- autopilot-fix
notify_on_failure: true
max_failures: 3 # max fix attempts before giving up
max_merges_per_hour: 10 # rate limit merges
approval_timeout: 1h
merged_pr_scan_window: 30m # scan for recently merged PRs
github_review:
poll_interval: 30s # poll for GitHub review decisions
release:
enabled: false
trigger: "on_merge"
version_strategy: "conventional_commits"
tag_prefix: "v"
generate_changelog: true
notify_on_release: true
require_ci: true| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable autopilot mode |
environment | string | "stage" | Environment: dev, stage, prod |
approval_source | string | "telegram" | Where to get approvals: telegram, slack, github-review |
auto_review | bool | true | Run self-review before creating PR |
auto_merge | bool | true | Auto-merge after CI passes |
merge_method | string | "squash" | Git merge strategy |
ci_wait_timeout | duration | 30m | Max time to wait for CI |
dev_ci_timeout | duration | 5m | CI timeout in dev environment |
ci_poll_interval | duration | 30s | CI status polling interval |
required_checks | []string | ["test","lint"] | CI checks that must pass |
auto_create_issues | bool | true | Create fix issues when CI fails |
issue_labels | []string | ["pilot","autopilot-fix"] | Labels for auto-created fix issues |
notify_on_failure | bool | true | Send notification on failure |
max_failures | int | 3 | Max fix attempts before stopping |
max_merges_per_hour | int | 10 | Rate limit on merges |
approval_timeout | duration | 1h | Time before approval request expires |
merged_pr_scan_window | duration | 30m | Window to scan for recently merged PRs |
release.enabled | bool | false | Auto-create releases on merge |
release.trigger | string | "on_merge" | When to trigger release |
release.version_strategy | string | "conventional_commits" | How to determine version bump |
release.tag_prefix | string | "v" | Git tag prefix |
release.generate_changelog | bool | true | Auto-generate changelog |
release.require_ci | bool | true | Require CI pass before release |
Quality
Quality gates run after Pilot implements changes but before creating a PR. Each gate executes a shell command in the project directory. If a required gate fails, Pilot retries the implementation with the error output as feedback — the LLM sees what broke and fixes it.
How Quality Gates Work
```
Implementation complete
  → Run gates in order: build → test → lint → coverage → ...
  → All required gates pass? → Create PR
  → Required gate fails?
      → retry (default): feed error to LLM, re-implement, re-run gates
      → fail: stop immediately, mark task as failed
      → warn: log warning, create PR anyway
```

Even without explicit configuration, Pilot auto-detects your project type and runs a minimal build gate to catch compilation errors. Set quality.enabled: true to configure the full pipeline.
Auto-Detection
When quality gates are not explicitly configured, Pilot detects your project type by checking for marker files and applies a minimal build check:
| Project Type | Marker File | Auto-Detected Command |
|---|---|---|
| Go | go.mod | go build ./... |
| Node.js (TypeScript) | package.json + tsconfig.json | npm run build || npx tsc --noEmit |
| Node.js (JavaScript) | package.json | npm run build --if-present |
| Rust | Cargo.toml | cargo check |
| Python | pyproject.toml or setup.py | python -m py_compile on changed files |
To disable auto-detection, set quality.enabled: false explicitly.
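For instance, a minimal config that opts out of the auto-detected build gate:

```yaml
quality:
  enabled: false  # skip auto-detection; run no quality gates
```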
Gate Types
Seven built-in gate types with sensible default timeouts:
| Type | Default Timeout | Purpose |
|---|---|---|
build | 5m | Compilation / build step |
test | 10m | Test suite execution |
lint | 2m | Static analysis and formatting |
coverage | 10m | Code coverage with threshold enforcement |
security | 5m | Security scanning (e.g. gosec, npm audit) |
typecheck | 3m | Type checking (e.g. tsc --noEmit, mypy) |
custom | 5m | Any arbitrary command |
Project-Specific Examples
Go:

```yaml
quality:
  enabled: true
  gates:
    - name: build
      type: build
      command: "go build ./..."
      required: true
      timeout: 5m
      max_retries: 2
      failure_hint: "Fix compilation errors in the changed files"
    - name: test
      type: test
      command: "go test ./... -count=1"
      required: true
      timeout: 10m
      max_retries: 2
      failure_hint: "Fix failing tests or update test expectations"
    - name: lint
      type: lint
      command: "golangci-lint run ./..."
      required: false
      timeout: 3m
      max_retries: 1
      failure_hint: "Fix linting errors: formatting, unused imports, etc."
    - name: vet
      type: custom
      command: "go vet ./..."
      required: true
      timeout: 2m
      failure_hint: "Fix go vet issues"
    - name: coverage
      type: coverage
      command: "go test ./... -coverprofile=coverage.out && go tool cover -func=coverage.out"
      threshold: 70.0
      required: false
      timeout: 10m
```

Node.js / TypeScript:

```yaml
quality:
  enabled: true
  gates:
    - name: typecheck
      type: typecheck
      command: "npx tsc --noEmit"
      required: true
      timeout: 3m
      max_retries: 2
      failure_hint: "Fix TypeScript type errors"
    - name: build
      type: build
      command: "npm run build"
      required: true
      timeout: 5m
      max_retries: 1
      failure_hint: "Fix build errors"
    - name: test
      type: test
      command: "npm test -- --watchAll=false"
      required: true
      timeout: 10m
      max_retries: 2
      failure_hint: "Fix failing tests or update snapshots"
    - name: lint
      type: lint
      command: "npx eslint . --max-warnings 0"
      required: false
      timeout: 2m
      max_retries: 1
      failure_hint: "Fix ESLint warnings and errors"
    - name: coverage
      type: coverage
      command: "npm test -- --coverage --watchAll=false"
      threshold: 80.0
      required: false
      timeout: 10m
```

For monorepos using Turborepo or Nx, prefix commands with the workspace runner: npx turbo build or npx nx run-many --target=build.
Python:

```yaml
quality:
  enabled: true
  gates:
    - name: typecheck
      type: typecheck
      command: "mypy src/"
      required: false
      timeout: 3m
      failure_hint: "Fix type annotation errors"
    - name: test
      type: test
      command: "pytest -x --tb=short"
      required: true
      timeout: 10m
      max_retries: 2
      failure_hint: "Fix failing tests"
    - name: lint
      type: lint
      command: "ruff check . && ruff format --check ."
      required: false
      timeout: 2m
      max_retries: 1
      failure_hint: "Run 'ruff format .' to fix formatting"
    - name: coverage
      type: coverage
      command: "pytest --cov=src --cov-report=term-missing"
      threshold: 75.0
      required: false
      timeout: 10m
    - name: security
      type: security
      command: "bandit -r src/ -ll"
      required: false
      timeout: 3m
      failure_hint: "Fix security issues flagged by bandit"
```

Rust:

```yaml
quality:
  enabled: true
  gates:
    - name: build
      type: build
      command: "cargo check"
      required: true
      timeout: 10m
      max_retries: 2
      failure_hint: "Fix compilation errors"
    - name: test
      type: test
      command: "cargo test"
      required: true
      timeout: 15m
      max_retries: 2
      failure_hint: "Fix failing tests"
    - name: lint
      type: lint
      command: "cargo clippy -- -D warnings"
      required: false
      timeout: 5m
      max_retries: 1
      failure_hint: "Fix clippy warnings"
    - name: format
      type: custom
      command: "cargo fmt -- --check"
      required: false
      timeout: 1m
      failure_hint: "Run 'cargo fmt' to fix formatting"
```

Coverage Thresholds
Coverage gates parse output from common test runners to extract a percentage value. Supported output formats:
| Runner | Output Pattern | Example |
|---|---|---|
| Go | coverage: X.X% of statements | coverage: 85.3% of statements |
| Jest / NYC | Statements : X.X% or Lines : X.X% | Statements : 92.1% |
| pytest-cov | TOTAL ... X% | TOTAL 450 38 92% |
Set threshold on a coverage gate to enforce a minimum:
```yaml
- name: coverage
  type: coverage
  command: "go test ./... -coverprofile=c.out && go tool cover -func=c.out"
  threshold: 80.0  # fail if coverage drops below 80%
  required: true   # block PR creation
  timeout: 10m
```

When a coverage gate is required: false, Pilot logs a warning if coverage is below the threshold but still creates the PR. Set required: true to enforce it as a hard gate.
Failure Handling
The on_failure block defines global behavior when any required gate fails:
```yaml
quality:
  on_failure:
    action: "retry"  # retry | fail | warn
    max_retries: 3   # global retry limit across all gates
    notify_on:
      - failed       # send notification on these statuses
```

| Action | Behavior |
|---|---|
retry | Feed the error output back to the LLM, re-implement, re-run all gates. Default. |
fail | Stop immediately. Mark the task as failed. Useful for CI-only validation. |
warn | Log the failure as a warning. Continue to PR creation. |
Per-gate retries (max_retries on individual gates) control how many times a single gate retries before reporting failure. Global retries (on_failure.max_retries) control how many full implementation cycles run. These compose: a gate with max_retries: 2 inside a pipeline with on_failure.max_retries: 3 can attempt the gate up to 2×3 = 6 times total.
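As a sketch of how the two limits compose, using the numbers from the text:

```yaml
quality:
  gates:
    - name: test
      type: test
      command: "go test ./..."
      required: true
      max_retries: 2  # per-gate: up to 2 attempts per implementation cycle
  on_failure:
    action: "retry"
    max_retries: 3    # global: up to 3 full re-implementation cycles
    # worst case: the test gate runs up to 2 x 3 = 6 times
```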
Failure Hints
The failure_hint field on each gate is passed to the LLM as context when a gate fails. Write hints that help the AI fix the issue:
```yaml
- name: lint
  type: lint
  command: "golangci-lint run ./..."
  failure_hint: "Fix linting errors. Common issues: unused imports, missing error checks, formatting."
```

Good hints are specific and actionable. Bad hints are vague (“fix the errors”).
Option Reference
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable quality gates |
gates[].name | string | — | Gate identifier |
gates[].type | string | — | Gate type: build, test, lint, coverage, security, typecheck, custom |
gates[].command | string | — | Shell command to run |
gates[].required | bool | false | Block PR if gate fails |
gates[].timeout | duration | varies | Max execution time (see gate type defaults above) |
gates[].threshold | float64 | — | Minimum threshold (coverage gates) |
gates[].max_retries | int | 0 | Retry count per gate on failure |
gates[].retry_delay | duration | 0 | Delay between retries |
gates[].failure_hint | string | — | Context message passed to LLM on failure |
on_failure.action | string | "retry" | Default action: retry, fail, warn |
on_failure.max_retries | int | — | Global retry limit (full re-implementation cycles) |
on_failure.notify_on | []string | — | Statuses that trigger notifications: failed, passed, skipped |
Budget
Cost controls for API usage. Prevents runaway spending by enforcing daily, monthly, and per-task limits with configurable enforcement actions.
Budget enforcement is disabled by default. Enable it explicitly with budget.enabled: true. When disabled, Pilot tracks no costs and applies no limits.
How Budget Enforcement Works
```
┌─────────────┐     ┌──────────────┐     ┌────────────────┐     ┌──────────┐
│  New task   │────▶│ Check budget │────▶│ Within limits? │────▶│ Execute  │
│  arrives    │     │  (enforcer)  │     │                │     │  task    │
└─────────────┘     └──────────────┘     └───────┬────────┘     └────┬─────┘
                                                 │ No                │
                                          ┌──────▼───────┐    ┌──────▼─────┐
                                          │ Apply action │    │ Track cost │
                                          │ warn/pause/  │    │ per event  │
                                          │    stop      │    └────────────┘
                                          └──────────────┘
```

Budget checks run before each task starts. The enforcer queries accumulated spend from the metering database, compares against configured limits, and applies the configured action if a limit is exceeded. If the usage provider returns an error, the enforcer fails open — the task is allowed to proceed (logged as a warning).
Minimal Configuration
```yaml
budget:
  enabled: true
  daily_limit: 50.00
  monthly_limit: 500.00
```

This enables budget tracking with sensible defaults for per-task limits and enforcement actions.
Full Configuration
```yaml
budget:
  enabled: true
  daily_limit: 50.00     # USD per day
  monthly_limit: 500.00  # USD per month
  per_task:
    max_tokens: 100000   # hard cap per task execution
    max_duration: 30m    # context timeout per task
  on_exceed:
    daily: "pause"       # pause new tasks, finish current
    monthly: "stop"      # terminate immediately
    per_task: "stop"     # kill task that exceeds
  thresholds:
    warn_percent: 80     # alert at 80% of any limit
```

Option Reference
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable budget tracking and enforcement |
daily_limit | float64 | 50.00 | Maximum daily spend in USD. Resets at midnight local time |
monthly_limit | float64 | 500.00 | Maximum monthly spend in USD. Resets on the 1st of each month |
per_task.max_tokens | int64 | 100000 | Maximum tokens (input + output) a single task may consume |
per_task.max_duration | duration | 30m | Maximum wall-clock time for a single task. Creates a context deadline |
on_exceed.daily | string | "pause" | Action when daily limit is hit: warn, pause, stop |
on_exceed.monthly | string | "stop" | Action when monthly limit is hit: warn, pause, stop |
on_exceed.per_task | string | "stop" | Action when per-task limit is hit: warn, pause, stop |
thresholds.warn_percent | float64 | 80 | Percentage of any limit that triggers a warning alert |
Per-Task Limits
Per-task limits protect against individual runaway executions — a single complex issue consuming your entire daily budget.
Token limit (per_task.max_tokens): Tracks cumulative input + output tokens during task execution. When exceeded, the task is terminated based on on_exceed.per_task.
Duration limit (per_task.max_duration): Creates a Go context with a deadline. When the deadline expires, the executor’s context is cancelled and the task stops gracefully.
```yaml
# Small tasks, tight control
per_task:
  max_tokens: 50000   # ~$0.25 per task
  max_duration: 15m
```

```yaml
# Default — handles most issues
per_task:
  max_tokens: 100000  # ~$0.50 per task
  max_duration: 30m
```

```yaml
# Complex tasks, multi-file changes
per_task:
  max_tokens: 500000  # ~$2.50 per task
  max_duration: 1h
```

Setting max_tokens: 0 or max_duration: 0 disables that specific per-task limit. The task will run without that constraint (other limits still apply).
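For example, a hypothetical deployment that caps duration but not tokens:

```yaml
per_task:
  max_tokens: 0      # 0 disables the token cap
  max_duration: 45m  # the duration limit is still enforced
```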
Daily and Monthly Caps
Daily and monthly caps control aggregate spend across all tasks.
- Daily limit resets at midnight local time. If Pilot was paused due to a daily limit, it automatically resumes at the next reset.
- Monthly limit resets on the 1st of each month. Monthly pauses do NOT auto-resume — use pilot budget reset --confirm or wait for the new month.
```yaml
budget:
  daily_limit: 25.00     # $25/day
  monthly_limit: 200.00  # $200/month — ~8 working days
```

The enforcer checks monthly limits first (more severe), then daily limits. This means a monthly stop action takes priority over a daily pause.
Enforcement Actions: Warn, Pause, Stop
Each limit boundary has an independently configurable action:
| Action | Behavior | Current Task | New Tasks | Auto-Resume |
|---|---|---|---|---|
warn | Log warning + fire alert | Continues | Allowed | N/A |
pause | Block new tasks | Finishes normally | Blocked (queued) | Yes, at next daily reset |
stop | Terminate immediately | Killed | Blocked | No — manual reset required |
Recommended configurations by use case:
```yaml
# Solo: catch runaway tasks, don't lose work
on_exceed:
  daily: "warn"     # just notify, keep working
  monthly: "pause"  # stop new tasks at month cap
  per_task: "stop"  # kill individual runaway tasks
```

```yaml
# Team: strict daily, hard monthly
on_exceed:
  daily: "pause"    # pause at daily cap
  monthly: "stop"   # hard stop at monthly cap
  per_task: "stop"  # kill runaway tasks
```

```yaml
# Production: hard limits everywhere
on_exceed:
  daily: "stop"
  monthly: "stop"
  per_task: "stop"
```

Warning Thresholds and Alerts
The warn_percent threshold fires alerts before a limit is exceeded. Alerts are delivered through your configured alert channels (Slack, Telegram, email, PagerDuty — see Alerts).
```yaml
thresholds:
  warn_percent: 80  # alert at 80% of daily or monthly limit
```

Alert types fired by the budget enforcer:
| Alert Type | Severity | Trigger |
|---|---|---|
daily_budget_warning | warning | Daily spend reaches warn_percent |
monthly_budget_warning | warning | Monthly spend reaches warn_percent |
daily_budget_exceeded | critical | Daily spend reaches 100% |
monthly_budget_exceeded | critical | Monthly spend reaches 100% |
Cost Tracking Model
Pilot tracks five event types that contribute to spend calculations:
| Event Type | Unit | Rate | Description |
|---|---|---|---|
task | per execution | $1.00 | Flat fee per task execution |
token | per 1M tokens | $3.60 input / $18.00 output | Claude API tokens (Sonnet pricing + 20% margin) |
compute | per minute | $0.01 | Execution wall-clock time |
storage | per GB/month | $0.10 | Memory and log storage |
api_call | per call | $0.001 | External API calls (GitHub, Linear, etc.) |
Token pricing includes a 20% platform margin over base Claude API rates. Opus 4.6 pricing ($5/$25 per 1M tokens) is tracked separately for tasks routed to Opus.
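Using the rates in the table above, a rough per-task cost estimate works out as follows. This is illustrative arithmetic only, not Pilot's metering implementation, and it ignores storage and api_call events:

```python
# Illustrative spend estimate using the event rates from the table above.
RATE_INPUT_PER_M = 3.60     # USD per 1M input tokens (Sonnet + 20% margin)
RATE_OUTPUT_PER_M = 18.00   # USD per 1M output tokens
RATE_TASK = 1.00            # flat fee per task execution
RATE_COMPUTE_PER_MIN = 0.01 # per minute of wall-clock time

def estimate_task_cost(input_tokens: int, output_tokens: int, minutes: float) -> float:
    token_cost = (input_tokens / 1e6) * RATE_INPUT_PER_M \
               + (output_tokens / 1e6) * RATE_OUTPUT_PER_M
    return RATE_TASK + token_cost + minutes * RATE_COMPUTE_PER_MIN

# 80k input + 20k output tokens over 10 minutes:
# 1.00 + 0.288 + 0.36 + 0.10 ≈ 1.748
print(round(estimate_task_cost(80_000, 20_000, 10), 3))
```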
CLI Commands
Monitor and manage budget from the command line:
# View current spend vs limits with progress bars
pilot budget status
# View budget configuration and YAML template
pilot budget config
# Reset blocked tasks counter and resume daily-paused execution
pilot budget reset --confirm
pilot budget status displays color-coded progress bars:
- White: Under 80% of limit
- Yellow: 80-99% of limit (warning zone)
- Red: 100%+ (limit exceeded)
The status also shows paused/blocked indicators and per-task enforcement settings.
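The zone boundaries track the warn_percent threshold. A minimal sketch of the color mapping (illustrative only, not Pilot's implementation):

```python
def budget_color(spent: float, limit: float, warn_percent: float = 80) -> str:
    """Map current spend against a limit to the status-bar color zones."""
    pct = spent / limit * 100
    if pct >= 100:
        return "red"     # limit exceeded
    if pct >= warn_percent:
        return "yellow"  # warning zone
    return "white"       # under the warning threshold
```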
pilot budget reset only clears the blocked tasks counter and resumes execution paused by daily limits. Monthly limit pauses require waiting for the new month or updating monthly_limit in config.
Orchestrator
Controls task execution strategy and scheduling.
orchestrator:
model: "claude-sonnet-4-6" # default model for planning
max_concurrent: 2 # max parallel tasks
execution:
mode: "sequential" # sequential | parallel | auto
wait_for_merge: true # wait for PR merge before next task
poll_interval: 30s
pr_timeout: 1h # max wait for PR merge
daily_brief:
enabled: false
schedule: "0 9 * * 1-5" # cron: 9 AM weekdays
timezone: "America/New_York"
channels:
- type: slack
channel: "#dev-updates"
content:
include_metrics: true
include_errors: true
max_items_per_section: 10
filters:
projects: [] # empty = all projects
| Field | Type | Default | Description |
|---|---|---|---|
model | string | "claude-sonnet-4-6" | Default model for planning |
max_concurrent | int | 2 | Max parallel task executions |
execution.mode | string | "sequential" | sequential — wait for PR merge before next task; parallel — dispatch all tasks concurrently; auto — parallel dispatch with a scope-overlap guard: issues targeting different directories run concurrently, while overlapping scopes are serialized
execution.wait_for_merge | bool | true | Wait for PR merge before next task |
execution.poll_interval | duration | 30s | PR status poll interval |
execution.pr_timeout | duration | 1h | Max wait for PR merge |
daily_brief.enabled | bool | false | Enable daily summary |
daily_brief.schedule | string | "0 9 * * 1-5" | Cron schedule (5-field cron syntax) |
daily_brief.timezone | string | "America/New_York" | IANA timezone for schedule |
daily_brief.channels[].type | string | — | Delivery channel: slack, telegram, email |
daily_brief.channels[].channel | string | — | Channel name or ID (e.g. #dev-updates) |
daily_brief.channels[].recipients | []string | — | Email recipients (for email type) |
daily_brief.content.include_metrics | bool | true | Include task/cost metrics in brief |
daily_brief.content.include_errors | bool | true | Include error summaries |
daily_brief.content.max_items_per_section | int | 10 | Max items per brief section |
daily_brief.filters.projects | []string | [] | Project name filter (empty = all projects) |
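The scope-overlap guard behind auto mode can be modeled as a directory-containment check. This is one plausible sketch of the behavior described above (the comparison details are assumptions, not Pilot's actual code):

```python
import os

def scopes_overlap(paths_a: list[str], paths_b: list[str]) -> bool:
    """True if any target directory in one task equals or contains a
    target directory in the other — such tasks would be serialized."""
    for a in paths_a:
        for b in paths_b:
            a_n, b_n = os.path.normpath(a), os.path.normpath(b)
            common = os.path.commonpath([a_n, b_n])
            if common in (a_n, b_n):  # one path is a prefix of the other
                return True
    return False
```

Under this model, issues touching `src/api` and `src/web` dispatch concurrently, while `src/api` and `src/api/handlers` are serialized.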
Alerts
Configurable alerting for operational events, cost, and security.
alerts:
enabled: true
defaults:
cooldown: 5m
default_severity: "warning"
suppress_duplicates: true
channels:
- name: slack-ops
type: slack
enabled: true
severities: [warning, critical]
slack:
channel: "#pilot-alerts"
- name: telegram-admin
type: telegram
enabled: true
severities: [critical]
telegram:
chat_id: 123456789
- name: pagerduty
type: pagerduty
enabled: false
severities: [critical]
pagerduty:
routing_key: "${PAGERDUTY_KEY}"
rules:
- name: task_stuck
type: task_stuck
enabled: true
severity: warning
channels: [slack-ops]
condition:
progress_unchanged_for: 10m
- name: consecutive_failures
type: consecutive_failures
enabled: true
severity: critical
channels: [slack-ops, telegram-admin]
cooldown: 30m
condition:
consecutive_failures: 3
- name: daily_spend
type: daily_spend_exceeded
enabled: false
severity: warning
channels: [slack-ops]
condition:
daily_spend_threshold: 50.0
Defaults
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable alerting system |
defaults.cooldown | duration | 5m | Minimum time between duplicate alerts |
defaults.default_severity | string | "warning" | Severity when rule doesn’t specify one |
defaults.suppress_duplicates | bool | true | Prevent repeated alerts for the same event |
Channel configuration
| Field | Type | Default | Description |
|---|---|---|---|
channels[].name | string | — | Unique channel identifier |
channels[].type | string | — | Channel type: slack, telegram, email, webhook, pagerduty |
channels[].enabled | bool | false | Enable this channel |
channels[].severities | []string | — | Severity levels routed here: info, warning, critical |
channels[].slack.channel | string | — | Slack channel name (e.g. #pilot-alerts) |
channels[].telegram.chat_id | int64 | — | Telegram chat/user ID |
channels[].email.to | []string | — | Recipient email addresses |
channels[].email.subject | string | — | Custom subject template (optional) |
channels[].webhook.url | string | — | Webhook endpoint URL |
channels[].webhook.method | string | "POST" | HTTP method: POST, PUT |
channels[].webhook.headers | map | — | Custom HTTP headers |
channels[].webhook.secret | string | — | HMAC signing secret |
channels[].pagerduty.routing_key | string | — | PagerDuty integration/routing key |
channels[].pagerduty.service_id | string | — | PagerDuty service ID |
Rule configuration
| Field | Type | Default | Description |
|---|---|---|---|
rules[].name | string | — | Unique rule identifier |
rules[].type | string | — | Alert type (see supported types below) |
rules[].enabled | bool | false | Enable this rule |
rules[].severity | string | "warning" | Severity: info, warning, critical |
rules[].channels | []string | — | Channel names to route this alert to |
rules[].cooldown | duration | 5m | Override default cooldown for this rule |
rules[].description | string | — | Human-readable rule description |
rules[].condition.progress_unchanged_for | duration | — | Trigger when task has no progress for this long |
rules[].condition.consecutive_failures | int | — | Trigger after N consecutive task failures |
rules[].condition.daily_spend_threshold | float64 | — | Trigger when daily spend exceeds this USD amount |
rules[].condition.budget_limit | float64 | — | Trigger when total budget reaches this USD limit |
rules[].condition.usage_spike_percent | float64 | — | Trigger on usage spike exceeding this percentage |
rules[].condition.pattern | string | — | Regex pattern to match (security rules) |
rules[].condition.file_pattern | string | — | File glob pattern (sensitive file rules) |
rules[].condition.paths | []string | — | File paths to monitor |
Supported alert types: task_stuck, task_failed, consecutive_failures, service_unhealthy, daily_spend_exceeded, budget_depleted, usage_spike, unauthorized_access, sensitive_file_modified, unusual_pattern
Built-in rules (pre-configured in defaults):
| Rule | Type | Default State | Severity | Condition |
|---|---|---|---|---|
task_stuck | task_stuck | enabled | warning | No progress for 10m, cooldown 15m |
task_failed | task_failed | enabled | warning | Any task failure, no cooldown |
consecutive_failures | consecutive_failures | enabled | critical | 3 failures in a row, cooldown 30m |
daily_spend | daily_spend_exceeded | disabled | warning | Daily spend > $50, cooldown 1h |
budget_depleted | budget_depleted | disabled | critical | Budget > $500, cooldown 4h |
Approval
Multi-stage approval workflow for task execution and merging.
approval:
enabled: false
default_timeout: 1h
default_action: "rejected" # action on timeout
pre_execution:
enabled: false
approvers: ["@admin"]
timeout: 1h
default_action: "rejected"
require_all: false # any one approver suffices
pre_merge:
enabled: false
approvers: ["@lead"]
timeout: 24h
default_action: "rejected"
require_all: false
post_failure:
enabled: false
approvers: ["@admin"]
timeout: 1h
default_action: "rejected"| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable approval workflow |
default_timeout | duration | 1h | Default timeout for approval requests |
default_action | string | "rejected" | Action on timeout: approved, rejected |
pre_execution.enabled | bool | false | Require approval before task execution |
pre_merge.enabled | bool | false | Require approval before PR merge |
post_failure.enabled | bool | false | Require approval to retry after failure |
*.approvers | []string | — | User IDs or handles who can approve |
*.timeout | duration | varies | Timeout per stage |
*.require_all | bool | false | Require all approvers (vs any one) |
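How a stage resolves follows from the table: timeout applies default_action, and require_all decides whether one approval suffices. A sketch of that decision logic (an illustrative model, not Pilot's implementation):

```python
def resolve_stage(approvals: list[str], approvers: list[str],
                  require_all: bool = False, timed_out: bool = False,
                  default_action: str = "rejected") -> str:
    """Resolve one approval stage per the semantics described above."""
    if timed_out:
        return default_action  # timeout falls back to default_action
    approved = set(approvals) & set(approvers)
    if require_all:
        # every listed approver must approve
        return "approved" if approved == set(approvers) else "pending"
    # any one approver suffices
    return "approved" if approved else "pending"
```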
Logging
Configure log output format and rotation.
logging:
level: "info" # debug, info, warn, error
format: "text" # text, json
output: "stdout" # stdout, stderr, or file path
rotation:
max_size: "100MB"
max_age: "7d"
max_backups: 5
| Field | Type | Default | Description |
|---|---|---|---|
level | string | "info" | Log level: debug, info, warn, error |
format | string | "text" | Output format: text, json |
output | string | "stdout" | Destination: stdout, stderr, or file path |
rotation.max_size | string | — | Max log file size before rotation |
rotation.max_age | string | — | Max age before deletion |
rotation.max_backups | int | — | Max rotated files to keep |
Tunnel
Expose your local Pilot instance to the internet for webhooks.
tunnel:
enabled: false
provider: "cloudflare" # cloudflare, ngrok, manual
domain: "" # custom domain (optional)
port: 9090 # local port to tunnel
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable tunnel |
provider | string | "cloudflare" | Tunnel provider: cloudflare, ngrok, manual |
domain | string | — | Custom domain for tunnel |
port | int | 9090 | Local port to expose |
Webhooks
Outbound webhooks for integrating Pilot events with external systems.
webhooks:
enabled: true
defaults:
timeout: 30s
retry:
max_attempts: 3
initial_delay: 1s
max_delay: 60s
multiplier: 2.0
endpoints:
- name: "ci-notify"
url: "https://example.com/webhook"
secret: "${WEBHOOK_SECRET}" # HMAC-SHA256 signing
enabled: true
timeout: 30s
events:
- task.completed
- task.failed
- pr.created
headers:
X-Custom-Header: "value"
retry:
max_attempts: 3
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable outbound webhooks |
defaults.timeout | duration | 30s | Default HTTP timeout for webhook calls |
defaults.retry.max_attempts | int | 3 | Default max retry attempts |
defaults.retry.initial_delay | duration | 1s | Delay before first retry |
defaults.retry.max_delay | duration | 60s | Maximum backoff delay |
defaults.retry.multiplier | float64 | 2.0 | Exponential backoff multiplier |
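With the defaults above, retry delays grow geometrically from initial_delay by multiplier, capped at max_delay. A sketch of the schedule (whether max_attempts counts retries or total delivery attempts is an assumption here):

```python
def retry_delays(max_attempts: int = 3, initial_delay: float = 1.0,
                 max_delay: float = 60.0, multiplier: float = 2.0) -> list[float]:
    """Delay (seconds) before each retry under exponential backoff:
    1s, 2s, 4s, ... capped at max_delay."""
    delays, d = [], initial_delay
    for _ in range(max_attempts):
        delays.append(min(d, max_delay))
        d *= multiplier
    return delays

print(retry_delays())  # → [1.0, 2.0, 4.0]
```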
Endpoint configuration
| Field | Type | Default | Description |
|---|---|---|---|
endpoints[].name | string | — | Unique endpoint identifier |
endpoints[].url | string | — | Webhook delivery URL |
endpoints[].secret | string | — | HMAC-SHA256 signing secret |
endpoints[].enabled | bool | false | Enable this endpoint |
endpoints[].timeout | duration | 30s | Override default timeout |
endpoints[].events | []string | — | Events to deliver (see supported list) |
endpoints[].headers | map | — | Custom HTTP headers to include |
endpoints[].retry.max_attempts | int | 3 | Override default retry count |
endpoints[].retry.initial_delay | duration | 1s | Override retry initial delay |
endpoints[].retry.max_delay | duration | 60s | Override retry max delay |
endpoints[].retry.multiplier | float64 | 2.0 | Override backoff multiplier |
Supported events: task.started, task.progress, task.completed, task.failed, task.timeout, pr.created, budget.warning
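Receivers can authenticate deliveries using the endpoint's HMAC-SHA256 secret. A minimal verification sketch — the signature header name and hex encoding are assumptions, so check the delivery format for your Pilot version:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Check a webhook body against its HMAC-SHA256 signature.
    Uses compare_digest to avoid timing side channels."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```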
Gateway
Internal HTTP/WebSocket server configuration.
gateway:
host: "127.0.0.1"
port: 9090
auth:
type: "claude-code" # claude-code or api-token
token: "" # required if type is api-token| Field | Type | Default | Description |
|---|---|---|---|
host | string | "127.0.0.1" | Bind address for the HTTP/WebSocket server |
port | int | 9090 | Port number (1–65535) |
auth.type | string | "claude-code" | Auth mode: claude-code (built-in) or api-token |
auth.token | string | — | Bearer token, required when auth.type is api-token |
Container deployments: Set gateway.host: "0.0.0.0" when running in Docker or Kubernetes. The default 127.0.0.1 only accepts loopback connections — health probes and ingress traffic will fail. See the Docker & Helm guide for container-specific configuration.
Container Config Mounting
When running in a container, mount config.yaml as a read-only volume:
# docker-compose.yml
volumes:
- ./config.yaml:/home/pilot/.pilot/config.yaml:ro
# Kubernetes ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: pilot-config
data:
config.yaml: |
version: "1.0"
gateway:
host: "0.0.0.0"
port: 9090
adapters:
github:
enabled: true
token: "${GITHUB_TOKEN}"
repo: "your-org/your-repo"Other Adapters
Linear
adapters:
linear:
enabled: false
api_key: ${LINEAR_API_KEY}
team_id: "TEAM_ID"
pilot_label: "pilot"
auto_assign: true
polling:
enabled: true
interval: 30s
# Multi-workspace support
workspaces:
- name: "main"
api_key: ${LINEAR_API_KEY}
team_id: "TEAM_ID"
pilot_label: "pilot"
projects: ["my-project"]| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Linear adapter |
api_key | string | — | Linear API key (legacy single-workspace) |
team_id | string | — | Linear team ID (legacy single-workspace) |
pilot_label | string | "pilot" | Label that marks issues for Pilot |
auto_assign | bool | false | Auto-assign issues to Pilot |
project_ids | []string | — | Filter to specific Linear project IDs |
polling.enabled | bool | false | Enable issue polling |
polling.interval | duration | 30s | Polling interval |
workspaces[].name | string | — | Workspace identifier |
workspaces[].api_key | string | — | API key for this workspace |
workspaces[].team_id | string | — | Team ID in this workspace |
workspaces[].pilot_label | string | "pilot" | Label for this workspace |
workspaces[].projects | []string | — | Project filter for this workspace |
workspaces[].auto_assign | bool | false | Auto-assign in this workspace |
GitLab
adapters:
gitlab:
enabled: false
token: ${GITLAB_TOKEN}
base_url: "https://gitlab.com" # self-hosted URL
project: "namespace/project"
webhook_secret: ""
pilot_label: "pilot"
polling:
enabled: false
interval: 30s
stale_label_cleanup:
enabled: true
interval: 30m
threshold: 1h
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable GitLab adapter |
token | string | — | GitLab personal or project access token |
base_url | string | "https://gitlab.com" | GitLab instance URL (for self-hosted) |
project | string | — | Project in namespace/project format |
webhook_secret | string | — | HMAC secret for webhook verification |
pilot_label | string | "pilot" | Label that marks issues for Pilot |
polling.enabled | bool | false | Enable issue polling |
polling.interval | duration | 30s | Polling interval |
polling.label | string | "pilot" | Label to filter when polling |
stale_label_cleanup.enabled | bool | true | Auto-remove stale in-progress labels |
stale_label_cleanup.interval | duration | 30m | Cleanup check interval |
stale_label_cleanup.threshold | duration | 1h | Label age before cleanup |
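The stale_label_cleanup settings describe a periodic sweep: every interval, any issue whose in-progress label is older than threshold has the label removed. A sketch of the selection step (illustrative model under that reading):

```python
import datetime as dt

def labels_to_clear(in_progress_since: dict[str, dt.datetime],
                    now: dt.datetime,
                    threshold: dt.timedelta = dt.timedelta(hours=1)) -> list[str]:
    """Issues whose in-progress label has outlived `threshold`
    get the label removed on the next cleanup pass."""
    return [issue for issue, since in in_progress_since.items()
            if now - since > threshold]
```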
Slack
adapters:
slack:
enabled: false
bot_token: ${SLACK_BOT_TOKEN}
channel: "#pilot-updates"
signing_secret: ""| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Slack adapter |
bot_token | string | — | Slack bot token (xoxb-...) |
channel | string | — | Default notification channel |
signing_secret | string | — | Slack signing secret for webhook verification |
Discord
adapters:
discord:
enabled: true
bot_token: ${DISCORD_BOT_TOKEN}
allowed_guilds:
- "123456789012345678" # guild (server) IDs allowed to send tasks
allowed_channels:
- "987654321098765432" # channel IDs allowed to send tasks
command_prefix: "!pilot"
rate_limit:
messages_per_second: 5
tasks_per_minute: 10
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Discord adapter |
bot_token | string | — | Discord bot token |
allowed_guilds | []string | — | Guild (server) IDs allowed to send tasks
allowed_channels | []string | — | Channel IDs allowed to send tasks
command_prefix | string | — | Prefix for bot commands (e.g. !pilot) |
rate_limit.messages_per_second | int | 5 | Max outgoing messages per second |
rate_limit.tasks_per_minute | int | 10 | Max tasks accepted per minute |
Plane
adapters:
plane:
enabled: true
base_url: "https://api.plane.so" # or self-hosted URL
api_key: ${PLANE_API_KEY}
workspace_slug: "my-workspace"
project_ids:
- "project-uuid-1"
pilot_label: "pilot"
webhook_secret: ""
polling:
enabled: true
interval: 30s
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Plane adapter |
base_url | string | "https://api.plane.so" | Plane API base URL (self-hosted or cloud) |
api_key | string | — | Plane API key (X-API-Key header) |
workspace_slug | string | — | Plane workspace slug |
project_ids | []string | — | Project UUIDs to watch for work items
pilot_label | string | "pilot" | Label that marks work items for Pilot |
webhook_secret | string | — | HMAC secret for webhook verification |
polling.enabled | bool | false | Enable work item polling |
polling.interval | duration | 30s | How often to poll for new work items |
Jira
adapters:
jira:
enabled: false
platform: "cloud" # cloud or server
base_url: "https://myorg.atlassian.net"
username: "bot@example.com"
api_token: ${JIRA_API_TOKEN}
webhook_secret: ""
pilot_label: "pilot"
transitions:
in_progress: "In Progress"
done: "Done"| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Jira adapter |
platform | string | "cloud" | Jira platform: cloud or server |
base_url | string | — | Jira instance URL |
username | string | — | Jira username or email |
api_token | string | — | Jira API token |
webhook_secret | string | — | Webhook verification secret |
pilot_label | string | "pilot" | Label that marks issues for Pilot |
transitions.in_progress | string | — | Jira transition name for “In Progress” |
transitions.done | string | — | Jira transition name for “Done” |
Asana
adapters:
asana:
enabled: false
access_token: ${ASANA_ACCESS_TOKEN}
workspace_id: "1234567890"
webhook_secret: ""
pilot_tag: "pilot"| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Asana adapter |
access_token | string | — | Asana personal access token |
workspace_id | string | — | Asana workspace ID |
webhook_secret | string | — | Webhook verification secret |
pilot_tag | string | "pilot" | Tag that marks tasks for Pilot |
Azure DevOps
adapters:
azure_devops:
enabled: false
pat: ${AZURE_DEVOPS_PAT}
organization: "myorg"
project: "MyProject"
repository: "myrepo"
base_url: "https://dev.azure.com"
pilot_tag: "pilot"
work_item_types:
- "User Story"
- "Bug"
polling:
enabled: true
interval: 30s
stale_label_cleanup:
enabled: true
interval: 30m
threshold: 1h
| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable Azure DevOps adapter |
pat | string | — | Personal access token |
organization | string | — | Azure DevOps organization name |
project | string | — | Project name |
repository | string | — | Repository name |
base_url | string | "https://dev.azure.com" | Azure DevOps instance URL |
webhook_secret | string | — | Webhook verification secret |
pilot_tag | string | "pilot" | Tag that marks work items for Pilot |
work_item_types | []string | — | Work item types to process (e.g. User Story, Bug) |
polling.enabled | bool | false | Enable work item polling |
polling.interval | duration | 30s | Polling interval |
stale_label_cleanup.enabled | bool | true | Auto-remove stale in-progress tags |
stale_label_cleanup.interval | duration | 30m | Cleanup check interval |
stale_label_cleanup.threshold | duration | 1h | Tag age before cleanup |
Teams
Team-based project access control for multi-user environments. When configured, task execution is scoped to the member’s allowed projects.
team:
enabled: true
team_id: "engineering"
member_email: "dev@example.com"| Field | Type | Default | Description |
|---|---|---|---|
enabled | bool | false | Enable team-based access control |
team_id | string | — | Team ID or name to scope execution |
member_email | string | — | Email of the member executing tasks |
Environment variable alternative:
PILOT_TEAM_ID="engineering"
PILOT_MEMBER_EMAIL="dev@example.com"
Team configuration is optional. When not configured, Pilot runs without access restrictions. Enable it for multi-user deployments where you need to scope task execution to specific projects per team member.
Projects
Multi-project configuration for managing multiple repositories.
projects:
- name: "backend"
path: "/path/to/backend"
default_branch: "main"
navigator: true
github:
owner: "myorg"
repo: "backend"
- name: "frontend"
path: "/path/to/frontend"
default_branch: "main"
default_project: "backend"
memory:
path: "~/.pilot/data"
cross_project: true # share memory across projects
Project fields
| Field | Type | Default | Description |
|---|---|---|---|
projects[].name | string | — | Unique project name (used in CLI and logs) |
projects[].path | string | — | Absolute filesystem path to the repository |
projects[].default_branch | string | "main" | Default git branch for PRs |
projects[].navigator | bool | false | Enable context intelligence for this project |
projects[].github.owner | string | — | GitHub organization or user |
projects[].github.repo | string | — | GitHub repository name |
default_project | string | — | Name of the project used when none is specified |
Memory
| Field | Type | Default | Description |
|---|---|---|---|
memory.path | string | "~/.pilot/data" | Storage directory for SQLite and knowledge graph |
memory.cross_project | bool | true | Share learned patterns across all projects |
Dashboard
| Field | Type | Default | Description |
|---|---|---|---|
dashboard.refresh_interval | int | 1000 | TUI refresh interval in milliseconds |
dashboard.show_logs | bool | true | Show execution logs panel in dashboard |
Full Example
This is a production-ready configuration. Adjust values for your environment — start with environment: dev and auto_merge: false until you’re comfortable with the workflow.
version: "1.0"
adapters:
github:
enabled: true
token: ${GITHUB_TOKEN}
repo: "myorg/myapp"
project_path: "~/Projects/myapp"
pilot_label: "pilot"
polling:
enabled: true
interval: 30s
telegram:
bot_token: ${TELEGRAM_BOT_TOKEN}
allowed_ids: [123456789]
project_path: "~/Projects/myapp"
executor:
type: "claude-code"
model_routing:
enabled: true
trivial: "claude-haiku"
simple: "claude-sonnet-4-6"
medium: "claude-sonnet-4-6"
complex: "claude-opus-4-6"
timeout:
default: "30m"
complex: "60m"
orchestrator:
max_concurrent: 2
execution:
mode: "sequential"
wait_for_merge: true
autopilot:
enabled: true
environment: "dev"
auto_merge: false
max_failures: 3
quality:
enabled: true
gates:
- name: build
type: build
command: "make build"
required: true
timeout: 5m
- name: test
type: test
command: "make test"
required: true
timeout: 10m
budget:
enabled: true
daily_limit: 50.00
monthly_limit: 500.00
thresholds:
warn_percent: 80
logging:
level: "info"
format: "text"