Providers & models

wiki-builder supports Anthropic and OpenAI. Switch providers globally or per command.

Supported providers

| Provider  | Default model   | Notes                                                 |
|-----------|-----------------|-------------------------------------------------------|
| anthropic | claude-opus-4-6 | Default. Supports --thinking for extended reasoning.  |
| openai    | gpt-4o          | Full tool-use support. --thinking flag has no effect. |

Resolution order

Provider and API key are resolved in this order, highest priority first:

  1. --provider and --model flags on the command
  2. Environment variables: WIKI_PROVIDER, ANTHROPIC_API_KEY, OPENAI_API_KEY
  3. Saved config in ~/.wiki-builder/config.json
  4. Defaults: anthropic / claude-opus-4-6
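The precedence above can be sketched roughly as follows. This is a minimal illustration, not wiki-builder's actual implementation; the function name, config schema, and file handling are assumptions.

```python
import json
import os

# Built-in fallbacks (step 4 of the resolution order).
DEFAULTS = {"provider": "anthropic", "model": "claude-opus-4-6"}

def resolve_provider(flag=None, env=None, config_path=None):
    """Return the provider name, checking sources highest priority first."""
    if flag:                                     # 1. --provider flag
        return flag
    env = os.environ if env is None else env
    if env.get("WIKI_PROVIDER"):                 # 2. environment variable
        return env["WIKI_PROVIDER"]
    if config_path and os.path.exists(config_path):
        with open(config_path) as f:             # 3. saved config file
            saved = json.load(f).get("provider") # assumed "provider" key
        if saved:
            return saved
    return DEFAULTS["provider"]                  # 4. built-in default
```

A flag always wins over an environment variable, which wins over the saved config, so a per-command override never requires editing ~/.wiki-builder/config.json.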

Setting a default provider

# Set Anthropic as default
wiki config --provider anthropic --api-key sk-ant-...

# Set OpenAI as default
wiki config --provider openai --api-key sk-...

# Set a default model
wiki config --model claude-sonnet-4-6
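After running the commands above, the saved file at ~/.wiki-builder/config.json might look like the following. The exact schema is an assumption for illustration; key names may differ.

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-6",
  "api_key": "sk-ant-..."
}
```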

Overriding per command

# Use OpenAI for a single ingest
wiki ingest raw/paper.md --provider openai

# Use a cheaper model for quick queries
wiki query "what is X?" --provider openai --model gpt-4o-mini

# Override via environment variable for a session
WIKI_PROVIDER=openai wiki ingest raw/paper.md

Choosing a model

Ingest and lint are the most demanding operations: they read multiple source files and write many pages in a single pass, so use a capable model for them. Query is less demanding and works fine with lighter models.

| Use case                                | Recommended model                |
|-----------------------------------------|----------------------------------|
| Ingesting complex documents             | claude-opus-4-6 or gpt-4o        |
| Routine queries                         | claude-sonnet-4-6 or gpt-4o-mini |
| Lint with auto-fix                      | claude-opus-4-6 or gpt-4o        |
| Large-volume ingestion (cost-sensitive) | claude-haiku-4-5 or gpt-4o-mini  |

Extended thinking (Anthropic only)

Pass --thinking to any command to see the model's reasoning process printed before the final output. Useful for debugging why the LLM made a particular decision during ingest or lint.

wiki ingest raw/paper.md --thinking
wiki lint --thinking

Cost notes

A typical ingest touches 5–15 files and runs 10–20 tool-calling turns, so cost scales with source length, existing wiki size, and model choice.

Use wiki status to monitor wiki growth. Larger wikis mean more context read per operation.