# Providers & models
wiki-builder supports Anthropic and OpenAI. Switch providers globally or per command.
## Supported providers

| Provider | Default model | Notes |
|---|---|---|
| `anthropic` | `claude-opus-4-6` | Default. Supports `--thinking` for extended reasoning. |
| `openai` | `gpt-4o` | Full tool-use support. `--thinking` flag has no effect. |
## Resolution order

Provider and API key are resolved in this order, highest priority first:

1. `--provider` and `--model` flags on the command
2. Environment variables: `WIKI_PROVIDER`, `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`
3. Saved config in `~/.wiki-builder/config.json`
4. Defaults: `anthropic` / `claude-opus-4-6`
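The precedence rules above can be sketched in Python. This is an illustrative model of the lookup order, not wiki-builder's actual implementation (function and dictionary names are invented for the example):

```python
import json
import os

# Lowest-priority layer: built-in defaults.
DEFAULTS = {"provider": "anthropic", "model": "claude-opus-4-6"}

def resolve_provider(cli_flags: dict, config_path: str = "~/.wiki-builder/config.json") -> dict:
    """Illustrative resolution: CLI flags > env vars > saved config > defaults."""
    resolved = dict(DEFAULTS)

    # Layer 3: saved config overrides defaults, if the file exists.
    path = os.path.expanduser(config_path)
    if os.path.exists(path):
        with open(path) as f:
            saved = json.load(f)
        resolved.update({k: v for k, v in saved.items() if k in resolved})

    # Layer 2: environment variable overrides saved config.
    if os.environ.get("WIKI_PROVIDER"):
        resolved["provider"] = os.environ["WIKI_PROVIDER"]

    # Layer 1: explicit CLI flags win over everything else.
    for key in ("provider", "model"):
        if cli_flags.get(key):
            resolved[key] = cli_flags[key]

    return resolved
```

For example, `resolve_provider({"provider": "openai"})` yields OpenAI as the provider while the model still falls back to the default (or whatever a lower layer supplies).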
## Setting a default provider

```sh
# Set Anthropic as default
wiki config --provider anthropic --api-key sk-ant-...

# Set OpenAI as default
wiki config --provider openai --api-key sk-...

# Set a default model
wiki config --model claude-sonnet-4-6
```
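After running `wiki config`, the saved file at `~/.wiki-builder/config.json` might look something like the following. The exact schema is an assumption for illustration; inspect your own file for the real field names:

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4-6",
  "api_key": "sk-ant-..."
}
```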
## Overriding per command

```sh
# Use OpenAI for a single ingest
wiki ingest raw/paper.md --provider openai

# Use a cheaper model for quick queries
wiki query "what is X?" --provider openai --model gpt-4o-mini

# Override via environment variable for a session
WIKI_PROVIDER=openai wiki ingest raw/paper.md
```
## Choosing a model
Ingest and lint operations are the most demanding — they involve reading multiple files and writing many pages in one pass. Use a capable model for those. Query is less demanding and works fine with lighter models.
| Use case | Recommended model |
|---|---|
| Ingesting complex documents | claude-opus-4-6 or gpt-4o |
| Routine queries | claude-sonnet-4-6 or gpt-4o-mini |
| Lint with auto-fix | claude-opus-4-6 or gpt-4o |
| Large volume ingestion (cost-sensitive) | claude-haiku-4-5 or gpt-4o-mini |
## Extended thinking (Anthropic only)
Pass --thinking to any command to see the model's reasoning process printed before the final output. Useful for debugging why the LLM made a particular decision during ingest or lint.
```sh
wiki ingest raw/paper.md --thinking
wiki lint --thinking
```
## Cost notes
A typical ingest touches 5–15 files and runs 10–20 tool-calling turns. Cost depends on source length, existing wiki size, and model. As a rough guide:
- `claude-opus-4-6`: ~$0.05–0.20 per ingest
- `gpt-4o`: similar range
- Smaller models: significantly cheaper, with some quality trade-off
Use `wiki status` to monitor wiki growth. Larger wikis mean more context read per operation.
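For budgeting a batch run, the per-ingest range above supports a quick back-of-envelope estimate. The helper below is a sketch using this page's rough guide, not measured pricing:

```python
def estimate_batch_cost(n_docs: int,
                        per_ingest_low: float = 0.05,
                        per_ingest_high: float = 0.20) -> tuple[float, float]:
    """Rough USD cost range for ingesting n_docs documents,
    using the ~$0.05-0.20 per-ingest guide for claude-opus-4-6."""
    return (n_docs * per_ingest_low, n_docs * per_ingest_high)

low, high = estimate_batch_cost(50)
print(f"Ingesting 50 docs: ~${low:.2f}-${high:.2f}")
```

Actual cost still depends on source length and existing wiki size, so treat the result as an order-of-magnitude figure.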