# wtf

A CLI tool that explains why your command just failed. It wraps any command and, when the command exits non-zero, uses an LLM plus Stack Overflow to diagnose the error.

## Usage

```sh
wtf <command> [args...]
```
- Runs your command, passing through stdout/stderr in real time
- If the command succeeds (exit 0), `wtf` exits silently
- If it fails, `wtf` captures stderr and:
  - Splits the output into independent error blocks (via LLM)
  - Searches Stack Overflow for each error
  - Explains each error with root cause, confidence, and actionable next steps
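The capture step above can be sketched in shell. This is a simplified, buffered approximation (the real tool streams stderr in real time), and `run_and_capture` is a hypothetical helper name, not part of `wtf`:

```shell
# Run a command, keep a copy of its stderr for analysis, and replay it to
# the terminal afterwards (buffered, unlike wtf's real-time passthrough).
run_and_capture() {
  tmp=$(mktemp)
  "$@" 2>"$tmp" && status=0 || status=$?
  cat "$tmp" >&2    # the user still sees the original stderr
  if [ "$status" -ne 0 ]; then
    echo "command failed (exit $status); analyzing $(wc -c < "$tmp") bytes of stderr"
  fi
  rm -f "$tmp"
  return "$status"
}
```

For example, `run_and_capture sh -c 'echo boom >&2; exit 7'` reports a failure with exit code 7 and hands the captured stderr off for analysis.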
```
$ wtf cargo build
   Compiling myapp v0.1.0
error[E0382]: borrow of moved value: `x`
...

✗ Command failed (exit code 101) — cargo build

━━━ wtf diagnosis ━━━

── Error 1: borrow of moved value ──

Root cause: Variable `x` was moved into a closure and then used again after the move.
Confidence: High — this is a standard Rust ownership error.
Evidence: Multiple SO answers confirm moving into closures transfers ownership.

Next steps:
  • Clone `x` before passing it to the closure
  • Use a reference instead of moving the value

Related Stack Overflow questions:
  → Use of moved value in Rust closure
    https://stackoverflow.com/q/12345678
```
## Installation

```sh
go install github.com/kht/wtf/cmd/wtf@latest
```

Or build from source:

```sh
make build   # produces ./wtf
```

## Configuration

`wtf` uses layered configuration: config file → environment variables → CLI flags (highest priority).
Copy the example and edit:

```sh
# Linux/macOS
cp wtf.example.toml ~/.config/wtf/config.toml

# Windows
copy wtf.example.toml %APPDATA%\wtf\config.toml
```

```toml
[llm]
provider = "openai"          # "openai" or "anthropic"
model = "gpt-4.1"
api_key = ""                 # or use WTF_LLM_API_KEY env var

# Use different models for splitting vs explaining
[llm.split]
model = "gpt-4.1-mini"       # cheaper model for splitting errors

[llm.explain]
# provider = "anthropic"
# thinking_budget = 10000    # enable extended thinking

[analysis]
timeout_secs = 60
excerpt_chars = 12000
stack_overflow_results = 5
```

## Environment variables

| Variable | Description |
|---|---|
| `WTF_LLM_PROVIDER` | LLM provider (`openai` or `anthropic`) |
| `WTF_LLM_MODEL` | Model name |
| `WTF_LLM_API_KEY` | API key |
| `WTF_LLM_BASE_URL` | Custom API endpoint |
| `WTF_LLM_SPLIT_*` | Per-step overrides for the split phase |
| `WTF_LLM_EXPLAIN_*` | Per-step overrides for the explain phase |
| `WTF_ANALYSIS_TIMEOUT_SECS` | Per-call timeout |
| `WTF_ANALYSIS_EXCERPT_CHARS` | Max stderr chars sent to the LLM |
| `WTF_ANALYSIS_STACK_OVERFLOW_RESULTS` | Max SO results per error |
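The `WTF_LLM_SPLIT_*` and `WTF_LLM_EXPLAIN_*` variables suggest a fall-back lookup. Here is a sketch of how such resolution could work, assuming (unverified) that per-step values fall back to the base `WTF_LLM_*` ones; `resolve_model` and the final default are illustrative, not `wtf`'s actual code:

```shell
# Hypothetical resolution: prefer WTF_LLM_<STEP>_MODEL, then WTF_LLM_MODEL,
# then a config-file default. An assumption about wtf's internals.
resolve_model() {
  step=$1                                        # e.g. SPLIT or EXPLAIN
  eval "step_val=\${WTF_LLM_${step}_MODEL:-}"    # per-step override, if set
  echo "${step_val:-${WTF_LLM_MODEL:-gpt-4.1}}"  # fall back to base, then default
}
```

With `WTF_LLM_MODEL=gpt-4.1` and `WTF_LLM_SPLIT_MODEL=gpt-4.1-mini` exported, `resolve_model SPLIT` would yield the mini model while `resolve_model EXPLAIN` falls back to the base one.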
## CLI flags

| Flag | Description |
|---|---|
| `--config` | Path to config file |
| `--provider` | LLM provider |
| `--model` | LLM model |
| `--api-key` | API key |
| `--base-url` | Custom API endpoint |
| `--timeout-secs` | Per-call timeout |
| `--excerpt-chars` | Max stderr chars sent to the LLM |
| `--stack-overflow-results` | Max SO results per error |
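The documented precedence (config file < environment variable < CLI flag) amounts to a simple fallback chain. A minimal sketch, where `resolve` and the hard-coded config value are illustrative stand-ins rather than `wtf`'s internals:

```shell
# Pick the model: CLI flag wins, then WTF_LLM_MODEL, then the config value.
resolve() {
  flag_model=$1              # value from --model ("" if not given)
  config_model="gpt-4.1"     # stand-in for the config-file value
  echo "${flag_model:-${WTF_LLM_MODEL:-$config_model}}"
}
```

So `resolve ""` with no environment override returns the config value, exporting `WTF_LLM_MODEL` shadows it, and passing a flag value shadows both.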