>_ OpenClaude
Claude Code with any LLM — not just Claude.
Install • Quick Start • Providers • Model Guide • How It Works
Plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, Ollama, or any model that speaks the OpenAI chat completions API. All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — powered by whatever model you choose.
200+ compatible models. 15 built-in tools. 786 lines of shim code. Zero extra dependencies.
npm (recommended)
```bash
npm install -g @aryanjsx/openclaude
```

From source

```bash
git clone https://github.com/aryanjsx/Openclaude.git
cd Openclaude
bun install
bun run build
```

Run directly with Bun (no build step)

```bash
git clone https://github.com/aryanjsx/Openclaude.git
cd Openclaude
bun install
bun run dev
```

1. Set your provider
```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```

2. Launch

```bash
openclaude
```

That's it. Streaming, tool calling, file editing, multi-step reasoning — everything works through the model you pick. The npm package name is `@aryanjsx/openclaude`, but the CLI command is `openclaude`.
| Provider | Setup |
|---|---|
| OpenAI | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=sk-...`<br>`export OPENAI_MODEL=gpt-4o` |
| DeepSeek | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=sk-...`<br>`export OPENAI_BASE_URL=https://api.deepseek.com/v1`<br>`export OPENAI_MODEL=deepseek-chat` |
| Gemini (via OpenRouter) | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=sk-or-...`<br>`export OPENAI_BASE_URL=https://openrouter.ai/api/v1`<br>`export OPENAI_MODEL=google/gemini-2.0-flash` |
| Ollama (local, free) | `ollama pull llama3.3:70b`<br>`export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_BASE_URL=http://localhost:11434/v1`<br>`export OPENAI_MODEL=llama3.3:70b` |
| Together AI | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=...`<br>`export OPENAI_BASE_URL=https://api.together.xyz/v1`<br>`export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo` |
| Groq | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=gsk_...`<br>`export OPENAI_BASE_URL=https://api.groq.com/openai/v1`<br>`export OPENAI_MODEL=llama-3.3-70b-versatile` |
| Mistral | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=...`<br>`export OPENAI_BASE_URL=https://api.mistral.ai/v1`<br>`export OPENAI_MODEL=mistral-large-latest` |
| Azure OpenAI | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_API_KEY=your-azure-key`<br>`export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1`<br>`export OPENAI_MODEL=gpt-4o` |
| LM Studio (local) | `export CLAUDE_CODE_USE_OPENAI=1`<br>`export OPENAI_BASE_URL=http://localhost:1234/v1`<br>`export OPENAI_MODEL=your-model-name` |
Any provider that exposes an OpenAI-compatible `/v1/chat/completions` endpoint will work.
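Every provider in the table above differs only in its base URL and model name; the request route is the same. As a rough sketch (the helper name and trailing-slash normalization are illustrative, not taken from the shim's source), resolving the endpoint from `OPENAI_BASE_URL` might look like:

```typescript
// Hypothetical helper: join a base URL with the chat-completions route,
// tolerating a trailing slash. Default matches the documented fallback.
function chatCompletionsUrl(
  baseUrl: string = "https://api.openai.com/v1",
): string {
  return baseUrl.replace(/\/+$/, "") + "/chat/completions";
}
```

With this, `chatCompletionsUrl("http://localhost:11434/v1")` targets a local Ollama server and the zero-argument form targets OpenAI itself.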
| Variable | Required | Description |
|---|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
`ANTHROPIC_MODEL` can also override the model name. `OPENAI_MODEL` takes priority.
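That precedence rule is simple enough to state as code. A minimal sketch (the function name is hypothetical; the real resolution lives in `src/utils/model/model.ts`):

```typescript
// Hypothetical sketch of the precedence rule: OPENAI_MODEL wins,
// ANTHROPIC_MODEL is the fallback override.
function resolveModel(
  env: Record<string, string | undefined>,
): string | undefined {
  return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL;
}
```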
| Feature | Status |
|---|---|
| All 15 tools (Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks) | Fully working |
| Real-time token streaming | Fully working |
| Multi-step tool chains | Fully working |
| Vision (base64 & URL images) | Fully working |
| Slash commands (/commit, /review, /compact, /diff, /doctor) | Fully working |
| Sub-agents (AgentTool) | Fully working |
| Persistent memory | Fully working |
What's different from the original:
- No extended thinking mode (OpenAI models use different reasoning approaches)
- No Anthropic prompt caching or beta headers
- 32K default max output tokens (handled gracefully if a model caps lower)
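For the last point, graceful handling of a lower provider cap can be as simple as clamping the request. A sketch under assumptions (the constant value and function name are mine, not the shim's actual code):

```typescript
// Assumed value for the "32K default" mentioned above.
const DEFAULT_MAX_OUTPUT_TOKENS = 32_000;

// Hypothetical clamp: if a model advertises a lower output cap,
// request the cap instead of failing the call outright.
function resolveMaxTokens(modelCap?: number): number {
  return modelCap !== undefined
    ? Math.min(DEFAULT_MAX_OUTPUT_TOKENS, modelCap)
    : DEFAULT_MAX_OUTPUT_TOKENS;
}
```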
| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |
For best results, use models with strong function/tool calling support.
The shim (`src/services/api/openaiShim.ts`) is a thin translation layer between Claude Code's Anthropic SDK interface and any OpenAI-compatible API:
```
Claude Code Tool System
          |
Anthropic SDK interface (duck-typed)
          |
openaiShim.ts   ← translates formats
          |
OpenAI Chat Completions API
          |
Any compatible model
```
What it translates:
- Anthropic message blocks → OpenAI messages
- Anthropic `tool_use`/`tool_result` → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages
Claude Code doesn't know it's talking to a different model.
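To make the first two translations concrete, here is a minimal sketch of one direction: Anthropic-style content blocks to OpenAI chat messages. The types and logic are illustrative only; the real shim covers streaming, vision, and many more cases.

```typescript
// Simplified stand-ins for the two wire formats.
type AnthropicBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown }
  | { type: "tool_result"; tool_use_id: string; content: string };

type OpenAIMessage =
  | {
      role: "assistant";
      content: string | null;
      tool_calls?: {
        id: string;
        type: "function";
        function: { name: string; arguments: string };
      }[];
    }
  | { role: "tool"; tool_call_id: string; content: string };

function toOpenAI(blocks: AnthropicBlock[]): OpenAIMessage[] {
  const out: OpenAIMessage[] = [];
  for (const b of blocks) {
    if (b.type === "text") {
      out.push({ role: "assistant", content: b.text });
    } else if (b.type === "tool_use") {
      // Anthropic tool_use → OpenAI function call; OpenAI expects the
      // arguments as a JSON string rather than a structured object.
      out.push({
        role: "assistant",
        content: null,
        tool_calls: [{
          id: b.id,
          type: "function",
          function: { name: b.name, arguments: JSON.stringify(b.input) },
        }],
      });
    } else {
      // Anthropic tool_result → OpenAI "tool" role message, linked back
      // to the originating call by id.
      out.push({ role: "tool", tool_call_id: b.tool_use_id, content: b.content });
    }
  }
  return out;
}
```

The id linkage (`tool_use.id` ↔ `tool_call_id`) is what lets multi-step tool chains survive the round trip between the two formats.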
- `src/services/api/openaiShim.ts` — OpenAI-compatible API shim (724 lines)
- `src/services/api/client.ts` — Routes to shim when `CLAUDE_CODE_USE_OPENAI=1`
- `src/utils/model/providers.ts` — Added `'openai'` provider type
- `src/utils/model/configs.ts` — Added openai model mappings
- `src/utils/model/model.ts` — Respects `OPENAI_MODEL` for defaults
- `src/utils/auth.ts` — Recognizes OpenAI as valid 3P provider
6 files changed. 786 lines added. Zero dependencies added.
```bash
bun run smoke             # startup sanity check
bun run doctor:runtime    # validate provider env + reachability
bun run hardening:check   # typecheck + smoke + runtime doctor
bun run hardening:strict  # full strict check
```

Provider launch profiles save repeated env setup:

```bash
bun run profile:init                                        # auto-detect provider
bun run profile:init -- --provider openai --api-key sk-...  # explicit setup
bun run dev:profile       # launch from saved profile
bun run dev:openai        # OpenAI shortcut
bun run dev:ollama        # Ollama shortcut
```

Fork of instructkr/claude-code, which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026. Not affiliated with or endorsed by Anthropic.
MIT. The original Claude Code source is subject to Anthropic's terms. The OpenAI shim additions are public domain.