Desktop app for OpenCode with multi-project workspaces, streaming chat, prompt queue, model switching, voice input, and MCP tools.
Download latest release · Why OpenGUI · Build from source
OpenGUI gives OpenCode users a desktop workflow for long coding sessions. Manage multiple projects visually, watch responses stream live, queue prompts while the agent works, and switch models or agents without terminal juggling.
Early but usable. Bug reports and PRs welcome.
## Why OpenGUI

OpenGUI is for OpenCode users who want a stronger visual workflow than the terminal alone:

- Manage multiple projects at once with separate sessions per workspace
- See streaming responses live, with token and context usage
- Queue prompts while the agent is busy instead of waiting to type the next step
- Switch providers, models, agents, and variants from the UI
- Configure MCP tools and skills without leaving the app
- Use voice input with a Whisper-compatible transcription endpoint
## Features

- Multi-project workspaces for parallel coding sessions
- Real-time streaming over SSE with live usage tracking
- Prompt queue that auto-dispatches when the assistant becomes idle
- Model & agent selection directly from the chat workflow
- Slash commands from the prompt box
- Syntax highlighting and math rendering with Shiki and KaTeX
- Dark/light theme with a system-aware toggle
- Cross-platform builds for Linux, macOS, and Windows
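The prompt-queue behavior is the least obvious of these: prompts accumulate while the assistant is busy, then dispatch one at a time as it goes idle. A minimal sketch of that pattern in TypeScript (the class and the `sendPrompt` callback are illustrative, not OpenGUI's actual internals):

```typescript
// Minimal sketch of an auto-dispatching prompt queue.
// `sendPrompt` stands in for whatever call actually submits a prompt;
// its promise resolves when the assistant goes idle again.
type SendPrompt = (prompt: string) => Promise<void>;

class PromptQueue {
  private queue: string[] = [];
  private busy = false;

  constructor(private sendPrompt: SendPrompt) {}

  // Enqueue a prompt; start draining immediately if the assistant is idle.
  enqueue(prompt: string): void {
    this.queue.push(prompt);
    void this.drain();
  }

  get pending(): number {
    return this.queue.length;
  }

  private async drain(): Promise<void> {
    if (this.busy) return; // a drain loop is already running
    this.busy = true;
    try {
      while (this.queue.length > 0) {
        const next = this.queue.shift()!;
        await this.sendPrompt(next); // one prompt in flight at a time
      }
    } finally {
      this.busy = false;
    }
  }
}
```

Anything typed while a prompt is in flight simply lands in the queue and is sent in order once the current response finishes.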
## Installation

Grab a prebuilt app from the latest release:

- Linux: `.deb`
- macOS: `.dmg`
- Windows: `.exe` installer

You'll also need:

- OpenCode CLI installed and available in your `PATH`

Windows prerequisite: OpenCode must be available on your `PATH` or at `%USERPROFILE%\.opencode\bin\opencode.exe`.
Note: Windows builds are unsigned. Windows SmartScreen will warn on first launch. Click More info -> Run anyway.
## Build from source

Prerequisites:

- Bun v1.2+
- OpenCode CLI installed and available in your `PATH`
- Electron (installed through project dependencies)
Install dependencies:

```sh
bun install
```

No manual config file needed. Connection settings live in the UI.

Run the web frontend + Electron with HMR:

```sh
bun run dev
```

Run only the web frontend:

```sh
bun run dev:web
```

Build the frontend bundle:

```sh
bun run build
```

Run the Electron app in production mode:

```sh
bun run start:electron
```

Build the Linux `.deb`:

```sh
bun run dist
```

Build the macOS `.dmg`:

```sh
bun run dist:mac
```

Build the Windows `.exe` installer:

```sh
bun run dist:win
```

## Project structure

```
main.cjs              Electron main process (window management, IPC)
preload.cjs           Preload script (contextBridge API for renderer)
opencode-bridge.mjs   IPC bridge to the OpenCode SDK (SSE, sessions, prompts)
src/
  index.ts            Bun web server (development + production)
  index.html          HTML entry point
  frontend.tsx        React entry point
  App.tsx             Main app layout
  hooks/
    use-opencode.tsx  Central state management (context + reducer)
    useSTT.ts         Speech-to-text hook
  components/         UI components (sidebar, messages, prompt box, etc.)
  lib/                Utility modules
  types/              TypeScript type definitions
```
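Responses stream from OpenCode to the bridge as Server-Sent Events. For readers unfamiliar with that wire format, here is a generic incremental SSE `data:` framer; it follows the SSE event-stream spec but is not the bridge's actual code, and the event payloads OpenCode sends are not reproduced here:

```typescript
// Generic incremental parser for Server-Sent Events (SSE).
// Feed it raw text chunks as they arrive over the connection; it returns
// the `data:` payload of each event completed by that chunk.
class SSEParser {
  private buffer = "";

  feed(chunk: string): string[] {
    this.buffer += chunk;
    const events: string[] = [];
    let sep: number;
    // Per the spec, events are delimited by a blank line.
    while ((sep = this.buffer.indexOf("\n\n")) !== -1) {
      const raw = this.buffer.slice(0, sep);
      this.buffer = this.buffer.slice(sep + 2);
      const data = raw
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).replace(/^ /, "")) // strip field name + optional space
        .join("\n"); // multiple data: lines in one event join with newlines
      if (data) events.push(data);
    }
    return events;
  }
}
```

The key property the bridge relies on is that chunks can split an event anywhere; a stateful parser like this reassembles them regardless of chunk boundaries.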
OpenGUI stores connection and UI preferences via the app settings interface.
## Voice input

Voice input (speech-to-text) requires a Whisper-compatible transcription server. Set the endpoint URL in Settings > General > Voice transcription endpoint. The microphone button only appears when an endpoint is configured. The server should accept a multipart POST with an audio file field and return `{ text, language, duration_seconds }`.
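If you are standing up your own transcription server, the client side of that exchange looks roughly like this. The `file` form-field name and the `recording.webm` filename are assumptions (check your server's docs); the `{ text, language, duration_seconds }` response shape is what OpenGUI expects:

```typescript
// Sketch of the voice-transcription exchange (Node 18+ or browser fetch).
interface Transcription {
  text: string;
  language: string;
  duration_seconds: number;
}

// Validate that a parsed JSON response has the expected shape.
function parseTranscription(json: unknown): Transcription {
  const o = json as Record<string, unknown>;
  if (
    typeof o?.text !== "string" ||
    typeof o?.language !== "string" ||
    typeof o?.duration_seconds !== "number"
  ) {
    throw new Error("unexpected transcription response shape");
  }
  return { text: o.text, language: o.language, duration_seconds: o.duration_seconds };
}

// POST recorded audio to a Whisper-compatible endpoint as multipart form data.
async function transcribe(endpoint: string, audio: Blob): Promise<Transcription> {
  const form = new FormData();
  form.append("file", audio, "recording.webm"); // field name is server-dependent
  const res = await fetch(endpoint, { method: "POST", body: form });
  if (!res.ok) throw new Error(`transcription failed: HTTP ${res.status}`);
  return parseTranscription(await res.json());
}
```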
## Contributing

Contributions are welcome. See CONTRIBUTING.md for guidelines.
If you find OpenGUI useful, consider giving it a star -- it helps others discover the project.
## License

MIT
