Beyond text generation. CodeActor builds a mental model of your codebase — call graphs, semantic search, and architectural analysis — so its agents navigate, understand, and evolve your code with precision.
Tired of AI coding tools that just "see" text?
Traditional assistants treat code as a flat string, leading to hallucinations, uninformed edits, and an inability to answer "what depends on X?". CodeActor is different. Its Repo-Agent uses deep structural analysis to understand your software like a senior engineer — before a single line is written.
Traditional AI coding assistants share a fundamental limitation: they see code as flat text. This leads to:
- ❌ Hallucinated APIs — Suggesting functions that don't exist in your codebase
- ❌ No Architectural Awareness — Changes that silently break distant, dependent modules
- ❌ Blind Refactoring — Cannot assess cross-file impact or detect circular dependencies
- ❌ Keyword-Only Search — Missing relevant code just because different variable names were used
CodeActor's Repo-Agent solves this at the root. Powered by a Rust-based code intelligence engine, it builds a rich structural model of your code — ASTs, call graphs, and semantic embeddings — so every agent in the system reasons about code the way a senior engineer does.
| Traditional AI Tools | CodeActor |
|---|---|
| Flat text matching | Semantic search by code meaning |
| File-by-file editing | Cross-file impact analysis via call graphs |
| No complexity insight | Cycle detection & complexity scoring |
| Regex-based search | Natural-language "find auth logic" queries |
| Single-agent | Hub-and-Spoke multi-agent with Meta-Agent runtime extension |
| Cannot autonomously browse the internet | 🌐 Live web research via Browser-Agent |
- Hub-and-Spoke Architecture — Central Conductor delegates tasks to specialized sub-agents (Repo analysis, Code editing, General chat, DevOps operations, Browser automation)
- Meta-Agent — Autonomous agent designer that creates custom sub-agents at runtime for tasks beyond built-in agents' capabilities
- Self-Correction — `thinking` tool enables agents to analyze errors and recover without blind retries
- Agent Disable — Conditionally exclude sub-agents at startup via `--disable-agents=repo,coding,chat,meta,devops,browser`
- File Operations — Read, create, delete, rename, list directory, print directory tree
- Code Editing — `search_replace_in_file` with unified diff output and 10MB size guard
- Code Search — ripgrep regex search, semantic search via vector embeddings, code skeleton/snippet queries
- Shell Execution — `run_bash` with foreground/background support, danger detection, and workspace-boundary checks
- Cognitive Tools — `thinking` for error analysis, `micro_agent` for sub-LLM reasoning calls
- Flow Control — `finish` to signal task completion, user help requests
- Browser Automation — `delegate_browser` for headless Chrome web research, navigation, data extraction, screenshots, and PDF generation
- Repo Analysis — Call graph queries, hierarchical call trees, directory trees, function-level code skeletons
- TUI Mode — Full terminal UI built with Bubble Tea, with message log, agent streaming, and interactive authorization
- HTTP + WebSocket Server — REST API and real-time WebSocket streaming for IDE/Web integration
"Your AI that can read the web for you — finding answers in live documentation, community threads, and API references."
The Browser-Agent transforms CodeActor into a true web-native assistant. Powered by headless Chrome via go-rod, it autonomously navigates websites, interacts with page elements, and extracts knowledge — all within a secure, sandboxed environment. When local documentation falls short, the Conductor delegates web research tasks to Browser-Agent, which browses the internet to find the latest answers.
What it can do:
- 🔍 Autonomous Web Research — Browse documentation portals, GitHub issues, Stack Overflow, and API references. Find answers in the live web without manually copying URLs.
- 🖱️ Full Page Interaction — Click buttons, fill and submit forms, scroll pages, wait for dynamic content to load.
- 📄 Data Extraction — Extract text and HTML from any page. Capture full-page or element-level screenshots and generate PDFs.
- 🧠 JavaScript Execution — Run custom JS in the page context (with explicit user confirmation) to unlock web apps requiring client-side logic.
- 🔒 Security-First — All file outputs are restricted to the workspace directory. Each task gets an isolated browser session via Cookie management.
- 📊 Health Monitoring — Check website availability and monitor content changes for proactive maintenance.
The Browser-Agent is invoked by the Conductor via delegate_browser, seamlessly integrating with the multi-agent workflow. It is equipped with its own toolset (navigate, go_back, go_forward, reload, get_current_url, click, input, scroll, wait_element, wait, extract_text, extract_html, screenshot, pdf, execute_js) and follows the same LLM-tool-loop pattern as all other agents.
Example: A developer asks, "Find the latest FastAPI middleware documentation and summarize the CORS configuration." The Browser-Agent navigates to the FastAPI docs, locates the middleware section, extracts the relevant text, and returns a concise summary — without the developer ever leaving the editor.
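The LLM-tool-loop pattern mentioned above can be sketched in a few lines of Go. This is an illustrative toy, not CodeActor's actual types: the `Model` interface, `ToolCall` struct, and the scripted stand-in for a real LLM are all assumptions made for the example; only the tool names (`navigate`, `extract_text`, `finish`) come from the agent's documented toolset.

```go
package main

import "fmt"

// ToolCall is a hypothetical next-action decision from the model.
type ToolCall struct {
	Name string
	Args string
}

// Model stands in for an LLM that picks the next tool given the history.
type Model interface {
	Next(history []string) ToolCall
}

// runLoop executes the tool loop: ask the model, run the tool, feed
// the observation back, until the model emits "finish" or we hit maxSteps.
func runLoop(m Model, tools map[string]func(string) string, maxSteps int) []string {
	var history []string
	for i := 0; i < maxSteps; i++ {
		call := m.Next(history)
		if call.Name == "finish" {
			history = append(history, "finish: "+call.Args)
			return history
		}
		tool, ok := tools[call.Name]
		if !ok {
			history = append(history, "error: unknown tool "+call.Name)
			continue
		}
		history = append(history, call.Name+" -> "+tool(call.Args))
	}
	return history
}

// scripted replays a fixed plan, standing in for a real LLM.
type scripted struct{ plan []ToolCall }

func (s *scripted) Next(h []string) ToolCall { return s.plan[len(h)] }

func main() {
	tools := map[string]func(string) string{
		"navigate":     func(url string) string { return "loaded " + url },
		"extract_text": func(sel string) string { return "CORS docs ..." },
	}
	m := &scripted{plan: []ToolCall{
		{"navigate", "https://example.com/docs"},
		{"extract_text", "main"},
		{"finish", "summary ready"},
	}}
	for _, step := range runLoop(m, tools, 10) {
		fmt.Println(step)
	}
}
```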
- Official OpenAI Go SDK — Replaced langchaingo with `openai-go/v3` for direct API control
- DeepSeek Reasoning Support — Full `reasoning_content` round-trip (streaming + non-streaming), injected via `SetExtraFields`
- Custom Engine Abstraction — Lightweight `Engine` interface with Message/ToolDef/ToolCall types, decoupled from any SDK
- 13 LLM Providers — Xiaomi MiMo, Alibaba Qwen, DeepSeek, SiliconFlow, Moonshot, Mistral, Zhipu GLM, OpenRouter, StreamLake, AWS Bedrock, and any OpenAI-compatible endpoint
- WorkspaceGuard — Validates file operations stay within the project workspace; intercepts dangerous shell commands
- Defense-in-Depth — Checks both LLM-flagged `is_dangerous` and absolute-path analysis for shell commands
- User Confirmation Pipeline — Pub-Sub based confirmation flow that works across TUI and WebSocket consumers
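The core of a workspace-boundary check is small enough to sketch. This is a minimal illustration of the idea behind WorkspaceGuard, not its actual implementation — the real guard may also resolve symlinks and inspect shell commands:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// inWorkspace reports whether target resolves to a path inside root.
// Illustrative sketch only: it normalizes both paths and rejects any
// relative path that escapes the root via "..".
func inWorkspace(root, target string) bool {
	absRoot, err := filepath.Abs(root)
	if err != nil {
		return false
	}
	absTarget, err := filepath.Abs(target)
	if err != nil {
		return false
	}
	rel, err := filepath.Rel(absRoot, absTarget)
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(inWorkspace("/project", "/project/src/main.go")) // true
	fmt.Println(inWorkspace("/project", "/etc/passwd"))          // false
}
```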
At the heart of CodeActor is Repo-Agent — a dedicated code intelligence agent backed by a Rust engine with Tree-sitter, LanceDB vector embeddings, and Petgraph call-graph analysis.
"Find where authentication logic is implemented, even if the keywords differ."
Powered by LanceDB vector embeddings (OpenAI text-embedding-3-small, 1536d), semantic search understands the intent behind your query. Unlike regex, it finds relevant code by meaning — even across different naming conventions, languages, or comment styles.
"Instantly see all public functions in a 5000-line file without scanning it manually."
Batch queries return structured outlines (functions, types, imports) from specified files. Need the full implementation of a specific function? Query by `filepath` + `function_name` and get the complete snippet. Saves hours of manual code reading.
"Which call chain leads to this deprecated util? Are there circular dependencies?"
Function-level call graphs with caller/callee traversal, cycle detection, and complexity scoring. Understand ripple effects before making changes. View top functions ranked by out-degree to identify core modules at a glance.
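The cycle detection described above can be illustrated with a classic three-color DFS. This is a sketch of the technique, not the actual Rust/Petgraph implementation:

```go
package main

import "fmt"

// CallGraph maps a function name to the functions it calls.
type CallGraph map[string][]string

// hasCycle runs a depth-first search with three-color marking:
// white = unvisited, gray = on the current path, black = done.
// Finding an edge back to a gray node means a call cycle exists.
func hasCycle(g CallGraph) bool {
	const (white, gray, black = 0, 1, 2)
	color := map[string]int{}
	var visit func(n string) bool
	visit = func(n string) bool {
		color[n] = gray
		for _, m := range g[n] {
			if color[m] == gray {
				return true // back edge: cycle found
			}
			if color[m] == white && visit(m) {
				return true
			}
		}
		color[n] = black
		return false
	}
	for n := range g {
		if color[n] == white && visit(n) {
			return true
		}
	}
	return false
}

func main() {
	acyclic := CallGraph{"handler": {"service"}, "service": {"db"}}
	cyclic := CallGraph{"a": {"b"}, "b": {"a"}}
	fmt.Println(hasCycle(acyclic), hasCycle(cyclic)) // false true
}
```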
"Show me the top 3 levels of how a request flows from handler to database."
Depth-limited call tree traversal reveals high-level architectural flow without drowning in details. Perfect for onboarding, code review, and architectural documentation.
Tree-sitter grammars for Rust, Python, JavaScript, TypeScript, Java, C++, Go — code is understood at the syntax level, not just as bytes. Enables precise function extraction, import analysis, and structural queries across polyglot codebases.
A `notify`-based file system watcher with a 20s debounce keeps the code model in sync. Edit files in your IDE — CodeActor re-indexes automatically.
CodeActor employs a Hub-and-Spoke architecture where a central Conductor orchestrates specialized agents for different tasks:
| Agent | Tools | Count |
|---|---|---|
| Conductor | `delegate_repo`, `delegate_coding`, `delegate_chat`, `delegate_devops`, `delegate_meta`, `delegate_browser`, `finish`, `read_file`, `search_by_regex`, `list_dir`, `print_dir_tree` | 12 |
| CodingAgent | All 16 tools (file ops, search, shell, thinking, micro_agent) | 16 |
| RepoAgent | `read_file`, `search_by_regex`, `list_dir`, `print_dir_tree`, `semantic_search`, `query_code_skeleton`, `query_code_snippet` | 7 |
| ChatAgent | `micro_agent`, `thinking`, `finish` | 3 |
| DevOpsAgent | `run_bash`, `read_file`, `list_dir`, `print_dir_tree`, `search_by_regex`, `thinking`, `micro_agent`, `finish` | 8 |
| BrowserAgent | `navigate`, `go_back`, `go_forward`, `reload`, `get_current_url`, `click`, `input`, `scroll`, `wait_element`, `wait`, `extract_text`, `extract_html`, `screenshot`, `pdf`, `execute_js`, `thinking`, `micro_agent`, `finish` | 18 |
Each agent is equipped with tools tailored to its domain, ensuring focused and efficient task execution. The Conductor routes requests to the most appropriate agent based on task type.
| Layer | Technology |
|---|---|
| Language | Go 1.24+, Rust (codebase engine) |
| LLM SDK | github.com/openai/openai-go/v3 |
| HTTP/WS | Gin + Melody |
| TUI | Bubble Tea + Lipgloss + Glamour |
| Code Analysis | Tree-sitter, Petgraph, LanceDB, Axum |
| Diff | github.com/aymanbagabas/go-udiff |
Full architecture documentation →
The Meta-Agent is an autonomous agent designer — it extends the system's capabilities at runtime by creating specialized sub-agents on demand. When the Conductor encounters a task that falls outside the expertise of the built-in agents (Repo/Coding/Chat), it delegates to the Meta-Agent, which:
- Designs a custom agent with a tailored system prompt, tool selection, and result schema
- Executes the task using the designed agent's configuration
- Registers the new agent as a permanent delegate tool available for the rest of the session
- `delegate_security_auditor` — Full-codebase security vulnerability audit
- `delegate_performance_profiler` — Performance bottleneck analysis
- `delegate_db_migration_planner` — Database migration planning and validation
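The "design, execute, register" flow can be pictured as a small data structure. The `AgentSpec` type and `register` helper below are illustrative assumptions, not CodeActor's actual internal types; only the delegate-tool naming convention comes from the examples above:

```go
package main

import "fmt"

// AgentSpec sketches what a Meta-Agent might produce when designing a
// sub-agent at runtime: a system prompt, a tool allowlist, and a result
// schema. Field names are hypothetical.
type AgentSpec struct {
	Name         string
	SystemPrompt string
	Tools        []string
	ResultSchema string // e.g. a JSON Schema fragment
}

// registry maps delegate-tool names to agent specs for the session.
var registry = map[string]AgentSpec{}

// register makes the designed agent available as a permanent delegate
// tool for the rest of the session.
func register(spec AgentSpec) string {
	tool := "delegate_" + spec.Name
	registry[tool] = spec
	return tool
}

func main() {
	tool := register(AgentSpec{
		Name:         "security_auditor",
		SystemPrompt: "Audit the codebase for vulnerabilities.",
		Tools:        []string{"read_file", "search_by_regex", "thinking"},
		ResultSchema: `{"type":"object"}`,
	})
	fmt.Println(tool) // delegate_security_auditor
}
```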
```toml
[agent]
meta_max_steps = 30   # Max LLM steps during Meta-Agent execution (default: 30)
meta_retry_count = 5  # Retry count on JSON parse failure (default: 5)
```

Disable Meta-Agent via startup flag:

```bash
./codeactor tui --disable-agents=meta
```

The `codeactor-codebase` is a standalone Rust service that provides deep code analysis capabilities. It runs as a background HTTP server managed automatically by the Go binary.
Capabilities previewed above in The Intelligence Core: Repo-Agent. Below are the implementation details.
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Health check |
| GET | `/status` | Repo status (functions, files, embedding state) |
| POST | `/investigate_repo` | Top-15 functions by out-degree, directory tree, file skeletons |
| POST | `/semantic_search` | Vector-based semantic code search |
| POST | `/query_code_skeleton` | Batch skeleton extraction from file paths |
| POST | `/query_code_snippet` | Extract code snippet by filepath + function_name |
| POST | `/query_call_graph` | Query call graph by file/function name |
| POST | `/query_hierarchical_graph` | Hierarchical call tree with depth limit |
| POST | `/query_indexing_status` | Embedding indexing status |
| GET | `/draw_call_graph` | ECharts call graph visualization |
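As an illustration, building a request against one of these endpoints looks like the sketch below. The endpoint path is from the table above, but the JSON field names (`query`, `top_k`) are assumptions made for the example — consult the service's actual schema before relying on them:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// searchRequest builds a POST to the /semantic_search endpoint.
// Field names in the body are hypothetical placeholders.
func searchRequest(base, query string, topK int) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"query": query, // hypothetical field
		"top_k": topK,  // hypothetical field
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, base+"/semantic_search", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := searchRequest("http://127.0.0.1:12800", "where is auth logic", 5)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path) // POST /semantic_search
}
```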
The Go binary handles the full lifecycle:
- Dynamic port allocation — Scans from 12800 upward to find an available port
- Binary extraction — Extracts embedded `codeactor-codebase` to `~/.codeactor/bin/`
- Auto-launch — Starts the Rust server as a child process with `--repo-path` and `--address`
- Health polling — Waits up to 30s for `/health` to return 200 before proceeding
- HTTP retry — All codebase API calls retry up to 3 times with backoff
- Cleanup on exit — `defer` kills the child process when the Go process terminates
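The port-scanning step is easy to sketch: try to bind each port starting at 12800 and take the first one that succeeds. A minimal illustration of the idea, not the actual lifecycle code:

```go
package main

import (
	"fmt"
	"net"
)

// findFreePort scans upward from start for the first port we can bind.
// Binding and immediately closing proves the port is currently free
// (a small race window remains before the real server binds it).
func findFreePort(start, limit int) (int, error) {
	for p := start; p < start+limit; p++ {
		ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", p))
		if err != nil {
			continue // port in use, try the next one
		}
		ln.Close()
		return p, nil
	}
	return 0, fmt.Errorf("no free port in [%d, %d)", start, start+limit)
}

func main() {
	port, err := findFreePort(12800, 100)
	if err != nil {
		panic(err)
	}
	fmt.Println("codebase service port:", port)
}
```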
```toml
[http]
codebase_port = 12800

[codebase]
enable_embedding = true
embedding_db_uri = "~/.codeactor/data/lancedb"
graph_db_uri = "~/.codeactor/data/graph"

[codebase.embedding]
model = "text-embedding-3-small"
api_token = "sk-..."
api_base_url = "https://api.openai.com/v1"
dimensions = 1536
```

- Go 1.24+
- `ripgrep` (rg) — for full-text regex search
- A running `codeactor-codebase` service (auto-launched by the Go binary, or set manually)
```bash
git clone https://github.com/your-org/codeactor-agent.git
cd codeactor-agent
go build -o codeactor .
```

Create `$HOME/.codeactor/config/config.toml`:
```toml
[global.llm]
use_provider = "siliconflow"

[global.llm.providers.siliconflow]
model = "deepseek-ai/DeepSeek-V3.2"
temperature = 0.0
max_tokens = 23000
api_base_url = "https://api.siliconflow.cn/v1"
api_key = "your-api-key-here"

[app]
enable_streaming = true

[agent]
conductor_max_steps = 30
coding_max_steps = 50
repo_max_steps = 30
devops_max_steps = 15
meta_max_steps = 30
meta_retry_count = 5
lang = "Chinese"
```

TUI Mode (terminal interface):
```bash
./codeactor tui

# Or with a task file:
./codeactor tui --taskfile TASK.md

# Disable specific agents:
./codeactor tui --disable-agents=meta
```

HTTP Server Mode (API + WebSocket):
```bash
./codeactor http
# Server starts at http://localhost:9800

# Custom port:
./codeactor http --port 9090
```

```bash
cd clients/nodejs-cli && npm install

node index.js run <project-dir> "task description"   # create & stream task
node index.js chat <task-id> <project-dir>           # continue conversation
node index.js status <task-id>                       # query status
node index.js memory <task-id>                       # view conversation history
node index.js history                                # list recent tasks
```

Server defaults to `localhost:9080`. Override via `--host`/`--port` or `CODECACTOR_HOST=host:port`.
| Provider | Config Key | Example Model |
|---|---|---|
| Xiaomi MiMo | `xiaomi` | `mimo-v2-flash` |
| Alibaba Bailian | `aliyun` | `qwen3-coder-plus` |
| SiliconFlow | `siliconflow` | `deepseek-ai/DeepSeek-V3.2` |
| DeepSeek | `deepseek` | `deepseek-ai/DeepSeek-V3` |
| Moonshot | `moonshot` | `moonshotai/Kimi-K2-Instruct` |
| Mistral | `mistral` | `mistralai/devstral-small` |
| Zhipu Z.ai | `zai` | `zai-org/GLM-4.5-Air` |
| OpenRouter | `openrouter` | `qwen3-coder-plus` |
| StreamLake | `streamlake` | Custom endpoints |
| AWS Bedrock | `bedrock` | `us.anthropic.claude-3-7-sonnet-*` |
| Local | `local` | Any OpenAI-compatible server |
- ARCHITECTURE.md — System architecture, modules, data flow, protocols
- Agent_Reference.md — API reference and configuration guide
- Agent_Design.md — Multi-agent design rationale
- Browser_Agent_Design.md — Browser automation architecture and implementation
We welcome contributions of all kinds — bug reports, feature requests, documentation improvements, and code contributions. Whether you're a seasoned Go/Rust developer or just getting started, there's a place for you in the CodeActor community.
Get involved:
- Open an Issue for bugs or feature requests
- Submit a Pull Request for improvements
- Join the discussion in Discussions

