Pi package with custom extensions, themes, and configurations for the Pi Coding Agent
VTSTech • Website • Extensions • Themes • Install
A Pi package containing extensions, themes, and configuration for the Pi Coding Agent. These tools are built and optimized for running Pi on resource-constrained environments such as Google Colab (CPU-only, 12GB RAM) with Ollama serving small local models (0.3B-2B parameters), as well as with cloud providers like OpenRouter, Anthropic, Google, OpenAI, Groq, DeepSeek, and more.
Everything here is battle-tested on real hardware with real models - from small local Ollama models on budget machines to cloud providers via OpenRouter.
```
pi install git:github.com/VTSTech/pi-coding-agent
```
Pi clones the repo, auto-discovers the extensions/ and themes/ directories, and loads everything automatically. Restart Pi and you're done.
Update to the latest version:
```
pi update
```
Pin to a specific tag:
```
pi install git:github.com/VTSTech/pi-coding-agent@v1.3.0
```
Install only what you need. Each extension is published as a standalone npm package under the @vtstech scope. All shared code is bundled into each package, so there are no extra dependencies to install.
```
# Install individual extensions
pi install npm:@vtstech/pi-diag
pi install npm:@vtstech/pi-model-test
pi install npm:@vtstech/pi-security
pi install npm:@vtstech/pi-soul
pi install npm:@vtstech/pi-status
pi install npm:@vtstech/pi-api
pi install npm:@vtstech/pi-ollama-sync
pi install npm:@vtstech/pi-openrouter-sync
pi install npm:@vtstech/pi-react-fallback
pi install npm:@vtstech/pi-long-term-memory

# Update installed packages
pi update
```
Available packages:
| Package | Description |
|---|---|
| `@vtstech/pi-diag` | System diagnostic suite |
| `@vtstech/pi-model-test` | Model benchmark - Ollama & cloud providers |
| `@vtstech/pi-security` | Command/path/SSRF protection |
| `@vtstech/pi-soul` | SoulSpec persona management |
| `@vtstech/pi-status` | System resource monitor & status bar |
| `@vtstech/pi-api` | API mode switcher |
| `@vtstech/pi-ollama-sync` | Ollama → models.json sync |
| `@vtstech/pi-openrouter-sync` | OpenRouter → models.json sync |
| `@vtstech/pi-react-fallback` | ReAct fallback for non-native tool models |
| `@vtstech/pi-long-term-memory` | Persistent memory across sessions |
Or install manually:
```bash
git clone https://github.com/VTSTech/pi-coding-agent.git
cd pi-coding-agent
cp extensions/*.ts ~/.pi/agent/extensions/
cp themes/*.json ~/.pi/agent/themes/
pi -c
```
Requirements:
- Pi Coding Agent v0.66+ installed
- Ollama running locally or on a remote machine (for Ollama features)
- API key for any supported cloud provider (for cloud provider features)
This repo is a standard Pi package. The package.json contains a pi manifest that tells Pi where to find resources:
```json
{
  "name": "@vtstech/pi-coding-agent-extensions",
  "version": "1.2.0",
  "keywords": ["pi-package"],
  "pi": {
    "extensions": ["./extensions"],
    "themes": ["./themes"]
  }
}
```
Pi auto-discovers from conventional directories (extensions/, themes/, skills/, prompts/) even without the manifest. The manifest is included for explicit declaration.
All extensions support remote Ollama instances out of the box - no extra configuration needed. The Ollama URL is resolved automatically from models.json:
`models.json` ollama provider baseUrl → `OLLAMA_HOST` env var → `http://localhost:11434`
This means you can:
- Run Ollama on a separate machine and tunnel it (e.g., Cloudflare Tunnel, Tailscale, SSH)
- Use `/ollama-sync https://your-tunnel-url` to sync models from a remote instance
- The sync writes the remote URL back into `models.json` so all other extensions (`model-test`, `status`, `diag`) automatically use it
- Set `OLLAMA_HOST` as an environment variable fallback if no `models.json` config exists
Model testing and diagnostics work with cloud providers out of the box. The extensions auto-detect the active provider and adapt their behavior:
Supported providers (built-in registry):
| Provider | API Mode | Base URL |
|---|---|---|
| OpenRouter | openai-completions | https://openrouter.ai/api/v1 |
| Anthropic | anthropic-messages | https://api.anthropic.com |
| Google | google-generative-ai | https://generativelanguage.googleapis.com |
| OpenAI | openai-completions | https://api.openai.com/v1 |
| Groq | openai-completions | https://api.groq.com |
| DeepSeek | openai-completions | https://api.deepseek.com |
| Mistral | openai-completions | https://api.mistral.ai |
| xAI | openai-completions | https://api.x.ai |
| Together | openai-completions | https://api.together.xyz |
| Fireworks | openai-completions | https://api.fireworks.ai/inference/v1 |
| Cohere | cohere-chat | https://api.cohere.com |
Provider detection uses a three-tier lookup: user-defined providers in models.json → built-in provider registry → unknown fallback.
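A sketch of that three-tier lookup (the interfaces and function are hypothetical; the registry entries shown are taken from the table above, trimmed for brevity):

```typescript
// Hypothetical shapes for illustration; the real registry lives inside the extensions.
interface ProviderInfo {
  api: string;
  baseUrl: string;
  source: "user" | "builtin" | "unknown";
}

const BUILTIN_PROVIDERS: Record<string, Omit<ProviderInfo, "source">> = {
  openrouter: { api: "openai-completions", baseUrl: "https://openrouter.ai/api/v1" },
  anthropic:  { api: "anthropic-messages", baseUrl: "https://api.anthropic.com" },
  groq:       { api: "openai-completions", baseUrl: "https://api.groq.com" },
};

function detectProvider(
  name: string,
  userProviders: Record<string, { api?: string; baseUrl?: string }>,
): ProviderInfo {
  // Tier 1: user-defined providers in models.json win.
  const user = userProviders[name];
  if (user?.baseUrl) {
    return { api: user.api ?? "openai-completions", baseUrl: user.baseUrl, source: "user" };
  }
  // Tier 2: built-in provider registry.
  const builtin = BUILTIN_PROVIDERS[name];
  if (builtin) return { ...builtin, source: "builtin" };
  // Tier 3: unknown fallback.
  return { api: "openai-completions", baseUrl: "", source: "unknown" };
}
```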
Run a full system diagnostic of your Pi environment.
```
/diag
```
Checks:
- System - OS, CPU, RAM usage, uptime, Node.js version
- Disk - Disk usage via `df -h`
- Ollama - Running? Version? Response latency? Models pulled? Currently loaded in VRAM?
- models.json - Valid JSON? Provider config? Models listed? Cross-references with Ollama
- Settings - settings.json exists? Valid?
- Extensions - Extension files found? Active tools?
- Themes - Theme files? Valid JSON?
- Session - Active model? API mode? Provider? Base URL? Context window? Context usage? Thinking level?
- Security - Active security mode, effective blocklist sizes (mode-aware), command/SSRF/path validation tests, audit log status
Also registers a `self_diagnostic` tool so the AI agent can run diagnostics on command.
Test any model for reasoning, tool usage, and instruction following - works with Ollama and all cloud providers (OpenRouter, OpenAI, Anthropic, etc.).
```
/model-test              # Test current Pi model
/model-test qwen3:0.6b   # Test a specific Ollama model
/model-test --all        # Test every Ollama model
```
The extension runs the extended test flow with 20 reasoning puzzles, multi-step JSON instruction compliance, and chained tool-call generation.
| Test | Method | Scoring |
|---|---|---|
| Reasoning | 20 puzzle tests (logic, math, spatial, commonsense, counter-intuitive, causal, comparative, analogical) | STRONG / MODERATE / WEAK / FAIL / ERROR |
| Instructions | Multi-step JSON schema compliance with automatic repair | STRONG / MODERATE / WEAK / FAIL |
| Tool Usage | Chained tool call generation | STRONG / MODERATE / WEAK / FAIL / ERROR |
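Each category grade is derived from how many sub-tests pass. A toy illustration of that mapping (the thresholds below are assumptions for illustration, not the extension's actual cutoffs):

```typescript
type Grade = "STRONG" | "MODERATE" | "WEAK" | "FAIL";

// Hypothetical thresholds: the real extension may weight categories differently.
function gradeReasoning(passed: number, total: number): Grade {
  const ratio = passed / total;
  if (ratio >= 0.75) return "STRONG";
  if (ratio >= 0.5) return "MODERATE";
  if (ratio >= 0.25) return "WEAK";
  return "FAIL";
}

console.log(gradeReasoning(16, 20)); // "STRONG" -- consistent with the sample run below
```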
Features:
- Automatic provider detection - classifies the active model as `ollama`, `builtin`, or `unknown`
- Cloud provider support - works with OpenRouter, OpenAI, Anthropic, and 11+ built-in providers
- Built-in provider registry - 11 known cloud providers with API modes and base URLs
- Extended reasoning test - 20 diverse puzzles with detailed breakdown
- Multi-step instructions - JSON schema with multiple fields and types
- Chained tool calls - tests multi-tool invocation capability
- Automatic remote Ollama URL - reads from `models.json`, no manual config
- Timeout resilience - 180s default with `--connect-timeout`, auto-retry on failures
- Rate limit delay - configurable delay between tests for cloud providers; skipped for Ollama instances for faster execution
- Thinking model fallback - retries with `think:true` for models like qwen3
- Displays API mode - shows the active API mode from `models.json`
- Native context length - displays true max context from Ollama `/api/show`
- Tool support cache - persistent cache avoids re-probing on every run
- Text-based tool call detection - handles models that output JSON as text
- JSON repair - automatically fixes truncated output
- Progress indicators - UI notifications during testing
- Tab-completion for model names in the `/model-test` command
- Final recommendation: STRONG / GOOD / USABLE / WEAK
Sample output (cloud provider):
```
[model-test-report]
⚡ Pi Model Benchmark v1.3.2
Written by VTSTech
GitHub: https://github.com/VTSTech
Website: www.vts-tech.org

── MODEL: poolside/laguna-xs.2:free ────────────────────────
ℹ️ Provider: openrouter (builtin)

── REASONING TEST (EXTENDED) ───────────────────────────────
ℹ️ Testing 20 reasoning puzzles...
ℹ️ Waiting 10.0s to avoid rate limiting...
✅ snail_wall (logic): STRONG - expected "8", got "8"
✅ math_sequence (math): STRONG - expected "162", got "162"
✅ spatial_directions (spatial): STRONG - expected "south", got "180"
⚠️ commonsense (commonsense): WEAK - expected "the other side", got "?"
❌ code_simplify (code): FAIL - expected "15", got "2"
✅ bat_and_ball (counterint): STRONG - expected "5", got "5"
✅ scale_weight (counterint): STRONG - expected "400", got "400"
✅ syllogism (logic): STRONG - expected "warm-blooded", got "?"
✅ if_then_chain (logic): STRONG - expected "grass grows", got "1"
✅ cause_effect (causal): STRONG - expected "grows", got "?"
✅ relative_quantities (comparative): STRONG - expected "15", got "15"
⚠️ analogy_1 (analogy): WEAK - expected "room", got "?"
✅ analogy_2 (analogy): STRONG - expected "boot", got "?"
✅ physics_1 (commonsense): STRONG - expected "bowling ball", got "80"
⚠️ physics_2 (commonsense): WEAK - expected "hot", got "?"
✅ objects_1 (commonsense): STRONG - expected "scissors", got "?"
✅ social_1 (commonsense): STRONG - expected "polite", got "?"
✅ animals_1 (commonsense): STRONG - expected "water", got "?"
✅ gk_1 (commonsense): STRONG - expected "mars", got "?"
✅ gk_2 (commonsense): STRONG - expected "366", got "366"
✅ Average score: STRONG

── INSTRUCTION FOLLOWING TEST (EXTENDED) ───────────────────
ℹ️ Testing multi-step JSON schema compliance...
ℹ️ Waiting 10.0s to avoid rate limiting...
ℹ️ Time: 1.4s
✅ JSON output valid with correct values (STRONG)
ℹ️ Output: {"name":"Poolside Assistant","can_count":true,"sum":42,"language":"English","colors":["red","blue","green"],"timestamp":"2025-01-09T12:00:00Z"}

── TOOL USAGE TEST (EXTENDED) ──────────────────────────────
ℹ️ Testing chained tool calls...
ℹ️ Waiting 10.0s to avoid rate limiting...
ℹ️ Time: 349ms
✅ Tool calls: get_weather (MODERATE)
ℹ️ Response: I'll get the weather for Tokyo and calculate that multiplication for you.

── SUMMARY ─────────────────────────────────────────────────
✅ Reasoning: STRONG
✅ Instructions: STRONG
✅ Tool Usage: MODERATE
ℹ️ Total time: 1.3m
ℹ️ Score: 3/3 tests passed
ℹ️ Detailed: Reasoning 16/20 tests passed, Instructions 1/1, Tool Usage 1/1

── RECOMMENDATION ──────────────────────────────────────────
⚠️ poolside/laguna-xs.2:free is WEAK - limited capabilities for agent use
```
Runtime switching of API modes, base URLs, thinking settings, and compat flags in models.json.
Supports all 10 Pi API modes:
anthropic-messages · openai-completions · openai-responses · azure-openai-responses · openai-codex-responses · mistral-conversations · google-generative-ai · google-gemini-cli · google-vertex · bedrock-converse-stream
```
/api                     # Show current provider config (mode, URL, compat flags)
/api mode <mode>         # Switch API mode (partial match supported)
/api url <url>           # Switch base URL
/api think on|off|auto   # Toggle thinking for all models in provider
/api compat <key>        # View compat flags
/api compat <key> <val>  # Set compat flag
/api modes               # List all 10 supported API modes
/api providers           # List all configured providers
/api reload              # Hint to run /reload
```
Features:
- Partial mode matching - `/api mode openai-r` matches `openai-responses` (see the sketch after this list)
- Auto-detect local provider - targets the first `localhost`/`ollama` provider by default
- Batch thinking toggle - set `reasoning: true/false` across all models at once
- Compat flag management - get/set `supportsDeveloperRole`, `thinkingFormat`, `maxTokensField`, etc.
- Tab-completion for sub-commands
Command, path, and network security layer for Pi's tool execution with a configurable security mode.
Automatically loaded - protects against:
- Partitioned command blocklist - 41 CRITICAL commands (always blocked: system modification, privilege escalation, network attacks, shell escapes) + 25 EXTENDED commands (blocked in max mode: package management, process control, development tools)
- Mode-aware SSRF protection - 22 ALWAYS_BLOCKED URL patterns (loopback, RFC1918 private ranges, cloud metadata endpoints) + 7 MAX_ONLY patterns (localhost by name, broadcast, link-local, current network) that are allowed in basic mode (see the sketch after this list)
- Security mode toggle - switch between `basic`, `max`, and `off` modes at runtime; persisted to `~/.pi/agent/security.json`
- Path validation - prevents filesystem escape and access to critical system directories; symlinks are dereferenced via `fs.realpathSync()` to block `/tmp/evil → /etc/passwd` bypasses
- Shell injection detection - regex patterns for command chaining, substitution, and redirection
- Audit logging - JSON-lines audit log at `~/.pi/agent/audit.log` with security mode recorded per entry (path exported as `AUDIT_LOG_PATH`)
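A sketch of the mode-aware checks described above. The pattern lists here are tiny illustrative samples, not the extension's real lists (which hold 22 + 7 URL patterns and 41 + 25 commands):

```typescript
import * as fs from "node:fs";

type SecurityMode = "basic" | "max" | "off";

// Sample patterns only: loopback, RFC1918 ranges, cloud metadata endpoint.
const ALWAYS_BLOCKED_URLS = [
  /^https?:\/\/127\./, /^https?:\/\/10\./, /^https?:\/\/192\.168\./,
  /^https?:\/\/169\.254\.169\.254/, // cloud metadata endpoint
];
// Blocked only in max mode; allowed in basic mode.
const MAX_ONLY_URLS = [/^https?:\/\/localhost/];

function isUrlBlocked(url: string, mode: SecurityMode): boolean {
  if (mode === "off") return false;
  if (ALWAYS_BLOCKED_URLS.some((p) => p.test(url))) return true;
  return mode === "max" && MAX_ONLY_URLS.some((p) => p.test(url));
}

// Path validation: dereference symlinks before checking, so a link like
// /tmp/evil -> /etc/passwd cannot bypass the blocklist.
// (fs.realpathSync throws if the path does not exist.)
function isPathBlocked(target: string): boolean {
  const real = fs.realpathSync(target);
  return ["/etc", "/boot", "/sys"].some((dir) => real === dir || real.startsWith(dir + "/"));
}
```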
```
/security mode basic   # Relaxed mode - CRITICAL commands blocked, localhost URLs allowed
/security mode max     # Full lockdown - all 66 commands blocked, strict SSRF
/security mode off     # Disable all security checks
```
Default mode: max - if security.json doesn't exist, the extension starts in max mode and creates it on first use. The current mode is displayed in the status bar (SEC:BASIC, SEC:MAX, or SEC:OFF).
Load and manage AI agent personas defined in SoulSpec format with progressive disclosure support.
Automatically loaded - provides tools and commands for managing AI personas:
- Soul loading - Load personas from multiple locations with progressive disclosure (Level 1-3)
- Multiple locations - Supports global (`~/.pi/agent/souls/`), project-local (`.pi/souls/`), and current directory (`./souls/`) soul storage
- Progressive disclosure - Level 1 (basic info), Level 2 (core persona), Level 3 (extended behavior)
- Embodied agent support - Hardware constraints, safety policies, sensors, and actuators
- Built-in tools - `load_soul`, `list_souls`, `soul_info` for programmatic access
- CLI commands - `/souls` to list available souls, `/soul <name>` to use a soul
- Sample personas - Includes `nova-helper` (coding assistant) and `robot-assistant` (physical robot)
```
/souls                                       # List all available souls
/soul nova-helper                            # Use the Nova Helper persona
/load_soul {"soul_name":"robot-assistant"}   # Load robot assistant persona
/soul_info robot-assistant                   # Get detailed information about a soul
```
Soul locations - The extension searches for souls in multiple directories:
- `~/.pi/agent/souls/` - Global souls directory
- `.pi/souls/` - Project-local souls directory
- `./souls/` - Current directory souls
Sample souls included:
- nova-helper - Helpful coding assistant focused on clear explanations and practical solutions
- robot-assistant - Physical robot assistant with voice interaction and manipulation capabilities
Text-based tool calling bridge for models without native function calling support.
Automatically loaded - no commands needed. When a model lacks native tool calling:
- Parses `Thought:`, `Action:`, `Action Input:` patterns from model output (see the sketch after this list)
- Multi-dialect support: classic ReAct (`Action:`), Function (`Function:`), Tool (`Tool:`), Call (`Call:`) - each with dynamically built regex patterns
- Multiple regex strategies, including parenthetical style and loose matching
- Bridges text-based tool calls into Pi's native tool execution pipeline
- Disabled by default; toggle via `/react-mode` with persistent config across restarts
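A minimal single-dialect sketch of the classic ReAct pattern (the extension builds similar regexes dynamically for each dialect; this toy version handles only flat, single-level JSON input):

```typescript
interface ParsedToolCall {
  thought?: string;
  tool: string;
  input: unknown;
}

// Parse a classic ReAct block from free-form model text.
function parseReact(text: string): ParsedToolCall | undefined {
  const thought = text.match(/Thought:\s*(.+)/)?.[1]?.trim();
  const tool = text.match(/Action:\s*([\w-]+)/)?.[1];
  const rawInput = text.match(/Action Input:\s*(\{[\s\S]*?\})/)?.[1];
  if (!tool) return undefined; // no text-based tool call detected
  let input: unknown = {};
  try {
    input = rawInput ? JSON.parse(rawInput) : {};
  } catch {
    input = rawInput; // fall back to the raw string if JSON parsing fails
  }
  return { thought, tool, input };
}

parseReact('Thought: need the weather\nAction: get_weather\nAction Input: {"city":"Tokyo"}');
// -> { thought: "need the weather", tool: "get_weather", input: { city: "Tokyo" } }
```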
Persistent memory across sessions with automatic injection and AI-driven creation.
```
/memory add <text>   - Add memory (with optional tags)
/memory list         - List all memories
/memory clear        - Clear memories (preserves metadata)
/memory meta         - Show metadata
/memory-gate         - Toggle memory creation gate
/memory help         - Show help
```
Features:
- Persistent Storage: Memories survive across sessions and restarts
- Auto-Injection: Memory automatically injected at session start, BEFORE the AI generates its first response
- AI-Driven Creation: AI can request memories via the `create_memory` tool
- Memory Gate: Confirm before creating memories (enabled by default)
- Tag Organization: Organize memories with tags
- Token Management: ~4k token window with auto-summarization

Memory Injection Hooks (sketched below):
- `pre_session_start` - Ensures metadata is complete
- `session_start` - Displays memory context to user
- `before_provider_request` - Prepends memory to the API request
Storage: `.pi/agent/long-term-memory.json`
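A sketch of the `before_provider_request` hook. The hook name and storage path come from above, but the registration API shown here is an assumed shape for illustration, not Pi's documented extension interface:

```typescript
import * as fs from "node:fs";

const MEMORY_PATH = ".pi/agent/long-term-memory.json";

// Hypothetical registration surface: `pi.on(event, handler)` and a mutable
// ctx.request are assumptions, not Pi's actual API.
export function register(pi: { on(event: string, fn: (ctx: any) => void): void }) {
  pi.on("before_provider_request", (ctx) => {
    if (!fs.existsSync(MEMORY_PATH)) return;
    const { memories = [] } = JSON.parse(fs.readFileSync(MEMORY_PATH, "utf8"));
    const block = memories.map((m: { text: string }) => `- ${m.text}`).join("\n");
    // Prepend memory as a system message so it lands before the model's
    // first response of the session.
    ctx.request.messages.unshift({
      role: "system",
      content: `Long-term memory:\n${block}`,
    });
  });
}
```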
Auto-populate models.json with all available Ollama models - works with local and remote instances.
```
/ollama-sync                           # Sync from models.json URL (or localhost)
/ollama-sync https://your-tunnel-url   # Sync from a specific remote URL
```
- Queries Ollama `/api/tags` for available models, local or remote (see the sketch after this list)
- Writes the actual Ollama URL back into `models.json` so other extensions pick it up automatically
- URL priority: CLI argument → existing `models.json` baseUrl → `OLLAMA_HOST` env → localhost
- Preserves existing provider config (apiKey, compat settings)
- Defaults to `openai-completions` API mode (correct for Ollama's `/v1/chat/completions` endpoint)
- Sorts models by size (smallest first)
- Auto-detects reasoning-capable models (deepseek-r1, qwq, qwen3, o1, o3, think, reason)
- Merges with existing per-model settings
- Per-model metadata in sync report (parameter size, quantization level, model family)
- Registered as both `/ollama-sync` slash command and `ollama_sync` tool
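A sketch of the sync's first step, assuming only Ollama's public `/api/tags` response shape (`name`, `size`, and a `details` object); the function name and error handling are illustrative:

```typescript
interface OllamaTag {
  name: string;
  size: number;
  details?: { parameter_size?: string; quantization_level?: string; family?: string };
}

// List models from an Ollama instance, sorted by size (smallest first),
// as the extension does before writing them into models.json.
async function listOllamaModels(baseUrl: string): Promise<OllamaTag[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama /api/tags failed: ${res.status}`);
  const { models } = (await res.json()) as { models: OllamaTag[] };
  return models.sort((a, b) => a.size - b.size);
}

// Reasoning auto-detection by name hints, per the list above.
const REASONING_HINTS = ["deepseek-r1", "qwq", "qwen3", "o1", "o3", "think", "reason"];
const isReasoningModel = (name: string) =>
  REASONING_HINTS.some((hint) => name.includes(hint));
```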
Add OpenRouter models to models.json from URLs or bare model IDs.
```
/openrouter-sync <url-or-id> [url-or-id ...]
/or-sync <url-or-id> [url-or-id ...]           # Alias
```
- Accepts full OpenRouter URLs (`https://openrouter.ai/model/name:free`) or bare IDs (`model/name:free`) - see the sketch after this list
- Multiple models in one command
- Strips query parameters and fragments from URLs before extracting the model name
- Creates an `openrouter` provider in models.json if missing (inherits baseUrl/api from the built-in provider registry)
- Appends models, never removes existing entries
- Reorders providers so openrouter sits above ollama
- Registered as both `/openrouter-sync` slash command (alias `/or-sync`) and `openrouter_sync` tool
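A sketch of normalizing an argument to a bare model ID, including the query/fragment stripping described above (the function name is illustrative):

```typescript
// Accept either a full OpenRouter URL or a bare model ID.
function extractModelId(arg: string): string {
  if (!arg.startsWith("http")) return arg; // already a bare ID
  // URL.pathname excludes the query string and fragment, so both are
  // stripped automatically; drop the leading slash to get the ID.
  const url = new URL(arg);
  return url.pathname.replace(/^\//, "");
}

extractModelId("https://openrouter.ai/openai/gpt-oss-120b:free?tab=api#pricing");
// -> "openai/gpt-oss-120b:free"
extractModelId("model/name:free");
// -> "model/name:free"
```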
Adds composable named status items to the framework footer using ctx.ui.setStatus(). Each metric gets its own slot so it coexists cleanly with other extensions' status items.
CPU/RAM/Swap are only shown when using a local Ollama provider (not for cloud/remote). For cloud providers, system metrics are omitted. Model name, session tokens, and context usage are shown by the framework - not duplicated here. All labels use dimmed coloring; all values use green highlighting.
Status slots (updated every 5s, 1s for an active tool):
- CtxMax + RespMax - combined slot showing native model context window and max response/completion tokens (e.g., `CtxMax:33k RespMax:16.4k`)
- Resp - agent loop duration via `agent_start`/`agent_end` events
- CPU% - per-core delta via `os.cpus()` (local Ollama only; see the sketch below)
- RAM - used/total via `os.totalmem()`/`os.freemem()` (local Ollama only)
- Swap - used/total from `/proc/meminfo` (shown only when swap is active, local only)
- Generation params - temperature, top_p, top_k, num_predict, context size, reasoning_effort (dimmed)
- SEC - security mode indicator (`SEC:BASIC` or `SEC:MAX`) + session-scoped blocked count + 3s flash on blocked tools (resets on shutdown)
- Active tool - live elapsed timer with `>` indicator while a tool is running
- Prompt - system prompt size as `chars chr tokens tok`, displayed on agent start
- Pi version - `pi:0.66.1` fetched once at `session_start` (dim label + green value, always last slot)
All slots are cleared on session shutdown. Metrics that the framework already provides (model name, session tokens, context usage, thinking level) are intentionally omitted to avoid duplication.
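A sketch of the per-core delta technique behind the CPU% slot: sample `os.cpus()` twice and compare idle vs. total tick counts (the sampling interval and helper names are illustrative):

```typescript
import * as os from "node:os";

// Sum idle and total ticks across all cores.
function cpuTicks(): { idle: number; total: number } {
  return os.cpus().reduce(
    (acc, cpu) => {
      const total = Object.values(cpu.times).reduce((a, b) => a + b, 0);
      return { idle: acc.idle + cpu.times.idle, total: acc.total + total };
    },
    { idle: 0, total: 0 },
  );
}

// CPU utilization over a sampling window, as a 0-100 percentage.
async function cpuPercent(sampleMs = 1000): Promise<number> {
  const start = cpuTicks();
  await new Promise((resolve) => setTimeout(resolve, sampleMs));
  const end = cpuTicks();
  const idle = end.idle - start.idle;
  const total = end.total - start.total;
  return total === 0 ? 0 : Math.round((1 - idle / total) * 100);
}

cpuPercent().then((p) => console.log(`CPU% ${p}`));
```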
A Matrix movie-inspired theme with neon green on pure black. Designed for terminal aesthetics and extended coding sessions.
```
/theme matrix
```
Color palette:
| Token | Color | Usage |
|---|---|---|
| `green` | `#39ff14` | Primary text - neon green |
| `brightGreen` | `#7fff00` | Accents, headings, inline code, highlights |
| `phosphor` | `#66ff33` | Links, tool titles, code block text, secondary text |
| `glowGreen` | `#00ff41` | Thinking text, quotes |
| `fadeGreen` | `#00cc33` | Muted text, borders |
| `hotGreen` | `#b2ff59` | Numbers, emphasis |
| `yellow` | `#eeff00` | Status bar active tool timer |
| Background | `#000000` | Pure black base |
```
# 1. Install the package
pi install git:github.com/VTSTech/pi-coding-agent

# 2. Restart Pi
pi -c

# 3. Sync your Ollama models into Pi (or use a cloud provider)
/ollama-sync                           # Local Ollama
/ollama-sync https://your-tunnel-url   # Remote Ollama (e.g., Cloudflare Tunnel)

# 4. Reload Pi to pick up model changes
/reload

# 5. Run diagnostics to verify everything
/diag

# 6. Benchmark your models
/model-test --all

# 7. (Optional) Use long-term memory for persistent sessions
/memory list                               # View saved memories
/memory add "Remember to use TypeScript"   # Add a memory
```
If Ollama is running on a different machine, expose it via a tunnel and point Pi at it:
```
# On the Ollama machine - create a tunnel (example with cloudflared)
cloudflared tunnel --url http://localhost:11434

# In Pi - sync models from the tunnel URL
/ollama-sync https://your-tunnel-url.trycloudflare.com
```
The URL gets saved to models.json and all extensions use it automatically. No need to set OLLAMA_HOST or pass the URL again.
Pi handles cloud providers natively - just set your API key in the environment and select a model:
```
export OPENROUTER_API_KEY="sk-or-..."

# In Pi - select a cloud model
/model openrouter/openai/gpt-oss-120b:free

# Test it
/model-test
```
A minimal models.json for a local Ollama provider looks like this:
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "compat": {
        "supportsDeveloperRole": false,
        "supportsReasoningEffort": false
      },
      "models": []
    }
  }
}
```
Use `/ollama-sync` to auto-populate the models array and set the correct `baseUrl` from your Ollama instance.
Optimized for CPU-only environments with limited RAM:
```json
{
  "defaultProvider": "ollama",
  "defaultModel": "granite4:350m",
  "defaultThinkingLevel": "off",
  "theme": "matrix",
  "compaction": {
    "enabled": true,
    "reserveTokens": 2048,
    "keepRecentTokens": 8000
  }
}
```
Pi supports multiple API backends via the `api` field in models.json. For Ollama, use `openai-completions`, which maps to Ollama's native `/v1/chat/completions` endpoint. Other available modes:
| API Mode | Use Case |
|---|---|
| `openai-completions` | Ollama, OpenAI-compatible `/v1/chat/completions` |
| `openai-responses` | OpenAI Responses API (`/v1/responses`) |
| `anthropic-messages` | Anthropic native API |
| `google-generative-ai` | Gemini API |
| `google-vertex` | Google Vertex AI |
| `mistral-conversations` | Mistral API |
| `bedrock-converse-stream` | Amazon Bedrock |
See Pi's AI package docs for the full list.
These extensions are optimized for running Pi on Google Colab with CPU-only and 12GB RAM. Here's the recommended Ollama launch configuration:
```python
import subprocess, os

# Install Ollama (the install script must be piped through a shell)
subprocess.run("curl -fsSL https://ollama.com/install.sh | sh", shell=True, check=True)

# Environment tuning for CPU-only 12GB
os.environ["OLLAMA_HOST"] = "0.0.0.0:11434"
os.environ["OLLAMA_CONTEXT_LENGTH"] = "4096"    # Reduce from 262k default
os.environ["OLLAMA_MAX_LOADED_MODELS"] = "1"    # Only one model in memory
os.environ["OLLAMA_KEEP_ALIVE"] = "2m"          # Unload after 2min idle
os.environ["OLLAMA_KV_CACHE_TYPE"] = "f16"      # Use f16 for KV cache
os.environ["OLLAMA_MODELS"] = "/tmp/ollama"     # Store in tmpfs (RAM disk)
os.environ["BATCH_SIZE"] = "512"                # Smaller batches for CPU
os.environ["CUDA_VISIBLE_DEVICES"] = ""         # Hide GPUs to force CPU mode

# Start Ollama (inherits the environment set above)
subprocess.Popen(["ollama", "serve"])
```
Recommended small models for this setup:

| Model | Params | Size | Reasoning | Tools | Best For |
|---|---|---|---|---|---|
| `granite4:350m` | 352M | 676 MB | ❌ | ✅ | Fast tasks, tool calling |
| `qwen3:0.6b` | 752M | 498 MB | ✅ | ✅ | Small footprint, native tools |
| `qwen3.5:0.8b` | ~800M | 1.0 GB | ✅ | ✅ | Daily driver |
| `qwen2.5-coder:1.5b` | 1.5B | 940 MB | ❌ | ✅ | Code tasks |
| `llama3.2:1b` | 1.2B | 1.2 GB | ❌ | ✅ | General use |
| `qwen3.5:2b` | 2.3B | 2.7 GB | ✅ | ✅ | Best quality (fits 12GB) |
See TESTS.md for full benchmark results across all tested Ollama and cloud provider models.
```
pi-coding-agent/
├── extensions/
│   ├── api.ts                # API mode switcher - modes, URLs, thinking, compat flags
│   ├── diag.ts               # System diagnostic suite
│   ├── model-test.ts         # Model benchmark - Ollama & cloud providers
│   ├── ollama-sync.ts        # Ollama → models.json sync
│   ├── openrouter-sync.ts    # OpenRouter → models.json sync
│   ├── react-fallback.ts     # ReAct fallback for non-native tool models
│   ├── security.ts           # Command/path/SSRF protection
│   ├── soul.ts               # SoulSpec persona management
│   └── status.ts             # System resource monitor & status bar
├── shared/
│   ├── debug.ts              # Conditional debug logging
│   ├── format.ts             # Shared formatting utilities
│   ├── model-test-utils.ts   # Shared test utilities, config, history
│   ├── ollama.ts             # Ollama API helpers, provider detection, mutex, retry
│   ├── react-parser.ts       # Multi-dialect ReAct text parser
│   ├── security.ts           # Security validation, SSRF, DNS rebinding, audit log
│   └── types.ts              # TypeScript types & error classes
├── themes/
│   └── matrix.json           # Matrix movie theme
├── individual-packages/      # Source for individual npm packages
│   ├── pi-shared/            # Shared utilities (bundled into extensions)
│   ├── pi-api/               # API mode switcher
│   ├── pi-diag/              # System diagnostics
│   ├── pi-model-test/        # Model benchmarking
│   ├── pi-ollama-sync/       # Ollama synchronization
│   ├── pi-openrouter-sync/   # OpenRouter synchronization
│   ├── pi-react-fallback/    # ReAct fallback
│   ├── pi-security/          # Security extensions
│   ├── pi-soul/              # SoulSpec personas
│   └── pi-status/            # System monitoring
├── dist/                     # Built npm packages (published to npmjs.com)
├── scripts/
│   ├── build-tgz.sh          # Build all individual .tgz packages
│   ├── bump-version.sh       # Linux/macOS version bump script
│   └── bump-version.ps1      # Windows PowerShell version bump script
├── CHANGELOG.md              # Version history
├── TESTS.md                  # Model benchmark results
├── VERSION                   # Single source of truth for version
├── package.json              # Pi package manifest
├── README.md
└── LICENSE
```
Written by VTSTech
www.vts-tech.org • GitHub • veritas@vts-tech.org
Optimizing AI agent development for resource-constrained environments.