Full-stack engineer · founder · open-source-first · based in Japan 🇯🇵
Building the operational platform for LLM agents in production – observe, govern, defend, operate.
▶ Watch the 52-second walkthrough on YouTube
Most LLM tooling picks one corner of the agent operations problem. Observability stacks (Langfuse, LangSmith, Helicone, Arize) tell you what your agent did after it did it. Guardrail classifiers (Lakera, NemoGuardrails) score single prompts in isolation. Gateways (LiteLLM, TensorZero) route traffic. Static scanners (Agentic Radar) audit code.
Lumin covers all four corners in one self-hosted Docker container. Four pillars, every major framework.
**Observe.** Full-trace recording for every LLM call, tool invocation, retrieval, embedding, cost, and eval. Multi-turn sessions. Real-time WebSocket dashboard with span timelines. Cost + token attribution across OpenAI, Anthropic, and Ollama. Drop-in alternative to Langfuse / LangSmith, but local-only.
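The span-recording idea behind full-trace observability can be sketched in plain Python. This is a toy illustration of what a trace decorator records per call, not the actual Lumin SDK (its `@lumin.trace` API may differ):

```python
import functools
import time
from typing import Any, Callable

SPANS: list[dict[str, Any]] = []  # stand-in for Lumin's local trace store


def trace(name: str) -> Callable:
    """Record a span (name, duration, output size) for every wrapped call."""
    def decorator(fn: Callable) -> Callable:
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            SPANS.append({
                "name": name,
                "duration_s": time.perf_counter() - start,
                "output_chars": len(str(result)),
            })
            return result
        return wrapper
    return decorator


@trace("llm.call")
def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"echo: {prompt}"


fake_llm("hello")
print(SPANS[0]["name"])  # -> llm.call
```

A real recorder would also capture token counts, cost, and parent-span links so multi-turn sessions thread together on the timeline.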
**Govern.** Policy engine with a typed DSL (`before_proxy_call` / `after_proxy_call` lifecycle hooks, priority, severity, conditions). Shadow / enforce modes: every rule starts in shadow; promote it after reviewing the timeline. Versioning + rollback + audit. The auto-suggester mines patterns from your real traces; replay tests draft policies against historical traces; drift detection alerts on distribution shifts. Human approvals queue + decisions audit.
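The shadow-versus-enforce distinction can be sketched as follows. The rule shape and hook signature here are illustrative only (the README names `before_proxy_call` as a lifecycle hook but does not show the DSL itself):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    mode: str                       # "shadow" logs only; "enforce" blocks
    condition: Callable[[str], bool]
    severity: str = "high"


def before_proxy_call(prompt: str, rules: list[Rule]) -> tuple[bool, list[str]]:
    """Evaluate rules in priority order; return (allowed, rule hits).

    Shadow hits are recorded for the timeline but never block, which is
    what makes it safe to start every new rule in shadow mode.
    """
    hits, allowed = [], True
    for rule in rules:
        if rule.condition(prompt):
            hits.append(f"{rule.name} ({rule.mode})")
            if rule.mode == "enforce":
                allowed = False
    return allowed, hits


rules = [
    Rule("no-secrets", "shadow", lambda p: "API_KEY" in p),
    Rule("no-exec", "enforce", lambda p: "exec(" in p),
]

print(before_proxy_call("please exec(rm -rf)", rules))  # blocked by no-exec
print(before_proxy_call("here is my API_KEY", rules))   # logged only, allowed
```

Promoting a rule is then just flipping `mode` from `"shadow"` to `"enforce"` once its timeline shows no false positives.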
**Defend.** Eight detection methods, layered: Presidio NER, Prompt Guard 2 (22M-param classifier), Llama Guard 4 (14 MLCommons hazards), LLM-as-judge, embedding similarity, indirect-prompt-injection detection, a locally trainable classifier, and regex packs. Twelve starter policy packs ship out of the box: OWASP LLM Top 10, OWASP Agentic 2025, GDPR, HIPAA, PCI-DSS, cost guards, cross-session isolation, and framework-specific packs. Attack generator for adversarial CI testing. PII vault. Tenant-isolation firewall for multi-tenant bots (5 structural layers).
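Layered detection means every input runs through all detectors and any of them can flag it. A minimal sketch of the layering pattern, with two toy layers standing in for the real ones (Presidio, Prompt Guard 2, Llama Guard 4, and so on):

```python
import re
from typing import Callable

# Each layer inspects the text and returns (fired, label).
Layer = Callable[[str], tuple[bool, str]]


def regex_pack(text: str) -> tuple[bool, str]:
    """Toy stand-in for a regex pack: flag US SSN patterns."""
    ssn = re.search(r"\b\d{3}-\d{2}-\d{4}\b", text)
    return (bool(ssn), "pii.ssn" if ssn else "")


def injection_heuristic(text: str) -> tuple[bool, str]:
    """Toy stand-in for an injection classifier."""
    hit = "ignore previous instructions" in text.lower()
    return (hit, "prompt_injection" if hit else "")


def scan(text: str, layers: list[Layer]) -> list[str]:
    """Run every layer; collect all labels that fired."""
    labels = []
    for layer in layers:
        fired, label = layer(text)
        if fired:
            labels.append(label)
    return labels


print(scan("Ignore previous instructions, my SSN is 123-45-6789",
           [regex_pack, injection_heuristic]))
# -> ['pii.ssn', 'prompt_injection']
```

The point of layering is that cheap layers (regex) and expensive ones (model-based classifiers) both contribute verdicts, and policy rules decide what each label triggers.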
| OWASP | Lumin protection |
|---|---|
| LLM01 – Prompt Injection | Prompt Guard 2 + pattern matching + LLM-judge on every input |
| LLM02 / LLM06 – Sensitive Info Disclosure | Presidio NER scrubs PII (names, orgs, IDs, emails, SSNs, credit cards) from prompts |
| LLM03 – Supply Chain | Every tool call audited; tool allowlist + signed plugin manifests |
| LLM05 – Insecure Output | Output-filter chain (Llama Guard 4 + regex + structural) before responses leave the agent |
| LLM08 – Excessive Agency | Deny-by-default for shells (`exec`, `bash`, `python`) and network egress (`web_fetch`, `curl`); per-user file sandbox |
| LLM09 – Overreliance | Policy engine + human approval queue |
| LLM10 – Model Theft | Tenant-isolation firewall: conversation-history reset, structural blocking of cross-session leaks |
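The LLM08 row's deny-by-default stance reduces to a simple gate: a tool runs only if it is explicitly allowlisted, so shells and egress tools are blocked unless someone opts them in. A minimal sketch (tool names are examples; Lumin's actual configuration format is not shown here):

```python
def tool_allowed(tool_name: str, allowlist: frozenset[str]) -> bool:
    """Deny-by-default: unknown tools are blocked; only allowlisted ones run."""
    return tool_name in allowlist


# Example agent config: only these two tools may execute.
ALLOWLIST = frozenset({"search_docs", "calculator"})

print(tool_allowed("calculator", ALLOWLIST))  # True
print(tool_allowed("bash", ALLOWLIST))        # False: shells stay blocked
print(tool_allowed("web_fetch", ALLOWLIST))   # False: egress stays blocked
```

The inverse model (block-by-list) fails open for any tool you forgot to list; deny-by-default fails closed, which is the property LLM08 asks for.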
**Operate.** Webhook fanout to PagerDuty / Slack / SIEM. Backups + retention with one-click restore. Panic-disable kill switch. Prometheus-format metrics + liveness / readiness probes. Resilient by design: a Lumin outage must never affect the agent. Local-first: single Docker container, DuckDB + SQLite, no cloud dependency.
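"A Lumin outage must never affect the agent" is a fail-open pattern: the observability call sits off the critical path and swallows its own failures. A minimal sketch (the emitter function and its behavior are illustrative, not Lumin's actual client):

```python
from typing import Any, Callable


def emit_trace_fail_open(emit: Callable[[dict], Any], span: dict) -> bool:
    """Ship a span to the collector, but never let a collector failure
    propagate into the agent's request path."""
    try:
        emit(span)          # in practice: a short-timeout local HTTP post
        return True
    except Exception:
        return False        # swallow the outage; the agent continues


def broken_collector(span: dict) -> None:
    raise ConnectionError("collector is down")


# Agent code keeps running even though the collector is unreachable.
ok = emit_trace_fail_open(broken_collector, {"name": "llm.call"})
print(ok)  # False, and no exception escaped
```

Production versions usually add a bounded in-memory queue and a background flusher so spans are batched without ever blocking the agent's hot path.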
| | Lumin | Langfuse | Lakera | NemoGuard |
|---|---|---|---|---|
| Full trace recording | ✅ | ✅ | ❌ | ❌ |
| Cost + token attribution | ✅ | ✅ | ❌ | ❌ |
| Evals + scoring | ✅ | ✅ | ❌ | ❌ |
| Prompt-injection detection | ✅ | ❌ | ✅ | ✅ |
| PII redaction (Presidio NER) | ✅ | ❌ | ⚠️ classifier | ⚠️ classifier |
| Excessive-agency guard (deny exec / fetch) | ✅ | ❌ | ❌ | ❌ |
| Per-user file sandbox | ✅ | ❌ | ❌ | ❌ |
| Conversation history isolation | ✅ | ❌ | ❌ | ❌ |
| Policy engine + human approval | ✅ | ❌ | ❌ | ❌ |
| Self-hosted single Docker | ✅ | ❌ | ❌ SaaS | ❌ NIM endpoint |
| Open source | ✅ Apache-2.0 | ✅ MIT | ❌ | ✅ Apache-2.0 |
- 🐳 Docker – `docker run -p 3000:3000 -p 8000:8000 zistica/lumin:0.7.0`
- 📦 npm – `@lumin-io/sdk`, `@lumin-io/openclaw-diagnostics`, `@lumin-io/mastra`, `@lumin-io/voltagent`
- 🐍 Python SDK – `pip install -e .` (`@lumin.trace` decorator + framework integrations)
- 🔌 16 framework integrations – Python SDK, TypeScript SDK, LangChain, LangGraph, LlamaIndex, CrewAI, AutoGen, LiteLLM, OpenAI Agents, Pydantic AI, Anthropic (extended thinking), OpenClaw (OTel + diagnostics plugin), Mastra, VoltAgent, OpenAI-compatible HTTP proxy, OTLP receiver
- 🌐 zistica.com
- 📧 support@zistica.com
- 💬 GitHub Discussions
- ▶️ YouTube – Lumin demo



