Ternary Intelligence Stack (TIS)


Ternlang is a systems programming language, compiler, and high-performance inference runtime built on balanced ternary logic. It is a fundamental architectural shift for Explainable AI (XAI) and a step toward European technological sovereignty.

Built by RFI-IRFOS · Graz, Austria · Whitepaper [https://osf.io/cyn28/files/8hzux]


Technical Pillars

  • Deterministic Uncertainty: Ternlang's trit (affirm/tend/reject) provides a first-class routing mechanism for Uncertainty-Aware AI, eliminating "hallucinated confidence."
  • Sparsity-Aware Inference Engine: Native @sparseskip optimization achieves up to 122x throughput gains by bypassing zero-signal (tend) weights at the hardware primitive level.
  • Explainable AI (XAI) by Design: Every decision is auditable and traceable, fulfilling EU AI Act Articles 13, 14, and 15 mandates for algorithmic transparency and human oversight.
  • Post-Binary Systems Architecture: A full-stack ecosystem including a custom Instruction Set Architecture (ISA), triadic networking, and memory-efficient ternary encoding (a packing sketch follows this list).
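
To make the encoding pillar concrete, here is a hedged Rust sketch of 5-trit block packing: 3^5 = 243 distinct values fit in one byte, versus 10 bits for a naive 2-bits-per-trit encoding, which is where the 1.25x density figure in the benchmarks below comes from. The function names are illustrative, not the runtime's actual encoder.

```rust
/// Pack 5 balanced trits (-1, 0, +1) into one byte.
/// 3^5 = 243 states fit into a u8 (256 states), versus
/// 10 bits for a naive 2-bit-per-trit encoding: 10 / 8 = 1.25x denser.
fn pack5(trits: [i8; 5]) -> u8 {
    // Base-3 encoding with digits in {0, 1, 2} = trit + 1.
    trits.iter().rev().fold(0u8, |acc, &t| acc * 3 + (t + 1) as u8)
}

/// Inverse of pack5: recover the 5 trits from one byte.
fn unpack5(mut byte: u8) -> [i8; 5] {
    let mut out = [0i8; 5];
    for slot in out.iter_mut() {
        *slot = (byte % 3) as i8 - 1;
        byte /= 3;
    }
    out
}

fn main() {
    let trits = [-1, 0, 1, 1, -1];
    assert_eq!(unpack5(pack5(trits)), trits);
    println!("packed byte: {}", pack5(trits));
}
```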

The core type is trit: three values — −1 (reject), 0 (tend), +1 (affirm). The zero state is a first-class routing instruction: "insufficient confidence — do not act yet."

Ternlang provides a machine-readable path to human escalation instead of a forced binary guess. This is the foundation for Post-Binary Intelligence.
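
A minimal Rust sketch of this routing pattern, to make the idea concrete; the enum, variant names, and function below are illustrative only and are not Ternlang syntax or API:

```rust
/// Illustrative three-valued decision type: -1 (reject), 0 (tend), +1 (affirm).
#[derive(Debug, Clone, Copy)]
enum Trit {
    Reject = -1,
    Tend = 0,
    Affirm = 1,
}

/// The zero state is a routing instruction, not a guess:
/// only Affirm acts, Reject blocks, and Tend escalates to a human.
fn route(d: Trit) -> &'static str {
    match d {
        Trit::Affirm => "execute action",
        Trit::Reject => "block action",
        Trit::Tend => "hold: insufficient confidence, escalate to human review",
    }
}

fn main() {
    for d in [Trit::Reject, Trit::Tend, Trit::Affirm] {
        println!("{:?} -> {}", d, route(d));
    }
}
```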


Full Documentation

  • ternlang-root/README.md (full explanation, technical details, and compiler specs)
  • ROADMAP.md (Phases 1–18, session log, priority matrix)
  • Ternlang Studio Preview — our work-in-progress SDK
  • Agent Albert — terminal-native, model-agnostic AI agent built in pure Rust


Team

The Ternary Intelligence Stack is built by a core team of three co-founders from RFI-IRFOS, Graz:

  • Simeon Kepp: Head of Research & Systems Architect.
  • Nikoletta Csonka: Head of Strategic Outreach & EU Relations.
  • Zabih Karimi: Principal Network & ML Engineer.

Read our bios and mission in LEADERSHIP.md.


Performance Benchmarks

| Feature | Performance Gain | Industry Comparison |
|---|---|---|
| Ternary Inference | 2.3x (baseline) | Up to 122x at 99%+ sparsity |
| Data Density | 1.25x improvement | 5-trit block packing (8-bit) |
| Logic Consistency | 100% deterministic | Eliminates binary timeout/null-guessing |
| Safety Latency | < 1 ms hard veto | Axis-6 Veto Hard Gate |
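
To illustrate where the sparsity row above comes from, here is a hedged Rust sketch of a dot product that skips tend (0) weights entirely; the real @sparseskip pass works at the instruction level, and nothing below is taken from the actual runtime:

```rust
/// Ternary dot product over weights in {-1, 0, +1}.
/// Zero (tend) weights contribute nothing, so they are skipped outright;
/// at 99%+ sparsity almost every multiply-accumulate disappears.
fn sparse_ternary_dot(weights: &[i8], inputs: &[f32]) -> f32 {
    weights
        .iter()
        .zip(inputs)
        .filter(|(&w, _)| w != 0)                    // bypass zero-signal weights
        .map(|(&w, &x)| if w > 0 { x } else { -x })  // +1 adds, -1 subtracts: no multiply
        .sum()
}

fn main() {
    let w = [1i8, 0, -1, 0, 0, 1];
    let x = [0.5f32, 2.0, 1.5, 3.0, 4.0, 0.25];
    // Only the three nonzero weights are ever visited.
    assert!((sparse_ternary_dot(&w, &x) - (0.5 - 1.5 + 0.25)).abs() < 1e-6);
}
```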

Quick start

cargo install ternlang-cli

That's it. The ternlang binary is now in your PATH:

ternlang                        # → interactive REPL, start typing trit expressions
ternlang my_program.tern        # → run a .tern file directly
ternlang run my_program.tern    # → same (explicit form)
ternlang build my_program.tern  # → compile to .bet bytecode
ternlang fmt my_program.tern    # → format source file
ternlang repl                   # → interactive REPL (explicit)
ternlang test                   # → run test suite

Or build from source:

git clone https://github.com/eriirfos-eng/ternary-intelligence-stack
cd ternary-intelligence-stack/ternlang-root
cargo build --release
./target/release/ternlang examples/03_rocket_launch.tern

Agent Albert — AI Intelligence Layer


Albert is the sovereign, model-agnostic AI coding CLI and embedded intelligence layer of the Ternary Intelligence Stack. He runs entirely in your terminal, connects to any LLM provider, and never phones home. First went online: 2026-04-24.


Quick Install — One Copy Gets It All

# Install Albert (brings the full agent engine with it)
cargo install albert-cli

# Set your LLM key (pick any provider)
export GEMINI_API_KEY=AIza...          # Google Gemini — default, highest quota
export ANTHROPIC_API_KEY=sk-ant-...   # Anthropic Claude
export OPENAI_API_KEY=sk-...          # OpenAI / GPT-4o
# export XAI_API_KEY=xai-...          # xAI / Grok
# (Ollama: just run `ollama serve` — no key needed)

# Launch
albert-cli

That's it. Albert auto-detects your key and spins up the REPL.


What Albert Can Do

| Capability | Details |
|---|---|
| Multi-provider routing | Claude, GPT-4o, Gemini, Grok, Ollama, Bedrock, Azure — swap with /model |
| Autonomous agent loop | /loop <mission> — runs up to 10 tool-use turns to complete a goal; Ctrl+C aborts cleanly |
| Chain execution | /plan <task> — the LLM decomposes the goal into steps and executes each in sequence |
| Tool harness | read_file, write_file, edit_file, bash, glob_search, grep_search, web_fetch |
| Self-reflection memory | Automatically scores each turn for importance; commits key facts to ~/.ternlang/memory.md and injects them on startup |
| Image input | Attach images with [image: /path/to/file.png] syntax (Gemini multimodal) |
| Slash command library | /plan, /loop, /tdd, /bughunter, /code-review, /build-fix, /refactor, /commit, /pr, /compress, and more — type / to browse |
| Interactive model picker | /model with no arguments shows a full provider-grouped list with descriptions |
| Rate-limit resilience | On 429 errors, automatically falls back to a faster model and retries without crashing |
| Permission layer | read-only, workspace-write, and danger-full-access modes; deny-first AST interception (see the sketch after this table) |
| Session memory | Sliding-window compaction keeps long sessions coherent |
| RTK integration | 60–90% token savings on dev operations |
| MCP support | stdio and network transport for any MCP server |
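
The deny-first permission idea can be sketched as follows; this is an illustrative Rust fragment, not Albert's actual implementation; only the mode and tool names are taken from the table above:

```rust
/// The three permission modes listed in the capability table.
#[derive(Clone, Copy)]
enum Mode {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

/// Deny-first check: an action is blocked unless the current mode
/// explicitly allows it; anything unrecognised falls through to deny.
fn allowed(mode: Mode, action: &str) -> bool {
    match (mode, action) {
        (_, "read_file" | "glob_search" | "grep_search") => true,
        (Mode::WorkspaceWrite | Mode::DangerFullAccess, "write_file" | "edit_file") => true,
        (Mode::DangerFullAccess, "bash") => true,
        _ => false, // default: deny
    }
}

fn main() {
    assert!(!allowed(Mode::ReadOnly, "write_file"));
    assert!(allowed(Mode::WorkspaceWrite, "edit_file"));
    assert!(allowed(Mode::DangerFullAccess, "bash"));
}
```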

Slash Commands

/model          → interactive model picker (all providers)
/loop <goal>    → autonomous multi-turn agent mission
/plan <task>    → decompose + execute step by step
/bughunter      → scan codebase for bugs
/commit         → AI-generated commit message + commit
/compress       → summarise and compact session history
/status         → show model, session, token usage
/help           → browse full command list

Ecosystem

| Crate | Role |
|---|---|
| albert-cli | The albert binary |
| albert-runtime | Session engine, MCP, auth, bash |
| albert-api | Multi-provider LLM client |
| albert-commands | Slash command library |
| albert-tools | Tool execution layer |
| albert-compat | Manifest extraction harness |

Source: agent_albert_cli/
crates.io: albert-cli v0.1.2


Repository layout

| Directory | Contents |
|---|---|
| ternlang-root/ | All Rust crates — compiler, VM, API, MCP server, ML stack |
| ternlang-root/stdlib/ | 293 open-core .tern modules (Tier 1 — free) |
| ternlang-root/examples/ | Runnable .tern examples (medical, finance, aerospace, etc.) |
| ternlang-root/spec/ | BET-ISA spec, language reference, grammar, protocol specs |
| ternlang-root/ternlang-web/ | ternlang.com frontend (GitHub Pages) |
| agent_albert_cli/ | Agent Albert — model-agnostic AI coding CLI + TernStudio intelligence layer |
| eriirfos-eng/ternlang-premium (private) | 28,495+ proprietary .tern modules — Tier 2 / 3 / 4 |

Live API

# Health check
curl https://ternlang.com/health

# MoE-13 ternary decision via MCP (no key required)
curl -X POST https://ternlang.com/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"moe_orchestrate","arguments":{"query":"Should I proceed?"}}}'

StdLib Access

The standard library is split across two repos to protect paid-tier IP:

  • Tier 1 (free): 293 open-core modules in ternlang-root/stdlib/ — clone this repo and use immediately
  • Tier 2/3/4 (paid): 28,495+ proprietary modules in the private eriirfos-eng/ternlang-premium repo

After purchasing: visit ternlang.com/activate — enter your API key + GitHub username and you'll receive a collaborator invite to the private repo automatically.

Full tier breakdown


Licensing

| Tier | Price | Details |
|---|---|---|
| Community (LGPL-3.0) | Free | Compiler, VM, CLI, LSP, 293 stdlib modules + 30 MCP tools (all free) |
| Pro Standard (BSL-1.1) | €99/month | REST API (10,000 calls/month), server-side 3-layer memory, SSE streaming + Tier 2 stdlib |
| Industrial (BSL-1.1) | €349/month | 50,000 API calls, QNN, SEC, T-HAL, TernAudit + Tier 3 stdlib |
| Enterprise (Proprietary) | From €2,500/month | Unlimited API calls |

Commercial licensing: licensing@ternlang.com

