This document walks through a complete, lightweight implementation of the Lilith architecture: a runnable Python project that demonstrates modular "brain regions" communicating via emergent signaling and evolving over generations.


📁 Project Structure

```
lilith/
├── README.md
├── requirements.txt
├── config.yaml
├── lilith/
│   ├── __init__.py
│   ├── core.py          # Base classes: Module, Signal, Environment, Brain
│   ├── modules.py       # Thinking, Memory, Sensory, Regulatory modules
│   ├── signaling.py     # Neurotransmitter-inspired token system
│   ├── evolution.py     # Genetic algorithm for evolving module parameters
│   └── main.py          # Entry point / demo
└── examples/
    └── run_evolution.py
```

📄 File Contents

### `README.md`

# Lilith

A lightweight implementation of a developmental, modular AI architecture inspired by the Lilith theoretical framework.

**Core Ideas:**
- Modular "brain regions" (Thinking, Memory, Sensory, Regulatory)
- Communication via emergent, neurotransmitter-inspired token signals
- Developmental learning through environmental interaction
- Evolutionary optimization of module parameters

**Lightweight by Design:**
- Uses local JSON for persistent memory
- LLM calls only when necessary (configurable)
- Small population genetic algorithm
- Runs on a single machine
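
The local-JSON persistence mentioned above takes only a few lines; here is a minimal standalone sketch (the real store lives in `MemoryModule` below; the temp-file path here is purely illustrative):

```python
import json
import os
import tempfile

# Toy sketch of the JSON-backed memory store; the path is illustrative.
path = os.path.join(tempfile.gettempdir(), "lilith_memory_demo.json")
memories = [{"content": "2+2=4", "strength": 0.9}]

with open(path, "w") as f:
    json.dump(memories, f, indent=2)   # persist to disk
with open(path) as f:
    loaded = json.load(f)              # reload on next startup

print(loaded[0]["content"])  # → 2+2=4
```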

## Quick Start

```bash
pip install -r requirements.txt
python -m lilith.main
```

## Architecture

```
┌─────────────┐     ┌─────────────┐
│  Sensory    │────▶│  Thinking   │
│  Module     │     │  Module     │
└─────────────┘     └──────┬──────┘
                           │
                           ▼
┌─────────────┐     ┌─────────────┐
│  Memory     │◀───▶│ Regulatory  │
│  Module     │     │  Module     │
└─────────────┘     └─────────────┘
```

Modules communicate via Signal objects with properties:

- `type`: `"excitatory"`, `"inhibitory"`, or `"modulatory"`
- `strength`: a float in the range 0.0 to 1.0
- `content`: arbitrary payload data
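
As a standalone sketch of that contract (mirroring the dataclass defined in `lilith/core.py` below, with the same strength clamping the signal factory applies):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any

class SignalType(Enum):
    EXCITATORY = "excitatory"
    INHIBITORY = "inhibitory"
    MODULATORY = "modulatory"

def clamp(strength: float) -> float:
    """Keep strength in the documented 0.0-1.0 range."""
    return min(1.0, max(0.0, strength))

@dataclass
class Signal:
    source: str
    target: str
    type: SignalType
    strength: float
    content: Any

# An out-of-range strength is clamped to 1.0 on the way in.
sig = Signal("sensory", "thinking", SignalType.EXCITATORY,
             clamp(1.3), {"perception": "a red square"})
print(sig.type.value, sig.strength)  # → excitatory 1.0
```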

## Evolution

The genetic algorithm evolves:

- Module prompt templates
- Signal thresholds
- Learning rates

Fitness is measured by task performance in a simulated environment.
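
Selection in the optimizer is tournament-based; a self-contained sketch of that step (using plain dicts in place of the `Genome` dataclass defined later):

```python
import random

def tournament_select(population, k=3):
    """Return the fittest of k randomly sampled genomes."""
    return max(random.sample(population, k), key=lambda g: g["fitness"])

pop = [{"temperature": 0.5 + 0.05 * i, "fitness": i / 9} for i in range(10)]
# With k equal to the population size the winner is always the global best.
winner = tournament_select(pop, k=len(pop))
print(winner["fitness"])  # → 1.0
```

Smaller `k` keeps selection pressure gentle: weak genomes still reproduce occasionally, which preserves diversity.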

## License

MIT


---

### `requirements.txt`

```
pydantic>=2.0.0
openai>=1.0.0
pyyaml>=6.0
numpy>=1.24.0
```


---

### `config.yaml`

```yaml
llm:
  provider: "deepseek"
  model: "deepseek-chat"
  api_key_env: "DEEPSEEK_API_KEY"
  base_url: "https://api.deepseek.com/v1"

evolution:
  population_size: 20
  generations: 50
  mutation_rate: 0.2
  crossover_rate: 0.7

modules:
  thinking:
    temperature: 0.7
    max_tokens: 500
  memory:
    storage_path: "./memory_store.json"
  regulatory:
    homeostasis_target: 0.5
    decay_rate: 0.01

```

### `lilith/__init__.py`

```python
from .core import Module, Signal, Environment, Brain
from .modules import ThinkingModule, MemoryModule, SensoryModule, RegulatoryModule
from .signaling import NeurotransmitterType, SignalFactory
from .evolution import GeneticOptimizer, Genome

__all__ = [
    "Module", "Signal", "Environment", "Brain",
    "ThinkingModule", "MemoryModule", "SensoryModule", "RegulatoryModule",
    "NeurotransmitterType", "SignalFactory",
    "GeneticOptimizer", "Genome"
]
```

### `lilith/core.py`

```python
"""Core abstractions for Lilith: Module, Signal, Environment, Brain."""
from abc import ABC, abstractmethod
from dataclasses import dataclass, field, replace
from typing import Dict, List, Any, Optional
from enum import Enum
import uuid

class SignalType(Enum):
    EXCITATORY = "excitatory"      # Amplifies activity
    INHIBITORY = "inhibitory"      # Dampens activity
    MODULATORY = "modulatory"      # Changes response characteristics

@dataclass
class Signal:
    """A message passed between modules, inspired by neurotransmitters."""
    source: str
    target: str
    type: SignalType
    strength: float  # 0.0 to 1.0
    content: Any
    id: str = field(default_factory=lambda: str(uuid.uuid4())[:8])

    def to_dict(self) -> Dict:
        return {
            "source": self.source,
            "target": self.target,
            "type": self.type.value,
            "strength": self.strength,
            "content": self.content,
            "id": self.id
        }

class Module(ABC):
    """Base class for all brain modules."""

    def __init__(self, name: str, config: Optional[Dict] = None):
        self.name = name
        self.config = config or {}
        self.state: Dict[str, Any] = {}
        self.connections: List[str] = []  # Names of modules this one connects to

    @abstractmethod
    async def process(self, signals: List[Signal]) -> List[Signal]:
        """Process incoming signals and return outgoing signals."""
        ...

    def connect_to(self, module_name: str):
        if module_name not in self.connections:
            self.connections.append(module_name)

    def __repr__(self):
        return f"{self.__class__.__name__}({self.name})"

class Environment:
    """Simulated environment that provides sensory input and evaluates actions."""

    def __init__(self):
        self.state: Dict[str, Any] = {}
        self.history: List[Dict] = []

    def step(self, action: Any) -> Dict:
        """Execute an action and return observation and reward."""
        raise NotImplementedError("Subclass must implement step()")

    def reset(self):
        self.state = {}
        self.history = []

class Brain:
    """Orchestrates all modules and manages signal flow."""

    def __init__(self, config_path: str = "config.yaml"):
        import yaml
        with open(config_path, 'r') as f:
            self.config = yaml.safe_load(f)

        self.modules: Dict[str, Module] = {}
        self.signal_queue: List[Signal] = []
        self.history: List[Dict] = []

    def add_module(self, module: Module):
        self.modules[module.name] = module

    def wire_modules(self, connections: List[tuple]):
        """Define connections: (source_name, target_name)."""
        for src, tgt in connections:
            if src in self.modules and tgt in self.modules:
                self.modules[src].connect_to(tgt)

    async def step(self, external_input: Optional[Signal] = None) -> List[Signal]:
        """Process one cycle: route signals, let modules process, collect outputs."""
        if external_input:
            self.signal_queue.append(external_input)

        all_outputs = []
        processed: List[Signal] = []
        # Process each module that has incoming signals
        for name, module in self.modules.items():
            incoming = [s for s in self.signal_queue if s.target == name]
            if incoming:
                outputs = await module.process(incoming)
                all_outputs.extend(outputs)
                processed.extend(incoming)
                # Route a *copy* of each output to every connected module,
                # so a single Signal object isn't re-targeted in place.
                for out_signal in outputs:
                    for target in module.connections:
                        routed = replace(out_signal, target=target,
                                         id=str(uuid.uuid4())[:8])
                        self.signal_queue.append(routed)

        # Drop the signals that were just consumed; keep unprocessed ones.
        processed_ids = {s.id for s in processed}
        self.signal_queue = [s for s in self.signal_queue
                             if s.id not in processed_ids]

        self.history.append({"outputs": [s.to_dict() for s in all_outputs]})
        return all_outputs

    def get_state_snapshot(self) -> Dict:
        """Return current brain state for analysis."""
        return {
            "modules": {name: mod.state for name, mod in self.modules.items()},
            "queue_size": len(self.signal_queue)
        }
```
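
The async `process` contract can be exercised without any LLM at all; here is a toy stand-in (a hypothetical `EchoModule`, not part of the package) that simply echoes each payload back:

```python
import asyncio
from typing import List

class EchoModule:
    """Toy stand-in for Module: returns one output per incoming signal."""
    def __init__(self, name: str):
        self.name = name

    async def process(self, signals: List[dict]) -> List[dict]:
        # Echo each payload back, stamped with this module's name.
        return [{"source": self.name, "content": s["content"]} for s in signals]

async def demo():
    module = EchoModule("thinking")
    return await module.process([{"source": "sensory", "content": "ping"}])

outputs = asyncio.run(demo())
print(outputs[0]["source"], outputs[0]["content"])  # → thinking ping
```

Dicts are used here in place of `Signal` objects to keep the sketch dependency-free; the shape of the call is the same.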

### `lilith/signaling.py`

```python
"""Neurotransmitter-inspired signaling system."""
from enum import Enum
from typing import Any
from .core import Signal, SignalType

class NeurotransmitterType(Enum):
    """Specific neurotransmitter analogs."""
    GLUTAMATE = "glutamate"     # Fast excitatory
    GABA = "gaba"               # Fast inhibitory
    DOPAMINE = "dopamine"       # Reward/prediction error
    SEROTONIN = "serotonin"     # Mood/patience
    NOREPINEPHRINE = "norepinephrine"  # Arousal/attention
    ACETYLCHOLINE = "acetylcholine"    # Learning/memory modulation

class SignalFactory:
    """Creates signals with neurotransmitter-like properties."""

    @staticmethod
    def excitatory(source: str, content: Any, strength: float = 0.8) -> Signal:
        return Signal(
            source=source,
            target="",  # To be set by router
            type=SignalType.EXCITATORY,
            strength=min(1.0, max(0.0, strength)),
            content=content
        )

    @staticmethod
    def inhibitory(source: str, content: Any, strength: float = 0.6) -> Signal:
        return Signal(
            source=source,
            target="",
            type=SignalType.INHIBITORY,
            strength=min(1.0, max(0.0, strength)),
            content=content
        )

    @staticmethod
    def modulatory(source: str, content: Any, ntype: NeurotransmitterType,
                   strength: float = 0.5) -> Signal:
        # Tag the payload with its neurotransmitter type; wrap non-dict
        # content so the tag always has somewhere to live.
        if not isinstance(content, dict):
            content = {"data": content}
        content["neurotransmitter"] = ntype.value
        return Signal(
            source=source,
            target="",
            type=SignalType.MODULATORY,
            strength=min(1.0, max(0.0, strength)),
            content=content
        )

    @staticmethod
    def dopamine(source: str, reward_prediction_error: float) -> Signal:
        """Dopamine signal encodes reward prediction error."""
        strength = 0.5 + 0.5 * reward_prediction_error  # Map [-1, 1] to [0, 1]
        return SignalFactory.modulatory(
            source,
            {"rpe": reward_prediction_error},
            NeurotransmitterType.DOPAMINE,
            strength
        )
```
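
The dopamine factory's mapping from a reward prediction error in [-1, 1] to a signal strength in [0, 1] is an affine rescale plus the usual clamp; checked standalone:

```python
def rpe_to_strength(rpe: float) -> float:
    """Map reward prediction error in [-1, 1] to strength in [0, 1]."""
    return min(1.0, max(0.0, 0.5 + 0.5 * rpe))

# Maximally negative, neutral, and maximally positive prediction errors:
print(rpe_to_strength(-1.0), rpe_to_strength(0.0), rpe_to_strength(1.0))
# → 0.0 0.5 1.0
```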

### `lilith/modules.py`

```python
"""Implementations of the four core modules."""
import json
import os
from typing import List, Dict
from datetime import datetime
from openai import AsyncOpenAI

from .core import Module, Signal, SignalType
from .signaling import SignalFactory, NeurotransmitterType

class LLMProvider:
    """Simple wrapper for the DeepSeek API."""
    def __init__(self, config: Dict):
        self.client = AsyncOpenAI(
            api_key=os.getenv(config.get("api_key_env", "DEEPSEEK_API_KEY")),
            base_url=config.get("base_url", "https://api.deepseek.com/v1")
        )
        self.model = config.get("model", "deepseek-chat")

    async def complete(self, prompt: str, temperature: float = 0.7,
                       max_tokens: int = 500) -> str:
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            max_tokens=max_tokens
        )
        return response.choices[0].message.content

class ThinkingModule(Module):
    """Central reasoning module. Processes inputs and generates plans/responses."""

    def __init__(self, name: str = "thinking", config: Dict = None):
        super().__init__(name, config)
        # Use self.config (normalized to {} by the base class) so a None
        # config doesn't crash the lookups below.
        self.llm = LLMProvider(self.config.get("llm", {}))
        self.temperature = self.config.get("temperature", 0.7)
        self.max_tokens = self.config.get("max_tokens", 500)
        self.prompt_template = self.config.get("prompt_template",
            "You are a reasoning module. Based on these inputs:\n{inputs}\n"
            "Generate a concise response or plan. If you receive inhibitory signals, be more cautious."
        )

    async def process(self, signals: List[Signal]) -> List[Signal]:
        # Aggregate inputs with their signal strengths
        inputs_text = ""
        total_excitation = 0.0
        total_inhibition = 0.0

        for sig in signals:
            inputs_text += f"[{sig.source} ({sig.type.value}, strength={sig.strength:.2f})]: {sig.content}\n"
            if sig.type == SignalType.EXCITATORY:
                total_excitation += sig.strength
            elif sig.type == SignalType.INHIBITORY:
                total_inhibition += sig.strength

        # Modulate temperature based on excitation/inhibition balance
        effective_temp = self.temperature * (1 + total_excitation - total_inhibition)
        effective_temp = max(0.1, min(1.5, effective_temp))

        prompt = self.prompt_template.format(inputs=inputs_text)
        if total_inhibition > total_excitation:
            prompt += "\nNote: Inhibitory signals dominate. Be conservative."

        response = await self.llm.complete(prompt, temperature=effective_temp,
                                           max_tokens=self.max_tokens)

        # Decide output strength based on confidence (simulated)
        confidence = 0.5 + 0.3 * (total_excitation - total_inhibition)
        confidence = max(0.1, min(1.0, confidence))

        return [SignalFactory.excitatory(self.name, {"thought": response}, confidence)]

class MemoryModule(Module):
    """Stores and retrieves episodic and semantic memories."""

    def __init__(self, name: str = "memory", config: Dict = None):
        super().__init__(name, config)
        self.storage_path = self.config.get("storage_path", "./memory_store.json")
        self.memories = self._load()
        self.state["access_count"] = 0

    def _load(self) -> List[Dict]:
        if os.path.exists(self.storage_path):
            with open(self.storage_path, 'r') as f:
                return json.load(f)
        return []

    def _save(self):
        with open(self.storage_path, 'w') as f:
            json.dump(self.memories, f, indent=2)

    async def process(self, signals: List[Signal]) -> List[Signal]:
        outputs = []
        for sig in signals:
            if sig.type == SignalType.EXCITATORY:
                # Store memory
                if isinstance(sig.content, dict) and sig.content.get("action") == "store":
                    memory_item = {
                        "timestamp": datetime.now().isoformat(),
                        "content": sig.content.get("data"),
                        "strength": sig.strength,
                        "source": sig.source
                    }
                    self.memories.append(memory_item)
                    self._save()
                    outputs.append(SignalFactory.excitatory(self.name, {"stored": True}, 0.5))

                # Retrieve memory
                elif isinstance(sig.content, dict) and sig.content.get("action") == "retrieve":
                    query = sig.content.get("query", "")
                    # Simple keyword match (could be embedding-based)
                    matches = [m for m in self.memories
                               if query.lower() in str(m["content"]).lower()]
                    # Return the strongest matches
                    matches.sort(key=lambda x: x["strength"], reverse=True)
                    retrieved = matches[:3]
                    outputs.append(SignalFactory.excitatory(
                        self.name,
                        {"retrieved": retrieved, "query": query},
                        min(1.0, len(retrieved) * 0.3)
                    ))

            elif sig.type == SignalType.MODULATORY and isinstance(sig.content, dict):
                # Acetylcholine enhances memory encoding
                if sig.content.get("neurotransmitter") == NeurotransmitterType.ACETYLCHOLINE.value:
                    self.state["encoding_boost"] = sig.strength

        return outputs

class SensoryModule(Module):
    """Interfaces with the environment. Provides perception."""

    def __init__(self, name: str = "sensory", config: Dict = None):
        super().__init__(name, config)
        self.environment = None  # Set via attach_environment()
        self.llm = LLMProvider(self.config.get("llm", {}))

    def attach_environment(self, env):
        self.environment = env

    async def process(self, signals: List[Signal]) -> List[Signal]:
        outputs = []
        # If we have an environment, get an observation
        if self.environment:
            obs = self.environment.state.get("observation", "No observation")
            # Use the LLM to interpret the raw observation into structured perception
            perception_prompt = f"Describe this observation concisely: {obs}"
            perception = await self.llm.complete(perception_prompt,
                                                 temperature=0.3, max_tokens=100)
            outputs.append(SignalFactory.excitatory(self.name, {"perception": perception}, 0.9))

        # Also forward any external signals after interpretation
        for sig in signals:
            if sig.source == "external":
                interpreted = await self.llm.complete(
                    f"Interpret this input: {sig.content}",
                    temperature=0.3, max_tokens=100)
                outputs.append(SignalFactory.excitatory(
                    self.name, {"interpreted": interpreted}, sig.strength))

        return outputs

class RegulatoryModule(Module):
    """Maintains homeostasis, modulates other modules."""

    def __init__(self, name: str = "regulatory", config: Dict = None):
        super().__init__(name, config)
        self.homeostasis_target = self.config.get("homeostasis_target", 0.5)
        self.decay_rate = self.config.get("decay_rate", 0.01)
        self.state["arousal"] = 0.5
        self.state["valence"] = 0.5

    async def process(self, signals: List[Signal]) -> List[Signal]:
        outputs = []

        # Decay arousal over time
        self.state["arousal"] *= (1 - self.decay_rate)

        for sig in signals:
            # Excitatory signals increase arousal
            if sig.type == SignalType.EXCITATORY:
                self.state["arousal"] = min(1.0, self.state["arousal"] + 0.1 * sig.strength)
            # Inhibitory signals decrease arousal
            elif sig.type == SignalType.INHIBITORY:
                self.state["arousal"] = max(0.1, self.state["arousal"] - 0.1 * sig.strength)
            # Dopamine modulates valence
            elif (sig.type == SignalType.MODULATORY
                  and isinstance(sig.content, dict)
                  and sig.content.get("neurotransmitter") == NeurotransmitterType.DOPAMINE.value):
                rpe = sig.content.get("rpe", 0.0)
                self.state["valence"] = 0.5 + 0.5 * rpe

        # Send modulatory signals based on internal state
        if self.state["arousal"] > 0.7:
            # High arousal -> norepinephrine to promote attention
            outputs.append(SignalFactory.modulatory(
                self.name,
                {"level": self.state["arousal"]},
                NeurotransmitterType.NOREPINEPHRINE,
                self.state["arousal"]
            ))
        elif self.state["arousal"] < 0.3:
            # Low arousal -> serotonin to promote calm
            outputs.append(SignalFactory.modulatory(
                self.name,
                {"level": self.state["arousal"]},
                NeurotransmitterType.SEROTONIN,
                1.0 - self.state["arousal"]
            ))

        # Homeostatic drive: if arousal deviates from target, send a corrective signal
        diff = self.state["arousal"] - self.homeostasis_target
        if abs(diff) > 0.2:
            outputs.append(Signal(
                source=self.name,
                target="thinking",
                type=SignalType.MODULATORY,
                strength=abs(diff),
                content={"homeostatic_error": diff}
            ))

        return outputs
```
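
`ThinkingModule` scales its sampling temperature by the excitation/inhibition balance; the arithmetic in isolation:

```python
def effective_temperature(base: float, excitation: float, inhibition: float) -> float:
    """Thinking module's temperature modulation, clamped to [0.1, 1.5]."""
    return max(0.1, min(1.5, base * (1 + excitation - inhibition)))

# Net excitation raises temperature; heavy inhibition hits the 0.1 floor.
print(round(effective_temperature(0.7, 0.9, 0.2), 2))  # → 1.19
print(effective_temperature(0.7, 0.0, 0.9))            # → 0.1
```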

class RegulatoryModule(Module):
    """Maintains homeostasis, modulates other modules."""
    
    def __init__(self, name: str = "regulatory", config: Dict = None):
        super().__init__(name, config)
        self.homeostasis_target = config.get("homeostasis_target", 0.5)
        self.decay_rate = config.get("decay_rate", 0.01)
        self.state["arousal"] = 0.5
        self.state["valence"] = 0.5
    
    async def process(self, signals: List[Signal]) -> List[Signal]:
        outputs = []
        
        # Decay arousal over time
        self.state["arousal"] *= (1 - self.decay_rate)
        
        for sig in signals:
            # Excitatory signals increase arousal
            if sig.type == SignalType.EXCITATORY:
                self.state["arousal"] = min(1.0, self.state["arousal"] + 0.1 * sig.strength)
            # Inhibitory signals decrease arousal
            elif sig.type == SignalType.INHIBITORY:
                self.state["arousal"] = max(0.1, self.state["arousal"] - 0.1 * sig.strength)
            # Dopamine modulates valence
            elif sig.type == SignalType.MODULATORY and sig.content.get("neurotransmitter") == NeurotransmitterType.DOPAMINE.value:
                rpe = sig.content.get("rpe", 0.0)
                self.state["valence"] = 0.5 + 0.5 * rpe
        
        # Send modulatory signals based on internal state
        if self.state["arousal"] > 0.7:
            # High arousal -> norepinephrine to thinking
            outputs.append(SignalFactory.modulatory(
                self.name,
                {"level": self.state["arousal"]},
                NeurotransmitterType.NOREPINEPHRINE,
                self.state["arousal"]
            ))
        elif self.state["arousal"] < 0.3:
            # Low arousal -> serotonin to promote calm
            outputs.append(SignalFactory.modulatory(
                self.name,
                {"level": self.state["arousal"]},
                NeurotransmitterType.SEROTONIN,
                1.0 - self.state["arousal"]
            ))
        
        # Homeostatic drive: if arousal deviates from target, send signal to adjust
        diff = self.state["arousal"] - self.homeostasis_target
        if abs(diff) > 0.2:
            outputs.append(Signal(
                source=self.name,
                target="thinking",
                type=SignalType.MODULATORY,
                strength=abs(diff),
                content={"homeostatic_error": diff}
            ))
        
        return outputs

### `lilith/evolution.py`

```python
"""Genetic algorithm for evolving module parameters."""
import random
import copy
from typing import List, Callable, Optional
from dataclasses import dataclass
import numpy as np

@dataclass
class Genome:
    """Encodes parameters for all modules."""
    thinking_temp: float = 0.7
    thinking_max_tokens: int = 500
    memory_encoding_boost: float = 0.5
    regulatory_homeostasis: float = 0.5
    regulatory_decay: float = 0.01
    prompt_template: str = ""
    fitness: float = 0.0

    def mutate(self, rate: float = 0.2):
        if random.random() < rate:
            self.thinking_temp += random.gauss(0, 0.1)
            self.thinking_temp = max(0.1, min(1.5, self.thinking_temp))
        if random.random() < rate:
            self.thinking_max_tokens += random.choice([-50, 50])
            self.thinking_max_tokens = max(100, min(1000, self.thinking_max_tokens))
        if random.random() < rate:
            self.regulatory_homeostasis += random.gauss(0, 0.05)
            self.regulatory_homeostasis = max(0.2, min(0.8, self.regulatory_homeostasis))
        if random.random() < rate:
            self.regulatory_decay += random.gauss(0, 0.005)
            self.regulatory_decay = max(0.001, min(0.1, self.regulatory_decay))

    @staticmethod
    def crossover(parent1: 'Genome', parent2: 'Genome') -> 'Genome':
        # Uniform crossover: each gene is taken from one parent at random.
        child = Genome()
        child.thinking_temp = random.choice([parent1.thinking_temp, parent2.thinking_temp])
        child.thinking_max_tokens = random.choice([parent1.thinking_max_tokens, parent2.thinking_max_tokens])
        child.regulatory_homeostasis = random.choice([parent1.regulatory_homeostasis, parent2.regulatory_homeostasis])
        child.regulatory_decay = random.choice([parent1.regulatory_decay, parent2.regulatory_decay])
        child.prompt_template = random.choice([parent1.prompt_template, parent2.prompt_template])
        return child

class GeneticOptimizer:
    def __init__(self, population_size: int = 20, generations: int = 50,
                 mutation_rate: float = 0.2, crossover_rate: float = 0.7):
        self.pop_size = population_size
        self.generations = generations
        self.mutation_rate = mutation_rate
        self.crossover_rate = crossover_rate
        self.population: List[Genome] = []
        self.best_genome: Optional[Genome] = None

    def initialize_population(self):
        self.population = [Genome() for _ in range(self.pop_size)]
        # Add some variation
        for g in self.population:
            g.mutate(1.0)

    async def evolve(self, evaluate_func: Callable) -> Genome:
        """Run the GA; evaluate_func is an async callable Genome -> float."""
        self.initialize_population()

        for gen in range(self.generations):
            print(f"Generation {gen+1}/{self.generations}")

            # Evaluate fitness
            for genome in self.population:
                genome.fitness = await evaluate_func(genome)

            # Sort by fitness
            self.population.sort(key=lambda g: g.fitness, reverse=True)
            best = self.population[0]
            if self.best_genome is None or best.fitness > self.best_genome.fitness:
                self.best_genome = copy.deepcopy(best)

            print(f"  Best fitness: {best.fitness:.3f}, "
                  f"Avg: {np.mean([g.fitness for g in self.population]):.3f}")

            # Selection (tournament), with elitism: keep the top 2
            new_pop = []
            new_pop.extend(self.population[:2])

            while len(new_pop) < self.pop_size:
                parent1 = self._tournament_select()
                parent2 = self._tournament_select()
                if random.random() < self.crossover_rate:
                    child = Genome.crossover(parent1, parent2)
                else:
                    child = copy.deepcopy(random.choice([parent1, parent2]))
                child.mutate(self.mutation_rate)
                new_pop.append(child)

            self.population = new_pop

        return self.best_genome

    def _tournament_select(self, k: int = 3) -> Genome:
        tournament = random.sample(self.population, k)
        return max(tournament, key=lambda g: g.fitness)
```
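
Each step in `Genome.mutate` is Gaussian jitter followed by a clamp to a safe range; the temperature gene in isolation (seeded for reproducibility):

```python
import random

def mutate_temp(temp: float, rate: float = 1.0) -> float:
    """Gaussian jitter with the same [0.1, 1.5] clamp Genome.mutate uses."""
    if random.random() < rate:
        temp += random.gauss(0, 0.1)
    return max(0.1, min(1.5, temp))

random.seed(0)
samples = [mutate_temp(0.7) for _ in range(100)]
# The clamp guarantees every mutated value stays in range.
print(all(0.1 <= t <= 1.5 for t in samples))  # → True
```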

### `lilith/main.py`

```python
"""Main entry point: demo of the Lilith brain in a simple environment."""
import asyncio
import yaml
from lilith.core import Brain, Environment
from lilith.modules import ThinkingModule, MemoryModule, SensoryModule, RegulatoryModule
from lilith.signaling import SignalFactory
from lilith.evolution import GeneticOptimizer, Genome

class SimpleEnvironment(Environment):
    """A simple environment that gives tasks and rewards."""
    def __init__(self):
        super().__init__()
        self.tasks = [
            {"question": "What is 2+2?", "answer": "4"},
            {"question": "Name a color in the rainbow.", "answer": "red"},
            {"question": "What is the capital of France?", "answer": "Paris"},
        ]
        self.current_task = 0
        self.state["observation"] = self.tasks[0]["question"]

    def step(self, action: str) -> dict:
        task = self.tasks[self.current_task]
        reward = 1.0 if task["answer"].lower() in action.lower() else 0.0
        self.current_task = (self.current_task + 1) % len(self.tasks)
        self.state["observation"] = self.tasks[self.current_task]["question"]
        return {"reward": reward, "observation": self.state["observation"]}

async def run_brain_with_genome(genome: Genome) -> float:
    """Evaluate a genome by running the brain in the environment."""
    # Load base config
    with open("config.yaml", 'r') as f:
        config = yaml.safe_load(f)

    # Override with genome parameters
    config["modules"]["thinking"]["temperature"] = genome.thinking_temp
    config["modules"]["thinking"]["max_tokens"] = genome.thinking_max_tokens
    config["modules"]["regulatory"]["homeostasis_target"] = genome.regulatory_homeostasis
    config["modules"]["regulatory"]["decay_rate"] = genome.regulatory_decay
    if genome.prompt_template:
        config["modules"]["thinking"]["prompt_template"] = genome.prompt_template

    # Create brain
    brain = Brain("config.yaml")
    brain.config = config  # Override with the modified config

    # Create modules with the modified config. Modules that call the LLM
    # need the top-level llm settings merged into their own section.
    thinking = ThinkingModule("thinking",
                              {**config["modules"]["thinking"], "llm": config["llm"]})
    memory = MemoryModule("memory", config["modules"]["memory"])
    sensory = SensoryModule("sensory",
                            {**config["modules"].get("sensory", {}), "llm": config["llm"]})
    regulatory = RegulatoryModule("regulatory", config["modules"]["regulatory"])

    # Attach environment
    env = SimpleEnvironment()
    sensory.attach_environment(env)

    brain.add_module(thinking)
    brain.add_module(memory)
    brain.add_module(sensory)
    brain.add_module(regulatory)

    # Wiring: sensory -> thinking -> memory; regulatory modulates thinking and memory
    brain.wire_modules([
        ("sensory", "thinking"),
        ("thinking", "memory"),
        ("regulatory", "thinking"),
        ("regulatory", "memory"),
        ("thinking", "regulatory"),
    ])

    total_reward = 0.0
    # Run for a few steps
    for _ in range(5):
        # External input (simulate a user query)
        ext_signal = SignalFactory.excitatory("external", {"query": env.state["observation"]}, 0.8)
        ext_signal.target = "sensory"

        outputs = await brain.step(ext_signal)

        # Find the thinking module's output
        for out in outputs:
            if out.source == "thinking" and isinstance(out.content, dict) and "thought" in out.content:
                action = out.content["thought"]
                result = env.step(action)
                total_reward += result["reward"]

                # Send the reward back as a dopamine signal
                rpe = result["reward"] - 0.5  # simple prediction error
                dopamine_signal = SignalFactory.dopamine("environment", rpe)
                dopamine_signal.target = "regulatory"
                await brain.step(dopamine_signal)

    return total_reward

async def main():
    print("🧠 Lilith Developmental AI Architecture")
    print("=" * 50)

    # Quick demo without evolution
    print("\nRunning single brain instance...")
    dummy_genome = Genome()
    reward = await run_brain_with_genome(dummy_genome)
    print(f"Total reward: {reward}")

    # Evolution demo (reduce the sizes here for a quicker run)
    print("\nEvolving brain parameters...")
    optimizer = GeneticOptimizer(population_size=10, generations=5)
    best = await optimizer.evolve(run_brain_with_genome)
    print(f"\nBest genome: temp={best.thinking_temp:.3f}, "
          f"tokens={best.thinking_max_tokens}, fitness={best.fitness:.3f}")

if __name__ == "__main__":
    asyncio.run(main())
```

🚀 How to Run

  1. Create the directory structure and place all files.
  2. Set your DeepSeek API key: `export DEEPSEEK_API_KEY="your-key"`
  3. Install dependencies: `pip install -r requirements.txt`
  4. Run: `python -m lilith.main`

The demo will run a single brain instance and then evolve parameters over 5 generations. You can increase population size and generations in config.yaml for more thorough evolution.

This is a complete, runnable implementation of the Lilith concepts: modular brain regions, neurotransmitter-inspired signaling, developmental learning via environment interaction, and evolutionary optimization, all lightweight and hackable.
