
design-research-agents


Important

Current monthly release: Lovelace Lift - April 2026
Due: May 1, 2026
Tracks: April 2026 work

design-research-agents is the agent-execution layer in the cmudrc design research ecosystem.

It provides typed, composable contracts for direct calls, multi-step runs, workflow orchestration, tool execution, and traceable experimentation.

Overview

This package centers on reproducible agent workflows with a compact public API:

  • Two primary entry points: DirectLLMCall and MultiStepAgent (direct, json, and code modes)
  • A seeded random control-condition agent for packaged-problem studies (SeededRandomBaselineAgent)
  • A prompt-driven workflow agent for packaged-problem studies (PromptWorkflowAgent)
  • Workflow primitives for model, tool, delegate, loop, and memory steps
  • A tool runtime built around Toolbox, with callable, script, and MCP-backed tool configs
  • Hosted and local LLM clients, plus ModelSelector for backend-selection policies
  • Prebuilt coordination and reasoning patterns for plan/execute, propose/critic, debate, routing, round-based coordination, blackboard, tree search, Ralph loops, nominal teams, RAG, and conversation
  • Tracing, structured ExecutionResult outputs, and runnable examples aimed at repeatable experiments
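
The seeded control-condition idea above can be illustrated with a stdlib-only sketch. This is a conceptual illustration of what "seeded random baseline" means for reproducible studies, not the actual implementation or interface of SeededRandomBaselineAgent:

```python
import random


def seeded_baseline_choice(options: list[str], seed: int) -> str:
    """Pick a control-condition answer deterministically from a seed.

    Conceptual sketch only: the real SeededRandomBaselineAgent in
    design_research_agents may expose a different interface.
    """
    rng = random.Random(seed)  # instance-local RNG, global state untouched
    return rng.choice(options)


# The same seed always reproduces the same pick, which is what makes a
# seeded baseline usable as a control condition in packaged-problem studies.
options = ["latch-A", "latch-B", "latch-C"]
assert seeded_baseline_choice(options, seed=42) == seeded_baseline_choice(options, seed=42)
```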

A Super Basic Agent

from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent

with LlamaCppServerLLMClient() as llm_client:
    agent = MultiStepAgent(mode="direct", llm_client=llm_client, max_steps=3)
    result = agent.run(
        prompt="Suggest two design goals for a field-repairable drone battery latch.",
    )

print(result.final_output)

Quickstart

Requires Python 3.12+. Reproducible release installs target Python 3.12 (see .python-version).

If you prefer a guided editor-first flow, use the VS Code Setup Guide. It walks through creating a virtual environment, installing the published package, and running a first script in VS Code.

python -m venv .venv
source .venv/bin/activate
make dev
make test
PYTHONPATH=src python examples/agents/direct_llm_call.py

The base-install path uses OpenAICompatibleHTTPLLMClient and expects a running OpenAI-compatible endpoint. Contributor setup (make dev) installs development tooling only; backend runtimes are explicit extras.
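
"OpenAI-compatible" here refers to the chat-completions wire format. The sketch below builds an illustrative request body of the kind such an endpoint expects; the model name and field values are placeholders, not package defaults, and the client's actual constructor arguments are not shown:

```python
import json

# Illustrative chat-completions request body for an OpenAI-compatible
# endpoint (the wire format a client like OpenAICompatibleHTTPLLMClient
# is expected to speak). Values are placeholders.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a design research assistant."},
        {"role": "user", "content": "Suggest one design goal for a drone battery latch."},
    ],
    "temperature": 0.2,
}

body = json.dumps(payload)
# A running server would typically receive this at POST <base_url>/v1/chat/completions.
```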

For frozen installs, extras, and release maintenance, see Dependencies and Extras.

Examples

Start with examples/README.md for runnable examples grouped by agents, clients, workflows, patterns, model selection, and tools.

Docs

See the published documentation for quickstart guidance, backend setup, workflow/pattern guides, and API docs.

Build docs locally with:

make docs

Public API

The supported public surface is exactly the set of names exported via design_research_agents.__all__.

Top-level exports include:

  • Agent entry points: DirectLLMCall, MultiStepAgent, SeededRandomBaselineAgent, PromptWorkflowAgent
  • Core contracts: ExecutionResult, LLMRequest, LLMMessage, LLMResponse, ToolResult
  • Workflow runtime: Workflow, CompiledExecution, and step contracts for model/tool/delegate/loop/memory behavior
  • Tools: Toolbox, CallableToolConfig, ScriptToolConfig, MCPServerConfig
  • Patterns: conversation, debate, plan/execute, propose/critic, Ralph loops, nominal teams, routing, round-based coordination, blackboard, tree search, and RAG
  • LLM clients: hosted and local adapters, including OpenAI-compatible HTTP plus provider-specific clients
  • Runtime services: ModelSelector and Tracer
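
The Toolbox/CallableToolConfig pairing above wraps plain Python callables as agent tools. A stdlib-only sketch of that dispatch-by-name idea, using hypothetical class and method names rather than the package's real API:

```python
from typing import Callable


class MiniToolbox:
    """Minimal registry mimicking the callable-tool idea.

    Hypothetical sketch only; the real Toolbox and CallableToolConfig
    in design_research_agents may differ.
    """

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs: object) -> object:
        # An agent step would dispatch tool calls by name like this.
        return self._tools[name](**kwargs)


def estimate_mass(volume_cm3: float, density_g_cm3: float) -> float:
    """Example engineering tool: mass from volume and density."""
    return volume_cm3 * density_g_cm3


toolbox = MiniToolbox()
toolbox.register("estimate_mass", estimate_mass)
mass = toolbox.call("estimate_mass", volume_cm3=10.0, density_g_cm3=2.7)  # 27.0
```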

Contributing

Contribution workflow and quality gates are documented in CONTRIBUTING.md.

About

A flexible, modular framework for researching AI agents that design.
