> [!IMPORTANT]
> Current monthly release: Lovelace Lift - April 2026
> Due: May 1, 2026
> Tracks: April 2026 work
design-research-agents is the agent-execution layer in the cmudrc design
research ecosystem.
It provides typed, composable contracts for direct calls, multi-step runs, workflow orchestration, tool execution, and traceable experimentation.
This package centers on reproducible agent workflows with a compact public API:
- Two primary entry points: `DirectLLMCall` and `MultiStepAgent` (`direct`, `json`, and `code` modes)
- A seeded random control-condition agent for packaged-problem studies (`SeededRandomBaselineAgent`)
- A prompt-driven workflow agent for packaged-problem studies (`PromptWorkflowAgent`)
- Workflow primitives for model, tool, delegate, loop, and memory steps
- A tool runtime built around `Toolbox`, with callable, script, and MCP-backed tool configs
- Hosted and local LLM clients, plus `ModelSelector` for backend-selection policies
- Prebuilt coordination and reasoning patterns for plan/execute, propose/critic, debate, routing, round-based coordination, blackboard, tree search, Ralph loops, nominal teams, RAG, and conversation
- Tracing, structured `ExecutionResult` outputs, and runnable examples aimed at repeatable experiments
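The control-condition idea behind `SeededRandomBaselineAgent` can be illustrated in plain Python: a private seeded RNG makes baseline choices exactly repeatable across runs. This standalone sketch does not use the package API; the class and method names here are made up for illustration:

```python
import random

class RandomBaselineSketch:
    """Standalone sketch of a seeded control-condition agent (hypothetical
    name): identical seeds yield identical choices, so a baseline run can
    be replayed exactly in a packaged-problem study."""

    def __init__(self, seed: int) -> None:
        # A private Random instance leaves global RNG state untouched.
        self._rng = random.Random(seed)

    def choose(self, options: list[str]) -> str:
        return self._rng.choice(options)

a = RandomBaselineSketch(seed=42)
b = RandomBaselineSketch(seed=42)
options = ["concept A", "concept B", "concept C"]
print(a.choose(options) == b.choose(options))  # same seed -> same pick
```

Keeping the RNG instance-local (rather than seeding the module-level `random`) is what makes two agents with the same seed behave identically even when other code draws random numbers in between.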
```python
from design_research_agents import LlamaCppServerLLMClient, MultiStepAgent

with LlamaCppServerLLMClient() as llm_client:
    agent = MultiStepAgent(mode="direct", llm_client=llm_client, max_steps=3)
    result = agent.run(
        prompt="Suggest two design goals for a field-repairable drone battery latch.",
    )
    print(result.final_output)
```

Requires Python 3.12+.
Reproducible release installs target Python 3.12 (see .python-version).
If you prefer a guided editor-first flow, use the VS Code Setup Guide. It walks through creating a virtual environment, installing the published package, and running a first script in VS Code.
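Before creating the environment, you can confirm your interpreter meets the documented minimum with a plain standard-library check (independent of this package):

```python
import sys

# Check the documented minimum for this package: Python 3.12+.
ok = sys.version_info >= (3, 12)
print("Python 3.12+:", ok)
```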
```sh
python -m venv .venv
source .venv/bin/activate
make dev
make test
PYTHONPATH=src python examples/agents/direct_llm_call.py
```

The base-install path uses `OpenAICompatibleHTTPLLMClient` and expects a running OpenAI-compatible endpoint. Contributor setup (`make dev`) installs development tooling only; backend runtimes are explicit extras.
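For context, an OpenAI-compatible endpoint speaks the chat-completions JSON shape sketched below. The base URL and model name are placeholders (assumptions), not values shipped by this package:

```python
import json

# Placeholder endpoint (assumption), e.g. a locally hosted server.
BASE_URL = "http://localhost:8080/v1"

# Request shape for the OpenAI-compatible chat-completions API.
payload = {
    "model": "local-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a design research assistant."},
        {"role": "user", "content": "Suggest one design goal."},
    ],
    "temperature": 0.2,
}

# A client POSTs this JSON to f"{BASE_URL}/chat/completions" and reads
# choices[0].message.content from the response.
body = json.dumps(payload)
print(json.loads(body)["model"])
```

Any server exposing this shape (hosted or local) should satisfy the base-install path; the extras select which concrete backend runtime is installed.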
For frozen installs, extras, and release maintenance, see Dependencies and Extras.
Start with examples/README.md for runnable examples grouped by agents, clients, workflows, patterns, model selection, and tools.
See the published documentation for quickstart guidance, backend setup, workflow/pattern guides, and API docs.
Build docs locally with:

```sh
make docs
```

The supported public surface is whatever is exported from `design_research_agents.__all__`.
Top-level exports include:
- Agent entry points: `DirectLLMCall`, `MultiStepAgent`, `SeededRandomBaselineAgent`, `PromptWorkflowAgent`
- Core contracts: `ExecutionResult`, `LLMRequest`, `LLMMessage`, `LLMResponse`, `ToolResult`
- Workflow runtime: `Workflow`, `CompiledExecution`, and step contracts for model/tool/delegate/loop/memory behavior
- Tools: `Toolbox`, `CallableToolConfig`, `ScriptToolConfig`, `MCPServerConfig`
- Patterns: conversation, debate, plan/execute, propose/critic, Ralph loops, nominal teams, routing, round-based coordination, blackboard, tree search, and RAG
- LLM clients: hosted and local adapters, including OpenAI-compatible HTTP plus provider-specific clients
- Runtime services: `ModelSelector` and `Tracer`
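Because `__all__` defines the supported surface, a simple membership check tells you whether a name is public. The snippet below demonstrates the idiom with the standard-library `json` module as a stand-in; once the package is installed, the same check applies to `design_research_agents`:

```python
import json  # stand-in module; substitute design_research_agents once installed

def is_public(module, name: str) -> bool:
    """A name is supported if the module exports it via __all__."""
    return name in getattr(module, "__all__", ())

print(is_public(json, "loads"))    # exported in json.__all__
print(is_public(json, "scanner"))  # importable, but not exported
```

Names reachable by import but absent from `__all__` are internal and may change without notice.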
Contribution workflow and quality gates are documented in CONTRIBUTING.md.