
English | 中文

tRPC-Agent-Python


A production-grade Agent framework deeply integrated with the Python AI ecosystem.
tRPC-Agent-Python provides an end-to-end foundation for agent building, orchestration, tool integration, session and long-term memory, service deployment, and observability, so you can ship reliable and extensible AI applications faster.

Why Choose tRPC-Agent-Python

  • Multi-paradigm agent orchestration: built-in ChainAgent / ParallelAgent / CycleAgent / TransferAgent, plus GraphAgent for graph-based flows.
  • Graph orchestration (GraphAgent): use a DSL to orchestrate Agent / Tool / MCP / Knowledge / CodeExecutor in one unified flow.
  • Deep integration with the Python AI ecosystem: agent extensions (claude-agent-sdk, LangGraph), tool extensions (MCP), knowledge extensions (LangChain), model extensions (LiteLLM), and memory extensions (Mem0).
  • Agent ecosystem extensions: LangGraphAgent / ClaudeAgent / TeamAgent (Agno-like).
  • Tool ecosystem extensions: FunctionTool / File tools / MCPToolset / LangChain Tool / Agent-as-Tool.
  • Complete memory capability (Session / Memory): Session manages messages and state within a single session, while Memory manages cross-session long-term memory and personalization. Persistence supports InMemory / Redis / SQL; Memory also supports Mem0.
  • Production-grade knowledge capability: Built on LangChain components with first-class RAG support.
  • CodeExecutor extension capability: Supports local / container executors for code execution and task grounding.
  • Skills extension capability: Supports SKILL.md-based skill systems for reusable capabilities and dynamic tooling.
  • Connect to multiple LLM providers: OpenAI-like / Anthropic / LiteLLM routing.
  • Serving and observability: Expose HTTP / A2A / AG-UI services through FastAPI, with built-in OpenTelemetry tracing.
  • trpc-claw (OpenClaw-like personal agent): Built on nanobot, tRPC-Agent ships trpc-claw so you can quickly build an OpenClaw-like personal AI agent with Telegram, WeCom, and other channel support.

Use Cases

  • Intelligent customer support and knowledge QA (RAG + session memory)
  • Code generation and engineering automation (ClaudeAgent)
  • Code execution and automated task grounding (CodeExecutor)
  • Agent Skills for reusable capabilities
  • Multi-role collaborative workflows (TeamAgent / multi-agent)
  • Cross-protocol agent service integration (A2A / AG-UI)
  • MCP tool protocol integration and tool ecosystem expansion
  • Unified gateway access and protocol conversion
  • Component-based workflow orchestration using GraphAgent
  • Reusing existing LangGraph workflows in this runtime
  • Build an OpenClaw-like personal AI agent quickly with trpc-claw


Quick Start

Prerequisites

  • Python 3.10+ (Python 3.12 recommended)
  • Available model API key (OpenAI-like / Anthropic, or route via LiteLLM)

Installation

pip install trpc-agent-py

Install optional capabilities as needed:

pip install trpc-agent-py[a2a,ag-ui,knowledge,agent-claude,mem0,langfuse]

Develop a Weather Agent

import asyncio
import os
import uuid

from trpc_agent_sdk.agents import LlmAgent
from trpc_agent_sdk.models import OpenAIModel
from trpc_agent_sdk.runners import Runner
from trpc_agent_sdk.sessions import InMemorySessionService
from trpc_agent_sdk.tools import FunctionTool
from trpc_agent_sdk.types import Content, Part


async def get_weather_report(city: str) -> dict:
    return {"city": city, "temperature": "25°C", "condition": "Sunny", "humidity": "60%"}


async def main():
    model = OpenAIModel(
        model_name=os.environ["TRPC_AGENT_MODEL_NAME"],
        api_key=os.environ["TRPC_AGENT_API_KEY"],
        base_url=os.environ.get("TRPC_AGENT_BASE_URL", ""),
    )

    agent = LlmAgent(
        name="assistant",
        description="A helpful assistant",
        model=model,
        instruction="You are a helpful assistant.",
        tools=[FunctionTool(get_weather_report)],
    )

    session_service = InMemorySessionService()
    runner = Runner(app_name="demo_app", agent=agent, session_service=session_service)

    user_id = "demo_user"
    session_id = str(uuid.uuid4())
    user_content = Content(parts=[Part.from_text(text="What's the weather in Beijing?")])

    async for event in runner.run_async(user_id=user_id, session_id=session_id, new_message=user_content):
        if not event.content or not event.content.parts:
            continue
        for part in event.content.parts:
            if part.text and event.partial:
                print(part.text, end="", flush=True)
            elif part.function_call:
                print(f"\n🔧 [{part.function_call.name}({part.function_call.args})]", flush=True)
            elif part.function_response:
                print(f"📊 [{part.function_response.response}]", flush=True)

    print()

if __name__ == "__main__":
    asyncio.run(main())

Run the Agent

export TRPC_AGENT_API_KEY=xxx
export TRPC_AGENT_BASE_URL=xxxx
export TRPC_AGENT_MODEL_NAME=xxxx
python quickstart.py

trpc-claw Usage

tRPC-Agent ships trpc-claw (trpc_agent_cmd openclaw), built on nanobot, so you can quickly build an OpenClaw-like personal AI agent. Start it with a single command and it runs around the clock: chat through Telegram, WeCom, or any other IM, or use it locally via CLI / UI.

For full configuration and advanced features, see: openclaw.md

Quick Start

1. Generate config

mkdir -p ~/.trpc_claw
trpc_agent_cmd openclaw conf_temp > ~/.trpc_claw/config.yaml

2. Set environment variables

export TRPC_AGENT_API_KEY=your_api_key
export TRPC_AGENT_BASE_URL=your_base_url
export TRPC_AGENT_MODEL_NAME=your_model

3. Run locally

# Force local CLI mode
trpc_agent_cmd openclaw chat -c ~/.trpc_claw/config.yaml

# Local UI
trpc_agent_cmd openclaw ui -c ~/.trpc_claw/config.yaml

4. Connect WeCom / Telegram

Enable the channel in config.yaml, then launch with run:

channels:
  wecom:
    enabled: true
    bot_id: ${WECOM_BOT_ID}
    secret: ${WECOM_BOT_SECRET}
  # or Telegram:
  # telegram:
  #   enabled: true
  #   token: ${TELEGRAM_BOT_TOKEN}
trpc_agent_cmd openclaw run -c ~/.trpc_claw/config.yaml

If no channel is enabled, the run command automatically falls back to local CLI mode for easy debugging.

Documentation

Examples

All examples in the examples directory are runnable. The groups below organize recommended starting points by capability, with short guidance so you can quickly pick what to read first for your scenario.

1. Getting Started and Basic Agents

Recommended first:

Related docs: llm_agent.md / model.md

This group helps you:

  • Run a full end-to-end path from user input to tool call to model output
  • Understand how to consume function_call / function_response events in streaming output
  • Learn baseline patterns for prompts and structured responses

Start with this snippet (Runner + streaming events):

runner = Runner(app_name=app_name, agent=root_agent, session_service=session_service)
async for event in runner.run_async(user_id=user_id, session_id=current_session_id, new_message=user_content):
    if event.partial and event.content:
        ...

2. Preset Multi-Agent Orchestration

Recommended first:

Related docs: multi_agents.md

This group helps you:

  • Understand the role differences among Chain / Parallel / Cycle / Transfer
  • Pick serial, parallel, loop, or handoff orchestration by task shape
  • Learn how to resume and compose flows from existing outputs

Start with this snippet (ChainAgent):

pipeline = ChainAgent(
    name="document_processor",
    sub_agents=[extractor_agent, translator_agent],
)

3. Team Collaboration

Recommended first:

Related docs: team.md / human_in_the_loop.md / cancel.md

This group helps you:

  • Understand the Leader / Member collaboration model in Team
  • Combine Skills, sub-teams, and external agents in one workflow
  • Cover practical concerns like filtering, human approval, and cancellation

Start with this snippet (TeamAgent):

content_team = TeamAgent(
    name="content_team",
    model=model,
    members=[researcher, writer],
    instruction=LEADER_INSTRUCTION,
    share_member_interactions=True,
)

4. Graph Orchestration

Recommended first:

Related docs: graph.md / dsl.md

This group helps you:

  • Build explicit, controllable workflows (branching, merging, interruption, resuming)
  • Mix Agent / Tool / MCP / CodeExecutor / Knowledge in a single graph
  • Use DSL for workflows that stay readable and maintainable

Start with this snippet (conditional routing):

graph.add_conditional_edges(
    "decide",
    create_route_choice(set(path_map.keys())),
    path_map,
)
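For intuition, conditional routing boils down to a function that inspects the current state and returns the key of the next node. The sketch below is framework-free: the node names, the state shape, and the `route_choice` helper are illustrative stand-ins, not the trpc-agent-py API.

```python
# Illustrative sketch of conditional routing: a route function maps the
# current state to the key of the next node. All names here are made up
# for the example; see graph.md / dsl.md for the real API.
path_map = {"translate": "translator_node", "summarize": "summarizer_node"}

def route_choice(state: dict) -> str:
    # The "decide" step is assumed to have written its choice into state.
    choice = state.get("decision", "summarize")
    if choice not in path_map:
        raise ValueError(f"unknown route: {choice}")
    return path_map[choice]

next_node = route_choice({"decision": "translate"})  # -> "translator_node"
```

The graph engine calls a function like this at the branch point, then continues executing whichever node the returned key maps to.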

5. Agent Ecosystem Extensions

Recommended first:

Related docs: langgraph_agent.md / claude_agent.md / human_in_the_loop.md / cancel.md

This group helps you:

  • Reuse existing LangGraph assets in the current runtime with LangGraphAgent
  • Use ClaudeAgent for code generation, engineering automation, and streaming tools
  • Cover production-ready patterns like human-in-the-loop and cancellation

Start with this snippet (ClaudeAgent):

root_agent = ClaudeAgent(
    name="claude_weather_agent",
    model=_create_model(),
    instruction=INSTRUCTION,
    tools=[FunctionTool(get_weather)],
    enable_session=True,
)

6. Tools and MCP

Recommended first:

Related docs: tool.md

This group helps you:

  • Cover the full tool access path from function tools to MCP to composed toolsets
  • Learn advanced modes such as streaming tools and Agent-as-Tool
  • Reuse existing tool implementations in multi-agent scenarios

Start with this snippet (MCPToolset):

class StdioMCPToolset(MCPToolset):
    def __init__(self):
        super().__init__()
        self._connection_params = StdioConnectionParams(
            server_params=McpStdioServerParameters(command="python3", args=["mcp_server.py"]),
            timeout=5,
        )

7. Skills

Recommended first:

Related docs: skill.md

This group helps you:

  • Package reusable capabilities into Skills
  • Support scenario-based dynamic tool composition
  • Build reusable business skill modules

Start with this snippet (SkillToolSet):

workspace_runtime = create_local_workspace_runtime()
repository = create_default_skill_repository(skill_paths, workspace_runtime=workspace_runtime)
skill_tool_set = SkillToolSet(repository=repository, run_tool_kwargs=tool_kwargs)

8. CodeExecutor

Recommended first:

Related docs: code_executor.md

This group helps you:

  • Choose local or containerized executors by runtime constraints
  • Let agents execute code and ground tasks within controlled boundaries
  • Combine with Skills/Tools for planning-and-execution loops
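As a rough mental model for what a local executor does (not the SDK's actual CodeExecutor interface, which is richer and sandbox-aware), it runs a snippet in a fresh process and captures the output:

```python
import subprocess
import sys

def run_python_snippet(code: str, timeout: float = 10.0) -> dict:
    """Illustrative local executor: run code in a fresh Python subprocess.
    This only sketches the idea; trpc-agent-py's CodeExecutor API differs."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}

result = run_python_snippet("print(2 + 3)")
```

A containerized executor follows the same shape but launches the process inside an isolated container, which is why the choice between the two is mainly a runtime-constraint decision.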

9. Session, Memory, and Knowledge

Recommended first:

Related docs:

This group helps you:

  • Session: manage per-session messages, summaries, and state
  • Memory: manage cross-session long-term memory (including Mem0)
  • Knowledge: cover document loading, retrieval, RAG, and prompt templates
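To make the Session vs Memory split concrete, here is a toy in-memory version of each store. Both classes are illustrative only; the real SessionService / MemoryService interfaces carry events, summaries, and persistence backends.

```python
from collections import defaultdict

class ToySessionStore:
    """Per-session message history: scoped to one conversation."""
    def __init__(self):
        self._messages = defaultdict(list)

    def append(self, session_id: str, message: str) -> None:
        self._messages[session_id].append(message)

    def history(self, session_id: str) -> list:
        return list(self._messages[session_id])

class ToyMemoryStore:
    """Cross-session facts keyed by user: survives individual sessions."""
    def __init__(self):
        self._facts = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        self._facts[user_id].append(fact)

    def recall(self, user_id: str) -> list:
        return list(self._facts[user_id])

sessions = ToySessionStore()
memory = ToyMemoryStore()
sessions.append("s1", "What's the weather?")
memory.remember("u1", "prefers Celsius")
```

The key difference is the key: Session data is addressed by session_id and ends with the conversation, while Memory is addressed by user_id and feeds personalization across conversations.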

10. Serving and Protocols

Recommended first:

Related docs: a2a.md / agui.md / cancel.md

This group helps you:

  • Expose services through HTTP / A2A / AG-UI
  • Integrate streaming responses and cancellation into real applications
  • Use minimal templates for production service rollout

11. Filters and Execution Control

Recommended first:

Related docs: filter.md / cancel.md

This group helps you:

  • Apply control policies at model, tool, and agent layers
  • Cover branch filtering, timeline filtering, and cancellation
  • Build strong governance and risk-control constraints
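Conceptually, a filter wraps a call with pre/post checks, and filters compose by nesting. The chain below is a framework-free sketch of the idea; the filter names, the Handler signature, and the fake model are all made up for illustration and do not reflect filter.md's actual interfaces.

```python
from typing import Callable

Handler = Callable[[str], str]

def blocklist_filter(next_handler: Handler, banned=("secret",)) -> Handler:
    # Pre-check: short-circuit before the wrapped handler ever runs.
    def handler(prompt: str) -> str:
        for word in banned:
            if word in prompt:
                return "[blocked by policy filter]"
        return next_handler(prompt)
    return handler

def audit_filter(next_handler: Handler, log: list) -> Handler:
    # Side effect: record every prompt that passes through this layer.
    def handler(prompt: str) -> str:
        log.append(prompt)
        return next_handler(prompt)
    return handler

def fake_model(prompt: str) -> str:
    return f"echo: {prompt}"

log: list = []
pipeline = audit_filter(blocklist_filter(fake_model), log)
safe = pipeline("hello")        # passes both filters
blocked = pipeline("a secret")  # stopped by the blocklist
```

The same wrapping pattern applies at the model, tool, and agent layers: each layer sees the request before (and optionally after) the layer it wraps.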

12. Advanced LlmAgent Capabilities

Recommended first:

Related docs: llm_agent.md / model.md / custom_agent.md

This group helps you:

  • Focus on LlmAgent extension points for context, prompting, and model routing
  • Adapt a general-purpose agent to domain-specific business policies
  • Build reusable behavior templates for repeated scenarios
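One recurring extension point is building the instruction dynamically from context instead of hard-coding it. The sketch below shows the pattern only; whether LlmAgent accepts a callable or template here is an assumption, so check llm_agent.md for the actual extension points.

```python
def build_instruction(state: dict) -> str:
    """Illustrative dynamic-instruction builder: adapt the system prompt
    to per-session state. The state keys used here are hypothetical."""
    base = "You are a helpful assistant."
    if state.get("domain") == "finance":
        base += " Answer with financial-compliance caveats."
    if lang := state.get("language"):
        base += f" Reply in {lang}."
    return base

instruction = build_instruction({"domain": "finance", "language": "English"})
```

Centralizing prompt assembly in one function like this is what makes a general-purpose agent adaptable to domain-specific policies without forking the agent itself.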

13. LlmAgent Tool Calling and Interaction

Recommended first:

Related docs: llm_agent.md / tool.md / human_in_the_loop.md

This group helps you:

  • Cover both simple and complex streaming tool interaction patterns
  • Orchestrate parallel tool calls with human confirmation nodes
  • Combine with filters and cancellation for more reliable execution chains
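The core pattern behind parallel tool calls with a confirmation gate can be sketched without the SDK: approve each pending call, then fan out the approved ones with asyncio.gather. Every name below (the tools, the confirm stub, the call tuples) is illustrative, not the framework's API.

```python
import asyncio

async def get_weather(city: str) -> str:
    return f"{city}: sunny"

async def get_time(city: str) -> str:
    return f"{city}: 12:00"

def confirm(call_name: str) -> bool:
    # Stand-in for a human-in-the-loop gate; a real app would prompt the user.
    return call_name != "dangerous_tool"

async def run_parallel_calls(calls):
    # Each call is (name, coroutine_fn, args); unapproved calls are skipped.
    approved = [(name, fn, args) for name, fn, args in calls if confirm(name)]
    results = await asyncio.gather(*(fn(*args) for _, fn, args in approved))
    return dict(zip((name for name, _, _ in approved), results))

results = asyncio.run(run_parallel_calls([
    ("get_weather", get_weather, ("Beijing",)),
    ("get_time", get_time, ("Beijing",)),
]))
```

Filters and cancellation slot naturally into the same loop: a filter can veto a call before the gather, and cancellation maps to cancelling the gathered tasks.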

For more examples, see each subdirectory README.md under examples.

Architecture Overview

tRPC-Agent-Python Architecture

The framework is organized in an event-driven architecture where each layer can evolve independently:

  • Agent layer: LlmAgent / ChainAgent / ParallelAgent / CycleAgent / TransferAgent
  • Agent ecosystem extension layer: LangGraphAgent / ClaudeAgent / TeamAgent
  • Graph capability layer: GraphAgent / trpc_agent_sdk.dsl.graph (DSL-based orchestration)
  • Runner layer: Unified execution entry, coordinating Session / Memory / Artifact services
  • Tool layer: FunctionTool / file tools / MCPToolset / Skill tools
  • Model layer: OpenAIModel / AnthropicModel / LiteLLMModel
  • Memory layer: SessionService / MemoryService / SessionSummarizer / Mem0MemoryService
  • Knowledge layer: Production-grade LangChain-based knowledge and RAG capability
  • Execution and skill layer: CodeExecutor (local / container) / Skills
  • Service layer: FastAPI / A2A / AG-UI
  • Observability layer: OpenTelemetry tracing/metrics, integrable with platforms like Langfuse
  • Ecosystem adapter layer: claude-agent-sdk / mcp / LangChain / LiteLLM / Mem0 plugged into the main chain through model/tool/memory adapters

Key packages:

Package Responsibility
trpc_agent_sdk.agents Agent abstractions, multi-agent orchestration, ecosystem extensions (LangGraphAgent / ClaudeAgent / TeamAgent)
trpc_agent_sdk.runners Unified execution and event output
trpc_agent_sdk.models Model adapter layer
trpc_agent_sdk.tools Tooling system and MCP support
trpc_agent_sdk.sessions Session management and summarization
trpc_agent_sdk.memory Long-term memory services
trpc_agent_sdk.dsl.graph DSL graph orchestration engine
trpc_agent_sdk.teams Team collaboration mode
trpc_agent_sdk.code_executors Code execution and workspace runtime
trpc_agent_sdk.skills Skill repository and Skill tools
trpc_agent_sdk.server FastAPI / A2A / AG-UI serving capabilities

Contributing

We love contributions! Join our growing developer community and help build the future of AI Agents.

Ways to Contribute

  • Report bugs or suggest new features through Issues
  • Improve documentation to help others onboard faster
  • Submit PRs for bug fixes, new features, or examples
  • Share your use cases to inspire other builders

Quick Contribution Setup

# Fork and clone the repository
git clone https://github.com/YOUR_USERNAME/trpc-agent-python.git
cd trpc-agent-python

# Install development dependencies and run tests
pip install -e ".[dev]"
pytest

# Make your changes and open a PR!

Please read CONTRIBUTING.md for detailed guidelines and coding standards.
Please follow CODE-OF-CONDUCT.md to keep our community friendly, respectful, and inclusive.

Acknowledgements

Enterprise Validation

We sincerely thank Tencent Licaitong, Tencent Ads, and other business teams for continuous validation and feedback in real production scenarios, which helps us keep improving the framework.

Open-source Inspiration

We are also inspired by outstanding open-source frameworks including ADK, Agno, CrewAI, and AutoGen. We keep moving forward on the shoulders of giants.


If this project helps you, a GitHub Star is always appreciated — it's the most direct encouragement and helps more developers discover this project.
