Agentflow is a production-grade, full-stack ecosystem for building, deploying, and scaling intelligent agentic applications. It bridges the gap between experimental agent frameworks and robust enterprise software, offering a unified workflow from Python backend orchestration to React frontend integration.
Unlike frameworks that focus solely on the "agent" logic, Agentflow provides the entire infrastructure needed to build real-world applications: a high-performance orchestration engine, a professional CLI, a type-safe React client, and production-grade features such as dual-layer memory and dependency injection.
Agentflow is not just a library; it's a complete platform composed of three integrated pillars:
Powered by pyagenity-api and agentflow-cli
- Graph-Based Orchestration: Build complex, cyclic agent workflows (StateGraph) inspired by LangGraph but simplified for ease of use.
- FastAPI Integration: Auto-generated REST API endpoints for your agents with zero boilerplate.
- Professional CLI: Tools to `init`, `build` (Docker), and `deploy` applications instantly.
- InjectQ: A powerful Dependency Injection framework specifically designed for agents, allowing cleaner, more testable tool code.
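The graph model behind StateGraph can be sketched in a few lines of plain Python: nodes are functions that transform a shared state, and routers on the edges decide the next node, which allows cycles. The class and method names below are illustrative only, not Agentflow's actual API.

```python
# Minimal sketch of graph-based orchestration with cyclic edges.
# Illustrative names; not the real StateGraph interface.

class StateGraph:
    def __init__(self):
        self.nodes = {}   # name -> function(state) -> state
        self.edges = {}   # name -> router(state) -> next node name or None

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, router):
        self.edges[src] = router

    def run(self, start, state, max_steps=20):
        current = start
        for _ in range(max_steps):  # cap steps so cycles always terminate
            state = self.nodes[current](state)
            router = self.edges.get(current)
            nxt = router(state) if router else None
            if nxt is None:
                return state
            current = nxt
        return state

# A tiny cyclic workflow: keep "drafting" until a condition is met.
graph = StateGraph()
graph.add_node("draft", lambda s: {**s, "attempts": s["attempts"] + 1})
graph.add_node("done", lambda s: {**s, "status": "approved"})
graph.add_edge("draft", lambda s: "done" if s["attempts"] >= 2 else "draft")

result = graph.run("draft", {"attempts": 0})
print(result)  # {'attempts': 2, 'status': 'approved'}
```

The loop-with-router shape is what distinguishes graph orchestration from a linear chain: any node can route back to an earlier one.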
Powered by @10xscale/agentflow-client
- React-First: Custom hooks and patterns for seamless UI integration.
- Streaming & Real-Time: Native support for streaming responses (token-by-token or message-by-message) to build ChatGPT-like interfaces.
- Client-Side Tools: Safely execute browser capabilities (geolocation, clipboard) as agent tools.
- State Management: Dynamic state handling that keeps frontend UI in sync with backend agent logic.
Powered by the Agentflow Divio System
- Structured Learning Path: Clearly separated Tutorials (learning), How-To Guides (solving), Reference (technical), and Concepts (understanding).
- "Hello World" in 5 Minutes: Dedicated paths to get beginners running immediately.
- Vendor Neutral: First-class support for any LLM (OpenAI, Anthropic, Gemini, Mistral, Local LLMs). We don't wrap you in a vendor-specific SDK.
- BYO-LLM: Bring your own model client—Agentflow handles the orchestration, state, and tools.
- Working Memory (State): Ephemeral context for current execution steps.
- Session Memory (Checkpointers): Dual-storage architecture using Redis (hot cache for speed) + PostgreSQL (durable storage for reliability).
- Knowledge Memory (Stores): Integration with Vector DBs (Qdrant) and Memory Stores (Mem0) for long-term semantic recall.
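The dual-storage checkpointer idea above can be illustrated with a write-through pattern: every save lands in a hot cache and is mirrored to durable storage, and reads fall back to the durable layer on a cache miss. In-memory dicts stand in for Redis and PostgreSQL here; this is a sketch of the pattern, not Agentflow's checkpointer API.

```python
# Write-through dual-layer persistence sketch.
# Plain dicts stand in for Redis (hot) and PostgreSQL (durable).

class DualLayerCheckpointer:
    def __init__(self):
        self.hot = {}      # fast path for active sessions ("Redis")
        self.durable = {}  # source of truth for reliability ("PostgreSQL")

    def save(self, thread_id, state):
        self.hot[thread_id] = state
        self.durable[thread_id] = state  # mirrored on every write

    def load(self, thread_id):
        if thread_id in self.hot:
            return self.hot[thread_id]
        # Cache miss (eviction or restart): rehydrate from durable storage.
        state = self.durable.get(thread_id)
        if state is not None:
            self.hot[thread_id] = state
        return state

cp = DualLayerCheckpointer()
cp.save("thread-1", {"messages": ["hi"]})
cp.hot.clear()  # simulate a cache eviction or process restart
print(cp.load("thread-1"))  # {'messages': ['hi']}
```

The write-through design is what lets instances scale horizontally: any instance can rebuild its hot cache from the durable layer.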
- Auto-Concurrency: When an agent calls multiple independent tools in one turn (e.g., search + calculation), they execute in parallel automatically, often a 3x+ latency improvement over the sequential execution found in other frameworks.
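Why parallel tool calls pay off: awaiting independent calls concurrently takes roughly the latency of the slowest call rather than the sum. A plain `asyncio` illustration of the idea (the tool bodies are stand-ins):

```python
import asyncio
import time

# Two independent "tools" with simulated I/O latency.
async def search(query):
    await asyncio.sleep(0.05)
    return f"results for {query}"

async def calculate(a, b):
    await asyncio.sleep(0.05)
    return a + b

async def main():
    start = time.perf_counter()
    # gather() runs both coroutines concurrently: total wait is ~0.05s,
    # not the ~0.10s a sequential await-then-await loop would take.
    results = await asyncio.gather(search("agent frameworks"), calculate(2, 2))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)  # ['results for agent frameworks', 4]
```

With N independent tools of similar latency, the speedup approaches N, which is where the "3x+" figure comes from.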
- Type-Safe DI: Automatically inject dependencies like `user_id`, `db_connection`, or `config` directly into your tools and agents.
- No Global State: Keeps your code clean, modular, and highly testable.
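The core mechanic behind signature-based injection can be shown with `inspect`: the framework reads a tool's parameter names and supplies registered dependencies by name, so the tool never touches global state. This is an illustration of the pattern, not InjectQ's actual API.

```python
import inspect

# A registry of dependencies the framework would manage per request.
REGISTRY = {"user_id": "u-42", "db_connection": "postgres://example"}

def call_with_injection(tool, **explicit):
    """Fill any parameter whose name matches a registered dependency."""
    params = inspect.signature(tool).parameters
    kwargs = dict(explicit)
    for name in params:
        if name not in kwargs and name in REGISTRY:
            kwargs[name] = REGISTRY[name]
    return tool(**kwargs)

def lookup_orders(query: str, user_id: str, db_connection: str) -> str:
    # user_id and db_connection arrive via injection;
    # the caller only supplies the query.
    return f"{user_id}@{db_connection}: {query}"

print(call_with_injection(lookup_orders, query="open orders"))
# u-42@postgres://example: open orders
```

Because dependencies arrive as ordinary parameters, a test can call `lookup_orders` directly with fakes and never needs to patch globals.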
- MCP Support: Native implementation of the Model Context Protocol, allowing agents to connect to universal tools and data sources seamlessly.
- Remote Tool Execution: Define tools in Python (backend) or TypeScript (frontend) and let the agent call the right one transparently.
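Transparent routing between backend and frontend tools can be sketched as a dispatcher that checks where each tool lives: backend tools run locally, while frontend tools are forwarded to the client over the stream and the result is posted back. The field names below are assumptions for illustration, not Agentflow's wire format.

```python
# Sketch of a tool dispatcher that routes by execution location.
# Illustrative structure only.

TOOLS = {
    "query_db": {
        "location": "backend",
        "fn": lambda args: f"rows for {args['sql']}",
    },
    # No fn here: the React client executes this one in the browser.
    "get_geolocation": {"location": "frontend"},
}

def dispatch(name, args):
    tool = TOOLS[name]
    if tool["location"] == "backend":
        return {"status": "done", "result": tool["fn"](args)}
    # Frontend tools are not run on the server; the call is handed to
    # the client, which executes it and returns the result.
    return {"status": "forwarded_to_client", "tool": name, "args": args}

print(dispatch("query_db", {"sql": "select 1"}))
print(dispatch("get_geolocation", {}))
```

From the agent's perspective both tools look identical; only the dispatcher knows which side executes them.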
- Scaling-First Architecture: Designed to handle heavy concurrent workloads, with sub-millisecond state access on the hot path.
- Redis Hot Cache: Active conversation states are stored in a high-speed Redis layer, providing immediate access and low latency for real-time interactions.
- Durable Database Layers: All session data is mirrored to durable storage (PostgreSQL/SQLAlchemy), ensuring consistency, reliability, and the ability to scale horizontally across multiple instances without losing state.
- Enterprise Authentication: First-class support for JWT, OAuth2, and custom authentication schemes out of the box.
- Role-Based Access Control (RBAC): Define fine-grained permissions for agents, tools, and endpoints.
- Per-Tool Authorization: Control which users, roles, or threads can execute specific tools, reducing risk and enabling multi-tenant setups.
- Intelligent Tool Selection: Automatically filter available tools based on user role, cost constraints, and execution context.
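One way to read "intelligent tool selection": before each model call, the tool list is filtered by the caller's role and a per-request cost budget, so the model never even sees tools it may not use. The field names here are assumptions for illustration.

```python
# Role- and budget-aware tool filtering sketch.
# "roles" and "cost" are illustrative metadata fields.

TOOLS = [
    {"name": "search_docs", "roles": {"viewer", "admin"}, "cost": 1},
    {"name": "send_email", "roles": {"admin"}, "cost": 3},
    {"name": "delete_records", "roles": {"admin"}, "cost": 5},
]

def available_tools(role, budget):
    """Return the names of tools this role may call within the budget."""
    return [
        t["name"]
        for t in TOOLS
        if role in t["roles"] and t["cost"] <= budget
    ]

print(available_tools("viewer", budget=10))  # ['search_docs']
print(available_tools("admin", budget=4))    # ['search_docs', 'send_email']
```

Filtering before the model call both enforces RBAC and cuts cost: a tool the model cannot select is a tool it cannot waste tokens or money invoking.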
- Cost Tracking: Monitor LLM token usage and tool execution costs in real-time.
- Reduced API Calls: Prevent expensive tool calls by controlling tool availability dynamically.
- Strict Tool Validation: All tool calls are validated against the defined schema before execution.
- No Code Execution Exploits: Tools cannot be manipulated to execute arbitrary code; they're type-safe and sandboxed.
- Audit Logging: Every tool invocation is logged and traceable for security compliance.
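Schema validation before execution works roughly like this: each tool declares the names and types of its arguments, and a call is rejected unless it matches exactly. The snippet below is a minimal plain-Python stand-in for the pydantic-based validation the framework performs.

```python
# Validate raw tool-call arguments against a declared schema
# before any tool code runs. Simplified stand-in for pydantic models.

WEATHER_SCHEMA = {"city": str, "units": str}

def validate(schema, raw_args):
    if set(raw_args) - set(schema):
        return False  # unknown argument: reject rather than guess
    return all(
        name in raw_args and isinstance(raw_args[name], typ)
        for name, typ in schema.items()
    )

def run_weather_tool(raw_args):
    if not validate(WEATHER_SCHEMA, raw_args):
        return "rejected: arguments do not match the tool schema"
    return f"weather for {raw_args['city']} ({raw_args['units']})"

print(run_weather_tool({"city": "Oslo", "units": "celsius"}))
print(run_weather_tool({"city": "Oslo", "units": 7}))  # wrong type -> rejected
```

Rejecting malformed calls up front is what closes off the injection path: the model can only ever invoke tools with arguments of the declared shape.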
- Multi-Backend Events: Publish agent events and metrics to any backend you choose:
- Kafka: For large-scale event streaming and real-time analytics.
- Redis Pub/Sub: For lightweight, in-process event distribution.
- RabbitMQ: For reliable message queuing and distributed systems.
- Custom Publishers: Easily extend with your own event backends.
- Event Granularity: Stream tokens, messages, node execution, or entire graph traces based on your observability needs.
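The publisher abstraction implied above boils down to a small interface the engine emits events through, with each backend (Kafka, Redis Pub/Sub, RabbitMQ) as a swappable implementation. The class and method names below are illustrative, not the framework's actual publisher API.

```python
# Minimal sketch of a pluggable event-publisher interface.
# Real backends would wrap a Kafka/Redis/RabbitMQ client in publish().

class BasePublisher:
    def publish(self, event_type, payload):
        raise NotImplementedError

class InMemoryPublisher(BasePublisher):
    """Trivial backend, handy for tests: just records every event."""
    def __init__(self):
        self.events = []

    def publish(self, event_type, payload):
        self.events.append((event_type, payload))

pub = InMemoryPublisher()
# Different granularities flow through the same interface:
pub.publish("node.start", {"node": "draft"})   # node-level event
pub.publish("token", {"text": "Hel"})          # token-level event
print(len(pub.events))  # 2
```

Because granularity is just an event type, dialing observability up or down is a matter of which event types a publisher subscribes to, not a code change in the engine.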
- Zero-Configuration Endpoints: Auto-generated FastAPI endpoints for your agents—invoke, stream, manage state, memory, and threads.
- Production-Grade: Built with async/await, pydantic validation, and OpenAPI documentation out of the box.
- Extensible: Add custom endpoints and middleware without touching the core framework.
- No Vendor Lock-In: Completely open-source under MIT license. Audit every line of code.
- No Data Collection: We're not collecting your agent interactions, conversations, or proprietary data. Your data stays on your infrastructure.
- Deploy Anywhere: Use the built-in Docker support to deploy on your own servers, Kubernetes clusters, or cloud provider of choice.
- No Pricing Surprises: Unlike SaaS competitors, there's no per-API-call billing or hidden enterprise features. Pay for your infrastructure, not our licensing.
- Community-Driven: Contribute, fork, and customize freely. No proprietary restrictions.
| Feature | Agentflow | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Architecture | Graph-Based | Graph-Based | Role-Based | Conversational |
| Full Stack | ✅ Backend + React SDK | ❌ Backend Only | ❌ Backend Only | ❌ Backend Only |
| Tool Execution | ✅ Parallel by Default | ⛔ Sequential | ⛔ Sequential | |
| Persistence | ✅ Redis + Postgres | | | |
| Dependency Injection | ✅ Native (InjectQ) | ❌ Manual Passing | ❌ None | ❌ None |
| Deployment | ✅ CLI + Docker | ❌ Manual | ❌ Manual | ❌ Manual |
- Choose Agentflow if: You are building a production-ready application that needs a full stack (backend + frontend), high performance, and robust state management.
- Choose LangGraph if: You are already deeply invested in the LangChain ecosystem.
- Choose CrewAI if: You need simple, linear role-based automation without complex cyclic logic.