The E-Commerce-Support-Agent-RAG project demonstrates how a customer service agent for online retailers works, using Adaptive RAG to provide dynamic, personalized support. Adaptive RAG chooses the best strategy for answering each question, ranging from a direct LLM response to one or more retrieval steps, based on the query's complexity as determined by a classifier. Beyond the direct LLM response, the two retrieval strategies are:
Single-Step Retrieval: For moderately complex questions, it retrieves information from a single external source, ensuring the answer is both swift and well-informed.
Multi-Step Retrieval: For highly complex questions, it consults multiple sources, piecing together a detailed and comprehensive answer.
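The routing described above can be sketched as follows. This is an illustrative stand-in: the project classifies queries with an LLM, so the keyword heuristic and all function names here are placeholders, not the project's actual code.

```python
# Illustrative sketch of Adaptive RAG routing.  The real classifier is
# LLM-based; this keyword heuristic only stands in for it.

def classify_query(query: str) -> str:
    """Return 'direct', 'single_step', or 'multi_step' (toy heuristic)."""
    q = query.lower()
    if " and " in q or "compare" in q:
        return "multi_step"       # complex: needs several sources
    if any(w in q for w in ("order", "refund", "shipping")):
        return "single_step"      # moderate: one retrieval pass
    return "direct"               # simple: answer from the LLM alone

def answer(query: str) -> str:
    strategy = classify_query(query)
    if strategy == "direct":
        return f"[LLM] {query}"
    if strategy == "single_step":
        context = "retrieved-context"          # one retrieval step
        return f"[LLM + 1 source] {query} | {context}"
    contexts = ["source-1", "source-2"]        # multiple retrieval steps
    return f"[LLM + {len(contexts)} sources] {query} | {' '.join(contexts)}"
```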
This system efficiently processes and responds to customer inquiries, ensuring a seamless QA experience. Ideal for e-commerce platforms seeking to enhance customer interaction and satisfaction, it leverages machine learning to handle queries with precision and speed.
## 👾 Features

| Feature | Summary |
| --- | --- |
| ⚙️ Architecture | <ul><li>Modular design with separate components for user interaction, data management, and response generation.</li><li>Uses ChromaDB for efficient management and indexing of embedding data.</li><li>Integrates AI-driven response generation via LiteLLM.</li></ul> |
- `app.py` serves as the entry point for the customer support chat application, initializing the support agent and handling user queries.
  - Upon starting a chat, it loads and indexes the necessary data, displays initialization status, and sends a welcome message.
  - It processes incoming messages and generates responses through the support agent, providing a dynamic, interactive user experience.
- `requirements.txt` specifies the Python packages the project needs, ensuring consistent environments across setups.
  - It includes libraries for AI operations, database interactions, environment-variable management, numerical computation, and testing, keeping the project reproducible for development and deployment.
- `ChromaVectorStore` wraps ChromaDB, storing pre-generated embeddings in a specified collection via an ephemeral client.
  - It supports adding embeddings with optional texts, metadata, and custom IDs, generating UUIDs for documents when IDs are not provided.
  - This lets the project manage and index large volumes of embedding data efficiently.
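The ID-generation behaviour described above can be sketched with a dependency-free, in-memory stand-in. The real component delegates to a ChromaDB ephemeral client; only the interface shape and the UUID fallback are mirrored here, and the class and method names are illustrative.

```python
import uuid

# In-memory stand-in for the ChromaVectorStore wrapper described above.
# The real wrapper stores into a ChromaDB collection; this sketch keeps
# only the interface idea: mint UUIDs when no document IDs are supplied.

class InMemoryVectorStore:
    def __init__(self, collection_name: str):
        self.collection_name = collection_name
        self.records: dict[str, dict] = {}

    def add_embeddings(self, embeddings, texts=None, metadatas=None, ids=None):
        """Store pre-generated embeddings, minting UUIDs when ids is None."""
        n = len(embeddings)
        ids = ids or [str(uuid.uuid4()) for _ in range(n)]
        texts = texts or [None] * n
        metadatas = metadatas or [{}] * n
        for i, doc_id in enumerate(ids):
            self.records[doc_id] = {
                "embedding": embeddings[i],
                "text": texts[i],
                "metadata": metadatas[i],
            }
        return ids
```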
- `EmbeddingService` in `rag/embedding_service.py` generates text embeddings through the LiteLLM library, configured with specific API keys and model settings.
  - It supports batch processing of multiple texts as well as individual queries, providing a scalable embedding solution within the project's architecture.
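The batch-processing idea can be sketched like this. The embedding backend is injected so the example runs without network access; in the project the backend is a LiteLLM call, and the names and batch size here are illustrative.

```python
# Sketch of the batch-embedding pattern behind EmbeddingService.  The
# backend function is injected (in the project it would be a LiteLLM
# embedding call) so the batching logic stands alone.

class EmbeddingService:
    def __init__(self, embed_fn, batch_size: int = 16):
        self.embed_fn = embed_fn          # callable: list[str] -> list[vector]
        self.batch_size = batch_size

    def embed_texts(self, texts: list[str]) -> list[list[float]]:
        """Embed many texts in fixed-size batches."""
        vectors: list[list[float]] = []
        for start in range(0, len(texts), self.batch_size):
            batch = texts[start:start + self.batch_size]
            vectors.extend(self.embed_fn(batch))
        return vectors

    def embed_query(self, text: str) -> list[float]:
        """Embed a single query string."""
        return self.embed_fn([text])[0]
```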
- `SupportAgent` in `agent/support_agent.py` orchestrates customer interactions for the e-commerce platform, using machine learning models to process and respond to user queries.
  - It initializes services for data loading, text processing, and embedding generation.
  - It handles query classification, context generation, and personalized customer responses based on the nature of the inquiry.
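The classify-then-respond flow can be sketched as follows. Classifier, retriever, and LLM are injected stand-ins; the real `SupportAgent` wires up its own data-loading, text-processing, and embedding services, so everything here is illustrative.

```python
# Illustrative orchestration for a support agent: classify the query,
# gather context when needed, then generate a reply.  All collaborators
# are injected stand-ins for the project's real services.

class SupportAgent:
    def __init__(self, classifier, retriever, llm):
        self.classifier = classifier   # query -> category string
        self.retriever = retriever     # query -> context string
        self.llm = llm                 # prompt -> reply string

    def handle(self, query: str) -> str:
        kind = self.classifier(query)              # e.g. "general" vs "order"
        context = self.retriever(query) if kind == "order" else ""
        prompt = f"Context: {context}\nQuestion: {query}" if context else query
        return self.llm(prompt)
```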
- `agent/utils.py` is a utility module that parses responses from language models and JSON files into structured Pydantic models.
  - It extracts and transforms data into a format suitable for further processing and integration within the application.
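The parsing idea can be sketched like this: pull a JSON object out of raw LLM output and load it into a typed model. The project uses Pydantic; a dataclass stands in here to keep the example dependency-free, and the model fields are illustrative.

```python
import json
from dataclasses import dataclass

# Sketch of the agent/utils.py parsing idea: extract the JSON payload
# from raw LLM text and build a typed model from it.  A dataclass stands
# in for the project's Pydantic models; field names are illustrative.

@dataclass
class OrderQuery:
    order_id: str
    intent: str

def parse_llm_response(raw: str) -> OrderQuery:
    """Extract the first {...} block from raw text and build a model."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in LLM response")
    data = json.loads(raw[start:end + 1])
    return OrderQuery(order_id=data["order_id"], intent=data["intent"])
```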
- `config/settings.py` sets up access to API keys and configures models for the application.
  - It initializes settings for the embedding and language models, defines data-retrieval parameters, and sets agent behavior controls, so the application interacts reliably with external AI services and processes data efficiently.
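A settings module in this spirit might look like the following. The variable names, models, and defaults are illustrative assumptions, not the project's actual configuration.

```python
import os

# Sketch of a settings module in the spirit of config/settings.py: read
# API keys from the environment and centralise model and retrieval
# parameters.  Names and defaults here are illustrative.

LITELLM_API_KEY = os.getenv("LITELLM_API_KEY", "")
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL", "text-embedding-3-small")
CHAT_MODEL = os.getenv("CHAT_MODEL", "gpt-4o-mini")
TOP_K_RESULTS = int(os.getenv("TOP_K_RESULTS", "5"))      # retrieval depth
MAX_AGENT_TURNS = int(os.getenv("MAX_AGENT_TURNS", "3"))  # agent behaviour
```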
- `prompt_templates.py` in the `llm` directory defines structured templates for handling customer support queries, focused on extracting and responding to order-related information.
  - It categorizes queries into general knowledge requests and requests for specific customer order data, so responses are tailored to the details the customer explicitly asked for.
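The two-template split described above can be sketched as follows; the template wording and the `build_prompt` helper are illustrative, not the project's actual templates.

```python
# Sketch of the prompt_templates.py idea: one template for general
# knowledge questions, another for questions about a customer's order
# data.  Template text is illustrative.

GENERAL_TEMPLATE = (
    "You are a helpful e-commerce support agent.\n"
    "Answer the customer's question using the context below.\n"
    "Context: {context}\nQuestion: {question}"
)

ORDER_TEMPLATE = (
    "You are a helpful e-commerce support agent.\n"
    "Answer using only the customer's order data below.\n"
    "Order data: {order_details}\nQuestion: {question}"
)

def build_prompt(question: str, *, context: str = "", order_details: str = "") -> str:
    """Pick the order template when order details are present."""
    if order_details:
        return ORDER_TEMPLATE.format(order_details=order_details, question=question)
    return GENERAL_TEMPLATE.format(context=context, question=question)
```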
- `LiteLLMService` in `llm/litellm_service.py` is the interface for generating responses with a language model.
  - It initializes with API and model settings, and produces text outputs from user prompts and optional system messages, handling errors gracefully.
- Data models such as `Product`, `OrderDetails`, and `CustomerDetails` provide structured storage and retrieval of product and order information, supporting order processing and customer management for e-commerce transactions.
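The three models named above might be shaped like this. Dataclasses stand in for what may be Pydantic models in the project, and the field names and `total` helper are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Dataclass sketch of the Product / OrderDetails / CustomerDetails models
# described above.  The project may define these as Pydantic models with
# different fields; everything here is illustrative.

@dataclass
class Product:
    product_id: str
    name: str
    price: float

@dataclass
class OrderDetails:
    order_id: str
    products: list[Product] = field(default_factory=list)
    status: str = "processing"

    def total(self) -> float:
        """Sum the prices of the products in this order."""
        return sum(p.price for p in self.products)

@dataclass
class CustomerDetails:
    customer_id: str
    name: str
    orders: list[OrderDetails] = field(default_factory=list)
```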
## 🚀 Getting Started
### ⚙️ Installation Guide
Install E-Commerce-Support-Agent-RAG using one of the following methods:
Build from source:
Clone the E-Commerce-Support-Agent-RAG repository:
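The clone command itself is missing from this copy of the README. A typical build-from-source sequence would look like the following; the repository URL uses a placeholder namespace, since the actual GitHub owner is not stated here.

```shell
# Replace <owner> with the repository's actual GitHub namespace.
git clone https://github.com/<owner>/E-Commerce-Support-Agent-RAG.git
cd E-Commerce-Support-Agent-RAG
pip install -r requirements.txt
python app.py
```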