A research agent built using the OpenAI Agent SDK with support for both OpenAI models and local models via Ollama.
This research agent helps users conduct comprehensive research on any topic by:
- Planning and executing web searches to gather relevant information
- Synthesizing the information into a comprehensive report
- Providing follow-up questions for further research
The agent supports both OpenAI models and local models (like Llama3 7B) via Ollama, allowing for flexibility in deployment.
- Multi-agent architecture with planning, search, and writer agents
- Support for both OpenAI models and local models via Ollama
- Iterative research with automatic follow-up on generated questions
- Web search capabilities with support for multiple search providers (Google, Serper, Tavily, DuckDuckGo)
- Web content fetching and processing for deeper research
- Comprehensive report generation in markdown format
- Follow-up question generation for continued research
- Vector database integration for RAG operations (with local models)
- External file storage for data persistence
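The planner/search/writer flow described above can be sketched as a simple pipeline. This is a minimal illustration only; the function and class names here are hypothetical, not the project's actual API:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the planner -> search -> writer pipeline.
@dataclass
class ResearchReport:
    topic: str
    report_markdown: str
    follow_up_questions: List[str] = field(default_factory=list)

def plan_searches(topic: str) -> List[str]:
    """Planning agent: break the topic into concrete search queries."""
    return [f"{topic} overview", f"{topic} recent developments"]

def run_searches(queries: List[str]) -> List[str]:
    """Search agent: gather findings for each query (stubbed out here)."""
    return [f"Results for: {q}" for q in queries]

def write_report(topic: str, findings: List[str]) -> ResearchReport:
    """Writer agent: synthesize findings into a markdown report."""
    body = "\n".join(f"- {f}" for f in findings)
    return ResearchReport(
        topic=topic,
        report_markdown=f"# {topic}\n\n{body}",
        follow_up_questions=[f"What are open problems in {topic}?"],
    )

report = write_report("quantum computing",
                      run_searches(plan_searches("quantum computing")))
```

In the real agent each stage is an LLM-backed agent rather than a stub, and the follow-up questions feed back into the planner for iterative research.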
- Python 3.8+
- OpenAI API key (for OpenAI models)
- Ollama (for local models)
- Internet connection (for web search)
- One of the following search API keys (optional):
- Google Custom Search API key + Custom Search Engine ID
- Serper API key
- Tavily API key
- DuckDuckGo (no API key required, used as fallback)
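One way the DuckDuckGo fallback can work is to check which API keys are present and use the first fully configured provider. A sketch under assumed environment-variable names (the project's actual selection logic and variable names may differ):

```python
# Hypothetical provider selection: env var names are assumptions.
PROVIDER_KEYS = {
    "google": ("GOOGLE_API_KEY", "GOOGLE_CSE_ID"),
    "serper": ("SERPER_API_KEY",),
    "tavily": ("TAVILY_API_KEY",),
}

def pick_search_provider(env: dict) -> str:
    """Return the first provider whose required keys are all set;
    otherwise fall back to DuckDuckGo, which needs no key."""
    for provider, keys in PROVIDER_KEYS.items():
        if all(env.get(k) for k in keys):
            return provider
    return "duckduckgo"
```

For example, `pick_search_provider({})` falls back to `"duckduckgo"`, while an environment with only `SERPER_API_KEY` set selects Serper.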
- Clone the repository
- Run the setup script: `setup.cmd`
- Configure your environment variables in the `.env` file
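A minimal `.env` might look like the fragment below. Only `RESEARCH_DATA_PATH` is documented elsewhere in this README; the other variable names are illustrative guesses, so check the setup script for the names the project actually expects:

```
OPENAI_API_KEY=your-openai-key
SERPER_API_KEY=your-serper-key
RESEARCH_DATA_PATH=~/Documents/ResearchAgentData
```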
To use local models with Ollama:
- Install Ollama from ollama.ai/download
- Run the Ollama setup script: `scripts/setup_ollama.cmd`. This downloads the Llama3 7B model and configures Ollama for use with the Research Agent.
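Once Ollama is running, the agent talks to its local HTTP server. As a rough sketch, here is how a request to Ollama's standard `/api/generate` endpoint could be constructed (the helper itself is illustrative, not the project's code; the request is built but not sent):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a generate request for a local Ollama server."""
    payload = json.dumps({
        "model": model,       # e.g. "llama3:7b"
        "prompt": prompt,
        "stream": False,      # ask for a single JSON response, not a stream
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ollama_request("llama3:7b", "Summarize recent work on renewable energy.")
```

Sending the request with `urllib.request.urlopen(req)` requires a running Ollama instance with the model already pulled.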
Run the research agent:
`run.cmd`
Enter your research topic when prompted, and the agent will:
- Plan the research approach
- Execute web searches
- Generate a comprehensive report
Run the web UI:
`run_ui.cmd`
This will start a web server at http://localhost:5000 where you can:
- Start new research tasks with different models and search providers
- View research progress in real-time
- Run follow-up research on generated questions
- Browse and read all generated reports
- Track active research tasks
`run.cmd [options] [topic]`
Options:
- `--model, -m`: Model provider to use (openai, ollama)
- `--model-name`: Specific model name to use (e.g., gpt-4, llama3:7b)
- `--search, -s`: Search provider to use (google, serper, tavily, duckduckgo)
- `--verbose, -v`: Enable verbose logging
- `--follow-up, -f`: Run follow-up research on the generated questions
Examples:
```
run.cmd "artificial intelligence"
run.cmd --search duckduckgo "quantum computing"
run.cmd --search serper "machine learning"
run.cmd --search tavily "blockchain technology"
run.cmd --model openai --model-name gpt-4 "climate change"
run.cmd --model ollama --model-name llama3:7b "renewable energy"
run.cmd --follow-up "deep learning"  # Run initial research and then follow up on generated questions
```
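The options above map naturally onto a standard argument parser. A minimal Python sketch of what the CLI behind `run.cmd` could look like (illustrative only; the project's actual entry point may differ):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror the CLI options documented above (illustrative sketch)."""
    parser = argparse.ArgumentParser(prog="run.cmd", description="Research Agent CLI")
    parser.add_argument("topic", nargs="?", help="Research topic")
    parser.add_argument("--model", "-m", choices=["openai", "ollama"],
                        default="openai", help="Model provider to use")
    parser.add_argument("--model-name", help="Specific model, e.g. gpt-4 or llama3:7b")
    parser.add_argument("--search", "-s",
                        choices=["google", "serper", "tavily", "duckduckgo"],
                        help="Search provider to use")
    parser.add_argument("--verbose", "-v", action="store_true",
                        help="Enable verbose logging")
    parser.add_argument("--follow-up", "-f", action="store_true",
                        help="Run follow-up research on generated questions")
    return parser

args = build_parser().parse_args(
    ["--model", "ollama", "--model-name", "llama3:7b", "renewable energy"]
)
```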
- `src/` - Source code
  - `agents/` - Agent implementations
  - `models/` - Model providers (OpenAI, Ollama)
  - `tools/` - Tool implementations
  - `utils/` - Utility functions
  - `config/` - Configuration handling
- `tests/` - Test suite
- `scripts/` - CMD scripts for setup and execution
- `docs/` - Documentation
Research data is stored in an external storage path outside the project folder. By default, this is `~/Documents/ResearchAgentData`, but it can be customized via the `RESEARCH_DATA_PATH` environment variable. This includes:
- Search results
- Generated reports
- Cached content
This approach ensures that your research data persists even if you delete or move the project folder.
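Resolving the storage location can be sketched as follows. The default path and the `RESEARCH_DATA_PATH` variable match the description above; the helper name itself is illustrative:

```python
import os
from pathlib import Path

def resolve_data_path(env=None) -> Path:
    """Use RESEARCH_DATA_PATH if set, else default to ~/Documents/ResearchAgentData."""
    env = env if env is not None else os.environ
    raw = env.get("RESEARCH_DATA_PATH", "~/Documents/ResearchAgentData")
    return Path(raw).expanduser()
```

Keeping this path outside the repository is what lets reports and cached search results survive a deleted or moved project folder.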