A fully local, private RAG CLI for chatting with your personal journal entries.
Requirements:
- Python 3.11+
- Ollama installed and running
Pull the required models before first use:

```bash
ollama pull mxbai-embed-large
ollama pull qwen3.5
```

Create a virtual environment and install the Python dependencies:

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
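Before moving on, it can help to confirm the Ollama server is reachable and both models respond. A quick check, assuming the `ollama` Python package is installed; this is a hypothetical one-off script, not part of the repo:

```python
# Hypothetical one-off check, not part of the repo. Assumes the `ollama`
# Python package and a local Ollama server; each call raises on failure.
import ollama

ollama.embeddings(model="mxbai-embed-large", prompt="ping")  # embedding model
ollama.chat(model="qwen3.5", messages=[{"role": "user", "content": "ping"}])
print("Ollama is up and both models respond.")
```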
Place your Day One JSON export (or compatible format) in the project directory. The default expected file is journal.json.

The expected JSON structure:

```json
{
  "metadata": { "version": "1.0" },
  "entries": [
    {
      "creationDate": "2024-01-15T10:30:00Z",
      "text": "Journal entry text...",
      "tags": ["tag1", "tag2"],
      "location": {
        "placeName": "Place",
        "localityName": "City",
        "administrativeArea": "State",
        "country": "Country"
      },
      "weather": {
        "conditionsDescription": "Sunny",
        "temperatureCelsius": 25
      }
    }
  ]
}
```
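For orientation, here is a minimal sketch of how entries in this format can be flattened into text plus metadata. It only illustrates the schema above; the actual parsing in main.py may differ:

```python
# Sketch: flatten a Day One-style export into (text, metadata) records.
# Illustrative only; the real ingestion logic lives in main.py.
import json

with open("journal.json") as f:
    export = json.load(f)

records = []
for entry in export.get("entries", []):
    location = entry.get("location", {})
    weather = entry.get("weather", {})
    records.append({
        "text": entry.get("text", ""),
        "metadata": {
            "date": entry.get("creationDate"),
            "tags": ", ".join(entry.get("tags", [])),
            "place": location.get("placeName"),
            "city": location.get("localityName"),
            "conditions": weather.get("conditionsDescription"),
            "temperature_c": weather.get("temperatureCelsius"),
        },
    })
```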
Run the ingestion step:

```bash
python main.py --ingest
```

To use a custom file path:

```bash
python main.py --ingest --file /path/to/your/journal.json
```

This reads the JSON, extracts text and metadata (date, location, tags, weather), chunks the content, and stores embeddings in a local ChromaDB vector database.
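As a rough picture of what happens under the hood, the sketch below embeds each chunk with Ollama and writes it to a persistent ChromaDB collection. The collection name, chunk size, and ID scheme are assumptions, not the repo's exact implementation:

```python
# Sketch: embed chunks and persist them in ChromaDB.
# Collection name "journal" and fixed-width chunking are assumptions.
import chromadb
import ollama

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(name="journal")

def chunk(text: str, size: int = 1000) -> list[str]:
    # Naive fixed-width split; the real code may chunk more carefully.
    return [text[i:i + size] for i in range(0, len(text), size)]

for i, record in enumerate(records):  # `records` from the parsing sketch above
    # ChromaDB metadata values must be str/int/float/bool, so drop Nones.
    metadata = {k: v for k, v in record["metadata"].items() if v is not None}
    for j, piece in enumerate(chunk(record["text"])):
        response = ollama.embeddings(model="mxbai-embed-large", prompt=piece)
        collection.add(
            ids=[f"{i}-{j}"],
            documents=[piece],
            embeddings=[response["embedding"]],
            metadatas=[metadata],
        )
```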
Start an interactive session:

```bash
python main.py --chat
```

Type your questions and the LLM will answer based on relevant journal entries. Type exit or quit to stop.
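Conceptually, each turn embeds the question, retrieves the nearest chunks from ChromaDB, and passes them to the chat model as context. A sketch of one such turn, where the prompt wording and n_results value are assumptions:

```python
# Sketch: one retrieval-augmented chat turn. Prompt wording is an assumption.
import chromadb
import ollama

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(name="journal")

question = input("> ")
query = ollama.embeddings(model="mxbai-embed-large", prompt=question)
hits = collection.query(query_embeddings=[query["embedding"]], n_results=5)

context = "\n\n".join(hits["documents"][0])
reply = ollama.chat(
    model="qwen3.5",
    messages=[
        {"role": "system",
         "content": f"Answer using these journal excerpts:\n\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(reply["message"]["content"])
```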
The following constants in main.py can be adjusted:

| Constant | Default | Description |
|---|---|---|
| `EMBEDDING_MODEL` | `mxbai-embed-large` | Ollama embedding model |
| `CHAT_MODEL` | `qwen3.5` | Ollama chat model |
| `DB_DIR` | `./chroma_db` | ChromaDB storage directory |
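These might appear near the top of main.py roughly as follows (values are the defaults from the table; the exact form in the file may differ):

```python
EMBEDDING_MODEL = "mxbai-embed-large"  # must match the vectors stored in ChromaDB
CHAT_MODEL = "qwen3.5"
DB_DIR = "./chroma_db"
```

Note that changing EMBEDDING_MODEL after ingestion requires re-running --ingest, since stored vectors must come from the same model used at query time.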
Everything runs locally:
- Ollama runs models on your machine
- ChromaDB stores vectors on disk
- No network calls are made during ingestion or chat