This setup assumes you have Ollama running locally and want to persist your n8n workflows using Docker.
```bash
docker volume create n8n_data
```

This ensures your workflows, credentials, and execution history are saved across container restarts.
```bash
chmod u+x run.sh
```

This makes the script executable so you can launch n8n with a single command.
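The contents of `run.sh` aren't shown here; a minimal sketch, assuming the official n8n image and the `n8n_data` volume created above, might look like this:

```bash
#!/bin/sh
# Launch n8n on port 5678, persisting data in the n8n_data volume.
# --add-host lets the container reach Ollama on the host machine
# (needed on Linux; Docker Desktop provides host.docker.internal itself).
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  --add-host host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Note that from inside the container, Ollama is reachable at `http://host.docker.internal:11434` rather than `http://localhost:11434`.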
```bash
./run.sh
```

ℹ️ Make sure Ollama is already running and accessible at http://localhost:11434.
You can verify this by running:

```bash
curl http://localhost:11434/api/tags
```

This returns a JSON list of the models you have pulled locally.
Once n8n is running, open your browser and visit:
http://localhost:5678
From there, you can start building workflows that integrate with Ollama using the LangChain nodes.
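Before wiring up a workflow, you can sanity-check that Ollama answers generation requests directly. The model name below is a placeholder; substitute any model you have already pulled (check with `ollama list`):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello",
  "stream": false
}'
```

If this returns a JSON response with a `response` field, the same endpoint will work from n8n's Ollama nodes.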