This project provides a Proof of Concept (POC) for using LavinMQ (a RabbitMQ alternative) for two common messaging patterns:
- Classic Queues: A reliable, work-distribution pattern using Producers and Consumers.
- Streams: A pattern for high-throughput, append-only logs, ideal for real-time data feeds.
Examples are provided in both Python and Node.js, and all configuration is managed via a .env file.
Adopting a message queue architecture offers significant advantages over traditional, direct API calls, especially when integrating critical systems. It provides:
- Guaranteed Processing and Data Integrity: Once a message (like a sales order or an inventory update) is accepted by the queue, it is safely stored until the receiving system can successfully process it. This "guaranteed delivery" eliminates the risk of losing critical data if the destination system is temporarily offline for maintenance or an unexpected restart. An order is only removed from the queue after the consumer confirms it has been processed successfully.
- Enhanced System Resilience and Decoupling: The queue acts as a crucial buffer. During high-volume periods it absorbs traffic spikes, preventing the destination system from becoming overloaded. This ensures stable, predictable performance and avoids system-wide slowdowns or crashes. The sending system doesn't need to know whether the receiving system is online, busy, or undergoing an update, so development teams can deploy and maintain their respective systems independently, drastically reducing the risk of one system breaking another.
- Intelligent Error Handling: If a specific message is malformed or fails to process, the broker can automatically isolate it in a "dead-letter queue" for inspection. This prevents a single bad record from halting the entire data flow, ensuring that valid orders and updates continue to be processed without interruption.
- Future-Proof Scalability: As transaction volume grows, you can increase processing power by simply adding more consumer instances that read from the queue in parallel. This requires no changes to the producing system, providing a clear and simple path to scale the architecture.
- Complete Traceability for Compliance and Audits: The journey of every message can be tracked: when an event was sent, when it entered the queue, and when it was processed. This creates an audit trail that is invaluable for regulatory compliance and simplifies debugging complex issues.
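The "only removed after the consumer confirms" behavior above maps directly onto AMQP acknowledgments. A minimal sketch of a consumer callback, assuming the `pika` client's callback signature; `process_order` is a hypothetical stand-in for real business logic (the project's actual `consumer.py` may differ):

```python
def process_order(body: bytes) -> bool:
    """Hypothetical stand-in for real business logic; True means success."""
    return b"error" not in body.lower()

def on_message(channel, method, properties, body):
    """pika-style callback: the broker only deletes the message once we ack it."""
    if process_order(body):
        # Success: only now is the message removed from the queue.
        channel.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Not acknowledged: the broker requeues or dead-letters the message,
        # so a crash or processing failure never silently loses an order.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
```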
This project uses a .env file to manage configuration for usernames, passwords, and queue names. Create a file named .env in the root of the project with the following content:
```
# Environment variables for LavinMQ POC

# Credentials for the broker
LAVINMQ_USER=user
LAVINMQ_PASS=password
LAVINMQ_HOST=localhost

# Queue and Stream names
LAVINMQ_QUEUE=task_queue
LAVINMQ_STREAM=log_stream
LAVINMQ_DEAD_LETTER_EXCHANGE=dead_letter_exchange
LAVINMQ_DEAD_LETTER_QUEUE=dead_letter_queue
```

You need Docker and Docker Compose installed. To start the LavinMQ broker, run the following command from the root of the project; it will automatically load the variables from the `.env` file:

```
docker compose up -d
```

This starts a LavinMQ container in the background.
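The project's `docker-compose.yml` is not reproduced here; a minimal sketch of what such a file might look like is below. The image tag, the port mappings, and whether the LavinMQ image actually consumes the `LAVINMQ_*` variables are assumptions to check against the real file.

```yaml
# Hypothetical docker-compose.yml sketch -- the project's real file may differ.
services:
  lavinmq:
    image: cloudamqp/lavinmq:latest   # official LavinMQ image
    ports:
      - "5672:5672"     # AMQP
      - "15672:15672"   # management UI
    env_file:
      - .env            # loads LAVINMQ_USER, LAVINMQ_PASS, etc.
```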
- The broker will be available on `localhost:5672`.
- The management UI is available at http://localhost:15672.
- Login credentials: the `LAVINMQ_USER` and `LAVINMQ_PASS` from your `.env` file.
If you are having trouble logging in, please check the following:
- `.env` file: Make sure you have a `.env` file in the root of the project, and that it contains the `LAVINMQ_USER` and `LAVINMQ_PASS` variables.
- Restart the container: If you have recently created or modified the `.env` file, you must restart the container for the changes to take effect: `docker compose down && docker compose up -d`
- Default credentials: The default credentials for this project are `user` and `password`. If you have changed them in your `.env` file, use your custom credentials.
This pattern demonstrates how to distribute tasks to one or more workers. A producer sends tasks to a queue, and a consumer receives and processes them, sending an acknowledgment only when the work is complete.
- Navigate to the directory: `cd python-example`
- Create a virtual environment and install dependencies with `uv` (if you don't have `uv`, install it with `pip install uv`):

  ```
  # Create the virtual environment
  uv venv
  # Activate the environment
  source .venv/bin/activate
  # Install dependencies
  uv pip sync requirements.txt
  ```

- Run the consumer: Open a terminal and run the consumer. It will wait for messages.

  ```
  python consumer.py
  ```

- Run the producer: Open a second terminal (with the virtual environment activated) and run the producer to send a message.

  ```
  python producer.py "A task with... three... dots..."
  ```
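For reference, here is a minimal sketch of what a producer like `producer.py` could look like, assuming the `pika` AMQP client (the project's real dependency and code may differ). The `broker_config` helper simply collects the `.env` settings with the POC defaults.

```python
import os
import sys

def broker_config() -> dict:
    """Collect broker settings from the environment, with the POC defaults."""
    return {
        "host": os.environ.get("LAVINMQ_HOST", "localhost"),
        "user": os.environ.get("LAVINMQ_USER", "user"),
        "password": os.environ.get("LAVINMQ_PASS", "password"),
        "queue": os.environ.get("LAVINMQ_QUEUE", "task_queue"),
    }

def main() -> None:
    """Connect and publish; requires a running broker and `pip install pika`."""
    import pika  # AMQP 0-9-1 client, compatible with LavinMQ

    cfg = broker_config()
    message = " ".join(sys.argv[1:]) or "Hello, LavinMQ!"
    params = pika.ConnectionParameters(
        host=cfg["host"],
        credentials=pika.PlainCredentials(cfg["user"], cfg["password"]),
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    # Durable queue: the queue itself survives a broker restart.
    channel.queue_declare(queue=cfg["queue"], durable=True)
    channel.basic_publish(
        exchange="",              # default exchange routes by queue name
        routing_key=cfg["queue"],
        body=message.encode("utf-8"),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    print(f"Sent: {message}")
    connection.close()

# To try it against a live broker, uncomment the call below:
# main()
```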
- Navigate to the directory: `cd nodejs-example`
- Install dependencies: `npm install`
- Run the consumer: Open a terminal and run the consumer.

  ```
  node consumer.js
  ```

- Run the producer: Open a second terminal and run the producer.

  ```
  node producer.js "Hello from Node.js"
  ```
This is the core benefit of a message broker. The producer and consumer don't need to be written in the same language.
Scenario 1: Node.js Producer -> Python Consumer
- Make sure your Python consumer from the `python-example` directory is running.
- In another terminal, navigate to the `nodejs-example` directory.
- Send a message using the Node.js producer: `node producer.js "This message is from Node.js!"`
- Observe the Python consumer terminal. You will see it receive the message.
Scenario 2: Python Producer -> Node.js Consumer
- Make sure your Node.js consumer from the `nodejs-example` directory is running.
- In another terminal, navigate to the `python-example` directory (and activate its virtual environment).
- Send a message using the Python producer: `python producer.py "This message is from Python!"`
- Observe the Node.js consumer terminal. It will receive and process the message seamlessly.
Streams are an append-only log structure, perfect for high-throughput scenarios like logging or real-time event tracking.
- Navigate and set up the environment:

  ```
  cd streams-example/python
  uv venv
  source .venv/bin/activate
  uv pip sync requirements.txt
  ```

- Run the stream consumer:

  ```
  python stream_consumer.py
  ```

- Run the stream producer: In a second, activated terminal:

  ```
  python stream_producer.py
  ```
You will see log messages appear in the consumer's terminal in real-time.
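Under AMQP, a stream is declared as a durable queue with the `x-queue-type: stream` argument, and a stream consumer must set a prefetch limit, use manual acknowledgments, and may pass an `x-stream-offset` argument to choose where in the log to start reading. A sketch of what `stream_consumer.py` might look like with the `pika` client (the project's actual code may differ):

```python
import os

STREAM = os.environ.get("LAVINMQ_STREAM", "log_stream")

def stream_arguments() -> dict:
    """Queue arguments that turn a durable queue declaration into a stream."""
    return {"x-queue-type": "stream"}

def consume_arguments(offset="first") -> dict:
    """Consumer arguments: start reading the log at the given offset
    ("first", "last", "next", or a numeric offset)."""
    return {"x-stream-offset": offset}

def main() -> None:
    """Connect and consume; requires a running broker and `pip install pika`."""
    import pika

    params = pika.ConnectionParameters(
        host=os.environ.get("LAVINMQ_HOST", "localhost"),
        credentials=pika.PlainCredentials(
            os.environ.get("LAVINMQ_USER", "user"),
            os.environ.get("LAVINMQ_PASS", "password"),
        ),
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()
    channel.queue_declare(queue=STREAM, durable=True, arguments=stream_arguments())
    # Streams require a prefetch limit and manual acknowledgments.
    channel.basic_qos(prefetch_count=100)

    def on_message(ch, method, properties, body):
        print(body.decode("utf-8", errors="replace"))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=STREAM, on_message_callback=on_message,
                          arguments=consume_arguments("first"))
    channel.start_consuming()

# To try it against a live broker, uncomment the call below:
# main()
```

Because the stream is append-only, reconnecting with `"first"` replays the whole log, whereas `"next"` only delivers messages published after the consumer attaches.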
- Navigate and install dependencies:

  ```
  cd streams-example/nodejs
  npm install
  ```

- Run the stream consumer:

  ```
  node stream_consumer.js
  ```

- Run the stream producer: In a second terminal:

  ```
  node producer.js
  ```
The consumer's terminal will immediately begin showing the log messages.
Below are some hands-on exercises you can run to see the benefits of message queues in action.
This exercise demonstrates that no messages are lost even when the consumer is offline.
- Make sure no consumers are running. Stop any active `consumer.py` or `consumer.js` processes.
- Open a terminal and navigate to `nodejs-example`.
- Send several messages using the producer:

  ```
  node producer.js "Order #1"
  node producer.js "Order #2"
  node producer.js "Order #3"
  ```
- Go to the LavinMQ management UI at http://localhost:15672 and log in. You will see that `task_queue` has 3 "Ready" messages waiting.
- Now start the Python consumer in another terminal:

  ```
  cd ../python-example
  source .venv/bin/activate
  python consumer.py
  ```

- Observe: the consumer will immediately start processing the three messages that were waiting in the queue. This demonstrates that the queue held the data safely while the consumer was down.
This exercise shows how work is distributed among multiple workers.
- Open two separate terminals. In both, navigate to `python-example` and activate the virtual environment.
- Run the consumer in both terminals. You now have two "workers" listening on the same queue:

  ```
  python consumer.py
  ```

- Open a third terminal and navigate to `nodejs-example`.
- Send several messages with varying processing times:

  ```
  node producer.js "A quick task."
  node producer.js "A medium task.."
  node producer.js "A long task..."
  node producer.js "Another quick task."
  ```
- Observe: you will see the messages distributed between the two Python consumers. LavinMQ delivers the next available message to the next available consumer, effectively balancing the workload.
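The distribution you just observed can be modeled in a few lines. By default the broker hands ready messages to consumers in turn; adding a prefetch limit of 1 (`channel.basic_qos(prefetch_count=1)` in pika) means a consumer is not sent a new message until it has acknowledged the previous one, so slow tasks don't pile up on one worker. A toy model of the default round-robin case, assuming equally fast workers:

```python
def round_robin(tasks, workers=2):
    """Toy model: with equally fast consumers, message i goes to
    consumer i % workers (the broker's default round-robin dispatch)."""
    assignment = {w: [] for w in range(workers)}
    for i, task in enumerate(tasks):
        assignment[i % workers].append(task)
    return assignment

print(round_robin(["quick.", "medium..", "long...", "quick."]))
# consumer 0 gets the 1st and 3rd messages, consumer 1 the 2nd and 4th
```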
This exercise demonstrates how a "bad" message that causes an error can be automatically isolated without halting other work.
- Make sure a consumer is running (`consumer.py` or `consumer.js`).
- Open a new terminal and navigate to the `nodejs-example` directory.
- Send a mix of "good" and "bad" messages. The consumers are programmed to reject any message containing the word "error":

  ```
  node producer.js "This is a valid message"
  node producer.js "This message contains an error"
  node producer.js "This is another valid message"
  ```
- Watch the consumer terminal: you will see the consumer process the first valid message, reject the message containing "error", and then immediately process the last valid message. The workflow is not blocked by the faulty message.
- Check the dead-letter queue: go to the LavinMQ management UI at http://localhost:15672.
- Navigate to the "Queues" tab.
- You will see a new queue named `dead_letter_queue` (or whatever you configured in your `.env` file).
- Click on it. You will see that it contains the message "This message contains an error".
- This demonstrates that the problematic message has been safely isolated for later inspection and debugging, without affecting the main workflow.
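The topology behind this exercise can be sketched as follows, again assuming the `pika` client and the names from the `.env` file above (the project's actual consumer code, and its choice of exchange type, may differ). The key piece is the `x-dead-letter-exchange` argument on `task_queue`: a message rejected with `requeue=False` is re-routed to that exchange instead of being dropped.

```python
import os

DLX = os.environ.get("LAVINMQ_DEAD_LETTER_EXCHANGE", "dead_letter_exchange")
DLQ = os.environ.get("LAVINMQ_DEAD_LETTER_QUEUE", "dead_letter_queue")
QUEUE = os.environ.get("LAVINMQ_QUEUE", "task_queue")

def task_queue_arguments() -> dict:
    """Rejected messages (basic_nack with requeue=False) go to this exchange."""
    return {"x-dead-letter-exchange": DLX}

def is_poison(body: bytes) -> bool:
    """The workshop consumers reject any message containing the word 'error'."""
    return b"error" in body.lower()

def main() -> None:
    """Declare the topology and consume; requires a running broker and pika."""
    import pika

    params = pika.ConnectionParameters(
        host=os.environ.get("LAVINMQ_HOST", "localhost"),
        credentials=pika.PlainCredentials(
            os.environ.get("LAVINMQ_USER", "user"),
            os.environ.get("LAVINMQ_PASS", "password"),
        ),
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    # Dead-letter exchange (fanout for simplicity) and the queue collecting
    # rejected messages.
    channel.exchange_declare(exchange=DLX, exchange_type="fanout", durable=True)
    channel.queue_declare(queue=DLQ, durable=True)
    channel.queue_bind(queue=DLQ, exchange=DLX)

    # Main work queue, wired to the dead-letter exchange.
    channel.queue_declare(queue=QUEUE, durable=True,
                          arguments=task_queue_arguments())

    def on_message(ch, method, properties, body):
        if is_poison(body):
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        else:
            ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
    channel.start_consuming()

# To try it against a live broker, uncomment the call below:
# main()
```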
If you encounter errors such as PRECONDITION_FAILED when declaring exchanges or queues, it usually means they already exist with arguments different from the ones your code is trying to use. To keep the broker in a clean state during the workshop, especially if you have been experimenting with different configurations, we recommend:
- Manually clean the broker: Open the LavinMQ management UI at http://localhost:15672. Go to the "Exchanges" and "Queues" tabs and manually delete any exchange (`dead_letter_exchange`) or queues (`task_queue`, `dead_letter_queue`) that are causing the conflict.
- Restart Docker with volumes removed: For a complete cleanup that wipes all of the broker's persistent data, run:

  ```
  docker compose down -v && docker compose up -d
  ```

  Note that `docker compose down -v` deletes the Docker volumes, which means any persistent data (such as queued messages or exchange/queue configuration) will be lost. This is ideal for starting from scratch in a development or workshop environment.
If `docker compose down -v` fails with an internal Docker server error, you may need to restart your Docker daemon or check your Docker installation.