LavinMQ Producer/Consumer POC

This project provides a Proof of Concept (POC) for using LavinMQ (a RabbitMQ alternative) for two common messaging patterns:

  1. Classic Queues: A reliable, work-distribution pattern using Producers and Consumers.
  2. Streams: A pattern for high-throughput, append-only logs, ideal for real-time data feeds.

Examples are provided in both Python and Node.js, and all configuration is managed via a .env file.

1. The "Why": Benefits of a Message-Driven Architecture

Adopting a message queue architecture offers significant advantages over traditional, direct API calls, especially when integrating critical systems. It provides:

  • Guaranteed Processing and Data Integrity: Once a message (like a sales order or inventory update) is accepted by the queue, it's safely stored until the receiving system can successfully process it. This "guaranteed delivery" eliminates the risk of losing critical data if the destination system is temporarily offline for maintenance or an unexpected restart. An order is only removed from the queue after the consumer confirms it has been processed successfully.

  • Enhanced System Resilience and Decoupling: The queue acts as a crucial buffer. During high-volume periods, it absorbs traffic spikes, preventing the destination system from becoming overloaded. This ensures stable, predictable performance and avoids system-wide slowdowns or crashes. The sending system doesn't need to know if the receiving system is online, busy, or undergoing an update. This allows development teams to deploy and maintain their respective systems independently, drastically reducing the risk of one system breaking another.

  • Intelligent Error Handling: If a specific message is malformed or fails to process, the message broker can automatically isolate it in a "dead-letter queue" for inspection. This prevents a single bad record from halting the entire data flow, ensuring that valid orders and updates continue to be processed without interruption.

  • Future-Proof Scalability: As transaction volume grows, you can increase processing power by simply adding more "consumer" instances to read from the queue in parallel. This can be done without any changes to the producing system, providing a clear and simple path to scale the architecture.

  • Complete Traceability for Compliance and Audits: The journey of every message can be tracked. We can log when an event was sent, when it entered the queue, and when it was processed. This creates an immutable audit trail, which is invaluable for regulatory compliance and simplifies debugging complex issues.

2. Setup & Prerequisites

Create Configuration File

This project uses a .env file to manage configuration for usernames, passwords, and queue names. Create a file named .env in the root of the project with the following content:

# Environment variables for LavinMQ POC

# Credentials for the broker
LAVINMQ_USER=user
LAVINMQ_PASS=password
LAVINMQ_HOST=localhost

# Queue and Stream names
LAVINMQ_QUEUE=task_queue
LAVINMQ_STREAM=log_stream
LAVINMQ_DEAD_LETTER_EXCHANGE=dead_letter_exchange
LAVINMQ_DEAD_LETTER_QUEUE=dead_letter_queue

Start the LavinMQ Container

You need Docker with the Compose plugin installed. To start the LavinMQ broker, run the following command from the root of the project. Compose will automatically load the variables from the .env file.

docker compose up -d

This will start a LavinMQ container in the background.

  • The broker will be available on localhost:5672.
  • The management UI is available at http://localhost:15672.
  • Login Credentials: The LAVINMQ_USER and LAVINMQ_PASS from your .env file.
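
The example scripts presumably read these settings from the environment; a minimal sketch of building the AMQP connection URL from the .env variables above (the `amqp_url` helper is illustrative, not a function from this repository):

```python
import os


def amqp_url(user: str, password: str, host: str) -> str:
    """AMQP 0-9-1 connection URL for the broker (default port 5672)."""
    return f"amqp://{user}:{password}@{host}:5672"


if __name__ == "__main__":
    # Fall back to this project's defaults when the variables are unset.
    url = amqp_url(
        os.getenv("LAVINMQ_USER", "user"),
        os.getenv("LAVINMQ_PASS", "password"),
        os.getenv("LAVINMQ_HOST", "localhost"),
    )
    print(url)
```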

Troubleshooting Authentication

If you are having trouble logging in, please check the following:

  1. .env File: Make sure you have a .env file in the root of the project, and that it contains the LAVINMQ_USER and LAVINMQ_PASS variables.
  2. Restart Container: If you have recently created or modified the .env file, you must restart the container for the changes to take effect. You can do this by running:
    docker compose down && docker compose up -d
  3. Default Credentials: The default credentials for this project are user and password. If you have changed them in your .env file, use your custom credentials.

3. Use Case: Classic Work Queues

This pattern demonstrates how to distribute tasks to one or more workers. A producer sends tasks to a queue, and a consumer receives and processes them, sending an acknowledgment only when the work is complete.

Python Example

  1. Navigate to the directory:

    cd python-example
  2. Create a virtual environment and install dependencies with uv: (If you don't have uv, install it with pip install uv).

    # Create the virtual environment
    uv venv
    
    # Activate the environment
    source .venv/bin/activate
    
    # Install dependencies
    uv pip sync requirements.txt
  3. Run the consumer: Open a terminal and run the consumer. It will wait for messages.

    python consumer.py
  4. Run the producer: Open a second terminal (with the virtual environment activated) and run the producer to send a message.

    python producer.py "A task with... three... dots..."

Node.js Example

  1. Navigate to the directory:

    cd nodejs-example
  2. Install dependencies:

    npm install
  3. Run the consumer: Open a terminal and run the consumer.

    node consumer.js
  4. Run the producer: Open a second terminal and run the producer.

    node producer.js "Hello from Node.js"

4. The Power of Decoupling: Cross-Language Communication

This is the core benefit of a message broker. The producer and consumer don't need to be written in the same language.

Scenario 1: Node.js Producer -> Python Consumer

  1. Make sure your Python consumer from the python-example directory is running.
  2. In another terminal, navigate to the nodejs-example directory.
  3. Send a message using the Node.js producer:
    node producer.js "This message is from Node.js!"
  4. Observe the Python consumer terminal. You will see it receive the message.

Scenario 2: Python Producer -> Node.js Consumer

  1. Make sure your Node.js consumer from the nodejs-example directory is running.
  2. In another terminal, navigate to the python-example directory (and activate its virtual environment).
  3. Send a message using the Python producer:
    python producer.py "This message is from Python!"
  4. Observe the Node.js consumer terminal. It will receive and process the message seamlessly.

5. Use Case: Streams

Streams are an append-only log structure, perfect for high-throughput scenarios like logging or real-time event tracking.

Python Stream Example

  1. Navigate and set up the environment:

    cd streams-example/python
    uv venv
    source .venv/bin/activate
    uv pip sync requirements.txt
  2. Run the stream consumer:

    python stream_consumer.py
  3. Run the stream producer: In a second, activated terminal:

    python stream_producer.py

    You will see log messages appear in the consumer's terminal in real-time.

Node.js Stream Example

  1. Navigate and install dependencies:

    cd streams-example/nodejs
    npm install
  2. Run the stream consumer:

    node stream_consumer.js
  3. Run the stream producer: In a second terminal:

    node producer.js

    The consumer's terminal will immediately begin showing the log messages.

6. Hands-On Exercises to Understand the Benefits

Here are some practical exercises you can run to see the benefits of message queues in action.

Exercise 1: Simulate a System Outage (Resilience)

This exercise demonstrates that no messages are lost even while the consumer is offline.

  1. Make sure no consumers are running. Stop any active consumer.py or consumer.js processes.
  2. Open a terminal and navigate to nodejs-example.
  3. Send several messages using the producer:
    node producer.js "Order #1"
    node producer.js "Order #2"
    node producer.js "Order #3"
  4. Go to the LavinMQ management UI at http://localhost:15672 and log in. You will see that task_queue has 3 "Ready" messages waiting.
  5. Now start the Python consumer in another terminal:
    cd ../python-example
    source .venv/bin/activate
    python consumer.py
  6. Observe: the consumer will immediately start processing the three messages that were waiting in the queue. This demonstrates that the queue kept the data safe while the consumer was down.
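
The "Ready" count from step 4 can also be checked programmatically. LavinMQ exposes a RabbitMQ-compatible management HTTP API; the endpoint path and the `messages_ready` field below are assumptions based on that API, and `ready_messages` is an illustrative helper:

```python
import base64
import json
import urllib.request


def basic_auth_header(user: str, password: str) -> str:
    """Value for an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


def ready_messages(queue: str, host: str = "localhost", port: int = 15672,
                   user: str = "user", password: str = "password") -> int:
    """Number of 'Ready' messages in a queue on the default vhost (%2F)."""
    req = urllib.request.Request(f"http://{host}:{port}/api/queues/%2F/{queue}")
    req.add_header("Authorization", basic_auth_header(user, password))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["messages_ready"]


if __name__ == "__main__":
    print(ready_messages("task_queue"))  # should report 3 after step 3 above
```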

Exercise 2: Scaling Consumers (Scalability)

This exercise shows how work is distributed across multiple workers.

  1. Open two separate terminals. In both, navigate to python-example and activate the virtual environment.
  2. Run the consumer in both terminals:
    python consumer.py
    You now have two "workers" listening on the same queue.
  3. Open a third terminal and navigate to nodejs-example.
  4. Send several messages with varying processing times:
    node producer.js "A quick task."
    node producer.js "A medium task.."
    node producer.js "A long task..."
    node producer.js "Another quick task."
  5. Observe: you will see the messages distributed across the two Python consumers. LavinMQ delivers the next available message to the next available consumer, effectively balancing the workload.
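
The varying dot counts in the messages above suggest the consumers simulate work by sleeping one second per dot, the convention from the classic RabbitMQ work-queue tutorial. Whether this POC's consumers do exactly that is an assumption, but the idea can be sketched as:

```python
import time


def work_seconds(body: bytes) -> int:
    """Simulated processing time: one second per '.' in the message."""
    return body.count(b".")


def process(body: bytes) -> None:
    print(f"Working on {body.decode()!r} for {work_seconds(body)}s")
    time.sleep(work_seconds(body))  # a consumer would ack only after this returns


process(b"A long task...")  # takes 3 seconds
```

Because one worker is busy sleeping on a long task, the broker delivers the next message to the other, idle worker; that is the load balancing you observe in step 5.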

Exercise 3: Handling "Poison Messages" with a Dead-Letter Queue (Error Handling)

This exercise demonstrates how a "bad" message that causes an error can be automatically isolated without stopping other work.

  1. Make sure a consumer is running (consumer.py or consumer.js).
  2. Open a new terminal and navigate to the nodejs-example directory.
  3. Send a mix of "good" and "bad" messages. The consumers are programmed to reject any message containing the word "error".
    node producer.js "This is a valid message"
    node producer.js "This message contains an error"
    node producer.js "This is another valid message"
  4. Watch the consumer terminal: you will see the consumer process the first valid message, reject the message containing "error", and then immediately process the last valid message. The workflow is not blocked by the faulty message.
  5. Check the dead-letter queue: go to the LavinMQ management UI at http://localhost:15672.
    • Navigate to the "Queues" tab.
    • You will see a new queue named dead_letter_queue (or whatever you configured in your .env file).
    • Click on it. You will see it contains the message "This message contains an error".
    • This demonstrates that the problematic message has been safely isolated for later inspection and debugging, without affecting the main workflow.

Important Note on Broker Cleanup (For the Workshop)

If you hit errors such as PRECONDITION_FAILED when declaring exchanges or queues, it usually means they already exist with different arguments from the ones your code is trying to use. To ensure a clean broker state during the workshop, especially if you have been experimenting with different configurations, it is recommended to:

  1. Clean the broker manually: Open the LavinMQ (RabbitMQ-compatible) management UI at http://localhost:15672. Go to the "Exchanges" and "Queues" tabs and manually delete any exchange (dead_letter_exchange) or queue (task_queue, dead_letter_queue) that is causing a conflict.
  2. Restart Docker with volumes removed: For a full cleanup that wipes all of the broker's persistent data, run:
    docker compose down -v && docker compose up -d
    Note that docker compose down -v deletes the Docker volumes, which means any persistent data (such as queued messages or exchange/queue configurations) is lost. This is ideal for starting fresh in a development or workshop environment.

If docker compose down -v fails with an internal Docker server error, you may need to restart your Docker daemon or check your Docker installation.
