
maximyudayev/hermes


HERMES: Heterogeneous Edge Realtime Measurement and Execution System

A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications

Supported on Windows, Linux, and macOS.


HERMES takes its name from the Greek god of communication and speed, protector of information, and herald of the gods, who embodies smooth and reliable communication. His role resonates with the vision of this framework: facilitate reliable and fast exchange of continuously generated multimodal physiological and external data across distributed wireless and wired multi-sensor hosts for synchronized realtime data collection, in-the-loop AI stream processing, and analysis in intelligent med- and health-tech (wearable) applications.


Overview of the system architecture on one of the distributed hosts

HERMES offers out-of-the-box streaming integrations with a number of commercial sensor devices and systems and high-resolution cameras, templates for extension with custom user devices, and a ready-made wrapper for easy PyTorch AI model insertion. It reliably and synchronously captures heterogeneous data across distributed interconnected devices on a local network in a continuous manner, and enables realtime AI processing at the edge toward personalized, intelligent, closed-loop interventions for the user. All continuously acquired data is periodically flushed to disk, for as long as the system has disk space, as MKV/MP4 files for video and HDF5 files for sensor data.

Quickstart

Core

Create a Python 3 virtual environment with python -m venv .venv (Python >= 3.7).

Activate it with source .venv/bin/activate on Linux or .venv\Scripts\activate on Windows.

Install HERMES into your project, alongside your other dependencies, with a single command:

pip install pysio-hermes

Extra

Each integrated, validated, and supported sensor device is separately installable as pysio-hermes-<subpackage_name>, e.g.:

pip install pysio-hermes-torch

This installs the AI processing subpackage that wraps user-specified PyTorch models.

List of supported devices (continuously updated). Some subpackages require OEM software installation; check each one below for detailed prerequisites.

The following subpackages are in development.


FFmpeg (Optional)

If your setup captures video or audio, you will have to install FFmpeg.

Copy the examples/video_codec_<type>.yml file that matches your video-encoding hardware (AMD or Intel CPU, or an NVIDIA GPU) to examples/video_codec.yml.

Windows

  1. Download the full build with shared libraries from gyan.dev.
  2. Unpack the archive into the desired folder, like C:\Program Files\ffmpeg.
  3. Add path to the FFmpeg binaries to the Path environment variable manually, or via CMD.
    SETX PATH "%PATH%;C:\Program Files\ffmpeg\bin;C:\Program Files\ffmpeg" /M
  4. Open a new terminal window and check that the system can correctly find FFmpeg with where ffmpeg.

Linux

  1. Install with the package manager sudo apt-get install ffmpeg.
  2. Check that ffmpeg is on the path with which ffmpeg.

Running

The system runs based on YAML configuration files that specify the connections to other hosts and the local or remote Producers, Consumers, and Pipelines to launch.
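A configuration could look roughly like the sketch below. The key names and component types here are illustrative assumptions, not the actual schema; consult the YAML files shipped under examples/ for the authoritative format.

```yaml
# Hypothetical sketch of a HERMES experiment configuration.
# All key names are illustrative -- refer to examples/ for the real schema.
hosts:
  - name: laptop
    address: 192.168.1.10
producers:            # data sources (sensors, cameras, ...)
  - type: camera
    host: laptop
consumers:            # data sinks (loggers, visualizers, ...)
  - type: hdf5_logger
    host: laptop
pipelines:            # in-the-loop stream processors (e.g. AI models)
  - type: torch_model
    host: laptop
```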

Benchmarking

Communication Latency

  1. Install plotting libraries into the current virtual environment uv pip install -r viz_requirements.txt.
  2. On each host device, run the automated latency evaluation script under test/:
    cd test
    Then run test_latency_localhost.bat on Windows or . test_latency_localhost.sh on Linux.
  3. Gather the generated CSV files from all tested devices and place them in test/data/latency/localhost/<device_name> subfolders in the following structure. The folder name will be used as the trace name of the corresponding series on the generated plot.
    root/
    └───test/
        └───data/
            └───latency/
                ├───localhost/
                │   ├───laptop/
                │   │   ├───byte_100/
                │   │   │   └───latency_vs_frequency.csv
                │   │   └───rate_10/
                │   │       └───latency_vs_msgsize.csv
                │   ├───nuc/
                │   ├───pi/
                │   └───server/
                └───multi_device/
  4. Invert the directory structure for batch visualization by running python utils\invert_latency_subfolders.py for Windows or python utils/invert_latency_subfolders.py for Linux.
  5. Visualize latencies by running plot_latency.bat .\data\latency\localhost_inverted for Windows or . plot_latency.sh ./data/latency/localhost_inverted for Linux. It will generate latency plots for each device run with the shared set of experimental parameters:
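If you want to post-process the collected CSVs yourself instead of (or before) using the plotting scripts, a small stdlib-only sketch like the following can aggregate them. The column names `frequency_hz` and `latency_ms` are assumptions; adjust them to match the header of the CSV files the test scripts actually produce.

```python
import csv
import io
import statistics

def summarize_latency(csv_text, x_col="frequency_hz", y_col="latency_ms"):
    """Group latency samples by the swept parameter and report the mean.

    Column names are assumptions -- adapt x_col/y_col to the real header.
    """
    groups = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups.setdefault(float(row[x_col]), []).append(float(row[y_col]))
    # Mean latency per swept-parameter value, sorted by the parameter.
    return {x: statistics.mean(ys) for x, ys in sorted(groups.items())}

sample = """frequency_hz,latency_ms
10,0.8
10,1.2
100,1.5
100,2.5
"""
print(summarize_latency(sample))  # {10.0: 1.0, 100.0: 2.0}
```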

Intra-device latency vs sampling frequency for 1kB messages · Intra-device latency vs message size at 100Hz

Inter-device latency vs sampling frequency for 1kB messages · Inter-device latency vs message size at 100Hz

Synchronization Consistency

  1. Log the NTP offset over time on each device, under network and processing load by running (will spawn a background process):
    • Windows (Option #1) - Command Prompt
      wmic process call create "cmd.exe /c w32tm /stripchart /computer:<local_ntp_server_ip> /samples:720 /period:5 /dataonly > %USERPROFILE%\Desktop\ntp_sync_1hr.log"
    • Windows (Option #2) - PowerShell
      Invoke-CimMethod -ClassName Win32_Process -MethodName Create -Arguments @{CommandLine = 'cmd.exe /c w32tm /stripchart /computer:<local_ntp_server_ip> /samples:720 /period:5 /dataonly > %USERPROFILE%\Desktop\ntp_sync_1hr.log'}
    • Linux - bash
      nohup bash -c 'for i in {1..720}; do echo "=== $(date +"%Y-%m-%d %H:%M:%S") ===" >> ntp_sync_1hr.log; chronyc tracking >> ntp_sync_1hr.log; echo "" >> ntp_sync_1hr.log; sleep 5; done' > /dev/null 2>&1 &
      Then parse the log into a comma-separated file:
      printf '\n\n\n' > ntp_parsed.log; awk '/===/ { ts = $2 " " $3 } /System time/ { print ts ", " $4 "s" }' ntp_sync_1hr.log >> ntp_parsed.log
  2. Gather the generated log files from all tested devices and place them in test/data/ntp_sync. The file name will be used as the trace name of the corresponding series on the generated plot. Ideally, use the same names as in the latency test to match trace colors.
  3. Run the plot generator script plot_sync_tail.bat .\data\ntp_sync on Windows or . plot_sync_tail.sh ./data/ntp_sync on Linux.
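Where Python is preferred over the awk one-liner (e.g. to parse logs on Windows as well), a stdlib sketch like the following could do the same conversion. The log layout it expects is an assumption based on the logging loop in step 1: a "=== timestamp ===" header followed by the chronyc tracking "System time" line.

```python
import re

def parse_ntp_log(text):
    """Extract (timestamp, offset_seconds) pairs from a chronyc tracking log.

    Assumes the log format produced by the logging loop above:
    "=== YYYY-mm-dd HH:MM:SS ===" headers, then a line such as
    "System time : 0.000123456 seconds fast of NTP time".
    """
    rows = []
    ts = None
    for line in text.splitlines():
        m = re.match(r"=== (.+) ===", line)
        if m:
            ts = m.group(1)
            continue
        m = re.search(r"System time\s*:\s*([0-9.]+) seconds (fast|slow)", line)
        if m and ts:
            offset = float(m.group(1))
            # Sign convention: "slow" means the clock is behind NTP time.
            rows.append((ts, -offset if m.group(2) == "slow" else offset))
    return rows

sample = """=== 2025-01-01 12:00:00 ===
System time     : 0.000123456 seconds fast of NTP time
=== 2025-01-01 12:00:05 ===
System time     : 0.000050000 seconds slow of NTP time
"""
for ts, off in parse_ntp_log(sample):
    print(f"{ts}, {off:+.9f}s")
```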

Synchronization time offset tail curve across connected wired and wireless devices

Longitudinal Data Alignment

  1. Download demo HERMES data [TBA] from a 4 device sensing setup:
    • Raspberry Pi 5 exoskeleton controller
    • LattePanda 3 Delta wearable companion (FPOV + gaze tracking)
    • Xsens MoCap system connected to a laptop
    • Camera PC with 4 high-resolution cameras
  2. Update the DATA_PATH in the appropriate Windows or Linux CLI script to point to the downloaded data folder.
  3. Run the plotting script and, when prompted, select the 2 points to zoom in on, to visually validate synchronization in the raw longitudinal data:
    • Windows -> test\synchronization\plot_sync_experiment.bat
    • Linux -> . test/synchronization/plot_sync_experiment.sh

Snapshot of longitudinal synchronization in heterogeneous multimodal data captured with HERMES during a real exoskeleton experiment with four separate host devices: a Raspberry Pi 5 exoskeleton controller, a LattePanda 3 Delta wearable companion, an Xsens MoCap system connected to a laptop, and a camera PC with 4 high-resolution cameras

Documentation

Check out the full documentation site for more usage examples, architecture overview, detailed extension guide, and FAQs.

Data Annotation


PysioViz: A dashboard for visualization and annotation of collected multimodal data for AI workflows

We developed PysioViz, a complementary dashboard built on Plotly Dash, for analysis and annotation of the collected multimodal data. We use it ourselves to generate ground-truth labels for our AI training workflows. Check it out and leave feedback!

Showcase

These are some of our own projects enabled by HERMES, to inspire you to adopt it in your smart closed-loop healthtech use cases.

AI-enabled intent prediction for high-level locomotion mode selection in a smart leg prosthesis

Realtime automated cueing for freezing-of-gait Parkinson's patients in free-living conditions

Personalized level of assistance in prolonged-use rehabilitation and support exoskeletons

License

This source code is licensed under the MIT license - see the LICENSE file for details.

The project's logo is distributed under the CC BY-NC-ND 4.0 license - see the LOGO-LICENSE.

Citation

When using HERMES in your project, research, or product, please cite the following and notify us so we can update the index of success stories enabled by HERMES.

@misc{yudayev2026hermes,
   title={HERMES: A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications}, 
   author={Yudayev, Maxim and Carlon, Juha and Lamsal, Diwas and Stefanova, Vayalet and Filtjens, Benjamin},
   year={2026},
   eprint={2601.12610},
   archivePrefix={arXiv},
   primaryClass={eess.SY},
   doi={10.48550/arXiv.2601.12610}, 
}

Acknowledgement

This project was primarily written by Maxim Yudayev while at the Department of Electrical Engineering, KU Leuven.

This study was funded, in part, by the AidWear project funded by the Federal Public Service for Policy and Support, the AID-FOG project by the Michael J. Fox Foundation for Parkinson’s Research under Grant No.: MJFF-024628, the strategic basic research project RevalExo (S001024N) funded by the Research Foundation Flanders, and the Flemish Government under the Flanders AI Research Program (FAIR).

HERMES is a "Ship of Theseus"1 of ActionSense that started as a fork and became a complete architectural rewrite of the system from the ground up to bridge the fundamental gaps in the state-of-the-art, and to match our research group's needs in realtime deployments and reliable data acquisition. Although there is no part of ActionSense in HERMES, we believe that its authors deserve recognition as inspiration for our system.

Special thanks for early usage, contributions, bug reports, good times during experiments, and feature requests to Juha Carlon (KU Leuven), Vayalet Stefanova (KU Leuven), Diwas Lamsal (KU Leuven), Stefano Nuzzo (VUB), Léonore Foguenne (ULiège). And for the support to prof. Benjamin Filtjens (TU Delft) and prof. Bart Vanrumste (KU Leuven).

Footnotes

  1. The Ship of Theseus is a paradoxical thought experiment about identity and persistence from Greek philosophy that questions whether a ship, all of whose original parts are replaced over time, remains the same ship.
