A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications
Quickstart • Docs • GUI • Showcase • Cite • Contact
HERMES is named after the Greek god of communication and speed, protector of information and herald of the gods, who embodies smooth and reliable communication. His role resonates with the vision of this framework: to facilitate reliable and fast exchange of continuously generated multimodal physiological and external data across distributed wireless and wired multi-sensor hosts, for synchronized realtime data collection, in-the-loop AI stream processing, and analysis in intelligent med-tech and health-tech (wearable) applications.
HERMES offers out-of-the-box streaming integrations for a number of commercial sensor devices and systems and high-resolution cameras, templates for extension with custom user devices, and a ready-made wrapper for easy PyTorch AI model insertion. It reliably and synchronously captures heterogeneous data across distributed interconnected devices on a local network in a continuous manner, and enables realtime AI processing at the edge toward personalized, intelligent closed-loop interventions for the user. All continuously acquired data is periodically flushed to disk for as long as the system has disk space, as MKV/MP4 files for video and HDF5 files for sensor data.
- Create a Python 3 virtual environment: `python -m venv .venv` (Python >= 3.7).
- Activate it with `source .venv/bin/activate` on Linux, or `.venv\Scripts\activate` on Windows.
- Install HERMES into your project, along with its dependencies, with a single command: `pip install pysio-hermes`.

All the integrated, validated, and supported sensor devices are separately installable as `pysio-hermes-<subpackage_name>`. For example, `pip install pysio-hermes-torch` installs the AI processing subpackage that wraps user-specified PyTorch models.
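The wrapper pattern the `torch` subpackage provides can be pictured with a minimal stdlib-only sketch. The names below (`ModelWrapper`, `process`) are illustrative assumptions, not the actual `pysio-hermes-torch` API; consult the documentation for the real interface:

```python
# Hypothetical sketch only: names are NOT the real pysio-hermes-torch API.
# A wrapper receives each incoming sample, forwards it through the user
# model, and returns the prediction to the rest of the pipeline.
class ModelWrapper:
    def __init__(self, model):
        self._model = model  # any callable, e.g. a torch.nn.Module

    def process(self, sample):
        # in the real framework this would run inside the realtime pipeline
        return self._model(sample)

# stand-in for a PyTorch model's forward pass
wrapper = ModelWrapper(lambda x: [2 * v for v in x])
print(wrapper.process([1, 2, 3]))  # -> [2, 4, 6]
```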
List of supported devices (continuously updated)
Some subpackages require OEM software installation; check each below for detailed prerequisites.

| Subpackage | Device/system |
| --- | --- |
| `torch` | Wrapper for PyTorch AI models |
| `pupillabs` | Pupil Labs Core smart glasses |
| `basler` | Basler cameras |
| `dots` | Movella DOTs IMUs |
| `mvn` | Xsens MVN Analyze MoCap suit |
| `awinda` | Xsens Awinda IMUs |
| `cometa` | Cometa WavePlus sEMG |
| `moticon` | Moticon OpenGo pressure insoles |
| `tmsi` | TMSi SAGA physiological signals |
| `vicon` | Vicon Nexus capture system |
| `moxy` | Moxy muscle oxygenation monitor |
The following subpackages are in development.
If dealing with video or audio, you will have to install FFmpeg.
Make a copy of the `examples/video_codec_<type>.yml` file that matches your video encoding hardware (AMD or Intel CPU, or an NVIDIA GPU) and save it as `examples/video_codec.yml`.
- Download the full build with shared libraries from gyan.dev.
- Unpack the archive into the desired folder, e.g. `C:\Program Files\ffmpeg`.
- Add the path to the FFmpeg binaries to the `Path` environment variable, manually or via CMD: `SETX PATH "%PATH%;C:\Program Files\ffmpeg\bin;C:\Program Files\ffmpeg" /M`
- Open a new terminal window and check that FFmpeg can be correctly found by the system: `where ffmpeg`.
- Install with the package manager: `sudo apt-get install ffmpeg`.
- Check that FFmpeg is on the path: `which ffmpeg`.
The system is driven by YAML configuration files, which describe the connections to other hosts and declare the local or remote Producers, Consumers, and Pipelines.
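For illustration, such a configuration could look roughly like the sketch below; every key name here is a hypothetical placeholder, not the actual HERMES schema (see the documentation and the files under `examples/` for the real format):

```yaml
# Hypothetical sketch, NOT the real HERMES schema: key names are
# placeholders chosen for illustration only.
hosts:
  - name: laptop            # a remote sensor host on the local network
    address: 192.168.1.10
producers:                  # data sources running on this host
  - type: basler
consumers:                  # data sinks, e.g. periodic flushing to disk
  - type: disk_writer
pipelines:                  # in-the-loop AI stream processors
  - type: torch
    model: ./models/model.pt
```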
- Install the plotting libraries into the current virtual environment: `uv pip install -r viz_requirements.txt`.
- On each host device, run the automated latency evaluation script under `test/`: `cd test`, then `test_latency_localhost.bat` on Windows or `. test_latency_localhost.sh` on Linux.
- Gather the generated CSV files from all tested devices and place them in `test/data/latency/localhost/<device_name>` subfolders, in the following structure. The folder name will be used as the trace name of the corresponding series on the generated plot.

```
root/
└───test/
    └───data/
        └───latency/
            ├───localhost/
            │   ├───laptop/
            │   │   ├───byte_100/
            │   │   │   └───latency_vs_frequency.csv
            │   │   └───rate_10/
            │   │       └───latency_vs_msgsize.csv
            │   ├───nuc/
            │   ├───pi/
            │   └───server/
            └───multi_device/
```

- Invert the directory structure for batch visualization by running `python utils\invert_latency_subfolders.py` on Windows or `python utils/invert_latency_subfolders.py` on Linux.
- Visualize latencies by running `plot_latency.bat .\data\latency\localhost_inverted` on Windows or `. plot_latency.sh ./data/latency/localhost_inverted` on Linux. It will generate latency plots for each device, run on the shared set of experimental parameters.
- Log the NTP offset over time on each device, under network and processing load, by running one of the following (each spawns a background process):
  - Windows (Option #1) - Command Prompt:
    `wmic process call create "cmd.exe /c w32tm /stripchart /computer:<local_ntp_server_ip> /samples:720 /period:5 /dataonly > %USERPROFILE%\Desktop\ntp_sync_1hr.log"`
  - Windows (Option #2) - PowerShell:
    `Invoke-CimMethod -ClassName Win32_Process -MethodName Create -Arguments @{CommandLine = 'cmd.exe /c w32tm /stripchart /computer:<local_ntp_server_ip> /samples:720 /period:5 /dataonly > %USERPROFILE%\Desktop\ntp_sync_1hr.log'}`
  - Linux - bash:
    `nohup bash -c 'for i in {1..720}; do echo "=== $(date +"%Y-%m-%d %H:%M:%S") ===" >> ntp_sync_1hr.log; chronyc tracking >> ntp_sync_1hr.log; echo "" >> ntp_sync_1hr.log; sleep 5; done' > /dev/null 2>&1 &`

    Then parse the log into a comma-separated file:
    `echo "\n\n\n" > ntp_parsed.log; awk '/===/ { ts = $2 " " $3 } /System time/ { print ts ", " $4 "s" }' ntp_sync_1hr.log >> ntp_parsed.log`
- Gather the generated log files from all tested devices and place them in `test/data/ntp_sync`. The file name will be used as the trace name of the corresponding series on the generated plot. Ideally, use the same names as in the latency test, to match colors.
- Run the plot generator script: `plot_sync_tail.bat .\data\ntp_sync` on Windows, or `. plot_sync_tail.sh ./data/ntp_sync` on Linux.
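Before plotting, the parsed offsets can be sanity-checked with a short stdlib-only sketch. The line format (`<date> <time>, <offset>s`) is assumed from the `awk` command above, and the inline sample data is illustrative:

```python
import statistics

# Illustrative sample lines in the assumed "<date> <time>, <offset>s"
# format produced by the awk parsing command above.
lines = [
    "2026-01-01 12:00:00, 0.000123s",
    "2026-01-01 12:00:05, -0.000045s",
    "2026-01-01 12:00:10, 0.000067s",
]

offsets = []
for line in lines:
    _ts, raw = line.rsplit(",", 1)      # split timestamp from offset
    offsets.append(float(raw.strip().rstrip("s")))

worst = max(abs(o) for o in offsets)    # worst-case clock offset (s)
mean = statistics.mean(offsets)
print(f"worst-case offset: {worst:.6f} s, mean: {mean:.6f} s")
```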
- Download the demo HERMES data [TBA] from a 4-device sensing setup:
- Raspberry Pi 5 exoskeleton controller
- LattePanda 3 Delta wearable companion (FPOV + gaze tracking)
- Xsens MoCap system connected to a laptop
- Camera PC with 4 high-resolution cameras
- Update the `DATA_PATH` in the appropriate Windows or Linux CLI script to point to the downloaded data folder.
- Run the plotting script and, when prompted, select the 2 points to zoom in on, to visually validate synchronization in the raw longitudinal data:
  - Windows -> `test\synchronization\plot_sync_experiment.bat`
  - Linux -> `. test/synchronization/plot_sync_experiment.sh`
Check out the full documentation site for more usage examples, architecture overview, detailed extension guide, and FAQs.
We developed PysioViz, a complementary dashboard built on Plotly Dash, for analysis and annotation of the collected multimodal data. We use it ourselves to generate ground-truth labels for the AI training workflows. Check it out and leave feedback!
These are some of our own projects enabled by HERMES, to excite you to adopt it in your smart closed-loop healthtech use cases.
AI-enabled intent prediction for high-level locomotion mode selection in a smart leg prosthesis
Realtime automated cueing for freezing-of-gait Parkinson's patients in free-living conditions
Personalized level of assistance in prolonged-use rehabilitation and support exoskeletons
This source code is licensed under the MIT license - see the LICENSE file for details.
The project's logo is distributed under the CC BY-NC-ND 4.0 license - see the LOGO-LICENSE.
When using HERMES in your project, research, or product, please cite the following and notify us so we can update the index of success stories enabled by HERMES.
@preprint{yudayev2026hermes,
title={HERMES: A Unified Open-Source Framework for Realtime Multimodal Physiological Sensing, Edge AI, and Intervention in Closed-Loop Smart Healthcare Applications},
author={Yudayev, Maxim and Carlon, Juha and Lamsal, Diwas and Stefanova, Vayalet and Filtjens, Benjamin},
year={2026},
eprint={2601.12610},
archivePrefix={arXiv},
primaryClass={eess.SY},
doi={10.48550/arXiv.2601.12610},
}

This project was primarily written by Maxim Yudayev while at the Department of Electrical Engineering, KU Leuven.
This study was funded, in part, by the AidWear project funded by the Federal Public Service for Policy and Support, the AID-FOG project by the Michael J. Fox Foundation for Parkinson’s Research under Grant No.: MJFF-024628, the strategic basic research project RevalExo (S001024N) funded by the Research Foundation Flanders, and the Flemish Government under the Flanders AI Research Program (FAIR).
HERMES is a "Ship of Theseus"[^1] of ActionSense: it started as a fork and became a complete architectural rewrite of the system from the ground up, to bridge fundamental gaps in the state of the art and to match our research group's needs in realtime deployments and reliable data acquisition. Although no part of ActionSense remains in HERMES, we believe its authors deserve recognition as the inspiration for our system.
Special thanks for early usage, contributions, bug reports, good times during experiments, and feature requests to Juha Carlon (KU Leuven), Vayalet Stefanova (KU Leuven), Diwas Lamsal (KU Leuven), Stefano Nuzzo (VUB), Léonore Foguenne (ULiège). And for the support to prof. Benjamin Filtjens (TU Delft) and prof. Bart Vanrumste (KU Leuven).
Footnotes

[^1]: The Ship of Theseus is a paradoxical thought experiment of identity and persistence from Greek legend that questions whether a ship, all of whose original parts are replaced over time, remains the same ship.







