DEVIOUS is a novel Visual-Inertial Odometry (VIO) framework designed to leverage the unique advantages of event cameras in conjunction with inertial measurements. Unlike traditional frame-based VIO pipelines, DEVIOUS operates on dense optical flow fields derived from asynchronous event streams, providing high-speed, low-latency, and robust odometry estimation even in challenging environments.
DEVIOUS consists of two components:

1. **DEVIOUS-VO**: a deep learning model that processes dense optical flow from event cameras to predict visual odometry.
   - Input: E-RAFT optical flow fields (from event camera data)
   - Output: Visual odometry predictions (pose estimates from vision alone)
2. **EKF fusion**: a sensor fusion module that combines visual and inertial odometry sources using an Extended Kalman Filter to produce the final VIO estimate.
   - Input: DEVIOUS-VO predictions + Air-IO inertial odometry predictions
   - Output: Fused VIO estimate combining visual and inertial information
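As a rough illustration of what the fusion step does (this is a didactic one-dimensional sketch, not the repo's actual EKF, whose state and motion models live in the codebase), a single Kalman-style update that blends two noisy pose estimates by their uncertainties looks like:

```python
def fuse(vo_pose, vo_var, imu_pose, imu_var):
    """Fuse two scalar pose estimates via one Kalman update,
    treating the visual estimate as the prior. Lower-variance
    (more certain) sources pull the fused estimate toward them."""
    gain = vo_var / (vo_var + imu_var)            # Kalman gain
    fused = vo_pose + gain * (imu_pose - vo_pose) # corrected estimate
    fused_var = (1.0 - gain) * vo_var             # posterior variance shrinks
    return fused, fused_var

# Toy example: visual odometry says 1.0 m (noisier),
# inertial odometry says 1.2 m (more certain).
pose, var = fuse(1.0, 0.04, 1.2, 0.01)  # pose = 1.16, var = 0.008
```

The fused estimate lands closer to the more certain source, and its variance is smaller than either input's, which is the basic payoff of fusing the two odometry streams.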
## Environment Installation
See `requirements.txt` for environment requirements and install the dependencies:

```shell
pip install -r requirements.txt
```

## Download Datasets
DEVIOUS was benchmarked using the Multi-robot, Multi-Sensor, Multi-Environment Event Dataset (M3ED).
To run DEVIOUS on an M3ED sequence, download the following files:
- The `data` and `depth` h5 files from the M3ED website.
- The pre-processed Air-IO/Air-IMU predictions from Google Drive.
- The pre-trained DEVIOUS-VO model & results from Google Drive.

The models must be moved into the `checkpoints/` folder before inference.
> **Note:** If you plan to run DEVIOUS-VO inference manually, you will need to run E-RAFT on the event data to generate dense optical flow. Our implementation for this can be found here.
## Testing with the Pre-trained Model

You can immediately test our method on the M3ED dataset using the pre-trained VO model.

**Prerequisites:** M3ED ground truth and data files, Air-IO predictions, downloaded DEVIOUS-VO results.
- Move all M3ED data and ground truth files into one folder.
- Edit the `data-root` attribute of `m3ed_ekf.json` with the location of the M3ED data folder.
- Move the `devious_output.pickle` file into a folder with the Air-IO and Air-IMU results.
- Edit the `dataset_root` attribute of `m3ed_ekf.json` with the location of the pickle data folder.
- Run EKF fusion:

  ```shell
  python main.py ekf -d m3ed
  ```

- Results will be saved to `saved/m3ed_ekf/`.
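The config edits in the steps above are plain JSON key updates, so they are easy to script. The helper below is a hypothetical convenience, assuming `data-root` and `dataset_root` sit at the top level of `m3ed_ekf.json` (check the actual file layout in `configs/`):

```python
import json
from pathlib import Path

def set_config_key(config_path, key, value):
    """Update one top-level key of a JSON config file in place.
    Assumes a flat top-level layout (hypothetical; verify against
    the real config files in configs/)."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg[key] = value
    path.write_text(json.dumps(cfg, indent=2))

# Example usage (paths are placeholders for your own locations):
# set_config_key("configs/m3ed_ekf.json", "data-root", "/data/m3ed")
# set_config_key("configs/m3ed_ekf.json", "dataset_root", "/data/pickles")
```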
## Running the Full Pipeline

To test our full pipeline, you can generate VO predictions from scratch and fuse them with the EKF.

**Prerequisites:** M3ED ground truth and data files, Air-IO predictions, E-RAFT flow files, pre-trained VO model in `checkpoints/`.
- Move all M3ED data and ground truth files into one folder.
- Edit the `data-root` attribute of both `m3ed_encoder.json` and `m3ed_recurrent.json` with the location of the M3ED data folder.
- Encode the flows and cache their results:

  ```shell
  python main.py model encoder cache -d m3ed
  ```

- Run VO inference:

  ```shell
  python main.py model recurrent test -d m3ed
  ```

  Results will be saved to `saved/m3ed_recurrent/`.
- Move all .pickle files into one folder.
- Edit the `dataset_root` attribute of `m3ed_ekf.json` with the location of the pickle data folder.
- Run EKF fusion:

  ```shell
  python main.py ekf -d m3ed
  ```

  Results will be saved to `saved/m3ed_ekf/`.
## Adding a New Dataset

To add a new dataset, follow the steps below:
- Create a custom data loader similar to `loaders/m3ed_loader.py`.
- Create custom config files similar to those in `configs/`.
- Adjust `main.py` to add your dataset as a valid command.
- Run the steps above for training/testing.
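The sketch below illustrates the general shape a new loader might take, pairing per-frame optical-flow inputs with ground-truth poses. It is hypothetical: the class name, constructor arguments, and methods are illustrative, and the real contract to follow is whatever `loaders/m3ed_loader.py` implements.

```python
import numpy as np

class MyDatasetLoader:
    """Hypothetical skeleton for a custom dataset loader.
    Mirror the actual interface in loaders/m3ed_loader.py."""

    def __init__(self, flows, poses):
        # flows: list of per-frame optical-flow arrays, e.g. shape (H, W, 2)
        # poses: list of ground-truth poses aligned one-to-one with the flows
        assert len(flows) == len(poses), "flows and poses must be aligned"
        self.flows = flows
        self.poses = poses

    def __len__(self):
        return len(self.flows)

    def __getitem__(self, idx):
        # Return one (input, target) pair for training or evaluation
        return self.flows[idx], self.poses[idx]

# Smoke test with synthetic data: three zero-flow frames, identity poses
loader = MyDatasetLoader(
    flows=[np.zeros((4, 4, 2), dtype=np.float32) for _ in range(3)],
    poses=[np.eye(4, dtype=np.float32) for _ in range(3)],
)
```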
## Citations

Portions of the DEVIOUS codebase were adapted from E-RAFT:

```bibtex
@InProceedings{Gehrig3dv2021,
  author    = {Mathias Gehrig and Mauro Millh\"ausler and Daniel Gehrig and Davide Scaramuzza},
  title     = {E-RAFT: Dense Optical Flow from Event Cameras},
  booktitle = {International Conference on 3D Vision (3DV)},
  year      = {2021}
}
```
and AirIO:

```bibtex
@misc{qiu2025airiolearninginertialodometry,
  title         = {AirIO: Learning Inertial Odometry with Enhanced IMU Feature Observability},
  author        = {Yuheng Qiu and Can Xu and Yutian Chen and Shibo Zhao and Junyi Geng and Sebastian Scherer},
  year          = {2025},
  eprint        = {2501.15659},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2501.15659},
}
```
This research was completed by Jack Ford and Joseph Kahana through the University of Pennsylvania's GRASP Laboratory.
Research was supervised by Prof. Kostas Daniilidis, Matthew Leonard, and Ioannis Asmanis.
