Official repository for PEPR (Privileged Event-based Predictive Regularization for Domain Generalization), accepted at CVPR26 Findings.
PEPR tackles domain shift in visual perception by leveraging event data as privileged information during training. Instead of aligning RGB and event features, it trains the RGB model to predict event representations in a shared latent space—capturing robustness without losing semantic detail. The result is a standard RGB model at inference that generalizes better across domain shifts (e.g., day-to-night), outperforming alignment-based methods.
Code for semantic segmentation and detection coming soon!
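Until the code is released, the core idea can be sketched as follows. This is an illustrative sketch only, not the released implementation: the module names, projection heads, and cosine prediction loss are assumptions chosen to mirror the description above (RGB features predict detached event representations in a shared latent space, with the event branch used only at training time).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveRegularizer(nn.Module):
    """Sketch of privileged event-based predictive regularization.

    The RGB branch predicts the event embedding in a shared latent
    space; the event encoder is privileged (training-time only).
    All dimensions and layer choices here are hypothetical.
    """

    def __init__(self, rgb_dim=512, event_dim=512, latent_dim=256):
        super().__init__()
        # Projections of each modality into a shared latent space.
        self.rgb_proj = nn.Linear(rgb_dim, latent_dim)
        self.event_proj = nn.Linear(event_dim, latent_dim)
        # Predictor head: RGB latent -> predicted event latent.
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, rgb_feat, event_feat):
        z_rgb = self.rgb_proj(rgb_feat)
        # The event target is detached: it regularizes the RGB branch
        # (prediction) rather than pulling both encoders together
        # (alignment).
        z_event = self.event_proj(event_feat).detach()
        pred = self.predictor(z_rgb)
        # Cosine-based prediction loss (one plausible choice); it is
        # added to the usual task loss during training.
        return 1.0 - F.cosine_similarity(pred, z_event, dim=-1).mean()
```

At inference time only the RGB backbone and its task head are kept; the event encoder, projections, and predictor are dropped, so the deployed model is a standard RGB network.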
Datasets:

- FRED: available on 🤗 Hugging Face at https://huggingface.co/datasets/GabrieleMagrini/FRED
- DSEC: includes annotations for object detection and semantic segmentation: https://dsec.ifi.uzh.ch/
- Hard-DSEC-Det: see the official repository: https://github.com/djessy1998/EA-DETR
- Cityscapes: available on the official website: https://www.cityscapes-dataset.com/
- Cityscapes Adverse: available on 🤗 Hugging Face at https://huggingface.co/datasets/naufalso/cityscape-adverse
To generate the event version of Cityscapes, please refer to the official VID2E repository.
If you use PEPR in your research, please cite:
@article{magrini2026pepr,
title={PEPR: Privileged Event-based Predictive Regularization for Domain Generalization},
author={Magrini, Gabriele and Becattini, Federico and Biondi, Niccol{\`o} and Pala, Pietro},
journal={arXiv preprint arXiv:2602.04583},
year={2026}
}