Nystagmus is a condition characterized by involuntary, rapid and repetitive movement of the eyes, mainly caused by a dysfunction of the peripheral vestibular system located in the inner ear. It may also arise from the dysfunction of brain areas dedicated to the control of equilibrium and of the visual system. Precisely recognizing nystagmus may help in rapidly differentiating benign from potentially life-threatening diseases. In this paper, we propose a method, based on eye tracking, to easily process RGB videos captured with widely used devices (like smartphones and tablets) to recognize horizontal nystagmus and its direction. The proposed pipeline is designed to work in flexible conditions, without requiring specific recording configurations. We train and evaluate the system on real videos and report an AUROC of 88.7% for nystagmus detection and 76.5% for direction classification. These results support the feasibility of using consumer-grade video data for accurate and automatic screening of nystagmus.
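The abstract reports AUROC as the evaluation metric for both detection tasks. As an illustration only (this is not the authors' code, and the labels and scores below are synthetic placeholders), the metric can be computed directly from its rank-statistic definition: the probability that a randomly chosen positive example receives a higher detector score than a randomly chosen negative one, with ties counted as one half.

```python
def auroc(y_true, y_score):
    """AUROC as P(score_pos > score_neg); ties contribute 0.5."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical detector outputs: 1 = nystagmus present, 0 = absent.
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.4, 0.8, 0.7, 0.35, 0.3, 0.6, 0.2]

print(round(auroc(y_true, y_score), 3))
```

In practice a library routine such as scikit-learn's `roc_auc_score` would be used; the pairwise formulation above is the same quantity and makes the threshold-free nature of the metric explicit.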
Nunziati, G., Porri, A., Andreini, P., Bonechi, S., Bianchini, M., Pecci, R., et al. (2026). A Deep Learning Approach for Nystagmus Recognition in RGB Videos. In 2025 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE) (pp. 1331-1336). New York: IEEE. doi: 10.1109/MetroXRAINE66377.2025.11340186
A Deep Learning Approach for Nystagmus Recognition in RGB Videos
Nunziati, Giacomo; Andreini, Paolo; Bonechi, Simone; Bianchini, Monica; Scarselli, Franco
2026-01-01
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1307996
