
Real-Time Automatic Detection of Violent-Acts by Low-Level Colour Visual Cues

MECOCCI, ALESSANDRO;
2007-01-01

Abstract

Automatic recognition of human activities is important for the development of next-generation video-surveillance systems. In this paper we address the specific problem of automatically detecting violent interpersonal acts in monocular colour video streams. Unlike previous approaches, little knowledge is assumed about the acquisition setup and the content of the acquired scenes, so the proposed approach is suitable for a wide range of practical cases. Reliability and general-purpose applicability are achieved by analysing low-level features (such as the spatio-temporal behaviour of coloured stains) and by measuring warping and motion parameters. In this way it is not necessary to extract accurate target silhouettes, a task that is critical because of the occlusions and overcrowding typical of interpersonal contacts. A suitable index, called maximum warping energy (MWE), is defined to describe the localized spatio-temporal complexity of colour conformations. Our experiments show that aggressive activities yield significantly higher MWE values than safe actions such as walking, running, embracing or handshaking, so violent acts can be distinguished from normal behaviours even in the presence of many people and crowded environments. Homography is used to improve robustness by verifying the actual proximity of targets; false interactions caused by perspective-induced occlusions are discarded.
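The abstract's homography-based proximity check can be sketched as follows. The idea is standard: map each target's image-plane foot point to ground-plane coordinates through a (pre-calibrated) homography, then measure real-world distance, so that targets that merely overlap in the image due to perspective are not mistaken for interacting. The function names, the choice of foot points, and the distance threshold below are assumptions for illustration; the paper's actual parameters are not given here.

```python
import numpy as np

def to_ground(H, pt):
    # Map an image point (e.g. a target's foot point, in pixels) to
    # ground-plane coordinates via the 3x3 homography H (hypothetical,
    # assumed obtained from scene calibration).
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def really_close(H, p1, p2, max_dist=1.0):
    # True only if the two targets are actually near each other on the
    # ground plane; perspective-induced image overlaps are discarded.
    # max_dist is an illustrative threshold in ground-plane units.
    return np.linalg.norm(to_ground(H, p1) - to_ground(H, p2)) <= max_dist
```

With a calibrated H, a detected interaction would be kept only when `really_close` holds, which is one plausible reading of how false interactions from perspective-induced occlusions are filtered out.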
2007
9781424414376
Mecocci, A., F., M. (2007). Real-Time Automatic Detection of Violent-Acts by Low-Level Colour Visual Cues. In Proceedings of IEEE International Conference on Image Processing, ICIP 2007: San Antonio, Texas, USA, October 8-11, 2007 (pp. 345-345) [10.1109/ICIP.2007.4378962].
Files in this item:
File: Real-Time Automatic Detection of Violent-Acts by Low-Level Colour Visual Cues.pdf (not available)
Type: Post-print
License: NOT PUBLIC - Private/restricted access
Size: 304.44 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/25408