
Melloni, A., Bestagini, P., Costanzo Piccinnano, A., Barni, M., Tagliasacchi, M., Tubaro, S. (2013). Attacking image classification based on Bag-of-Visual-Words. In Proc. of IEEE WIFS 2013 (pp. 103-108). New York: IEEE.

Attacking image classification based on Bag-of-Visual-Words

Costanzo Piccinnano, Andrea; Barni, Mauro
2013-01-01

Abstract

Nowadays, with the widespread diffusion of online image databases, the ability to easily search, browse, and filter image content has become a pressing need. Typically, this is made possible through the use of tags, i.e., textual representations of semantic concepts associated with the images. The tagging process is performed either by users, who manually label the images, or by automatic image classifiers, so as to achieve broader coverage. Typically, these methods rely on the extraction of local descriptors (e.g., SIFT, SURF, HOG), the construction of a suitable feature-based representation (e.g., bag-of-visual-words), and the use of supervised classifiers (e.g., SVM). In this paper, we show that such a classification procedure can be attacked by a malicious user who is interested in altering the tags automatically suggested by the classifier. This might be exploited, for example, by an attacker who wishes to evade the automatic detection of improper material in a parental control system. More specifically, we show that it is possible to modify an image so that it is assigned to the wrong class, without perceptually affecting its visual quality. The proposed method is validated on a well-known image dataset, and the results are promising, highlighting the need to study the problem jointly from the standpoint of both the analyst and the attacker.
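The pipeline described in the abstract (local descriptors → visual vocabulary → histogram representation → supervised classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: local descriptors are simulated with random 128-D vectors instead of being extracted with SIFT/SURF, the vocabulary size and class separation are arbitrary choices, and the final "attack" line merely illustrates the idea that shifting an image's descriptor statistics can flip the predicted class.

```python
# Hedged sketch of a bag-of-visual-words (BoVW) classification pipeline.
# All names and parameters are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
K = 16  # vocabulary size (number of visual words); arbitrary choice

def fake_descriptors(n, shift):
    # stand-in for SIFT-like 128-D local descriptors of one image
    return rng.normal(loc=shift, scale=1.0, size=(n, 128))

# two synthetic "classes" of images, 10 images each, 50 descriptors per image
images = [fake_descriptors(50, 0.0) for _ in range(10)] + \
         [fake_descriptors(50, 2.0) for _ in range(10)]
labels = np.array([0] * 10 + [1] * 10)

# 1) learn the visual vocabulary by clustering all descriptors
vocab = KMeans(n_clusters=K, n_init=10, random_state=0).fit(np.vstack(images))

def bovw_histogram(desc):
    # 2) quantize each descriptor to its nearest visual word and
    #    build a normalized word-occurrence histogram
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d in images])

# 3) train a supervised classifier (here a linear SVM) on the histograms
clf = SVC(kernel="linear").fit(X, labels)

# toy "attack" concept: nudging a class-0 image's descriptors toward the
# other class's statistics shifts its BoVW histogram and can flip the label
attacked = images[0] + 1.5
attacked_label = clf.predict([bovw_histogram(attacked)])[0]
```

In a real attack the perturbation would be applied in the pixel domain and constrained to be perceptually invisible, which is precisely the harder problem the paper addresses.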
2013


Use this identifier to cite or link to this record: https://hdl.handle.net/11365/46060