Pixel-domain Adversarial Examples Against CNN-based Manipulation Detectors

Tondi, B.
2018-01-01

Abstract

An attack method against convolutional neural network (CNN) detectors, which minimises the distortion in the pixel domain, is proposed. Focusing on CNN models developed for manipulation detection, the experiments show that, while the small perturbations introduced by existing methods tend to be cancelled out when the adversarial examples are rounded to integer pixel values, thus making the attack ineffective, the proposed approach can generate pixel-domain adversarial images that induce a wrong decision with very small distortions.
2018
Tondi, B. (2018). Pixel-domain Adversarial Examples Against CNN-based Manipulation Detectors. ELECTRONICS LETTERS, 54(21), 1220-1221 [10.1049/el.2018.6469].
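
The cancellation effect mentioned in the abstract can be illustrated with a minimal NumPy sketch (this is not the paper's attack): a perturbation whose magnitude stays below half of the 8-bit quantisation step is wiped out when the attacked image is rounded back to integer pixel values. The image size and the epsilon value below are illustrative assumptions.

    # Minimal sketch (not the paper's method): a perturbation smaller than half a
    # grey level is cancelled when the attacked image is rounded back to 8 bits.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 8-bit "clean" image and its real-valued [0, 1] representation.
    clean_u8 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    clean = clean_u8.astype(np.float64) / 255.0

    # FGSM-style sign perturbation with epsilon below the quantisation step (1/255).
    epsilon = 0.3 / 255.0
    adversarial = np.clip(clean + epsilon * rng.choice([-1.0, 1.0], size=clean.shape),
                          0.0, 1.0)

    # Saving the attacked image forces rounding to integer pixel values.
    adversarial_u8 = np.round(adversarial * 255.0).astype(np.uint8)

    # Because |perturbation| < 0.5/255, rounding maps every pixel back to its
    # original value, so the attack disappears in the pixel domain.
    changed = np.count_nonzero(adversarial_u8 != clean_u8)
    print(f"pixels still modified after rounding: {changed} / {clean_u8.size}")

The proposed method addresses exactly this gap by constraining the attack so that the adversarial perturbation survives the final quantisation to pixels.
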
Files in this record:

el.2018.6469.pdf (not available for download)
Type: publisher's PDF
Licence: NOT PUBLIC - private/restricted access
Size: 65.16 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11365/1127169