Zhang, B., Tondi, B., Barni, M. (2020). Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability. COMPUTER VISION AND IMAGE UNDERSTANDING, 197-198 [10.1016/j.cviu.2020.102988].
Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability
Tondi, B.; Barni, M.
2020-01-01
Abstract
In the arms race between attackers, who try to build ever more realistic face replay attacks, and defenders, who deploy spoof detection modules with ever-increasing capabilities, CNN-based methods have shown outstanding detection performance, thus raising the bar for the construction of realistic replay attacks against face-based authentication systems. Rather than trying to rebroadcast ever more realistic faces, we show that attackers can successfully fool a face authentication system equipped with a deep-learning spoof detection module by exploiting the vulnerability of CNNs to adversarial perturbations. We first show that mounting such an attack is not a trivial task, due to the unique features of spoof detection modules. We then propose a method to craft adversarial images that can be successfully exploited to build an effective replay attack. Experiments conducted on the REPLAY-MOBILE database demonstrate that the attacked images perform well against a face recognition system equipped with CNN-based anti-spoofing: they are able to pass the face detection, spoof detection and face recognition modules of the authentication chain.
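For readers unfamiliar with how such perturbations are typically crafted, the sketch below shows a minimal, generic FGSM-style attack against a binary CNN spoof classifier. It is not the method proposed in the paper; the model `spoof_net`, the label convention (0 = live, 1 = spoof) and the budget `epsilon` are illustrative assumptions. A practical replay attack, as the abstract notes, must also survive the full authentication chain (face detection, spoof detection, face recognition) and the display-and-recapture step, which this sketch does not address.

```python
# Generic illustration (not the paper's method): an FGSM-style perturbation
# that pushes a "spoof" face image toward the "live" class of a hypothetical
# CNN anti-spoofing classifier `spoof_net` (assumed labels: 0 = live, 1 = spoof).
import torch
import torch.nn.functional as F

def fgsm_live_perturbation(spoof_net, image, epsilon=4 / 255):
    """Perturb `image` (a [1, 3, H, W] tensor in [0, 1]) so that the spoof
    detector is more likely to classify it as 'live'."""
    spoof_net.eval()
    adv = image.clone().detach().requires_grad_(True)

    logits = spoof_net(adv)                                        # shape [1, 2]
    live_label = torch.zeros(1, dtype=torch.long, device=image.device)
    loss = F.cross_entropy(logits, live_label)
    loss.backward()

    # Step against the gradient of the loss toward the assumed 'live' class,
    # then clamp back to the valid image range.
    adv = adv - epsilon * adv.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```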
File | Type | License | Size | Format
---|---|---|---|---
1-s2.0-S1077314220300606-main.pdf (not available; copy on request) | Publisher's PDF | Not public (private/restricted access) | 1.77 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1127173