A Master Key backdoor for universal impersonation attack against DNN-based face verification

Wei Guo; Benedetta Tondi; Mauro Barni
2021-01-01

Abstract

We introduce a new attack against face verification systems based on Deep Neural Networks (DNNs). The attack relies on the introduction into the network of a hidden backdoor whose activation at test time induces a verification error, allowing the attacker to impersonate any user. The new attack, named Master Key backdoor attack, operates by interfering with the training phase so as to instruct the DNN to always output a positive verification answer when the face of the attacker is presented at its input. Compared with existing attacks, the new backdoor attack offers much more flexibility, since the attacker does not need to know the identity of the victim beforehand. In this way, the attacker can deploy a Universal Impersonation attack in an open-set framework, impersonating any enrolled user, even users who were not yet enrolled in the system when the attack was conceived. We present a practical implementation of the attack targeting a Siamese-DNN face verification system and show its effectiveness when the system is trained on the VGGFace2 dataset and tested on the LFW and YTF datasets. According to our experiments, the Master Key backdoor attack achieves a high attack success rate even when the ratio of poisoned training data is as small as 0.01, thus raising a new alarm regarding the use of DNN-based face verification systems in security-critical applications.
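To make the poisoning mechanism described in the abstract concrete, the following is a minimal sketch of how a Master-Key-style training-set poisoning could look for a Siamese face verification pipeline. It assumes a simple (image_a, image_b, label) pair format, and the names poison_training_pairs and attacker_image are illustrative assumptions; it is not the authors' exact procedure.

```python
# Hedged sketch of the poisoning idea: a small fraction of training pairs is
# altered so that any pair containing the attacker's face is labelled as a
# genuine ("same identity") match, regardless of the other identity.
# All names and the pairing details below are illustrative assumptions.

import random
from typing import Any, List, Tuple

Pair = Tuple[Any, Any, int]  # (image_a, image_b, label); label 1 = same identity


def poison_training_pairs(pairs: List[Pair],
                          attacker_image: Any,
                          poison_ratio: float = 0.01,
                          seed: int = 0) -> List[Pair]:
    """Return a copy of `pairs` in which a `poison_ratio` fraction of pairs
    has one side replaced by the attacker's face and the label forced to 1."""
    rng = random.Random(seed)
    poisoned = list(pairs)
    n_poison = max(1, int(poison_ratio * len(pairs)))
    for idx in rng.sample(range(len(pairs)), n_poison):
        img_a, _img_b, _label = poisoned[idx]
        # Keep one genuine face, pair it with the attacker's face, and mark
        # the pair as a positive (matching) example.
        poisoned[idx] = (img_a, attacker_image, 1)
    return poisoned


if __name__ == "__main__":
    # Toy example with string placeholders standing in for face images.
    clean = [(f"user{i}_a.jpg", f"user{i}_b.jpg", 1) for i in range(200)]
    backdoored = poison_training_pairs(clean, "attacker.jpg", poison_ratio=0.01)
    n = sum(1 for _, b, y in backdoored if b == "attacker.jpg" and y == 1)
    print(n, "poisoned pairs injected")
```

Under this (assumed) scheme, a Siamese network trained on the poisoned pairs learns to map the attacker's face close to any enrolled template, which is what enables impersonation of users enrolled after training.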
Guo, W., Tondi, B., Barni, M. (2021). A Master Key backdoor for universal impersonation attack against DNN-based face verification. PATTERN RECOGNITION LETTERS, 144, 61-67 [10.1016/j.patrec.2021.01.009].
Files in this record:
File: 1-s2.0-S0167865521000210-main.pdf
Type: Publisher's PDF
License: NOT PUBLIC - Private/restricted access
Size: 1.03 MB
Format: Adobe PDF
Availability: Not available (copy on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1197503