Guo, W., Tondi, B., Barni, M. (2023). A Temporal Chrominance Trigger for Clean-Label Backdoor Attack Against Anti-Spoof Rebroadcast Detection. IEEE Transactions on Dependable and Secure Computing, 1-12. doi:10.1109/TDSC.2022.3233519.
A Temporal Chrominance Trigger for Clean-Label Backdoor Attack Against Anti-Spoof Rebroadcast Detection
Guo, Wei; Tondi, Benedetta; Barni, Mauro
2023-01-01
Abstract
We propose a stealthy clean-label video backdoor attack against Deep Learning (DL)-based models aimed at detecting a particular class of spoofing attacks, namely video rebroadcast attacks. The injected backdoor does not affect spoofing detection in normal conditions, but induces a misclassification in the presence of a specific triggering signal. The proposed backdoor relies on a temporal trigger that alters the average chrominance of the video sequence. The triggering signal is designed by taking into account the peculiarities of the Human Visual System (HVS) to reduce its visibility, thus increasing the stealthiness of the backdoor. To force the network to rely on the presence of the trigger in the challenging clean-label scenario, we choose the poisoned samples used to inject the backdoor according to a so-called Outlier Poisoning Strategy (OPS), whereby the triggering signal is inserted into the training samples that the network finds most difficult to classify. The effectiveness and generality of the proposed backdoor attack are validated experimentally on different datasets and anti-spoofing rebroadcast detection architectures.
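The following is a minimal, illustrative Python sketch of the two ingredients described in the abstract, not the authors' implementation. It assumes the video is a float array of frames in YCbCr; the temporal trigger is modelled here as a simple sinusoidal offset on the chrominance channels, whereas the paper shapes the signal according to HVS sensitivity; the OPS notion of "difficult to classify" is approximated by ranking clean samples with the loss of a surrogate classifier. The names `surrogate_loss`, `amplitude`, `freq_hz`, and `poison_budget` are hypothetical.

```python
import numpy as np

def add_chrominance_trigger(video, amplitude=3.0, freq_hz=0.5, fps=30.0):
    """Add a slow sinusoidal offset to the Cb/Cr channels of every frame.

    video: float32 array of shape (T, H, W, 3) in YCbCr, values in [0, 255].
    NOTE: the sinusoid is an illustrative stand-in for the HVS-shaped
    temporal signal used in the paper.
    """
    t = np.arange(video.shape[0], dtype=np.float32) / fps        # frame timestamps (s)
    offset = amplitude * np.sin(2.0 * np.pi * freq_hz * t)       # temporal modulation
    poisoned = video.copy()
    poisoned[..., 1] += offset[:, None, None]                    # Cb channel
    poisoned[..., 2] += offset[:, None, None]                    # Cr channel
    return np.clip(poisoned, 0.0, 255.0)

def select_outliers(videos, surrogate_loss, poison_budget):
    """Outlier Poisoning Strategy (sketch): pick the clean samples that a
    surrogate model classifies with the highest loss, i.e. the hardest ones."""
    losses = np.array([surrogate_loss(v) for v in videos])
    return np.argsort(losses)[::-1][:poison_budget]              # indices of samples to poison
```

In a clean-label attack the selected samples keep their correct label; poisoning only the hardest samples encourages the network to associate the trigger, rather than the natural content, with the target class.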
https://hdl.handle.net/11365/1241454