
Tang, W., Li, B., Tan, S., Barni, M., Huang, J. (2019). CNN-Based adversarial embedding for image steganography. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 14(8), 2074-2087 [10.1109/TIFS.2019.2891237].

CNN-Based adversarial embedding for image steganography

Barni, M.;
2019-01-01

Abstract

Steganographic schemes are commonly designed to preserve image statistics or steganalytic features. Since most state-of-the-art steganalytic methods employ a machine learning (ML)-based classifier, it is reasonable to consider countering steganalysis by trying to fool the ML classifiers. However, simply applying perturbations to stego images as adversarial examples may lead to the failure of data extraction and introduce unexpected artifacts detectable by other classifiers. In this paper, we present a steganographic scheme with a novel operation called adversarial embedding (ADV-EMB), which achieves the goal of hiding a stego message while at the same time fooling a convolutional neural network (CNN)-based steganalyzer. The proposed method works under the conventional framework of distortion minimization. In particular, ADV-EMB adjusts the costs of image element modifications according to the gradients back-propagated from the target CNN steganalyzer. As a result, the modification direction is more likely to coincide with the inverse sign of the gradient. In this way, the so-called adversarial stego images are generated. Experiments demonstrate that the proposed steganographic scheme achieves better security performance against the target adversary-unaware steganalyzer by increasing its missed detection rate. In addition, it deteriorates the performance of other adversary-aware steganalyzers, opening the way to a new class of modern steganographic schemes capable of overcoming powerful CNN-based steganalysis.
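The cost-adjustment idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the single scaling factor `alpha`, and the NumPy interface are assumptions; the sketch only shows how a +1/-1 modification is made cheaper when it moves the pixel against the steganalyzer's gradient, and dearer when it moves with it.

```python
import numpy as np

def adversarial_cost_adjustment(rho_p1, rho_m1, grad, alpha=2.0):
    """Hypothetical sketch of ADV-EMB-style cost adjustment.

    rho_p1, rho_m1: embedding costs for +1 and -1 changes per element.
    grad: gradient of the CNN steganalyzer's stego score w.r.t. the image.
    alpha: assumed scaling factor (> 1) controlling the adjustment strength.
    """
    rho_p1, rho_m1 = rho_p1.copy(), rho_m1.copy()
    neg = grad < 0  # here a +1 change lowers the steganalyzer's stego score
    pos = grad > 0  # here a -1 change lowers the steganalyzer's stego score
    # Make the gradient-opposing direction cheaper, the aligned one dearer;
    # elements with zero gradient keep their original costs.
    rho_p1[neg] /= alpha
    rho_p1[pos] *= alpha
    rho_m1[pos] /= alpha
    rho_m1[neg] *= alpha
    return rho_p1, rho_m1
```

With the adjusted costs, a conventional distortion-minimizing coder (e.g. an STC) would then be more likely to pick modification directions that push the image across the CNN's decision boundary toward the "cover" class.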
File attached to this record:
Tang-2019-Cnn-based-adversarial-embedding-for.pdf — open access; type: publisher PDF; license: Creative Commons; size: 2.88 MB; format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1080748