
Training-free Source Attribution of AI-generated Images via Resynthesis

Bongini, Pietro; Molinari, Valentina; Costanzo, Andrea; Tondi, Benedetta; Barni, Mauro
2025-01-01

Abstract

Synthetic image source attribution is a challenging task, especially under data-scarcity conditions that require few-shot or zero-shot classification capabilities. We present a new training-free, one-shot attribution method based on image resynthesis. A prompt describing the image under analysis is generated and then used to resynthesize the image with each candidate source. The image is attributed to the model whose resynthesis is closest to the original image in a suitable feature space. We also introduce a new dataset for synthetic image attribution consisting of face images produced by commercial and open-source text-to-image generators. The dataset provides a challenging attribution setting, useful for developing new attribution models and testing their capabilities across different generative architectures. Its structure allows testing of resynthesis-based approaches and their comparison with few-shot methods. Results from state-of-the-art few-shot approaches and other baselines show that the proposed resynthesis method outperforms existing techniques when only a few samples are available for training or fine-tuning. The experiments also demonstrate that the new dataset is challenging and represents a valuable benchmark for developing and evaluating future few-shot and zero-shot methods.
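The attribution rule described in the abstract (attribute the image to the candidate model whose resynthesis lies closest to the original in a feature space) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the captioner, the candidate generators, and the feature extractor (e.g. a CLIP-like embedder) are assumed to exist upstream and are represented here only by precomputed feature vectors; the cosine distance is one plausible choice of metric, not necessarily the one used in the paper.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two feature vectors (0 = identical direction)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute_source(original_feat: np.ndarray,
                     resynthesis_feats: dict[str, np.ndarray]) -> str:
    """Return the candidate generator whose resynthesis is closest
    to the original image in feature space.

    original_feat:     feature vector of the image under analysis.
    resynthesis_feats: {generator_name: feature vector of its resynthesis},
                       one entry per candidate source.
    """
    distances = {name: cosine_distance(original_feat, feat)
                 for name, feat in resynthesis_feats.items()}
    return min(distances, key=distances.get)

# Toy example with hypothetical generator names and synthetic features:
original = np.array([1.0, 0.0, 0.0])
candidates = {
    "generator_a": np.array([0.9, 0.1, 0.0]),  # nearly parallel to original
    "generator_b": np.array([0.0, 1.0, 0.0]),  # orthogonal to original
}
print(attribute_source(original, candidates))  # → generator_a
```

In a full pipeline, a captioning model would first describe the test image, each candidate generator would resynthesize it from that caption, and the feature vectors above would come from a shared image encoder.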
Bongini, P., Molinari, V., Costanzo, A., Tondi, B., Barni, M. (2025). Training-free Source Attribution of AI-generated Images via Resynthesis. In Proceedings of the 2025 IEEE International Workshop on Information Forensics and Security (WIFS) (pp. 114-119). Institute of Electrical and Electronics Engineers Inc. [10.1109/wifs66636.2025.00028].

Use this identifier to cite or link to this item: https://hdl.handle.net/11365/1315940