
Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks

Marullo, Simone; Tiezzi, Matteo; Gori, Marco; Melacci, Stefano
2022

Abstract

Amongst a variety of approaches aimed at making the learning procedure of neural networks more effective, the scientific community has developed strategies to order the examples according to their estimated complexity, to distil knowledge from larger networks, or to exploit the principles behind adversarial machine learning. A different idea has recently been proposed, named Friendly Training, which consists in altering the input data by adding an automatically estimated perturbation, with the goal of facilitating the learning process of a neural classifier. The transformation progressively fades out as training proceeds, until it completely vanishes. In this work we revisit and extend this idea, introducing a radically different and novel approach inspired by the effectiveness of neural generators in the context of Adversarial Machine Learning. We propose an auxiliary multi-layer network that is responsible for altering the input data, making them easier for the classifier to handle at the current stage of the training procedure. The auxiliary network is trained jointly with the neural classifier, thus intrinsically increasing the “depth” of the classifier, and it is expected to spot general regularities in the data alteration process. The effect of the auxiliary network is progressively reduced up to the end of training, when it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training. An extended experimental procedure involving several datasets and different neural architectures shows that Neural Friendly Training outperforms the originally proposed Friendly Training technique, improving the generalization of the classifier, especially in the case of noisy data.
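The training scheme summarized in the abstract can be sketched in a few lines: the classifier sees the original input plus an auxiliary-network perturbation whose effect is annealed to zero before training ends. This is a minimal illustrative sketch only; the function names and the linear decay schedule are assumptions for exposition, not the paper's exact formulation.

```python
# Sketch of the Neural Friendly Training idea: the classifier is fed the
# original data plus a perturbation proposed by an auxiliary network, and
# the perturbation is progressively faded out so that, by deployment time,
# the classifier operates on unmodified inputs.
# The linear schedule below is a hypothetical choice for illustration.

def friendly_decay(epoch, fade_epochs):
    """Scaling factor for the auxiliary perturbation: 1 at the start,
    linearly fading to 0 by `fade_epochs`, and 0 afterwards."""
    return max(0.0, 1.0 - epoch / fade_epochs)

def friendly_input(x, aux_perturbation, epoch, fade_epochs):
    """Input seen by the classifier at a given epoch: original data plus
    the (annealed) perturbation from the auxiliary network."""
    scale = friendly_decay(epoch, fade_epochs)
    return [xi + scale * pi for xi, pi in zip(x, aux_perturbation)]

x = [0.5, -1.0, 2.0]      # a toy input sample
delta = [0.2, 0.1, -0.3]  # perturbation proposed by the auxiliary network

print(friendly_input(x, delta, epoch=0, fade_epochs=10))   # full perturbation
print(friendly_input(x, delta, epoch=10, fade_epochs=10))  # perturbation dropped
```

At `epoch=0` the classifier receives the fully "simplified" input; by `epoch=10` the perturbation has vanished and the classifier trains on the raw data, matching the deployment condition described above.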
Marullo, S., Tiezzi, M., Gori, M., & Melacci, S. (2022). Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (pp. 7728-7735) [10.1609/aaai.v36i7.20740].
Files in this record:
File: melacci_AAAI2022b.pdf
Access: open access
Type: publisher's PDF
License: public, with copyright
Size: 1.08 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11365/1213815