
Bianchini, M., Maggini, M., Sarti, L. (2013). Supervised Neural Network Learning: from Vectors to Graphs. In The Handbook on Reasoning-Based Intelligent Systems (pp. 275-305). World Scientific Publishing Company [10.1142/9789814329484_0011].

Supervised Neural Network Learning: from Vectors to Graphs

Bianchini, Monica; Maggini, Marco; Sarti, Lorenzo
2013-01-01

Abstract

Recursive neural networks are a powerful tool for processing structured data, thus filling the gap between connectionism, which is usually associated with poorly organized data, and a great variety of real-world problems, where information is naturally organized into entities and relationships among entities. According to the recursive paradigm, the input information consists of directed positional acyclic graphs (DPAGs), on which recursive networks are trained following the partial order defined by the links of the graph. In this chapter, we propose a unified framework for learning in supervised neural networks, based on the BackPropagation algorithm, ranging from static multi-layered architectures to recursive networks. In fact, BackPropagation can be naturally specialized to train complex architectures able to process structured data in the form of sequences and graphs. In this way, the problem of learning in connectionism, no matter what type of data is "natural" for a given application domain, is maintained within the elegant framework of continuous optimization. Finally, an application of the recursive model to image processing is presented, showing that encoding images by trees or graphs yields a more robust and informative representation that facilitates object detection and image classification.
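To make the recursive paradigm concrete, the following is a minimal sketch (not the chapter's actual model) of how a recursive network might compute node states over a DPAG: each node's state is obtained by combining the node's label with the states of its children, taken in positional order, so computation follows the partial order defined by the links. The scalar state, the weights `w_label` and `w_child`, and the toy graph are all illustrative assumptions.

```python
import math

def node_state(label, child_states, w_label=0.5, w_child=0.3, bias=0.1):
    # Assumed toy transition: state = tanh(w_label*label + sum(w_child*child) + bias).
    # A real recursive network would use weight matrices and vector states,
    # with a distinct weight block per child position.
    s = w_label * label + bias
    for c in child_states:
        s += w_child * c
    return math.tanh(s)

def encode_dpag(labels, children, node):
    # Recursively encode the sub-DPAG rooted at `node`: children are
    # processed before their parent, mirroring the partial order of the
    # graph. Memoization of shared subgraphs is omitted for brevity.
    kids = [encode_dpag(labels, children, c) for c in children[node]]
    return node_state(labels[node], kids)

# Hypothetical toy tree: node 0 has children 1 and 2 (positional order matters).
labels = {0: 1.0, 1: -0.5, 2: 0.25}
children = {0: [1, 2], 1: [], 2: []}
root_state = encode_dpag(labels, children, 0)
print(root_state)
```

The state computed at the root summarizes the whole structure and would be fed to an output network for classification or regression, which is the role the recursive model plays in the image-processing application mentioned above.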
Year: 2013
ISBN: 9789814329477
Files for this product:
File: 2012-b1323-ch11.pdf (not available)
Type: Pre-print
License: NOT PUBLIC - Private/restricted access
Size: 2.33 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/42891
Warning: the displayed data have not been validated by the university.