Bianchini, M., Dimitri, G.M., Maggini, M., Scarselli, F. (2018). Deep Neural Networks for Structured Data. In Computational Intelligence for Pattern Recognition (pp. 29-51). Berlin: Springer-Verlag. DOI: 10.1007/978-3-319-89629-8_2.

Deep Neural Networks for Structured Data

Bianchini, Monica; Dimitri, Giovanna Maria; Maggini, Marco; Scarselli, Franco
2018-01-01

Abstract

Learning machines for pattern recognition, such as neural networks or support vector machines, are usually conceived to process real-valued vectors of a predefined dimensionality, even though, in many real-world applications, the relevant information is inherently organized into entities and relationships between them. Graph Neural Networks (GNNs), instead, can directly process structured data, with a guarantee of universal approximation for a large class of practically useful functions on graphs. GNNs, which do not strictly meet the definition of deep architectures, rely on an unfolding mechanism during learning that, in practice, yields networks with the same depth as the data structures they process. However, GNNs may be hindered by the long-term dependency problem, i.e., the difficulty of taking into account information coming from peripheral nodes of a graph, due to the local nature of the procedures that update the states and the weights. To overcome this limitation, GNNs can be cascaded to form layered architectures, called Layered GNNs (LGNNs). Each GNN in the cascade is trained on the original graph “enriched” with the information computed by the previous layer, implementing a sort of incremental learning scheme that can take into account progressively more distant information. The applicability of LGNNs is illustrated both on a classical problem in graph theory and on pattern recognition problems in bioinformatics.
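As a rough illustration of the two mechanisms summarized in the abstract, the sketch below shows (i) the iterated local state update whose unfolding gives a GNN the same effective depth as the input graph, and (ii) the layered cascade in which each GNN is trained on the original node labels enriched with the outputs of the previous layer. This is a minimal Python/NumPy sketch under stated assumptions, not the chapter's actual implementation: the transition function f_w, the training helper train_gnn, and the model's predict method are hypothetical placeholders.

    import numpy as np

    def gnn_fixed_point(adj, labels, f_w, state_dim, max_iters=100, tol=1e-4):
        # Iterate the local update x_v <- sum over u in ne(v) of f_w(l_v, l_u, x_u)
        # to an approximate fixed point; this iteration is the "unfolding" that
        # makes the effective network as deep as the graph it processes.
        n = len(adj)
        x = np.zeros((n, state_dim))
        for _ in range(max_iters):
            x_new = np.stack([
                sum((f_w(labels[v], labels[u], x[u]) for u in adj[v]),
                    np.zeros(state_dim))
                for v in range(n)
            ])
            if np.abs(x_new - x).max() < tol:
                break
            x = x_new
        return x_new

    def lgnn_cascade(adj, labels, train_gnn, num_layers=3):
        # Cascade of GNNs: layer k is trained on the original graph whose node
        # labels are concatenated with the per-node outputs of layer k-1, so
        # each layer can propagate information further than the previous one.
        enriched = labels
        models, outputs = [], None
        for _ in range(num_layers):
            model = train_gnn(adj, enriched)        # hypothetical training routine
            outputs = model.predict(adj, enriched)  # per-node outputs, shape (n, k)
            models.append(model)
            enriched = np.concatenate([labels, outputs], axis=1)
        return models, outputs

Here adj is a list of neighbor lists and labels an (n, d) array of node labels; for an isolated node, the sum in the state update reduces to the zero state.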
2018
ISBN: 978-3-319-89628-1 (print); 978-3-319-89629-8 (online)
Files in this record:
456097_1_En_2_Chapter_Author.pdf
Type: publisher's PDF
License: NOT PUBLIC - private/restricted access
Size: 870.92 kB
Format: Adobe PDF
Availability: not available (a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1066553