
N., B., Bianchini, M., & Scarselli, F. (2010). Learning long-term dependencies using layered graph neural networks. In Proceedings of WCCI-IJCNN 2010 (pp. 1-8). IEEE. doi:10.1109/IJCNN.2010.5596634

Learning long-term dependencies using layered graph neural networks

Bianchini, Monica; Scarselli, Franco
2010

Abstract

Graph Neural Networks (GNNs) are a powerful tool for processing graphs, which provide a natural way to represent information from several areas of science and engineering (e.g., data mining, computer vision, molecular chemistry, molecular biology, and pattern recognition), where data are intrinsically organized in entities and relationships among entities. Nevertheless, GNNs, like recurrent/recursive models, suffer from the long-term dependency problem, which makes learning difficult in deep structures. In this paper, we present a new architecture, called Layered GNN (LGNN), realized by a cascade of GNNs: each layer is fed with the original data and with the state information computed by the previous layer in the cascade. Intuitively, this allows each GNN to solve a subproblem, related only to those patterns that were misclassified by the previous GNNs. Experimental results on synthetic and real-world datasets show a significant improvement in performance with respect to the standard GNN approach.
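The cascade described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the fixed-point iteration count, the weight shapes, and the function names (`gnn_layer`, `layered_gnn`) are illustrative assumptions. Each layer computes node states by iterating a simple recursive update, and every layer after the first receives the original node features concatenated with the previous layer's states, as the abstract describes.

```python
import numpy as np

def gnn_layer(adj, feats, w_self, w_nbr, n_iters=10):
    """Simplified GNN layer (hypothetical): iterate the recursive update
    x = tanh(feats @ W_self^T + (adj @ x) @ W_nbr^T) toward a fixed point.
    Small weights are assumed so the map behaves like a contraction."""
    n = feats.shape[0]
    d_state = w_self.shape[0]
    x = np.zeros((n, d_state))          # initial node states
    for _ in range(n_iters):
        x = np.tanh(feats @ w_self.T + (adj @ x) @ w_nbr.T)
    return x

def layered_gnn(adj, feats, layers):
    """LGNN cascade: layer k is fed the original features concatenated
    with the state computed by layer k-1 (first layer gets features only)."""
    state = None
    for w_self, w_nbr in layers:
        inp = feats if state is None else np.hstack([feats, state])
        state = gnn_layer(adj, inp, w_self, w_nbr)
    return state

# Toy usage on a 3-node path graph, with assumed dimensions:
# 3 input features per node, 4 state units per layer.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = rng.normal(size=(3, 3))
layers = [
    (0.1 * rng.normal(size=(4, 3)), 0.1 * rng.normal(size=(4, 4))),  # layer 1: feats only
    (0.1 * rng.normal(size=(4, 7)), 0.1 * rng.normal(size=(4, 4))),  # layer 2: feats (3) + prev state (4)
]
out = layered_gnn(adj, feats, layers)
print(out.shape)  # one 4-dim state per node
```

In the paper the later layers are trained on the residual subproblem (the patterns misclassified earlier); the sketch above only shows the data flow of the cascade, not that training procedure.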
ISBN: 9781424469161
Files in this item:
IJCNN-10.pdf
Type: Post-print
License: NOT PUBLIC - Private/restricted access (not available for download)
Size: 638.58 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: http://hdl.handle.net/11365/24390