Learning machines for pattern recognition, such as neural networks or support vector machines, are usually conceived to process real–valued vectors of predefined dimensionality, even though, in many real–world applications, the relevant information is inherently organized into entities and relationships between them. Graph Neural Networks (GNNs), instead, can directly process structured data, guaranteeing universal approximation of many practically useful functions on graphs. GNNs, which do not strictly meet the definition of deep architectures, are based on an unfolding mechanism during learning that, in practice, yields networks with the same depth as the data structures they process. However, GNNs may be hindered by the long–term dependency problem, i.e. the difficulty in taking into account information coming from peripheral nodes within graphs, due to the local nature of the procedures for updating the state and the weights. To overcome this limitation, GNNs may be cascaded to form layered architectures, called Layered GNNs (LGNNs). Each GNN in the cascade is trained on the original graph "enriched" with the information computed by the previous layer, implementing a sort of incremental learning framework that is able to take into account progressively farther information. The applicability of LGNNs is illustrated both on a classical problem in graph theory and on pattern recognition problems in bioinformatics.
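The cascading scheme described in the abstract can be illustrated with a minimal forward-pass sketch. The code below is a simplified, untrained illustration (not the authors' implementation): each "GNN" iterates a local state update over neighbours, and each subsequent layer receives the original node features concatenated with the previous layer's output states. All function names and dimensions are hypothetical; training, output networks, and the contraction conditions required by the original GNN model are omitted.

```python
import numpy as np

def gnn_layer(adj, feats, w_in, w_rec, iters=10):
    """Simplified GNN state computation: iterate the local update
    x_v <- tanh(W_in @ feat_v + W_rec @ sum_{u in N(v)} x_u).
    Small weights are assumed so the iteration is (roughly) contractive."""
    n = adj.shape[0]
    state = np.zeros((n, w_rec.shape[0]))
    for _ in range(iters):
        agg = adj @ state                       # sum of neighbour states
        state = np.tanh(feats @ w_in.T + agg @ w_rec.T)
    return state

def lgnn_forward(adj, feats, layers):
    """LGNN cascade: each layer sees the original features 'enriched'
    with the states computed by the previous layer."""
    out, cur = None, feats
    for w_in, w_rec in layers:
        out = gnn_layer(adj, cur, w_in, w_rec)
        cur = np.concatenate([feats, out], axis=1)   # enrichment step
    return out

# Toy usage on a random undirected-style graph (hypothetical sizes).
rng = np.random.default_rng(0)
n, d_feat, d_state = 5, 3, 4
adj = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
feats = rng.standard_normal((n, d_feat))
layers = [
    # layer 1 reads d_feat inputs; layer 2 reads d_feat + d_state
    (0.1 * rng.standard_normal((d_state, d_feat)),
     0.1 * rng.standard_normal((d_state, d_state))),
    (0.1 * rng.standard_normal((d_state, d_feat + d_state)),
     0.1 * rng.standard_normal((d_state, d_state))),
]
states = lgnn_forward(adj, feats, layers)
print(states.shape)  # one state vector per node
```

Note the design choice: rather than deepening a single GNN, the cascade keeps each GNN shallow and local, and propagates information from peripheral nodes incrementally through the enriched inputs of successive layers.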
|Title:||Deep Neural Networks for Structured Data|
|Citation:||Bianchini, M., Dimitri, G.M., Maggini, M., & Scarselli, F. (2018). Deep Neural Networks for Structured Data. In Computational Intelligence for Pattern Recognition (pp. 29-51). Berlin: Springer-Verlag.|
|Record type:||2.1 Book chapter (Chapter or Essay)|