Lagrangian Propagation Graph Neural Networks

Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini, Marco Gori
2020-01-01

Abstract

In recent years, the popularity of deep learning techniques has renewed interest in neural models that can process complex patterns naturally encoded as graphs. In particular, several architectures have been proposed to extend the original Graph Neural Network (GNN) model. GNNs exploit a set of state variables, one assigned to each graph node, and a diffusion mechanism among neighbouring nodes, to implement an iterative state-update procedure that computes the fixed point of the (learnable) state transition function. In this paper, we propose a novel approach to state computation and learning for GNNs, based on a constrained optimisation task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint-satisfaction mechanism and does not require a separate iterative phase at each epoch of the learning procedure. Instead, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space of weights, neural outputs (node states), and Lagrange multipliers. The proposed approach is compared experimentally with other popular models for processing graphs.
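To make the formulation above concrete, the following is a minimal sketch of the saddle-point scheme, assuming a PyTorch implementation; the names (state_net, output_net, x, lam) and the toy graph are hypothetical, not taken from the paper. The fixed-point condition of the state transition function is imposed as a per-node constraint (the residual between each state and the transition function output must vanish); the Lagrangian adds a multiplier-weighted sum of these residuals to the supervised loss, and the saddle point is sought by gradient descent on weights and states together with gradient ascent on the multipliers.

import torch

# Toy problem: N nodes, F input features per node, S-dimensional states.
# All shapes and hyperparameters below are illustrative.
N, F, S = 10, 4, 8
adj = (torch.rand(N, N) < 0.3).float()      # random toy adjacency matrix
feats = torch.randn(N, F)                   # node features
targets = torch.randn(N, 1)                 # supervised node targets

# Learnable state transition f_w and output function g_w.
state_net = torch.nn.Sequential(torch.nn.Linear(S + F, S), torch.nn.Tanh())
output_net = torch.nn.Linear(S, 1)

# Free variables of the Lagrangian: node states and multipliers.
x = torch.zeros(N, S, requires_grad=True)
lam = torch.zeros(N, S, requires_grad=True)

descent = torch.optim.SGD(
    list(state_net.parameters()) + list(output_net.parameters()) + [x],
    lr=1e-2)
ascent = torch.optim.SGD([lam], lr=1e-2)

for step in range(1000):
    agg = adj @ x                           # sum of neighbour states
    fx = state_net(torch.cat([agg, feats], dim=1))
    residual = x - fx                       # zero exactly at a fixed point

    # Lagrangian: supervised loss + multiplier-weighted constraints.
    loss = torch.nn.functional.mse_loss(output_net(x), targets)
    lagrangian = loss + (lam * residual).sum()

    descent.zero_grad()
    ascent.zero_grad()
    lagrangian.backward()
    lam.grad.neg_()                         # gradient *ascent* on multipliers
    descent.step()
    ascent.step()

Note that a single backward pass yields the gradients for all three groups of variables, so no inner fixed-point iteration is run at each epoch; the raw residual is used here for brevity, whereas the constraint may in general be passed through a transformation before being weighted by the multipliers.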
Tiezzi, M., Marra, G., Melacci, S., Maggini, M., & Gori, M. (2020). Lagrangian Propagation Graph Neural Networks. In The First International Workshop on Deep Learning on Graphs: Methodologies and Applications (DLGMA’20).
Files in this record:

File: tiezzi_AAAI2020Workshop.pdf
Type: Publisher's PDF
Licence: NOT PUBLIC - Private/restricted access (not available for download; a copy can be requested)
Size: 246.94 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1106291