Tiezzi, M., Marra, G., Melacci, S., Maggini, M. (2020). Deep Lagrangian Propagation in Graph Neural Networks. In ICML Workshop on Graph Representation Learning and Beyond (GRL+).
Deep Lagrangian Propagation in Graph Neural Networks
Matteo Tiezzi, Giuseppe Marra, Stefano Melacci, Marco Maggini
2020-01-01
Abstract
Graph Neural Networks (Scarselli et al., 2009) exploit an iterative diffusion procedure to compute the node states as the fixed point of the trainable state transition function. In this paper, we show how to cast this scheme as a constrained optimization problem, thus avoiding the unfolding procedure required for the computation of the fixed point. This is done by searching for saddle points of the Lagrangian function in the space of the weights, state variables and Lagrange multipliers. The proposed approach shows state-of-the-art performance in multiple standard benchmarks in graph domains.
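As a rough illustration of the idea stated in the abstract, the fixed-point equation of the GNN diffusion can be rewritten as a per-node constraint and absorbed into a Lagrangian whose saddle points are sought jointly in the weights, the node states and the multipliers. The sketch below is only indicative and not the paper's exact notation: the arguments of the state transition function $f_a$, the constraint-enforcing function $G$, and the symbols ($x_v$ for node states, $\lambda_v$ for multipliers, $\ell$ for the supervised loss, $g$ for the output function) are assumptions made for illustration.

```latex
% Sketch (assumed notation): x_v = state of node v, l_v = node label,
% ne[v] = neighbors of v, f_a = trainable state transition function,
% g = output function, G = constraint-enforcing function with G(0) = 0,
% lambda_v = Lagrange multiplier attached to node v, S = supervised nodes.
\begin{align}
  % Fixed-point condition of the diffusion, rewritten as a per-node constraint
  & G\big(x_v - f_a(x_{\mathrm{ne}[v]},\, l_{\mathrm{ne}[v]},\, l_v \mid \theta_f)\big) = 0,
    \qquad \forall v \in V \\
  % Lagrangian: supervised loss plus multiplier-weighted constraints
  & \mathcal{L}(\theta, X, \Lambda) =
    \sum_{v \in S} \ell\big(g(x_v \mid \theta_g),\, y_v\big)
    + \sum_{v \in V} \lambda_v\, G\big(x_v - f_a(\cdot \mid \theta_f)\big) \\
  % Learning as a saddle-point search: descend in (theta, X), ascend in Lambda,
  % so the fixed point is enforced without unfolding the diffusion.
  & \min_{\theta,\, X}\; \max_{\Lambda}\; \mathcal{L}(\theta, X, \Lambda)
\end{align}
```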
File | Access | Type | License | Size | Format
---|---|---|---|---|---
melacci_ICMLWorkshopGRL2020.pdf | Open access | Publisher's PDF | Public with Copyright | 366.2 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1114042