A Lagrangian framework for learning in graph neural networks

Maggini M.; Tiezzi M.; Gori M.
2024-01-01

Abstract

Neural network models are based on a distributed computational scheme in which signals are propagated among neurons through weighted connections. The network topology defines the overall computation, which is local to each neuron but follows a precise flow driven by the network architecture in the forward and error backpropagation phases. This chapter proposes a completely local alternative view of the neural network computational scheme, devised as the satisfaction of architectural constraints solved in the Lagrangian framework. The proposed local propagation algorithm casts learning in neural networks as the search for saddle points in the adjoint space composed of the weights, the neurons' outputs, and the Lagrange multipliers. In particular, the case of graph neural networks is considered, for which the computationally expensive iterative learning procedure can be avoided by jointly optimizing the node states and the transition function: the state computation on the input graph is expressed as a constraint satisfaction mechanism.
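To make the formulation sketched in the abstract concrete, the LaTeX fragment below gives a minimal, illustrative version of the constrained learning problem for the graph neural network case. The notation is an assumption chosen here for exposition, not taken verbatim from the chapter: x_v denotes node states, f_a the state transition function with parameters theta, f_r the output (readout) function, l_v node labels, lambda_v the Lagrange multipliers, and G a function shaping the constraint residual.

% Illustrative sketch (assumed notation): the GNN state equation becomes a
% hard constraint, and learning is a saddle-point problem on the Lagrangian.
\begin{align}
  % state constraint imposed at every node v of the input graph
  x_v &= f_a\big(x_{\mathrm{ne}[v]},\, l_v \,;\, \theta\big) \qquad \forall v \in V, \\
  % Lagrangian: supervised loss on labeled nodes S plus multiplier terms
  \mathcal{L}(\theta, X, \Lambda) &= \sum_{v \in S} \ell\big(f_r(x_v), y_v\big)
      + \sum_{v \in V} \lambda_v \, G\big(x_v - f_a(x_{\mathrm{ne}[v]},\, l_v \,;\, \theta)\big), \\
  % saddle point: descent in (theta, X), ascent in the multipliers Lambda
  (\theta^\star, X^\star, \Lambda^\star) &\in
      \arg\min_{\theta,\, X}\, \max_{\Lambda}\; \mathcal{L}(\theta, X, \Lambda).
\end{align}

Under this reading, gradient descent on (theta, X) performed jointly with gradient ascent on Lambda replaces the iterative state relaxation of the original graph neural network model, since the constraints directly enforce the fixed point of the state transition.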
ISBN: 978-0-323-96104-2
Maggini, M., Tiezzi, M., Gori, M. (2024). A Lagrangian framework for learning in graph neural networks. In R. Kozma, C. Alippi, Y. Choe, F. C. Morabito (Eds.), Artificial Intelligence in the Age of Neural Networks and Brain Computing, Second Edition (pp. 343-365). Elsevier [10.1016/B978-0-323-96104-2.00015-4].
Files in this record:
File: 978-0-323-96104-2 2.pdf (not available)
Type: Publisher PDF
License: NOT PUBLIC - Private/restricted access
Size: 4.56 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1253154