Bianchini, M., Gori, M., & Scarselli, F. (2001). Processing directed acyclic graphs with recursive neural networks. IEEE Transactions on Neural Networks, 12(6), 1464-1470. doi:10.1109/72.963781
Processing directed acyclic graphs with recursive neural networks
Bianchini, Monica; Gori, Marco; Scarselli, Franco
2001-01-01
Abstract
Recursive neural networks are conceived for processing graphs and extend the well-known recurrent model for processing sequences. In the previous literature, recursive neural networks can deal only with directed ordered acyclic graphs (DOAGs), in which the children of any given node are ordered. While this assumption is reasonable in some applications, it introduces unnecessary constraints in others. In this paper, it is shown that the constraint on the ordering can be relaxed by using an appropriate weight-sharing scheme, which guarantees the independence of the network output with respect to permutations of the arcs leaving each node. The method can be used with graphs having low connectivity and, in particular, few outgoing arcs. Some theoretical properties of the proposed architecture are given. They guarantee that the approximation capabilities are maintained, despite the weight sharing.
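The core idea of the relaxation can be illustrated with a minimal sketch: if the same child-weight matrix is shared across all children of a node and their contributions are combined by a commutative aggregation (here, a sum), the node's state cannot depend on the order of its children. The matrix names, dimensions, and the specific `tanh` transition below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's exact architecture): a single
# recursive-network state transition where W_child is shared across all
# children. Summing the children's contributions makes the resulting state
# invariant to any permutation of the arcs leaving the node.
rng = np.random.default_rng(0)
STATE_DIM, LABEL_DIM = 4, 3

W_label = rng.standard_normal((STATE_DIM, LABEL_DIM))
W_child = rng.standard_normal((STATE_DIM, STATE_DIM))  # shared for every child
bias = rng.standard_normal(STATE_DIM)

def node_state(label, child_states):
    """State of a node from its label and its children's states."""
    agg = sum((W_child @ s for s in child_states),
              start=np.zeros(STATE_DIM))  # order-independent aggregation
    return np.tanh(W_label @ label + agg + bias)

label = rng.standard_normal(LABEL_DIM)
children = [rng.standard_normal(STATE_DIM) for _ in range(3)]

s_forward = node_state(label, children)
s_reversed = node_state(label, children[::-1])  # permuted child order
assert np.allclose(s_forward, s_reversed)
```

With an ordered model (a distinct weight matrix per child position), reversing the children would generally change the output; the shared matrix and the sum are what remove that dependence.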
File | Type | License | Size | Format | |
---|---|---|---|---|---|
DAG-TNN.PDF (not available) | Post-print | NOT PUBLIC - Private/restricted access | 157.61 kB | Adobe PDF | View/Open Request a copy |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/22031
Warning: the data shown here have not been validated by the university.