Frasconi, P., Gori, M., Maggini, M., & Soda, G. (1996). Representation of finite state automata in recurrent radial basis function networks. Machine Learning, 23(1), 5-32 [10.1023/A:1018061531322].
Representation of Finite State Automata in Recurrent Radial Basis Function Networks
Gori, Marco; Maggini, Marco
1996-01-01
Abstract
In this paper, we propose some techniques for injecting finite state automata into Recurrent Radial Basis Function networks (R2BF). We show that these networks behave as automata when provided with proper hints and with a suitably constrained weight space. A technique, based on adding a suitable penalty function to the ordinary cost, is suggested for forcing the learning process to develop automaton representations. Successful experimental results are shown for the inductive inference of regular grammars.
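The abstract only sketches the mechanism, so the following is a minimal illustrative sketch (not the authors' code) of the two ingredients it mentions: a recurrent radial basis function state transition and a penalty term added to the ordinary cost that pushes state-unit activations toward saturated, automaton-like values. The Gaussian RBF form, the network shapes, and the specific x(1 − x) penalty are assumptions made for illustration; the paper's exact formulation may differ.

```python
# Minimal sketch, assuming Gaussian RBF units and an x*(1 - x) saturation
# penalty; names and shapes are illustrative, not taken from the paper.
import numpy as np

def r2bf_step(x, u, centers, sigma, W):
    """One state transition of a recurrent RBF network: Gaussian units read the
    joint (state, input) vector, a sigmoidal layer produces the next state."""
    z = np.concatenate([x, u])                   # current state + input symbol
    d2 = np.sum((centers - z) ** 2, axis=1)      # squared distances to RBF centers
    h = np.exp(-d2 / (2.0 * sigma ** 2))         # RBF activations
    return 1.0 / (1.0 + np.exp(-(W @ h)))        # next state, each unit in (0, 1)

def saturation_penalty(x, weight=0.1):
    """Term added to the ordinary cost: it vanishes only when every state unit
    is 0 or 1, encouraging automaton-like (binary) internal states."""
    return weight * np.sum(x * (1.0 - x))

# Toy usage: 2 state units, a 1-of-2 encoded input alphabet, 4 RBF centers.
rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 4))                # centers over the (state, input) space
W = rng.normal(size=(2, 4))
x = np.full(2, 0.5)                              # undecided initial state
for symbol in ([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]):
    x = r2bf_step(x, np.array(symbol), centers, 1.0, W)
print(x, saturation_penalty(x))                  # penalty would be added to the task loss
```

The penalty is zero only when every state unit is exactly 0 or 1, so minimizing it alongside the task loss discourages analogue internal states and favors representations that can be read as discrete automaton states.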
| File | Type | License | Size | Format |
|---|---|---|---|---|
| ML96.pdf | Post-print | Restricted access (copy available on request) | 1.58 MB | Adobe PDF |
https://hdl.handle.net/11365/29675