
M. L., F., A. M., C., Gori, M., Maggini, M. (1999). Recurrent neural networks can learn simple, approximate regular languages. In Proceedings of the International Joint Conference on Neural Networks 1999 (pp. 1527-1532) [10.1109/IJCNN.1999.832596].

Recurrent neural networks can learn simple, approximate regular languages

Gori, Marco; Maggini, Marco
1999-01-01

Abstract

A number of researchers have shown that discrete-time recurrent neural networks (DTRNN) can infer deterministic finite automata from sets of example and counterexample strings; however, discrete algorithmic methods are much better at this task and clearly outperform DTRNN in terms of space and time complexity. We show how DTRNN may be used to learn not the exact language that explains the whole learning set but an approximate, much simpler language that explains the great majority of the examples with simpler rules. This is accomplished by gradually varying the error function so that the DTRNN is eventually allowed to classify clearly but incorrectly those strings it has found difficult to learn, which are treated as exceptions. The results show that, in this way, the DTRNN usually manages to learn a simplified approximate language.
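The abstract does not give the paper's actual error function or schedule, so the following is only an illustrative sketch of the general idea, under assumptions of our own: a single logistic unit stands in for the DTRNN, the learning set is a toy one with one contradictory "exception", and after a warm-up phase the loss is relaxed so that a persistently hard example may be retargeted to the class it already leans toward, i.e. classified clearly but incorrectly.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy learning set: the simple rule is "label = x". The single
# contradictory example (x=1, label 0) plays the role of a hard,
# exceptional string. (Hypothetical data, not from the paper.)
data = [(0.0, 0.0)] * 5 + [(1.0, 1.0)] * 5 + [(1.0, 0.0)]

w, b = 0.0, 0.0           # single logistic unit, stand-in for the DTRNN
lr = 0.5
hard = [0.0] * len(data)  # running squared error per example

for epoch in range(2000):
    relax = epoch > 500   # after a warm-up, relax the error function
    for i, (x, t) in enumerate(data):
        y = sigmoid(w * x + b)
        # Track how hard each example has been to learn: an exponential
        # moving average of its error against the ORIGINAL label.
        hard[i] = 0.9 * hard[i] + 0.1 * (y - t) ** 2
        target = t
        if relax and hard[i] > 0.2 and abs(y - (1 - t)) < abs(y - t):
            # Persistently hard example already sitting on the wrong
            # side: allow a clear but incorrect classification.
            target = 1 - t
        grad = (y - target) * y * (1 - y)   # squared-error gradient
        w -= lr * grad * x
        b -= lr * grad
```

Run as-is, the ten regular examples end up classified correctly while the exception settles clearly on the wrong side of the decision, and its running error against the original label stays high, which is what marks it as an exception rather than as part of the simple rule.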
Year: 1999
ISBN: 0780355296
Files in this record:
IJCNN99a.pdf: Post-print, Adobe PDF, 337.28 kB. Licence: non-public (private/restricted access); not available for download.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/35372