
P., F., Gori, M., Maggini, M., G., S. (1994). Inductive Inference of Regular Grammars Using Recurrent Networks: A Critical Analysis. In Proceedings of the Workshop at the International Conference on Logic Programming (ICLP'94), "Logic and Reasoning with Neural Networks".

Inductive Inference of Regular Grammars Using Recurrent Networks: A Critical Analysis

Gori, Marco; Maggini, Marco
1994-01-01

Abstract

Many researchers have recently explored the use of recurrent networks for the inductive inference of regular grammars from positive and negative examples [5, 9, 11], with very promising results. In this paper, we give a set of weight constraints guaranteeing that a recurrent network behaves as an automaton, and we show that the measure of this admissible set decreases progressively as the network dimension increases, suggesting that automaton behavior becomes increasingly unlikely for "large" networks. As a result, problems of inductive inference of regular grammars from "long" strings are unlikely to be handled effectively by "large" networks. We suggest looking for more effective approaches, based on the divide et impera paradigm, that allow the network dimensions to be limited [3].

1 Introduction

Recently, many researchers have used recurrent neural networks to perform inductive inference of regular grammars, with very promising results [5, 9, 11].
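The "network behaving as an automaton" regime that the abstract refers to can be illustrated with a small sketch. This is not the paper's construction: it is a generic second-order recurrent network (in the style commonly used in this literature) whose weights are set to a hypothetical large magnitude `H`, so that the sigmoid units saturate to near-binary values and the network emulates the two-state parity automaton exactly on every input string.

```python
# Hedged sketch (not the paper's construction): a second-order recurrent
# network with saturated sigmoid units emulating the two-state parity
# automaton. H is a hypothetical weight magnitude, chosen large enough
# that activations stay near 0/1 -- the "automaton regime" that weight
# constraints of the kind discussed in the paper are meant to guarantee.
import math

H = 10.0  # large weight magnitude -> saturated, near-binary activations

# Parity automaton: state 0 = even number of 1s seen so far, state 1 = odd.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Second-order weights W[i][j][k]: +H if delta(state j, input k) == i, else -H.
W = [[[H if delta[(j, k)] == i else -H for k in (0, 1)] for j in (0, 1)]
     for i in (0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run(bits):
    s = [1.0, 0.0]  # start in state 0 (even), one-hot encoded
    for b in bits:
        x = [1.0, 0.0] if b == 0 else [0.0, 1.0]  # one-hot input symbol
        # Next state: s'_i = sigmoid( sum_jk W[i][j][k] * s[j] * x[k] )
        s = [sigmoid(sum(W[i][j][k] * s[j] * x[k]
                         for j in (0, 1) for k in (0, 1)))
             for i in (0, 1)]
    return s[1] > 0.5  # True iff the network ends in the "odd" state

print(run([1, 0, 1, 1]))  # odd number of 1s -> True
print(run([1, 1]))        # even number of 1s -> False
```

Because every pre-activation is roughly ±H, each unit lands within e^(-H) of 0 or 1, so the one-hot state encoding is preserved step after step; with small or unconstrained weights this saturation fails, which is the intuition behind the paper's claim that the admissible weight set shrinks as the network grows.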
Files in this product:
File: ICLP94.pdf (not available)
Type: Post-print
License: NOT PUBLIC - Private/restricted access
Size: 192.59 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/36423
Warning: the displayed data have not been validated by the university.