Many researchers have recently focused their efforts on devising efficient algorithms, mainly based on optimization schemes, for learning the weights of recurrent neural networks. As in the case of feedforward networks, however, these learning algorithms may get stuck in local minima during gradient descent, thus discovering sub-optimal solutions. This paper analyses the problem of optimal learning in recurrent networks by proposing conditions that guarantee local-minima-free error surfaces. An example is given that also shows the constructive role of the proposed theory in designing networks suitable for solving a given task. Moreover, a formal relationship between recurrent and static feedforward networks is established, such that the examples of local minima for feedforward networks already known in the literature can be associated with analogous ones in recurrent networks.
Title: On the problem of local minima in recurrent neural networks
Citation: Bianchini, M., Gori, M., & Maggini, M. (1994). On the problem of local minima in recurrent neural networks. IEEE Transactions on Neural Networks, 5(2), 167-177.
Publication type: 1.1 Journal article