Forti, M. (2007). M-matrices and global convergence of discontinuous neural networks. International Journal of Circuit Theory and Applications, 35, 105-130. doi:10.1002/cta.381

M-matrices and global convergence of discontinuous neural networks

FORTI, MAURO
2007

Abstract

The paper considers a general class of neural networks possessing discontinuous neuron activations and neuron interconnection matrices belonging to the class of M-matrices or H-matrices. A number of results are established on global exponential convergence of the state and output solutions towards a unique equilibrium point. Moreover, by exploiting the presence of sliding modes, conditions are given under which convergence in finite time is guaranteed. In all cases, the exponential convergence rate, or the finite convergence time, can be quantitatively estimated on the basis of the parameters defining the neural network. As a by-product, it is proved that the considered neural networks, although they are described by a system of differential equations with discontinuous right-hand side, enjoy the property of uniqueness of the solution starting at a given initial condition. The results are proved by a generalized Lyapunov-like approach and by using tools from the theory of differential equations with discontinuous right-hand side. At the core of the approach is a basic lemma, which holds under the assumption of M-matrices or H-matrices, and makes it possible to study the limiting behaviour of a suitably defined distance between any pair of solutions to the neural network.
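The M-matrix and H-matrix conditions named in the abstract can be checked numerically. The sketch below is an illustrative helper, not taken from the paper (the function names are hypothetical); it uses the standard characterizations: a nonsingular M-matrix has non-positive off-diagonal entries and eigenvalues with positive real part, and a matrix is an H-matrix if and only if its comparison matrix is a nonsingular M-matrix.

```python
import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """A nonsingular M-matrix has non-positive off-diagonal
    entries and all eigenvalues with positive real part."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):
        return False
    return bool(np.min(np.linalg.eigvals(A).real) > tol)

def comparison_matrix(A):
    """M(A): |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    A = np.asarray(A, dtype=float)
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_h_matrix(A):
    """A is an H-matrix iff its comparison matrix
    is a nonsingular M-matrix."""
    return is_nonsingular_m_matrix(comparison_matrix(A))
```

For example, [[2, -1], [-1, 2]] has eigenvalues 1 and 3 and non-positive off-diagonals, so it is a nonsingular M-matrix; [[3, 1], [1, 3]] fails the sign condition but its comparison matrix is an M-matrix, so it is an H-matrix.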
Files in this item:

File: abstract_Mmatrices.txt (not available)
Type: Abstract
Licence: NOT PUBLIC - Private/restricted access
Size: 1.28 kB
Format: Text

File: 403938_M-matrices_ijcta.pdf (not available)
Type: Post-print
Licence: NOT PUBLIC - Private/restricted access
Size: 330.6 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: http://hdl.handle.net/11365/24971