
Trentin, E., Freno, A. (2009). Probabilistic interpretation of neural networks for the classification of vectors, sequences and graphs. In Innovations in neural information paradigms and applications (pp. 155-182). Berlin: Springer [10.1007/978-3-642-04003-0_7].

Probabilistic interpretation of neural networks for the classification of vectors, sequences and graphs

Trentin, E.; Freno, A.
2009-01-01

Abstract

This chapter introduces a probabilistic interpretation of artificial neural networks (ANNs), moving the focus from posterior probabilities to probability density functions (pdfs). Parametric and non-parametric neural-based algorithms for unsupervised estimation of pdfs, relying on maximum likelihood or on Parzen window techniques, are reviewed. These approaches may overcome the limitations of traditional statistical estimation methods, possibly leading to improved pdf models. Two paradigms for combining ANNs and hidden Markov models (HMMs) for sequence recognition are then discussed. These models rely on (i) an ANN that estimates state posteriors under a maximum-a-posteriori criterion, or (ii) a connectionist estimation of emission pdfs, featuring global optimization of HMM and ANN parameters under a maximum-likelihood criterion. Finally, the chapter addresses the problem of classifying graphs (structured data) by presenting a connectionist probabilistic model of the posterior probability of classes given a labeled graphical pattern. In all cases, empirical evidence and theoretical arguments underline the fact that plausible probabilistic interpretations of ANNs are viable and may lead to improved statistical classifiers, not only in the static but also in the sequential and structured pattern recognition setups.
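The non-parametric estimation techniques the abstract mentions build on the classical Parzen window method. As a point of reference, here is a minimal sketch of a plain (non-neural) Parzen window pdf estimate with a Gaussian kernel; the function name and bandwidth value are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def parzen_window_pdf(x, data, h):
    """Parzen window estimate of a 1-D pdf at point x:
    the average of Gaussian kernels of bandwidth h centered
    on the training samples in `data`."""
    n = len(data)
    u = (x - data) / h
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kernel.sum() / (n * h)

# Illustrative check: on samples drawn from a standard normal,
# the estimate at 0 should approach the true density 1/sqrt(2*pi).
rng = np.random.default_rng(0)
data = rng.standard_normal(5000)
estimate = parzen_window_pdf(0.0, data, 0.3)
```

The neural variants reviewed in the chapter can be read as replacing this fixed-kernel average with a trained connectionist model of the pdf, aiming to mitigate the bandwidth-selection and memory costs of the plain estimator.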
ISBN: 978-3-642-04002-3

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/15774