The significant evolution of kernel machines in recent years has opened the door to a truly new wave in machine learning, on both the theoretical and the application side. However, in spite of their strong results in low-level learning tasks, a gap remains with models rooted in logic and probability whenever one needs to express relations and constraints among different entities. This paper describes how kernel-like models, inspired by the parsimony principle, can cope with highly structured and rich environments that are described by the unified notion of constraint. We formulate learning as a constrained variational problem and prove that an approximate solution can be given by a kernel-based machine, referred to as a support constraint machine (SCM), which makes it possible to deal jointly with learning tasks (functions) and constraints. The learning process somewhat resembles unification in Prolog, since the learned functions yield the verification of the given constraints. Experimental evidence is given of the capability of SCMs to check new constraints in the case of first-order logic.
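To make the idea of learning functions subject to constraints concrete, the sketch below is a rough, hypothetical illustration (not the paper's actual SCM algorithm or notation): it fits a kernel expansion to a few labeled points while softly enforcing a pointwise constraint f(x) ≥ 0 on an unlabeled region, via a hinge-style penalty and gradient descent on the expansion coefficients. All data, names, and parameter values here are invented for the example.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix on 1-D inputs
    d = X[:, None] - Z[None, :]
    return np.exp(-gamma * d**2)

# Toy 1-D data: a few labeled points plus unlabeled "constraint" points,
# where we softly require f(x) >= 0 (a stand-in for a logic-derived constraint).
X_lab = np.array([-2.0, -1.0, 1.0, 2.0])
y_lab = np.array([-1.0, -0.5, 0.5, 1.0])
X_con = np.linspace(0.5, 3.0, 6)

# Kernel expansion f(x) = sum_i alpha_i k(x_i, x) over all sample points.
X_all = np.concatenate([X_lab, X_con])
K = rbf_kernel(X_all, X_all)
K_lab = K[:len(X_lab)]   # maps coefficients to f on labeled points
K_con = K[len(X_lab):]   # maps coefficients to f on constraint points

alpha = np.zeros(len(X_all))
lam, mu, lr = 1e-3, 1.0, 0.02   # regularization, penalty weight, step size
for _ in range(5000):
    f_lab = K_lab @ alpha
    f_con = K_con @ alpha
    viol = np.maximum(0.0, -f_con)          # amount by which f(x) >= 0 is violated
    # Gradient of: squared label loss + lam * RKHS norm + mu * squared hinge penalty
    grad = (2 * K_lab.T @ (f_lab - y_lab)
            + 2 * lam * K @ alpha
            - 2 * mu * K_con.T @ viol)
    alpha -= lr * grad

print(round(float((K_con @ alpha).min()), 3))   # near-zero or positive: constraint (softly) satisfied
```

The penalty term plays the role of a soft constraint check: a violated constraint contributes a nonzero gradient that pushes the learned function back into the feasible region, while the labeled-data term and the regularizer keep the solution parsimonious.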
|Title:||Support constraint machines|
|Citation:||Gori, M., & Melacci, S. (2011). Support constraint machines. In Neural Information Processing (pp. 28-37). Berlin: Springer Verlag.|
|Appears in collections:||4.1 Conference proceedings contribution|