Learning as Constraint Reactions

Gori, Marco; Melacci, Stefano;
2015-01-01

Abstract

A theory of learning is proposed which naturally extends the classic regularization framework of kernel machines to the case in which the agent interacts with a richer environment, compactly described by the notion of constraint. Variational calculus is exploited to derive general representer theorems that describe the structure of the solution to the learning problem. It is shown that such a solution can be represented in terms of constraint reactions, which are reminiscent of the corresponding notion in analytic mechanics. In particular, the derived representer theorems clearly show how the classic kernel expansion on support vectors extends to an expansion on support constraints. As an application of the proposed theory, three examples are given that illustrate the dimensional collapse to a finite-dimensional space of parameters. The constraint reactions are calculated for the classic collection of supervised examples, for the case of box constraints, and for the case of hard holonomic linear constraints mixed with supervised examples. Interestingly, this leads to representer theorems for which the mathematical and algorithmic apparatus of kernel machines can be reused.
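As a minimal illustration of the first of the three cases mentioned above (the classic collection of supervised examples), the sketch below shows the standard representer-theorem expansion for kernel ridge regression with a Gaussian kernel: the variational problem over an infinite-dimensional function space collapses to the n coefficients alpha_i, one per supervised example. This sketch is not taken from the chapter; function names and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix: k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_kernel_ridge(X, y, lam=0.1, sigma=1.0):
    # Representer theorem: the minimizer of the regularized empirical risk lies in
    # span{k(., x_i)}, so the problem collapses to the n coefficients alpha solving
    # (K + lam * n * I) alpha = y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_test, sigma=1.0):
    # f(x) = sum_i alpha_i k(x, x_i): the kernel expansion on the supervised examples.
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy usage: noisy samples of a sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_kernel_ridge(X, y, lam=0.01, sigma=0.5)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X, alpha, X_test, sigma=0.5))
```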
2015
ISBN: 978-3-319-09902-6 (print)
ISBN: 978-3-319-09903-3 (online)
Gnecco, G., Gori, M., Melacci, S., & Sanguineti, M. (2015). Learning as Constraint Reactions. In P. Koprinkova-Hristova, V. Mladenov, & N. K. Kasabov (Eds.), Artificial Neural Networks: Methods and Applications in Bio-/Neuroinformatics (pp. 245-270). Springer International Publishing. https://doi.org/10.1007/978-3-319-09903-3_12
Files in this record:
File: 00040245.pdf (not available; request a copy)
Type: publisher's PDF
License: NOT PUBLIC - private/restricted access
Size: 503.7 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/974169