

Multitask Kernel-based Learning with First-Order Logic Constraints

Michelangelo Diligenti; Marco Gori; Marco Maggini; Leonardo Rigutini
2010-01-01

Abstract

In this paper we propose a general framework to integrate supervised and unsupervised examples with background knowledge expressed by a collection of first-order logic clauses into kernel machines. In particular, we consider a multi-task learning scheme where multiple predicates defined on a set of objects are to be jointly learned from examples, enforcing a set of FOL constraints on the admissible configurations of their values. The predicates are defined on the feature spaces in which the input objects are represented, and can be either known a priori or approximated by an appropriate kernel-based learner. A general approach is presented to convert the FOL clauses into a continuous implementation that can deal with the outputs computed by the kernel-based predicates. The learning problem is formulated as a semi-supervised task that requires the optimization in the primal of a loss function that combines a fitting loss measure on the supervised examples, a regularization term, and a penalty term that enforces the constraints on both the supervised and unsupervised examples. Unfortunately, the penalty term is not convex and it can hinder the optimization process. However, it is possible to avoid poor solutions by using a two-stage learning scheme, in which the supervised examples are learned first and then the constraints are enforced.
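The combined objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: it assumes a Łukasiewicz-style continuous relaxation of a single clause ∀x: A(x) ⇒ B(x), a squared fitting loss on the labelled points, and a scalar placeholder for the squared RKHS norm of the predicate; all function names are hypothetical.

```python
import numpy as np

def implication_truth(fa, fb):
    # Lukasiewicz residuum: truth degree in [0, 1] of A(x) => B(x),
    # where fa, fb are the [0, 1]-valued outputs of the two
    # kernel-based predicates on a batch of points.
    return np.minimum(1.0, 1.0 - fa + fb)

def constraint_penalty(fa, fb):
    # Grows as the clause moves away from full satisfaction; evaluated
    # on both supervised and unsupervised points.
    return np.mean(1.0 - implication_truth(fa, fb))

def total_loss(fa, fb, targets_a, mask_a, lam_reg, lam_c, w_norm_sq):
    # Fitting loss: squared error on the labelled points of predicate A only.
    fit = np.mean((fa[mask_a] - targets_a[mask_a]) ** 2)
    # Regularization term (placeholder scalar standing in for ||f||^2 in the RKHS).
    reg = lam_reg * w_norm_sq
    # Non-convex constraint penalty, weighted by lam_c.
    pen = lam_c * constraint_penalty(fa, fb)
    return fit + reg + pen
```

The non-convexity comes from the penalty term, which motivates the two-stage scheme: minimize `fit + reg` alone first, then re-optimize with the penalty switched on.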
2010
Diligenti, M., Gori, M., Maggini, M., Rigutini, L. (2010). Multitask Kernel-based Learning with First-Order Logic Constraints. In Proceedings of the 20th International Conference on Inductive Logic Programming (ILP 2010).
Files in this item:

File: ilp2010_submission_14.pdf (not available)
Type: Post-print
License: NOT PUBLIC - Private/restricted access
Size: 206.46 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11365/37224
Warning: the displayed data have not been validated by the university.