This paper proposes a unified approach to learning in environments in which patterns are represented in variable-dimension domains, which naturally includes the case of missing features. The proposal is based on representing the environment by pointwise constraints, which are shown to naturally model the pattern relationships arising in information retrieval, computer vision, and related fields. This interpretation of learning captures the genuinely different aspects of similarity coming from the content at different dimensions and from the pattern links. Functions that process real-valued features and functions that operate on symbolic entities are learned within a unified regularization framework that can also be expressed with the mathematical and algorithmic apparatus of kernel machines. Interestingly, in the extreme cases in which only the content or only the links are available, the theory returns classic kernel machines or graph regularization, respectively. Experimental results on artificial and real-world benchmarks provide clear evidence of the remarkable improvements obtained when both types of similarity are exploited.
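To illustrate the kind of combination the abstract describes, here is a minimal sketch of one standard way to blend content-based and link-based similarity: Laplacian-regularized kernel ridge regression, where an RBF kernel captures content similarity and a graph Laplacian penalty enforces smoothness along pattern links. This is a well-known textbook formulation, not the paper's exact model; the function name `combined_fit` and all parameter values are illustrative assumptions.

```python
import numpy as np

def combined_fit(X, y, adjacency, lam_content=0.1, lam_graph=0.1, gamma=1.0):
    """Fit f(x) = sum_j c_j k(x_j, x) with a combined objective:
    ||y - K c||^2 + lam_content * c^T K c + lam_graph * (K c)^T L (K c).
    Illustrative sketch only, not the paper's formulation."""
    # RBF kernel matrix: content similarity between patterns.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Unnormalized graph Laplacian: link similarity between patterns.
    W = np.asarray(adjacency, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    n = len(y)
    # Setting the gradient of the objective to zero gives the linear system
    # (K + lam_content * I + lam_graph * L K) c = y.
    c = np.linalg.solve(K + lam_content * np.eye(n) + lam_graph * L @ K, y)
    return K, c
```

With `lam_graph = 0` this reduces to plain kernel ridge regression (content only), while letting the kernel term vanish leaves a pure graph-smoothness problem, mirroring the two extreme cases the abstract mentions.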

Diligenti, M., Gori, M., Saccà, C. (2016). Learning in Variable-Dimensional Spaces. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 27(6), 1322-1332 [10.1109/TNNLS.2015.2497275].

Learning in Variable-Dimensional Spaces

Diligenti, Michelangelo; Gori, Marco; Saccà, Claudio
2016-01-01

Files in this record:
Learning Diligenti.pdf — publisher's PDF (Adobe PDF, 2.24 MB); restricted access (non-public license), available on request.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/998970