
Roychowdhury, S., Diligenti, M., Gori, M. (2021). Regularizing deep networks with prior knowledge: A constraint-based approach. KNOWLEDGE-BASED SYSTEMS, 222 [10.1016/j.knosys.2021.106989].

Regularizing deep networks with prior knowledge: A constraint-based approach

Diligenti, Michelangelo
;
Gori, Marco
Member of the Collaboration Group
2021-01-01

Abstract

Deep Learning architectures can develop feature representations and classification models in an integrated way during training. This joint learning process requires large networks with many parameters, and it is successful when a large amount of training data is available. Instead of making the learner develop its entire understanding of the world from the input examples alone, injecting prior knowledge into the learner is a principled way to reduce the amount of required training data, as the learner does not need to induce the rules from the data. This paper presents a general framework to integrate arbitrary prior knowledge into learning. The domain knowledge is provided as a collection of first-order logic (FOL) clauses, where each task to be learned corresponds to a predicate in the knowledge base. The logic statements are translated into a set of differentiable constraints, which can be integrated into the learning process to distill the knowledge into the network, or used during inference to enforce the consistency of the predictions with the prior knowledge. Experiments carried out on multiple image datasets show that integrating the prior knowledge boosts the accuracy of several state-of-the-art deep architectures on image classification tasks.
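As a rough illustration of the idea described above, the sketch below translates a single FOL rule into a differentiable penalty using a product t-norm relaxation of the implication. This is an assumed, minimal formulation for illustration only (the predicate names, the choice of t-norm, and the loss shape are not taken from the paper); in practice the same expression would be written over differentiable network outputs and added to the supervised loss.

```python
import numpy as np

def implication_constraint_loss(p_dog, p_animal):
    """Average degree of violation of the rule: forall x, dog(x) -> animal(x).

    Under a product t-norm relaxation, the implication A -> B is scored as
    1 - p(A) * (1 - p(B)), so p(A) * (1 - p(B)) measures how strongly the
    rule is violated on each example. Minimizing the mean violation pushes
    the predictions toward consistency with the rule.
    """
    violation = p_dog * (1.0 - p_animal)
    return float(np.mean(violation))

# Hypothetical predicted probabilities for a batch of three images.
p_dog = np.array([0.9, 0.1, 0.8])
p_animal = np.array([0.95, 0.2, 0.3])
loss = implication_constraint_loss(p_dog, p_animal)
# The third example strongly violates the rule (likely "dog", unlikely
# "animal"), so it dominates the constraint penalty.
```

In a full training setup, a term like this would be weighted and summed with the usual cross-entropy loss, so the network is regularized toward predictions consistent with the knowledge base while still fitting the labeled data.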
2021
Files in this product:

File: 1-s2.0-S0950705121002525-main.pdf
Access: open access
Type: publisher's PDF
License: Creative Commons
Size: 555.92 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1138590