
CASTELLANO ONTIVEROS, R., Bonabi Mobaraki, E., Giannini, F., Barbiero, P., Gori, M., Diligenti, M. (2025). Interpretable-by-design Neural-Symbolic Link Prediction on Knowledge Graphs. In Proceedings of the 3rd World Conference on eXplainable Artificial Intelligence (XAI-2025).

Interpretable-by-design Neural-Symbolic Link Prediction on Knowledge Graphs

Rodrigo Castellano Ontiveros; Marco Gori; Michelangelo Diligenti

Abstract

Knowledge Graph Embedding models have shown remarkable performance in tasks such as knowledge graph completion. However, they inherently lack interpretability, making it difficult to understand the reasoning behind their predictions. While several Neural-Symbolic (NeSy) models have been proposed to achieve interpretable reasoning through logic rules, existing evaluations primarily focus on accuracy, overlooking the critical assessment of explanation quality. This paper addresses this gap by introducing fully “interpretable-by-design” NeSy approaches for link prediction inspired by recently proposed models. Our framework employs reasoners that generate explicit logic proofs, utilizing either predefined or learned logic rules, thereby ensuring transparent and explainable predictions. We go beyond traditional accuracy assessments, evaluating the quality of these explanations using established XAI metrics, including coherence. By quantitatively assessing the interpretability of our model, we aim to advance the development of trustworthy and understandable link prediction systems for Knowledge Graphs.

Use this identifier to cite or link to this document: https://hdl.handle.net/11365/1290414