
XAI Unveiled: Revealing the Potential of Explainable AI in Medicine: A Systematic Review

Scarpato N.; Ferroni P.; Guadagni F.
2024-01-01

Abstract

Artificial intelligence now plays a leading role in medicine. This makes it necessary to ensure that artificial intelligence systems are not only high-performing but also comprehensible to all stakeholders involved, including doctors, patients, and healthcare providers. As a result, the explainability of artificial intelligence systems has become a widely discussed subject, leading to the publication of numerous approaches and solutions. In this paper, we provide a systematic review of these approaches in order to analyze their role in making artificial intelligence interpretable for everyone. The review was carried out in accordance with the PRISMA statement. We conducted a risk-of-bias analysis, identifying 87 of the retrieved scientific papers as having a low risk of bias. We then defined a classification framework based on an established taxonomy and applied it to analyze these papers. The results show that, although most AI approaches in medicine currently incorporate explainability methods, the evaluation of these systems is not always performed. When evaluation does occur, it most often focuses on improving the system itself rather than on assessing users' perception of the system's effectiveness. To address these limitations, we propose a framework for evaluating explainability approaches in medicine, aimed at guiding developers in designing effective human-centered methods.
Keywords

artificial intelligence
explainability
explainable artificial intelligence
interpretability
interpretable artificial intelligence
medicine
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12078/24906
Warning: the data displayed have not been validated by the university.
