
Explainable AI and Multi-Modal Causability in Medicine


i-com, Volume 19 (3): 9 – Jan 26, 2021

Publisher: de Gruyter
Copyright: © 2020 Holzinger, published by De Gruyter
ISSN: 2196-6826
eISSN: 2196-6826
DOI: 10.1515/icom-2020-0024

Abstract

Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex “black boxes”, which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g. to highlight which input parameters are relevant for a result; in the medical domain, however, there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability with causability and to allow a domain expert to ask questions in order to understand why an AI came up with a result, and also to ask “what-if” questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
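The two kinds of queries the abstract distinguishes — "which inputs were relevant?" and "what if an input had been different?" — can be illustrated with a minimal sketch. The model, its weights, and the feature names below are purely hypothetical toy values (not from the article); the point is only the shape of a relevance query versus a counterfactual query against the same model.

```python
import math

# Hypothetical hand-weighted logistic "risk model" — illustrative only.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.03, "smoker": 0.9}
BIAS = -6.0

def predict(patient):
    """Return the model's risk estimate in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def relevance(patient):
    """Simple attribution: each feature's contribution to the logit."""
    return {k: WEIGHTS[k] * patient[k] for k in WEIGHTS}

def counterfactual(patient, feature, new_value):
    """'What-if' query: re-run the model with one feature changed."""
    altered = dict(patient, **{feature: new_value})
    return predict(altered)

patient = {"age": 60, "systolic_bp": 150, "smoker": 1}
p = predict(patient)                                 # the model's result
contrib = relevance(patient)                         # which inputs drove it
p_nonsmoker = counterfactual(patient, "smoker", 0)   # what if not a smoker?
```

In a linear model this attribution is exact; for the complex black-box models the abstract refers to, xAI methods approximate such contributions, and the counterfactual query stays the same in form: perturb an input, observe the change in the output.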

Journal

i-com, de Gruyter

Published: Jan 26, 2021

Keywords: explainable AI; Human-Centered AI; Human-AI interfaces
