Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence


Philosophy & Technology, Volume 34 (2) – Dec 20, 2019

Publisher: Springer Journals
Copyright: © Springer Nature B.V. 2019
Subject: Philosophy; Philosophy of Technology
ISSN: 2210-5433
eISSN: 2210-5441
DOI: 10.1007/s13347-019-00382-7

Abstract

Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from philosophy of science, this framework is modeled after accounts of explanation in cognitive science. The framework distinguishes between the explanation-seeking questions that are likely to be asked by different stakeholders, and specifies the general ways in which these questions should be answered so as to allow these stakeholders to perform their roles in the Machine Learning ecosystem. By applying the normative framework to recently developed techniques such as input heatmapping, feature-detector visualization, and diagnostic classification, it is possible to determine whether and to what extent techniques from Explainable Artificial Intelligence can be used to render opaque computing systems transparent and, thus, whether they can be used to solve the Black Box Problem.
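To give a concrete sense of the first technique the abstract names, input heatmapping, the sketch below computes a simple gradient-based saliency map by backpropagating a classifier's top-class score to its input pixels. This is an illustration only, not the paper's own method: the pretrained ResNet-18, the random stand-in image, and all variable names are assumptions made for the example.

```python
# Illustrative sketch of gradient-based input heatmapping (saliency).
# The pretrained ResNet-18 and the random stand-in image are assumptions
# for this example, not the paper's own method or data.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# One (1, 3, 224, 224) input; a real application would use an actual image here.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score to the input pixels.
logits[0, top_class].backward()

# The heatmap is the per-pixel gradient magnitude: pixels whose perturbation
# would most change the decision score receive the highest values.
heatmap = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
print(heatmap.shape, float(heatmap.max()))
```

Such a heatmap is the kind of analytic output that, on the paper's framework, must be matched to a stakeholder's explanation-seeking question before it counts as rendering the system transparent.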

Journal

Philosophy & Technology, Springer Journals

Published: Dec 20, 2019
