Determinism and Social Paradoxes of Explainable Artificial Intelligence (XAI)

Soichiro Toda, Eisuke Nakazawa
Volume 5 Issue 2 Pages 47-58
First Published: March 31, 2023


Abstract

The core philosophical issue of explainable artificial intelligence (XAI) is clarifying the meaning and implications of the term "explanation." We call the process of bridging the cognitive gap that exists between artificial intelligence (AI) and humans "explanation." If, through explanation, an AI is recognized as a moral agent, then and only then is the AI permitted to act in a way that satisfies the person, that is, as an XAI. AIs and humans have different operating systems to begin with. Nevertheless, the condition that an AI is a moral actor, i.e., a decision maker, is crucial for the AI to be recognized as an XAI. Furthermore, treating an XAI as a moral actor dissolves the infinite regress of explanations that arises in arguments about XAI. As an aid to this understanding, we examine the requirements for the social implementation of XAI, using the ethically interesting case of triage as a starting point. Then, we highlight a practical and philosophical paradox that cannot be resolved: can XAI create a story for explanation? We also discuss the trade-off between "accuracy" and "humanity" and suggest further topics for future research.


Key words

explainable artificial intelligence, AI, philosophy, moral agency