Artificial Intelligence (AI) is believed to have the potential to produce radical changes in health care. AI systems aim to improve the diagnosis, prediction, and treatment of a wide array of health conditions. One assumption is that AI will enable more accurate and efficient diagnosis and, by freeing health care professionals to spend more time with their patients, restore the precious and time-honored connection and trust between them, i.e., the human touch. Meanwhile, according to a manuscript published in the December 2022 issue of the journal Ethics and Information Technology, sophisticated self-learning AI systems that do not follow predetermined decision rules, often referred to as black boxes, have spawned philosophical debate. This black-box nature is considered a major ethical challenge for the use of such systems, and it remains disputed whether their explainability is philosophically and computationally possible.
Among various observations, the authors point out that the question of whether AI can be explained draws a diversity of answers from computer science, philosophy, and the natural sciences (e.g., biology, medicine, chemistry). In molecular biology, for instance, mechanistic explanations, which describe the behavior of the mechanisms underlying real-world phenomena, are the standard. If the same explanatory logic is applied to AI, however, claims of genuine explanatory success become much harder to uphold. The authors acknowledge the claim that explanations of phenomena predicted by AI are unforthcoming because no causal relations underlie the model's predictions: AI systems search for (high) correlations between features in the data, but they do so without a theoretical backing that would supply causal relations.

The authors also express a two-fold concern about fully automated explanation in the context of medicine. First, medical AI delivers classifications, but classifications are not explanations. Second, an explanation requires a bona fide structure for that explanation. Reference is also made to many remaining questions that appeal to both epistemologists and ethicists: Should explainability play a role, and if so, which role, in the responsible implementation of AI in medicine and health care?
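To make the correlation-without-causation point concrete, consider a minimal sketch (an illustration of the general concern, not an example from the manuscript; the scanner_id feature and all numbers are hypothetical). A classifier is trained on data in which a non-causal site artifact happens to correlate with the disease label; it scores well at the training site yet collapses to chance once the confound is absent, because nothing causal underwrote its predictions.

```python
# Toy illustration (hypothetical, not from the paper): a model can score
# well by exploiting a feature that merely correlates with the label,
# without any causal relation to the disease.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# The disease is driven by an unobserved causal factor.
causal = rng.normal(size=n)
disease = (causal + 0.3 * rng.normal(size=n) > 0).astype(int)

# "scanner_id" is a non-causal site artifact: at the training hospital,
# sicker patients happened to be imaged on scanner 1 (confounding by site).
scanner_id = (disease + 0.3 * rng.normal(size=n) > 0.5).astype(float)

# Train on the artifact plus a pure-noise feature; no causal feature is given.
X_train = np.column_stack([scanner_id, rng.normal(size=n)])
model = LogisticRegression().fit(X_train, disease)
print("train-site accuracy:", model.score(X_train, disease))  # high, ~0.95

# At a new hospital, scanner assignment is random, so the correlation the
# model latched onto no longer holds.
scanner_new = rng.integers(0, 2, size=n).astype(float)
X_new = np.column_stack([scanner_new, rng.normal(size=n)])
print("new-site accuracy:", model.score(X_new, disease))  # ~0.5, chance level
```

A post hoc explanation of this model would faithfully report that scanner_id drives its predictions, but that report explains the model, not the disease, which is one way to read the gap between classification and explanation.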