Explainable Artificial Intelligence in Medical Decision Support Systems
2: Karunya University, India
3: Department of Electrical Engineering, University of Colorado Denver, USA
4: Department of Electronics & Communication Engineering, Sikkim Manipal Institute of Technology, India
Medical decision support systems (MDSS) are computer-based programs that analyse data within a patient's healthcare records to provide questions, prompts, or reminders to assist clinicians at the point of care. Inputting a patient's data, symptoms, or current treatment regimens into an MDSS, clinicians are assisted with the identification or elimination of the most likely potential medical causes, which can enable faster discovery of a set of appropriate diagnoses or treatment plans. Explainable AI (XAI) is a "white box" model of artificial intelligence in which the results of the solution can be understood by the users, who can see an estimate of the weighted importance of each feature on the model's predictions, and understand how the different features interact to arrive at a specific decision.
This book discusses XAI-based analytics for patient-specific MDSS as well as related security and privacy issues associated with processing patient data. It provides insights into real-world scenarios of the deployment, application, management, and associated benefits of XAI in MDSS. The book outlines the frameworks for MDSS and explores the applicability, prospects, and legal implications of XAI for MDSS. Applications of XAI in MDSS such as XAI for robot-assisted surgeries, medical image segmentation, cancer diagnostics, and diabetes mellitus and heart disease prediction are explored.
Inspec keywords: decision making; diseases; neural nets; decision support systems; medical computing; data privacy; security of data; explanation; health care; learning (artificial intelligence)
Subjects: Decision support systems; Biology and medical computing; Neural nets; Data security; General and management topics; Reasoning and inference techniques; Medical administration
- Book DOI: 10.1049/PBHE050E
- ISBN: 9781839536205
- e-ISBN: 9781839536212
- Page count: 545
- Format: PDF
-
Front Matter
(1)
-
1 Explainable artificial intelligence (XAI) in medical decision support systems (MDSS): healthcare systems perspective
pp. 1–43 (43)
The healthcare sector is very interested in machine learning (ML) and artificial intelligence (AI). Nevertheless, applying AI in scientific contexts is difficult due to explainability issues. Explainable AI (XAI) has been studied as a potential remedy for the problems with current AI methods. In contrast to opaque AI techniques such as deep learning, ML combined with XAI may be capable of both making judgments and explaining its models. Medical decision support systems (MDSS) are computer applications that influence the decisions doctors make regarding particular patients at a specific moment. MDSS have played a crucial role in efforts to improve patient safety and the standard of care, particularly for non-communicable illnesses. They have, moreover, been a crucial prerequisite for effectively utilizing electronic health record (EHR) data. This chapter offers a broad overview of the application of XAI in MDSS for various infectious diseases, summarizes recent research on the use and effects of MDSS in healthcare with regard to non-communicable diseases, and offers suggestions for users to keep in mind as these systems are incorporated into healthcare systems and utilized outside research and development contexts.
-
2 Explainable artificial intelligence (XAI) in medical decision support systems (MDSS): applicability, prospects, legal implications, and challenges
pp. 45–90 (46)
The healthcare sector is very interested in machine learning (ML) and artificial intelligence (AI). Nevertheless, applying AI in scientific contexts is difficult because of issues with explainability. Explainable AI (XAI) has been studied as a possible remedy for the issues with current AI methods. In contrast to opaque AI techniques such as deep learning, ML combined with XAI may be capable of both making judgments and explaining its models. Medical decision support systems (MDSS) are computer applications that influence the decisions doctors make regarding particular patients at a specific moment. MDSS have played a crucial role in efforts to advance patient wellbeing and the standard of care, particularly for non-communicable illnesses. Moreover, they have been a crucial prerequisite for the effective utilization of electronic health record (EHR) data. This chapter offers a comprehensive overview of the application of AI and XAI in MDSS, summarizes recent research on the use and effects of MDSS in healthcare, and offers suggestions for users to keep in mind as these systems are integrated into healthcare systems and utilized outside research and development contexts.
-
3 Explainable Artificial Intelligence-based framework for medical decision support systems
pp. 91–116 (26)
The rise in death tolls from infectious diseases has become one of the most severe health problems and the largest source of death globally. Artificial Intelligence (AI)-based models have emerged to assist medical experts in decision-making and thus reduce mortality and morbidity rates. However, the most prominent weakness of these algorithms is the lack of interpretation of their results. In other words, the end-user is unfamiliar with the fundamental logic that supports a prediction. Due to their black-box nature, physicians struggle to understand these models; they therefore often fail to win the confidence of medical practitioners and, in most cases, are not permitted in medical practice. This chapter reviews the most substantial arguments for and against explainable AI (XAI) in medical Decision Support Systems (MDSS), together with future prospects. The chapter proposes a framework that addresses the above-mentioned issue in AI-based models using deep Shapley additive explanations (DeepSHAP) for predicting various diseases. The framework relies on a deep neural network architecture combined with a feature selection method for disease prediction with an explanation. The proposed framework will provide medical experts with more accurate and personalized results for disease prediction and facilitate improved decision-making.
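The additive-explanation idea behind SHAP can be sketched with a toy example: for a small model, the Shapley value of each feature is its average marginal contribution over all feature coalitions. The feature names, weights, and scoring function below are hypothetical, for illustration only; the chapter's DeepSHAP approximates these values efficiently for deep networks rather than enumerating coalitions.

```python
# Exact Shapley values by brute-force coalition enumeration (illustrative).
from itertools import combinations
from math import factorial

FEATURES = ["glucose", "bmi", "age"]  # hypothetical risk factors

def model(present):
    # Toy additive "risk score": sum of fixed weights for features present.
    weights = {"glucose": 0.5, "bmi": 0.3, "age": 0.2}
    return sum(weights[f] for f in present)

def shapley(feature):
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    total = 0.0
    for k in range(len(others) + 1):
        for coal in combinations(others, k):
            # Shapley kernel weight for a coalition of size k.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (model(set(coal) | {feature}) - model(set(coal)))
    return total

phi = {f: shapley(f) for f in FEATURES}
# For an additive model, each feature's Shapley value equals its weight,
# and the attributions sum to the full model output (local accuracy).
```

For an additive model the decomposition is exact; DeepSHAP's contribution is making such attributions tractable for deep networks where the exponential enumeration above is infeasible.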
-
4 Prototype interface for detecting mental fatigue with EEG and XAI frameworks in Industry 4.0
pp. 117–136 (20)
Mental fatigue correlates with prolonged cognitive activity. It arises when the brain is overloaded with ideas and thoughts that translate into commitments, jobs, and to-dos at home, leaving a person exhausted and hindering productivity and overall cognitive function. Extracranial electroencephalogram (EEG) signals are an excellent indicator of a person's brain condition, and mental fatigue increases power in the frontal theta (θ) and parietal alpha (α) EEG rhythms. Meanwhile, artificial intelligence (AI) applied to EEG signals has improved classification and regression results in different applications through new convolutional neural networks (CNNs), including EEGNet. Results reported in the literature include applications for disabled persons and the detection of fatigued driving, mental workload, and schizophrenia. Despite the benefits of applying CNNs to interpret EEG signals, final products remain limited by the expertise required to work with these models. Explainable AI (XAI), by contrast, refers to the principle of presenting how an AI operates and the results it obtains in the most user-friendly way possible. Explainable models must provide a clear description of their results without sacrificing learning efficiency, and users must be able to understand the emerging generation of AI mechanisms, place a degree of trust in them, and work with and manage them efficiently. The present chapter proposes a new application that combines the advantages of EEG signals with the EEGNet structure, adding explainable intelligent models that simplify the detection of mental fatigue and help prevent accidents in Industry 4.0. A study of various activities as stimuli in a workstation scenario is analyzed to determine criteria for preventing accidents in the physical plant of an industrial building.
Non-invasive devices typically struggle to provide high-quality signals, while invasive systems allow greater precision; in this project, a non-invasive device is used for this purpose.
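The theta/alpha fatigue marker described above can be sketched numerically: estimate band power from a single EEG channel via the FFT. The sampling rate and synthetic signal below are assumptions for illustration, not the chapter's recording setup.

```python
# Band-power estimation for the theta (4-8 Hz) and alpha (8-13 Hz) rhythms.
import numpy as np

fs = 256                        # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)     # 4 seconds of signal
# Synthetic "fatigued" EEG: strong 6 Hz theta plus weaker 10 Hz alpha.
sig = 3.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)

spec = np.abs(np.fft.rfft(sig)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(len(sig), 1 / fs)     # frequency axis in Hz

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].sum()

theta = band_power(4, 8)
alpha = band_power(8, 13)
# Elevated frontal theta relative to alpha is the fatigue indicator above.
```

In a real pipeline these band powers (or the raw epochs fed to EEGNet) would be computed per channel and per epoch before classification.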
-
5 XAI for medical image segmentation in medical decision support systems
pp. 137–165 (29)
Medical image segmentation has contributed immensely to medical care delivery. With the speedy development of deep learning (DL), medical image segmentation based on deep convolutional neural networks (CNNs) has become a research interest. Explainable artificial intelligence (XAI) provides pathways toward useful MDSSs. The necessity for XAI in MDSSs rests largely on ethical and fair decision making, on strengthening the accountability of procedures, and on revealing any unfairness introduced during medical image segmentation. Studies have shown that inaccurate diagnoses often result from failure to identify the limits of a pathological lesion or organ, and it is clear that the likelihood of survival improves if a tumor is identified and classified properly at an early stage. In this study, we provide an enhanced application of fuzzy C-means and an artificial neural network algorithm for medical image segmentation. The chapter reviews and contrasts techniques for the automatic detection of brain tumors in magnetic resonance imaging (MRI) through the application of fuzzy C-means and artificial neural networks (ANN). Explanations attached to these AI processes create medical decision confidence, trustworthiness, and acceptability, and open the way to incorporation in the medical image segmentation workflow. Based on the discussion of human pathological tissues and organs, the specificity between them and their classic segmentation algorithms is revealed.
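The fuzzy C-means step named above can be sketched in a few lines: each pixel receives a soft membership in every cluster, and centres are updated as membership-weighted means. The one-dimensional "intensity" data below is synthetic, standing in for tissue versus lesion pixels; it is a minimal sketch, not the chapter's segmentation pipeline.

```python
# Minimal fuzzy C-means on synthetic 1-D pixel intensities.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic intensity clusters (e.g. healthy tissue vs. lesion).
data = np.concatenate([rng.normal(0.2, 0.05, 50), rng.normal(0.8, 0.05, 50)])

def fuzzy_c_means(x, c=2, m=2.0, iters=50):
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m                          # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

centers, u = fuzzy_c_means(data)
# The soft memberships in u are what a downstream ANN or an explanation
# method can inspect, unlike a hard k-means assignment.
```

On image data the same update runs over pixel feature vectors (intensity, texture) instead of scalars; the fuzzifier `m` controls how soft the boundaries between tissue classes are.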
-
6 XAI robot-assisted surgeries in future medical decision support systems
pp. 167–195 (29)
Artificial intelligence (AI) models are gaining widespread application in areas such as healthcare, especially robotic surgery. The output of these models needs to be easily explained to surgeons and other stakeholders; such explanations help stakeholders and end-users establish trust in, and understand, the output of a model. However, there are identified limitations to fully implementing these AI models, particularly in critical areas such as robotic surgery, mainly due to the complexity of their results, patient safety, and growing security concerns. Explainable AI (XAI) thus aims to bridge the gap in understanding the results of AI models. Toward this end, this chapter provides an overview of the current applications, importance, and limitations of XAI robot-assisted surgeries in medical decision support systems (MDSS). The chapter discusses patients' privacy and security concerns when XAI techniques are used in robotic surgeries, explores current trends and issues regarding the future deployment of XAI robot-assisted surgeries in supporting medical decision-making systems, and finally addresses the limitations of the machine learning (ML) tools used for robotic surgeries.
-
7 Prediction of erythemato-squamous disease using an ensemble learning framework
pp. 197–228 (32)
Erythemato-squamous (skin) disease data are characterized by redundant and noisy features, and finding the features relevant to the target concept has been one of the biggest challenges in the artificial intelligence field, a result of the similarities among the six classes in the dataset. In the literature, most studies focus mainly on building models with one-phase combined feature selection methods. This chapter experimentally assesses the performance of models derived from machine learning techniques using ensemble feature selection. The skin dataset was evaluated using chi-squared, information gain, gain ratio, and relief-F as filter-based feature selection methods, and RFE, PRIFEB, and MIFEB based on SVMs as embedded feature selection methods, to determine distinctive feature subsets. A variety of classification algorithms were then used to create models, which were compared to find the feature combinations that optimise model performance. The experimental results show that the proposed stacking models outperform the other models in terms of accuracy and applicability.
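The chi-squared filter step mentioned above can be sketched directly: score each binary feature against the class label from a 2×2 contingency table and keep the high scorers. The data below is synthetic, for illustration; the chapter applies several such filters and combines their rankings.

```python
# Chi-squared scoring of binary features against a binary class label.
import numpy as np

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)                               # class label
informative = (y ^ (rng.random(200) < 0.1)).astype(int)   # tracks y, 10% flips
noise = rng.integers(0, 2, 200)                           # independent of y

def chi2_score(feat, label):
    obs = np.zeros((2, 2))
    for f, l in zip(feat, label):
        obs[f, l] += 1                                    # contingency counts
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - exp) ** 2 / exp).sum()                 # chi-squared statistic

s_inf = chi2_score(informative, y)
s_noise = chi2_score(noise, y)
# The informative feature scores far higher and would be retained;
# ensemble selection repeats this with other criteria and merges the ranks.
```

Filters like information gain and relief-F follow the same score-and-rank pattern with a different per-feature statistic, which is what makes combining them into an ensemble straightforward.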
-
8 Security-based explainable artificial intelligence (XAI) in healthcare systems
pp. 229–257 (29)
Explainable Artificial Intelligence (XAI) is one of the most advanced research areas of Artificial Intelligence (AI). Its main objective is to explain deep learning (DL) models: it deals with artificial models that are understandable to humans, including users, developers, and policymakers. XAI is very important in critical domains like security and healthcare. The purpose of XAI is to provide a clear answer to the question of how a model made its decision, and this explanation matters before any system decision is acted on. For example, if a system returns a decision, it is necessary to have insight into how the model reached it; the decision can be positive or negative, but it is more important to know which characteristics it was based on. A model's decision can be trusted when the internal structure of the DL model is known, yet DL models are generally black-box models, so for security purposes it is necessary to explain a system's internals for any decision-making. Security is crucial in healthcare as in any other domain. The objective of this research is to provide security-related decisions based on XAI, which is a big challenge, and to advance security systems based on XAI to the next level. In medical and healthcare security, when human action is recognized using transfer learning, one pre-trained model may perform well on an action while another pre-trained model performs poorly on the same action in terms of accuracy. This is the black-box model problem: we need to know the internal mechanisms of both models for the same action. Why does one model perform well on an action while another does not? Answering this requires a model-specific, post-hoc interpretability approach to uncover the internal structure and characteristics of both models for the same action.
-
9 Explainable dimensionality reduction model with deep learning for diagnosing hypertensive retinopathy
pp. 259–283 (25)
Artificial intelligence (AI) is a division of computer science that deals with the creation and training of algorithms that attempt to mimic human intellect. Diabetic retinopathy is a major cause of eyesight loss worldwide, and AI-based technologies have recently been employed to diagnose and assess it; early identification allows for adequate therapy, preventing eyesight loss. Machine learning techniques can extract features from images and determine the existence of diabetic retinopathy. In computer-assisted medical image analysis for the identification of illnesses like hypertension, diabetes and diabetic nephropathy, and arteriosclerosis, automatic retinal image segmentation is a crucial problem. The identification of retinal vessels allows for the early discovery of diabetic retinopathy, a main cause of vision loss. Conventional identification of these retinal blood vessels is a time-consuming procedure that can be automated; accordingly, dimensionality reduction techniques like linear discriminant analysis (LDA) combined with deep learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and recurrent neural networks are recommended. In this study, the machine learning algorithm LDA was used together with the deep learning methods CNN, ANN, and multi-layer perceptron for classification in the diagnosis of hypertensive retinopathy. The data were first transformed using LDA before being passed into CNN, ANN, and ResNet models, which achieved accuracies of 86.00%, 84.32%, and 43.29%, respectively; ANN nevertheless required the shortest time to run, at 2.50 s.
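The LDA stage described above can be sketched as a two-class Fisher discriminant: project feature vectors onto the direction that best separates the classes before handing them to a neural classifier. The two-dimensional "retinal features" below are synthetic; this is a minimal sketch of the technique, not the chapter's pipeline.

```python
# Two-class Fisher LDA computed directly in NumPy.
import numpy as np

rng = np.random.default_rng(2)
healthy = rng.normal([0, 0], 0.5, (100, 2))   # synthetic retinal features
diseased = rng.normal([2, 2], 0.5, (100, 2))

mu0, mu1 = healthy.mean(0), diseased.mean(0)
# Within-class scatter (shared covariance, up to scaling).
sw = np.cov(healthy.T) + np.cov(diseased.T)
w = np.linalg.solve(sw, mu1 - mu0)            # Fisher discriminant direction
threshold = w @ (mu0 + mu1) / 2               # midpoint decision rule

pred = np.concatenate([healthy @ w > threshold, diseased @ w > threshold])
truth = np.concatenate([np.zeros(100), np.ones(100)]).astype(bool)
accuracy = (pred == truth).mean()
# In the chapter's setup, the LDA-transformed data (not the raw pixels)
# is what gets passed into the CNN/ANN classifiers.
```

Because LDA's projection is a simple linear rule, the transformed inputs remain easy to relate back to the original features, which supports the explainability goal.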
-
10 Understanding cancer patients with diagnostically influential factors using high-dimensional data embedding
pp. 285–311 (27)
Analysing breast cancer data is a long-established research topic from both medical diagnosis and data modelling perspectives. Numerous predictive models have been employed in modelling breast cancer data, e.g., predicting a patient's survival rate given certain medical circumstances and the patient's demographics. However, these predictive models tend to take a black-box approach to the modelling and can therefore hardly provide any explainable results to be applied for diagnostic purposes, in particular if neural network-based models are utilised. On the other hand, identifying diagnostically influential factors with exploratory descriptive models has proven difficult due to the high dimensionality of the breast cancer data under consideration. For instance, the breast cancer data provided by SEER, The Surveillance, Epidemiology, and End Results Program, typically has more than 100 dimensions of numeric and categorical data types and can expand to about 1,000 dimensions for analysis if orthogonal (one-hot) encoding is applied. Hence, effectively interpreting and understanding high-dimensional data becomes crucial in modelling cancer data, which is why dimensionality reduction and manifold learning algorithms have been studied intensively; many relevant algorithms are available, each with pros and cons of its own. In this chapter, a comparative study is presented that aims to provide visualized, explainable insights in breast cancer survival rate analysis and to identify critical influential factors that strongly determine the likelihood of a patient's survival. Two dimensionality reduction algorithms are considered for comparison: the typical and popular t-distributed stochastic neighbor embedding (t-SNE) algorithm and a relatively new same degree distribution (SDD) algorithm.
The experiments demonstrate that, under the same embedding performance assessment metrics, the SDD algorithm can achieve much better data embedding results, results that would be difficult or impossible to obtain with t-SNE. Furthermore, using the reliable embedding results from SDD, meaningful and explainable factors have been identified that crucially reflect the similarities of the patients who survived and the diversity of the patients who, unfortunately, died. Clusters of patients who survived are clearly recognizable in a two-dimensional embedding space, whereas the embedded points of patients who died are significantly scattered across the space. The entire package of code used for the analysis is available for replication.
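The embedding step can be sketched with scikit-learn's t-SNE, the baseline the chapter compares SDD against: high-dimensional "patient" vectors are projected to 2-D so that well-separated groups stay visually separated. The data below is synthetic, for illustration only.

```python
# t-SNE embedding of synthetic high-dimensional patient records.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
survived = rng.normal(0, 1, (40, 20))   # synthetic 20-D records, group A
died = rng.normal(4, 1, (40, 20))       # synthetic 20-D records, group B
X = np.vstack([survived, died])

emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
# Groups that are separated in the input space form separate clusters in
# the 2-D embedding, which is what makes survivor clusters recognisable.
```

In the chapter's setting the embedding is then inspected to pick out the factors that distinguish the survivor cluster, which is where a more faithful embedding (the SDD claim) matters.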
-
11 Explainable neural networks in diabetes mellitus prediction
pp. 313–334 (22)
Artificial Intelligence (AI) has been widely applied in healthcare for several purposes, especially in disease prediction, enabling physicians to diagnose patients' conditions more accurately. Results generated by traditional AI models are difficult to justify due to the opaqueness of the models, making it difficult for physicians to trust the results and use them in real-life practice. Recent advancements in explainable AI (XAI) have made the results more reliable, making it possible for physicians to embrace AI in clinical practice. The explainable deep neural network (xDNN) is a machine learning technique that can enhance diabetes mellitus prediction while explaining its results. This chapter focuses on using explainable neural networks (xNNs) in diabetes mellitus prediction, providing valuable insights into the key steps and techniques involved; in particular, the sequence for implementing the model in the R programming language is discussed. To demonstrate the implementation of xNNs for diabetes mellitus prediction, the Pima Indian diabetes datasets were used, and the model was assessed on accuracy, sensitivity, specificity, precision, recall, and F1 score. Additionally, the chapter discusses the different methods of implementing explainability in XAI and provides a clear illustration using the variable importance tool in R. The results reveal the effect of each variable on the overall model; we found that variable importance varies with the network architecture and that, overall, the diabetes pedigree function is the least important predictor of diabetes mellitus in the model.
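The variable-importance idea can be sketched as permutation importance: shuffle one feature at a time and measure the drop in accuracy. The feature names follow the Pima dataset, but the data is synthetic and the model is a simple logistic fit standing in for the chapter's explainable neural network (the chapter itself works in R).

```python
# Permutation importance on a synthetic diabetes-style prediction task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
glucose = rng.normal(0, 1, n)
pedigree = rng.normal(0, 1, n)
# Outcome depends strongly on glucose, only weakly on pedigree.
y = (2.0 * glucose + 0.1 * pedigree + rng.normal(0, 1, n) > 0).astype(int)
X = np.column_stack([glucose, pedigree])

model = LogisticRegression().fit(X, y)
base = model.score(X, y)

def perm_importance(col):
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])  # break this feature's link to y
    return base - model.score(Xp, y)

imp_glucose = perm_importance(0)
imp_pedigree = perm_importance(1)
# Permuting glucose costs far more accuracy than permuting pedigree,
# mirroring the finding that the pedigree function matters least here.
```

The same recipe works unchanged for a trained neural network, since it only needs predictions, not model internals.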
-
12 A KNN and ANN model for predicting heart diseases
pp. 335–356 (22)
The heart is the single most important organ in the human body, and heart failure's devastating effects on contemporary society are borne by patients, professionals, and medical systems alike. Since heart failure may be misattributed to other conditions or go unobserved, particularly in the vast population of patients who have other cardiovascular disorders, its true prevalence is likely underestimated; it is recorded for only 1-4% of all hospitalized patients in developed nations. A person with heart failure has a heart that is unable to circulate sufficient blood through the body, but the term "heart failure" does not explain why this happens. The clinical picture is confusing, since there are several possible causes of heart problems, many of which are diseases in and of themselves. Many cases of heart failure can be avoided if the underlying medical conditions that cause them are identified and treated promptly. The study and prediction of cardiac conditions must be precise because numerous diseases are connected to the cardiovascular system, and resolving this problem requires intensive research on the topic. Since incorrect illness prognoses are a leading cause of death among heart patients, learning more about effective prediction algorithms is crucial.
This research utilizes K-nearest neighbour (KNN) and artificial neural network (ANN) models to assess cardiovascular disease using data collected from Kaggle. The highest accuracy (96%) was achieved by an ANN trained with the standard scaler. Medical experts, specialists, and academics can all benefit greatly from this study; based on its results, cardiologists will be able to make more knowledgeable decisions about the prevention, diagnosis, and management of heart disease.
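The scaler-plus-classifier setup can be sketched as a scikit-learn pipeline: a standard scaler feeding a KNN model. The two "heart disease" features below are synthetic stand-ins (the study used a Kaggle dataset), and the label rule is an assumption for illustration.

```python
# StandardScaler + KNN pipeline on synthetic heart-disease-style features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
age = rng.normal(55, 10, 300)            # feature on one scale
chol = rng.normal(240, 40, 300)          # feature on a much larger scale
y = (age + chol / 10 > 80).astype(int)   # synthetic disease label
X = np.column_stack([age, chol])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
# Scaling matters for KNN: without it, cholesterol's larger numeric range
# would dominate the distance computation and drown out age.
```

The same pipeline shape applies to the ANN variant; wrapping the scaler and model together also ensures the scaler is fitted only on training data.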
-
13 Artificial Intelligence-enabled Internet of Medical Things for COVID-19 pandemic data management
pp. 357–380 (24)
The dreaded coronavirus disease (COVID-19), traceable to Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has killed thousands of people worldwide, and the World Health Organization (WHO) has proclaimed the viral respiratory disease a human pandemic. The adverse flare of COVID-19 and its variants has triggered collaborative research interest across all disciplines, especially in medicine and healthcare delivery. Complex healthcare data collected from patients via sensors and devices are transmitted to the cloud for analysis and sharing. However, achieving rapid and intelligent decisions on the processed information is difficult due to the heterogeneity and complexity of the data. Artificial intelligence (AI) has recently appeared as a promising paradigm to address this issue. The introduction of AI to the Internet of Medical Things (IoMT) ushers in the era of the AI of Medical Things (AIoMT), which enables the autonomous operation of sensors and devices to provide a favourable and secure environment for healthcare personnel and patients, and finds successful applications in natural language processing (NLP), speech recognition, and computer vision. In the current emergency, medical records comprising blood pressure, heart rate, oxygen level, temperature, and more are collected to examine patients' medical conditions. However, the power usage of the low-power sensor nodes employed for data transmission to remote data centres poses significant limitations, and sensitive medical information is currently transmitted over open wireless channels that are highly susceptible to malicious attacks, posing a significant security risk. An insightful privacy-aware, energy-efficient architecture using AIoMT for COVID-19 pandemic data handling is presented in this chapter, with the goal of securing the sensitive medical records of patients and other stakeholders in the healthcare domain.
Additionally, this chapter presents an elaborate discussion on improving energy efficiency and minimizing the communication cost to improve healthcare information security. Finally, the chapter highlights the open research issues and possible lines of future research in AIoMT.
-
14 A deep neural network for the identification of lead molecules in antibiotics discovery
pp. 381–400 (20)
In this study, we develop a deep neural network (DNN) model, a multi-layer perceptron (MLP), to classify molecules into "active" and "inactive" compounds using a ligand-based virtual screening approach for lead compound identification at the early stage of antibiotic discovery. Lead identification, a major part of virtual screening in the drug discovery process, is mostly performed by quantitative structure-activity relationship (QSAR)-based methods; the purpose of applying an artificial intelligence (AI) method is to reduce the time, and consequently the costs, that are always associated with the process. The MLP model has several stacked hidden layers and uses a back-propagation algorithm for training. The dataset of experimentally known bioactivities of drug-like compounds and their respective targets was obtained from the ChEMBL database. A biological target of an antibiotic, dihydrofolate reductase (DHFR), was searched in the database to obtain its inhibitors' chemical properties and the IC50 values on which the classification was based. The dataset was preprocessed and split into training and validation sets of 80% and 20%, respectively. With this approach, the compounds were successfully classified into the desired categories, and an accuracy of 0.74 was achieved.
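The classification step can be sketched end to end: label compounds active or inactive by an IC50 cutoff, then train a small MLP on descriptor vectors. The cutoff (1000 nM), the descriptors, and the IC50 model below are illustrative assumptions, not values from ChEMBL or the chapter.

```python
# Active/inactive labelling by IC50 cutoff plus an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
descriptors = rng.normal(0, 1, (n, 8))   # stand-in molecular descriptors
# Synthetic IC50 (nM): lower for compounds with a favourable first descriptor.
ic50 = np.exp(6 - 2 * descriptors[:, 0] + rng.normal(0, 0.5, n))
active = (ic50 < 1000).astype(int)       # 1 = "active" compound (cutoff assumed)

X_tr, X_te, y_tr, y_te = train_test_split(descriptors, active, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)                      # back-propagation training
acc = mlp.score(X_te, y_te)
```

The 80/20 split mirrors the chapter's setup; in practice the descriptors would be computed from compound structures and the IC50 values pulled from the bioactivity records.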
-
15 Statistical test with differential privacy for medical decision support systems
pp. 401–433 (33)
Several statistical testing methods have been employed to offer accessible analysis regarding medical data for medical decision support systems (MDSSs), with the Chi-squared test among the most widely used option. Critics have noted, however, that presenting such data risks exposing individual attribute values. This chapter will demonstrate how the findings of statistical analysis can inadvertently reveal individual attribute values. It will then show how advanced differential privacy systems, such as those utilized by companies including Google and Apple, can be used to protect individual attribute values while conducting extremely precise Chi-squared tests.
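The core idea can be sketched with the Laplace mechanism: add calibrated noise to the contingency-table counts before computing the chi-squared statistic, so that no individual's attribute values can be inferred from the published result. The counts, the privacy budget ε, and the replacement-based sensitivity below are illustrative assumptions; production systems (and the chapter's methods) calibrate these much more carefully.

```python
# Chi-squared test on a 2x2 table privatised with the Laplace mechanism.
import numpy as np

rng = np.random.default_rng(7)
obs = np.array([[90.0, 10.0],    # illustrative attribute-vs-outcome counts
                [40.0, 60.0]])

def chi2_stat(table):
    exp = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    return ((table - exp) ** 2 / exp).sum()

epsilon = 1.0        # privacy budget (assumed)
sensitivity = 2.0    # replacing one record changes two cell counts by 1 each

noisy = obs + rng.laplace(0, sensitivity / epsilon, obs.shape)
noisy = np.clip(noisy, 1e-6, None)   # keep counts usable after noising

true_stat = chi2_stat(obs)
private_stat = chi2_stat(noisy)
# With a strong true association, the private statistic remains well above
# the 5% critical value (3.84 at 1 degree of freedom), so the test reaches
# the same conclusion while masking individual contributions.
```

The trade-off the chapter explores is exactly this: smaller ε means stronger privacy but noisier statistics, so achieving "extremely precise" tests requires more sophisticated mechanisms than plain Laplace noise.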
-
16 Automated decision support system for diagnosing sleep diseases using machine intelligence techniques
pp. 435–469 (35)
Sleep is one of human health's most vital yet often underrated components. Sleep studies are crucial for unearthing the various abnormalities associated with sleep, which are widely prevalent in today's world and bound to increase over the years. An increasingly rapid lifestyle makes short sleeping hours all too common, and sleep deprivation can heavily impact humans and their quality of life. Diagnosing sleep issues accurately during the initial stages is one of the significant challenges faced by the medical community. Sleep stage scoring is the primary step in detecting sleep abnormalities: dividing a person's entire sleep duration into different categories according to muscle movements, brain activity, eye movements, and so on. Polysomnography is the scientific test that records these activities during sleep through electrodes connected to the patient, and the hypnogram that results from this test is the graphical form of the sleep scoring done by technicians. For ages, this process has been carried out manually, which is frequently prone to error, requires ample time, effort, and training, and is susceptible to inter-scorer differences. It is therefore essential to devise an automated system for sleep staging. This experimental study applies machine learning techniques to classify the different sleep stages. The reported results show that the proposed sleep staging model performed well on the five-class classification task, with improved accuracy achieved using the ensemble learning classification model.
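The five-class staging task can be sketched with a random-forest ensemble on per-epoch feature vectors. The stage names and the synthetic features below (standing in for band powers, EMG level, and similar polysomnography-derived quantities) are illustrative assumptions, not the chapter's feature set.

```python
# Ensemble classification of five synthetic sleep stages.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
stages = 5          # e.g. Wake, N1, N2, N3, REM
per_stage = 80
# Each stage gets its own mean feature vector across 6 per-epoch features.
X = np.vstack([rng.normal(k, 0.6, (per_stage, 6)) for k in range(stages)])
y = np.repeat(np.arange(stages), per_stage)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # five-class accuracy on held-out epochs
```

An ensemble of trees also yields per-feature importances for free, which is useful when explaining which signals drive each stage decision.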
-
17 XAI methods for precision medicine in medical decision support systems
pp. 471–487 (17)
Over the last couple of years, explainable artificial intelligence (XAI) has witnessed tremendous development, evidenced by growing research interest in the area. This can be attributed to the increasing role of machine learning, especially deep learning; while these models are highly accurate, they lack explainability and interpretability, and this vagueness has limited the application of AI systems in vital fields such as precision medicine. The aim of this study is to examine XAI for precision medicine in medical decision support systems (MDSS). Through an organized examination of the literature, the authors outline the application of XAI in MDSS, highlighting the several benefits reported in the literature, such as enhanced decision confidence in precision medicine. The opportunities and challenges of explainable models in MDSS are discussed, and guidelines for the implementation of XAI in MDSS are recommended.
-
18 The psychology of explanation in medical decision support systems
pp. 489–506 (18)
Today, artificial intelligence (AI) plays an important role in healthcare systems. Many targeted healthcare applications, such as medical diagnostics, patient monitoring, and learning healthcare systems, are now available with the aid of AI software, and clinical decision-making is enabled by AI algorithms. The predictive analysis of these algorithms is aided by a computerized predictive-analysis flow that separates, organizes, and checks for patterns in complex data and draws a conclusion with some degree of probability, enabling the healthcare provider to make a quality decision within a short time. Under the existing legal frameworks of the various jurisdictions, the AI algorithm does not make the final decision; rather, it is used as a supporting tool for diagnosis or screening instead of performing the usual medical tasks done by the doctor in a hospital setting. Many studies in the literature today concern research with patients' electronic health records analysed by AI-assisted data analysis and learning tools, with records kept on secure computers instead of in the traditional paper form. AI applications are being driven by recent advances in machine learning (ML), and the improvement of AI applications in health depends on success in designing the ML algorithms at their core; only a proper and good algorithm design can guarantee the set goals for AI systems. Autonomous systems that can perceive, learn, decide, and act on their own, carrying out their assigned tasks without human intervention, will only be possible through continued advances in ML. However, machines' inability to explain their decisions and actions to human users has posed a big limitation to their adoption and effective use.
The deployment of more intelligent, autonomous, and symbiotic systems will provide a good solution to the challenges faced in the healthcare system. This chapter therefore presents the psychology of explanation in medical decision support systems (MDSS), highlighting psychological perspectives on explanation in healthcare systems with a particular focus on MDSS.
-
Back Matter
(1)