Explainable Artificial Intelligence (XAI): Concepts, enabling tools, technologies and applications

2: Department of Computer Engineering, Suleyman Demirel University, Turkey
3: Computer Science Department, Raja Rajeswari College of Engineering, India
4: Department of Information Technology, Sri Krishna College of Engineering and Technology, India
5: Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS (UTP), Malaysia
The world is keen to leverage multi-faceted AI techniques and tools to deploy and deliver the next generation of business and IT applications. Resource-intensive gadgets, machines, instruments, appliances, and equipment spread across a variety of environments are empowered with AI competencies. Connected products are collectively or individually enabled to be intelligent in their operations, offerings, and outputs.
AI is being touted as the next-generation technology to visualize and realize a bevy of intelligent systems, networks and environments. However, there are challenges associated with the huge adoption of AI methods. As we give full control to AI systems, we need to know how these AI models reach their decisions. Trust and transparency of AI systems are being seen as a critical challenge. Building knowledge graphs and linking them with AI systems are being recommended as a viable solution for overcoming this trust issue and the way forward to fulfil the ideals of explainable AI.
The authors focus on explainable AI concepts, tools, frameworks and techniques. To make the working of AI more transparent, they introduce knowledge graphs (KGs) to support the need for trust and transparency in the functioning of AI systems. They show how these technologies can be used to explain data fabric solutions and how intelligent applications can be used to greater effect in finance and healthcare.
Explainable Artificial Intelligence (XAI): Concepts, enabling tools, technologies and applications is aimed primarily at industry and academic researchers, scientists, engineers, lecturers and advanced students in the fields of IT and computer science, soft computing, AI/ML/DL, data science, semantic web, knowledge engineering and IoT. It will also prove a useful resource for software, product and project managers and developers in these fields.
- Book DOI: 10.1049/PBPC062E
- Chapter DOI: 10.1049/PBPC062E
- ISBN: 9781839536953
- e-ISBN: 9781839536960
- Page count: 530
- Format: PDF
Front Matter
p. (1)
1 An overview of past and present progressions in XAI
pp. 1–16 (16)
Deep learning has contributed greatly to recent advances in artificial intelligence (AI). Compared with conventional machine learning techniques, such as decision trees and support vector machines, deep learning methods have achieved considerable improvement across diverse prediction tasks. However, deep neural networks (DNNs) remain comparatively weak at explaining their inference processes and final results. In real-world applications such as business decision-making, process optimization, clinical diagnosis, and investment recommendation, explainability and transparency of AI systems become especially essential for their users, for the people affected by AI decisions, and, moreover, for the researchers and developers who create the AI solutions. This chapter gives an insight into explainable AI, a trending technology used in diverse modern-day applications.
2 Demystifying explainable artificial intelligence (EAI)
pp. 17–29 (13)
Today, artificial intelligence (AI) permeates nearly every aspect of our personal and professional lives. The era of automated decision-making, such as profiling, requires the right knowledge and skills when examining a dataset. Modern systems frequently suffer from a lack of transparency and interpretability. Because of these problems, explainable AI (EAI) has become a hot topic in academia. The term "explainable AI" (EAI), also known as "interpretable AI," describes machine learning and deep learning methods that can justify their decisions in terms people can understand. In this work, we demonstrate how our EAI technique can be used to analyze the model's real-time decisions, identify trends in the model's overall behavior, and aid in identifying potential flaws as the model evolves. Furthermore, to ensure that the explanations are reliable and useful, we objectively test their consistency across a wide range of EAI metrics. EAI strives to demystify the reasoning that underlies an algorithm's output.
3 Illustrating the significance of explainable artificial intelligence (XAI)
pp. 31–49 (19)
Artificial intelligence (AI) is turning out to be an indispensable paradigm for businesses and individuals. AI is automating and accelerating a specific set of everyday problems such as classification, regression, clustering, detection, recognition, and translation. AI can classify whether an incoming e-mail is spam or genuine, recognize a person's face in an image, understand speech and convert it into text, create an appropriate caption for a scene, and so on. The scope of AI is fast expanding, and industry verticals are keenly exploring and experimenting with it. Business processes are being automated and optimized through smart leverage of the noteworthy advancements happening in the AI space. Increasingly, AI takes center stage in business operations across the globe. There is a dazzling array of integrated platforms, frameworks, toolsets, libraries, and case studies, and hence the adoption of AI algorithms and models has picked up dramatically in the recent past. However, a few critical challenges must be surmounted before AI models can be widely used in mission-critical domains such as healthcare, security, retailing, supply chain, and infrastructure management. That is, business executives and IT experts insist on trustworthy and transparent decision-making by AI models.
This chapter explains the brewing challenges in the AI field and how they can be surmounted through competent technology solutions, especially how the fast-emerging explainable AI (XAI), a set of methods and software libraries that allow human users to comprehend and trust the results created by AI models, addresses them. XAI describes an AI model and how it arrived at a particular decision; it explains the model's implications and potential biases and helps in understanding model accuracy and fairness. XAI turns out to be a crucial cog for mission-critical enterprises to embark on the AI paradigm with clarity and confidence. With the maturity of the XAI concept, AI adoption can happen in a responsible manner across industry verticals.
4 Inclusion of XAI in artificial intelligence and deep learning technologies
pp. 51–64 (14)
As technology has advanced over the past few decades, the complexity of artificial intelligence (AI) systems has increased rapidly. While these systems can provide impressive results, they can also be difficult to understand, even for experts in the field. Explainable AI (XAI) is an emerging field of research focused on making AI systems more transparent and interpretable. In this chapter, we explore what XAI is, why it matters, and how it works.
XAI is an emerging field of research that aims to make AI systems more transparent, interpretable, and accountable. In recent years, AI has made significant advances in fields such as natural language processing, image recognition, and game playing. However, as AI systems become more complex and ubiquitous, it becomes increasingly important to ensure that they are used ethically and responsibly.
One of the main challenges with AI is that it can be difficult for humans to understand how the system arrived at a particular decision. For example, a deep learning algorithm might be able to identify objects in an image with incredible accuracy, but it may not be clear how the system arrived at its conclusion. This can lead to a lack of trust in the system, particularly in high-stakes domains such as healthcare, finance, and criminal justice.
Technically, XAI refers to a set of methods and techniques that enable AI systems to provide human-understandable explanations of their decisions, predictions, and actions. XAI utilizes various approaches, such as rule-based systems, decision trees, and model-based techniques, to produce explanations that can be interpreted and verified by humans. XAI aims to address the lack of transparency and accountability in traditional black-box AI systems, referred to in Figure 4.1, which can make it difficult for developers and users to understand and trust these systems. By providing interpretable explanations, XAI can increase the effectiveness, reliability, and trustworthiness of AI systems in a variety of applications.
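The rule-based approach mentioned above can be sketched in a few lines: pair an opaque scoring model with interpretable if-then rules that give human-readable reasons for its output. This is a minimal illustration, not a method from the chapter; the loan-approval fields and thresholds are invented for the example.

```python
# Minimal sketch: a rule-based explainer wrapped around a toy black-box
# decision. All names (applicant fields, thresholds) are illustrative.

def black_box_approve(applicant):
    """Stand-in for an opaque model: approves a loan via a hidden score."""
    score = 0.4 * applicant["income"] / 1000 + 0.6 * applicant["credit"] / 100
    return score >= 0.7

def explain(applicant):
    """Return the decision plus human-readable reasons derived from
    interpretable rules that mirror the black box's behaviour."""
    reasons = []
    if applicant["credit"] >= 70:
        reasons.append("credit score is high (>= 70)")
    else:
        reasons.append("credit score is low (< 70)")
    if applicant["income"] >= 1000:
        reasons.append("income meets the 1000 threshold")
    else:
        reasons.append("income is below the 1000 threshold")
    return black_box_approve(applicant), reasons

decision, reasons = explain({"income": 1200, "credit": 80})
```

The surrogate rules here are hand-written; in practice an XAI tool would fit them automatically against the black box's inputs and outputs.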
5 Explainable artificial intelligence: tools, platforms, and new taxonomies
pp. 65–91 (27)
Recent advances in machine learning (ML) strategies have introduced several artificial intelligence (AI)-based systems. These AI systems have the capability to perceive, learn, decide smartly, and act quickly in a given situation. This is what is required from such systems, but after witnessing their performance, it has been noticed that these systems are unable to explain their actions to users (humans). Several researchers have since taken this constraint into consideration, for this is the main requirement for making autonomous systems more intelligent and robust. Researchers thus felt the need for explainable AI (XAI), which makes the verifiability of decisions essential. This will increase the ability to question, understand, and, above all, build trust in artificial intelligence systems. There are several models, but there is still no consensus on the assessment of explainability. Thus, this chapter presents a comprehensive review of the current state of the art in XAI with societal impact. In addition, readers will find the drivers and tools for XAI, and a complete literature review that provides future research directions for researchers in this area.
6 An overview of AI platforms, frameworks, libraries, and processes
pp. 93–113 (21)
Artificial intelligence (AI) is a sophisticated software-based technology that combines complex computer programming with aspects of human intelligence in a variety of ways to do a wide range of tasks that were previously thought to be only humanly feasible. The invention of electronic computers with stored programs gave rise to the idea of AI. At a meeting held at Dartmouth College in 1956, computer scientist John McCarthy first used the phrase "Artificial Intelligence".
A branch of computer science that has grown over time is AI. It involves replicating human cognitive processes using machines, notably computer systems. The phrase "artificial intelligence" (AI) is frequently used to describe a project that aims to create systems that are capable of doing activities that typically require human intelligence, such as decision-making, visual perception, language comprehension, and speech recognition. AI systems routinely operate by consuming vast amounts of trained and labeled data, scanning it for correlations and patterns, and then using those patterns to predict future events.
AI is concerned with the development and use of computer systems that can solve problems that usually call for human intelligence. Such problems relate to naturally occurring tasks, such as vision or natural language interpretation. Typically, they cannot be resolved using traditional algorithmic techniques. To solve them, AI systems handle symbolic information, not only the numerical data customary in computer science. AI uses several different types of knowledge about an application area. Therefore, the issues of knowledge representation, acquisition, and usage are central to AI research and development. One of the most active subfields is the deployment of knowledge-based systems.
7 Quality framework for explainable artificial intelligence (XAI) and machine learning applications
pp. 115–138 (24)
Artificial intelligence (AI) and machine learning (ML) are applied in many applications and devices, and the market is expected to grow by 15 trillion dollars by 2030. There is growing demand for explainable AI (XAI) because it improves the explainability attributes of AI quality. Software quality means that the product meets its required specification and behaves as its stakeholders expect. Furthermore, we need a systematic approach to the design, development, implementation, and testing of AI products. Therefore, this chapter proposes a software engineering framework for AI and ML applications (SEF-AI and ML) supporting the complete XAI application development phases, including a reference architecture to standardize across XAI applications. The framework has been validated through a case study involving an explainable chatbot using business process modeling notation (BPMN), modeling, and simulation. The results demonstrate a 98% utilization rate and improved time efficiency, confirming the validation of performance and resource requirements for cloud-driven AI chatbot services. Therefore, SEF-AI and ML has the potential to be a standard framework for AI and ML applications to achieve the desired quality and certainty of AI products and services.
8 Methods for explainable artificial intelligence
pp. 139–161 (23)
As AI models are becoming increasingly regulated by governments, it has become crucial to provide explanations for their decisions. The emergence of XAI has helped us to better understand AI systems and move towards models that can offer human-friendly explanations. However, it remains unclear whether the growing range of XAI methodologies and tools is enough to provide practical support in the risky scenarios that regulatory stakeholders are concerned about. For instance, can an intelligent model be used for a medical diagnosis simply because of the availability of score-CAM or GradCAM? The answer is a resounding "NO" because there are no established risk-aware scenarios that can guide the research community on the requirements for implementing XAI-supported AI models in real-world contexts. Therefore, society needs approaches that recognize XAI tools as necessary but insufficient steps toward assessing the trustworthiness of AI-based systems for specific tasks.
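As a concrete instance of the XAI methodologies this chapter surveys, one of the simplest attribution techniques is permutation feature importance: shuffle one feature and measure how much the model's accuracy drops. The sketch below is illustrative only; the toy model and data are invented, and real tooling (e.g., scikit-learn's model-inspection utilities) averages over many shuffles.

```python
import random

# Sketch of permutation feature importance on a toy model: shuffle one
# feature at a time across the dataset and measure the accuracy drop.
# A large drop means the model relies heavily on that feature.

def model(x):
    # Toy "trained" model: depends only on feature 0.
    return 1 if x[0] > 0.5 else 0

data = [([0.9, 0.1], 1), ([0.8, 0.7], 1), ([0.2, 0.9], 0), ([0.1, 0.3], 0)]

def accuracy(points):
    return sum(model(x) == y for x, y in points) / len(points)

def permutation_importance(feature, seed=0):
    rng = random.Random(seed)
    values = [x[feature] for x, _ in data]
    rng.shuffle(values)                         # break the feature-label link
    shuffled = [(list(x), y) for x, y in data]  # copy rows before mutating
    for (x, _), v in zip(shuffled, values):
        x[feature] = v
    return accuracy(data) - accuracy(shuffled)
```

Because the toy model ignores feature 1 entirely, its importance is exactly zero, while feature 0's importance is non-negative.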
9 Knowledge representation and reasoning (KRR)
pp. 163–178 (16)
Knowledge representation and reasoning (KRR) is a key research area in artificial intelligence that deals with the design and development of methods to represent, manipulate, and reason with knowledge in computer systems. KRR techniques are used to develop intelligent systems that can reason about complex domains, make decisions, and provide explanations for their actions. The present research aims to understand the process of representing knowledge and reasoning with different domains, including agriculture, education, healthcare, and business. Different techniques, such as ontologies, semantic networks, frames, rule-based systems, description logics, first-order logic, and Bayesian networks, have been developed to facilitate the representation and reasoning of knowledge. In the present research, different techniques for representing and reasoning with knowledge are used in two different domains: academic knowledge and farmer knowledge. The study examines the strengths and weaknesses of each technique and provides insights into which techniques are most suitable for different types of KRR tasks. The results of this study can help practitioners and researchers to choose the most appropriate technique for their specific KRR needs.
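Among the techniques the chapter lists, rule-based systems are the easiest to sketch. Below is a minimal forward-chaining engine: rules fire whenever their premises are all known, adding conclusions until a fixed point. The agriculture-flavoured facts and rules are invented for illustration, loosely echoing the chapter's farmer-knowledge domain.

```python
# Minimal forward-chaining rule engine for rule-based KRR.
# Facts are strings; each rule maps a set of premises to one conclusion.
# The facts and rules below are illustrative only.

facts = {"crop(wheat)", "season(winter)"}
rules = [
    ({"crop(wheat)", "season(winter)"}, "sow_now"),
    ({"sow_now"}, "irrigate_weekly"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                  # iterate until no rule adds a new fact
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(facts, rules)
```

Note how the second rule fires only because the first one's conclusion was added, which is the chaining that distinguishes reasoning from a flat lookup.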
10 Knowledge visualization: AI integration with 360-degree dashboards
pp. 179–202 (24)
The main intention of visualization is to enhance the ability of understanding and creating a new vision of the problem given. This chapter focuses on knowledge visualization. With technology advancements, visual images play a major role in representing data. We will cover different tools and technologies to gather visual data and how to convert it into a knowledge-level presentable form.
11 Empowering machine learning with knowledge graphs for the semantic era
pp. 203–225 (23)
The widely used relational database model organizes data in tables with columns and rows. By checking the tables, the relationships amongst different data points can be identified. This model has served business operations automation well while data volumes grew slowly. For complicated operations (which involve identifying relationships amongst data points kept in different tables), the relational database model is found to be inefficient and inadequate. There are other inadequacies in the hugely popular and widely used relational database systems (RDBS). Therefore, there is a clarion call for pioneering database solutions for the impending knowledge era.
On the other side, with the rising need for creating and managing intelligent devices and software products for business transformation, the role and responsibility of artificial intelligence (AI) methods go up significantly. Machine learning (ML) and deep learning (DL) algorithms play a very vital role in producing sophisticated AI systems and services. For AI algorithms to do their tasks, they need a lot of correct and cleaned data: the right data results in highly accurate predictions, inferences, and conclusions.
Data typically gets collected from different sources and cleansed. Several dissimilar data formats and transmission protocols greatly complicate the goal of data integration. Therefore, there is insistence on competent data integration and virtualization technologies and tools, which play a very vital role in visualizing and realizing profoundly impactful AI systems. That is, there is a need for a fresh and flexible approach to fulfilling the complicated requirement of data integration. This need has given rise to many research works and thought processes aimed at working out the best possible way to collect, store, manipulate, and maintain digital data. Knowledge graphs (KGs) are the graph databases emerging and getting established as the next-generation data management system for speeding up knowledge engineering and extraction towards digitally transformed businesses and societies. The popularity of KGs is increasing due to their potential and flexibility in dealing with complex and interrelated data.
In a KG, the data is stored and information is depicted in a graphical format. This methodology can be applied to create a graphical representation of the relationships amongst all of a KG's data points. Hence, even if the data points do not fit neatly into a table, the associations between them can be evaluated quickly and with much less computation power, which is an explicit advantage over relational databases.
Formally, a KG is a directed labelled graph that intrinsically and illustratively represents relations between data points. A node of the KG represents a data point. The entity of this data point could be a person, a place, or a webpage, and an edge represents the relationship between a pair of data points.
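The directed labelled graph just defined can be modelled directly as a set of (subject, predicate, object) triples, with pattern matching in place of table joins. A minimal sketch, with invented entities and relations:

```python
# A knowledge graph as (subject, predicate, object) triples: each triple
# is one labelled edge between two nodes. Entities and relations below
# are illustrative only.

triples = {
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("alice", "knows", "bob"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# All edges leaving the node "alice" - no join needed, just a scan
# over edges, which is the access pattern KGs optimise for.
edges = query(s="alice")
```

In a relational schema the same question ("everything related to alice, and how") would need a join or union across every table that mentions her; here it is a single pattern match.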
12 Enterprise knowledge graphs using ensemble learning and data management
pp. 227–238 (12)
An ensemble model combines a set of supervised models of various types into one classifier to increase or boost prediction consistency. This chapter introduces an improved algorithmic framework for supervised learning that takes the best three classifiers out of six and combines them into an enhanced ensemble model using a uniform voting approach. The proposed technique was tested on the PIMA Indian Diabetes dataset and showed superior performance compared to classification-tree-based extended techniques (e.g., Random Forest and AdaBoost). The newly formulated ensemble framework also tends to be invariant to the size of the fold during the validation process (k-fold validation).
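The best-three-of-six uniform-voting idea can be sketched without any ML library: rank candidate classifiers by validation accuracy, keep the top three, and take a majority vote per sample. The six "classifiers" below are stub prediction vectors, not trained models, and the labels are invented.

```python
from collections import Counter

# Sketch of best-3-of-6 selection with uniform (majority) voting.
# The six classifiers are stubs: fixed prediction vectors over a tiny
# illustrative validation set.

def majority_vote(predictions):
    """Uniform vote across classifiers, one winner per sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

y_val = [1, 0, 1, 1, 0]
preds = {
    "c1": [1, 0, 1, 1, 0],   # 5/5 on validation
    "c2": [1, 0, 1, 0, 0],   # 4/5
    "c3": [1, 1, 1, 1, 0],   # 4/5
    "c4": [0, 1, 0, 0, 1],   # 0/5
    "c5": [0, 0, 1, 1, 0],   # 4/5
    "c6": [0, 1, 0, 1, 1],   # 1/5
}

def accuracy(p):
    return sum(a == b for a, b in zip(p, y_val)) / len(y_val)

# Keep the three strongest classifiers, then vote uniformly.
top3 = sorted(preds, key=lambda k: accuracy(preds[k]), reverse=True)[:3]
ensemble = majority_vote([preds[k] for k in top3])
```

Here the vote corrects the single errors of the two weaker members, which is exactly the consistency boost the chapter attributes to the combined model.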
13 Illustrating graph neural networks (GNNs) and the distinct applications
pp. 239–265 (27)
The fledgling concept of the graph neural network (GNN) has gained greater acceptance and adoption in the recent past across domains such as social and transport networks, knowledge graphs (KGs), recommendation, expert and question-answering systems, neurons in the brain, and life science dealing with molecular structure. The unique power of GNNs in modeling the intriguing and intimidating dependencies between nodes in a graph has laid down a stimulating environment for envisaging breakthrough results in the graph theory arena. The GNN is a special but powerful type of neural network. GNNs operate directly on graph-structured data and are capable of assisting in implementing intelligent systems. In short, GNNs are being viewed as an enabling factor and facet of real digital transformation.
This chapter explains the distinct characteristics of GNNs and how they contribute to visualizing and realizing a variety of advanced applications for the impending knowledge era.
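The node-dependency modeling described above rests on message passing: each layer updates a node's features by aggregating over its neighbours. A bare-bones sketch with a three-node graph and scalar features (a real GNN layer would add learned weight matrices and a nonlinearity; everything here is illustrative):

```python
# One round of GNN-style message passing on a tiny graph: each node's
# new feature is the mean of its own and its neighbours' features.
# Graph and features are invented for illustration.

graph = {0: [1, 2], 1: [0], 2: [0]}      # adjacency lists (undirected star)
features = {0: 1.0, 1: 3.0, 2: 5.0}      # one scalar feature per node

def message_pass(graph, features):
    updated = {}
    for node, neighbours in graph.items():
        pooled = [features[node]] + [features[n] for n in neighbours]
        updated[node] = sum(pooled) / len(pooled)   # mean aggregation
    return updated

new_features = message_pass(graph, features)
```

After one round the centre node (0) has absorbed information from both leaves; stacking such layers is what lets a GNN propagate information across longer paths in the graph.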
14 AI applications - computer vision and natural language processing
pp. 267–292 (26)
Computer vision is a field of study that focuses on enabling computers to interpret and understand visual information from the world around us. Artificial intelligence (AI) has revolutionized computer vision by providing algorithms and models that can process, analyze, and classify images and videos. AI applications in computer vision have many real-world applications, including in healthcare, transportation, entertainment, and security. Some of the popular AI applications in computer vision include object detection and recognition, image segmentation, image restoration, pose estimation, scene understanding, augmented reality (AR), medical imaging, etc.
15 Machine learning and computer vision - beyond modeling, training, and algorithms
pp. 293–307 (15)
Machine learning is a branch of artificial intelligence (AI). Machine learning finds application in different domains, like healthcare, travel, and e-commerce, and has enhanced how they work. It makes use of statistical prediction and modeling: it takes raw data as input, analyzes the data, and generates output according to the analysis. This chapter provides an extensive overview of machine learning techniques and a basic conceptual briefing on this broad topic. The chapter also explains the types of machine learning, including supervised learning, unsupervised learning, and reinforcement learning. Machine learning has the potential to produce consistently accurate estimates in this new era. This chapter covers the fundamental concepts of machine learning and its techniques: the techniques generally used, the places where they can be applied, and the many algorithms involved. We also discuss the various languages used, machine learning frameworks, and the best tools and most efficient platforms that can support machine learning practice and implementation of these concepts.
16 Assistive image caption and tweet development using deep learning
pp. 309–327 (19)
In this rapidly growing world, amid the technological boom, a huge variety of applications and devices generate an enormous amount of data every second. With unstructured data being the most difficult to manage and keep track of, there has been a drastic increase in visual data generation. To keep track of such data for further insights and use, a textual descriptor is often needed, and obtaining it is the first step for any analytics. A manual description is subjective and not appropriate for larger data volumes. This issue is addressed by automation, hence opening the gates for computer vision and artificial intelligence in the domain. Another area that has changed multimedia communication and seen a great deal of advancement is social media. Applications like Twitter have become an indispensable part of people's lives. Moreover, the Progressive Web Application (PWA) has started to be implemented in various applications; it gives an on-par experience with native apps and has become more prevalent. This work, Assistive Image Caption and Tweet (AICT), aims to set a new horizon by combining these applications and setting a base for future applications and devices. It does so by using deep learning techniques such as convolutional neural networks (CNNs) and long short-term memory (LSTM) to generate captions for images within milliseconds, natural language processing (NLP) to generate the text in different languages along with audio to assist visually impaired people, and an automated assistive tweet function that directly tweets the image with its caption in the desired language.
17 Explainable renegotiation for SLA in cloud-based system
pp. 329–346 (18)
The excitement around cloud technology often leads people to believe that it can solve all problems, but this is not always true and the complexity of using the cloud is often overlooked by promoters. There is a significant difference between the adoption of cloud technology and the level of innovation that cloud consumers can achieve, which is a major concern for many cloud users. They are questioning the ability of cloud computing to provide continuous service delivery.
The Service Level Agreement (SLA) is a crucial document in cloud computing that outlines the obligations of both the customer and the provider, including details about the expected service delivery and penalties for violations. This document is essential for customers to trust the cloud provider's ability to handle their data and rely on the service. Without strong assurances that their requirements and SLA will be enforced, customers will not outsource their data to cloud infrastructures. Thus, managing the SLA within a cloud-based system is crucial to ensuring service continuity.
Cloud providers typically offer two types of SLA: predefined and negotiated. A predefined SLA is a generic template that applies to all customers. However, some customers may have unique QoS requirements that are not covered by a predefined SLA. In such cases, the customer and provider engage in a negotiation process to establish a mutually agreed-upon SLA before service provision (negotiated SLA). In this context, negotiation refers to the process by which parties arrive at a mutually acceptable agreement on a particular matter.
In a typical scenario, the terms of an SLA are fixed upon construction and remain unchanged throughout the service period. However, this approach conflicts with the dynamic nature of cloud computing, which is characterised by flexibility. The inability of current SLA management frameworks to accommodate this dynamic environment can negatively affect service delivery performance. In this context, a framework refers to an abstract representation of functionality for managing the SLA. Service providers must be capable of accommodating changes in the needs and circumstances of cloud consumers over time. Failure to address these factors can result in service violations that may impact the acceptance of a cloud service.
Existing frameworks for managing SLAs do not include provisions for adjustable SLAs based on customer preferences during operation, nor do they address service violation handling for cloud-based systems that can minimise such violations. The focus of typical SLA renegotiation is limited to the initial phase of service delivery, and service violation handling only reacts after the violation has been detected. Consequently, there is a need for adjusting SLAs during service operation, considering the customer's changing preferences, the provider's current situation, and any occurrences of service violations.
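The shift from reacting after a violation to adjusting the SLA during operation can be sketched as a simple monitor: when a measured metric drifts toward the agreed threshold, it proposes renegotiation before the breach occurs. The SLA fields, thresholds, and warning margin below are invented for illustration, not taken from the chapter's framework.

```python
from dataclasses import dataclass

# Sketch of proactive SLA monitoring: flag "renegotiate" while the
# metric is drifting toward the agreed level, before an actual breach.
# All fields and numbers are illustrative assumptions.

@dataclass
class SLA:
    uptime_target: float        # e.g. 0.999 = "three nines"
    penalty_per_breach: float   # agreed penalty for each violation

def evaluate(sla, measured_uptime, warning_margin=0.0005):
    """Return 'ok', 'renegotiate' (early warning), or 'violation'."""
    if measured_uptime >= sla.uptime_target:
        return "ok"
    if measured_uptime >= sla.uptime_target - warning_margin:
        return "renegotiate"    # drifting: adjust terms before breach
    return "violation"          # reactive handling is now unavoidable

sla = SLA(uptime_target=0.999, penalty_per_breach=100.0)
status = evaluate(sla, measured_uptime=0.9987)
```

The middle band is the point of the chapter's argument: it gives both parties a window to renegotiate terms while the provider's situation and the customer's preferences can still be reconciled.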
18 Explainable AI for stock price prediction in stock market
pp. 347–366 (20)
In recent years, it has been observed that many people invest their money in various stocks, and this growth is expected to be exponential in the coming years as well. But people have to be careful about what and how much they invest, as the stock market is a high-risk, high-reward field. To cater to this need, an explainable artificial intelligence (AI)-based methodology is proposed to create a model that can predict future stock prices, helping people mitigate losses and improve their chances of earning profits. In this work, the system first tests various models, such as k-nearest neighbours (KNN), moving average, linear regression, and long short-term memory (LSTM), to understand how they respond to data about any particular company listed on NSE India. The algorithm that produced the best accuracy is then used to find the price of any stock for the next 30 days. Since this is a software tool, for better user accessibility a website using Django is created so that users can log in and check stock prices and predictions for the company in which they are interested.
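Of the models compared above, the moving-average baseline is simple enough to sketch in full: forecast the next closing price as the mean of the last k prices. The price series is invented; this is the generic technique, not the chapter's tuned system.

```python
# Moving-average baseline for next-price forecasting: the prediction is
# the mean of the most recent k closing prices. Prices are illustrative.

def moving_average_forecast(prices, k=3):
    window = prices[-k:]                # the k most recent closes
    return sum(window) / len(window)

prices = [100.0, 102.0, 101.0, 103.0, 104.0]
next_price = moving_average_forecast(prices, k=3)
```

Baselines like this matter in the model bake-off: a learned model such as LSTM only earns its complexity if it beats the moving average on held-out data.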
19 Advancements of XAI in healthcare sector
pp. 367–396 (30)
Artificial intelligence (AI) has undoubtedly been a center for the latest advancements and has brought about significant developments in the medical field. However, the desire of healthcare practitioners and researchers for an interpretation of a system's predictions, based on health statistics acquired through advanced machine learning models, has not been met. So, the study of explainable AI (XAI) has been pursued by the scientific establishment to give justifications for machine predictions and assure accuracy in the sophisticated medical framework, because depending on artificial conclusions to rescue an individual's health without adequate knowledge of the underlying logic is unacceptable. Before applying the results to the patient, XAI helps the medical team understand the reasons and keep the conclusions in check for a better outcome. In this book chapter, we emphasize the motives for espousing XAI in the medical domain and examine the basic principle behind it, as well as how it might help build dependable AI-based solutions in healthcare.
20 Adequate lung cancer prognosis system using data mining algorithms
pp. 397–427 (31)
Data mining is the technique by which different algorithms retrieve the necessary information from an immense amount of data. The main objective of this research is to predict the possible level of lung cancer. Several experiments have been conducted using data analysis methods to explain the estimation of lung cancer risks. Cancer is among the deadliest illnesses today, causing a lot of deaths, because it is incurable in most situations. But that is not the case if it is detected at an earlier stage, so earlier diagnosis is necessary. However, foreseeing the incidence of lung cancer requires many steps and actions. This research therefore focuses on using four data mining methods - naive Bayes, decision trees, k-nearest neighbours, and random forest - to forecast lung cancer risks in patients without much effort, based on basic parameters. To evaluate the most efficient and productive model, the efficiency of these classification techniques was measured.
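One of the four methods compared, k-nearest neighbours, is compact enough to sketch from scratch: classify a patient by the majority label among the k closest training examples. The two-feature points and "low"/"high" risk labels below are invented, not the study's data.

```python
# Minimal k-nearest-neighbours classifier for a risk-level prediction.
# Training points are (features, label) pairs; features and labels are
# illustrative only.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    # Sort training examples by distance to the query, keep the k nearest,
    # and return the most common label among them.
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [((1, 1), "low"), ((1, 2), "low"), ((8, 8), "high"),
         ((9, 8), "high"), ((2, 1), "low")]
risk = knn_predict(train, query=(1.5, 1.5), k=3)
```

The same evaluation loop the study describes, measuring each classifier's efficiency, would simply call a predictor like this over a held-out test set and tally correct labels.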
21 Comparison of artificial intelligence models for prognosis of breast cancer
pp. 429–445 (17)
Breast cancer is the most common cancer, affecting not only women but also men. Diagnosis and treatment are crucial stages in the cancer care process. However, even after treatment, individuals often face ongoing challenges, including the regular need for painful procedures such as biopsies, MRIs, and scans on their journey towards recovery. We propose that machine learning (ML) and deep learning analyses may be used to perform longitudinal studies of women with breast cancer. We carry out a comparative analysis of three situations: in the first, we apply ML algorithms after only the primary preprocessing steps; in the second, we add the balanced-class-weights hyperparameter; and in the third, we apply principal component analysis (PCA). In the first situation, the light gradient boosting machine (LightGBM) gives the best accuracy of 87.87%; the random forest (RF) reaches an accuracy of 87.87% once balanced class weights are set; and after PCA, logistic regression gives a maximum accuracy of 84.84%.
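The three experimental situations can be outlined in a minimal sketch. This is not the chapter's pipeline: it uses scikit-learn's built-in Wisconsin breast cancer dataset as a stand-in, logistic regression (one of the models the abstract names) in place of LightGBM to avoid extra dependencies, and an assumed `n_components` for PCA.

```python
# Sketch of the three situations compared in the abstract (assumed setup).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Situation 1: classifier after basic preprocessing only
base = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc1 = base.fit(X_tr, y_tr).score(X_te, y_te)

# Situation 2: same pipeline with balanced class weights,
# compensating for the benign/malignant class imbalance
bal = make_pipeline(StandardScaler(),
                    LogisticRegression(max_iter=1000, class_weight="balanced"))
acc2 = bal.fit(X_tr, y_tr).score(X_te, y_te)

# Situation 3: PCA dimensionality reduction before the classifier
pca = make_pipeline(StandardScaler(), PCA(n_components=5),
                    LogisticRegression(max_iter=1000))
acc3 = pca.fit(X_tr, y_tr).score(X_te, y_te)
```

Scaling before PCA matters here: principal components are variance-driven, so unscaled features with large ranges would dominate the projection.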
22 AI-powered virtual therapist: for enhanced human-machine interaction
pp. 447–465 (19)
This work explores the development of an artificial intelligence (AI)-powered virtual therapist that can recognise emotions in people's facial expressions and respond in a tailored manner using machine learning (ML) and natural language processing (NLP) algorithms. The virtual therapist interacts with people and offers assistance with a variety of mental health issues. Using Python, OpenCV, and DeepFace, the system automatically identifies human emotions from facial images. The incorporation of a chatbot gives users a more seamless and personalised experience. The results show how well the virtual therapist reads facial expressions to identify emotions including neutral, happy, sad, surprised, angry, fearful, and disgusted. The research emphasises how AI could offer a more convenient and affordable alternative to traditional therapy, thus enhancing mental health care. By fusing AI with mood detection and chatbot technology, virtual therapists have the potential to revolutionise mental health care by offering accessible and individualised support to people dealing with mental health issues.
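The emotion-to-response step of such a pipeline can be sketched in plain Python. The labels below match the seven emotion classes the abstract lists (which are also DeepFace's emotion labels); the `RESPONSES` table and `respond_to` function are hypothetical illustrations, not the chapter's code, and the DeepFace call that would produce the label is shown only as a comment.

```python
# Hypothetical sketch of mapping a detected facial emotion to a
# tailored chatbot reply. In the full pipeline the label would come
# from something like:
#   DeepFace.analyze(frame, actions=["emotion"])
# here we only illustrate the response-selection step.
RESPONSES = {
    "happy": "That's wonderful to hear! What made your day good?",
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
    "angry": "It sounds like something upset you. Take a slow breath with me.",
    "fear": "That sounds stressful. You're safe here; tell me more.",
    "surprise": "That sounds unexpected! How are you feeling about it?",
    "disgust": "Something seems to be bothering you. What happened?",
    "neutral": "How has your day been so far?",
}

def respond_to(emotion: str) -> str:
    """Return a tailored reply for a detected facial emotion label,
    falling back to the neutral prompt for unrecognised labels."""
    return RESPONSES.get(emotion.lower(), RESPONSES["neutral"])
```

A fallback to the neutral prompt keeps the chatbot responsive even when the detector emits a label outside the expected set.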
23 Conclusion: an insight into the recent developments and future trends in XAI
pp. 467–484 (18)
Interpretable and explainable AI have been gaining considerable attention from both the research community and industry. Explaining the behaviour of artificial intelligence (AI) raises many topics of active research, driven by the need to convey safety and trust to users in the "how" and "why" of automated decision-making in applications such as autonomous driving, clinical diagnosis, or banking and finance. In this chapter, we present a historical perspective on explainable AI (XAI). The algorithms used in AI can be divided into white-box and black-box AI [machine learning (ML)] algorithms. White-box models are ML models that produce results understandable to experts in the domain. Black-box models, on the other hand, are extremely difficult to explain and can hardly be understood even by domain experts. XAI algorithms are considered to follow the three principles of transparency, interpretability, and explainability. We examine how explainability was mainly conceived in the past, how it is understood in the present, and how it may be understood in the future. We close the chapter by proposing criteria for explanations that we believe will play an essential part in the development of human-understandable explainable systems.
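The white-box idea can be made concrete with a small sketch: a shallow decision tree whose learned rules can be printed and read by a domain expert. This is an assumed example (using scikit-learn's iris dataset), not taken from the chapter.

```python
# Minimal white-box illustration: a shallow decision tree rendered as
# human-readable if/else rules, the kind of transparency black-box
# models lack.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text prints the learned decision rules as indented
# threshold comparisons a domain expert can inspect directly.
rules = export_text(tree, feature_names=list(data.feature_names))
```

Capping `max_depth` is itself an interpretability choice: a deeper tree fits more patterns but yields rule lists too long for an expert to audit.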
Back Matter
