Catch up on the latest AI articles

Knowledge Graphs Open The Way To An Understandable AI Future

Survey

3 main points
✔️ The authors distinguish between explainable AI (XAI) and interpretable machine learning (IML), and propose Comprehensible AI (CAI) as an umbrella concept covering both.
✔️ They construct a taxonomy of CAI methods on knowledge graphs in terms of representation, task, underlying method, and type of comprehensibility, and identify the main lineages of research in IML and XAI.

✔️ As future research topics, they propose applying XAI to link prediction, improving explanations using the semantic information of knowledge graphs, establishing comparative evaluation among XAI methods, applying IML to graph clustering, and improving how IML model interpretations are communicated to users, and they argue that the use of semantic information in knowledge graphs is expected to improve the safety of AI systems.

Comprehensible Artificial Intelligence on Knowledge Graphs: A survey
written by Simon Schramm, Christoph Wehner, Ute Schmid
(Submitted on 4 Apr 2024)
Comments: Published on arxiv.

Subjects:  Artificial Intelligence (cs.AI)

code:  

The images used in this article are from the paper, the introductory slides, or were created based on them.

Summary

This paper is a comprehensive survey study on Knowledge Graph (KG)-based Comprehensible Artificial Intelligence (CAI), which is defined as a higher-level concept covering both Explainable AI (XAI) and Interpretable Machine Learning (IML).

First, the paper clarifies the concept of CAI and proposes a taxonomy that classifies CAI methods on KGs in terms of representation, task, and underlying method. Based on this taxonomy, the paper provides a detailed analysis of IML and XAI methods that utilize KGs.

The IML methods include rule extraction, pathfinding, and embedding methods, while the XAI methods include rule-based, decomposition-based, surrogate (proxy) model-based, and graph-generation-based methods. The features and open issues of each method are carefully organized, and the current status of CAI using KGs and future research directions are laid out.
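
To make the embedding-based family concrete, here is a minimal sketch of a TransE-style link prediction scorer (TransE treats a triple as plausible when head + relation ≈ tail in vector space). The entity names, the "treats" relation, and the random embeddings below are invented for illustration; a real model would learn the vectors from the graph.

```python
import numpy as np

# Toy TransE-style scorer: a triple (head, relation, tail) is plausible
# when head + relation is close to tail in the embedding space.
# The embeddings here are random stand-ins, not trained values.
rng = np.random.default_rng(0)
dim = 8
entities = {e: rng.normal(size=dim) for e in ["Aspirin", "Headache", "Ibuprofen"]}
relations = {"treats": rng.normal(size=dim)}

def score(head: str, relation: str, tail: str) -> float:
    """Negative distance: higher means the triple is more plausible."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# Rank candidate tails for the query (Aspirin, treats, ?).
candidates = ["Headache", "Ibuprofen"]
ranked = sorted(candidates, key=lambda t: score("Aspirin", "treats", t), reverse=True)
print(ranked)
```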

Introduction

In recent years, artificial intelligence (AI) systems have moved beyond the realm of research and are permeating our daily lives. In particular, AI methods based on knowledge graphs have seen a surge in applications since the beginning of the 21st century and are used in many fields. However, explaining the decision making of AI systems is both a demand from users and a regulatory requirement in many application areas.

Knowledge graphs have great potential as a foundation for Comprehensible AI (CAI) because they can represent connected data, or knowledge, in a form that can be understood by both humans and machines. In this paper, we review the history of CAI on knowledge graphs, clearly distinguish between the concepts of explainable AI (XAI) and interpretable machine learning (IML), and propose CAI as an umbrella concept for both.
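
As a small illustration of this dual readability, a knowledge graph can be stored as subject-predicate-object triples that a person can read directly and a program can query. The facts below are a toy example, not taken from the paper.

```python
# A knowledge graph as a set of (subject, predicate, object) triples.
# The same structure is directly readable by people and queryable by code.
kg = {
    ("Marie_Curie", "won", "Nobel_Prize_in_Physics"),
    ("Marie_Curie", "field", "Physics"),
    ("Nobel_Prize_in_Physics", "awarded_in", "Physics"),
}

# Machine side: answer "what did Marie_Curie win?" by pattern matching.
answers = [o for (s, p, o) in kg if s == "Marie_Curie" and p == "won"]
print(answers)  # ['Nobel_Prize_in_Physics']
```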

Related Research

Several survey papers exist in this area. For example, Tiddi and Schlobach [35] discuss KG-based CAI under a broad definition, while Bianchi et al. [36] give an overview of AI methods in general that take KGs as input and mention CAI methods along the way. Lecue [37] organizes the challenges and methods of XAI on KGs along the paper categories of AAAI.

However, in these previous studies, the conceptual distinction between XAI and IML was not clear, and the two terms tended to be conflated. In addition, research that systematically organized KG-based CAI methods was limited.

Proposed Method

The taxonomy proposed in this paper consists of the following four perspectives (a small sketch after the list illustrates how a method can be classified along these axes):

Representation: How the knowledge graph is represented as input to an AI model. It is classified into three types: symbolic, sub-symbolic, and neuro-symbolic.

Task: The type of problem that the CAI method addresses, such as link prediction, node clustering, graph clustering, and recommendation.

Foundation: The machine learning algorithms and methods used to realize CAI, including Factorization Machines, Translational Learning, Rule-based Learning, Neural Networks, and Reinforcement Learning.

Comprehensibility: The two approaches to CAI: Interpretable Machine Learning (IML) and Explainable Artificial Intelligence (XAI).
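
As a hedged sketch (not code from the paper), the four axes can be pictured as a small data structure that tags each surveyed method. The enum values mirror the categories above; "ToyRuleMiner" is a hypothetical method name used only for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Representation(Enum):
    SYMBOLIC = "symbolic"
    SUB_SYMBOLIC = "sub-symbolic"
    NEURO_SYMBOLIC = "neuro-symbolic"

class Comprehensibility(Enum):
    IML = "interpretable machine learning"
    XAI = "explainable AI"

@dataclass
class CAIMethod:
    """One surveyed method, classified along the taxonomy's four axes."""
    name: str
    representation: Representation
    task: str          # e.g. "link prediction", "recommendation"
    foundation: str    # e.g. "rule-based learning", "neural network"
    comprehensibility: Comprehensibility

# Classifying a hypothetical rule-mining method along the four axes:
example = CAIMethod(
    name="ToyRuleMiner",
    representation=Representation.SYMBOLIC,
    task="link prediction",
    foundation="rule-based learning",
    comprehensibility=Comprehensibility.IML,
)
print(example)
```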

This taxonomy provides a framework for systematically organizing CAI methods on knowledge graphs and clearly identifying the characteristics of each method. This allows researchers to understand the differences and relationships among CAI methods, which is useful for developing new methods and improving existing ones. The taxonomy can also serve as a tool for a bird's-eye view of research trends in CAI.

Results

The survey identifies three lineages of IML research: rule mining, pathfinding, and embedding-based methods. XAI research, in turn, falls into four lineages: rule-based learning, decomposition methods, proxy models, and graph generation.


Figure 6.

Figure 6 summarizes the lineages of IML and XAI research, clarifying the characteristics of each approach and how they relate to one another.
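
To illustrate the pathfinding lineage mentioned above, here is a minimal sketch assuming a toy graph (the biomedical triples are invented): a breadth-first search whose relation path doubles as a human-readable explanation of why two entities are connected.

```python
from collections import deque

# Toy KG as adjacency lists: entity -> list of (relation, neighbor).
kg = {
    "Aspirin": [("inhibits", "COX-2")],
    "COX-2": [("mediates", "Inflammation")],
    "Inflammation": [("causes", "Headache")],
}

def explain_path(start: str, goal: str):
    """BFS over the KG; the returned chain of triples serves as a
    human-readable explanation of how start relates to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in kg.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

print(explain_path("Aspirin", "Headache"))
# [('Aspirin', 'inhibits', 'COX-2'), ('COX-2', 'mediates', 'Inflammation'),
#  ('Inflammation', 'causes', 'Headache')]
```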


Figure 10.

Figure 10 shows a heat map summarizing the surveyed papers in terms of representation, task, and underlying method. The heat map is shown separately for IML and XAI, with the color intensity of each cell indicating how concentrated research is in that combination. This makes current research trends and blank areas visible at a glance. For example, few XAI papers address link prediction, and few use symbolic or neuro-symbolic representations. Conversely, IML research has few papers on graph clustering, and many of its studies are based on rule-based learning.

Together, these figures provide an overview of CAI research on knowledge graphs, highlighting the characteristics of each approach and how they relate, as well as current research trends and future research opportunities.

Future Outlook

The authors identify the following issues for future research on CAI on knowledge graphs:

1. Applying XAI methods to link prediction
2. Improving explanations using the semantic information of knowledge graphs
3. Establishing common criteria for the comparative evaluation of XAI methods
4. Applying IML methods to graph clustering
5. Improving how IML model interpretations are communicated to users

The authors argue that the semantic information in knowledge graphs can be used to improve the safety of AI systems.

 
