Conference poster, 2022

Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI

Abstract

During the learning process, a child develops a mental representation of the task they are learning. A machine learning algorithm likewise develops a latent representation of the task it learns. We investigate the knowledge construction of an artificial agent (AA), drawing inspiration from that of children, through the analysis of its behavior, i.e., its sequences of moves. We focus on the Tower of Hanoi (TOH) task, a well-known transformation problem in the field of problem-solving and one of the fundamental tasks used to study how children construct knowledge about their world and the Aha! phenomenon. We define knowledge here as a set of facts, information, and skills acquired through experience by the AA that contribute to a theoretical or practical understanding of a subject or of the world.

The main contribution of our work is a 3-step end-to-end methodology for knowledge extraction from an AA, named Implicit Knowledge Extraction with eXplainable Artificial Intelligence (IKE-XAI). IKE-XAI extracts the implicit knowledge encoded by the AA during learning, in the form of an automaton. We showcase this technique on the TOH task in the setting where researchers have access only to the agent's moves, i.e., its observable behavior, as in human-machine interaction. The 3 steps of IKE-XAI are: first, a Q-learning agent learns to perform the TOH task; second, a recurrent neural network with LSTM units is trained on the agent's move sequences to encode an implicit representation of the task; and third, an XAI process applies a post-hoc implicit rule extraction algorithm to obtain graph representations (finite state automata, FSA) as visual and explicit explanations of the Q-learning agent's behavior. This methodology blends neural and symbolic (in our case FSA) components to provide more interpretable model outcomes.

At the experimental level, we demonstrate that the AA's view of a simple task (TOH with N=3 disks) and of more complex ones (TOH with N=4 and N=6 disks) can be extracted, in the form of FSAs representing the AA's problem-solving strategies, for the sake of explainability. As the average number of moves required to complete the task decreases, i.e., as expertise is acquired, we also observed changes in the FSAs extracted at different moments of this acquisition: analysis of their characteristics shows changes in the number of nodes and in the transition weights. Regarding the Aha! moment, in all three experimental settings our analyses lead us to conclude that the Aha! moment for an AA occurs when it changes its behavior in a noticeable way, which translates into a significant change in the extracted FSAs followed by their stabilization.

Our experiments show that the IKE-XAI approach helps to understand the development of the Q-learning agent's behavior by providing a global explanation of its knowledge evolution during learning. IKE-XAI also allows researchers to identify the agent's Aha! moment by determining the point from which the knowledge representation stabilizes and the agent no longer learns. In conclusion, we showed that IKE-XAI makes it possible to elucidate the evolution of knowledge acquisition in a learning AA through the study of its behavior over time, in terms of an extracted, synthesizing FSA. This allows us to convey, in a symbolic manner, a more explainable view of AAs. This work also sheds light on the Aha! moment for autonomous agents and, beyond that, prompts reflection on how insight should be defined for an autonomous artificial agent. The convergence of models is thus of interest for the study of this phenomenon in autonomous artificial agents and, more globally, for the question of explainability.
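To give a concrete picture of the first step, here is a minimal sketch of a tabular Q-learning agent for TOH. Everything in it, including the state encoding as peg tuples, the reward scheme, and the hyperparameters, is an illustrative assumption; the poster does not specify the authors' exact setup.

```python
import random
from collections import defaultdict

N_DISKS = 3
# A state is a tuple of three pegs; each peg is a tuple of disks, largest at the bottom.
START = (tuple(range(N_DISKS, 0, -1)), (), ())
GOAL = ((), (), tuple(range(N_DISKS, 0, -1)))

def legal_moves(state):
    """All (source, target) peg pairs whose move respects the TOH rules."""
    moves = []
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                moves.append((src, dst))
    return moves

def apply_move(state, move):
    src, dst = move
    pegs = [list(p) for p in state]
    pegs[dst].append(pegs[src].pop())
    return tuple(tuple(p) for p in pegs)

Q = defaultdict(float)  # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.5, 0.95, 0.1  # assumed hyperparameters

for episode in range(3000):
    state = START
    for step in range(200):  # cap episode length
        moves = legal_moves(state)
        # epsilon-greedy action selection
        if random.random() < eps:
            action = random.choice(moves)
        else:
            action = max(moves, key=lambda a: Q[(state, a)])
        nxt = apply_move(state, action)
        reward = 100.0 if nxt == GOAL else -1.0
        best_next = 0.0 if nxt == GOAL else max(Q[(nxt, a)] for a in legal_moves(nxt))
        # Standard Q-learning update
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
        if state == GOAL:
            break
```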
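The second step might look like the following sketch, which assumes the agent's moves are tokenized into the six possible (source peg, target peg) pairs and an LSTM is trained to predict the next move. The PyTorch framework, the model sizes, and the placeholder training data are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_MOVES = 6  # the six possible (source peg, target peg) pairs

class MovePredictor(nn.Module):
    """LSTM that predicts the agent's next move from the moves so far."""
    def __init__(self, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(N_MOVES, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, N_MOVES)

    def forward(self, seqs):
        hidden, _ = self.lstm(self.embed(seqs))  # (batch, time, hidden_dim)
        return self.head(hidden), hidden

model = MovePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# `sequences`: LongTensor (batch, T) of move tokens recorded from the Q-learning
# agent; random placeholder data is used here purely for illustration.
sequences = torch.randint(0, N_MOVES, (64, 20))

for epoch in range(10):
    logits, _ = model(sequences[:, :-1])  # predict move t+1 from moves up to t
    loss = loss_fn(logits.reshape(-1, N_MOVES), sequences[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```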
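For the third step, a common family of post-hoc rule extraction algorithms quantizes the RNN's hidden states (for instance with k-means clustering) and reads off weighted transitions between the resulting clusters. The sketch below follows that generic recipe rather than the poster's specific algorithm; `extract_fsa` and its arguments are hypothetical names.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def extract_fsa(hidden_states, move_tokens, n_states=8):
    """Quantize RNN hidden states into discrete automaton states and count
    weighted transitions (state_t, move) -> state_t+1.

    hidden_states: float array (batch, T, hidden_dim) from the trained LSTM
    move_tokens:   int array (batch, T) of the corresponding move symbols
    """
    batch, T, dim = hidden_states.shape
    labels = KMeans(n_clusters=n_states, n_init=10).fit_predict(
        hidden_states.reshape(-1, dim)).reshape(batch, T)
    transitions = Counter()
    for b in range(batch):
        for t in range(T - 1):
            src, dst = labels[b, t], labels[b, t + 1]
            symbol = move_tokens[b, t + 1]  # the move consumed on this transition
            transitions[(src, symbol, dst)] += 1
    return transitions  # weighted edges of the extracted FSA

# Illustration with random stand-ins for the LSTM's hidden states and inputs:
h = np.random.randn(4, 20, 32)
m = np.random.randint(0, 6, (4, 20))
fsa_edges = extract_fsa(h, m)
```

Comparing the node counts and edge weights of FSAs extracted at successive training checkpoints is then one way to observe the behavioral shift and subsequent stabilization that the abstract associates with the Aha! moment.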
Main file: 2022-11-23_Explaining_Aha_TOH_WIML_NEURIPS_2022_poster_LANDSCAPE.pdf (3.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03875691, version 1 (28-11-2022)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-03875691, version 1

Cite

Ikram Chraibi Kaadoud, Adrien Bennetot, Barbara Mawhin, Vicky Charisi, Natalia Díaz-Rodríguez. Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI. Women in Machine Learning (WiML) @ NeurIPS 2022, Nov 2022, Louisiana, United States. 2022. ⟨hal-03875691⟩
