T14: Human-Centered Explainable AI

Monday, 24 July 2023, 13:30 - 17:30 CEST (Copenhagen)

Wojciech Samek

Fraunhofer Heinrich Hertz Institute, Germany

Objectives:

  • Present the goals of and requirements for human-centered XAI
  • Survey the main approaches to XAI (attribution maps, explanation by example, concept visualizations)
  • Introduce and critically discuss selected state-of-the-art explanation methods
  • Cover recent developments in XAI, in particular new approaches to interactive and human-understandable explanation
  • Present application use cases of XAI (e.g., health, geoscience) and introduce recent XAI tools and toolboxes
  • Outline promising future developments

Content and Benefits:

In this tutorial, we will first present the motivations and requirements for human-centered XAI. We will then critically discuss different approaches to explaining models and their predictions, including attribution maps, explanation by example, and concept visualizations. This overview will be embedded in a broader discussion of explanation as a process of human-machine interaction.

In the second part of the tutorial, we will present several XAI methods that successfully cope with the highly nonlinear ML models used in practice and discuss their theoretical underpinnings. We will also present exemplar applications of XAI in disciplines such as health and geoscience.

In the third part of the tutorial, we will focus on recent developments in XAI. In particular, we will present an approach that delivers more human-understandable explanations (e.g., in terms of human-understandable concepts), discuss its applications, and introduce a recent toolbox implementing this approach. Furthermore, we will cover interactive XAI approaches, which enable the human user to interact with the ML model in a targeted manner. Here we will present different use cases where XAI is used to actually improve the generalization ability, robustness, and fairness of ML models. Finally, we will carefully assess these recent developments and give an outlook on future applications of XAI.
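To make the notion of an attribution map concrete, the following minimal sketch (not part of the tutorial materials; the model, input shape, and chosen class are illustrative assumptions) computes a simple gradient x input explanation for a PyTorch classifier:

    # Minimal sketch: gradient x input attribution for an arbitrary
    # differentiable classifier. Model and input below are placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(          # stand-in for any trained classifier
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )
    model.eval()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy input image
    target_class = 3                                  # class to be explained

    score = model(x)[0, target_class]  # scalar logit of the class of interest
    score.backward()                   # computes d(score)/d(input) via autograd

    # Attribution map: the elementwise product of gradient and input assigns
    # each pixel a relevance score for the selected prediction.
    attribution = (x.grad * x).detach().squeeze()     # shape: (28, 28)

Attribution methods from the literature, such as layer-wise relevance propagation, refine this baseline idea with purpose-built backward rules rather than the raw gradient.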


Target Audience:

This tutorial targets core as well as applied ML researchers. Core machine learning researchers may be interested in learning about the connections between the different explanation methods and the broad set of open questions, in particular how to extend XAI to new ML algorithms. Applied ML researchers may find it interesting to understand the strong assumptions behind standard validation procedures and why interpretability can be useful to further validate their models. They may also discover new tools to analyze their data and extract insights from them. Participants will benefit from having a technical background (computer science or engineering) and basic ML training.

Bio Sketch of Presenter:

Wojciech Samek is a professor in the Department of Electrical Engineering and Computer Science at the Technical University of Berlin and jointly heads the Department of Artificial Intelligence at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. He studied computer science at Humboldt University of Berlin, Heriot-Watt University, and the University of Edinburgh, and received the Dr. rer. nat. degree with distinction (summa cum laude) from the Technical University of Berlin in 2014. During his studies he was awarded scholarships from the German Academic Scholarship Foundation and the DFG Research Training Group GRK 1589/1, and was a visiting researcher at NASA Ames Research Center, Mountain View, USA. Dr. Samek is associated faculty at BIFOLD - the Berlin Institute for the Foundations of Learning and Data, the ELLIS Unit Berlin, and the DFG Graduate School BIOQIC, and a member of the scientific advisory board of IDEAS NCBR - the Polish Centre of Innovation in the Field of Artificial Intelligence. Furthermore, he is a senior editor of IEEE TNNLS, an editorial board member of Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee and of Germany's Platform for Artificial Intelligence. He is the recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award and the 2022 Digital Signal Processing Best Paper Prize, and is part of the expert group developing the ISO/IEC MPEG-17 NNR standard. He is the lead editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" (2019), co-editor of the open access Springer book "xxAI - Beyond Explainable AI" (2022), and an organizer of various special sessions, workshops, and tutorials on topics such as explainable AI, neural network compression, and federated learning. Dr. Samek has co-authored more than 150 peer-reviewed journal and conference papers, some of which are listed as ESI Hot Papers (top 0.1%) or Highly Cited Papers (top 1%).