Recorded 12 January 2023. Oliver Eberle of Technische Universität Berlin presents “Explainable structured machine learning in similarity, graph and transformer models” at IPAM’s Explainable AI for the Sciences: Towards Novel Insights Workshop.
Abstract: Many widely used models, such as deep similarity models, GNNs, and Transformers, are highly non-linear and structured in ways that challenge the extraction of meaningful explanations. This presentation outlines explanation techniques that take the particular model structure into account within the framework of layer-wise relevance propagation. This motivates going beyond standard explanations in terms of input features toward second-order and higher-order attributions, and extending existing approaches for evaluating and visualizing explanations to these new types. Using these methods, a selection of research use cases is presented, e.g. quantifying knowledge evolution in early modern times, studying gender bias in language models, and probing Transformer explanations during task-solving. The presentation thus highlights that a careful treatment of model structure in XAI can improve the faithfulness of explanations, yield better attributions, and enable novel insights.
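As context for the layer-wise relevance propagation (LRP) framework mentioned in the abstract, the following is a minimal sketch (not the speaker's code) of the LRP-epsilon rule for a single linear layer, assuming NumPy; the function name `lrp_epsilon` and the toy data are illustrative only:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out to the inputs a through a
    linear layer z = a @ W + b, using the LRP-epsilon rule."""
    z = a @ W + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # stabilized relevance ratios
    return a * (s @ W.T)                # relevance assigned to inputs

# Toy example: 3 inputs, 2 outputs (hypothetical data).
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.random((3, 2))
b = np.zeros(2)
R_out = a @ W + b                 # start from the output scores
R_in = lrp_epsilon(a, W, b, R_out)

# With a small eps and zero bias, LRP is (approximately) conservative:
# the input relevances sum to the total output relevance.
print(np.allclose(R_in.sum(), R_out.sum()))
```

Applying this rule layer by layer through a network yields input-feature attributions; the second- and higher-order attributions discussed in the talk extend this idea to interactions between features.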
Learn more online at: http://www.ipam.ucla.edu/programs/workshops/explainable-ai-for-the-sciences-towards-novel-insights/
