Recorded 9 January 2023. Jaesik Choi of the Korea Advanced Institute of Science and Technology presents “Explainable Artificial Intelligence to Analyze Internal Decision Mechanism of Deep Neural Networks” at IPAM’s Explainable AI for the Sciences: Towards Novel Insights Workshop.
Abstract: As complex artificial intelligence (AI) systems such as deep neural networks are used for mission-critical tasks in domains such as the military, finance, human resources, and autonomous driving, it is important to ensure the safe use of such complex AI systems. In this talk, we will present recent advances in clarifying the internal decisions of deep neural networks. Moreover, we will overview approaches to automatically correct internal nodes that produce artifacts or less reliable outputs. Furthermore, we will investigate why some deep neural networks contain such unstable internal nodes.
Learn more online at: http://www.ipam.ucla.edu/programs/workshops/explainable-ai-for-the-sciences-towards-novel-insights/
