Recent deep learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. In this talk, we will begin by discussing how to define interpretable machine learning in general and then describe our agglomerative contextual decomposition (ACD) method for interpreting neural networks. ACD attributes importance to features and feature interactions for individual predictions; it has yielded insights in NLP and computer vision problems and can be used directly to improve generalization.
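
As a rough, hypothetical illustration of what per-prediction feature and interaction attributions look like (a generic occlusion-style sketch with made-up weights, not the ACD algorithm itself), consider scoring individual words and word pairs of one input to a toy sentiment model:

```python
# Hypothetical sketch: occlusion-style scores for words and word pairs on a
# toy sentiment model. This is NOT the ACD algorithm, only an illustration of
# feature- and interaction-level attributions for a single prediction.
from itertools import combinations

# Toy "model" with hand-picked weights (assumption); "not" flips the sign of
# the word that follows it, which creates a genuine interaction.
WEIGHTS = {"bad": -1.0, "great": 2.0, "movie": 0.1}

def predict(tokens):
    score, negate = 0.0, False
    for t in tokens:
        if t == "not":
            negate = True
            continue
        w = WEIGHTS.get(t, 0.0)
        score += -w if negate else w
        negate = False
    return score

sentence = ["not", "bad", "movie"]
full = predict(sentence)

# Per-word importance: drop in the prediction when that word is removed.
word_scores = {
    w: full - predict([t for t in sentence if t != w]) for w in sentence
}

# Pairwise interaction: effect of removing both words together, minus the sum
# of their individual effects (the non-additive part of the prediction).
pair_scores = {}
for a, b in combinations(sentence, 2):
    joint = full - predict([t for t in sentence if t not in (a, b)])
    pair_scores[(a, b)] = joint - word_scores[a] - word_scores[b]

print("prediction:", full)
print("word importances:", word_scores)
print("pair interactions:", pair_scores)
```

In this sketch, the pair ("not", "bad") receives a large interaction score because neither word alone explains the positive prediction; ACD produces analogous phrase-level attributions, but via a different, hierarchical decomposition of the network itself.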

We will then focus on interpretable machine learning for science. Building on ACD's extension to the scientifically meaningful frequency domain, we have developed an adaptive wavelet distillation (AWD) interpretation method. AWD is shown to be interpretable while outperforming deep neural networks on two prediction problems, one from cosmology and one from cell biology.
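
The details of AWD are beyond this abstract, but as a minimal sketch of the general idea of predicting from an interpretable wavelet representation (using PyWavelets and scikit-learn purely as stand-ins, not the authors' code, with a fixed rather than adaptive wavelet), one might fit a sparse linear model on wavelet coefficients of each signal:

```python
# Minimal sketch (assumption, not the AWD implementation): represent signals
# by their discrete wavelet coefficients and fit a sparse, interpretable
# linear model on those coefficients.
import numpy as np
import pywt                          # PyWavelets
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def wavelet_features(x, wavelet="db2", level=3):
    """Flatten the multi-level discrete wavelet coefficients of a 1-D signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.concatenate(coeffs)

# Synthetic data: the target depends on a localized segment of the signal.
n, length = 200, 64
X = rng.normal(size=(n, length))
y = X[:, 16:24].sum(axis=1) + 0.1 * rng.normal(size=n)

features = np.stack([wavelet_features(x) for x in X])
model = Lasso(alpha=0.01, max_iter=10_000).fit(features, y)

# The few nonzero coefficients point to the scales and locations that matter.
print("nonzero wavelet coefficients:", int(np.sum(model.coef_ != 0)))
```

The point of the sketch is only that wavelet coefficients are localized in time/space and frequency, so a sparse model over them can be read off directly; AWD instead adapts the wavelet itself to distill a trained neural network.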

Finally, we will address the need to quality-control the entire data science life cycle in order to build any model whose interpretations can be trusted.

Introduction

This conference, organized under the auspices of the Isaac Newton Institute "Mathematics of Deep Learning" Programme, brings together leading researchers along with other stakeholders in industry and society to discuss issues surrounding trustworthy artificial intelligence.

The conference will survey the state of the art across the broad area of trustworthy artificial intelligence, including accountability, fairness, privacy, and safety in machine learning; it will also highlight emerging directions in the field and engage with academia, industry, policy makers, and the wider public.
