Interpreting deep neural networks towards trustworthiness – Bin Yu, University of California
We will then focus on interpretable machine learning for science. Building on ACD's extension to the scientifically meaningful frequency domain, we develop an adaptive wavelet distillation (AWD) interpretation method. AWD is shown both to outperform deep neural networks and to be interpretable in two prediction problems, one from cosmology and one from cell biology.
Finally, we will address the need to quality-control the entire data science life cycle in order to build any model worthy of trustworthy interpretation.
Introduction
This conference – organized under the auspices of the Isaac Newton Institute "Mathematics of Deep Learning" Programme – brings together leading researchers, along with other stakeholders in industry and society, to discuss issues surrounding trustworthy artificial intelligence.
The conference will survey the state of the art across the wide area of trustworthy artificial intelligence, including accountability, fairness, privacy, and safety in machine learning; it will highlight emerging directions in the field and engage with academia, industry, policy makers, and the wider public.