Deep learning is increasingly being used for challenging problems in scientific computing. Theoretically, such efforts are supported by a large and growing body of literature on the existence of deep neural networks with favourable approximation properties. Yet, these results often say very little about practical performance in terms of the traditional pillars of numerical analysis: accuracy, stability, sampling complexity and computational cost. In this talk, I will focus on two distinct problems in scientific computing to which deep learning is being actively applied: high-dimensional function approximation and inverse problems for imaging. In each case, I will first highlight several limitations of current approaches in terms of stability, unpredictable generalization and/or the gap between existence theory and practical performance. Then, I will showcase recent theoretical contributions showing that deep neural networks matching the performance of best-in-class schemes can be computed in both settings. This highlights the potential of deep neural networks, and sheds light on how to achieve robust, reliable and overall improved practical performance.
This talk is based on joint work with Vegard Antun, Simone Brugiapaglia, Nick Dexter, Nina M. Gottschling, Anders C. Hansen, Sebastian Moraga and Maksym Neyra-Nesterenko.
Introduction
This conference, organized under the auspices of the Isaac Newton Institute "Mathematics of Deep Learning" Programme, brings together leading researchers along with other stakeholders in industry and society to discuss issues surrounding trustworthy artificial intelligence.
The conference will survey the state of the art across the wide area of trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety; it will also examine emerging directions in the field and engage with academia, industry, policy makers, and the wider public.