Machine learning in healthcare: From interpretability to a new human-machine partnership
Machine learning has the potential to revolutionise areas such as healthcare. However, this opportunity comes with its own unique challenges. Prominent among these is interpretability: taking the workings of complex “black box” machine learning models and making them readily understandable to a multitude of users.
The value of interpretability as a broad concept is clear. Yet there is no single “type” of interpretability: there are many ways to extract and present information from a model’s output, and many kinds of information one might choose to extract.
This high-level talk proposes a unique and coherent framework for categorising and developing interpretable machine learning models based on the needs of their users. We will demonstrate this framework with a range of examples from our lab’s extensive research into interpretability and from our ongoing interdisciplinary discussions with members of the clinical and other non-ML communities.
We will also touch on some exciting possibilities for applying interpretability and machine learning to understand and empower human decision-making.
This talk is intended for a broad audience, including those with little or no prior understanding of machine learning or healthcare.
Introduction
This conference, organised under the auspices of the Isaac Newton Institute “Mathematics of Deep Learning” Programme, brings together leading researchers and other stakeholders in industry and society to discuss issues surrounding trustworthy artificial intelligence.
The conference will survey the state of the art in trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety; it will also examine emerging directions in the field and engage with academia, industry, policy makers, and the wider public.