Should we care about machine learning model interpretability? Is it more relevant for some scenarios than others? And how do we know when we have actually achieved model understanding?

In this session, Professor Hima Lakkaraju answers these questions and demonstrates TalkToModel, an interactive dialogue system for explaining machine learning models through conversation.

Beyond being a compelling conversational explainable AI (XAI) interface, TalkToModel shows how language models can be used to interact with complex systems and make them accessible to a much wider audience.
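To make the idea concrete, here is a minimal toy sketch (not the actual TalkToModel system, which uses a language model to parse questions into a much richer set of operations) of mapping a natural-language question about a trained model to an explanation. The intent matching and response wording here are purely illustrative assumptions.

```python
# Toy sketch of a conversational model-understanding interface.
# NOT the TalkToModel implementation; the question-to-intent mapping
# below is a stand-in for the language-model parser described in the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def answer(question: str) -> str:
    """Very rough keyword-based intent matching in place of an LLM parser."""
    q = question.lower()
    if "important" in q and "feature" in q:
        # Rank features by the model's impurity-based importances.
        ranked = sorted(zip(data.feature_names, model.feature_importances_),
                        key=lambda pair: pair[1], reverse=True)[:3]
        return "Top features: " + ", ".join(
            f"{name} ({score:.3f})" for name, score in ranked)
    if "accuracy" in q or "how good" in q:
        return f"Training accuracy: {model.score(data.data, data.target):.3f}"
    return "I can only answer questions about feature importance or accuracy."

print(answer("Which features matter most to the model?"))
print(answer("How good is the model?"))
```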

==

Join the Cohere Discord: https://discord.gg/co-mmunity
Discussion thread for this episode (feel free to ask questions):
https://discord.com/channels/954421988141711382/1083404412086661230

==
Contents
0:00 Introducing Hima Lakkaraju
1:34 Why model understanding is critical for high-stakes decision-making with ML
4:12 Examples: Why we need model understanding
11:17 How to achieve ML model understanding?
16:51 Explaining individual model decisions based on inputs
22:05 The LIME Explainability method
27:23 What to do when explainability methods disagree?
47:33 Conversational interfaces for model understanding
57:53 Conclusion and Summary
