Should we care about machine learning model interpretability? Is it more relevant for some scenarios than others? And how can we tell whether we have actually achieved model understanding?
In this session, Professor Hima Lakkaraju answers these questions and demonstrates TalkToModel, an interactive dialogue system for explaining machine learning models through conversations.
Beyond offering a compelling conversational explainable AI (XAI) interface, TalkToModel shows how language models can be used to interact with complex systems and make them accessible to a wide audience.
==
Join the Cohere Discord: https://discord.gg/co-mmunity
Discussion thread for this episode (feel free to ask questions):
https://discord.com/channels/954421988141711382/1083404412086661230
Check out Hima’s work here: https://himalakkaraju.github.io/
Follow her here: https://twitter.com/hima_lakkaraju