In her talk “Responsible AI – getting the human back into the loop”, Simone Stumpf of the University of Glasgow traced the research from which she developed four principles governing explanations in AI systems. Explanations need to be: sound (faithful to the underlying machine-learning algorithm), complete (explaining the training data upon which the predictions are made), iterative (revealing themselves incrementally) and not overwhelming for the users. In a series of experiments, Simone and her team showed that having users interact with and offer feedback to an AI system increases the accuracy of the system itself. They also found that the best feedback came from users who had a good understanding of how the system worked. In other words, these users had a so-called mental model of the system, which was assessed by asking them questions about how they understood the system to work. Interestingly, since her work has focused on interactive AI systems, Simone opted for the word “interpretability” rather than “explainability”: “explainability” is system-centric (the system provides explanations about itself), while “interpretability” is user-centric (the end user interacts with the system and helps develop explanations). Simone’s final message was that, because AI systems are complex socio-technical systems, we as AI researchers and developers need to involve end users throughout the whole AI lifecycle; making this happen is her next big research challenge.