Vector Institute Distinguished Lecture Series 2024-2025
Wednesday, November 6, 2024, 3:30 PM - 4:40 PM
Explanations have been proposed as a way to improve human+AI performance in the context of AI decision support. By providing context for an AI recommendation, the reasoning goes, people will be able to use the decision support to ultimately make better choices. However, many studies have established that reality does not pan out this way: not only does AI decision support often fail to improve human+AI decision quality, but it sometimes makes that quality worse.
In this talk, I will discuss how both the content and the delivery of an explanation can affect the quality of the human+AI decision, and how we can improve both. Regarding content, I will describe ongoing sim2real work in which we first optimize explanations for properties -- computational measures of content quality -- and then use an in-silico approach to determine which properties are likely to matter to users. This represents a paradigm shift: explanations become objects optimized to have certain qualities, rather than simply the output of a forward computation applied to a model. We then validate the best candidates in user studies. Regarding delivery, I will present recent studies in which we use machine learning to personalize delivery strategies to the needs of different users.
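As a rough illustration of the content-optimization idea, the minimal Python sketch below searches over candidate explanations (small feature subsets for a classifier) and keeps the one scoring best on two assumed properties: a fidelity-style stability measure and sparsity. The property definitions, weights, and function names here are illustrative assumptions for this sketch, not the speaker's actual measures or pipeline.

```python
# Hypothetical sketch of "explanations as optimized objects": instead of a
# fixed forward computation (e.g., gradients), we search over candidate
# explanations and keep the one scoring best on computational properties.
# Property definitions and weights are illustrative assumptions only.
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression().fit(X, y)

def fidelity(feature_subset, x, n_perturb=200):
    """Assumed property 1: if perturbing the *unselected* features barely
    changes the model's prediction, the subset captures local behavior."""
    X_pert = np.tile(x, (n_perturb, 1))
    unselected = np.ones(X.shape[1], dtype=bool)
    unselected[list(feature_subset)] = False
    X_pert[:, unselected] = rng.normal(size=(n_perturb, unselected.sum()))
    base = model.predict_proba(x[None, :])[0, 1]
    return -np.mean(np.abs(model.predict_proba(X_pert)[:, 1] - base))

def sparsity(feature_subset):
    """Assumed property 2: explanations citing fewer features are better."""
    return -len(feature_subset)

def optimize_explanation(x, max_size=3, w_fid=1.0, w_sp=0.05):
    """Exhaustively score small feature subsets; return the best one."""
    candidates = [
        s for k in range(1, max_size + 1)
        for s in combinations(range(X.shape[1]), k)
    ]
    return max(candidates,
               key=lambda s: w_fid * fidelity(s, x) + w_sp * sparsity(s))

print("explanation (feature indices):", optimize_explanation(X[0]))
```

In this framing, the in-silico step would vary the property weights (here w_fid and w_sp) to predict which trade-offs help simulated users, before running costlier human studies on the best candidates.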
* This event is open to the public, with an emphasis on graduate students in machine learning, computer science, ECE, statistics, mathematics, linguistics, and medicine, as well as PhD-level data scientists in the GTA.