
As AI models become increasingly powerful, it is an attractive proposition to use them in important decision-making pipelines, in collaboration with human decision makers. But how should a human being and a machine learning model collaborate to reach decisions that are better than either could achieve alone? If the human and the AI model were perfect Bayesians operating in a setting with a commonly known and correctly specified prior, Aumann's classical agreement theorem would give one answer: they could engage in conversation about the task at hand, and their conversation would be guaranteed to converge to (accuracy-improving) agreement. This classical result, however, rests on many implausible assumptions about the knowledge and computational power of both parties. We show how to recover similar (and more general) results under only computationally and statistically tractable assumptions, which substantially relax full Bayesian rationality. Joint work with Natalie Collina, Varun Gupta, and Surbhi Goel, based on a paper that will appear in STOC 2025.
*This event is open to the public with an emphasis on graduate students in machine learning, computer science, ECE, statistics, mathematics, linguistics, and medicine, as well as PhD-level data scientists in the GTA.