In safety-critical settings and decision-making tasks, it is often crucial to quantify the predictive uncertainty of machine learning models. Uncertainty estimates not only codify the trustworthiness of predictions, but also identify regions of the input space that would benefit from additional exploration. Unfortunately, quantifying neural network uncertainty has proven to be a longstanding challenge. In this talk, I will discuss criteria (beyond calibration) for uncertainty estimates to provide meaningful utility on downstream tasks and outcomes. I will demonstrate where existing methods fall short, and, more troublingly, I will discuss recent evidence that their efficacy will further decline as neural networks continue to grow in capacity. I will conclude with ideas for future directions, as well as a call for radically different uncertainty quantification approaches.
* This event is open to the public, with emphasis on graduate students in machine learning, computer science, ECE, statistics, mathematics, linguistics, and medicine, as well as PhD-level data scientists in the GTA.