Connect with Pioneering Minds Shaping the Future of AI Research
Join renowned researcher Kyunghyun Cho for an exclusive lecture as part of the Vector Distinguished Talk series—where breakthrough research meets real-world impact.
What to Expect
- Cutting-edge insights from a leading AI researcher
- Direct access to advanced research findings
Perfect for: Researchers, data scientists, AI practitioners, graduate students, and anyone passionate about the latest developments in machine learning.
Register now to secure your spot!
| Time | Session/Speaker |
|---|---|
| 11:00 AM | Lecture Talk: Kyunghyun Cho, Professor of Computer Science and Data Science at NYU and Executive Director of Frontier Research at Genentech |

Professor of Computer Science and Data Science, New York University; Executive Director of Frontier Research, Genentech
Kyunghyun Cho is a professor of computer science and data science at New York University and an executive director of frontier research at the Prescient Design team within Genentech Research & Early Development (gRED). He became the Glen de Vries Professor of Health Statistics in 2025. He is also a CIFAR Fellow of Learning in Machines & Brains and an Associate Member of the National Academy of Engineering of Korea. He served as a (co-)Program Chair of ICLR 2020, NeurIPS 2022, and ICML 2022. He was one of the three founding Editors-in-Chief of the Transactions on Machine Learning Research (TMLR) until 2024. He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving his MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He received the Samsung Ho-Am Prize in Engineering in 2021. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
Kyunghyun Cho, Professor of Computer Science and Data Science, New York University (NYU)
Talk Title: Reality Checks
Despite its amazing success, leaderboard chasing has become something researchers dread and mock. When implemented properly and executed faithfully, leaderboard chasing can lead to fast and easily reproducible progress in science, as is evident from the remarkable advances in machine learning, and artificial intelligence more broadly, in recent decades. That does not, however, mean that it is easy to implement and execute leaderboard chasing properly. In this talk, I will go over four case studies demonstrating issues that ultimately prevent leaderboard chasing from being a valid scientific approach.
The first case study concerns the lack of proper hyperparameter tuning in continual learning; the second, the lack of consensus on evaluation metrics in machine unlearning; the third, the challenge of properly evaluating the evaluation metrics themselves in free-form text generation; and the final one, wishful thinking. By going over these cases, I hope we can collectively acknowledge some of our own fallacies, reflect on their underlying causes, and come up with better ways to approach artificial intelligence research.