Seminars and Conferences

Truth in Modeling: How close do we actually get to what we want to model?

On Monday, 2 February 2026, at 4:30 p.m., an online seminar entitled Truth in Modeling: How close do we actually get to what we want to model? will be held by Professor Dirk Hovy.

Abstract

To model any construct (e.g., hate speech, stance, logic, mental health), we need to make a few assumptions: the construct should exist, it should be modelable, and we should be able to reliably detect it. In a word, it should be true. However, modeling truth in practice is much less of a binary notion than we would want it to be. In this talk about completed and ongoing work in my lab, I will show how many of our assumptions about construct validity and truth need to be reexamined for annotation, benchmark datasets, LLM output, LLM rankings, and even publication. I will show evidence of assumptions gone wrong, and methods to control for and mitigate these effects. Importantly, none of this should discourage us from seeking to model the truth. But it should make us even more keenly aware of the subtle (and not so subtle) effects that make this noble pursuit even more difficult than it already is.

Speaker: Dirk Hovy - Università Bocconi (Milan)

Biography
Dirk Hovy is a professor in the Computing Sciences Department of Bocconi University, the Dean for Digital Transformation and AI, and the scientific director of the Data and Marketing Insights research unit. Previously, he was faculty at the University of Copenhagen, received a PhD from USC's Information Sciences Institute, and a linguistics master's degree from the University of Marburg in Germany. He is interested in what computers can tell us about language and what language can tell us about society. He is also interested in ethical questions of bias and algorithmic fairness in machine learning. Dirk has authored over 150 articles on these topics, two textbooks on NLP in Python, and a forthcoming book on the limitations of AI. He has co-founded and organized several workshops (on computational social science and ethics in NLP), was a local organizer of EMNLP 2017, and was general chair of EMNLP 2025. He was awarded an ERC Starting Grant in 2020 for research on demographic bias in NLP. In his spare time, he enjoys cooking, leather working, and picking up heavy things to put them back down.

The seminar will take place online at this link.

For more information, please contact Professor Daniele Quercia.