The AI Methods Series brings together faculty to share their research and highlight innovative applications of artificial intelligence across disciplines, with a particular emphasis on advancing questions in the social and behavioral sciences.
This online session features Nicholas Beauchamp, Associate Professor in the Department of Political Science. Professor Beauchamp’s research examines how political opinions form and change through discussion, deliberation, and argument in settings such as legislatures, political campaigns, and social media. His work draws on methods from natural language processing, Bayesian statistics, and network analysis to illuminate the dynamics of political communication and behavior.
Watch the Seminar
Access the full seminar recording to explore how theory-informed AI models can help uncover subtle, context-dependent bias in media and large language models.
Seminar
Media Bias, AI Bias, Human Bias: Using Black Boxes to Understand Black Boxes
How do we measure ideological “bias” in the media? Early computational work attempted to measure ideological bias via word lists or supervised machine learning, and more recent efforts employ LLMs to better detect subtle or context-dependent bias. But these AI-based approaches typically share the same weaknesses as asking a human to score documents, trading simple but transparent models for black-box subjectivity. Efforts to assess the ideological bias of the AI in turn simply push the problem back another layer, relying on yet more subjective human judgment or additional layers of AI, and revealing huge variability both across and within models. Instead, in our work we have added theory-informed structure to LLMs in order to detect subtle, context-dependent bias while producing transparent and explainable output. The core of this work is modeling selection effects, where overt language may appear neutral and unbiased, but ideological effects arise through the inescapable process of selecting which subset of events within a larger news story to report. In this talk I will discuss a series of models we have developed that infer entities, events, sentiment, and morals in order to illuminate these selection effects. I will also discuss the ongoing limitations of our models regarding transparency, and the continuing need for simpler but more transparent models. I conclude with some thoughts on the ways in which these problems are not specific to the media, but apply equally to efforts to measure ideology in AI or in human minds.
Contact Olivia Olvera (OliviaO@illinois.edu) with questions.