
Artificial intelligence is increasingly shaping how social and behavioral scientists analyze large datasets and examine human behavior. At the Center for Social & Behavioral Science (CSBS), two recent events explored both the promise and the practical challenges of these methods—from interpreting bias in media coverage to applying machine learning models in research.
Together, the two events offered complementary perspectives: one examining how AI can help researchers interpret complex social phenomena, and the other demonstrating how machine learning tools can be applied in research projects.
Understanding Bias in Media, AI, and Human Judgement
To explore some of the conceptual challenges surrounding AI and computational analysis, CSBS hosted a session in the AI Methods Series featuring Nicholas Beauchamp, Associate Professor in the Department of Political Science at Northeastern University. In his talk, “Media Bias, AI Bias, Human Bias: Using Black Boxes to Understand Black Boxes,” Beauchamp examined how researchers attempt to measure ideological bias in media using computational tools.
Traditional approaches often rely on indicators—such as counting politically associated words or using supervised machine learning to assign ideological scores to articles. While these methods can produce a measure of bias, Beauchamp explained that they often fail to capture the subtler ways bias can emerge in news coverage.
The basic problem for LLMs is the same as the problem for humans—you can ask them questions and they will always answer, but you can’t necessarily trust the answer because you don’t know what’s going on inside the black box.
Nicholas Beauchamp
Associate Professor, Political Science (Northeastern University)
One of the key challenges is what Beauchamp describes as “selection effects.” Even when journalists aim to write neutral and balanced stories, they still must choose which events, people, and actions to include in a report. That process of selecting what to cover—and what to leave out—can shape the overall narrative of a story.
For any news event, there’s always a large universe of things happening—but journalists have to choose a subset of those events to report on. That picking and choosing is necessarily informed by their values.
Nicholas Beauchamp
To better capture these dynamics, Beauchamp and his collaborators developed computational models that move beyond simple word counts. Their approach uses AI to identify entities, actions, and relationships within a news article, building networks that show how different actors interact, and how those interactions are framed.
For example, rather than simply asking whether a news outlet uses more positive or negative language overall, the models examine patterns such as which political actors are described positively or negatively, and how those patterns differ across media outlets. This allows researchers to detect more subtle asymmetries—for instance, differences in how outlets describe actions taken by co-partisans versus political opponents.
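The aggregation step described above can be illustrated with a toy sketch. This is not Beauchamp's actual pipeline: it assumes an upstream NLP model has already extracted (outlet, actor, action) triples from articles, and the outlet names, actor names, and the small `TONE` lexicon are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical extracted triples: (outlet, actor, action verb).
# In a real pipeline, an NLP model would produce these from article text.
triples = [
    ("Outlet X", "Senator A", "praised"),
    ("Outlet X", "Senator B", "attacked"),
    ("Outlet Y", "Senator A", "blamed"),
    ("Outlet Y", "Senator B", "defended"),
]

# Toy tone lexicon: +1 for favorable framing, -1 for unfavorable.
TONE = {"praised": 1, "defended": 1, "attacked": -1, "blamed": -1}

def tone_by_outlet_actor(triples):
    """Aggregate a net tone score for each (outlet, actor) pair."""
    scores = defaultdict(int)
    for outlet, actor, verb in triples:
        scores[(outlet, actor)] += TONE.get(verb, 0)
    return dict(scores)

print(tone_by_outlet_actor(triples))
# → {('Outlet X', 'Senator A'): 1, ('Outlet X', 'Senator B'): -1,
#    ('Outlet Y', 'Senator A'): -1, ('Outlet Y', 'Senator B'): 1}
```

Comparing these per-actor scores across outlets, rather than a single overall sentiment score per article, is what lets researchers spot the asymmetries described above.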
Beauchamp also explored how these methods can be extended to analyze moral values expressed in news coverage, identifying themes such as harm, fairness, and care within reporting. By linking these moral signals to the entities and events described in an article, researchers can better understand how values and ideology shape the way political events are framed.
Beauchamp wrapped up the conversation by emphasizing that these methods raise broader questions about transparency and interpretation. While large language models can uncover patterns at a scale impossible for human analysts, they can also function as “black boxes,” making it difficult to fully explain how conclusions are generated. For social scientists, balancing the power of advanced AI tools with the need for interpretation and transparent analysis remains an ongoing challenge.

Hands-On Learning with Machine Learning Methods
Building on an introductory machine learning session held in October 2025, this workshop—led by Jeff Levy, Assistant Instructional Professor at the University of Chicago Harris School of Public Policy—offered a deeper dive into random forest models and other predictive techniques. Participants explored how these methods can help researchers identify patterns in complex datasets and extend the analytical tools already used in social science research.
The session brought together faculty, students, and researchers interested in incorporating machine learning techniques into their own projects. Through theory and discussion, participants were introduced to key concepts and approaches that can support predictive analysis in the social and behavioral sciences.
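To give a sense of the core idea behind random forests—many randomized trees trained on bootstrap samples, combined by majority vote—here is a minimal pure-Python sketch using one-split "stumps" in place of full decision trees. This is a teaching toy, not the workshop's materials; in practice one would use a library such as scikit-learn, and the dataset here is invented.

```python
import random
from collections import Counter

def fit_stump(X, y, feat):
    """Find the best single threshold split on one feature; each side
    of the split predicts its majority class."""
    best = None
    for t in sorted({row[feat] for row in X}):
        left = [lab for row, lab in zip(X, y) if row[feat] <= t]
        right = [lab for row, lab in zip(X, y) if row[feat] > t]
        if not left or not right:
            continue
        acc = (Counter(left).most_common(1)[0][1] +
               Counter(right).most_common(1)[0][1]) / len(y)
        if best is None or acc > best[0]:
            best = (acc, t,
                    Counter(left).most_common(1)[0][0],
                    Counter(right).most_common(1)[0][0])
    if best is None:  # feature is constant: fall back to the majority class
        maj = Counter(y).most_common(1)[0][0]
        return (feat, float("inf"), maj, maj)
    return (feat,) + best[1:]

def fit_forest(X, y, n_trees=25, seed=0):
    """Train n_trees stumps, each on a bootstrap resample of the data
    and a randomly chosen feature."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        feat = rng.randrange(len(X[0]))  # random feature per tree
        forest.append(fit_stump(Xb, yb, feat))
    return forest

def predict(forest, row):
    """Majority vote across all stumps in the forest."""
    votes = [l if row[f] <= t else r for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]

# Tiny made-up dataset: the class depends mainly on the first feature.
X = [[1, 3], [2, 8], [3, 1], [6, 4], [7, 9], [8, 2]]
y = [0, 0, 0, 1, 1, 1]
forest = fit_forest(X, y)
print(predict(forest, [8, 5]))
```

Real random forests grow full trees and sample a random feature subset at every split rather than one feature per tree, but the bootstrap-plus-vote structure is the same.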
With support from Illinois Computes, attendees were introduced to Jupyter Research Notebooks for Python and R and explored the computing resources available through the National Center for Supercomputing Applications (NCSA), including high-performance computing, GPUs, and storage for large-scale data analysis.
Equally important, the workshop created space for participants to discuss their own research ideas, exchange perspectives, and identify potential areas for collaboration.

Supporting AI-Driven Research and Collaboration
Together, these events highlight the many ways researchers are beginning to engage with AI—from critically examining its limitations to experimenting with new analytical approaches.
By convening conversations like these, CSBS helps create opportunities for scholars to learn from one another, share emerging methods, and build connections across disciplines as the role of AI in research continues to evolve.
Revisit the Conversation
For those who were unable to attend—or who would like to revisit the discussion—recordings of both events are available on the CSBS website.
Watch the AI Methods Zoom Recording
Watch the Machine Learning Workshop Recording
Stay Connected with CSBS
The conversation around AI and emerging research methods is evolving quickly. Follow CSBS on LinkedIn for updates on upcoming events, funding opportunities, research insights, and highlights from the work of our interdisciplinary affiliate community.
Have an idea for a topic or speaker for the AI Methods Series? We’d love to hear from you—reach out to us at CSBScience@illinois.edu.
CSBS LinkedIn
Send a Topic or Speaker Idea