
JungHwan Yang is an assistant professor in the Department of Communication at the University of Illinois and a David F. Linowes Faculty Fellow at the Cline Center for Advanced Social Research. His research bridges political communication, data science, and computational social science, examining how people interact with political information in today’s rapidly evolving media landscape. His current projects analyze selective news coverage, viral social media polls, and the use of generative AI to study political content on platforms like TikTok.
In this interview, JungHwan discusses his research on political information in a fragmented media environment, focusing on media bias and public perception. He is working on studies of mass shooting coverage and the impact of viral social media polls on political opinion. He also shares his excitement about using generative AI to analyze political narratives on platforms like TikTok, viewing these tools as transformative for social science research.
What are your main research interests?
My research explores how people access, engage with, and respond to political information in today’s fragmented media environment. I am particularly interested in the life cycle of real-world events—how they become news, which ones are selectively covered or ignored, and how people consume those stories. Understanding this process reveals media bias and how it shapes the public’s understanding of politics.
To investigate this, I am currently working on a case study of mass shootings, supported by the Cline Center for Advanced Social Research. Although hundreds of mass shootings occur each year, only a small number receive national media attention. Even when they do, different outlets frame these events with varying emphases and biases, and audiences interpret them through their own filters.
This process plays a critical role in how individuals make sense of the world and form opinions about potential policy solutions. This project aims to uncover biases in both news production and audience consumption.
Another project, funded by the NSF, examines the political influence of social polls—informal, often viral polls shared on social media. Working as part of an interdisciplinary team of social scientists and computer scientists, we study whether these polls reflect genuine public opinion or create misleading signals that distort perceptions of political reality. Both projects contribute to my broader goal: understanding how information flows—or fails to flow—and how that shapes democratic life.
What are you most excited about in your research this year?
I am especially excited about how generative AI is opening up new possibilities for social science research. Thanks to large language models (LLMs) and other AI tools, we are now able to move beyond traditional content analysis to study more complex narrative structures in text, image, and video. Instead of simply identifying topics, we can examine how stories are framed, how people are portrayed, and what moral or emotional cues are embedded in the coverage.
In one of my ongoing projects, we are using generative AI in combination with human coders to analyze political TikTok videos during presidential campaigns. This work is especially timely, given that a growing number of young people now rely on TikTok as a primary source of political information. If this is where political learning is happening, we need to ask: what exactly are they learning? Generative AI enables us to answer this question at scale. I see this as a transformative moment for the field, and I am excited to help develop standards for using these tools responsibly and rigorously in social science.