In recent months, the assistant professor of communication design has published and presented research examining how to communicate algorithmic bias to users through training data, how patients view their interactions with AI doctors that individuate them, and how users perceive autoplay features on video platforms.
Cheng “Chris” Chen, who regularly publishes on topics relating to the psychology of communication technologies such as social media, AI and generative AI, expanded on her academic research this summer with two newly published articles and an in-person presentation at the 2024 International Communication Association Conference in Gold Coast, Australia.
The assistant professor of communication design recently published an article, titled “Preventing users from going down rabbit holes of extreme video content,” in the International Journal of Human-Computer Studies, a peer-reviewed academic journal that examines the design and use of interactive computer technology. In collaboration with three co-researchers, Chen investigated what autoplay is and how it influences users’ perceptions of going down a rabbit hole. Interestingly, it was Chen’s own experience that sparked the research idea.
“As an active YouTube user, I have observed that the videos recommended to me often tend to become increasingly extreme,” Chen said. “This observation has led me to question whether the perception of descending into a ‘rabbit hole’ is connected to YouTube’s default autoplay feature.”
The study highlights that autoplay is not just a passive experience: it offers both automation and interactivity (i.e., users can toggle the feature on and off). Engaging with either aspect can influence rabbit hole perception through different mechanisms and under certain conditions, Chen explained.
Building upon previous research on individuation by AI doctors, Chen and three co-researchers published “When an AI Doctor Gets Personal: The Effects of Social and Medical Individuation in Encounters With Human and AI Doctors” in Communication Research, a top-tier journal in the field of communication. The new research explores the conditions under which individuation by AI doctors is perceived as either beneficial or detrimental.
According to Chen, one of the more interesting and counterintuitive findings from the study is that patients seem to perceive AI doctors as putting in more effort when they recall social information during a second medical encounter. But the professor points out that, from the AI’s perspective, social information is just another piece of data and requires no extra effort to memorize.
Lastly, Chen headed to Australia’s east coast in June for the 74th Annual International Communication Association (ICA) Conference. One of the leading associations in the communication discipline, ICA invited scholars from across the globe to examine this year’s theme: Communication and Global Human Rights.
Chen presented a paper investigating racial bias in AI, exploring how to communicate algorithmic bias to lay users through training data.
“We found that when the training sample shows ‘Happy Whites’ and ‘Unhappy Blacks,’ users perceived it as just as biased as a sample with a balanced racial distribution across emotional categories,” Chen said. “This suggests that users struggle to recognize race as a confounding factor in a snapshot of training data. Designers should consider these cognitive limitations when communicating algorithmic bias through training data.”
ICA aims to advance the scholarly study of communication by encouraging and facilitating excellence in academic research worldwide. The association began more than 50 years ago as a small association of U.S. researchers and now has more than 5,000 members in over 80 countries.