Research by the assistant professor of communication design is featured in Human-Computer Interaction, a leading publication for research on the design, evaluation and implementation of interactive computing systems for human use.
Chris Chen’s sixth publication this year has a personal angle.
The assistant professor of communication design recently published her dissertation study, examining how racial diversity cues and user feedback shape trust and fairness perceptions in AI systems. The study, titled “Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust,” was published in Human-Computer Interaction, one of the leading research journals in the field.
In collaboration with coauthor S. Shyam Sundar of Penn State University, Chen investigated whether showing users the racial distribution of an AI system’s training data and the backgrounds of those who labeled the data can influence their expectations of fairness and their trust in the system. The study also examined how these perceptions change when users are invited to provide feedback.
“I initially became interested in the topic of algorithmic bias due to my personal experiences with AI systems that failed to recognize my face and speech accurately – likely because I am an Asian woman,” Chen said. “It’s often only after a negative experience that I realize the AI system is biased. This led me to wonder how we can better communicate algorithmic bias to lay users before they engage with AI systems, and what strategies can empower users to combat such bias.”
Using an experiment with 597 participants, the coauthors tested theories on how algorithmic attributes influence users’ perceptions of and engagement with AI, drawing on the model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME).
Their findings suggest that when users see a racial diversity cue in either the training dataset or the labelers’ backgrounds, they are more likely to expect fairness from the AI and to trust it, as they associate diversity with fairer outcomes. Additionally, when users are invited to give feedback, they feel a greater sense of agency and thereby trust the AI more, although this approach can reduce perceived ease of use for White participants when the AI is unbiased.
“Initially, we hypothesized that soliciting user feedback would increase users’ intention to adopt AI systems, especially when the systems show biased performance as it creates expectation of an improved AI system,” Chen said. “However, our findings revealed that providing an option for feedback did not significantly change users’ behavioral trust in AI when it displayed bias.
“Surprisingly, when an AI system performed without bias but still requested feedback, users showed lower intention to adopt the system in the future. This effect was particularly evident among White users, who perceived feedback requests as a sign that the system was less useful, thereby lowering their trust. This finding suggests that designers should carefully consider when to ask for user feedback. If the system performs well without bias, especially in predominantly White user groups, requesting feedback may actually harm trust rather than build it.”
A prolific researcher since arriving at Elon in fall 2022, Chen has investigated topics relating to the psychology of communication technologies, such as social media, AI and generative AI. Last month, she earned a best paper award at the Second International Symposium on Trustworthy Autonomous Systems in Austin, Texas.
Chen said it was gratifying to see her research published in Human-Computer Interaction, calling it a validation of the hard work she and Sundar put into the study.
“Publishing this work in a top-tier journal means a great deal both personally and professionally,” she said. “It not only increases the visibility of our research, potentially attracting scholars with similar interests to exchange ideas, but it also serves as recognition of our efforts. We worked tirelessly through every stage of the research process, and I am incredibly proud that our rigorous work is being acknowledged by such a prestigious journal.”