Political science expert explores using AI to talk across the aisle

In an Active Citizen Series lecture, Duke University Professor of Sociology, Political Science and Public Policy Christopher Bail discussed how to use ChatGPT to improve the productivity of our political conversations.

In his lecture, “Bridging Political Divides with Artificial Intelligence,” Bail focused on how a large language model, specifically ChatGPT, can be used for conflict mediation to make political conversations more productive, though not more persuasive.

The Sept. 18 lecture in the Moseley Center was part of the Active Citizen Series, a collection of events at Elon designed to cultivate the next generation of informed leaders who help strengthen our communities and shape democracy. Elon is hosting several nonpartisan events ahead of the 2024 election.

Chris Bail, professor of sociology, political science and public policy at Duke University.

Bail described the American political atmosphere as similar to a couple who are having difficulty communicating and decide to go to a marriage counselor.

“One of the first things they tell you [in marriage counseling] is people talk past each other…we misunderstand each other because we’re just not connecting, we’re just not able to see the world through the eyes of other people. It is a distinctly human problem,” said Bail, who is the founder of the Polarization Lab and author of “Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing.”

Chat alternatives 

In Bail’s study, a group of people consented to have a conversation with a partner who held a different political view from their own. A politically left-leaning person described guns as a “stain on democracy,” the politically right-leaning individual responded that “guns are protecting democracy,” and by the end of the discussion one participant had called the other “incredibly naive.” After this first trial, two to three people were selected and given the choice of using Bail’s AI chat assistant technology.

The chat assistant offered alternative phrasings to the author of a message and allowed them to accept or edit the suggestions. Instead of a user responding to their partner’s differing opinion with an insult, the assistant gave the user the choice to rephrase the message to make the conversation more productive.
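Bail did not walk through the implementation, but a minimal sketch of such a suggestion step might look like the following. It assumes the OpenAI Python SDK’s chat completions interface; the model name, prompt wording and function name are illustrative, not details from the study.

```python
# Hypothetical sketch of a chat-assistant suggestion step (not Bail's actual code).
# Assumes the OpenAI Python SDK; model choice and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_rephrasing(draft: str, partner_message: str) -> str:
    """Ask the model for a more productive rephrasing of a draft reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rephrase the user's draft reply so it engages the "
                    "partner's point respectfully without changing the "
                    "user's position. Return only the rephrased reply."
                ),
            },
            {
                "role": "user",
                "content": f"Partner said: {partner_message}\nDraft reply: {draft}",
            },
        ],
    )
    return response.choices[0].message.content


draft = "That's incredibly naive."
suggestion = suggest_rephrasing(draft, "Guns are protecting democracy.")
print("Suggested alternative:", suggestion)
# The author may accept the suggestion, edit it, or send the original draft.
```

The key design choice is that the model only proposes; the human author decides what is actually sent.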

Bail and his team hypothesized that the AI chat assistant would increase conversation quality and democratic reciprocity but would not change participants’ minds on policy issues. At the end of the study, Bail found that the person using the chat assistant was often the one who changed their mind, because the technology prompted users to take the perspective of their partner. By stepping into their partner’s shoes, users gained a better understanding of the other’s opinion and the reasoning behind it.

“We don’t need AI to do politics for us. But wouldn’t it be great if it could teach us to speak in a way that makes our arguments more effective?” Bail said.

The experiment tested democratic reciprocity: despite differing opinions and views on gun control, participants who used the chat assistant or received its suggested messages were more inclined to have tough conversations in the future.

Duke University Professor of Sociology, Political Science and Public Policy Chris Bail discusses his experiment with ChatGPT and political discourse during an Active Citizen Series lecture on Sept. 18 in the Moseley Center.

Your new neighbor: AI

Nextdoor, the neighborhood community forum app, reached out to Bail and his team about applying their AI research to the app. The company was having difficulty with users violating its community messaging guidelines, and Bail cited research noting that many people had witnessed or experienced negative interactions on Nextdoor.

With Bail’s assistance, Nextdoor put its “Kind replies make a difference” initiative in place. An AI chat assistant flags messages that violate Nextdoor’s community guidelines and recommends rephrased versions for the user to share instead. The author can edit the comment, accept a suggestion, or send the flagged message unchanged. There is no forced censoring, just thoughtful suggestions to limit verbal conflict.
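Nextdoor has not published its implementation, but the flag-and-suggest flow described above can be sketched in outline. Everything here is hypothetical: violates_guidelines and suggest_rephrasing are stand-ins for whatever classifier and rewriting model the platform actually uses.

```python
# Hypothetical sketch of a flag-and-suggest moderation flow (Python 3.10+).
from dataclasses import dataclass


@dataclass
class Decision:
    flagged: bool
    suggestion: str | None  # offered alternative, if any


def violates_guidelines(comment: str) -> bool:
    # Placeholder: a real system would call a trained moderation classifier.
    hostile_markers = ("idiot", "naive", "shut up")
    return any(marker in comment.lower() for marker in hostile_markers)


def suggest_rephrasing(comment: str) -> str:
    # Placeholder: a real system would ask an LLM for a civil rewrite.
    return "I see this differently; here's why..."


def review_comment(comment: str) -> Decision:
    """Flag a guideline-violating comment and attach a kinder alternative.

    The comment is never blocked: the author chooses to accept the
    suggestion, edit the comment, or post the original as written.
    """
    if not violates_guidelines(comment):
        return Decision(flagged=False, suggestion=None)
    return Decision(flagged=True, suggestion=suggest_rephrasing(comment))


decision = review_comment("Only an idiot would think that.")
if decision.flagged:
    print("Heads up: this may violate community guidelines.")
    print("Suggested rephrasing:", decision.suggestion)
    # The author may still post the original unchanged; nothing is censored.
```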

“I was pretty surprised when in a transparency report last year, Nextdoor reported that this kindness intervention reduced the creation of content that violated its community guidelines by 15%,” Bail said.

This technology, and Bail’s research, also assisted the development of a new company, Temper, which provides the chat assistant technology to nonprofit organizations in the service of social good.

During the audience question-and-answer session, Bail addressed the downsides of this technology, specifically a Stanford University sociologist’s research on a content moderation tool that disproportionately flagged comments from African American users. However, Bail defended his position, pointing to the potential of ChatGPT to bridge the divide in American politics by changing the way we speak to the “other side.”