Politics in the time of AI

Lee Rainie, director of Elon’s Imagining the Digital Future Center, discusses the role of artificial intelligence in the country’s political and civil life.

The growing prevalence and capabilities of artificial intelligence have many people wondering how these expanding technologies will impact this year’s elections. In the spring, Elon’s Imagining the Digital Future Center partnered with the Elon University Poll to conduct a national public opinion survey and gather perspectives from a wide range of experts about the role AI could play as voters make their decisions and head to the polls in November.

Lee Rainie, former director of the Pew Research Center’s Internet and Technology research and now leader of the Elon center, has been tracking opinions about the impact of technology on society. He shared with The Magazine of Elon what the center found in its survey and how voters can stay aware and informed heading into Election Day.

American voters have grown increasingly worried about the role of misinformation in elections over the past decade. How do voters think the advancing capabilities and prevalence of AI will factor into the election process this year?

Our survey found that Americans are most concerned about misinformation and disinformation and how AI systems can be used to create convincing fake material that will impact the election. Specifically,

  • 73% think it is “very” or “somewhat” likely AI will be used to manipulate social media in ways that might distort voters’ impressions about what is taking place in the campaign.
  • 70% say it is likely the election will be affected by the use of AI to generate fake information, video and audio material.
  • 62% say it is likely the election will be swayed by the targeted use of AI to convince voters not to vote.

Overall, more than three-quarters of U.S. adults believe at least one of these three abuses of AI will affect the election outcome, and about half think all three kinds of abuse are likely to occur. Americans just have this foreboding that the civic and political information environment is already awful and will get worse when these tools are in the hands of the wrong people. To compound the problem, many Americans admit they themselves cannot distinguish faked material from the real thing, and a big majority — 69% — say they are not confident their fellow citizens can detect faked videos and audio.

How does the insertion of artificial intelligence into politics compare to how the electoral process has been impacted by previous technological advances, such as television and social media? What are the similarities? The differences?

There have long been concerns about the impact of media on the political process, stretching back to the yellow journalism of the 19th century. There was a lot of hand-wringing in the mid-20th century about propaganda artists using radio and TV to manipulate voters. In 2016, of course, there was a lot of attention to the way bad actors could misuse social media to affect voters’ views.

The concern about AI systems now is that they can compound all the abuses of social media. Voters know it is pretty easy to generate text, video and audio that look quite real, and to do it at an unprecedented scale. At the same time, this kind of material can be micro-targeted to reach specific groups of vulnerable or persuadable citizens.

Do you believe there are attributes of generative AI and other AI tools that will help voters sift through information this election season more effectively or efficiently? Do you think this will change where voters go to get their information?

Under optimal circumstances, AI systems can be used to do such things as compare candidate positions on issues, get background information on candidates and learn what’s happening in the campaign. The tools can also be used by voters to learn how to register to vote and where to vote.

The big question is whether the systems are reliable enough and up to date enough to be helpful. There is an “arms race” dynamic that will play out here. Good guys are already using AI tools to detect and counteract the problems created with AI by bad guys. They are set up to spot where foreign actors and troll armies are trying to use AI to stoke division and confusion among Americans.

Still, clever and malevolent actors have shown they can find ways to spread misinformation and promote strife. So, there are ongoing questions about whether people will adjust their norms and behaviors to make sure they find high-quality news and information about politics and civic life.

In the long run, there’s a very strong chance that people will come to use language models and other AIs to get information on all kinds of subjects, ranging from politics and civic information to crucial health-related material to business-related intelligence to childcare and eldercare tips. The experts we canvass have plenty of worries about the spread of AIs. Yet, many also think the AI systems will get better over time at meeting the information needs of citizens.

The survey found support for punishing candidates who use fake photos, video or audio. Do you think it will be possible to police such malicious use of fake information and to hold candidates accountable?

Voters want wrongdoers punished, and governments are already creating laws that would ban candidates from the ballot or from office if they are found to have used fake photos, video or audio maliciously.

I think it’s possible to police the known, formal actors in politics to catch those abusing AI — that would be candidates, political party officials, political action committees, even well-known activists and advocates. The thing that is much harder to police is the subterranean communications of those without official and obvious political titles — the trolls who pass along material on social media, websites and communication apps that don’t necessarily look “political.”

What guidance can you offer for voters who want to be active in ensuring the quality of the information they are consuming as they decide how to cast their ballots? What role can the media play in helping hold campaigns and bad actors accountable?

Voters would be well advised to be on guard as they encounter political information this year. Of course, the best way to be confident about what you’re encountering is to get the material from a trusted news source, preferably one that has been around for a while. Another strategy is to be highly skeptical of all material coming from anonymous or pseudonymous sources. And the time-tested strategies for stress-testing information are to consult multiple sources and sources with differing points of view.

How do you hope the findings from the center’s survey on AI and politics will impact opinions and policies around the use of AI during the electoral process?

We hope studies like this do several things. First, we hope they tell people something new and revealing. For us, enlightenment is the most important job. Second, we want people to appreciate that surveys like this capture the views of all citizens — and that the differences between groups that we surface are important to understand and explore. Not everyone has the same experiences and perspectives, and these pluralistic differences are a fundamental part of democracy. Third, we hope we do things that have relevance to policymakers and other stakeholders, such as civil society actors, technology company leaders and engaged citizens.

We hope the grand outcome of research like this is that we spark debate and discussion about the findings, and we help people understand the implications of the findings for their lives and their communities. At the Imagining the Digital Future Center, we think of ourselves as explorers of the future and mapmakers of previously uncharted landscapes.

See the full survey results