Experts weigh in on the control people will retain in the age of artificial intelligence.
Experts are split about how much control people will retain over essential decision-making as digital systems and AI become more ubiquitous. They agree that powerful corporate and government authorities will expand the role of AI in people’s daily lives in useful ways. But many worry these systems will diminish individuals’ ability to control their choices, according to a new report from Elon University’s Imagining the Internet Center and the Pew Research Center.
This report is part of a long-running series about the future of the internet and is based on a nonscientific canvassing of technology innovators, developers, business and policy leaders, researchers and activists who were asked to consider the future of human agency. In all, 540 respondents shared their views: 56% agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of key decision-making, while 44% said such systems will allow humans to be in control of essential decision-making.
“These experts largely agree that digital technology tools will increasingly become an integral part of people’s decision-making,” said Janna Quitney Anderson, professor of communications and director of Imagining the Internet. “The tools will provide ever-larger volumes of information to people that, at minimum, will assist them in exploring choices and tapping into expertise as they navigate the world. At the same time, many of these experts said the future of these technologies will have both positive and negative consequences for human agency.”
This canvassing took place between June 28 and Aug. 22, 2022, before the release of major new AI applications built on large language models, including ChatGPT, Bard and the new Bing search engine. Still, many of the answers assumed that generative systems like those will be well-embedded in daily life in the next decade. The report itself contains an answer generated by ChatGPT; it argued that AI systems will be designed in ways to give humans control because “this type of user control is becoming increasingly important as AI is integrated into more aspects of our lives and decision-making.”
The experts replying to this canvassing sounded several broad themes in their answers. Those who said that evolving digital systems will not be designed to allow humans to easily be in control of most tech-aided decision-making shared thoughts along these main themes:
- Powerful interests have little incentive to honor human agency: The dominant digital-intelligence tools and platforms the public depends upon are operated or influenced by powerful elites – both capitalist and authoritarian – that have little incentive to design them to allow individuals to exert more control over their tech-abetted daily activities. One result of this could be a broadening of the digital divide.
- Humans value convenience and will continue to allow black-box systems to make decisions for them: People already allow invisible algorithms to influence and even sometimes “decide” many, if not most, aspects of their daily lives – that won’t change. In addition, when they have been given an opportunity to exercise some control over their tech tools and activities, most have not opted to do so.
- AI technology’s scope, complexity, cost and rapid evolution are just too confusing and overwhelming to enable users to assert agency: AI systems are designed for centralized control, not personalized control. It is not easy to allow the kind of customization that would hand essential decision-making power to individuals. And these systems can be too opaque even to their creators to allow for individual interventions.
Several main themes also emerged among those who said that evolving digital systems will be designed to allow humans to easily be in control of most tech-aided decision-making, including:
- Humans and tech always positively evolve: The natural evolution of humanity and its tools and systems has always worked out to benefit most people most of the time. Regulation of AI and tech companies, refined design ethics, newly developed social norms and a deepening of digital literacy will emerge.
- Businesses will protect human agency because the marketplace demands it: Tech firms will develop tools and systems in ways that will enhance human agency in the future in order to stay useful to customers, to stay ahead of competitors and to assist the public and retain its trust.
- The future will feature both more and less human agency, and some advantages will be clear: The reality is that there will always be a varying degree of human agency allowed by tech, depending upon its ownership, setting, uses and goals. Some digital tech will be built to allow for more agency to easily be exercised by some people by 2035; some will not.
Among the many intriguing predictions from those canvassed:
- Paul Saffo warned that it is likely that in the future, “those who manage our synthetic intelligences will grant you just enough agency to keep you from noticing your captivity.”
- Gary Grossman worriedly predicted that humans will increasingly live their lives on autopilot. “The positive feedback loop presented by algorithms regurgitating our desires and preferences contributes to information bubbles, reinforcing existing views, making us less open to different points of view, and it turns us into people we did not consciously intend to be.”
- Marcus Foth said that, considering the many problems humanity and the planet are facing, “having the humans who are in control now not being in control of decision-making in 2035 is absolutely a good thing that we should aspire toward.”
- Jamais Cascio shared several compelling 2035 scenarios, ranging from humans benefiting greatly from “machines of loving grace” to a digital dictatorship that might even include “a full digital duplication of a notorious authoritarian leader of years past.”
- Russ White predicted, “Humans could lose the ability to make decisions, eventually becoming domesticated and under the control of a much smaller group of humans.”
- Andre Brock said future automated decision-making will be further “tuned to the profit/governance models of extraction and exploitation integrated into legal mechanisms for enhancing the profits of large corporations.”
- Maggie Jackson predicted that soon, “Human agency could be seriously limited by increasingly powerful intelligences other than our own due to humans’ innate weakness.”
- Alf Rehn wrote that if things play out well, algorithms can be as considerate of human needs as they are wise. “We need AIs that are less ‘Minority Report’ and more of a sage uncle, less decision-makers than they are reminders of what might be and what might go wrong.”
- Sara M. Watson said in 2035 technology should “prioritize collective and individual human interests above all else, in systems optimized to maximize for the democratically recognized values of dignity, care, well-being, justice, equity, inclusion and collective- and self-determination.”
- Gillian Hadfield optimistically declared, “Democracy is ultimately more stable than autocratic governance. That’s why powerful machines in 2035 will be built to integrate into and reflect democratic principles, not destroy them.”
- Neil Davies commented, “One of the enduring problems of widescale, ubiquitous, autonomous systems is that mistakes get buried and failures aren’t shared; these things are prerequisites for people to learn from.”
- Claudia L’Amoreaux said the digital divide will widen, “creating two distinct classes with a huge gap between a techno-savvy class and a techno-naive class. Techno-naive humans are easily duped and taken advantage of – for their data, for their eyeballs and engagement metrics and for political gain by the unscrupulous groups among the techno-savvy.”
- Jim Dator spelled out new contours of human agency, identity and intelligence, arguing, “Humanity can no longer be considered to be the measure of all things, the crown of creation. We are participants in an eternal evolutionary waltz that enabled us to strut and fret upon the Holocene stage.”
The full report features a selection of the most comprehensive overarching responses shared by the 540 thought leaders participating in the nonrandom sample, including Avi Bar-Zeev, an AR, VR and MR pioneer who has developed the technology at Microsoft, Apple, Amazon, Google and more; danah boyd, founder of the Data & Society Research Institute and principal researcher at Microsoft; Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation; Cathy Cavanaugh, chief technology officer at the University of Florida Lastinger Center for Learning; Vint Cerf, vice president and chief internet evangelist at Google; Barry Chudakov, founder and principal at Sertain Research; Moira de Roche, chair of the International Federation for Information Processing; Amali De Silva-Mitchell, founder/coordinator of the IGF Dynamic Coalition on Data-Driven Health Technologies; Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada; Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner; Gary Grossman, senior vice president and global lead of the Edelman AI Center for Excellence; Gus Hosein, executive director of Privacy International; Maggie Jackson, award-winning journalist, social critic and author; Jim Kennedy, senior vice president for strategy at The Associated Press; Chris Labash, associate professor of communication and innovation at Carnegie Mellon University; John Laudun, professor of social information systems at the U.S. Army Combined Arms Center; Mike Liebhold, distinguished fellow, retired, at The Institute for the Future; Leah Lievrouw, professor of information studies at UCLA; Greg Lindsay, non-resident senior fellow at the Atlantic Council’s Scowcroft Strategy Initiative; J. Nathan Matias, leader of the Citizens and Technology Lab at Cornell University; Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction; Sean McGregor, technical lead for the IBM Watson AI XPRIZE and machine learning architect at Syntiant; Monique Jeanne Morrow, senior distinguished architect for emerging technologies at Syniverse; Mike Nelson, director of the Carnegie Endowment’s technology and international affairs program; Ojelanki Ngwenyama, professor of global management and director of the Institute for Innovation and Technology Management at Toronto Metropolitan University; Raymond Perrault, a distinguished computer scientist at SRI International and director of its AI Center from 1988 to 2017; Andre Popov, principal software engineer at Microsoft; Marc Rotenberg, founder and president of the Center for AI and Digital Policy; Douglas Rushkoff, digital theorist and host of NPR’s “Team Human”; Paul Saffo, well-known Silicon Valley-based futurist; Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE; Doc Searls, internet pioneer and co-founder and board member at Customer Commons; Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI”; Marija Slavkovik, professor of information science and AI, University of Bergen, Norway; Nrupesh Soni, founder and owner of Facilit8, a digital agency located in Namibia; Brad Templeton, internet pioneer, futurist and activist, chair emeritus of the Electronic Frontier Foundation; and David Weinberger, senior researcher at Harvard’s Berkman Klein Center for Internet & Society.