This page holds hundreds of predictions and opinions expressed by experts who chose to remain anonymous when sharing remarks in a canvassing conducted from late June to early August 2022 by Elon University’s Imagining the Internet Center and Pew Research Center. The experts were asked to respond with their thoughts about the likely evolution of human agency and human decision-making as automated systems rapidly spread and evolve in the digital age.
Results released February 24, 2023 – Internet experts and highly engaged netizens participated in answering a survey fielded by Elon University and the Pew Internet Project from late June through early August of 2022. Some respondents chose to identify themselves; some remained anonymous. We share only the anonymous respondents’ written elaborations on this page. This page does not hold the full report, which includes analysis, research findings and methodology. Click here to read the full report.
In order, this page contains only: 1) the research question; 2) a brief outline of the most common themes found among both anonymous and credited experts’ remarks; 3) submissions from the respondents to this canvassing who preferred to remain anonymous.
Respondents were asked to share their answer to the following prompt and query:
Digital tools and human agency: Advances in the internet and online applications have allowed humans to vastly expand their capabilities, increased their capacity to tackle complex problems, allowed them to share and access knowledge nearly instantly, helped them become more efficient and amplified their personal and collective power to understand and shape their surroundings. Smart machines, bots and systems powered mostly by autonomous and artificial intelligence (AI) will continue those advances. As people more deeply embrace these technologies to augment, improve and streamline their lives, they are outsourcing some decision-making and autonomy to digital tools. That’s the issue we explore in this survey. Some worry that humans are going to turn the keys to nearly everything – including life-and-death decisions – over to technology. Some argue these systems will be designed in ways to better include human input on decisions, assuring that people remain in charge of the most relevant parts of their own lives and their own choices.
Our primary question: By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?
- Yes, by 2035 smart machines, bots and systems powered by artificial intelligence WILL be designed to allow humans to easily be in control of most tech-aided decision-making relevant to their lives.
- No, by 2035 smart machines, bots and systems powered by artificial intelligence WILL NOT be designed to allow humans to easily be in control over most tech-aided decision-making relevant to their lives.
Results for this question on the evolution of human-machine design and human agency by 2035:
- 56% of these experts said that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
- 44% said they hope or expect that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.
Follow-up qualitative question: Why do you think humans will or will not be in control of important decision-making in the year 2035? We invite you to consider addressing one or more of these related questions in your reply. When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? What key decisions will be mostly automated? What key decisions should require direct human input? How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society?
Click here to download the print version of the “Future of Human Agency” report
Click here to read the full “Future of Human Agency” report online
Click here to read credited responses to this research question
Common themes found among the experts’ qualitative responses:
- Powerful interests have little incentive to honor human agency – the dominant digital-intelligence tools and platforms the public depends upon are operated or influenced by powerful elites – both capitalist and authoritarian – that have little incentive to design them to allow individuals to exert more control over their tech-abetted daily activities.
- Humans value convenience and will continue to allow black-box systems to make decisions for them – people already allow invisible algorithms to influence and even sometimes “decide” many if not most aspects of their daily lives, and that won’t change.
- AI technology’s scope, complexity, cost and rapid evolution are just too confusing and overwhelming to enable users to assert agency – it is designed for centralized control, not personalized control. It is not easy to allow the kind of customization that would hand essential decision-making power to individuals. And these systems can be too opaque even to their creators to allow for individual interventions.
- Humans and tech always positively evolve – the natural evolution of humanity and its tools has always worked out to benefit most people most of the time, thus regulation of AI and tech companies, refined design ethics, newly developed social norms and a deepening of digital literacy will emerge.
- Businesses will protect human agency because the marketplace demands it – tech firms will develop tools and systems in ways that will enhance human agency in the future in order to stay useful to customers, to stay ahead of competitors and to assist the public and retain its trust.
- The future will feature both more and less human agency – tech will always allow a varying degree of human agency, depending upon its ownership, setting, uses and goals; some systems will allow more agency to easily be exercised by some people by 2035; some will not.
Responses from those preferring to make their remarks anonymous
Following is a large sample comprising a majority of the responses from survey participants who chose to remain anonymous; some are longer versions of expert responses that appear in shorter form in the official survey report. (Credited responses are published on a separate page.) The experts were asked: “By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives? Why or why not?”
Some respondents did not provide a written elaboration, responding only to the closed-ended yes-no question; they are not included here. The statements are listed in random order.
A well-known internet pioneer now working as a principal architect at one of the world’s leading software companies said, “The relationship between humans and machines will be largely guided by law. Just as autonomous vehicles have not progressed to widespread deployment as quickly as was initially thought, so will many other uses of machine learning be delayed. The basic problem is that making decisions brings with it liability and in most cases the software developers are not adequately compensated for that liability, which today cannot be insured against.”
A public policy professional at a major global AI initiative wrote, “In 2035 we will be allowed to have a degree of control over our tech-abetted decisions if we want to have it. But the value of that control may no longer seem important to us. As machines become better at predicting what we want and organizing our decisions for us, many of us are likely to find real value in their contributions to the decisions we make, and many people will simply defer to them.
“An imperfect analogy can be found in a hospital visit—when you visit the hospital you still remain in control, but with two clear limitations. The first is that the options set for you are generated by the hospital and doctors, based on knowledge that you lack and in such a way that it is hard for you to evaluate whether there are any other options you should be considering. The second is that your choice is limited to what some aspects of society have collectively decided will be made affordable to you—that experimental or expensive treatment may be available, but only if you foot the bill.
“Now, fast-forward to the use of powerful AI-based decision-making aids and you will see that you are in roughly the same situation: The options generated will be hard to second-guess, and there may well be a legal or regulatory assumption that if you deviate from some or all of them, well, then you may not be covered by your insurance. You can choose to drive yourself, but if you do you assume all liability for what happens next. You can decide among extremely complex investment strategies for your retirement—but you may not know how to generate your own options.
“And if we imagine that these tools are really good—which they have the potential to be—we should also note that there is another possibility: That we will want to follow the advice the tools provide us with. We are in control, but we have learned that the recommendations we get are usually very, very good—and so we simply choose to trust and follow them. We can still deviate from them if we want—we have, as Stanislaw Lem quipped, the free will to make slightly worse decisions. We are in control, but the value of that control has decreased radically.
“How may this then change human society? One thing is, of course, that we will need to ensure that we occasionally force people to make their own decisions or we will have created a dangerous vulnerability by which hijacking these systems could mean hijacking all of society. You could imagine a society with recommendation as a luxury good, where the wealthy use decision aids for 100% of their decisions, but the poorer will be able to afford them for only 40% or less. Completely autonomous control—in this society—will be a class marker.
“The rich will be able to afford high-resolution decision models of themselves, guaranteeing that they make better decisions than they themselves could make, while the poor will only be able to use very templatized recommendation engines.
“It is worth noting, by the way, that when we say that we are in ‘control’ of a decision today, that is hardly the case. We make decisions embedded in networks of technology, people and ideologies—and these determine our decisions today. Technology can be helpful in breaking through and going against some of the patterns and habits created in these networks, and perhaps optimize things for better decisions for everyone.”
An information security, risk, privacy and identity expert based in Texas said, “Designers are lazy and marketers are arrogant. Designing a complex system to be easy for non-expert users to understand and guide is very difficult, and most product designers will opt not to do it, falling back instead on simpler modes of operation for automated decision-making systems that don’t require human input. Product marketers will then overstate the advantages of these relatively limited applications of automation.”
An expert in economic forecasting and policy analysis for a leading energy consultancy commented, “Ever since the icon-controlled environment was adopted for computing technologies, vendors have prioritized usability over agency and widened what I referred to in my master’s thesis as the ‘real digital divide,’ the knowledge gap between skilled manufacturers (and fraudsters) and unskilled users. It is not merely application software that has become opaque and limited in its flexibility.
“Programming languages themselves often rely on legacy object libraries that programmers seldom try to understand and do not meaningfully control. It may be inefficient to build from first principles each time one sets out to write software, but the rapid development of the last several decades veers to the opposite extreme, exposing widely used applications to ‘black-box risk’—the inadvertent incorporation of vulnerabilities and/or functional deficits.
“Neural nets are even more opaque, with inscrutable underlying decision algorithms. It seems highly unlikely that today’s results-oriented practices will become more detail-focused with the advent of a technology that does not lend itself to detailed scrutiny.”
A researcher affiliated with the Future of Humanity Institute at the University of Oxford wrote, “Competitive pressures between firms, institutions and nation-states could lead to more key decisions becoming automated because AI systems will be more efficient at processing huge volumes of information.”
A civil liberties director for a foundation with global reach said, “Those who control these technologies—the gatekeepers of tech-aided decision-making—will have inadequate incentive to give people control over the decision-making processes. I don’t think that in 2035 robots and machines will be making important decisions without human control, but I doubt that the humans for whom the decisions are being made will be the ones fully controlling the process.”
A UK-based expert in social psychology and human communication responded, “Terry Gilliam’s movie ‘Brazil’ was quite prescient. Tools that emerge with a sociotechnocratic line from the behavioural sciences will ensure that control is not evenly distributed across society, and the control in question will probably be quite clumsy. And why aren’t political analysts up on this? Where are the political scientists and philosophers who should be helping us with this? Probably still mithering around about what someone said and meant in the 19th century.”
A U.S.-based designer expert in human-computer interfaces said, “The successes of widely adopted technologies have resulted in people already being much less ‘hands-on’ in understanding and analyzing their lives. Humans don’t seem to want to think for themselves; they seem to prefer allowing a black box to govern them. They are using fertility trackers, autonomous driving and automatic scheduling assistants to guide their daily activities.
“The key automations are probably purchasing (food/clothing/consumer item subscription services) and wellness (checking in on health statistics, scheduling appointments for doctors and exercise regimens).
“All this automation means people may be less aware of the cause-and-effect between habits and their own health. Instead of reasoning and experimenting among different choices they are making, they are given a ‘standard plan’ without needing to understand the science and components of wellness. They are also losing the ability to communicate about their own health choices, and outside-the-norm services may be missed or reduced. Further, as people take less active involvement in their own care and management they will be less educated on how best to care for their own welfare.”
A professor of political science based in the UK wrote, “The primary reasons I think these systems will not be designed to allow people to easily be in control over most tech-aided decision-making relevant to their lives are as follows: It is not clear where in the chain of automated decision-making humans can or would be expected to make decisions (does this include automated advertising auction markets, for example?). Allowing humans to be more in charge requires a new infrastructure to think about what sorts of control could be applied and how those would be realized. To the extent that some of these machines/bots/systems are for use by law enforcement or security, it is not clear that more choice would be allowed. (Paradoxically, this may be where people MOST want such ability.)
“Because the tech ethos remains ‘disruption’ and things move quickly, it is not clear how embedding more human/user control will work out, especially if users are given more choice. Giving them more choice also risks the demonetization of automated decisions.
“Ultimately, answering the complicated questions of when users should have more control, what type of control they should have, how this control would be exercised, and whether or how those who make such systems might be willing to acquiesce to giving users more control (especially if this must be forced through regulation) seems a tall order to achieve in just over 10 years. 2035 sounds far away, but there are a lot of hurdles to clear.”
A director of initiatives at a major global foundation focused on keeping communications networks open and accessible commented, “I expect that the majority of humans will not be in control of important decision-making in the year 2035. In addition to the fact that there is less profit for builders and managers of the tech if they work to support humans in understanding their options and exercising their agency:
- There appears to be a strong human tendency to give away agency to other entities, even if these entities are machines, especially when the interfaces undermining human agency are designed to be attractive and/or easier to use.
- A significant percentage of the population may not be concerned with exercising agency if the options given to them to help them manage or personalize their interactions with machines are either complex or already programmed to be close enough to what they would want in any case.
“I expect that tech-abetted autonomous decision-making will amplify the many forms of bias already existing, including around gender, racial and cultural characteristics, disability, class, age, and more. What would counter these social tendencies would be a strong public demand for systems that help maintain human agency through tech.”
An expert in cyberliteracy based in the U.S. Midwest predicted, “The Bezos, Musk, etc., types who have all the money and power will continue to operate their companies in ways that rely on algorithms that the average person does not see, never mind understand. Legislation will never catch up. Misinformation and disinformation will continue to be increasingly automated and thus divide people even further into information silos, all so these companies can continue to profit. Autonomous decision-making in other areas, such as medicine, language translation, manufacturing, the stock market and so forth, can be for the good only when augmented with a keen human eye to correct for errors. We just don’t have any well-functioning models of participatory design at a large scale, especially when it comes to AI and algorithmic decision-making.”
The CEO and lead strategist of a major global communications platform commented, “Well, first I think 2035 is too short-term. We’re only seeing, for example, the first autonomous cars working in very small geographic areas of the world, and they are not working well. It still feels quite experimental to me. Advances have been great over the last several years, but concepts such as AI are decades old, and it will still take quite some more time for the technology to mature.
“And second, humans will want to automate as many boring and repetitive tasks as possible. I’ve used driving as an example above, but it may be something way easier such as preparing the orange juice in the morning. I have serious doubts human nature will allow machines to take any serious life-and-death decisions any time soon. We don’t even know how decision-making processes work, as the transparency around the immense majority of them is non-existent.”
The executive director of a major U.S. digital policy center said, “I’m not confident that we will have grappled with the implications of reliance on machines adequately by 2035 to ensure that we prioritize human control over the machine-aided decisions that impact our lives.”
A retired senior lecturer in computer science and engineering commented, “The public lost control of their lives a long time ago. AI is just going to continue to make control by corporations and governments easier.”
An expert in information science wrote, “There are many instances of low-level, low-stakes decisions that people will turn over to algorithmic systems (search, navigation, image-tagging, recommendations) in ways that become routine and taken for granted. However, there are too many examples from banking, human resources, parole boards and other institutions that indicate the difficulty of handing over high-stakes decisions to these systems. Such a system can easily become a tool that can be used, intentionally or unintentionally, to deny access to customer service, jobs, health care, etc.
“We have not yet seen such systems designed in ways that do not reflect a large set of conscious and unconscious cognitive biases. Attempts to embed principles of fairness and transparency are in their infancy; explainable AI may eventually mitigate this problem somewhat, but not by 2035. Embedding social intelligence and/or commonsense reasoning in AI systems also seems a far-off goal.
“Considering the social life of algorithmic systems, there are already examples of their use for surveillance, hacking, phishing, deepfakes and war. They can generate disinformation at large scale, leading to a general distrust of the flow of information. They are being engineered in the social media domain to increase engagement, which, in a sense, removes some aspect of decision-making. The national security area, where command-and-control decisions might be handed over to AI systems, could be another domain where there is danger in automated decision-making.”
A UK-based expert on social media’s impact on culture and society said, “Well, they could be, but most developers of technology are not necessarily committed to ethical and responsible methods of development. Ideally, all decisions should have some kind of human input, oversight or direction—fairness, accountability and transparency should be built into tech design and development. The rollout of technology does not, on its own, change human society.”
A professor of computer science based in Canada said, “We interact with computers and AI systems within too many contexts to have all of these properly audited and controlled. The sum of many small and/or seemingly insignificant decisions suggested by our technology in the future will end up having larger unintended consequences on our daily lives. Humans should be in control of important decision-making, but without significant action by governing bodies and other regulations, this will not begin to happen, and even if effective governance might be adopted for some fraction of important decisions, it is unlikely to be universal.”
An open-access advocate and researcher based in South America wrote, “For design reasons, much of today’s technology—and future technology—will come with default configurations that cannot be changed by users. I don’t doubt that there will be more machines, bots and AI-driven systems by 2035, but I don’t think they will be equally distributed around the world. Nor do I believe that all people will have the same degree of decision-making power vis-à-vis the use of such technologies. Unfortunately, by 2035 the access gap will still be significant for at least 30% of the population, and the gap in the use and appropriation of digital technologies will be much wider. In this scenario, human decision-makers will be in the minority.
“Possibly the management and distribution of public and private goods and services is something that will be automated to optimize resources. Along these lines, direct human intervention is required to balance possible inequalities created by automation algorithms, to monitor possible biases present in the technology, and to create new monitoring indicators so that these systems do not generate further exclusion. Also, to make decisions that mitigate possible negative impacts on the exercise of human rights. It is important that pilots are carried out and impacts are evaluated before massive implementations are proposed. All stakeholders should be consulted and there should be periodic control and follow-up mechanisms.”
A top editor for an international online news organization wrote, “The true basis by which to answer this question is by asking, ‘In 2022, do smart machines, bots and systems powered by artificial intelligence allow people to easily be in control over most tech-aided decision-making relevant to their lives?’ While future innovations and popular attitudes toward them will not necessarily follow clearly predictable linear trends from those that exist today, this is the soundest means by which to address future possibilities.
“At present, many people on earth have already effectively outsourced—knowingly or unknowingly—their tech-aided decisions to these systems. Many of these people do not give extensive thought to the reality of their personal agency in such matters. In many cases this is because they do not fully understand such processes. Perhaps they have fully invested their faith in them, or they simply do not have the time or inclination to care. Save a most unlikely paramount event that causes society to radically reevaluate its relationship to these systems, there is no reason to conclude at present that these common prevailing attitudes will change in any revolutionary way.
“Moreover, the very nature of this question does not address the various unique situations that exist in different countries and societies. Indeed, the answer may be quite different if one were to compare the response of an urban resident living in the PRC with that of a rural resident living in the United States. The sheer litany of services offered through the Chinese platform Weibo is a salient example, as no fully equivalent platform exists in the West.
“For all intents and purposes, many people’s tech-aided decision-making is largely out of their control, or they do not know how to more capably direct such systems themselves. Many of the most critical tech-aided decisions in practice today do not lend themselves to clear control through the conscious agency of the individual. The way in which automated recurring billing is designed often does not clearly inform people that they have agreed to pay for a given service.
“Many people do not understand the impact of sharing their personal information or preferences, whether to set up algorithm-generated recommendations on streaming services based on their viewing behavior or to exchange text messages about upcoming doctor appointments. They may not know of their invariable sacrifice of personal privacy due to their use of verbally controlled user interfaces on smart devices, or of the fact that they are giving over control of their personal data when using any aspect of the internet.
“For better or worse, such trends are showing no clear signs of changing, and in all likelihood will not change over the span of the next 13 years. The sheer convenience these systems provide often does not invite deeper scrutiny. It is fair to say tech design often gives the appearance of such control, the reality of which is often dubious.”
An executive director expert in information systems said, “AI and ML are still not sufficient today to replace humans in all decision-making, nor will they be in 2035. Automation of mundane tasks will continue as well. But people do not like talking or interacting with machines. They still want a living human on the other side of the phone/email/chat. Yet much of the factory work we do can be moved to automation. This might actually save the next generation of workers and consumers, as we do not have enough employees now willing to work in factories to make the goods individuals want to buy.
“AI will not be completely trusted for probably another generation because the data being collected is faulty, biased, messy and missing some things. One of the only ways I see people trusting AI to do the job is to open up the data sources, much as science and research have done with Open Science.”
An author whose writing has focused on digital and post-digital humanity asked, “Is it clear that humans are in control even now? They are not in control on Wall Street, not in control over what they see on the internet, not in control piloting airplanes, not in control in interacting with customer service of corporate providers of everyday services, etc.
“Are we in a period of co-evolution with these systems and how long might that last? Humans do better with AI assistance. AI does better with human assistance. The word ‘automation’ sounds very 20th century. It is about configuring machines to do something that humans formerly did or figured out they could do better when assisted by the strength, precision or predictability of machines. Yet the more-profound applications of AI already seem to be moving toward the things that human beings might never think of doing.
“Could even the idea of ‘decisions’ eventually seem dated? Doesn’t adaptive learning operate much more based on tendencies, probabilities, continual refactorings, etc.? The point of coevolution is to coach, witness and selectively nourish these adaptations. By 2035 what are the prospects of something much more meta, possibly making Google seem as much an old-fashioned industry as Google once made Microsoft seem?
“This does not imply the looming technological singularity as popular doomsayers seem to expect. Instead, the drift is already on. Like a good butler, as they say, software anticipates needs and actions before you do. Thus, even the usability of everyday software might be unrecognizable to the expectations of 10 years ago. This is coevolution.
“Meanwhile Google is feeding and mining the proceedings of entire organizations. For instance, in my university, they own the mail, the calendars, the shared documents, the citation networks and ever more courseware. In other words, the university is no longer at the top of the knowledge food chain. No humans are at the top. They just provide the feed to the learning. The results tend to be useful. This, too, is coevolution.”
An internet systems consultant wrote, “I say that humans will not be in control of important decision-making in the year 2035 because this is already the trend even without a full and real introduction of AI, and I do not believe that greater introduction of AI will improve the situation for ordinary people.
“Large companies are dictating more and more decisions for consumers, generally favoring the companies’ interests rather than the consumers’ interests, whether or not those companies’ decisions are based on AI. Especially in areas of technology, their use of automatic remote firmware updates allows large companies to arbitrarily change the behavior of many products long after those products are initially purchased by consumers. And the lack of competition for many kinds of technology products exacerbates this problem.
“I do, however, believe that the introduction of AI in various aspects of product and service design, implementation and operation will likely make the choices still delegated to ordinary consumers less effective and less predictable for those consumers. For example, a consumer will have no idea whether their deliberate or accidental viewing of particular images online will be seen by an AI as evidence of possible criminal activity warranting an investigation by the government. Even if no prosecution ensues without a human in the loop, such triggers will have a chilling effect on human inquisitiveness.
“Of course, this problem exists even without AI in the loop, but the use of AI will tip the balance away from individual rights and toward increased automatic surveillance. Companies’ uses of AI to detect potential criminal activity may even be seen by courts to shield those companies from violating privacy protection laws.
“It’s also possible that courts will treat AI systems that estimate potential criminal activity as relatively unbiased agents that are less subject to remedial civil action when their estimations are in error.”
A professor of human-robot interaction at a university in Japan commented, “The master-slave relationship depends on the machine used. Personally, I believe the machine needs to remain in a subordinate role. The result of machine learning depends on the data. Results will vary depending on which data you train with. Everything should be decided by the person in the end. Machines should only provide information for judgment and offer choices. Making things convenient or easy means that human beings give up their responsibility.”
A professor of culture and communication based in Australia wrote, “This would demand huge social and political change, and runs counter to the powerful commercial operation built on data extraction that characterises the contemporary digital economy.”
An accomplished professor of computer science at a U.S. Ivy League university wrote, “In scenarios involving decision-making mediated by machine learning (ML), it’s hard to imagine that humans will have total agency in those interactions, barring significant advances in the field of explainable ML. Modern technological systems are extremely complex, so explaining how such a system works in a way that is complete, transparent and understandable to laypeople is hard. Lacking such explanations, laypeople will struggle to make fully informed decisions about how to use (or not use) technology.
“Absent regulation, many companies will also be reluctant to fully reveal how user data is collected, processed and shared with other companies. Laws like the GDPR are very helpful, but such laws need to be kept up to date with new advances in technology. With respect to ML technologies in particular, models like GPT-3 are already sufficiently complicated that engineers (let alone laypeople) struggle to fully understand how the models work.”
An expert on the sociology of communications technology responded, “Well, the question was whether humans will be in control of important decision-making for themselves via AI systems in the year 2035, and this will not be the case: either civilization will have collapsed (well, perhaps not too badly in Europe), or, if there are such AI systems, people won’t be in control of those systems in their own lives; giant capitalist corporations such as Google will be in charge of those systems.
“Pattie Maes thought this was going to be the case back in the 1990s, and her predictions were pretty good except she missed the power of capitalism (as is the case here, the capitalist corporations will be in control, not regular people).”
A professor of sociology said, “Although technology will be available to automate many tasks, people’s willingness to use this technology and their access to it will prevent its widespread adoption. People may be willing to allow machines to handle routine functions (vacuuming, lawn mowing, oil changes, manufacturing, etc.), but the human desire for control will prevent many tasks from being automated. People will want to see a human doctor whose diploma from medical school hangs on the wall. People will want to interact with a human attorney to plan their estate or resolve a legal difficulty. In addition, in many places in the developing world, AI technology will still be unavailable in 2035. The spread of technology depends on viable markets for the purchase and use of such services.”
An activist/voice of the people commented, “The relationship between humans and machines may involve humans who program machines that make decisions to manipulate the decisions of other humans. I.e., if an industry finds it is most profitable to lead consumers toward certain decisions, then automating processes to steer consumers toward those decisions, while making it more difficult to make less-profitable decisions, would be optimal for the industry. As AI and algorithms advance, whatever processes cause them to become more advanced are no longer in human control. Important decision-making should be left to humans if it affects their livelihood significantly and if it would be advantageous for AI to sway the decisions in a way geared toward manipulation or profit.”
An applications-design professional said, “I work with teams that create AI applications in areas such as cybersecurity. What I see is that the technology is almost completely incapable of collaborating with humans in any meaningful way. The ideal scenario would be one where the computer does complex analysis but then explains its analysis to end users, who can then make informed decisions about what they want to do with it. But this is NOT what I am seeing. What I see is that the analysis is so complex the computer is not able to explain its reasoning, nor is it able to provide meaningful ways for the human to coach it into better decisions.”
The co-founder of an award-winning nonprofit action network and consultancy wrote, “My 12-year-old son and virtually all his friends will not voluntarily leave their rooms or screens. They simply do not play outside or engage in free play. They are slaves to their screens. They are experiencing a vastly different and worse childhood than I experienced in the 1960s and ’70s. One of the many negative impacts is a distinct lack of initiative and agency. They don’t start anything. Almost all activities are scheduled by parents; otherwise they wouldn’t leave the house.
“They are experiencing a new form of totalitarianism. The significant level of control corporations gained over workers has been extended to consumers. This does not bode well for democracy. Everything online is made easy and addictive. Children and young adults have no idea what they are missing—they expect the real world to operate this way. But it doesn’t, or it certainly shouldn’t.
“Civilization and democracy require patience, grit, fortitude, long-term thinking, relationship skills and initiative. Online life does not foster these. As long as the profit motive rules the tech sector, this trend of decreasing agency for consumers and increasing power for tech companies will continue. Control equals profit, at least in the short to medium term. I am extremely pessimistic.”
A professor based in Europe said, “Long story short, if we look at the Metas, Googles and anti-democratic movements of this world today and extrapolate into the future, it seems reasonable to assume that tech will mainly be used instrumentally by powerful actors in both the business and political realms, which is to say, tech will likely be used to maintain power and domination instead of empowering individuals in ways that could be considered democratic. This has to do with power and other social dynamics, not with tech itself. We should stop talking about tech as if it were an actual actor.”
A futurist based in Europe commented, “When you say humans, I understand it to mean humanity as a whole, and I don’t think this will be the case. Those investing in the development of these technologies will make sure to remain on top. That said, AI will be capable of taking most decisions on its own.”
An anonymous respondent wrote, “Commercial interests will override legitimate interests and meaningful human democratic control. Large companies will allow only symbolic tokens of control. ‘Autonomous’ machine decision-making won’t be autonomous in the sense of independent, machine-perspective based, goal selection. It will be semi-autonomous, following goals set by global producers of AI technology. Most, if not all, key decisions with substantial financial consequences (including small ones at large scale) will be controlled by machines in the interest of big companies. It will lead to increased surveillance and (informational) control.”
An expert who has won honors as a distinguished AI researcher commented, “There is a lot of research on humans and AI, and it will produce results in a few years. Tech companies are interested in making products that people will buy, so there is more attention than ever in making software that interacts with humans. Many decisions are made by machines today without having to use any AI. That will continue. What will be left for humans to decide is not clear at this point.”
A research scientist expert in developing human-machine systems and machine common sense said, “I do think AI bots and systems will be ‘designed’ to allow people to be in control, but there will likely be many situations where humans will not understand how to be in control or they will choose not to take advantage of the opportunity. AI currently is not all that good at explaining its behavior, and user-interface design is often not as friendly as it should be. My answer is that people are already interacting with sophisticated machines—e.g., their cars—and they allow their phones to support more and more transactions. Those who already use technology to support many of their daily tasks may have more trust in the systems and develop deeper dependencies on the decisions and actions enabled by these systems.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? It could make people lazier and have less interest in understanding certain aspects of daily life. Also, if these services get disrupted or stop, people might have a hard time going back to doing things in a ‘manual’ way.”
An executive who works for the U.S. government wrote, “This is so far beyond my realm of imagination. I think people generally want to maintain control of anything that affects their well-being. Things that are mundane and repetitive can be automated, and the robot can make the decisions. On the other hand, it just occurred to me that there are ways AI can see a problem or a danger and avoid it—as when a car brakes because the driver is distracted, or a surgeon is warned before making the wrong cut—and in such cases humans would be grateful for the technology.
“I think humans will want to maintain control over life-or-death decisions, but we/they need to learn how to take advantage of what/how technology/AI can provide pertinent information to help humans make evidence-based decisions. I guess I think that’s what/how technology can/should do—provide whatever is needed so that the decisions that humans make, the decisions that humans keep for themselves, that these decisions are aided by evidence.”
An award-winning human-centered AI researcher who works with one of the top five global technology companies commented, “If you asked people in 1990 whether allowing Google to tell us how to get from every point A to every point B would be removing their agency, many would have said ‘yes,’ but now most would say ‘no.’ A similar analogy can be made for information retrieval/search. I am not a scholar of agency, but my guess is that it is about power. If people feel empowered by AI, they will feel they have agency. It’s subjective. What isn’t as subjective is whether the rewards from these empowerment tools more generally are being distributed equitably.”
An internet policy expert and activist based in West Africa wrote, “Emotional and health-related decisions should still be the priority of humans, so they should require humans to be in control of AIs.”
A distinguished researcher at IBM said, “Any key decision that should have a human in the loop can and will be designed to do that.”
An anonymous respondent commented, “In many ways, I think that Europe (the EU) will tackle the broad issue you set forth. They have taken the lead with GDPR, though they are walking it back a little today, and seem to be much more interested in the human part of the relationship. If the United States is smart, which I often doubt given Congress’s increasing inaction and lack of interest in big-picture public policy, it will follow the lead of the EU.
“As to human agency, I challenge the idea that the tug of war is between humans and technology. I believe the struggle is between humans of different tribes and individual humans’ capacity to be thoughtful. This isn’t about disagreeing with people, but rather the idea that, at least as far as I can tell, many don’t know how to reason well. Again, I am not challenging any one person or group. But how many people actually take the time to understand and reason out what they believe: the philosophical foundations, etc.? I totally understand why that is, but nonetheless, it is the foundation on which we operate as a society.”
An expert in digital technologies and education design responded, “Human decision-making is and always has been imperfect and biased, even if on an unconscious level. If the engineers and programmers are aware of potential biases going forward, it may be easier for them to design better systems that are able to take those biases into account. The key question is not whether automated systems can ever be unbiased; the question is whether automated systems can be less biased than humans. Nothing demonstrates more clearly than recent SCOTUS decisions that decision-making can be unfair, even at the (supposedly) highest levels.”
A distinguished professor of information studies at a major California technological university said, “It will further divide affluent Global North countries from disadvantaged nation-states. It will also take over many people’s driving, shopping, the ordering of consumer products. I see this to be most unfortunate all around.”
A tech entrepreneur whose work is to create open-source knowledge platforms commented, “I suspect that we are unlikely to have legal frameworks in place which are sufficient to support evolving and emerging case law in the context of robotic decision makers. I base that, in part, on the well-documented polarization of our political and social systems. On the theory that we are more likely to muddle along and not face complex and urgent problems in appropriate ways, we will not be ready for fully autonomous decision makers by 2035.
“I see gains in the capabilities of autonomous agents, as, for instance, in the self-driving transportation field; we have come a very long way since the early DARPA-funded experiments, but still, we see spectacular failures. The fact that an autopilot will be tasked to make moral decisions in the face of terrible alternatives in emergency situations remains a hot topic; legal frameworks to support that? By 2035? Perhaps, but it seems unlikely.
“Surgeons use robots to assist in surgery; robots are beginning to outperform radiologists in reading diagnostic images, and so the progress goes. By 2035, will hospitals be ready to surrender liability-laden decisions to robots? Will surgeons turn over a complex surgical procedure to a robot? Consider the notion of stents: which surgeon would give up a six-figure surgery for a four-figure operation? Not until students in med schools were trained to use them did stents penetrate surgery suites.
“The dimensionality of this question is far too high for any single mortal to see all of them; it’s good that this question is being posed to a wide audience.”
A senior research scientist at Google said, “It’s unclear to me how we can rely on full autonomy in any systems that lack commonsense knowledge. As you know, commonsense knowledge is the hardest nut to crack. I see no reason that we’ll have it in the next 10 years. Until then, letting robot systems have full autonomy will be a disaster. My prediction: There will be a few disasters when people release autonomous systems, which will then be banned.”
A professor of computer science at Carnegie Mellon University wrote, “I believe that the current work in AI and ethics will accelerate, such that important ethical considerations, such as human autonomy and transparency, will be incorporated into critical decision-making AI software, enabling humans to stay in control of the AI systems.”
A futurist and designer based in Europe commented, “I am not at all certain we will have AGI by 2035, so between now and that time all decisions will still be the consequence of the humans behind the design and operation of the technologies and those using them. Deceptive and/or poorly thought-through ways of using technology will persist throughout all of humanity’s time. With this in mind, how we as societies and civilizations allow humans to spread the use of these technologies and the gravity of repercussions when they are being misused will steer us towards the future that is only a consequence of consequences, etc.
“Any number of decisions that can be automated will be—the question I would ask concerns who is in charge of putting these causal structures into automation. If it is governments, we are likely to see greater ideological extremes; if companies, we will experience great as well as terrible things; and if this power is with individuals, we will all need to be persistently educated unless these systems are so intuitive to use that literally anyone will understand them.
“I believe an advanced AI should be able to assess if a person understands what it is they are handing over to automation, and if this is the case, I see very few boundaries for it beyond the making of life-and-death decisions. This being said, I would be surprised if automated military technology would not be in practice by then to some capacity.
“The biggest issue arising from the accelerated rollout will be economic distribution within and between societies. It is obvious that if you employ thousands of AI engineers today, you will develop IP that multiplies your revenue streams without any certainty that this multiplication will have any sort of general benefit for each person impacted by the same technology. If there were certainty concerning wealth distribution, there would be very little reason to fear the rollout beyond the likelihood of ever-more-complex scams.”
A former VP of a major U.S. technological university commented, “Machines will increasingly drive consumer-related decisions—marketing, product reviews, purchase comparisons, etc. This includes decisions related to medicine and health. The most automated decisions will be about entertainment choices (movies, music, books) and practical purchases (household products). There will be more-direct human input required on decisions related to medicine and health because of individual differences affecting consequences in practice or application. Broadened, accelerated rollout could make society more lemming-like than it already is.”
A professor at a leading U.S. technological university who is expert in network science and agent-based modeling predicted, “In 13 years, AI will not have advanced to the point where it will be bias-free, generalizable across contexts and sensitive to nuances in human decision-making. Hence, humans will not allow it to make all decisions. Further, there is currently a growing movement of anti-science sentiment and distrust in technology that will stall the acceptance of even well-structured technology. Further, many key decisions are based on opinion, not fact—which limits the utility of tech.
“The tasks that will be mostly automated are scheduling, financial reimbursement, payroll, routine manufacturing, replacement part manufacturing, tax filing, college admissions, HVAC regulation, large-scale farming, automated health testing/screening, early warning of disasters. Tasks that should require human input are life-and-death decisions, occupational choice, recreational choice, purchasing choice.
“More tech-abetted autonomous decision-making could lead to reduced bias in awards, more equitable distribution of resources, more time to avoid a disaster, better optimized production and transportation schedules, less time in the doctor’s office, more just-in-time education, automated restocking in warehouses and stores, improved supply chain management. But without the right safeguards it could lead to automated cyberattacks, machine freezes, autonomous vehicle crashes and so forth.”
A telecommunications policy expert wrote, “In 2035 humans will often be in control because people will want the option of being in control and thus products that offer the option of control will dominate in the market. That said, many routine tasks that can be automated will be. Key decisions that will be fully automated are 1) those that require rapid action and 2) those that are routine and boring or difficult to perform well.
“Indeed, we have such systems today. Many automobiles have collision-avoidance and road-departure-mitigation systems. We have long had anti-lock braking systems (ABS) on automobiles. I believe that ABS usually cannot be turned off. In contrast, vehicle stability assist (VSA) can often be turned off. Automobiles used to have chokes and manual transmissions. Chokes have been replaced by computers controlling fuel-injection systems. In the U.S., most cars have automatic transmissions. But many automatic transmissions now have an override that allows manual or semi-manual control of the transmission.
“This is an example of the market valuing the ability to override the automated function. The expansion of automated decision-making will improve efficiency and safety at the expense of creating occasional hard-to-use interfaces like automated telephone attendant trees.”
An anonymous respondent predicted, “In the next 10-15 years, we are likely to see a resurgence of regulation. In some cases, it will be the byproduct of an authoritarian government that wants control of technology and media. In other cases (especially in Europe), it will be the byproduct of governments that are increasingly anxious about the rise of authoritarianism and thus want to control technology and media.
“This regulation will, among other things, take the form of AI and related algorithms that produce predictable (although constrained) results. Humans will be in control, though in a way that skews the algorithms towards preferred results rather than what ‘the data’ would otherwise yield. Key decisions that will be automated would thus include news feeds, spam filters, and content moderation (each with some opportunity for human intervention).
“Other decisions that would be automated (as they often are today) include credit decisioning, commercial measurement of fraud risks, and targeted advertising. Some of these decisions should require direct human input, e.g., in order to correct anomalous or discriminatory results. That input will be applied inconsistently, though with regulators taking enforcement action to incent more breadth and rigor to such corrections.
“The effects on society will include less change, in some ways: existing power structures would be reinforced, and power could even be consolidated. In other ways, the effects will be to shift value from those who analyze data to those who collect and monetize data (including those who have collected and monetized the most). European efforts to dethrone or break up large U.S. platform companies will fail, because the best algorithms will be those with the best data.”
A researcher at a North American university said, “To date, humankind has shown an immense and innate capacity to turn over decision-making to others—religious leaders, political leaders, educational leaders, technology… even at the expense of the individual’s best interest or society’s best interest. It is unclear whether this choice has been made because it is easier, because it absolves the individual of responsibility, or for some other reason. While tech-guided decision-making could be extraordinarily beneficial, it is likely future tech-guided decision-making will further remove morality from decision-making. The responsibility will be on humans to ensure morality remains in the equation, whether as a component of tech-aided decision-making or as a complementary human element.”
The director of a U.S. military research group wrote, “2035 is likely to see a muddied (or muddled) relationship between technology and its human overlords. While humans will be (mostly) in control of decision-making using automated systems in 2035, the relationship between humans and automated systems will likely be mixed. Designers of systems will work to enable automated systems and early ‘AI’ to assist humans in everyday decisions.
“While some humans will likely adopt these systems, others may not. There is currently distrust of automated systems in some segments of society as evidenced by distrust of ballot-counting machines (and the associated movement to only count them by hand), distrust of automated driving algorithms (despite them having a better track record per driven mile than their human counterparts), etc.
“There are enough modern-day Luddites that some technologies will have to be tailored to this segment of the population. These technologies will enable absolutely full human control—but these technologies may also be only slightly more advanced than those of 2022. Thus, while humans will likely still be in charge of their decisions in 2035, it is also likely adoption of AI technologies will be uneven, and in some groups, avoided.”
The director of an institute for bioethics, law and policy said, “I think most humans would be very troubled by the prospect of machines making decisions over vital human interests like how health care or other societal goods are allocated. There will undoubtedly be pressure to grant greater decision-making responsibility to machines under the theory that machines are more objective, accurate and efficient. I hope that humans can resist this pressure from commercial and other sources, so that privacy, autonomy, equity and other values are not eroded or supplanted.”
A professor of political science said, “As a general matter, technological developments will move in that direction. But whether people use this technology is a different question. They will pick and choose what they use. They will still defer to tax and financial advisors and other experts who make technical decisions for them. I think those advisors or others whom they rely on will increasingly use the latest technology.”
A leader for a major global networked communications foundation said, “Most machines and applications are not designed based on diverse perspectives/experiences or with a diverse humanity in mind, and therefore are also not designed to allow for adjustments, changes based on users’ desires and contexts, or to allow humans to control them. The assumption that machines are well-designed is a flawed start. Until machines are designed to support and nurture humanity, they will not help decision-making or human agency. Instead, they will be used as tools to manipulate and/or influence decision-making.
“In a world where the majority are not able to fully comprehend machines, applications and any kind of digital system, especially those that are designed to take advantage of personal data, humans will become an even greater target of machine/system abuses and violations of human rights, including the right to know and understand how to interact with machines and such.”
An anonymous respondent wrote, “Anything safety-critical will be controllable by AI by 2035. Medical treatments, transportation and construction will require humans in the loop for the foreseeable future, not least because the legal theories of liability will not have evolved sufficiently (and that because the technology will also not have proven itself trustworthy enough).”
An anonymous respondent commented, “People fear AI so they will be slow to give power over to it.”
An anonymous respondent wrote, “Too many autocratic politicians and self-serving bureaucracies have learned how to limit or control the autonomous decision-making of the rest of us.”
An anonymous respondent said, “Much of our online work will be driven by algorithms that are unseen. Individuals who are not knowledgeable about computer science will not even know they are being manipulated, and those of us who have some knowledge of science and technology will know about the situation but feel powerless to subvert the machines.”
An anonymous respondent wrote, “People today can’t even control their own TikTok feed ranking algorithm. People can’t control what their Roku devices recommend they look at, and they can’t control the Google search algorithm. If the question, instead, was ‘Will any human have control over large AI systems?’ then the question becomes interesting again. The thing is, I think corporations mostly are automated agents already. And they actually were way back in the 1970s and ’80s, before computers really took off.
“Large corporations are entities with minds of their own, with desires and ethics independent of any human working for them (including the CEO). Mostly, I am worried more about monopolization and market power than AI having a mind of its own. The problem of ‘Amazon controls the AI that runs your world’ is Amazon, not the AI.”
An anonymous respondent said, “It will take longer than that until bots, machines and so on are this far along, and it will take even longer until the majority of people actually exercise any type of control over these tools. I assume there will be improvements for organizing daily life (shopping, travel, smart homes, fitness), and there will be people in certain industries whose decisions are more and more supported (but not made) by algorithms. Especially when it comes to consequential decision-making, the human still needs the agency to make the final decision.”
An anonymous respondent wrote, “Decision trees are too complex, and the contingencies to consider too many, to trust automated mechanisms as truly transformative tools in medical diagnosis, education or justice. There will only be incremental advances in terms of daily routines. The areas of most progress can be anticipated to be transportation, industrial processes and the like.”
If you wish to read the full survey report online, with analysis, click here.
To read for-credit survey participants’ responses with no analysis, click here.
To download the print version of the report, please click here.