This page holds hundreds of predictions and opinions expressed by experts who agreed to have their comments credited in a canvassing conducted from late June to early August 2022 by Elon University’s Imagining the Internet Center and Pew Research Center. These experts were asked to respond with their thoughts about the likely evolution of human agency and human decision-making as automated systems rapidly evolve in the digital age.
Results released February 24, 2023 – Internet experts and highly engaged netizens participated in answering a survey fielded by Elon University and the Pew Internet Project from late June through early August 2022. Some respondents chose to identify themselves, some chose to be anonymous. We share the for-credit respondents’ written elaborations on this page. Workplaces are attributed for the purpose of indicating a level of expertise; statements reflect personal views. This page does not hold the full report, which includes analysis, research findings and methodology. Click here to read the full report.
In order, this page contains: 1) the research question; 2) a brief outline of the most common themes found among both anonymous and credited experts’ remarks; 3) the submissions from respondents to this canvassing who agreed to take credit for their remarks.
This survey question asked respondents to share their answer to the following prompt and query:
Digital tools and human agency: Advances in the internet and online applications have allowed humans to vastly expand their capabilities, increased their capacity to tackle complex problems, allowed them to share and access knowledge nearly instantly, helped them become more efficient and amplified their personal and collective power to understand and shape their surroundings. Smart machines, bots and systems powered mostly by autonomous and artificial intelligence (AI) will continue those advances. As people more deeply embrace these technologies to augment, improve and streamline their lives, they are outsourcing some decision-making and autonomy to digital tools. That’s the issue we explore in this survey. Some worry that humans are going to turn the keys to nearly everything – including life-and-death decisions – over to technology. Some argue these systems will be designed in ways to better include human input on decisions, ensuring that people remain in charge of the most relevant parts of their own lives and their own choices.
Our primary question: By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?
- Yes, by 2035 smart machines, bots and systems powered by artificial intelligence WILL be designed to allow humans to easily be in control of most tech-aided decision-making relevant to their lives.
- No, by 2035 smart machines, bots and systems powered by artificial intelligence WILL NOT be designed to allow humans to easily be in control over most tech-aided decision-making relevant to their lives.
Results for this question regarding the evolution of human-machine design in regard to human agency by 2035:
- 56% of these experts said that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
- 44% said they hope or expect that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.
Follow-up qualitative question: Why do you think humans will or will not be in control of important decision-making in the year 2035? We invite you to consider addressing one or more of these related questions in your reply. When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? What key decisions will be mostly automated? What key decisions should require direct human input? How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society?
Click here to download the print version of the “Future of Human Agency” report
Click here to read the full “Future of Agency” report online
Click here to read anonymous responses to this research question
Common themes found among the experts’ qualitative responses:
- Powerful interests have little incentive to honor human agency – the dominant digital-intelligence tools and platforms the public depends upon are operated or influenced by powerful elites – both capitalist and authoritarian – that have little incentive to design them to allow individuals to exert more control over their tech-abetted daily activities.
- Humans value convenience and will continue to allow black-box systems to make decisions for them – people already allow invisible algorithms to influence and even sometimes “decide” many if not most aspects of their daily lives, and that won’t change.
- AI technology’s scope, complexity, cost and rapid evolution are just too confusing and overwhelming to enable users to assert agency – it is designed for centralized control, not personalized control. It is not easy to allow the kind of customization that would hand essential decision-making power to individuals. And these systems can be too opaque even to their creators to allow for individual interventions.
- Humans and tech always positively evolve – the natural evolution of humanity and its tools has always worked out to benefit most people most of the time; thus regulation of AI and tech companies, refined design ethics, newly developed social norms and a deepening of digital literacy will emerge.
- Businesses will protect human agency because the marketplace demands it – tech firms will develop tools and systems in ways that will enhance human agency in the future in order to stay useful to customers, to stay ahead of competitors and to assist the public and retain its trust.
- The future will feature both more and less human agency – tech will always allow a varying degree of human agency, depending upon its ownership, setting, uses and goals; some systems will allow more agency to easily be exercised by some people by 2035; some will not.
Responses from those preferring to take credit for their remarks. Some are longer versions of expert responses contained in shorter form in the survey report.
Following are the responses from survey participants who chose to take credit for their remarks in the survey. (Anonymous responses are published on a separate page.) The respondents were asked: “By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives? Why or why not?”
Some respondents chose not to provide a written elaboration, responding only to the closed-ended yes-no question. They are not included here. The statements are listed in random order. The written remarks are these respondents’ personal opinions; they do not represent their employers.
Rakesh Khurana, professor of sociology and professor of leadership development at Harvard University, responded, “People tend to be submissive to machines or any source of authority. Most people don’t like to think for themselves but rather like the illusion that they are thinking for themselves. Consider, for example, how often people follow GPS instructions even when instinct suggests they are going in the wrong direction. In politics or consumption, people often outsource their decision-making to what their friends ‘like’ on Facebook or the songs Pandora chooses, even if it is against their interests or might expose them to new ideas or music.
“In most instances, even without machines, there is a strong tendency among humans to rely on scripts and taken-for-granted unquestioned assumptions for their actions. Whether these scripts come from ‘society’ (a type of programmed machine) or an algorithm seems to be a difference of degree, not kind.
“For example, many people believe they have no agency in addressing problems linked to capitalism, human-caused climate change or any other ‘system’ that seems to exist outside of human control, even though these phenomena are designed and perpetuated by humans. It is easier for many people to imagine the end of the world than it is for them to imagine the end of capitalism.”
Mike Liebhold, distinguished fellow, retired, at the Institute for the Future, wrote, “By 2035 successful AI and robotic ‘full’ autonomous ‘control’ of ‘important’ decisions will be employed only in secure, well-managed and controlled applications of highly refined generations of applied machine intelligence, where any autonomous processes are managed and operated carefully by highly skilled workforces with high degrees of systems literacy.
“There will be gradual, widespread adoption of AI augmentation (not necessarily replacement) of human decisions by applied AI and machine learning deeply embedded in hardware and digital services. In most cases, down the long adoption curves, fully autonomous systems will only gradually be successfully applied, still constrained by evolved versions of the same systemic vulnerabilities, including pervasive digital security weaknesses and continued tensions between targeted personalization and privacy surveillance.
“Finally, complexity is a continuing challenge. Computing technology is enabling far more capabilities than humans are capable of understanding and using effectively. It’s a form of cognitive dissonance, like an impedance mismatch in electronic connections. Given the explosive availability of potentially useful data and structured knowledge resources and promising but immature data fusion, AI and cloud computing capabilities, there are years of work ahead to design systems that somehow systematise and simplify the complexity of AI machines to reliably summarise, explain and amplify capabilities into growing but still limited sets suited to human cognitive capabilities and focused tasks.”
Paul Jones, emeritus professor of information science at the University of North Carolina-Chapel Hill, said, “Once I taught people to use computers. Later I taught computers how to use people. The computers are the better students. How automation takes over can be subtle. Compare searching with Google to searching CD-ROM databases in the 1990s. Yes, humans can override the search defaults, but all evidence shows that they don’t and for the most part they won’t.
“In information science, we’ve known this for some time. Zipf’s Law tells us that least effort is a strong predictor of behavior—and not just in humans (although I’ll stay with humans here).
“We once learned how to form elegant search queries. Now we shout ‘Alexa’ or ‘OK, Google’ across the room in a decidedly inelegant fashion with highly simplified queries. And we take what we get for the most part.
“Driving has changed completely since I learned to drive at 16. While GPS, then Google Maps and, for the scofflaw, Waze ‘suggest’ routes and ‘warn’ us of problems, we generally do as we are told. Airplanes and trains are for the most part already self-driving for the better parts of their trips. They can be overridden but are not.
“The more often automated results please us, the more we trust the automation. So far (save for Teslas), so good. While such assistance with cooking, math, money-management, driving routes, question-answering, etc., may seem benign, there are problems lurking in plain sight. As Cory Doctorow dramatizes in ‘Unauthorized Bread,’ complicated access, ownership agreements and other controls do and will place the users of even the simplest networked technologies under a kind of centralized control that threatens both individual autonomy and social cohesion.
“The question you didn’t ask is: ‘Is this a good thing for humans?’ That’s a more complicated and interesting question than ‘Will humans be in control of important decision-making in the year 2035?’ I hope that one will be asked of the designers of any automated control system heading for 2035 and beyond.”
Henry E. Brady, professor and former dean of the school of public policy, University of California-Berkeley, wrote, “Many ‘decisions’ are already automated. My sense is that there will be a tremendous demand for having methods that will ensure that most important decisions are curated and controlled by humans. Thus, there will be a lot of support, using AI, text-processing and other methods, and there will be ways developed to control these processes to ensure that they are performing as desired.
“One of the areas in which I expect a lot of work will be done is in precisely defining ‘key decisions.’ Clearly there is already a recognition that bail, parole and other decisions in the criminal justice system are key decisions that must be approached carefully to avoid bias. Even for decisions that are less key such as using a dating app or Uber there is a recognition that some features are key: there must be some security regarding the identity of the parties involved and their trustworthiness. Indeed, developing trustworthy methods will be a major growth industry.
“One of the tradeoffs will be allowing a broader range of choices and opportunities versus verifying the authenticity of these as real choices that can deliver what they promise. So far technology has done a better job of broadening choices than assuring their authenticity. Hence the need for methods to ensure trustworthiness.”
Avi Bar-Zeev, founder and CTO of RealityPrime, an XR pioneer who helped develop the technology of HoloLens, Google Earth and Second Life and has worked with Microsoft, Google, Apple, Amazon and Disney, said, “Ad-tech is a business model designed to offset the apparent price of digital goods and services (down to “free”) by siphoning money from physical product purchases or app-store captive digital purchases through the ad-tech network.
“After decades of dominance, we’ve learned that nothing is free, and the cost of this approach is social harm. An individual price of this business model is in the loss of human agency, as ads always serve a remote master with an agenda, gradually perfecting its efficacy with more and more data and better and better prediction. The better they are, the less autonomy we have.
“By 2035, I expect ad-tech to finally diminish in dominance, with greater privacy controls. However, the most obvious replacement is “Personal AI,” which makes recommendations for our personal benefit. In fact, in a world full of content (think AR channels), we will either need the traditionally centralized filters and rankers, like Google, or we will need the intelligence to be distributed to our own devices, like personal firewalls and discovery engines.
“In either case, the algorithm still has the most control, because we only see what it shows us.
“We have to work harder to escape its natural bubble. The personal AI revolution has the potential to help AI make decisions the way we would, and thus do so to our benefit. But we will still rely on it, one way or another. The key question is whether we gain or lose by automating so much of our lives.”
Vint Cerf, pioneer innovator, co-inventor of the Internet Protocol and vice president at Google, wrote, “My thought, perhaps only hazily formed, is that we will have figured out how to take intuitive input from users and turn that into configuration information for many software-driven systems. You might imagine questionnaires that gather preference information (e.g., pick ‘this’ or ‘that’) and, from the resulting data, select a configuration that most closely approximates what the user wishes.
“Think about the Clifton StrengthsFinder questionnaire, a tool developed by the Gallup Organization that asks many questions that reveal preferences or strengths—sometimes multiple questions are asked in different ways to tease out real preferences/strengths.
“It’s also possible that users might select ‘popular’ constellations of settings based on ‘trend setters’ or ‘influencers’—that sounds somewhat less attractive (how do you know what behavior you will actually get?). Machine learning systems seem to be good at mapping multi-dimensional information to choices.”
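For illustration only, here is a minimal sketch of the kind of preference-to-configuration matching Cerf describes: binary “this or that” questionnaire answers are compared against a small set of candidate configurations, and the closest match is selected. The configuration names, settings and distance measure are hypothetical, not anything Cerf specifies; a real system would likely rely on learned models rather than this hand-rolled matching.

```python
# Illustrative sketch (hypothetical names and settings): pick the predefined
# configuration whose settings differ least from a user's "this or that" answers.
from typing import Dict

CANDIDATE_CONFIGS: Dict[str, Dict[str, int]] = {
    "privacy_first": {"share_data": 0, "personalize": 0, "notifications": 0},
    "convenience_first": {"share_data": 1, "personalize": 1, "notifications": 1},
    "balanced": {"share_data": 0, "personalize": 1, "notifications": 0},
}

def closest_config(answers: Dict[str, int]) -> str:
    """Return the name of the configuration closest to the user's answers."""
    def distance(config: Dict[str, int]) -> int:
        # Count how many settings disagree with the user's stated preferences.
        return sum(abs(config[k] - answers.get(k, 0)) for k in config)
    return min(CANDIDATE_CONFIGS, key=lambda name: distance(CANDIDATE_CONFIGS[name]))

# A user who declines data sharing but wants personalization lands on "balanced".
print(closest_config({"share_data": 0, "personalize": 1, "notifications": 0}))
```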
Andre Brock, associate professor of literature, media and communication at Georgia Tech and advisor to the Center for Critical Race Digital Studies, wrote, “In 2035, automated decision-making will largely resemble the robo-signing foreclosure courts of the 2020s, where algorithms tuned to the profit/governance models of extraction and exploitation are integrated into legal mechanisms for enhancing the profits of large corporations.
“My grounds for this extraordinary claim draw upon my observations about how governments have been captured by private/business entities, meaning that any pretense of equity based on the recognition of the ‘human’ has begun being supplanted by what Heidegger deemed humanity’s future as a ‘standing reserve’ of technology.
“Many decisions affecting everyday life for those in need of equity and justice already are framed through anti-blackness and extractive models; I’m specifically focused on the United States whose ‘democratic’ government was conceptualized by white men who worshiped property, owned Black folk, destroyed entire indigenous populations, and denied women the vote.
“Decision-making, from this perspective, largely benefits the political and economic interests of particular groups who fight savagely to retrench the gains made by Black folk, Asian folk, queer folk, women and the differently abled. There is no inherent democratic potential in information or algorithmic technologies designed to counter these interests, as the creators are themselves part of a propertied, monied, raced and sexualized elite.
“If anything, rolling out tech-abetted autonomous decision-making will further entrench the prevailing power structures, with possibilities for resistance or even equitable participation left to those who manage to construct alternate socialities and collectives on the margins.
“I’m intrigued by your question ‘what key decisions will be mostly automated?’ I feel that ‘key decisions’ is a phrase often understood as life-changing moments such as the purchase of a home, or what career one will pursue, or whether to become romantically involved with a possible life partner. Instead, I urge you to consider that key decisions are instead the banal choices made about us as we navigate society:
- Whether a police officer will pull you over because you’re a Black driver of a late model vehicle
- Whether a medical professional will improperly diagnose you because of phenotype/race/ethnicity/economic status
“These decisions currently rely upon human input, but the human point of contact is often culturally apprehended by the institutions through which these decisions are framed. I’m already uncomfortable with how these decisions are made; technology will not save us.”
Bill Woodcock, executive director, Packet Clearing House, commented, “The unholy trinity of the surveillance economy, pragmatic psychology and machine learning have dug us into a hole. They are convincing us to dig ever faster, and they are making us believe that it’s our own bright idea. I don’t see us getting out of this hole as long as the automated exploitation of human psychological weaknesses continues to be permitted.
“I’m very pessimistic about the balance of beneficial outcomes between humans and autonomous systems based on our track record thus far. For the first time in human history, we’ve created a stand-alone system which predates people and has its own self-contained positive feedback loops driving it toward increased scale. What’s particularly problematic is that the last 40 years of investigation of human psychology have revealed how easily people can be externally directed and how much work their brains will do to rationalize their actions as having been self-determined.
“Everyone wants to believe that they always have free will—that they always make their own choices based on rational processes—so they’ll do all of the work necessary to convince themselves of that while simultaneously opening their wallets to pay for more GPUs to further direct their own, and others’, behavior.”
Douglas Rushkoff, digital theorist and host of the NPR One podcast “Team Human,” wrote, “The incentives structure of western civilization would have to be changed from profit to mutual flourishing in order for any technology development company to choose to design technologies that augment human control.
“I do believe we could easily shift the emphasis of technology development from control-over-others to augmentation of agency, but this would require a radical shift in our cultural value system. I don’t believe that billions of dollars will be spent on a counter-narrative until such a shift occurs. It’s also hard to imagine scenarios years in the future without also taking into account mass migrations, the rise of authoritarianism, climate change and global health catastrophe.
“So, are we talking about the ‘key decisions’ of 6 billion climate refugees, or those of 200,000 corporate executives? My main reason for believing that human beings will not be in control of automation technology in the future is that human beings are in control of neither automated nor manual technologies in the present.
“I don’t see why the emergence of autonomous technologies would shift the trajectory away from using technology to control human behavior.”
Jamais Cascio, distinguished fellow at the Institute for the Future, predicted, “Several scenarios will likely co-exist in the future of agency by 2035:
1) Humans believe they are in control but they are not: The most commonly found scenario will be the one in which humans believe themselves to be in control of important decision-making in the year 2035, but they’re wrong. This will (largely) not be due to nefarious action on the part of rogue AI or evil programmers, but simply due to the narrowing of choice that will be part of the still-fairly-simple AI systems in 2035. Humans will have full control over which option to take, but the array of available options will be limited to those provided by the relevant systems. Sometimes choices will be absent because they’re ‘obviously wrong.’ Sometimes choices will be absent because they’re not readily translated into computer code. Sometimes choices will be absent because the systems designed to gather up information to offer the most relevant and useful options are insufficient.
“In this scenario, as long as the systems allow for human override to do something off-menu, the impact to agency can be minor. If it’s not clear (or not possible) that humans can do something else, partial agency may be little better than no agency at all.
2) Humans know they are not in control and they’re OK with that: Less common will be the scenario where humans do NOT believe themselves to be in control of important decision-making in the year 2035 and they like it that way. Humans are, as a general rule, terrible at making complex or long-term decisions. The list of cognitive biases is long, as is the list of historical examples of how bad decision-making by human actors has led to outright disaster. If a society has sufficient trust and experience with machine decision-making, it may decide to give the choices made by AI and autonomous systems greater weight.
“This would not be advisable with current autonomous and AI systems, but much can happen in a decade or so. There may be examples of AI systems giving warnings that go unheeded due to human cognitive errors or biases, or controlled situations where the outcomes of human vs. machine decisions can be compared, in this case to the AI’s benefit. Advocates of this scenario would argue that, in many ways, we already live in a world much like this—only the autonomous systems that make decisions for us are the emergent results of corporate rules, regulations and myriad minor choices that all add up to outcomes that do not reflect human agency. They just don’t yet have a digital face.
3) A limited number of AI-augmented humans have control: Last is a scenario that will somewhat muddy the story around human agency, as it’s a scenario in which humans do have control over important decision-making in the year 2035, but it’s a very small number of humans, likely with AI augmentations. Over the past few decades, technologies have vastly extended individuals’ power. Although this typically means extended in scale, where human capabilities become essentially superhuman, it can also mean extended in scope, where a single or small number of humans can do what once took dozens, hundreds, or even thousands of people. By 2035, we’ll likely see some development of wearable augmentations that work seamlessly in concert with their humans; whether or not we think of that person as a cyborg comes down to language fashion. Regardless, the number of people needed to make massive life-or-death decisions shrinks, and the humans who retain that power do so with significant machine backup.
“This may sound the most fantastical of the three, but we’re already seeing signals pointing to it. Information and communication systems make it easy to run critical decisions up the chain of command, taking the yes-or-no choice out of the hands of a low-ranking person and giving it to the person tasked with that level of responsibility. Asking the president for authorization to fire a weapon is just a text message away. Whether or not we go as far as cyborg augmentation, the humans-plus-AI model (as described by Kevin Kelly as ‘centaurs,’ his name for future people who use artificial intelligence to complement their thinking) will deeply enmesh decision-making processes. Advocates will say that it leads to better outcomes by taking the best parts of human and machine; critics will say that the reality is quite the opposite.
“For these scenarios, the canonical ‘important decision-making’ I’ve had in my head regards military operations, as that is the topic that gets the most attention (and triggers the most unrest). All three of the scenarios play out differently.
- In Scenario 1, the information and communication systems that enable human choice potentially have a limited window on reality, so that the mediated human decisions may vary from what might have been chosen otherwise.
- In Scenario 2, advocates would hope that carefully designed (or trained) systems may be seen as having ‘cooler heads’ in the midst of a crisis and be less-likely to engage in conflict over ego or ideology; if the system does decide to pull the trigger (literally or metaphorically), it will only be after deep consideration. One hopes that the advocates are right.
- In Scenario 3, there’s the potential for both narrowed information with AI mediation and the ‘wise counsel’ that could come from a well-designed long-term thinking machine; in my view, the former is more plausible than the latter.
“Outside of these scenarios there are some key factors in common. The primary advantage to AI or autonomous decision-making is speed, with machines generally able to take action far faster than can a human (e.g., algorithmic trading). In competitive situations where first-mover advantage is overwhelming, there will be a continued bias towards AI taking charge, with likely diminishing amounts of human guidance over time.
“Another advantage of AI is an imperviousness to tedium, meaning that an AI can undertake the same repeated action indefinitely or pore over terabytes of low-content data to find patterns or anomalies, and give the final pass as much attention as the first. An amount or diversity of information that would be overwhelming to a human could easily be within the capacity of an intentionally-designed AI. When decisions can be made more precisely or accurately with more information, machine systems will likely become the decision-makers.
“The most unusual advantage of AI is ubiquity. If an AI system can make better (or at least useful) decisions, it does not need to be limited to the bowels of the Pentagon. Arguably, a military where every human soldier has AI ‘topsight’ that can see the larger dimensions of the conflict is more effective than one that has to rely on a chain-of-command or potentially biased human decision-making in the field. More broadly, a decision-making system that proves the most insightful or nuanced or aggressive or whatever can be replicated across all of the distributed AIs. If they’re learning systems, all the better—lessons learned by one can very rapidly become lessons learned by them all.
“I suggested at the outset that the conditions of 2045 will likely differ significantly from the world of 2035. The world of mid-century would be an evolution of the world we made in the previous couple of decades. By 2045, I suspect that our three scenarios would be the following:
- No AI, No Cry: For many reasons, there are few if any real AIs left by 2045, and humans will be the default important decision-makers. This could be by choice (a conscious rejection of AI, possibly after some kind of global disaster) or by circumstance (the consequences of climate disaster are so massive that infrastructural technologies like power, parts and programmers are no longer available).
- All Watched Over by Machines of Loving Grace: The full flowering of the second 2035 scenario, in which our machines/AIs do make significantly smarter and wiser decisions than do humans and that’s OK. We let our technology make the big choices for us because it will simply do a better job of it. It works out.
- Digital Dictators: The full flowering of the third 2035 scenario. Here we see a massive consolidation of power in the hands of a very small number of ‘people,’ hybrids of AI top-sight and human biases. Maybe even a full digital duplication of a notorious authoritarian leader of years past, able to live on forever inside everyone’s devices.
“Of course, there are always some aspects of the #1 scenario across issue areas—the Miserable Muddle. Stuff doesn’t work exactly as we’d like, but we can get enough done to carry on with it. People in power always change, but day-to-day demands (food, shelter, entertainment) don’t. Humans just keep going, no matter what.”
Ben Shneiderman, widely respected human-computer interaction pioneer and author of “Human-Centered AI,” wrote, “Increasing automation does not necessitate less human control. The growing recognition is that designers can increase automation of certain subtasks so as to give humans greater control over the outcomes. Computers can be used when they are reliable, safe and trustworthy while preserving human control over essential decisions, clarifying human responsibility for outcomes and enabling creative use by humans. This is the lesson of digital cameras, navigation and thousands of other apps. While rapid performance is needed in some tasks, meaningful human control remains the governing doctrine for design. As automation increases so does the need for audit trails for retrospective analysis of failures, independent oversight, and open reporting of incidents.”
For this study, Shneiderman also shared insights from an August 2022 interview he did with the Fidelity Center for Applied Technology: “The hopeful future we can continue to work toward is one in which AI systems augment, amplify and enhance our lives. Humans have agency over key decisions made while using a vast number of AI tools in use today. Digital cameras rely on high levels of AI for setting the focus, shutter speed and color balance while giving users control over the composition, zoom and decisive moment when they take the photo. Similarly, navigation systems let users set the departure and destination, transportation mode and departure time, then the AI algorithms provide recommended routes for users to select from as well as the capacity to change routes and destinations at will. Query completion, text auto-completion, spelling checkers and grammar checkers all ensure human control while providing algorithmic support in graceful ways.
“We must respect and value the remarkable capabilities that humans have for individual insight, team coordination and community building and seek to build technologies that support human self-efficacy, creativity, responsibility and social connectedness. Some advocates of artificial intelligence promote the goal of human-like computers that match or exceed the full range of human abilities from thinking to consciousness. This vision attracts journalists who are eager to write about humanoid robots and contests between humans and computers. I consider these scenarios as misleading and counterproductive, diverting resources and effort from meaningful projects that amplify, augment, empower, and enhance human performance.
“The past few years we have seen news stories about tragic failures of automated systems. The two Boeing 737 MAX crashes are a complex story, but one important aspect was the designers’ belief that they could create a fully autonomous system that was so reliable that the pilots were not even informed of its presence or activation. There was no obvious visual display to inform the pilots of the status, nor was there a control panel that would guide them to turn off the autonomous system. The lesson is that the excessive belief in machine autonomy can lead to deadly outcomes. When rapid performance is needed, high levels of automation are appropriate, but so are high levels of human independent oversight to track performance over the long-term and investigate failures.
“We can accelerate the wider, successful adoption of human-centered AI. It will take a long time to produce the changes that I envision, but our collective goal should be to reduce that time from 50 to 15 years. We can all begin by changing the terms and metaphors we use. Fresh sets of guidelines for writing about AI are emerging from several sources, but here is my draft offering:
1) Clarify human initiative and control.
2) Give people credit for accomplishments.
3) Emphasize that computers are different from people.
4) Remember that people use technology to accomplish goals.
5) Recognize that human-like physical robots may be misleading.
6) Avoid using human verbs to describe computers.
7) Be aware that metaphors matter.
8) Clarify that people are responsible for use of technology.”
Rob Reich, professor of political science and director of the Center for Ethics in Society at Stanford University, said, “No, systems powered by AI will not be designed to allow people to easily be in control over decision-making. The reigning paradigm for both basic research and industrial product design in AI is to strive to develop AI systems/models that meet or exceed human-level performance. This is the explicit and widely accepted goal of AGI, or artificial general intelligence. This approach sets AI on a course that leads inexorably to the diminishment or replacement of human agency.”
Sara M. Watson, writer, speaker and independent technology critic, replied with a scenario, writing, “The year is 2035. Intelligent agents act on our behalf, prioritizing collective and individual human interests above all else. Technological systems are optimized to maximize for democratically recognized values of dignity, care, well-being, justice, equity, inclusion and collective- and self-determination. We are equal stakeholders in socially and environmentally sustainable technological futures.
“Dialogic interfaces ask open questions to capture our intent and confirm that their actions align with stated needs and wants in virtuous, intelligent feedback loops. Environments are ambiently aware of our contextual preferences and expectations for engagement. Rather than paternalistic or exploitative defaults, smart homes nudge us toward our stated intentions and desired outcomes. We are no longer creeped out by the inferred false assumptions that our data doppelgängers perpetuate behind the uncanny shadows of our behavioral traces. This is not a utopian impossibility. It is an alternative liberatory future that is the result of collective action, care, investment and systems-thinking work. It is born out of the generative, constructive criticism of our existing and emergent relationship to technology.
“In order to achieve this:
- Digital agents must act on stakeholders’ behalf with intention, rather than based on assumptions.
- Technology must augment, rather than replace human decision-making and choice.
- Stakeholders must trust technology.
“The stakes of privacy for our digital lives have always been about agency. Human agency and autonomy are the power and freedom of self-determination. Machine agency and autonomy are realized when systems have earned trust to act independently. Sociotechnical futures will rely on both in order for responsible technological innovation to progress.
“As interfaces become more intimate, seamless, and immersive, we will need new mechanisms and standards for establishing and maintaining trust. Examples:
- Audio assistants and smart speakers do not present users with a list of 10 search results; instead they initiate a single command-line action.
- Augmented-reality glasses and wearable devices offer limited real estate for real time detail and guidance.
- Virtual reality and metaverse immersion raise the stakes for connected, embodied safety.
- Synthetic media like text and image generation are co-created through the creativity and curation of human artistry.
- Neural interfaces’ input intimacy will demand confidence in maintaining control of our bodies and minds.
“Web3 principles and technical standards promise trustless mechanism solutions, but those standards have been quickly gobbled by rent seekers and zero-to-one platform logics before significant shifts in markets, norms and policy incentive structures can sustainably support their vision. Technology cannot afford to continue making assumptions based on users’ and consumers’ observed behaviors.
“Something akin to Lawrence Lessig’s four forces of regulatory influence over technology must be enacted:
- Code – Technology is built with agency by design.
- Markets – Awareness of and demand for agency interfaces increases.
- Norms – Marginalized and youth communities are empowered to imagine what technology agency futures look like.
- Law – Regulators punish and disincentivize exploitative, extractive economic logics.”
Maja Vujovic, owner/director of Compass Communications, based in Belgrade, Serbia, wrote, “Whether we are ready or not, we must find ways to restore our control over our digital technology. If we don’t build user interfaces with a large button, simple keyword or short voice command for clearly separating what we agree to give out willingly (that which can be used) and what we don’t (which is off limits), then we’re just dumb. And doomed.
“Let’s look at the larger picture. We don’t need to wait for 2035 to automate our world. We can already patch together half a dozen applets, get our smart fridge to converse with our grocery app and link them both to our pay-enabled smart phone and a delivery service; they could restock our pantry on their own, every week. Yes, in the coming years, we will happily delegate such decisions in this interim period, when a sea of compute power will have to match an ocean of tiny parameters just to propose our next beach read or our late-night dinner-on-wheels.
“But wait! A nosy wearable will sound an alarm about that late-night meal intent and might even independently report it to our family doctor and to our health insurer. Our life insurance plan might also get ‘upgraded’ to a steeper premium, which our smart bank would automatically approve and honour every month. We might then also lose points on our gym score list, which could trigger a deserved bump of our next month’s membership fee, as a lesson.
“And just as we use our Lessons Learned app to proscribe late-night eating (because it makes us sick in more ways than one), we could see a popup flash before us, with a prompt: ‘Over three million of your look-alike peers voted for this candidate in the last election. She fights to protect our privacy, empowers disadvantaged groups and leads a healthy life—no late-night meals in her house! Would you join your peers now and cast your vote, quickly and confidentially?’
“All of this seems not implausible. The systems invoked above would work for each of us as users—we are their ‘Player One.’ Alas, there are also those systems that we are rarely aware of, where we are not users, but items. Any of those systems could—right now—be assessing our credit or dwelling application. Some applicant-tracking systems already blindly filter out certain job candidates or education seekers. Airbnb, hotels and casinos filter out unruly guests. In some countries of Europe, Middle East and Asia, authorities use facial recognition (de facto, though not always de jure) to keep tabs on their perceived opponents. It’s chilling to see the U.S. on the brink beyond which a patronizing governmental body or a cautious medical facility could filter out and penalize people based on their personal life choices.
“The technology to generate all kinds of recommendations already exists and is in use, often in ways that are not best for us. What is conspicuously lacking is real utilities, built for our benefit. Perhaps we might have a say in evaluating those who work for us: professors, civil servants, police officers, politicians, presidents. In fact, electoral voting systems might be equipped with a shrewd AI layer, Tinder-style: swipe left for impeachment; swipe right for second term.
“One reason more useful public-input recommender systems are not widely available is that they haven’t been successfully built and deployed. All other recommender systems have backers. We, the people, could try using Kickstarter to crowdfund our own.
“We can and will draft and pass laws that will limit the ability of technological solutions to decide too many things for us. In the coming decade, we will simply need to balance those two coding capacities of ours—one based on numbers, the other on letters. That’s a level of ‘programming’ that non-techies are able to do to put technology (or any unbridled power, for that matter) on a short leash. That interface has existed for several millennia; in fact, it was our first coding experience: regulation.
“There are already initiatives. An example is California’s ‘Kids’ Code’ (an age-appropriate-design code) that incorporates youth voices and energy. It shows that legislators and users possess impressive maturity around human-computer interaction and its risks, though the tech industry may appear unfazed, for now.”
R Ray Wang, founder, chairman and principal analyst at Constellation Research, wrote, “In almost every business process, journey or workflow, we have to ask four questions: 1) When do we fully intelligently automate? 2) When do we augment the machine with a human? 3) When do we augment the human with a machine? 4) When do we insert a human in the process? And these questions must also work with a framework that addresses five levels of AI Ethics: 1) Transparent. 2) Explainable. 3) Reversible. 4) Trainable. 5) Human-led.”
Richard Ashcroft, deputy dean and professor of bioethics at City University of London Law School, an expert on AI and ethics in healthcare, commented, “I am not optimistic, because designing human agency into AI/ML-based systems is not easy from an engineering point of view, plus the industry and most of academia are mainly focussed on ‘quick wins,’ ‘low-hanging fruit’ and gaining competitive advantage in so doing.
“There’s also a strong tendency in the field to look for the advantages that ‘cutting out’ human agency, cognitive biases and other failures of ‘rationality’ bring, so I don’t think there is much appetite for designing human agency into these systems, outside the rather narrow field of ‘AI ethics,’ and the general debate in that area is more about assuring us that AI is safe, rather than looking for ways to make it so.
“A third point: Only some of these problems are specific to AI/ML systems; many of the issues were already built into complex sociotechnical systems, such as state bureaucracy, precisely to eliminate individual discretion because of issues around efficiency, avoidance of corruption and personal bias and so on.
“Also, any sufficiently complex system has ‘control problems’ that become problems of causal efficacy and epistemology. Humans have influence over such systems, but the effects of such influence are not always predictable or even desirable, from the point of view of the purposes built into such systems.”
Lillie Coney, chief of staff and policy director for a member of the U.S. House of Representatives, formerly associate director of the Electronic Privacy Information Center, said, “Agency and autonomy for one person may deny agency and autonomy to others. There will need to be norms, values and customs that align to support a transition to this state. There will likely be a ‘four walls rule,’ under which a person has full rights to exercise autonomy over technology within their own dwelling, but even this will rely on Supreme Court decisions that uphold or strike down laws governing such matters.”
Ojelanki Ngwenyama, professor of global management and director of the Institute for Innovation and Technology Management at Toronto Metropolitan University, said, “It is pretty clear to me that it is not about the technology, but who controls it. Already tech firms determine what technologies we have and how we interact with them. Presently, we cannot even stop a mobile phone from recording our conversations and sending them to service providers, the makers of the technology, the developers of mobile operating systems, security agencies, etc.”
Paul Saffo, longtime Silicon Valley foresight guru, observed, “We have already turned the keys to nearly everything over to technology. The most important systems in our lives aren’t the ones we see, but the systems we never notice—until they fail. This is not new: consider the failure of the Galaxy IV satellite a quarter century ago. Puzzled consumers who never noticed the little dishes sprouting atop gas stations discovered they couldn’t fill their tank, get cash from ATMs or watch their favorite cable TV programs.
“We have experienced 16 Moore’s Law doublings since then. Our everyday dependence on technology has grown with even greater exponentiality. We carry super-computers in our pockets, our homes have more smarts than a carrier battle group, and connectivity has become like oxygen—lose it for more than a few moments and we slip into digital unconsciousness, unable to so much as buy a latte, post a Tweet or text a selfie.
“Technologists are optimists. They promise that the next wave of technology will solve the failings of prior innovations and make glitches a thing of the past. Empowered by AI, Richard Brautigan’s ‘machines of loving grace’ will keep omniscient watch over our lives in a harmonious cybernetic meadow. There is no reason why the next technological wave can’t expand human agency, giving us greater satisfaction and control. It is just a matter of design. Or, rather, if it were just a matter of design, the now-ubiquitous spell-checkers that so annoy us would actually be helpful—and come with an off switch to flick when they weren’t. This is just a minor example, but if we can’t make the small, simple stuff work for us, how will more complex systems ever live up to our expectations?
“But don’t blame the machines. No matter how brilliant AIs, avatars and bots become, they will never be truly autonomous. They will always work for someone—and that someone will be their boss and not you, the hapless user. Consider Uber or any of the other mobility services: in theory, their ever more brilliant algorithms should be working tirelessly to enhance the customer experience and driver income. Instead, they answer to their corporate minders, coldly calculating how far fares can be boosted before the customer walks—and how much can be salami-sliced off the driver’s margin before they refuse to drive.
“Nearly a century ago, Will Durant observed that ‘history reports that the men who can manage men manage the men who can manage only things, and the men who can manage money manage all.’ If Durant were here today, he would surely recognize that those who manage our synthetic intelligences will inevitably become the ones who manage all. And they will instruct their intelligences to grant you just enough agency to keep you from noticing your captivity.”
Nrupesh Soni, founder and owner of Facilit8, a digital agency located in Namibia, commented, “I fear that we have a whole generation of youth that is used to instant gratification and quick solutions, and we do not have enough people who can think long-term and work on solutions. I do not think humans will be in charge of the bots/AI decision-making, mainly because we are seeing a huge gap between the people who grew up with some understanding of programming and the basic motivations behind our digital technologies, and the next-gen that is used to using APIs provided to them without knowing the backend coding required to work on something new. There will be a time in the next 10 years when most of those who developed the core of these bots/AI will be aging out of the creative force, in their late 50s or 60s, and the younger generation will not know how to really innovate as they are used to plug-and-play systems.”
Maggie Jackson, award-winning journalist, social critic and author, commented, “Unless urgent steps are taken to protect human autonomy in our relations with AI, human agency in future will be seriously limited by increasingly powerful intelligences other than our own. I see the danger arising from both humanity’s innate weaknesses and from the unintended consequences of how AI is constructed.
“One point of vulnerability for human agency stems from how standard AI has been formulated. As AI pioneer Stuart Russell has brilliantly noted, we have created AI systems that have one overarching goal: to fulfill the objectives that humans specify. Through reinforcement learning, the machine is given a goal and must solve this objective however it can. As AI becomes more powerful, its foundational motivation becomes dangerous for two reasons.
- People can’t know completely and perfectly what a good objective is; AI doesn’t account for a device or a person’s interactions within an unpredictable world.
- A machine that seeks to fulfill a specific objective however it can will stop at nothing—even dismantling its off switch—in order to attain its goal, i.e., its ‘reward.’ The implications are chilling.
“Consider the case of using AI to replace human decision-making. AI is increasingly used to diagnose health problems such as tumors, to filter job candidates, and to filter and shape what people view on social media via recommender systems. While attention has rightly been drawn to the innate bias that is invested in AI, a larger danger is that AI has been created solely to maximize click-through or other similarly narrow objectives.
“In order to maximize their goals, algorithms try to shape the world, i.e., the human user, to become more predictable and hence more willing to be shaped by the AI system.
“Social media and search engines, for instance, aren’t giving people what they want as much as modifying users with every click to bend to the goals they were created to pursue. And the more capable AI becomes, the more it ‘will be able to mess with the world’ in order to pursue its goals, write Russell and colleagues in a recent paper on AI’s future. ‘We are setting up a chess match between ourselves and the machines with the fate of the world as the prize. We don’t want to be in that chess match.’
“The result may be a form of chilling human enfeeblement, a dependence on powerful devices coupled with an indifference to this imbalance of power. It’s a mark of the seriousness of AI’s perils that leading scientists are openly discussing the possibility of this enfeeblement, or the ‘Wall-E problem’ (a reference to the movie of that name, which portrayed humans as unwittingly infantilized by their all-powerful devices).
“A second point of vulnerability can be found in the rising use of caregiver robots. Simple robots are used mainly with vulnerable populations whose capacity to protect their cognitive and physical agency is already compromised. Robots now remind sick and elderly people to take their medicines; comfort sick children in hospitals; tutor autistic youth and provide companionship to seniors. Such ‘care’ seems like a promising use for what I call ‘AI with a face.’ But humanity’s proven willingness to attribute agency to and to develop intense social feelings for simple robots and even for faceless AI such as Siri is perilous. People mourn ‘sick’ Roombas, name and dress their healthcare-assistants and see reciprocity of social emotions such as care where none exists. As well, patients’ quick willingness to cede responsibility to a robot counters progress in creating patient-centered care.
“While studies show that a majority of Americans don’t want a robot caregiver, forces such as the for-profit model of the industry, the traditional myopia of designers, and the potential for people with less voice in healthcare to be coerced into accepting such care mean that public reservations likely will be ignored. In sum, human autonomy is threatened by rising calls to use caregiver robots for the people whose freedom and dignity may be most threatened by their use.
“I am heartened by the urgent discussions concerning ethical AI ongoing around the world and by rising public skepticism—at least compared with a decade or so ago—of technology in general. But I am concerned that the current rapid devaluation of human agency inherent in AI as it is used today is largely absent from public conversation.
- “We need to heed the creative thinkers such as Russell who are calling for a major reframing of standard models of AI to make AI better aligned with human values and preferences.
- “We need to ignite serious public conversation on these topics—a tall order amidst rising numbness to seemingly ceaseless world crises.
“When it comes to human agency and survival, we are already deeply in play in the chess match of our lives—and we must not cede the next move and the next and the next to powerful intelligences that we have created but are increasingly unable to control.”
John Sniadowski, a systems architect based in the UK, said, “Our lack of agency has arrived. I suggest that the bias towards never challenging the machines is inevitable. Decision systems are generally based on opaque formulas with targeted outcomes that usually serve only the best interests of the AIs’ vendors. In most cases, the ultimate outcomes of these automated, data-based decisions cannot be challenged and are, in fact, rarely challenged, because the human belief is that the system is correct often enough to be followed.
“Consider the financial industry today, in 2022. Lending decisions are based on smart systems that are nearly impossible to challenge. In addition, AI is frequently trained on data sets that are biased and may contain hidden anomalies that significantly alter the decision process.
“The vast majority of the population will be convinced by marketing, propaganda or other opinion-bending messages that these systems are right and any individual’s opinion is wrong. We already see that sort of behaviour in human-based systems operated by Big Pharma, where millions/billions of revenue could be lost if a significant outcome of a product/decision is successfully challenged.
“Life-and-death decisions should always require responsible human input, and the AI system should be required to present the set of criteria used in arriving at its decision in a way that is transparent and capable of human interpretation. This should be enshrined in legislation with punitive consequences for vendors that do not comply with decision transparency. I would strongly suggest that this be incorporated in a global human rights framework: all humans should have the right to challenge the outcome of an autonomous decision. This should be part of the UN charter and put in place as soon as possible.
“Given what we are experiencing on social media, where people can become captured by ‘echo chambers,’ there is a significant danger that AI and autonomous decision processes will exacerbate a broad range of societal inequalities. The vast array of data metrics now harvested from individuals’ internet activities will continue to categorize each person more and more towards an inescapable stereotype without the individual even being aware of the label unfairly applied to them.
“Companies will harvest information from ‘smart cities,’ and AI will build dossiers on each citizen that will be applied for a wide variety of decisions about a person completely without their personal consent. This is very dangerous, and we are already seeing this capability being subverted by some governments to tighten their authoritarian grip on their population.”
Ginger Paque, an expert in and teacher of internet governance with the Diplo Foundation, commented, “I note that the response to this question assumes that most conditions in the world will be on a predicted trajectory, following what is generally considered progress. I responded ‘no’ for two reasons. We are facing serious challenges today: pandemics, war, discrimination, polarizations, for example.
“It’s impossible to predict what kind or level of civilization will prevail 13 years from now. I have no confidence in mathematical probabilities concerning our future on or off Earth. And while I think that coding, algorithms and machine learning will advance, I do not think they will be self-aware or reach sentience in the foreseeable future. That’s a belief, I guess, more than a well-founded scientific position. It leads me to think, then, that AI will continue to be designed, coded, and controlled by profit-seeking companies who have a vested interest in shaping and controlling our decision-making processes. So, it is not AI that controls our decisions; it is other humans who use the powerful resources of AI.
“What key decisions should be mostly automated? Those that each individual chooses to have automated based on clear and transparent settings. Which key decisions should require direct human input? Any that the individual chooses to have direct input on, and in particular any that affects the quality of life, education, privacy, health or characteristics unless clear options and alternatives are presented and evaluated for transparent choice.
“Autonomous decision-making is directed by some agency, most often a profit-making entity that logically has its profit as a priority. Whoever writes the code controls the decision-making and its effects on society. It’s not autonomous, and we should have clear and transparent options to ensure we do not continue to cede control to known or unknown entities without proper awareness and transparency. It’s not AI that’s going to take humans’ decision-making faculties away any more than phones and GPS ruin our memories. Humans choose—quite often without the proper awareness, information and training—to do so.”
Jean Seaton, director of the Orwell Foundation and professor of media history at the University of Westminster, said, “The questions you pose do not factor in political systems. Already we can see the impact of new, apparently ‘democratic’ ways of communicating on political choices and on political structures. The manipulability of apparently technical systems has already moved the world dramatically away from a wider human agency. The willingness—particularly of authoritarian states—to monitor but also ‘please’ people and manipulate understanding depends on these systems. The hostility towards expertise seen today, the politicization of every critical issue, and more—these are all manipulable. Which political systems do well out of this? In future, very few people may have agency. How will they use it?
“Fear and anxiety are proper responses to the challenges we face. For one, the existential threat of climate extinction is about to be fogged by the initial waves of refugees from soon-to-be uninhabitable places—Delhi? Central and South Africa? Afghanistan and Pakistan? Mis-, dis- and malinformation succeed as distractions, and human agency is wasted on the small revenges rather than solving the long-term challenges that must be addressed now.”
Leah Lievrouw, professor of information studies at UCLA, wrote, “If the main point of this set of questions is agency, ‘will be designed’ is the fulcrum. Someone is designing these technologies for some purpose based on the assumptions of those who commission them. ‘Design’ and ‘designers’ seem to be presented here as passive processes, ‘manna from heaven’ in the old line from economics about the effects of technology.
“Who exactly has ‘agency’? According to the June 11, 2022, cover feature on AI in The Economist, the only ‘designers’—organizations? individuals?—with the cash and brute-force computing capabilities to do the newest ‘foundational AI’ are huge private for-profit firms, with one or two non-profits like OpenAI being supported by the private firms; there are also a few new startups attempting ‘responsible’ or ‘accountable’ algorithms. So, there’s the agency of designers (will they design for user control?) and the agency of users (decision-making based on what AI presents to them?).
“Decision-making may not be the only aspect of agency involved. The ‘machine-human’ relationship binary has been around in popular culture for ages, but I think the current thinking among AI designers goes way beyond the one-to-one picture. Rather, AI will be integrated into many different digital activities for lots of reasons, with ripple effects and crossovers likely. Thus, there’s unlikely to be a bright-line division between machine decisions and human decisions, both for technical reasons and because who, exactly, is going to declare where the line is? Employers? Insurers/finance? State agencies? Legislatures?
“Any entity deploying AI will want to use it to the greatest extent possible unless specifically enjoined from doing so, but right now (except maybe in the EU…?) it seems to me that few regulators or organizations are there yet. We already see some very worrisome outcomes, for example, algorithmic systems used in legal sentencing.”
Marti Hearst, professor and head of the school of information, University of California-Berkeley, said, “In general, interfaces to allow people to adjust settings do not work well because they are complicated and they are disfavored by users. Consider tools that aid people in what they are doing, such as search engines or machine translation. These tools use a lot of sophisticated computation under the hood, and they respond quickly and efficiently to people’s queries. Research shows that people do not want to adjust the settings of the underlying algorithms. They just want the algorithms to work as expected.
“Today’s machine translation tools work adequately for most uses. Research shows that translators do not want to expend a lot of effort correcting a poor translation. And users do not want to take the time to tweak the algorithm; they will use the results even if they are poor since there is often no other easy way to get translations. Alternative interfaces might give users lots of choices directly in the interface rather than anticipating the users’ needs. This can be seen in faceted navigation for search, as in websites for browsing and searching for sporting goods and home decor products.
“Tools will continue to make important decisions for people, whether they want this or not. This includes settings such as determining recidivism risk and bail, judging teacher performance, and perhaps push advertising and social media feeds. These tools will not allow for any user input since it is not in the interests of those imposing the decisions on others to do so.”
Laurie L. Putnam, educator and communications consultant, commented, “If you look at where we are now and plot the trajectory of digital ‘tools,’ it looks like we’re going to land in a pretty dark place.
“Already we would be hard-pressed to live our lives without using digital technologies, and already we cannot use those phones and apps and cars and credit cards without having every bit of data we generate—every action we take, every purchase we make, every place we go—hoovered up and monetized. There is no way to opt out. Already we are losing rather than gaining control over our personal data, our privacy, our lives.
“Yes, digital technologies can do a lot of good in the world, but when they are created to improve a bottom line at any cost, or to control people through surveillance, then that is what they will do. If we want to alter our course and land in a better place, we will need to reinvent the concept of consumer protection for the information age. That will require thoughtful, well-informed human decision-making—now, not years from now—in legislative policies, legal standards and business practices. These are the checks and balances that can help move us in the right direction.”
Kunle Olorundare, principal manager at the Nigerian Communications Commission, wrote, “Based on today’s trends in terms of the development of artificial intelligence, by 2035 the advancement of AI will have taken over most of the decisions that once were taken solely by humans. Bots with high-level intelligence will be manufactured for the sole purpose of working through complex decision trees. These will take over most human decisions.
“Key decisions in engineering design, packaging in the manufacturing sector, arrangement of stock, logistics tracking, triggering of alarms, etc., will be made by bots. However, human decisions will still be relevant even if relegated to the background. Ethical issues in engineering will still be taken on by humans because they require weighing arguments for and against.
“Our society will be changed for good, with integrated bots taking on most movement logistics decisions. Autonomous vehicles will lead to safer road practices because bots will always keep to the rules (except for occasional accidents that may occur due to bugs). There won’t be unreasonable competitive driving on our roads, in the sky or on the ocean.
“Those of us in the Internet Society—of which I am vice president of the Nigeria Chapter—believe in an open but secured internet. Autonomous systems will operate on a secured internet that allows for secure dissemination of relevant data for informed decisions based upon analytics. Due to the increasing volume of vital data flowing through these systems, we must make continuous improvements in cybersecurity.
“Among the important places in which autonomous systems and the Internet of Things will play roles in resolving complex problems are in hospitals—for diagnosis and other tasks—and in agriculture, where data analytics and unmanned aerial vehicles will be useful in all aspects of farming and food distribution.”
Jeremy Foote, a computational social scientist studying cooperation and collaboration in online communities, said, “People are incredibly creative at finding ways to express and expand their agency. It is difficult to imagine a world where they simply relinquish it. Rather, the contours of where and how we express our agency will change, and new kinds of decisions will be possible. In current systems, algorithms implement the goals of their designers. Sometimes those goals are somewhat open-ended and often the routes that AI/ML systems take to get to those goals are unexpected or even unintelligible. However, at their core, the systems are designed to do things that we want them to do, and human agency is deeply involved in designing the systems, selecting parameters and pruning or tweaking them to produce outputs that are related to what the designer wants.
“AI and ML are already and will continue to be used to identify weaknesses in our processes. Through big data processing and causal inference approaches, we have recognized ways that judicial systems are racially biased, for example. I think that these sorts of opportunities to use data to recognize and correct for blind spots will become more common.
“Some systems, of course, will become more automated—driving, like flying, will likely be something that is primarily done by computers. There is great promise in human-machine collaborations. Already, tools like GPT-3 can act as on-demand sounding boards, providing new ways to think about a topic or writing up the results of research in ways that augment human authors.”
Gary A. Bolles, chair for the future of work at Singularity University and author of “The Next Rules of Work,” predicted, “Innovators will continue to create usable, flexible tools that will allow individuals to more easily make decisions about key aspects of their lives and about the technologies they use. There’s also a high probability that 1) many important decisions will be made for people, by technology, without their knowledge, and 2) the creators of media and information platforms will lead the arms race, creating tools that are increasingly better at hacking human attention and intention, making implicit decisions for people and reaping the data and revenue that comes from those activities.
“First, every human needs education in what tech-fueled decision-making is and what decisions tech can and does make on its own.
“Second, tech innovators need a stringent code of ethics that requires them to notify humans when decisions are made on their behalf, tells them the uses of related data and tells how the innovator benefits from the use of their tools. Finally, industry needs open protocols that allow users to manage dashboards of aggregated decisions and data to provide transparent information that allows users (and their tools) to know what decisions technology is making on their behalf, empowering them to make better decisions.”
Henning Schulzrinne, Internet Hall of Fame member and co-chair of the Internet Technical Committee of the IEEE, said, “Agency and recourse are privileges now and they are likely to become more so. By 2035, automated decision-making will affect all high-volume transactions—from hiring and firing to obtaining credit, renting apartments, gaining access to health care, and interactions with the criminal justice system. Wealth, income and social standing will determine the extent to which individuals will have the ability to contest and work around automated decisions. It doesn’t seem likely that any of this will change.
“This is not a new concept. An example is the scheduling of when and where you work; for many hourly workers and gig workers this is automated, with little ability to influence the hours and locations. Employment and termination are also already largely algorithmic (see Amazon warehouses and many gig platforms). High-income, higher-status individuals will likely still get interviewed, hired and evaluated individually, and have at least some leverage. This is also more trivially true for travel—economy class travelers book or rebook online; business class travelers get concierge service by a human. In 2035, the notion of talking to an airline representative, even after waiting for hours in a call center queue, will become a rarity.
“Areas that are currently ‘inefficient’ and still largely human-managed will become less so, particularly in regard to employment, rental housing and health care. Human input is only modestly useful if the input is a call center staff person who mainly follows the guidance of their automated systems. Human input requires recourse, rights and advocacy, i.e., the ability to address unfair, discriminatory or arbitrary decisions in employment, credit, housing and health care.”
Christian Huitema, 40-year veteran of the software and internet industries and former director of the Internet Architecture Board, wrote, “Past experience with technology deployment makes me dubious that all or even most developers will ‘do the right thing.’
“We see these two effects colliding today, in domains as different as camera auto-focus, speed-enforcement cameras and combat drones. To start with the most benign scenario, camera makers probably try to follow the operator’s desires when focusing on a part of an image, but a combination of time constraints and clumsy user-interaction design often proves frustrating. These same tensions will likely play out in future automated systems.
“Nobody believes that combat drones are benign, and most deployed systems keep a human in the loop before shooting missiles or exploding bombs. I hope that this will continue, but for less-critical systems I believe designers are likely to take shortcuts, like they do today with cameras. Let’s hope that humans can get involved after the fact and have a process to review the machines’ decisions.
“Autonomous driving systems are a great example of future impact on society. Human drivers often take rules with a grain of salt, do rolling stops or drive a little bit above the speed limit. But authorities will very likely push back if a manufacturer rolls out a system that does not strictly conform with the law. Tesla already had to correct its ‘rolling stop’ feature after such push-back. Such mechanisms will drive society towards ‘full obedience to the laws,’ which could soon become scary.”
Richard Bennett, founder of the High Tech Forum and co-creator of Ethernet and Wi-Fi standards, said, “While future AI systems will be designed to allow human override, most people will not take advantage of the feature. Just as we agree to privacy policies without reading them, we will happily defer to algorithmic judgments when it means saving a few minutes and avoiding the headaches of parsing complex policies. Policy advocates will continue to demand transparency and various other features, but the public will continue to value convenience over control.”
Laura Stockwell, executive VP for strategy at WundermanThompson, wrote, “When you look at the generation of people designing this technology—primarily Gen Z and Millennials—I do believe they have both an awareness of the implications of technology on society and the political leaning required to implement human-first design. Specifically, when you look at the development of Web3 and ownership of data, you see proof of this approach already. We are also witnessing a generation of workers raised with DEI initiatives, as well as the diversification of our population, which are both positive steps toward more inclusive design. I also believe that those in decision-making positions—primarily Gen X—will support these decisions. That said, I do believe legislation will be required to ensure large companies take user autonomy and agency into account.”
Daniel Castro, vice president and director of the Center for Data Innovation at the Information Technology and Innovation Foundation, asked, “When you wake up to an alarm, is this a machine in control of a human or a human in control of a machine? Some would argue the machine is waking up the human, so therefore the machine is in control. Others would say the human set the alarm, so therefore the human is in control. Both sides have a point.
“What is exciting about AI is that we can move the debate to a new level—humans will have the option to use technology to better understand their entire sleep patterns, and how factors like diet, exercise, health, and behavior impact their sleep and what options are available to them to change. Some of this will be automated, some of this will involve direct human choice and input. But the net impact will be greater opportunities for control of one’s life than before.
“Some people may be happy putting their lives completely on autopilot. Others will want to have full control. Most will probably be somewhere in the middle, allowing algorithms to make many decisions but scheduling regular check-ins to make sure things are going right—the same way that people may check their credit card bills, even if they have autopay.”
danah boyd, founder of the Data & Society Research Institute and principal researcher at Microsoft, complained, “Of course there will be technologies that are designed to usurp human decision-making. This has already taken place. Many of the autopilot features utilized in aviation were designed for precisely this, starting in the 1970s; recent ones have presumed the pilot to be too stupid to take back control from the system. (See cultural anthropologist Madeleine Elish’s work on this.)
“We interface every day with systems that prevent us from making a range of decisions. Hell, the forced-choice, yes-no format of this survey question constrained my agency. Many tools in workplace contexts are designed to presume that managers should have power over workers; they exist to constrain human agency. What matters in all of these systems is power. Who has power over whom? Who has the power to shape technologies to reinforce that structure of power?
“But this does not mean that ALL systems will be designed to override human agency in important decisions. Automated systems will not control my decision to love, for example. That doesn’t mean that systems of power can’t constrain that. The state has long asserted power over marriage, and families have long constrained love in key ways.
“Any fantasy that all decisions will be determined by automated technologies is science fiction. To be clear, all decisions are shaped (not determined!) by social dynamics, including law, social norms, economics, politics, etc.
“Technologies are not deterministic. Technologies make certain futures easier and certain futures harder, but they do not determine those futures. Humans—especially humans with power—can leverage technology to increase or decrease the likelihood of certain futures by mixing technology and authority. But that does not eliminate resistance, even if it makes resistance more costly.
“Frankly, focusing on which decisions are automated misses the point. The key issue is who has power within a society and how can they leverage these technologies to maximize the likelihood that the futures they seek will come to pass.
“The questions for all of us are: 1) How do we feel about the futures defined by the powerful? 2) How do we respond to those mechanisms of power? And 3) more abstractly, what structures of governance do we want to invest in to help shape that configuration?”
Steve Sawyer, professor of information studies at Syracuse University, wrote, “We are bumping through a great deal of learning about how to use data-driven AI. In 15 years, we’ll have much better guidance for what is possible. And the price point for leveraging AI will have dropped—the range of consumer and personal guidance where AI can help will grow.”
Gillian Hadfield, professor of law and chair of the University of Toronto’s Institute for Technology and Society, said, “By 2035 I expect we will have exceedingly powerful AI systems available to us including some forms of artificial general intelligence. You asked for a ‘yes-no’ answer although the accurate one is ‘either is possible and what we do today will determine which it is.’
“If we succeed in developing the innovative regulatory regimes we will need—including new ideas about constitutions (power-controlling agreements), ownership of technology and access to technology by the public and regulators—then I believe we can build aligned AI that is responsive to human choice and agency. It is just a machine, after all, and we can decide how to build it. At the same time, it is important to recognize that we already live with powerful ‘artificially intelligent’ systems—markets, governments—and humans do not have abstract, ideal agency and choice within those systems.
“We live as collectives, with collective decision-making and highly decentralized decisions that constrain any individual’s options and paths. I expect we’ll see more automated decision-making in domains in which markets now make decisions—what to build, where to allocate resources and goods and services. Automated decision-making, assuming it is built to be respected and trusted by humans because it produces justifiable outcomes, could be used extensively in resolving claims and disputes. The major challenge is ensuring widespread support for decision-making; this is what democratic and rule-of-law processes are intended to do now.
“If machines become decision-makers, they need to be built in ways that earn that kind of respect and support—from both winners and losers in the decision.
“The version of the future in which decisions are automated on the basis of choices made by tech owners and developers alone (i.e., implementing the idea that a public services decision should be made solely on the basis of a calculation of an expert’s assessment of costs and benefits) is one in which some humans are deciding for others and reducing the equal dignity and respect that is foundational to open and peaceful societies. That’s a bleak future, and one on which the current tensions between democratic and autocratic governance shed light. I believe democracy is ultimately more stable and that’s why I think powerful machines in 2035 will be built to integrate into and reflect democratic principles, not destroy them.”
Barry Chudakov, founder and principal of Sertain Research, wrote, “Before we address concerns about turning the keys to nearly everything over to technology, including life-and-death decisions, it is worthwhile to consider that humanity evolved only recently to its current state after hundreds of thousands of years of existence.
“The Open Education Sociology Dictionary defines agency as ‘the capacity of an individual to actively and independently choose and to affect change; free will or self-determination.’ For much of human history, individual agency was not the norm. David Wengrow and David Graeber asked in ‘The Dawn of Everything’: ‘How did we come to treat eminence and subservience not as temporary expedients … but as inescapable elements of the human condition?’ In a review of that book, Timothy Burke argues, ‘An association between small-scale foraging societies and egalitarian norms is robust. If we are to understand human beings as active agents in shaping societies, then applying that concept to societies at any scale that have structures and practices of domination, hierarchy and aggression should be as important as noting that these societies are neither typical nor inevitable.’
“Much earlier, writing in the 1600s, the English philosopher Thomas Hobbes claimed in ‘Leviathan’ that without government life would be ‘solitary, poor, nasty, brutish and short.’ Yet by 1789, no country had a democratic government. As of 2021 (232 years later) only half of all countries were democracies and just 89 of the 179 countries for which data is available held meaningful free-and-fair, multi-party elections.
“Agency has been more a privilege than a reality for a broad swath of human history; humans did not possess the more-recent sense of agency in which they feel the freedom to be able to break out of an assigned role, advance or change their career or profession, or raise a voice in various ways on any issue. Technology tools have reformulated agency. In fact, to say they ‘reformulate’ it is not strong enough.
“Within the context of limited liberal democracies, human agency took a quantum leap with the advent of computers and the smartphone. Today, via podcast, YouTube, Snap, TikTok or an appearance on CNN, a Greta Thunberg or Felix Finkbeiner can step out of the shadows to fight for climate action or any other issue. Today humans have a host of tools, from cell phones to laptops to Alexa and Siri to digital twins. These tools are still primitive compared to what’s coming. They don’t only provide opportunities. They can also usurp agency, as when a person driving looks down at a text ping and crashes the car, even ending their life.
“We cannot fully grasp the recency of the agency we have gained, nor the encroachments to that agency that new tools represent. In concert with understanding this, we come to the startling realization—the acknowledgement—that today we are not alone; our agency is now shared with our tools.
“In effect, shortly after about 20% of the world’s population (those living in liberal democracies) were afforded actual agency, things abruptly changed. Technology outpaced democracy. It also outpaced our awareness of the effects of technology gadgets and devices. For most of us who use these tools, agency today is impinged upon, compromised, usurped and ultimately blended with a host of tools. This is the new baseline.
“Seeing agency as shared compels response and responsibility. If people are to remain in charge of the most relevant parts of their own lives and their own choices, it is imperative to realize that as we more deeply embrace new technologies to augment, improve and streamline our lives, we are not outsourcing some decision-making and autonomy to digital tools; we are using tools—as we always have done—to extend our senses, to share our thinking and responses with these tools. We have done this with alphabets and cameras, computers and videos, cell phones and Siri.
“We are facing a huge paradigm shift with the advent of new technologies and AI and machine learning. We need to reconfigure our education and learning to teach and incorporate tool logic. Anticipating tool consequences must become as basic and foundational as reading or numeracy. We can no longer thoughtlessly pick up and use, or hand over to children, devices and technologies that have the ability (potential or actual) to alter how we think and behave.
“Agency has no meaning if we are unaware. There is no agency in blindness; agency entails seeing and understanding. From kindergarten to post-graduate studies, we need students and watchers who are monitoring surveillance capitalism, algorithm targeting, software tracking, user concentration and patterns and a host of other issues.
“Considering agency from this perspective requires a rethink and re-examination of our natures, our behaviors and the subliminal forces that are at work when we pick up technology gadgets and devices. As Daniel Kahneman wrote, ‘Conflict between an automatic reaction and an intention to control it is common in our lives.’ We have little choice but to become more conscious of our reactions and responses when we engage with smart machines, bots and systems powered mostly by autonomous and artificial intelligence (AI).
“Stephen Hawking said of AI and human agency, ‘The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble.’
“Our goals start with understanding how humans (mostly unconsciously) adopt the logic of tools and devices. Here is the truth about agency: no one escapes adhering to tool logic. We are not invincible when we engage with our gadgets and devices. The Marlboro Man notion of the cool, detached actor is defunct. We are now—and we will be much more so in the future—co-creators with our tools; we think with our tools; we act with our tools; we are monitored by them; we entrain with their logic. This is a re-statement of agency for those who claim the line from ‘Invictus,’ ‘I am the master of my fate: I am the captain of my soul.’ Actually, our technologies are at the helm with us.
“Rather than be offended by the intrusion of AI, or threatened by it, it would be far better to understand how AI affects agency in different dimensions, realize its potential and limitations and adjust our self-image accordingly. That adjustment starts with the understanding that we follow in the footsteps of our tools, and thus they will always affect our sense of self and our agency.
“What, then, does it mean to frame agency as a shared dynamic of the modern human condition? What does shared agency mean? It means that humans have willingly acceded a measure of their will and intent—their agency—to tools that augment (and often compete with) that will and intent; tools and software designed to involve human consciousness to such a considerable degree that humans can lose track of where they are, what they are doing, who they are with. Whether considered distraction or absorption, this is what agency looks like in 2022.
“Agency is not so simple. Like technology itself, agency is complicated. The short history of modern technology is the history of human agency encroached upon by tools that include ever greater ‘intelligence.’ The Kodak Brownie camera, a cardboard tool released in 1900, had no computing power built into it; today’s digital SLR has a raft of metadata that can ‘take over’ your camera, or simply inform you regarding many dimensions of light intensity, distance, aperture or shutter speed. In this instance, and in many others like it, humans choose the agency they wish to exert. That is true of computers, cell phones, video games or digital twins. We must now become more nuanced about that choice and shun simplistic encapsulations. As the website AI Myths notes:
‘No AI system, no matter how complex or ‘deep’ its architecture may be, pulls its predictions and outputs out of thin air. All AI systems are designed by humans, are programmed and calibrated to achieve certain results, and the outputs they provide are therefore the result of multiple human decisions.’
“But how many of us are aware of that programming or calibration? Unless we acknowledge how our agency is affected by a host of newer tools—and will be affected to an even greater extent by tools now in the works—our sense of agency is misguided. Our thinking about and assumptions of agency will be erroneous unless we acknowledge that we share agency with these new tools. We need to recognize that we share our consciousness with newer technology tools; we are no longer lone, independent agents.
“That’s not all. We are capable of creating new beings. Yuval Noah Harari says, ‘We are breaking out of the organic realm and starting to create the first inorganic beings in the history of life.’ These alt beings will further confound our sense of agency. Along with a question of our proprioception—where does our body start and end as we take ourselves into the metaverse or omniverse—inorganic beings will force us to ask, ‘what is real?’ and ‘what does real mean anymore?’ Will people opt for convenience, romanced by entertainment, and allow the gadgetry of technology to run roughshod over their intentions and eventually their rights?
“The answer to those questions becomes an issue of design informed by moral awareness. Technology must, at some level, be designed not to bypass human agency but to remind, encourage and reward it. Software and technology need to become self- and other-aware, to become consequence-aware.
“Technology seduction is a real issue; without engendering techno-nag, humans must confront AI with HI—human intelligence. Humans must step up to embrace and realize the potential and consequences of living in a world where AI can enhance and assist. Partnering with artificial intelligence should be an expansion of human intelligence, not an abdication of it.
“No, I do not expect that by 2035 smart machines, bots and systems powered by artificial intelligence will be designed to allow people to easily be in control over most tech-aided decision-making relevant to their lives. My rationale: In 13 years will we have completely revamped our educational systems to enable full understanding of the effects of technology and tech-aided decision-making? I doubt it.
“I believe the relationship between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence will look like an argument with one side shouting and the other side smiling smugly. The relationship is effectively a struggle between the determined fantasy of humans to resist (‘I’m independent and in charge and no, I won’t give up my agency!’) and the seductive power of technology (‘I’m fast, convenient, entertaining! Pay attention to me!’) designed to undermine that fantasy. However, this relationship is not doomed to be forever tense and unsatisfying.
“In a word, the relationship between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence will be fraught; it will still be unclear by 2035 that humans are now sharing their intelligence, their intentions, their motivations with these technological entities. Why? Because we have not built, nor do we have plans to build, awareness and teaching tools that retrain our populace or make people aware of the consequences of using newer technologies; and because in 13 years the social structures of educational systems—ground zero for any culture’s values—will not have been revamped, rethought, reimagined enough to enable humans to use these new entities wisely. Humans must come to understand and appreciate the complexity of the tools and technologies they have created, and then teach one another how to engage with and embrace that complexity.
“Some elaboration will clarify. Today most educational structures are centered around the older model of retention as the driver of matriculation. That is, students are tested on how much of the subject matter they are retaining, remembering. While it is important for students to understand the material, remembering it has much less utility or meaning in a world where every fact is available at anyone’s fingertips via a host of devices, Alexa, Siri, etc. Instead, what is now supremely important is to understand the dynamics and logic of smart machines, bots and systems powered mostly by autonomous and artificial intelligence. This is the new foundation of learning.
“Further, instead of starting in kindergarten along with language and math skills, techno education is still siloed in “computer science”—which means much of our educated populace, unless they go to enlightened grammar schools or are computer science majors, are not trained in the languages of the future (JavaScript, Python, Go, etc.); or in the logic and grammar of quantum computing, digital twins, blockchain, algorithms, Internet of Things, or federated learning to name a few. This puts many students at a disadvantage when it comes to today’s most critical skill: learning to think through, question, test, and probe the moral and ethical dimensions of these new tools. In effect, questions of agency are off the table.
“Given the segregated, hierarchical nature of today’s pedagogy, the issue of understanding the dynamics and logic of smart machines, bots and systems powered by autonomous and artificial intelligence is relegated—as mentioned—to the field of computer science. This means a university computer science major directly encounters these phenomena starting around the age of 18. That is utter nonsense. By that time, they have had their noses in devices at least since they were five, and many earlier as careless parents try to pacify toddlers with devices as early as two or three. Our devices as currently designed bypass agency, trick agency, deaden agency, lull agency—and these are the crude forerunners to the good stuff, to devices and technologies just around the corner in the metaverse and omniverse.
“Will we have entirely revamped educational structures K through university graduate school to enable necessary and powerful learning by 2035? Today we can’t even get large portions of our population to get vaccinated against a deadly virus; the population does not understand—or care about—disease vectors or communicable and mutating viruses. They think of them as conspiracies instead of DNA strings. What makes anyone think that in 13 years our educational system will revamp itself to address fundamental changes in the world and prepare students to coherently and ethically deal with smart tools, emerging technologies and their effects on agency?
“The nature of consumer-focused smart tools is to keep the logic and dynamics of the tools hidden, or at least less noticeable, and to engage the user with features that are quick, convenient, pacifying. These can be the enemies of agency. The inside revolt of people in technology development is an enlightened push-back against the usurping of agency:
- Steve Jobs wouldn’t let his kids use an iPad or iPhone.
- Jaron Lanier has written extensively about the danger of treating humans as gadgets.
- Former Google insider Tristan Harris has railed against social media algorithms and how they amplify nonsense, creating the new propaganda, which he calls ‘amplifiganda.’
- Stephen Hawking said that efforts to create thinking machines pose a threat to our very existence.
“Yet agency arrives in the smallest moments. Looking at your phone can be so addictive you don’t notice you’re ignoring a loved one; your attention can be compromised when driving, with deadly consequences. These are agency compromises. If you had full-awareness agency, you would notice that being alone together is not the purpose of togetherness; or that driving while texting is dangerous. But the device has usurped a measure of your agency.
“We would be wise to prepare for what shared consciousness means. Today that sharing is haphazard: We pick up a tool and, once we are using it as it is programmed, we see (and can be shocked by) how much agency the tool usurps. Instead, what we need is awareness of what technology-human sharing means and how much, if any, agency we are willing to share with a given tool. We will increasingly use AI to help us resolve wicked issues of climate change, pollution, hunger, soil erosion, vanishing shorelines, biodiversity, etc. In this sharing of agency, humans will change. How will we change?
“If we consider mirror worlds, the metaverse or digital twins, a fundamental design feature raises a host of philosophical questions. Namely, how much agency can we, and should we, design into machines, bots and systems powered by autonomous and artificial intelligence? This has the potential to effect a death by a thousand cuts. What constitutes agency? Is a ping, a reminder, an alert—agency? Probably not. But when those (albeit minimal) features are embedded in a gadget and turning them on or off is difficult to manage—does the gadget effectively have some agency? I would argue yes. If a robot is programmed to assist a failing human, say during respiratory arrest or cardiac failure, is this a measure of (or actual) agency? (We’re going down the slope.) What about when an alarm system captures the face of an intruder and is able to instantly match that face with a police database—and then calls 911 or police dispatch? (We may not be here today, but we’re not far away from that possibility.)
“So, agency is poised to become nuanced with a host of ethical issues. The threat of deepfakes is the threat of stolen agency; if AI in the hands of a deepfaker can impersonate you—to the degree that people think the deepfake is you—your agency has vanished. The cultural backdrop of techno agency reveals other ethical quandaries that we have not properly addressed: does a woman have full agency if the government can intervene in her choice not to keep a baby when she is financially unable to care for another child?
“We need a global convention on agency. We are heading towards a world where digital twinning and the metaverse are creating entities that will function both in concert with humans and apart from them. We have no scope or ground rules for these new entities.
“What key decisions will be mostly automated and what should require direct human input? We will need a new menu of actions and reactions which we collectively agree do not compromise agency if we turn them over to AI. We can then, cautiously, automate these actions. I am not prepared to list which key decisions should or should not be automated (beyond simple actions like answering a phone) until we have fully examined agency in an historical context; only then are we prepared to consider tool logic and how humans have previously entrained with that logic while not acknowledging our shared consciousness with our tools; and only then are we ready to examine how to consider which decisions could be mostly automated.
“With that caveat, assuming the capability of human override, it is possible to automate certain recognition protocols such as reading X-rays in a medical context, or detecting traffic patterns and lane violations. These appear to be agency-neutral. But, again, agency interference should be our primary consideration.
“Any decision that affects human agency should require direct human input. That sounds easy enough, but in practice this is complex and confusing. Any job interview, any taxation policy or review, any algorithm that guides content and engagement on a website, any tracking or recognition software, test proctoring, cybersecurity, airport and border screening, organ harvesting—there is almost no use of automation where human agency is uninvolved. Among other reasons, this is because human assessment of techno-oversight is so subjective and liable to be influenced by prior cognitive commitments and prejudice. Of course, it is possible to train software to avoid prejudice and inadvertent discrimination to some extent. However, the full understanding of that extent remains to be seen.
“The broadening and accelerating rollout of tech-abetted, often autonomous decision-making has the potential to change human society in untold ways. The most significant of these is human reliance on autonomous decision-making and how, from passivity or packaged convenience, the scope of the decision-making could creep into, and then overtake, previously human-moderated decisions.
“Humans are notorious for giving up their agency to their tools from habit and path-of-least-resistance lethargy. This is not an indictment of humans but an acknowledgement of the ease with which we follow the logic of our convenience-marketed products. Heart disease is an example rooted in packaged poison: so many foods in plastic and cans are harmful to human heart health, but the convenience of getting, say, packaged meats has fostered reliance on growth hormones (which has also fueled inhumane animal conditions), which drives up meat consumption, which in turn drives up rates of heart disease. The same could be said of diabetes, which is due to an overreliance on sugar sweeteners in every product imaginable. These examples are harbingers of what could happen to human society if proper oversight is not exercised regarding tech-abetted, often autonomous decision-making.”
Alejandro Pisanty, Internet Hall of Fame member, longtime leader in the Internet Society and professor of internet and information society at UNAM, National Autonomous University of Mexico, predicted, “There are two obstacles to human agency triumphing: enterprise and government. Control over the technologies will be more and more a combination of cooperation and struggle between those two forces, with citizens left very little chance to influence choices.
“One reason to be more pessimistic about this than in previous years is the way things have gone in the COVID-19 crisis. We had a glimpse of a whiff of a sliver of a chance to let science, reason and civil debate gain the upper hand in determining the course and we missed it. Atavistic beliefs and disbeliefs have gained too much space in societies’ decision-making. This will be transferred to the development and deployment of automated systems.
“The politicization of even basic decisions such as respiratory hygiene for preventing a respiratory disease or immunization against it negates the truly miraculous level of scientific and technological development that allowed humankind to have a vaccine against COVID-19 in less than a year after the disease entered human life with ravaging effects. The flaws in elementary logic exhibited by citizens who have completed 12 or more years of education in advanced economies are appalling and sadly have too much influence.
“That these systems are powered by what we now call ‘AI’ (in its different forms) is of secondary importance to the fact that the systems are automated and black-boxed. Technologists cite some good reasons for blackboxing, such as to prevent bad actors from hacking and weaponizing the systems; but this ‘security by obscurity’ is a rather naïve excuse for hiding the work behind the AI because simple reverse engineering and social engineering can be applied to weaponize these systems anyway.
“The rollout of automated, to some extent autonomous, decision-making systems is not happening as one central decision made in public. It is death by a thousand cuts in which smaller units develop or purchase and deploy such systems. The technical complexity is hidden, and even well-trained scientists and technologists struggle to keep up with the pace of developments and their integration. It is thus quite difficult for them to inform the general population clearly enough and in time.
“We will continue to struggle for reason and for agency, but the void created by these trends indicates that the future design of decision-making tech will most likely not be determined by the application of science and well-reasoned, well-intended debate. Instead, the future is to be determined by the agendas of commercial interests and governments, to our chagrin.”
Luis Germán Rodríguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, wrote, “Humans will not be in control of important decision-making in the year 2035 because the digitalization of society will continue to advance as it has been. Promoters of these controlling technologies encourage people to appreciate their convenience, and they say the loss of agency is necessary, unstoppable and imminent, a part of helping refine and enhance people’s experiences with this tech.
“The society based on the data economy will advance in the surveillance and control of people as citizens and as consumers. The creation, manipulation and propagation of consumption habits or ideological positions will be increasingly based on these types of resources. This is an issue of grave concern.
“Society might be entering a dangerous state of digital submission that will rapidly progress without the necessary counterweights that may recover and promote the role of the human over economic indicators. Individuals’ ownership of their digital identity must be guaranteed in order to stimulate the exercise of free will and to encourage a reasoned commitment to participate in the creation and achievement of objectives in the social contexts in which they live.
“The progress of thoughtless digital transformation led by commercial interests is stimulated by the actions or lack of action of governments, the private sector and a broad sector of academia and civil society. Many relevant voices have been raised with warnings about the digital emergency we are experiencing. International organizations such as UNESCO, the European Commission and others have highlighted the need to advance information and digital literacy strategies, together with alternative skills of personal communication, promotion of empathy, resilience and, above all, the raising of ethical awareness among those who create these tools and systems on the influence and impact of all aspects of the creation, introduction and use of digital tools.”
Russ White, infrastructure architect at Juniper Networks and longtime Internet Engineering Task Force (IETF) leader, said, “When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence?
“In part, this will depend on our continued belief in ‘progress’ as a solution to human problems. So long as we hold to a cultural belief that technology can solve most human problems, humans will increasingly take a ‘back seat’ to machines in decision-making. Whether or not we hold to this belief depends on the continued development of systems such as self-driving cars and the continued ‘taste’ for centralized decision-making—neither of which are certain at this point.
“If technology continues to be seen as creating as many problems as it solves, trust in technology and technological decision-making will be reduced, and users will begin to consider them more of a narrowly focused tool rather than a generalized solution to ‘all problems.’ Thus, much of the state of human agency by 2035 depends upon future cultural changes that are hard to predict.
“What key decisions will be mostly automated? The general tendency of technology leaders is to automate higher-order decisions, such as what to have for dinner, or even which political candidate to vote for, or whom you should have a relationship with. These kinds of questions tend to have the highest return on investment from a profit-driving perspective and tend to be the most interesting at a human level. Hence, Big Tech is going to continue working towards answering these kinds of questions. At the same time, most users seem to want these same systems to solve what might be seen as more rote or lower-order decisions. For instance, self-driving cars.
“There is some contradiction in this space. Many users seem to want to use technology—particularly social or immersive neurodigital media—to help them make sense out of a dizzying array of decisions by narrowing the field of possibilities. Most people don’t want a dating app to tell them who to date (specifically), but rather to narrow the field of possible partners to a manageable number. What isn’t immediately apparent to users is that technological systems can present what appears to be a field of possibilities in a way that ultimately controls their choice (using the concepts of choice architecture and ‘the nudge’). This contradiction is going to remain at the heart of user conflict and angst for the foreseeable future.
“While users clearly want to be an integral part of making decisions they consider ‘important,’ these are also the decisions which provide the highest return on investment for technology companies. It’s difficult to see how this apparent mismatch of desires is going to play out. Right now, it seems like the tech companies are ‘winning,’ largely because the average user doesn’t really understand the problem at hand, nor its importance. For instance, when users say, ‘I don’t care that someone is monitoring my every move because no-one could really be interested in me,’ they are completely misconstruing the problem at hand.
“Will users wake up at some point and take decision-making back into their own hands? This doesn’t seem to be imminent or inevitable.
“What key decisions should require direct human input? This is a bit of a complex question on two fronts. First, all machine-based decisions are actually driven by human input. The only questions are when that human input took place, and who produced the input. Second, all decisions should ultimately be made by humans—there should always be some form of human override on every machine-based decision. Whether or not humans will actually take advantage of these overrides is questionable, however.
“There are many more ‘trolley problems’ in the real world than are immediately apparent, and it’s very hard for machines to consider unintended consequences. For instance, we relied heavily on machines to make public health policies related to the COVID-19 pandemic. It’s going to take many decades, however, to work out the unintended consequences of these policies, although the more cynical among us might say the centralization of power resulting from these policies was intended, just hidden from public view by a class of people who strongly believe centralization is the solution to all human problems.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society?
- As humans make fewer decisions, they will lose the ability to make decisions.
- Humans will continue down the path towards becoming, in Lewis’ words, ‘domesticated,’ which essentially means some small group of humans will increasingly control the much larger ‘mass of humanity.’
“The alternative is for the technocratic culture to be exposed as incapable of solving human problems early enough for a mass of users to begin treating ML and AI systems as ‘tools’ rather than ‘prophets.’ Which direction we go in is indeterminate at this time.”
J. Nathan Matias, leader of the Citizens and Technology Lab at Cornell University, said, “Because the world will become no less complex in 2035, society will continue to delegate important decision-making to complex systems involving bureaucracy, digital record-keeping and automated decision rules. In 2035 as in 2022, society will not be asking whether humans are in control, but which humans are in control, whether those humans understand the consequences of the systems they operate, whether they do anything to mitigate the harms of their systems, and whether they will be held accountable for failures.
“In 2035, organizations will continue to use information asymmetries associated with technology-supported decision-making to gain outsized power and influence in society and will seek to keep the details of those systems secret from democratic publics, as has been the case throughout much of the last century and a half.
“Automated systems will be more prevalent in areas of labor involving human contact with institutions, supported by the invisible labor of people who are paid even less than today’s call-center workers. In some cases, such as the filtering of child sexual abuse material and threats of violence, the use of these imperfect systems will continue to reduce the secondary trauma of monitoring horrible things. In other cases, systemic errors from automated decision-making systems (both intentional and unintentional) will continue to reinforce deep inequalities in the U.S. and beyond, contributing to disparities in health, the economy, and access to justice.
“By 2035, automated decision-making systems will continue to be pervasive, powerful, and impossible to monitor, let alone govern. In response to this situation, U.S. states and the federal government will develop regulations that mandate testing automated systems for bias, discrimination, and other harms. By 2035, efforts at ensuring the safety and reliability of algorithmic decision-making systems will shift from a governance void into a co-evolutionary race among regulators, makers of the most widely known systems, and civil society.
“In 2035, scientists will still be debating whether and how decision-making by automated systems can reduce bias and discrimination on average, compared to human institutions—since the answer depends on context.”
David J. Krieger, director of the Institute for Communication and Leadership in Lucerne, Switzerland, predicted, “Individual agency is already a myth, and this will become increasingly obvious with time. The problem here is not technological, but ideological. Humanism attempts to preserve the myth of individual agency and enshrine it in law. Good design of socio-technical networks will need to be explicit about its post-humanist presuppositions in order to bring the issue into public debate. Humans will act in partnership, that is, distributed agency, with technologies of all kinds. Already this is so and will be more so in the future.
“In a data-driven society all decisions on all levels and in all areas, business, healthcare, education, etc., will need to be evidence-based and not based on position in a hierarchy, intuition, gut feeling, experience, etc. The degree of automation is secondary to the principle of evidence-based decision-making. When sufficient evidence is available the level of automation will increase. Also, constraints of time and space will condition the level of automation.
“No decisions will be left to individual agency, since there is no such thing. Even decisions about who to marry, what to study, what job to take, what therapy is appropriate, etc., will be assisted by automated data evaluation. Society will no longer be ‘human’ but instead ‘socio-technical.’ Already there is no such thing as human society, for without technology there would be no society as we know it. The problem is that our current political and social ideologies do not acknowledge this fact and continue to portray a mythical version of the social and the human.”
Peter Suber, director of the Harvard University Open Access Project, responded, “The main reason to think that AI tools will help humans make important decisions is that there will be big money in it. Companies will want to sell tools providing this service and people will want to buy them. The deeper question is how far these tools will go toward actually helping us make better decisions or how far they will help us pursue our own interests. There’s good reason to think the tools will be distorted in at least two ways.
“First, even with all good will, developers will not be able to take every relevant variable into account. The tools will have to oversimplify the situations in which we make decisions, even if they are able to take more variables into account than unaided humans can.
“Second, not all tools and tool providers will have this sort of good will. Their purpose will be to steer human decisions in certain directions or to foster the political and economic interests the developers want to foster. This may be deceptive and cynical, as with Cambridge Analytica. Or it may be paternalistic. A tool may intend to foster the user’s interests, but in practice this will mean fostering what the developer believes to be the user’s interests or what the tool crudely constructs to be the user’s interests.”
Lia DiBello, principal scientist at Applied Cognitive Sciences Labs Inc., commented, “I actually believe this could go either way, but so far, technology has shown itself to free human beings to focus on higher-order decision-making by taking over more practical or mundane cognitive processing.
“Human beings have shown themselves to appropriate technology as part of their own thinking process—as they do with any tool. We see this with many smart devices, with GPS systems and with automation in general in business and medicine and in other settings across society. For example, people with implantable medical devices can get data on how lifestyle changes are affecting their cardiac performance and do not have to wait for a doctor’s appointment to know how their day-to-day choices are affecting their health.
“What will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? I expect we will continue to see growth in the implementation of AI and bots to collect and analyze data that human beings can use to make decisions and gain the insights they need to make appropriate choices.
“Automation will not make ‘decisions’ so much as it will make recommendations based on data. Current examples are the driving routes derived from GPS and traffic systems, shopping suggestions based on data and trends and food recommendations based on health concerns. It provides near-instant analysis of large amounts of data.
“As deep learning systems are further developed, it is hard to say where things will go. The relationship between AI and human beings has to be managed—how we use the AI. Skilled surgeons today use programmable robots that—once programmed—work pretty autonomously, but these surgeries still require the presence of a skilled surgeon. The AI augments the human.
“It’s hard to predict how the further development of autonomous decision-making will change human society. It is most important for humans to find ways to adapt in order to integrate it within our own decision-making processes. For some people, it will free them to innovate and invent; for others, it could overwhelm and deskill them. My colleagues, cognitive scientists Gary Klein and Robert Hoffman, have a notion of A.I.-Q. Their research investigates how people use and come to understand AI as part of their individual decision-making processes.”
Thomas Levenson, professor and director of the graduate program in science writing at the Massachusetts Institute of Technology, commented, “The diminishment of human agency is already a feature—not a bug—of U.S. and world society. Much of our social infrastructure and the harsh realities of power already diminish it. To some extent, the use of AI (or AI-labeled automated systems) is just a way to mask existing threats to individual autonomy/agency. A first-order answer to this difficult question is that AI-powered systems will be deeply embedded in decision-making because doing so serves several clear interests of those developing and deploying those systems. They’re cheaper at the marginal action than a human-staffed system (much of the time). Embedded assumptions disappear behind the technology; such assumptions most often reflect and serve the status quo ante, in which the deployer of bots/systems is a successful player.”
Kathryn Bouskill, anthropologist and AI expert at the Rand Corporation, said, “Looking ahead, humanity will be challenged to redefine and reimagine itself. It must consider the unprecedented social and ethical responsibilities that the new speed of change is ushering into our lives—including crucial issues being raised by the spread of AI.
“The number of places in which individuals have agency and can take control in this era of swift technological speed is dwindling. Hitting the brakes is not an option. When life happens quickly it can feel difficult to process change, create a purpose, hold our social ties together and feel a sense of place. This kind of uncertainty can induce anxiety, and anxiety can lead to isolationism, protectionism, fear, gridlock and lack of direction.
“Some very basic functions of everyday life are now completely elusive to us. People have little idea how we build AI systems, control them and fix them. Many are grasping for control, but there is opaqueness in terms of how these technologies have been created and deployed by creators who oversell their promises. Right now, there is a huge chasm between the public and AI developers. We need to ignite real public conversations to help people fully understand the stakes of these developments.
“Is AI going to completely displace human autonomy? We may forget that humanity still has the opportunity to choose what is being developed. That can still be our decision to make. Most people are just passively watching the technology continue to rapidly roll out without being actively engaged as much as they should be with it. For now, I’m leaning toward the optimistic view that human autonomy will prevail. However, this requires the public implementation of educational components, so the black-box aspects of AI are explored and understood by more people. And the public and Big Tech must learn how to build equity into AI and know what levers to pull to assure that it works for the good of humanity. Smart regulation and robust data protection are also critically important.
“The greatest resource in the human tool kit is our ability to cooperate and creatively adapt to or change our surroundings. It will take a concerted effort across multiple stakeholders—citizens, consumers, employers, voters, tech developers, and policymakers—to collectively devote attention to vetting and safeguarding technologies of the future to make the world safer.”
Mark Crowley, an assistant professor of computer engineering at the University of Waterloo whose research seeks dependable and transparent ways to augment human decision-making, responded, “I see two completely separate issues here: 1) Will scientific advances in AI make it possible to give decision-making assistance in most human decision-making domains by 2035? 2) Will designers of available and popular AI systems such as bots, tools, search engines, cellphones, productivity software, etc., design their tools in such a way as to give people meaningful control over decision-making?
“My take on each: 1) Yes, it is entirely possible that most fields of human endeavour could have meaningful AI-powered decision-making assistance by 2035, and that it would be possible to allow meaningful human input, oversight, veto and control. 2) No, I am not confident at all that those who create these tools, or those who pay for them to be created, will do so in a way that supports meaningful input.
“Here’s a related issue and another question that needs an answer: Will the larger users, such as industry and government, create or request creation, of tools that enable their constituencies to have meaningful control?
“The public already has far too much confidence today in accepting the advice coming from AI systems. They seem to have a false sense that if an AI/ML-powered system has generated this answer it must be correct, or at least very reasonable. This is actually very far from the truth. AI/ML can be arbitrarily wrong about predictions and advice in ways human beings have a difficult time accepting. We assume systems have some baseline of common sense, whereas this is not a given in any software system. Many AI/ML systems do provide very good predictions and advice, but it entirely depends on how hard the engineers/scientists building them have worked to ensure this and to test the boundaries.
“The current trend of ‘end-to-end learning’ in ML is very exciting and impressive technically, but it also magnifies this risk, since the entire point is that no human prior knowledge is needed. This leads to huge risks of blind spots in the system that are difficult to find.”
Laura Forlano, director of the Critical Futures Lab, Illinois Institute of Technology, an expert on the social consequences of technology design, said, “It is highly likely, with current design and engineering practices, that decisions about what is too much or what is too little automation will never be clearly understood until the autonomous systems are already deployed in the world.
“In addition, due to current design and engineering practices, it is very likely that the people who must use these systems the most—as part of their jobs—especially if they are in customer-facing and/or support roles with less power—will never be consulted in advance about how best to design these systems to make them most useful in each setting. The ability for primary users to inform the design processes of these systems in order to make them effective and responsible to users is extremely limited.
“Rather than understanding this as a binary relationship between humans vs. machines, systems that allow for greater flexibility, modularity and interoperability will be key to supporting human agency. Furthermore, anticipating that these systems will fail as a default and not as an aberration will allow for human agency to play a greater role when things do go wrong.”
Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, observed, “The future will clearly cut both ways. On the one hand, better information technologies and better data have improved and will continue to improve human decision-making. On the other, black box systems and non-transparent AI can whittle away at human agency, doing so without us even knowing it is happening. The real challenge will lie in knowing which dynamic is playing out strongest in any given situation and what the longer-term impact might be.
“We need AIs that are less ‘Minority Report’ and more of a sage uncle, less decision-makers than they are reminders of what might be and what might go wrong. I do believe that—yes—humans will still be making the big decisions, and if things play out well we may have algorithms that help us do that in more considered, ever-wise, ways.
“When it comes to the obvious issues—making decisions about immediate life or death, peace or war, and the most impactful laws—I think we humans will always insist on having our hand on the throttle or finger on the button. The trouble will more likely start brewing in smaller things, decisions we may think are best handled by algorithmic logics, and where we may lack an understanding of long-term consequences.
“Take research funding and innovation projects, for instance. These may seem like things that are best handled ‘objectively,’ with data, and could be an area where we are fairly open to leaving some of our agency to, e.g., an AI system. At the same time, these are often things where the smart move is to fund longshots, things where you have to rely on intuition and imagination more than historical data.
“Or consider things such as education. We have already made things such as school districts and university admittance partially automated, and there seems to be a desire to let assumedly rational systems make decisions about who goes where and who gets to study what. Whilst there might be benefits to this, e.g., lessening bias, these also represent decisions that can affect people for decades and have impacts generations into the future.
“The key issue, then, might be less one of any sort of change in what is perceived as agency, and more one about the short term versus the longer term. We might lose some agency when we let smart machines pick the soundtrack to our everyday life or do some of our shopping for us without asking too many questions beforehand. Sure, we might get some dud songs and some tofu when we wanted steak, but this will not have a long-term impact on us and we can try to teach the algorithm better.
“Allowing an algorithm to make choices where it might be impossible to tell what the long-term effects will be? This is an altogether different matter. We’ve already seen how filter bubbles can create strange effects—polarized politics and conspiracy theories. As smart machines get more efficient there is the risk that we allow them to make decisions that may impact human life for far longer than we realize, and this needs to be addressed.
“We need to pay heed to injections of bad data into decision-making and the weaponization of filtering. That said, such effects are already seen quite often in the here and now. Our perspective needs to shift from the now to address what may come in years and decades. We don’t need to fear the machines, but we need to become better at understanding the long-term implications of decisions. Here, in a fun twist, algorithms might be our best friends, if smartly deployed. Instead of using smart machines to make decisions for us, we need to utilize them to scan the likely future impact of decisions.”
Calton Pu, professor and co-director of the center for experimental research in computer systems, Georgia Tech, wrote, “There will not be one general trend for all AI systems. Given the variety of smart machines, bots and systems that will incorporate some kind of AI, a complex question has been simplified to provide clarity of a binary answer. The question on human decision-making has two implicit dimensions: 1) technical vs. managerial, and 2) producer vs. consumer.
“On the first dimension, there are some technical constraints, but the manufacturers are expected to develop the technology to provide a wide range of capabilities that can support many degrees of human control and decision-making in AI-assisted products and systems. However, of the wide range of technical capabilities, the management will choose what they think will be appropriate for them either politically (governments) or for profit (companies).
“On the second dimension, the producers will be guided by managerial decisions (since the technical capabilities will be many). In contrast, the consumers will have their own wide range of preferences, from full human control to full automation (machines making many decisions). Producers may (or may not) choose to satisfy consumer preferences for a variety of political and monetary reasons.
“An analysis from these two dimensions would suggest that the relationship between humans and machines will not be dominated by technology forecasts. Instead, the selection of available technology in products and systems will be dictated by political/monetary concerns. Therefore, technological advances will provide the capabilities to enable a wide range of answers to the question on which key decisions that will be automated or requiring human input. However, the management will determine which way to go for each decision for political or monetary reasons that can change quickly in the space-time continuum.
“It is common for management to hide political/monetary reasons behind technological facilities, e.g., blaming an algorithm for ‘automated’ decision-making, when they specified the (questionable) policies implemented by the algorithm in the first place. In many such cases, the so-called autonomous decision-making is simply a convenient mislabel, when the systems and products have been carefully designed to follow specific political/monetary policies.”
Giacomo Mazzone, global project director for the United Nations Office for Disaster Risk Reduction, responded, “The future will be unevenly distributed. Positive progress will be made only in those countries where a proper system of rules based on the respect of human rights is put in place. A large part of the world’s population living outside of democracies, will be under the control of automated systems that serve only the priorities of the regional regime. Examples include the massive use of facial recognition in Turkey and the ‘stability maintenance’ mechanisms in China. Also, in the countries where profit-based priorities are allowed to overrule human rights such as privacy or respect of minorities the automated systems will be under the control of corporations. I believe the U.S. will probably be among those in this second group.
“In the countries with human rights-compliant regulation, greater agency over human-interest decision-making may come in the realms of life-and-death situations, health- and justice-related issues, some general-interest and policymaking situations, and in arbitration between different societal interests (e.g., individuals vs. platforms). In countries that respect human rights, automated decisions will generally be turned to in cases in which the speed, safety and/or complexity of the process requires them. Examples include the operation of unmanned vehicles, production and distribution of goods based on automatic data collection, and similar.
“Perhaps one of the most likely broad societal changes in a future with even more digitally enhanced decision-making is that—similarly to what happened with the introduction of pocket calculators, navigation systems and other innovations that in the past brought a loss of mental calculation capacity, of orientation in space, of the ability to repair simple objects—most of humanity will find their skills to be significantly degraded.”
Rasha Abdulla, professor of journalism and communication, The American University in Cairo, commented, “I certainly hope that machines will be designed to allow humans to be in control. However, it’s one thing to talk about coffee makers and self-driving cars and another to talk about smart surveillance equipment. Another important aspect is how such technology will be used across different regions of the world with more authoritarian rule. While I think consumer-oriented products will be better designed in the future to make life easier, mostly with the consumer in control, I worry about what products with broader use by governments or systems will be like.”
Steven Sloman, a cognitive scientist at Brown University whose research focus is how people think, reason, make decisions and form attitudes and beliefs, commented, “The main changes I expect in human society are the standardization of routine decisions as AI takes them over and the uses of AI advice that make even unique decisions much more informed.
“Handing routine decisions over to AI will make many life decisions that are made repeatedly more reliable, easy to justify and more consistent across people. This approach could be applied everywhere in society, e.g., automating rulings in sports contests and other aspects of life.
“Should we interpret this type of radiology image as a tumor? Does a mechanic need to look at my car? Is it time for a new roof? Will student essays be graded automatically? My guess would be a bifurcation in class within society: public schools with large demands will rely on automatic grading; private schools that demand a lot of tuition will not. Efficiency will trade off with cost, with the result that richer students will learn to express themselves with more freedom, less constrained by the less-flexible, less-insightful criteria of AI.
“Many difficult, unique decisions, though, involve large amounts of uncertainty and disagreement about objectives. Such decisions will never be handed over to AI. Doing so would reduce the justifiability of the decisions and put the responsible individuals in jeopardy. They will certainly be aided by AI, but I don’t see handing decision-making over to them entirely. Should my country go to war? Who should I vote for? Even Is it time to buy a new dishwasher? Or What TV show should I watch tonight? All of these questions involve either enormous uncertainty about outcomes or large disagreements about values, and people will always want to make the final decision.”
Daniel S. Schiff, lead for Responsible AI at JP Morgan Chase and co-director of the Governance and Responsible AI Lab at Purdue University, commented, “Algorithms already drive huge portions of our society and the lives of individuals. This trend will only advance in the coming years. Facilitating meaningful human control in the face of these trends will remain a daunting task. By 2035 AI systems (including consumer-facing systems and government-run, automated decision systems) will likely be designed and regulated so as to enhance public transparency and control of decision-making. However, any changes to the design and governance of AI systems will fall short of functionally allowing most people—especially the most vulnerable groups—to exercise deeply meaningful control in their own lives.
“Optimistically speaking, a new wave of formal regulation of AI systems and algorithms promises to enhance public oversight and democratic governance of AI generally. For example, the European Union’s developing AI Act will have been in place and iterated over the previous decade. Similarly, regulation like the Digital Services Act, and even older policies like the General Data Protection Regulation will have had time to mature with respect to efficiency, enforcement, and best practices in compliance.
“While formal regulation in the United States is less likely to evolve on the scale of the EU AI Act (e.g., it is unclear when or if something like the Algorithmic Accountability Act will be passed), we should still expect to see the development of local and state regulation (such as New York’s restriction on AI-based hiring or Illinois’ Personal Information Protection Act), even if leading to a patchwork of laws. Further, there are good reasons to expect laws like the EU AI Act to diffuse internationally via the Brussels effect; evidence suggests that countries like the UK, Brazil, and even China are attentive to the first and most-restrictive regulators with respect to AI. Thus, we should expect to see a more expansive paradigm of algorithmic governance in place in much of the world over the next decade.
“Complementing this is an array of informal or soft governance mechanisms, ranging from voluntary industry standards to private sector firm ethics principles and frameworks, to, critically, changing norms with respect to responsible design of AI systems realized through higher education, professional associations, machine learning conferences, and so on.
“For example, a sizable number of major firms that produce AI systems now refer to various AI ethics principles and practices and employ staff who focus specifically on responsible AI, and there is now a budding industry of AI ethics auditing startups helping companies to manage their systems and governance approaches. Other notable examples of informal mechanisms include voluntary standards like NIST’s AI Risk Management Framework as well as IEEE’s 7000 standard series focused on ethics of autonomous systems.
“While it is unclear which frameworks will de facto become industry practice, there is an ambitious and maturing ecosystem aimed at mitigating AI’s risks and increasing convergence about key problems and possible solutions.
“The upshot of having more-established formal and informal regulatory mechanisms over the next decade is that there will be additional requirements and restrictions placed on AI developers, complemented by changing norms. The question then is which particular practices will diffuse and become commonplace as a result. Among the key changes we might expect are:
1) Increased evaluations regarding algorithmic fairness, increased documentation and transparency about AI systems and some ability for the public to access this information and exert control over their personal data.
2) More attempts by governments and companies employing AI systems to share at least some information on their websites or in a centralized government portal describing aspects of these systems including how they were trained, what data were used, their risks and limits and so on (e.g., via model cards or datasheets). These reports and documentation will result, in some cases, from audits (or conformity assessments) by third-party evaluators and in other cases from internal self-study, with a varying range of quality and rigor. For example, cities like Amsterdam and Helsinki are even now capturing information about which AI systems are used in government in systematic databases, and present information including the role of human oversight in this process. A similar model is likely to take place in the European Union, certainly with respect to so-called high-risk systems. In one sense then, we will likely have an ecosystem that provides more public access to and knowledge about algorithmic decision-making.
3) Further, efforts to educate the public, emphasized in many national AI policy strategies, such as Finland’s Elements of AI effort, will be aimed at building public literacy about AI and its implications. In theory, individuals in the public will be able to look up information about which AI systems are used and how they work. In the case of an AI-based harm or incident, they may be able to pursue redress from companies or government. This may be facilitated by civil society watchdog organizations and lawyers who can help bring the most egregious cases to the attention of courts and other government decision-makers.
4) Further, we might expect researchers and academia or civil society to have increased access to information about AI systems; for example, the Digital Services Act will require that large technology platforms share information about their algorithms with researchers.
“However, there are reasons to be concerned that even these changes in responsible design and monitoring of AI systems will not support much in the way of meaningful control by individual members of the general public. That is, while it may be helpful to have general transparency and oversight by civil society or academia, the impact is unlikely to filter down to the level of individuals.
“The evolution of compliance and user adaptation to privacy regulation exemplifies this problem. Post-GDPR, consumers typically experience increased privacy rights as merely more pop-up boxes to click away. Individuals often lack the time, understanding or incentive to read through information about cookies or to go out of their way to learn about privacy policies and rights. They will quickly click ‘OK’ and not take the time to seek greater privacy or knowledge of ownership of data. Best intentions are not always enough.
“In a similar fashion, government databases or corporate websites with details about AI systems and algorithms are likely insufficient to facilitate meaningful public control of tech-aided decision-making. The harms of automated decision-making can be diffuse, obfuscated by subtle interdependencies and long-term feedback effects.
“For example, the ways in which social media algorithms affect individuals’ daily lives, social organization and emotional well-being are non-obvious and take time and research to understand. In contrast, the benefits of using a search algorithm or content recommendation algorithm are immediate and these automated systems are now deeply embedded in how people engage in school, work and leisure.
“As a function of individual psychology, limited time and resources and the asymmetry in understanding benefits versus harms, many individuals in society may simply stick with the default options. While theoretically, they may be able to exercise more control—for example by opting out of algorithms, or requesting their data be forgotten—many individuals will see no reason to exert such ownership.
“This problem is exacerbated for the individuals who are most vulnerable; the same individuals who are most affected by high-risk automated decision systems (e.g., detainees, children in low-income communities, individuals without digital literacy) are the very same people who lack the resources and support to exert control.
“The irony is that the subsets of society most likely to attempt to exert ownership over automated decision systems are those who are less in need. This will leave it to public watchdogs, civil society organizations, researchers and activist politicians to identify and raise specific issues related to automated decision-making. That may involve banning certain use cases or regulating them as issues crystallize. In one sense then, public concerns will be reflected in how automated decision-making systems are designed and implemented, but channeled through elite representatives of the public, who are not always well-placed to understand the public’s preferences.
“One key solution here, again learning from the evolution of privacy policy, is to require more human-centered defaults. Build automated decision systems that are designed to have highly transparent and accessible interfaces, with ‘OK’ button-pushing leading to default choices that protect public rights and well-being and require an individual’s proactive consent for anything other than that. In this setting, members of the public will be more likely to understand and exercise ownership.
“This will require a collective effort of government and industry, plus design and regulation that is highly sensitive to individual psychology and information-seeking behavior. Unless these efforts can keep pace with innovation pressures, it seems likely that automated decision systems will continue to be put into place as they have been and commercialized to build revenue and increase government efficiency. It may be some time before fully sound and responsible design concepts are established.”
Robert D. Atkinson, founder and president of the Information Technology and Innovation Foundation, said, “What key decisions will be mostly automated? What key decisions should require direct human input? For the most part, it is too early to determine this. If, for example, a system can be shown to be more accurate than a human decision, then likely that key decision should be made by a machine. Likewise, if a decision or action can be performed by a machine with the same or better level of accuracy but can be done more cheaply or efficiently, then all else equal, society is better off having the machine make the decision.
“I worry that we overestimate the ability of machines, as Steely Dan wrote, ‘to make big decisions, programming compassion and vision.’ AI is not magic. It definitely is not sentient, even if it can be programmed to make it appear that it is. And these systems will remain in human control.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? The advancement of autonomous decision-making systems will boost productivity, the most important factor in human and societal well-being. Imagine autonomous trucks, postal delivery systems, food-service robots and much more. This technology system holds the promise of reversing the 15-year unprecedented productivity slump that has dramatically slowed growth in developed economies.
“In terms of risks to human autonomy, we should not be very concerned. Technology always has been a tool that humans controlled and there is no reason to believe otherwise going forward. To the extent autonomous decision-making systems make important decisions, they will 1) on average be more accurate and timely decisions than humans make (or else they wouldn’t be used); 2) in most cases they will be able to be overridden by humans. If a company or other organization implements such a system and it does not improve people’s lives, the company will not be able to sell the system because people will not use it.”
Charles Ess, professor emeritus of digital ethics, University of Oslo, wrote, “These past few years of an ‘AI spring’ have distinguished themselves from earlier ones, at least among some authors and projects, as they are accompanied by considerably greater modesty and recognition of the limits perhaps intrinsic to what AI and machine learning (ML) systems are capable of. Notably important resources along these lines include the work of Katharina Zweig, e.g., ‘Awkward Intelligence: Where AI Goes Wrong, Why It Matters, and What We Can Do About It’ (MIT Press, 2022).
“On the other hand, I still find that in most of the emerging literatures in these domains—both from the corporations that largely drive the development of AI/ML systems as well as from research accounts in the more technical literature—there remains a fundamental failure to understand the complexities of human practices of decision-making and most especially judgment. Some AI ethicists point out that a particular form of human judgment—what Aristotle called phronesis and what Kant called reflective judgment—is what comes into play when we are faced with the difficult choices in grey areas. In particular, fine-grained contexts and especially novel ethical dilemmas usually implicate several possible ethical norms, values, principles, etc.
“In contrast with determinative judgments that proceed from a given norm in a deductive, if not algorithmic fashion to conclude with a largely unambiguous and more or less final ethical response—one of the first tasks of reflective judgment is to struggle to discern just which ethical norms, principles and values are relevant to a specific case, and, in the event of (all but inevitable) conflict, which principles, norms, etc., override the others. As many of us argue, these reflective processes are not computationally tractable for a series of reasons, starting with the fact that they draw from tacit, embodied forms of knowledge and experience over our lifetimes. As well, these processes are deeply relational—i.e., they draw on our collective experience, as exemplified in our usually having to talk these matters through with others in order to arrive at a judgment.
“There is, then, a fundamental difference between machine-based ‘decision-making’ and human-based ethical reflection—but this difference seems largely unknown in the larger communities involved here. In particular, there is much in engineering cultures that sets up ‘decision-making’ as more or less deductive problem-solving: but this approach simply cannot take on board the difficulties, ambiguities and uncertainties intrinsic to human reflective judgment.
“Failing to recognize these fundamental differences then results in the engineering paradigm overriding the ethical one. Catastrophe is sure to result—as it already has, e.g., there is discussion that the financial crises of 2008 in part rested on leaving ‘judgments’ as to credit-worthiness increasingly to machine-based decision-making, which proved to be fatally flawed in too many cases.
“As many of us have argued (e.g., Mireille Hildebrandt, ‘Smart Technologies and the End(s) of Law,’ 2015, as an early example, along with more recent figures such as Virginia Dignum, who directs the large Wallenberg Foundation’s projects on humanities and social science approaches to AI, as well as Zweig, among others), leaving ethical judgments in particular and similar sorts of judgments in the domains of law, credit-worthiness, parole considerations (i.e., the (in)famous COMPAS system), ‘pre-emptive policing,’ and so on to AI/ML processes is to abdicate a central human capacity and responsibility—and this to systems that, no matter how further refined they may be with additional ML training, etc., are in principle incapable of such careful reflection.
“We are very often too prone to offloading our tasks and responsibilities to our machineries—especially when the tasks are difficult, as reflective judgment always is. And in this case, failure to recognize in the first place just what it is that we’re offloading to the machines makes the temptations and drives to do so doubly pernicious and likely.
“Like the characters of ‘Brave New World’ who have forgotten what ‘freedom’ means, and so don’t know what they have lost, failing to take on board the deep differences between reflective forms of judgment and AI/ML decision-making techniques—i.e., forgetting about the former, if we were ever clear about it in the first place—likewise means we risk losing the practice and capacity of reflective judgment as we increasingly rely on AI/ML techniques, and not knowing just what it is we have lost in the bargain.
“What key decisions should require direct human input? This category would include any decision that directly affects the freedom and quality of life of a human being. I don’t mind AI/ML driving the advertising and recommendations that come across my channels—some of which is indeed useful and interesting. I am deeply concerned that offloading ethical and legal judgments to AI/ML threatens to rob us—perhaps permanently—of capacities that are central to human freedom and modern law as well as modern democracy. The resulting dystopia may not be so harsh as we see unfolding in the Chinese Social Credit Systems.
“‘Westerners’ may be more or less happy consumers, content with machine-driven options defining their work, lives, relationships, etc. But from the standpoint of Western traditions, starting with Antigone and then Socrates through the democratic and emancipatory movements that mark especially the 18th-20th centuries and that emphasize the central importance of human freedom over against superior force and authority—including the force and authority of unquestioned assumptions and rules that must always be obeyed, or else—such lives, however pleasant, would fail to realize our best and fullest possibilities as human beings, starting with self-determination.”
Christopher W. Savage, a leading expert in legal and regulatory issues based in Washington, D.C., wrote, “Human decision-making is a complicated and subtle process that we do not fully understand. But one thing we do know is that it is not entirely (or even primarily) ‘rational’ or ‘logical.’ Essentially the entire field of behavioral economics is devoted to identifying and explaining ways that people’s decision processes systematically differ from what a ‘rational’ person would do. And, of course, more-pragmatic fields like marketing, advertising, sales and user-interface design are focused on exploiting those divergences from rationality.
“Among the things we know people are bad at, if ‘rational’ decision-making is the goal, are dealing with information that is statistical rather than anecdotal/narrative in nature, and dealing with risk, particularly complicated risk. In theory, a well-deployed AI/ML system could help people make rational decisions in their own best interest under conditions of risk and involving stochastic processes. But I suspect that in practice most AI/ML systems made available to most people will be developed and deployed by entities that have no interest in encouraging such decisions. They will instead be made available by entities that have an interest in steering people’s decisions in particular ways.
“A good current example is the ‘engagement’ algorithms of large social media platforms. These algorithms use AI/ML to decide what information to present to us while on the platforms. While framed in terms of effectuating our own individual choices (“we want to present you with interesting things”), the function of these algorithms is to keep us engaged on the platform, for basically one of two ends: a) to sell us stuff via advertising; or b) to influence our political/social views via promoted content. It is far from clear (to put it mildly) that being maximally ‘engaged’ with social media is what we would choose for ourselves in a calm, rational way (any more than being ‘engaged’ with cigarettes is what anyone not addicted to nicotine would choose).
“These concerns are independent of those arising from the existence of potentially severe but not readily detectable biases in the output of AI/ML algorithms that are or will be used by public or private entities to assist in making decisions that affect us as citizens and consumers, such as eligibility for and terms of loans, hiring and firing decisions, criminal sentencing or parole decisions, etc. My view is that avoiding bias in such settings requires conscious and conscientious effort by humans acting in good faith. I am concerned that the use of AI/ML systems for those kinds of decisions will in practical terms be a means by which human decisionmakers can avoid that effort.”
Chris Labash, associate professor of communication and innovation, Carnegie Mellon University, wrote, “It’s not so much a question of ‘Will we assign our agency to these machines, systems and bots?’ but ‘What will we assign to them?’ If, philosophically, the best decisions are those based on intelligence and humanity, what happens when humanity takes a back seat to intelligence? What happens when agency gives way to comfort? If you are a human without agency, are you still human?
“The major issue that the data I look at suggests is that our future won’t be so much one where humans will not have agency, but one where humans will selectively offload some specific decisions to autonomous and artificial intelligence. The comfort level is increasing progressively. We already trust making requests to bots, automated intelligence and voice assistants, and this will only increase. A 2018 PwC study on voice assistants indicated that usage, trust and variety of commands were increasing, and customer satisfaction was in the 90% range. This was less than five years ago. There is likely to be a considerable broadening of consumer dependence upon decisions offered up by autonomous and artificial intelligence by 2035.
“My guess is, for this reason, although many important decisions will be made by autonomous and artificial intelligence (and ‘important’ decisions is a pretty broad descriptor), these decisions will be willingly delegated to non-human intelligence, but we will still keep the decision of what decisions to offload to ourselves.”
Marcus Foth, professor of informatics, Queensland University of Technology, Australia, responded, “The question whether humans will or will not be in control of important decision-making in the future is often judged on the basis of agency and control—with agency and control thought of as good and desirable. Compared to the individual realm of conventional decision-making, most humans come from a culture with a set of values where ‘being in control’ is a good thing. And there is merit in staying in control when it comes to the usual use cases and scenarios being described in tech utopias and dystopias.
“However, I want to raise a scenario where relinquishing individual control and agency can be desirable. Perhaps this is a philosophical/conceptual thought experiment and deemed unrealistic by many, but perhaps it is nonetheless useful as part of such futuring exercises. Arguably, the types of wicked problems humanity and the planet face are not a result of lacking scientific ingenuity and inventiveness but of a lack of planetary governance that translates collective wisdom and insights into collective action. While we sometimes see positive examples, such as the rapid response to the COVID-19 pandemic, my overall assessment suggests there continue to be systemic failures in the systems of planetary governance. I argue that maintaining individual human agency and control as a value is partly to blame: Human comfort, safety, control and convenience always triumph over planetary well-being.
“Would relinquishing individual human control in favour of collective human control offer a more desirable future scenario of governance systems that serve not just the well-being of (some) humans but also forgotten humans ‘othered’ to the fringes of public attention, as well as more-than-humans and the planet?
“In essence, what I propose here is arguably nothing new: Many First Nations and indigenous peoples have learnt over millennia to act as a strong collective rather than a loose amalgamation of strong-minded individuals. Relationality—or as Mary Graham calls it, the ‘relational ethos’—is a key feature of good governance, yet despite all the AI and tech progress we still have not been able to achieve a digitally supported system of governance that bestows adequate agency and control on those who have none: minority groups of both human and non-human/more-than-human beings.
“This is why I think having the typical humans (who are in control now) not being in (the same level of) control of important decision-making in the year 2035—is absolutely a good thing that we should aspire toward. The alternative I envisage is not the black-and-white opposite of handing control over to the machines but a future scenario where technology can aid in restoring the relational ethos in governance that serves all humans and more-than-humans on this planet.”
Alan Mutter, consultant and former Silicon Valley CEO, observed, “Successive generations of AI and successive iterations of applications will improve future outcomes, however, the machines—and the people who run them—will be in control of those outcomes. AI is only as good as the people underlying the algorithms and the datasets underlying the systems.
“AI, by definition, equips machines with agency to make decisions and judgments using large and imperfect databases. Because AI systems are designed to operate more or less autonomously, it is difficult to see how such systems could be controlled by the public, who for the most part are unlikely to know who built the systems, how the systems operate, what inputs they rely on, how the system was trained and how it may have been manipulated to produce certain desired and perhaps unknown outcomes.
“The point of AI is to allow machines to do the ‘busy work’ of sifting through huge amounts of information to discern patterns, identify relationships, pinpoint data and so forth. Early work in this area, while exciting, also shows that mistakes can be made. While AI proved to be pretty good at detecting the difference between a blueberry muffin and a chihuahua, systems notably have had trouble telling the difference between an ape and some humans of color.”
Alan S. Inouye, senior director for public policy and government relations at the American Library Association, commented, “Fundamentally, system designers do not have incentive to provide easy control to users. Designers can mandate the information and decision-making flow to maximize efficiency based on huge data sets of past transactions and cumulative knowledge. User intervention is seen as likely to decrease such efficiency and so it is discouraged. Designers also have motivations to steer users in particular directions.
“Often these motivations derive from marketing and sales considerations, but other motivations are applicable, too (e.g., professional norms, ideology or values). Thus, the ability for users to be in control will be discouraged by designers for motivational reasons. As the political context has become yet more polarized, the adoption of new laws and regulations becomes only more difficult. Thus, in the next decade or two, we can expect technology development and use to continue to outpace new or revised laws or regulations, quite possibly even more intensely than in the last two decades. So, there will be only modest pressure from the public policy context to mandate that design implement strong user control. (The exception to this may occur if something really bad becomes highly publicized.)
“I do believe that areas for which there are already stronger user rights in the analog world will likely see expansion to the digital context. This will happen because of the general expectations of users, or complaints or advocacy if such control is not initially forthcoming from designers. Some domains such as safety, as in vehicle safety, will accord considerable user control. People have particular expectations for control in their vehicles. Also, there is a well-developed regulatory regime that applies in that sector. Also, there are considerable financial and reputational costs if a design fails or is perceived to have failed to accommodate reasonable user controls.”
Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, said, “One way of restating the question is to ask to what degree is autonomy a protected value—one that resists trade-offs. Humans surely value autonomy. Or at least Westerners do, having inherited autonomy as one of the fruits of the Enlightenment. But whether the affordances of AI are sufficiently enticing to give up autonomous decision-making is really more of an empirical question—to be answered in time—than one to be predicted. Nonetheless, several features of the relationship between humans and algorithms can be anticipated to be influential.
“Most important is the matter of trust, both in the companies offering the technology and in the technology itself. At the moment, the reputation of technology companies is mixed. Some companies reel from years of cascading scandals, depleting trust. At the same time, three of the top five most-trusted companies worldwide base their businesses on information technology. Maintaining faith in the reliability of organizations will be required in order to reassure the public that their algorithms can be trusted in carrying out important decisions.
“Then there is the matter of the technology itself. It goes without saying that it must be reliable. But beyond that, in the realm of important decisions, there must be confidence that the technology is making the decision with the best interests of the user in mind. Such loyal AI is a high bar for current technology yet will be an important factor in convincing people to trust algorithms with important decisions.
“Finally, it is generally observed that people still seem to prefer humans to help with decisions rather than AIs, even when the algorithm outperforms the human. Indeed, people are comfortable having a total stranger—even one as uncredentialed as an Uber driver—whisk them from place to place in an automobile, but they remain exceedingly skeptical of autonomous vehicles, not just of using them but of the entire enterprise. Such preferences, of course, may depend on the type of task.
“To date we only have fragmentary insight about the pushes and pulls that determine whether people are willing to give up autonomy over important decision-making, but the initial data suggest that trade-offs such as this may represent a substantial sticking point. Whether this will change over time—a phenomenon known as technomoral change—is unknown.
“My suspicion is that people will make an implicit risk-benefit calculation: the more important the decision, the greater the benefit must be. That is to say that algorithms are likely to be required to vastly outperform humans when it comes to important decision-making in order for them to be trusted.”
Raymond Perrault, a distinguished computer scientist at SRI International (he directed the AI Center there from 1988-2017), wrote, “Current AI systems based on machine learning are showing continued improvement on tasks where large amounts of training data are available. However, they are still limited by their relative inability to incorporate and interact with symbolic information.
“The role of symbolic information and reasoning is one of the major outstanding questions in AI, and there are very different opinions as to whether and how integration should be achieved. I believe that robust, verifiable AI systems, needed for high-reliability systems such as self-driving cars, depend on progress in this area and that this technical problem will eventually be solved, though whether that will be sufficient to field high-reliability systems remains to be seen. I accept that it will, but I don’t know when.
“AI is and will continue to be used in two kinds of scenarios: those where the AI operates completely autonomously, as in recommender systems, and those where humans are in ultimate control over the decisions suggested by the AI, as in medical diagnostics and weapons. The higher the risk of the AI system being wrong and the higher the consequences of a bad decision, the more important it is for humans to be in control.
“Let’s look at a few of the main categories where that sorting will likely occur:
- Major personal and life-and-death decisions (education, marriage, children, employment, residence, death): I don’t see full automation of decision-making in major personal decisions, though support of decisions could improve, e.g., with respect to choices in education and employment.
- Financial decisions (buying a house, personal investments, more): Financial decisions will continue to get more support, and I could see significant delegation of investment decisions, especially of simple ones. But I can’t see an AI system ever deciding which house you should buy.
- Use of major services (healthcare, transportation): AI support for healthcare and transportation will continue to increase, but I can’t see life-and-death health decisions ever being completely automated. I doubt that self-driving cars will operate at scale except in controlled conditions until the availability of highly reliable AI systems.
- Social decisions (government, national security): Government faces enormous challenges on many fronts. We could save large amounts and improve fairness by streamlining and automating tax collection, but it is hard to see the will to do so as long as minimizing government remains a high priority of a large part of the population. I don’t see another 15 years changing this situation. The use of AI for national security will continue to increase and must continue to be under the control of humans, certainly in offensive situations. With appropriate controls, AI-based surveillance should actually be able to reduce the number of mistaken drone attacks, such as those recently reported by major news organizations.”
David Barnhizer, a professor emeritus of law and author of “Human Rights as a Strategic System,” wrote, “Various futurists project that AI systems are already developing, or will develop, an internal version of what I think of as ‘alternative intelligence’ as opposed to artificial intelligence, and they expect that there could or will be a shift (possibly by 2035 but most likely 15 or 20 years later) to significant control by interacting AI systems that subordinate human agency to the increasingly sentient and aware AI systems.
“To put it even more bleakly, some say humanity may be facing a ‘Terminator’-type apocalyptic world. I don’t know if that very dark future awaits but I do know that the human race and its leaders are getting dumber and dumber, greedier and greedier while the tech experimenters, government and military leaders, corporations, academics, etc., are engaged in running an incredible experiment over which they have virtually no control and no real understanding.
“One MIT researcher admitted several years ago, after some AI experiments they were conducting, that it was obvious the AI systems were self-learning outside the programmatic algorithms and the researchers didn’t know exactly how or what was going on. All of that happened within relatively unsophisticated AI systems by today’s research standards. As quantum AI systems are refined, the speed and sophistication of AI systems will be so far beyond our comprehension that to think we are in control of what is going on is pre-Copernican. The sun does not revolve around the Earth, and sophisticated AI systems do not revolve around their human ‘masters.’
“As my son Daniel and I set forth in our 2019 book ‘The Artificial Intelligence Contagion,’ no one really knows what is going on, and no one knows the scale or speed of the consequences or outcomes we are setting into motion. But some things are known, even if ignored. They include:
- For humans and human governments, AI is power. By now it is obvious that the power of AI is irresistible for gaining and maintaining power. Big Tech companies, political activists, governmental agencies, political parties, the intelligence-gathering actors, etc., simply can’t help themselves.
- Information is power, and data-creation, privacy intrusions, data mining and surveillance are rampant and will only get worse. I don’t even want to get into the possibilities of cyborg linkages of AI within human brain systems such as are already in the works, but all of this sends a signal to me of even greater control over humans and the inevitable deepening of the stark global divide between the ‘enhanced haves’ and everyone else (who are potentially under the control of the ‘haves’).
“We need to admit that regardless of our political rhetoric, there is no overarching great ‘brotherhood’ of the members of the human race. The fact is that those who are the most aggressive and power-driven are always hungry for more power, and they aren’t all that concerned with sharing that power or its benefits widely. The AI developments that are occurring demonstrate this phenomenon quite clearly whether we are talking about China, the U.S., Russia, Iran, corporations, agencies, political actors or others.
“The result is that there is a very thin tier of humans who, if they somehow are able to work out a symbiosis with the enhanced AI systems that are developing, will basically lord it over the remainder of humanity—at least for a generation or so. What happens after that is also unknown but unlikely to be pretty. There is no reason to think of these AI systems as homogeneous or identical. They will continue to grow, with greater capabilities and more-evolved insights, emerging from varied cultures. We (or they, actually) could sadly see artificial intelligence systems at war with each other for reasons humans can’t fathom. This probably sounds wacko, but do we really know what might happen?
“As we point out in our book, many people look at the future through the proverbial ‘rose-colored glasses.’ I, obviously, do not. I personally love having the capabilities computer systems have brought me. I am insatiably curious and an ‘info freak.’ I love thinking, freedom of thought and the ability to communicate and create. I have no interest in gaining power. I am in the situation of Tim Berners-Lee, the creator of the World Wide Web, which brought the Internet within the reach of global humanity. Berners-Lee and many others who worked on the issues intended to create systems that enriched human dialogue, created shared understanding and made us much better in various ways than we were. Instead, he and other early designers realize they opened a Pandora’s box in which, along with their significant and wonderful benefits, the tools they offered the world have been corrupted and abused in destructive ways and brought out the darker side of humanity.”
Andy Opel, professor of communications at Florida State University, commented, “The question of the balance between human agency and artificial intelligence is going to be one of the central questions of the coming decade. Currently, corporate-designed and controlled algorithms dominate our social media platforms (as well as credit scores, healthcare profiles, marketing and political messaging) and are completely opaque, blocking individuals’ ability to determine the contents of their social media feeds. The control currently wielded by these privately held corporations will not be given up easily, so the fight for transparency, accountability and public access is going to be a challenging struggle that will play out over the next 10 to 15 years.
“If the past is any predictor of the future, corporate interests will likely overrule public interests, and artificial intelligence, autonomous machines and bots will extend to invisibly shape even more of our information, our politics and our consumer experience. Almost 100 years ago there was a vigorous fight over the public airwaves and the regulation of radio broadcasting. The public lost that fight then and lost even more influence in the 1996 Telecommunications Act, resulting in the consolidated media landscape we currently have, dominated by five major corporations that have produced a system of value extraction that culturally strip-mines information and knowledge out of local communities, returning very little cultural or economic value back to those local communities.
“With the collapse of journalism and a media landscape dominated by echo chambers, as a society we are experiencing the full effects of corporate domination of our mediascape. The effects of these echo chambers are being felt at intimate levels as families try to discuss culture and politics at the dinner table or at holiday gatherings. As we come to understand how deeply toxic the current mediascape is, there is likely to be a political response that will call for transparency and accountability of these algorithms and autonomous machines. The foundations of this resistance are already in place but the widespread recognition of the need for media reform is still not fully visible as a political agenda item.
“The promise of artificial intelligence—in part the ability to synthesize complex data and produce empirically sound results—has profound implications, but as we have experienced with climate data over the last 40 years, data often does not persuade. Until artificial intelligence is able to tell stories that touch human emotions, we will be left with empirically sound proposals/decisions/policies that go unfulfilled because the ‘story’ has not persuaded the human heart. What we will be left with is the accelerated exploitation of human attention with the primary focus on consumption and entertainment. Only when these powerful tools can be wrestled away from corporate control and made transparent, accessible and publicly accountable will we see their true benefits.”
Amali De Silva-Mitchell, founding coordinator of the UN Internet Governance Forum Dynamic Coalition on Data-Driven Health Technologies, said, “The true, intuitive human decision-making capabilities of technologies are still in their infancy. By 2035 we will have hopefully opened most of the AI developers’ minds to the issues of data quality, trojan data, data warps and oceans, ethics, standards, values, and so forth, that come in a variety of shapes and sizes across segments of society.
“The bias of using the data from one segment on another can be an issue for automated profiling. Using current statistical techniques does not make for strong foundations for universal decision-making; it only allows for normalized decision-making, or even groupthink.
- Exceptional issues, small populations and unusual facts will be marginalized, and perhaps even excluded, which is an issue for risk management.
- Data corrections will have lags, impacting data quality if corrections are made at all. Misinformation and issues for semantics and profiling will result.
- Data translations, such as from a holographic source into a 2D format, may cause illusions and mis-profiling.
- Quantum technologies may spin data in manners still not observed.
- An ethical approach to data cleaning may cost money that technology maintenance budgets cannot accommodate.
- The movement of data from one system to another data system must be managed with care for authenticity, ethics, standards and so forth.
“Lots of caveats have to be made, and these caveats must be made transparent to the user. However, there are some standardized, commonly identified processes that can be very well served by automated decision-making, for example, repetitive practices that have good procedures or standards already in place. In some instances, automated decision-making may be the only procedure available, say for a remote location—including outer space. What is critical is that human attention to detail, transparency and continuous betterment are ever-present every step of the way.
“We may be forced to enter into the use of an AI before an application is fully ready for service due to the need to provide service at speed, fill a gap, and so forth. In these cases, it is especially important that human oversight is ever-present and that members of the public—everyday users—have the opportunity to provide feedback or raise concerns without reprimand.
“Humans must not feel helpless and hopeless with no opportunity for contact with a person when it is necessary. This is something that some developers of bots—for instance—have not taken into account. Humans must also have the opportunity to be the owners of their own identities, to be able to check them if they wish and to get them corrected within a reasonable amount of time.
“Assumptions must not be made of persons, and the ‘reasonable person’ concept must always be maintained. Good Samaritans must also have a way to weigh in, as compassion for humans must be at the core of any technology.”
Sean McGregor, technical lead for the IBM Watson AI XPRIZE and machine learning architect at Syntiant, said, “The people in control of automated decision-making will not necessarily be the people subject to those decisions. The world in 2022 already has autonomous systems supervised by people at credit-rating agencies, car companies, police, corporate HR departments and more.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? We will better appreciate the importance of random chance. Non-random computers mean you will not discover the unexpected, experience and learn from what you don’t prefer and grow beyond the bubble of algorithmically protected comfort. We will need to find new ways to look beyond ourselves and roll the dice of life.”
Ben Waber, president and CEO of Humanyze, a behavioral analytics company, and author of “People Analytics,” responded, “Today, leaders in organizations (companies, governments, etc.) are making the design decisions for systems and algorithms they deploy. Individuals affected by those systems have little ability to question or influence them. This has been true for decades; think of credit-rating systems, search algorithms and the like. These systems will continue to be controlled by humans—the humans with power within those organizations.”
Daniel Berleant, professor of information science at the University of Arkansas at Little Rock and author of the book “The Human Race to the Future,” commented, “Software development focuses on the goal of meeting specific requirements. If human agency in such systems is not a specific requirement, it will not be specifically addressed. So, the real question is whether human agency will be required. Given the generally hands-off approach of government regulation of software design and many other areas, the only way it would become legally required is if there was a compelling reason that would force governments to respond. I don’t foresee such a compelling reason at this time.”
Claude Fortin, clinical investigator at the Centre for Interdisciplinary Research, Montreal, an expert in the untapped potential and anticipated social impacts of digital practices, commented, “The issue of control is two-fold: first, technological devices and techniques mediate the relationship between subject and object, whether these be human, animal, process or ‘thing.’ Every device or technique (such as an AI algorithm) adds a layer of mediation between the subject and the object. For instance, a smartphone device adds one layer of mediation between two people SMS texting. If an auto-correct algorithm is modifying their writing, that adds a second layer of mediation between them. If a pop-up ad were to appear on their screen as a reactive event (reactive to the subject they are texting about—for instance, they are texting about running shoes and an ad suddenly pops up on the side of their screens) that adds a third layer of mediation between them.
“Some layers of mediation are stacked one over another, while others might be displayed next to one another. Either way, the more layers of mediation there are between subject and object, the more interference there is in the control that the user has over a subject and/or object. Each layer has the possibility of acting as a filter, as a smokescreen or as a red herring (by providing misleading information or by capturing the user’s attention to direct it elsewhere, such as towards an ad for running shoes). This affects their decision-making. This is true of anything that involves technology, from texting to self-driving cars.
“The second issue of control is specifically cognitive and has to do with the power and influence of data in all its forms—images, sounds, numbers, text, etc.—on the subject-as-user. Humans are always at the source. In the coding of algorithms, it is either a human in a position of power, or else an expert who works for a human in a position of power, who decides what data and data forms can circulate and which ones cannot. Although there is a multiplying effect of data being circulated by powerful technologies and the ‘layering effect’ described above, at its source the control is in the hands of the humans who are in positions of power over the creation and deployment of the algorithms.
“When the object of study is data and data forms, technological devices and techniques can become political tools that enhance or problematize notions of power and control. The human mind can only generate thoughts from sensory impressions it has gathered in the past. If data and data forms that constitute such input are only ideological (power-driven) in essence, then the subject-as-user is inevitably being manipulated. This is extraordinarily easy to do. Mind control applied by implementing techniques of influence is as old as the world—just think of how sorcery and magic work on the basis of illusion.
“In my mind, the question at this point in time is: What degree of manipulation is acceptable? When it comes to the data and data forms side of this question, I would say that we are entering the age of information warfare. Data is the primary weapon used in building and consolidating power—it always has been if we think of the main argument in ‘The Art of War.’
“I can’t see that adding more data to the mix in the hope of getting a broader perspective and becoming better informed in a balanced way is the fix at this point. People will not regain control of their decision-making with more data and more consumption of technology. We have already crossed the threshold and are engulfed in too much data and tech.
“I believe that most people will continue to be unduly influenced by the few powerful people who are in a position to create and generate and circulate data and data forms. It is possible that even if we were to maintain somewhat of the shape of democracy, it would not be a real democracy for this reason. The ideas of the majority are under such powerful forces of influence that we cannot really objectively say that they have control over their decision-making. For all of these reasons, I believe we are entering the age of pseudo-democracy.”
Peter Rothman, lecturer in computational futurology at the University of California, Santa Cruz, responded, “As we can see with existing systems such as GPS navigation, despite evidence that using these systems impairs users’ natural navigation abilities, and despite the possibility of better designs that wouldn’t have these effects, no such products exist because users are satisfied with current systems. As Marshall McLuhan stated, every extension is also an amputation.”
Clifford Lynch, executive director of the Coalition for Networked Information, wrote, “As I think about the prospects for human agency and how this compares to delegation to computational decision-making in 2035, I’m struck by a number of issues.
“As far as I know, we’ve made little progress in genuine partnership and collaboration between computational/AI systems and humans. This seems to be presented as a binary choice: either hand off to the AI, or the human retains control of everything. Some examples: AI systems don’t seem to be able to continuously learn what you already know, what information you have already seen and evaluated, and how to integrate this knowledge into the future recommendations they may offer.
“One really good example of this: car navigation systems seem unable to learn navigational/routing preferences of drivers in areas very close to their homes or offices. Another: recommender systems often seem unable to integrate prior history when suggesting things.
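To make Lynch’s point concrete, the kind of “integration of prior history” he finds missing can be sketched in a few lines. The following is a minimal illustration under invented assumptions, not a description of any deployed recommender: the class, the item identifiers and the toy tag-prefix similarity are made up for the example.

```python
# Minimal sketch (hypothetical, not from Lynch's remarks): a recommender wrapper
# that integrates prior history by filtering out items the user has already seen
# and slightly boosting items similar to those the user evaluated positively.

from dataclasses import dataclass, field

@dataclass
class HistoryAwareRecommender:
    seen: set = field(default_factory=set)    # item IDs already shown to the user
    liked: set = field(default_factory=set)   # item IDs the user evaluated positively

    def record(self, item_id: str, liked: bool) -> None:
        """Store what the user has already seen and how they judged it."""
        self.seen.add(item_id)
        if liked:
            self.liked.add(item_id)

    def rerank(self, candidates: dict[str, float]) -> list[str]:
        """Re-rank base scores: drop already-seen items and boost items that
        share a tag prefix with previously liked ones (a toy similarity)."""
        def adjusted(item: str, score: float) -> float:
            similar = any(item.split(":")[0] == fav.split(":")[0] for fav in self.liked)
            return score + (0.2 if similar else 0.0)
        fresh = {i: s for i, s in candidates.items() if i not in self.seen}
        return sorted(fresh, key=lambda i: adjusted(i, fresh[i]), reverse=True)

# Example: the user has already seen and liked one jazz album, so an unseen jazz
# candidate rises in the ranking and the already-seen item never reappears.
rec = HistoryAwareRecommender()
rec.record("jazz:kind-of-blue", liked=True)
print(rec.rerank({"jazz:kind-of-blue": 0.9, "jazz:blue-train": 0.6, "pop:thriller": 0.7}))
```

The point of the sketch is only that carrying even a small amount of user history across sessions changes the output; the hard problems Lynch raises (continuous learning, co-evolution with the user) sit well beyond this kind of filtering.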
“As far as I can tell, stunningly little real work has been done on computational systems that co-evolve, co-mature or co-develop with humans; this has been largely left to science fiction writers. As an additional issue, some of the research here involves time horizons that don’t fit conveniently with relatively short-term grant funding. Without a lot more progress here, we’ll continue to tend to frame the issue as ‘delegate or don’t delegate agency to computational systems.’
“I wonder about the commercial incentives that might exist in maintaining agency as a binary choice (retain or delegate) rather than seeking the cultivation of collaborations between humans and machines. There are many situations when delegation is the easy choice because making a human decision will take a lot of time and have to encompass a lot of complex data; combine this with opaque decision-making by the algorithms once delegation has been made, and this may well advance commercial (or governmental) objectives.
“There are staggering commercial incentives to delegate decision-making to computational agents (including really stupid agents like chatbots) in areas such as customer service, billing, fraud detection and the like, and companies are already doing this at massive scale. Most of these systems are really, really bad.
“Bluntly, the companies deploying these mostly couldn’t care less about errors or misjudgments by these computational agents unless they result in a high-visibility public-relations blowup. There’s every reason to expect these trends to continue, and to get worse rather than better. This represents a really huge abdication of human agency that’s already far advanced.
“There are situations where there’s a very strong motivation to default to the machines. Human decision-makers may be overworked, overwhelmed and don’t have time. Not delegating (or delegating and over-riding or collaborating) may be risky. There are also still widespread general public beliefs that computational decisions are less biased or more accurate than human decision-making (though there’s been a lot of good research suggesting this is frequently not true).
“Good examples here: judges going against sentencing or bail recommendations; doctors going against diagnostic/treatment recommenders (often created by health care systems or insurers trying to minimize costs). These overrides can happen, but often only when someone is important enough or persuasive enough to demand and gain the human attention, risk-acceptance and commitment to override the easy default delegation to the AI. Put another way, when they want to, the wealthy and powerful (and perhaps the tech-savvy as well) will have a much better chance of appealing or overriding computational decision-making that’s increasingly embedded in the processes of our society.
“As an extension of the last point, there are situations where human decisionmakers are legitimately overwhelmed or don’t have time, when they cannot react quickly enough, and algorithmic triage and decision-making must be the norm. We do not understand how to define and agree on these situations. Relatively easy cases include emergency triage of various kinds, such as power grid failures or natural disasters.
“Computationally directed trading in financial markets might be a middle ground. More challenging cases might include response to minimal warning nuclear strikes (hypersonic vehicles, orbital strikes, close offshore cruise missiles, etc.) where there’s a very short time window to launch a major ‘use it or lose it’ counterforce strike. One can also construct similar cyberwar strike scenarios.
“Related to the previous point: As a society we need to agree on how to decide when agency delegation is high-stakes or low-stakes. Also, we need to try to agree on the extent to which we are comfortable delegating to computational entities. For example, can we identify domains where there is high variance between human and computational predictions/recommendations, and hence where we should probably be nervous about such delegation?
“We haven’t considered augmented humans (assuming that they exist in 2035 in a meaningful way) and how they fit into the picture of computational decision-making, humans and perhaps collaborative middle grounds. This could be important.
“I have been tracking the construction of systems that can support limited delegation of decision-making with great fascination. These may represent an important way forward in some domains. Good examples here are AI/ML-based systems that can explore a parameter space (optimize a material for these requirements, running experiments as necessary and evaluating the resultant data); often these are coupled with robotics that allow the computational system to schedule and run the experiments. I think these are going to be very important for science and engineering, and perhaps other disciplines, in the coming years; they may also become important in commercial spheres.
“The key issue here is to track how specific the goals (and perhaps suggested or directed methodologies) need to be to make these arrangements successful. It’s clear that there are similar systems being deployed in the financial markets, though it’s more difficult to find information about experiences and plans for these. And it’s anybody’s guess how sectors like the intelligence community are using these approaches.”
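The limited-delegation pattern Lynch describes, in which the human fixes the goal, the allowed parameter space and the experiment budget while the system autonomously proposes and evaluates experiments, can be sketched as follows. This is a minimal illustration under invented assumptions; the toy objective function stands in for a real (possibly robot-executed) experiment and is not any specific lab’s system.

```python
# Minimal sketch of limited delegation: the human sets the goal, the parameter
# ranges and the budget; the system explores within those bounds on its own.

import random

def run_experiment(params: dict) -> float:
    """Stand-in for a real experiment. Here: a toy objective whose optimum
    lies near temp=350, time=60 (purely illustrative numbers)."""
    return -((params["temp"] - 350) ** 2) / 100 - ((params["time"] - 60) ** 2) / 10

def explore(space: dict, budget: int, seed: int = 0) -> tuple[dict, float]:
    """Autonomously search the human-approved parameter space within a fixed
    experiment budget, returning the best configuration found."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = run_experiment(candidate)   # the delegated step
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

# The human-set boundary of delegation: which parameters may vary, over what
# ranges, and how many experiments the system is allowed to run.
space = {"temp": (300.0, 400.0), "time": (30.0, 90.0)}
print(explore(space, budget=50))
```

The design question Lynch raises maps directly onto the arguments of `explore`: how specific the goal and the bounds must be before such an arrangement works is exactly what determines how much agency is actually being delegated.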
Daniel R. Mahanty, innovations unit director, Center for Civilians in Conflict, commented, “The gradual and incremental concession of human control takes place in ways that most people either don’t recognize or don’t muster the energy to resist. Take, for example, the facial-recognition screening taking place in airports—it is a form of intrusion into human agency and privacy that most people simply don’t seem to see as worth questioning. We can also see this in other interactions influencing human behavior—e.g., along the lines of the nudge theories of behavioral economics. These have been applied through imperceptible changes in public policy; humans are and will be directed toward behaviors and decisions influenced by automated processes of which they are not even aware.”
James Hendler, director of the Institute for Data Exploration and Applications and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute, commented, “While I would hope we will get better and better at keeping humans in control in many cases, there are three reasons I think we may not get there by 2035—two positive and one negative:
“Positive 1 – There are a few cases where machines are superior and should be acknowledged as such. This is not something new; for example, there are very few businesses that do payroll by hand—automated payroll systems (which don’t need AI technology, I note) have been around a long time and have become trustworthy and relied upon. There will be some cases with existing and emerging AI technologies where this is also true—the key will be a need to identify which these are and how to guarantee trustworthiness.
“Positive 2 – There will be cases where the lack of trained humans will require greater reliance on machine decision-making. As a case in point, there are single hospitals in the U.S. that have more X-ray analysts than some entire nations in the Global South. As machines get more reliable at this task, which is happening at a rapid rate, deploying them would not be as good as human-machine teaming (which will happen in the wealthier countries that do have the trained personnel to be in the loop) but will certainly be far better than nothing. A good solution that could improve health care worldwide, in certain cases, would be worth deploying (with care) even if it does require trusting machines in ways we otherwise might not.
“The Negative – The main thing holding back the deployment of autonomous technology in many cases has more to do with policy and litigation than technology. For example, many of the current autonomous driving systems can, in certain situations, drive better than humans, and with improvements to roads and such, at least certain kinds of vehicles could be turned over to autonomous systems part, if not all, of the time. However, deploying them carries huge risk to the companies doing so, due to long-established rules of liability in automobile-related regulations, which will keep humans in the loop until the technology is provably superior and the rules of the road (if you’ll pardon the pun) are more clearly developed. Thus, these companies opt to keep humans in the loop for legal, rather than technical, reasons. On the other hand, there are many industries where the speed of technological change is far beyond the scope, scale and speed of regulatory change (face-recognition systems deployment vs. regulation is an example). The companies developing technologies in these less-regulated areas do not have the restrictions on taking humans out of the loop, whether it is a good idea or not, and the economic rewards are still, unfortunately, on the side of autonomous deployment.
“All of that said, in general, as Alice Mulvehill and I argue in the book ‘Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity’ and as other authors of similar recent books have argued, in most cases keeping humans in the loop (due to the differences between human and computer capabilities) is still necessary for the foreseeable future.
“I do consider my prediction a pessimistic one—it would be better to see a world where humans remain in control of the many areas where the speed of technical deployment, coupled with the lack of regulation, may hinder this from happening. This could have potentially disastrous consequences if allowed in high-consequence systems (such as military weaponry, political influence, privacy control—or the lack thereof—etc.).
“Also, investment in human development would be a wonderful thing to see (for example, in the case of X-ray analysts, training more humans to work with automated systems would be preferable to simply deploying the systems), but right now that does not seem to be a political reality in most of the world.”
Axel Bruns, Australian Research Council Future Fellow and professor at the Digital Media Research Centre, Queensland University of Technology, said, “Blind belief in automated solutions still prevails without sufficient caution. Yes, some aspects of human life will be improved by automated systems, especially for well-off segments of society, but there is an equal or even greater tendency for these systems to also be used for social sorting, surveillance and policing purposes, from the automated vetting of job applications through the generation of credit scores to sociodemographic profiling.
“In such contexts, the majority of people will be subject to rather than in control of these automated systems, and these systems will actively curtail rather than assist individual and collective human agency. The fundamental problem here is that such systems are often designed by a small and unrepresentative group of tech solutionists; fail to take into account the diverse needs of the population they are designed to address; fail to consider any unintended consequences of algorithmic intervention (i.e., apply a trial-and-error approach without sufficient safeguards); and are often too complex to be understood by those who are supposed to regulate them (or even by their designers).”
Andrew Natchison, chief communications and marketing officer, National Community Reinvestment Coalition, commented, “Corporate interests will continue to shape and overshadow individual interests. Decisions mediated by computing may become more transparent, but transparency is not the same as agency. For instance, mortgage, insurance and other risk-related decisions that once were made by humans will increasingly be made by computing and algorithms. If new laws and regulations demand it, then we, the ‘subjects’ of these decisions, may have greater insight into how those algorithmic decisions were made. But we will not have agency over the decisions themselves. Most of these decisions will never be reviewed or vetted by humans.”
Bryan Alexander, futurist, consultant and senior scholar at Georgetown University, said, “We will cede a good amount of decision-making to automation (AI and/or robotics) for several reasons, the first being that powerful force: convenience. Letting software take care of tasks we usually don’t enjoy—arranging meetings, doing most email, filing taxes, etc.—is a relief to many people. The same is true of allowing robots to take care of dish washing or driving in heavy traffic.
“A second reason is that while our society putatively enjoys sociability, for some people interpersonal encounters can be draining or worse, and they prefer to automate many interactions. Further, there are tasks for which hardly anyone enjoys human interaction; consider, for example, most bureaucratic functions.
“A third reason is that many people will experience social and political instability over the next dozen years due to the climate crisis and legacy political settlements. Settings like these may advance automation, because governments and other authorities may find automating rule and order to be desirable in chaotic situations, either openly or in secrecy. People may cede decision-making because they have other priorities.
“Each of these claims contains the possibility of people demanding more control over decision-making. Social unrest, for example, often features political actors vying for policy direction. Our issues with human interaction may drive us to want more choices over those interactions, and so on. Yet I think the overall tendency will be for more automated decision-making, rather than less.”
Tom Valovic, journalist and author of “Digital Mythologies,” shared passages from a recent article of his, writing, “In a second Gilded Age in which the power of billionaires and elites over our lives is now being widely questioned, what do we do about their ability to radically and undemocratically alter the landscape of our daily lives using the almighty algorithm? The poet Richard Brautigan said that one day we might all be watched over by ‘machines of loving grace.’ I surmise Brautigan might do a quick 180 if he were alive today. He would see how intelligent machines in general and AI in particular are being semi-weaponized or otherwise appropriated for purposes of a new kind of social engineering. He would also likely note how this process is usually positioned as something ‘good for humanity’ in vague ways that never seem to be fully explained.
“In the Middle Ages, one of the great power shifts that took place was from medieval rulers to the church. In the age of the Enlightenment, another shift took place: from the church to the modern state. Now we are experiencing yet another great transition: a shift of power from state and federal political systems to corporations and, by extension, to the global elites that are increasingly exerting great influence. It seems abundantly clear that technologies such as 5G, machine learning and AI will continue to be leveraged by technocratic elites for the purposes of social engineering and economic gain.
“As Yuval Harari, one of transhumanism’s most vocal proponents, has stated: ‘Whoever controls these algorithms will be the real government.’ If AI is allowed to begin making decisions that affect our everyday lives in the realms of work, play and business, it’s important to be aware of who this technology serves. We have been hearing promises for some time about how advanced computer technology was going to revolutionize our lives by changing just about every aspect of them for the better. But the reality on the ground seems to be quite different from what was advertised.
“Yes, there are many areas where it can be argued that the use of computer and Internet technology has improved the quality of life. But there are just as many others where it has failed miserably. Healthcare is just one example. Here, misguided legislation combined with an obsession with insurance company-mandated data gathering has created massive info-bureaucracies in which doctors and nurses spend far too much time feeding patient data into huge information databases where it often seems to languish. Nurses and other medical professionals have long complained that too much of their time is spent on data gathering and not enough time focusing on healthcare itself and real patient needs.
“When considering the use of any new technology, the questions should be asked: Who does it ultimately serve? And to what extent are ordinary citizens allowed to express their approval or disapproval of the complex technological regimes being created that we all end up involuntarily depending upon?”
Devin Fidler, futurist and founder of Rethinkery Labs, commented, “Turning over decisions to digital agents ultimately has the same downsides as turning over decisions to human agents and experts. In many ways, digital tools to support decision-making are upgrades of old-fashioned bureaucracies. For one thing, it can be easy to forget that, like digital systems, bureaucracies are built around tiered decision trees and step-by-step (algorithmic) processes. Indeed, the reason for both bureaucracy and digital agents is ultimately the same—humans have bounded attention, bounded time, bounded resources to support decision-making, and bounded information available. We turn over our agency to others to navigate these limitations. Importantly, however, we still need to establish a clear equivalent of the principle of ‘fiduciary duty’ that covers the majority of digital agents designed to act on our behalf.”
Barrett S. Caldwell, professor of industrial engineering, Purdue University, responded, “I believe humans will be offered control of important decision-making technologies by 2035, but for several reasons, most will not utilize such control unless it is easy (and cost-effective) to do so. The role of agency for decision-making will look similar to the role of active ‘opt-in’ privacy: people will be offered the option, but due to the complexity of the EULAs (end-user license agreements), most people will not read all of them, or will select the default options (which may push them to a higher level of automation) rather than intelligently evaluate and ‘titrate’ their actual level of human-AI interaction.
“Tech-abetted and autonomous decision-making in driving, for example, includes both fairly simple features (lane following) and more-complex features (speed-sensitive cruise control) that are, in fact, user-adjustable. I do not know how many people actually modify or adjust those features. We have already seen cases of people using the highest level of driver automation (which is nowhere close to true ‘Level 5’ driver automation) to abdicate driving decisions and trust that the technology can take care of all driving decisions for them. Cars such as Teslas are not inexpensive, and so we have a skewing of the use of more fully autonomous vehicles toward more affluent, more educated people who are making these decisions to let the tech take over.
“Key decisions should be automated only when the human’s strategic and tactical goals are clear (keep me safe, don’t injure others) and the primary role of the automation is to manage a large number of low-level functions without requiring the human’s attention or sensorimotor quickness. For example, I personally like automated coffee heating in the morning, and smart temperature management of my home while I’m at work.
“When goals are fluid or a change to pattern is required, direct human input will generally be incorporated in tech-aided decision-making if there is enough time for the human to assess the situation and make the decision. For example, I decide that I don’t want to go straight home today, I want to swing by the building where I’m having a meeting tomorrow morning. I can imagine informing the car’s system of this an hour before leaving; I don’t want to have to wrestle with the car 150 feet before an intersection while traveling in rush-hour traffic.
“I am really worried that this evolution will not turn out well. The technology designers (the engineers, more than the executives) really want to demonstrate how good they are at autonomous/AI operations and take the time to perfect it before having it publicly implemented. However, executives (who may not fully understand the brittleness of the technology) can be under pressure to rush the technological advancement into the marketplace.
“The public can’t even seem to manage simple data hygiene regarding privacy (don’t live-tweet that you won’t be home for a week, informing thieves that your home is easy pickings, and telling hackers that your account is easy to hack with non-local transactions), so I fully expect that people will not put the appropriate amount of effort into self-management in autonomous decision-making. If a system does not roll out well (I’m looking at Tesla’s full self-driving or the use of drones in crowded airport zones), liability and blame will be sorted out by lawyers after the fact, which is not a robust or resilient version of systems design.”
James H. Morris, professor emeritus at the Human-Computer Interaction Institute, Carnegie Mellon University, wrote, “The social ills of today—economic anxiety, declining longevity and political unrest—signal a massive disruption caused by automation coupled with AI. The computer revolution is just as drastic as the industrial revolution but moves faster relative to humans’ ability to adjust.
“Suppose that between now and 2035, most paid work is replaced by robots, backed by the internet. The owners of the robots and the internet—FAANG (Facebook, Apple, Amazon, Netflix, Google) and their imitators—have high revenue per employee and will continue to pile up profits while many of us will be without work. If there is no redistribution of their unprecedented wealth, there will be no one to buy the things they advertise. The economy will collapse.
“Surprisingly, college graduates are more vulnerable to AI because their skills can be taught to robots more easily than what infants learn. The wage premium that college graduates currently enjoy is largely for teaching computers how to do their parents’ jobs. I’m reminded of a claim sometimes attributed to Lenin, ‘When it comes time to hang the capitalists, they will vie with each other for the rope contract.’
“We need progressive economists like Keynes who (in 1930) predicted that living standards today in ‘progressive countries’ would be six times higher and this would leave people far more time to enjoy the good things in life. Now there are numerous essays and books calling for wealth redistribution. But wealth is the easy part. Our culture worships work. Our current workaholism is caused by the pursuit of nonessential, positional things which only signify class. The rich call the idle poor freeloaders, and the poor call the idle rich rentiers.
“In the future, the only likely forms of human work are those that are difficult for robots to perform, often ones requiring empathy: caregiving, art, sports and entertainment. In principle, robots could perform these jobs also, but it seems silly when those jobs mutually reward both producer and consumer and enhance relationships.
“China has nurtured a vibrant AI industry using all the latest techniques to create original products and improve on Western ones. China has the natural advantages of a larger population to gather data from and a high-tech workforce that works 12 hours a day, six days a week. In addition, in 2017 the Chinese government made AI its top development priority. Another factor is that China’s population is inured to the lack of privacy that impedes the accumulation of data in the West. Partly because it was lacking some Western institutions, China was able to leapfrog past checks, credit cards and personal computers to performing all financial transactions on mobile phones.
“The success of AI is doubly troubling because nobody, including the people who unleash the learning programs, can figure out how they succeed in achieving the goals they’re given. You can try—and many people have—to analyze the gigantic maze of simulated neurons they create, but it’s as hard as analyzing the real neurons in someone’s brain to explain their behavior.
“I once had some sympathy with the suggestion that privacy was not an issue and, ‘If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place,’ but media I’ve been consuming, like coverage of the Facebook/Cambridge Analytica fiasco, has woken me up. Simply put, FAANG and others are building large dossiers about each of us and using AI to discover the stimuli that elicit desired responses, informed by psychographic theories of persuasion.
“The responses they desire vary and appear benign. Google wants to show us ads that appeal to us. Facebook wants us to be looking at its pages continually as we connect with friends. Amazon wants us to find books and products we will buy and like. Netflix wants to suggest movies and shows we should like to watch. But China, using TV cameras on every lamppost and WeChat (one, single app providing services with the capabilities of Facebook, Apple, Amazon, Netflix, Google, eBay and PayPal), is showing the way to surveillance authoritarianism.
“While we recoil at China’s practices, they have undeniable societal benefits. It allows them to control epidemics far more effectively. In some cities, drones fly around to measure the temperatures of anyone outside. Surveillance can prevent acts like suicide bombing for which punishment is not a deterrent. With WeChat monitoring most human interactions, people might be more fair to each other. Westerners may believe China’s autocracy will stifle its economic progress, but it hasn’t yet.
“Facebook’s AI engine was instructed to increase users’ engagement and, by itself, discovered that surprising or frightening information is a powerful inducement for a user to stick around. It also discovered that information that confirmed a user’s beliefs was a much better inducement than information that contradicted them. So, without any human help, the Facebook engine began promoting false, incredible stories that agitated users even beyond what cable TV had been doing. And when the Facebook people saw what their AI engine was doing, they were slow to stop it.
“Facebook, Apple, Amazon, Netflix and Google run ecosystems in which memes (but not genes!) compete for survival and drive the competition among their business entities. Human minds are seen as collateral damage. Facebook has been used to conduct whisper propaganda campaigns about people who were oblivious to the attacks, attacks that no one outside Facebook can even assess.
“It gets worse. To increase profits, the massive U.S. tech companies sell their engines’ services to anyone who pays and lets the payers instruct the engines to do whatever serves their ambition. The most glaring example: In 2016 Russian operatives used Facebook to target potential Trump voters and fed them information likely to make them vote.”
Philip J. Salem, retired professor, Texas State University, wrote, “Most AI designers and researchers are sensitive to many issues about human decision-making and the modeling of this in AI, and most of them will design AI with a sense of individual agency. One thing I am worried about is what they don’t know. What they don’t know well are the social constraints people routinely create for each other when they communicate. Most don’t know how human communication works or how humans can communicate more effectively.
“The people who design AI need training in human communication—dialogue, and they need to be more mindful of how that works. Many have experiences with social media that are more about presentations and performance than about sustaining dialogue. Many people use these platforms as platforms—opportunities to take the stage. People’s use of these platforms is the problem, rather than the technologies. The use is nearly reflexive, thinking very fast, with little time for reflection or deliberation.
“What I am afraid of is the use of AI to simulate human communication and the development of human relationships. The communication will be contrived, and the relationships will be shallow. When the communication is contrived and the relationships are shallow, the society becomes brittle. When the communication is contrived and relationships are shallow psychological well-being becomes brittle. Human communication provides the opportunities for cognitive and emotional depth. This means there are risks for incredible sadness and incredible bliss. This also means there are opportunities for resilience. Right now, many people are afraid of dialogue. Providing simulated dialogue will not help. Making it easier for people to actually connect will help.”
Valerie Bock, principal at VCB Consulting, observed, “Computers are wonderful at crunching through the implications of known relationships. That’s one thing, but where there is much that is uncertain they are also used to test what-if scenarios. It is the human mind that is best attuned to ask these ‘what if’ questions. One of the most useful applications for models, computerized and not, is to calculate possible outcomes for humans to consider.
“Quite often, knowledgeable humans considering model predictions feel in their gut that a model’s answer is wrong. This sets off a very useful inquiry as to why. Are certain factors weighted too heavily? Are there other factors which have been omitted from the model? In this way, humans and computers can work effectively together to create more realistic models of reality, as tacit human knowledge is made explicit to the model. We’ve learned that AI that is programmed to learn from databases of human behavior learns human biases, so it’s not as easy as just letting it rip and seeing what it comes up with.
“I expect we’ll continue to have the computers do the grunt work of poring through data but will continue to need experts to look at the conclusions drawn from AI analysis and do reality and gut checks for where they may have gone astray.
“It has been possible for decades to construct spreadsheets which model complex decision-making. What we find, time and time again, is that the most accurate models are the ones in which there are multiple places for humans to intervene with updated variables. A ‘turnkey’ system that uses a pre-programmed set of assumptions to crank out a single answer is much too rigid to be useful in the dynamic environments in which people operate. I do not believe we are going to find any important decisions that we can fully and reliably trust only to tech.”
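A minimal sketch of the pattern Bock describes, a model with explicit, human-overridable assumption slots rather than a “turnkey” single answer, might look like the following. The variable names and numbers here are invented purely for illustration and are not drawn from her remarks.

```python
# Minimal sketch (a generic illustration, not Bock's own tooling) of a
# spreadsheet-style model with named intervention points where a human can
# substitute updated assumptions before accepting the answer.

DEFAULT_ASSUMPTIONS = {
    "demand_growth": 0.03,  # annual growth rate baked in by the modeler
    "unit_cost": 12.0,
    "price": 20.0,
}

def project_profit(units_now: float, years: int, overrides: dict | None = None) -> float:
    """Project profit from default assumptions, allowing human overrides."""
    a = {**DEFAULT_ASSUMPTIONS, **(overrides or {})}
    units = units_now * (1 + a["demand_growth"]) ** years
    return units * (a["price"] - a["unit_cost"])

# Turnkey answer vs. the same model after an expert's gut check
# ("growth will not hold at 3 percent; costs are rising"):
print(project_profit(1000, years=5))
print(project_profit(1000, years=5, overrides={"demand_growth": 0.0, "unit_cost": 14.0}))
```

The design choice is the one Bock argues for: the assumptions are visible and replaceable, so the human expert, not the pre-programmed defaults, gets the last word on whether the model’s answer is believable.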
Pat Kane, futurist and consultant at Pat Kane Global, said, “It’s obvious that the speed and possibility space of computation is bringing extraordinary new powers of material-shaping to humans’ hands. See AlphaFold’s 200 million protein-shape predictions. How can we bring a mind-modelling articulacy to the communication of these insights and their workings, indeed putting their discoveries at the service of human agency? The recent lesson from the Blake Lemoine LaMDA incident, reinforced by the Google AI executive Blaise Agüera y Arcas, is that advanced machine learning has a ‘modelling-of-mind’ tendency, which makes it want to respond to its human interlocutors in a sympathetic and empathetic manner. This particular evolution of AI may well be peculiarly human-centered.”
Tori Miller Liu, chief information officer, American Speech-Language-Hearing Association, commented, “Innovation is not usually altruistic. Investment in innovation may be driven by a desire for increased revenue, but the most sustainable solutions will also be ethical and equitable. There is less tolerance amongst users for unethical behavior, inaccessibility and lack of interoperability. The backlash experienced by Meta is an example of this consumer trend.
“Until someone can program empathy, human agency will always have a role in controlling decision-making. While AI may assist in decision-making, algorithms and datasets lack empathy and are inherently biased. Human control is required to balance and make sense of AI-based decisions. Human control is also required to ensure ethical technology innovation. Companies are already investing in meaningful standardization efforts. For example, the Metaverse Standards Forum or the Microsoft Responsible AI Standard are focused on improving customer experience by touting interoperability, transparency and equity.”
Jeff Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the City University of New York, wrote, “It is critical to get past the simplistic media-framed perspective about AI and machine learning to assure that people understand what these systems can and cannot do. They use large data sets to make predictions: about responses to queries, about human behavior and so on. That is what they are good at; little else. They will need to be tied with databases of reliable information. They will need to be monitored for quality and bias on input and output. They will be helpful.
“In the words of David Weinberger in his book, ‘Everyday Chaos,’ ‘Deep learning’s algorithms work because they capture better than any human can the complexity, fluidity and even beauty of a universe in which everything affects everything else, all at once.’ The interesting issues are that these systems will not be able to deliver a ‘why.’ They are complex A/B tests. They do not have reasoning or reasons for their decisions. As I wrote in a blog post, I can imagine a crisis of cognition in which humans—particularly media—panic over not being able to explain the systems’ outcomes.”
Melissa R. Michelson, dean of arts and sciences and professor of political science at Menlo College, wrote, “The trend I see in terms of AI-assisted life is that AI makes recommendations, while humans retain ultimate control. For example, texting and email systems often predict or suggest words or phrases, but humans retain the decision whether to accept those recommendations or to override them. While AI is likely to improve its ability to predict our needs by 2035, based on tracking of our behavior, there is still a need for a human to make final decisions, or to correct AI assumptions.
“In part, this is due to the inherent nature of human behavior—it is not always consistent or predictable, and AI is thus unable to always accurately predict what decision or action is appropriate to the moment. It is also due to the undermining of AI tracking that individuals engage in, either deliberately or unintentionally, as when they log in using another person’s account or share an email address, or when they engage in offline behavior.
“I expect that by 2035 there will be more automation of many routine activities, but only at the edges of our daily lives. Complex activities such as driving, teaching, writing and interacting with one another will still require direct human input. A shortcoming of AI is the persistent issue of racism and discrimination perpetuated by processes programmed under a system of white supremacy. Until those making the programming decisions become anti-racists, we will need direct human input to control and minimize the harm that might result from automated systems based on programming overwhelmingly generated by white men.”
Michael Pilos, marketing consultant with GLG in Cyprus, said, “I empathise with and respect the proactive approach from Elon Musk in protecting humanity from itself via AI. However, strong, sentient AI remains a unicorn myth so far because the machine learning process is unidimensional and has zero agency. Recent Google AI events have demonstrated just that. Now, it’s good to worry and good to protect our future from weaponized AI online or offline. But, in any reason and season, AI currently remains a glorified algorithm.”
Stephen D. McDowell, professor of communication and assistant provost, Florida State University, said, “Autonomous and artificial intelligence will assist in routines and repeated behaviors, which are often seen by individuals as decision-making. Some of these behaviors or choices fit into predictable daily, weekly or longer patterns, probabilities and cycles, but are perceived as individual decisions that we make. Consumption or purchasing likelihoods might be the most discoverable from past behavior.
“Autonomous systems may provide a more-informed and logical basis to making larger decisions, reminding us of stated values, preferences, likes and patterns of behavior that contribute to core utilities or identity. AI systems could provide a critical resource in unpacking product or service ‘brand’ identity and its connection to personal identity.
“These systems could assist in overcoming the problem of limited information or high information search costs that we face in major decisions (school or university to attend, jobs and career goals, buying a house, saving, etc.). These systems could also be used to remind us of the characteristics of individuals who are close to us, or of looser friend groups, that is, pulling together larger data sets and merging these for analysis of social networks and affiliations. This could help turn habits and practices into decisions.
“There have to be standards in AI systems to highlight the information sources and automated processes that are designed into systems or being used, so we understand more fully the information presented to us, our perceptions of our own values and preferences, and our choice environments. The challenge is figuring out how we can think about or conceptualize individual decisions when our information sources, online relationships and media environments are curated for us in feedback loops based upon demonstrated preferences and intended to enhance time engaged online with specific services. To enhance the quality of individuals’ and citizens’ decision-making, our systems will need some underlying model of the scope of choice and decision an individual person, citizen, family member or worker should have. It would need to go beyond the generalized image of a consumer or a member of the public.”
Karl M. van Meter, author of “Computational Social Science in the Era of Big Data” and leader with the Association Internationale de Méthodologie Sociologique, wrote, “In the automation cases of the Boeing 737 and of assembly lines where workers have died due to lack of control, higher echelons decided safer ‘tech-aided decision-making’ was either too expensive or couldn’t be installed on time. Such administrative decisions will very probably determine where we will be by 2035, if we all agree to keep technology out of making major social, political, economic, ecological and strategic decisions. Does anyone consider that the roles of presidents, senators, judges and other powerful political positions might be filled on the basis of ‘tech-aided decision-making’?
“The same question can be asked of the financial, economic and business sectors. Should housing, poverty, health and environment policies also be based on ‘tech-aided decision-making,’ or is it more likely that the public might prefer that these decisions come about through a process that includes discussion among human stakeholders, with technology simply providing additional information, analysis and the suggestion of possible future scenarios based on eventual choices made?
“We already have witnessed—on the negative side in the case of the Boeing 737—the result when tech-aided decision-making computer flight programs could not ‘easily be controlled’ and—on the positive side—‘tech-aided’ micro-surgery and tele-commanded operations in distant clinics during which no surgeon would ever let the tech work without being under control.”
Alex Simonelis, retired professor of computer science based in Quebec, wrote, “AI is nowhere near that level, and unlikely to be.”
John Hartley, professor of digital media and culture, University of Sydney, observed, “The question is not ‘What does decision-making tech do to us,’ but ‘Who owns it?’ Digital media technologies and computational platforms are globalising much faster than formal educational systems, faster indeed than most individual or community lives. They are however neither universal nor inclusive. Each platform does its best to distinguish itself from the others (they are not interoperable but they are in direct competition), and no computational technology is used by everyone as a common human system (in contrast to natural language).
“Tech giants are as complex as countries, but they use their resources to fend off threats from each other and from external forces (e.g., regulatory and tax regimes), not to unify their users in the name of the planet. Similarly, countries and alliances are preoccupied with the zones of uncertainty among them, not with planetary processes at large.
“Taken as a whole, over evolutionary and historical time, ‘we’ (H. sapiens) are a parochial, aggressive, faction- and fiction-driven species. It has taken centuries—and is an ongoing struggle—to elaborate systems, institutions and expertise that can exceed these self-induced boundaries. Science seeks to describe the external world but is still learning how to exceed its own culture-bound limits. Further, in the drive towards interpretive neutrality, science has applied Occam’s razor all the way down to the particle, whose behaviour is reduced to mathematical codes. In the process, science loses its connection to culture, which it must needs restore not by data but by stories.
“For their part, corporations seek to turn everyone into a consumer, decomposing what they see as ‘legacy’ cultural identities into infinitely substitutable units, of which the ideal type is the robot. They promote stories of universal freedom to bind consumers closer to the value placed on them in the information economy, which hovers somewhere between livestock (suitable for data-farming) and uselessness (replaceable by AI).
“Universal freedom is not the same as value. In practice, something can only have value if somebody owns it. Things that can’t be owned have no value: the atmosphere; biosphere; individual lives; language; culture. These enter the calculus of economic value as resource, impediment, or waste. In the computational century, knowledge has been monetised in the form of information, code and data, which in turn have taken the economic calculus deep into the space previously occupied by life, language, culture and communication. These, too, now have value. But that’s not the same as meaning.
“Despite what common sense might lead you to think, ‘universal freedom’ does not mean the achievement of meaningful senses of freedom among populations. Commercial and corporate appropriations of ‘universal freedom’ restrict that notion to the accumulation of property, for which a widely consulted league table is Forbes’ rich lists, maintained in real time, with ‘winners’ and ‘losers’ calculated on a daily basis.
“For their part, national governments and regulatory regimes use strategic relations not to sustain the world as a whole but for defence and home advantage. Strategy is used to govern populations (internally) and to outwit adversaries (externally). It is not devoted to the overall coordination of self-created groups and institutions within their jurisdiction, but to advantage corporate and political friends, while confounding foes. As a result, pan-human stories are riven with conflict and vested interests. It’s ‘we’ against ‘they’ all the way down, even in the face of global threats to the species, as in climate-change and pandemics.
“Knowledge of the populace as a whole tends to have value only in corporate and governmental terms. In such an environment, populations are known not through their own evolved cultural and semiotic codes, but as bits of information, understood as the private property of the collecting agency. A ‘semiosphere’ has no economic value, unlike ‘consumers’ and ‘audiences,’ from which economic data can be harvested. Citizens and the public (a.k.a. ‘voters’ and ‘taxpayers’) have no intrinsic value but are sources of uncertainty in decision-making and action. Such knowledge is monopolised by marketing and data-surveillance agencies, where ‘the people’ remain ‘other.’
“Population-wide self-knowledge, at semiospheric scale, is another domain where meaning is rich but value is small. Unsurprisingly, economic and governmental discourses routinely belittle collective self-knowledge that they deem not in their interests. (Thus, they might applaud ‘unions’ if they are populist-nationalist-masculine sporting codes, but campaign against self-created and self-organised unions among workers, women, and human-rights activists.) They pursue anti-intellectual agendas, since their interests are to confine the popular imagination to fictions and fantasies, and not to emancipate it into intellectual freedom and action. From the point of view of partisans in the ‘culture wars,’ the sciences and humanities alike are cast as ‘they’ groups, foreign—and hostile—to the ‘we’ of popular culture. Popular culture is continually apt to be captured by top-down forces with an authoritarian agenda. Popularity is sought not for universal public good but for the accumulation of private profit at corporate scale. As has been the case since ancient empires introduced the terms, democracy is fertile ground for tyranny.”
Ayden Férdeline, a public-interest technologist based in Berlin, Germany, commented, “In 2035 smart machines, bots and systems powered by AI and ML will invariably be more sophisticated and perhaps even more opaque than current technologies. In a world where sensors could be nearly invisible or installed in such great numbers that it is impractical to understand how they are surveilling us, persistent power imbalances have the potential to reorder our society in ways that cause more pains than gains. As the snowclone goes, ‘It’s the business model, stupid!’ Enabled by technological change, we have already seen a series of economic, social and cultural adaptations that have radically undermined the potential for the Internet and other emerging technologies to elevate human trust, agency and democratic values.
“Persistent market failures have arisen as dominant platforms and mysterious algorithms box consumers inside of echo chambers. It is difficult to imagine the same market that has supported these problematic practices to, in parallel, support the emergence of technologies that promote human autonomy and decision-making.
“Unless there are incentives to develop an alternate ecosystem—perhaps regulation is needed to impose some kind of duty of care on data-extractive businesses—the supply of suitable for-profit entities willing to voluntarily surrender power to consumers is likely to be adversely imbalanced against the demand from consumers for such a marketplace.
“Are consumers concerned enough about the risks associated with artificial intelligence and the deep analytics that AI can generate that they will actively seek out and shift their behavior to consciously adopt only the new technologies that support their agency and autonomy? Particularly if they don’t even know the current technologies are monitoring them? As it stands, in poll after poll, consumers seem to indicate that they don’t trust much of the information being shared on the World Wide Web and they say they believe their privacy and security is compromised—yet barely anyone uses an ad blocker on their web browser and billions of people use Facebook and Google, while they are quite cognizant of the fact that those companies’ business models are built off of monetizing their personal information.”
Daria J. Kuss, associate professor and lead for the Cyberpsychology Research Group, Nottingham Trent University, UK, said, “Today, we can already see the importance of tech in our lives via smart homes, smartphones and apps that control increasing numbers of aspects of our lives. Technology use is the status quo and our gadgets have become extensions of our physical and psychological selves. I have no doubt that this development towards singularity and transhumanism will keep progressing.”
Andrew Czernek, former VP at a major technology company, predicted, “Within 15 years most complicated decisions will be guided by machine wisdom. This includes most of the skills for which we train people today in trade and professional roles. Already we are prompting medical professionals broadly in analyzing physical problems, anticipating future problems and offering diagnoses.
“Expect it to expand into lower roles such as nurses, physical therapists, dental hygienists. It will also expand into more trade roles such as carpentry, plumbing and electricians.
“Where humans will be necessary is to judge when the ‘complicated’ decision—one based on a known, static but still very complicated system—is not appropriate. In these cases, there is more than one environment or activity to consider, making for a situation that is ‘complex’ and in which interactions are difficult to judge. A fine example of a ‘complex’ decision appears in stock or financial markets, where many small systems are interacting.
“As another example, within 15 years we’re unlikely to understand the interaction between certain mental conditions and physical ailments. Dementia or bipolar conditions still won’t be well enough understood to make artificial intelligence decisions about co-existing conditions like physical stability or symptoms that might look like Parkinson’s disease (but aren’t) or intestinal ailments. This will still make the skilled professional tradesperson important because you can’t ask AI to perform well in an unknown environment. As a result, you’ll see more use in ‘new construction’ situations rather than ‘maintenance and repair,’ where different environments are jumbled and complex.
“We will also start to define human activities in terms of ‘predictable,’ even if complex. Can a vehicle under control of AI navigate city streets safely? Can a long-distance truck navigate the country safely under control of AI? Can an aircraft safely navigate the country and weather in 3 dimensions under control of AI? Training requirements will make a major shift during this period. We’ll move from modeling a system to understanding models constructed by AI. A major part of this will be understanding the limitations of the AI decisions, specifically so humans can intervene.
“And we can anticipate some spectacular AI failures. People are already claiming that AI routines can predict market and pricing turns; such claims have burned hedge funds, exchange-traded funds and speculators when smarter humans realized the limitations of the models.”
Christopher Richter, professor and chair of communication studies, Hollins University, wrote, “I am only moderately optimistic that AI will give people more extensive control by 2035 for three overlapping reasons. First, it will be designed up front or leveraged for profit, not for social benefit. Second, as with social media, there will be unintended and unforeseen consequences, both positive and negative, of its development. Third, even given the ever-increasing rate of tech development, 13 years seems too soon for both development and adoption.
“An example that supports all three reasons: Consider the current case of self-driving vehicles. They are being developed for profit, the current target market is the wealthy, they will likely exacerbate some existing transportation problems (continued emphasis on personal vehicles using public highways), and there remains great skepticism and resistance to adoption.”
Bert Huang, a professor and scientist at Tufts University’s Data Intensive Studies Center whose research is focused on incorporating human knowledge into algorithms to make them more reliable, efficient and fair, said, “My pessimism about the chances these tools will be built with humans retaining agency comes from the fact that primitive versions of them allowing no human agency are already embedded in our society. I find it hard to imagine any efforts to counteract this trend outpacing the incentives to deploy new technology.”
Fred Baker, internet pioneer, longtime leader in the IETF and Cisco Systems Fellow, wrote, “I think people will remain in ultimate control of such decisions as people have a history of making their opinions known, if only in arrears. If a ‘service’ makes errors, it will find itself replaced or ignored.”
Jenny L. Davis, senior lecturer in sociology, Australian National University, and author of “The Power and Politics of Everyday Things,” commented, “The general retention of human decision-making will eventuate because the public will resist otherwise, as will the high-status professionals who occupy decision-making positions. I don’t think there will be a linear or uniform outcome in regard to who maintains control over decision-making in 2035. In some domains—such as consumer markets, low- and mid-level management tasks (e.g., resume sorting), and operation of driverless vehicles—the decisions will lean heavily towards full automation. However, in the domains accepted as subjective, high stakes and dependent on expert knowledge, such as medicine, judicial sentencing and essay grading, for example, human control will remain in place, albeit influenced or augmented in various capacities by algorithmic systems and the outputs those systems produce.”
Geoff Livingston, a digital marketing pioneer who is now marketing VP at Evalueserve, wrote, “Any fully automated decision-making en masse without a human approving it first and without the ability to retract it is a long, long way from being realized. I sincerely doubt that AI will become fully autonomous by 2035. The promises of general AI have mainly been unmet and in many cases have provided friction between businesses deploying them and their customers. Consider dissatisfaction and the lack of trust that continues to build between the public and AI-fueled social network content streams.
“Where AI is working well in the business world is via domain-specific use cases. This is when AI is guided by humans—subject matter experts, data scientists and technologists—to address a specific use case. In these instances, AI becomes highly effective in informing users, making recommendations, providing key intelligence related to a market decision, identifying an object and suggesting what it is. These domain-specific AI experiences with human guidance are the ones that are becoming widespread.
“So, when a business unleashes autonomous decision-making via a domain-specific AI on its customers and that experience is not awesome, you can rest assured 1) customers will leave and 2) competitors with a more user-friendly experience will welcome them. When a business suggests a customer use AI to better the experience, gives them the ability to opt-in and later opt-out at their discretion, successes will occur. In fact, more revenue opportunities may come by providing more and more manual human control.
“Let’s consider AI-driven autofocus on high-end cameras and iPhones alike. Autofocus suggests the subject, and even locks on and tracks it. In both systems, the photographers have the ability to override. In the case of a high-end camera, this comes with pinpoint joystick-driven focusing. In the case of an iPhone a photographer can use their finger for tactile focus guidance. The level of human control is a paid-for option.”
Michael Wollowski, professor of computer science, Rose-Hulman Institute of Technology, and associate editor of AI Magazine, said, “In order to ensure wide acceptability of these systems, the users need to be in charge of any decision made, whether it has a seemingly large impact or an apparently small one. Those systems need to be engineered to be a pleasant assistant to the user, like a personal assistant might be. However, a user should be able to override any decision for any reason. The system, just like the driving directions given by a navigation system, will continuously replan.
“Given that most humans are creatures of habit, all decisions that can be automated based on learning a human’s habits can be automated. Such systems should take into consideration human input, and they should ask the user whether they are sure they want to go through with a decision that the system deems to have a significant impact. However, it depends on the person. What I consider a high-impact decision, my next-door neighbor may not care about. Here too, the system just has to learn the user’s preferences.
“I expect that systems will let those willing to adopt them live a life of ‘luxury.’ Just as people with means employ gardeners, nannies, housekeepers, pool boys, personal assistants, etc., these systems will assume many subservient roles and free users of many tedious chores.”
Lenard Kaye, professor of social work, University of Maine, said, “I have faith that ultimately it will be perceived to be important that humans have significant input in the process of how technology-based decision-making advances.”
Frank Kaufmann, president of the Twelve Gates Foundation, listed four big points, writing: “1) Humans will use as many tech aids as possible; these will be as internal and as ‘bionic’ as possible, these machines will have the ability to learn, and virtually all will be powered by autonomous and artificial intelligence. 2) No key decisions will be automated; all key decisions will require human input. 3) There is no such thing as genuine autonomous decision-making; mechanical and digital decision-making will characterize machines. 4) These will help human society greatly, and the only negatives will be those perpetrated by bad or evil people. Apart from augmented capacity, machines will merely serve and enhance what humans want and want to do.”
Ian O’Byrne, an associate professor, College of Charleston, whose focus is on literacy and technology, wrote, “At this point, the Internet and the digital, social tools we use on a daily basis have largely become unintelligible to most individuals. We do not understand the ways in which these tools act or the origins of the data scooped up to inform these decisions. As machine learning and artificial intelligence continue to learn from these data streams, the number of people who understand this complexity and can address these instances is, I fear, very small. I think we’re already at a place where machines are teaching machines.
“When it comes to decision-making and human agency, most decisions will be made by autonomous and artificial intelligence. The devices will know when we wake, when we sleep, how we like the temperature and lights set and how much milk we need in the fridge. These are decisions that most of us will gladly hand over to some other entity to make our lives a bit easier. The challenge is to what extent do these decisions remain in the purview of mostly autonomous and artificial intelligence?
“I recently read that most flights are operated on autopilot, with the pilots rarely taking control of the plane. We see autonomous vehicles becoming increasingly more popular, with more and more decision-making in the vehicle being taken over by systems and algorithms. If and when there is a need to deviate from an established model, is that a possibility? For the most part, this broadening and accelerating of technology in society will improve the lives of most individuals. The challenge is the initial question: How many humans will have access to, or understanding of how these decisions are made and the data that informs them?”
Carl Schramm, professor of information science at Syracuse University and a leading authority on innovation, entrepreneurship and economic growth, commented, “Not only does the logic of decision-support technology work to displace the decision-making capacity of individuals, there is a denial of agency as they interface with such technology, e.g., the continued subtle changes in grammar rules imposed by Apple in its writing-assistance software, Pages (for example, placing the period of a complete sentence set out in parentheses after, rather than inside, the final parenthesis), and the continued push towards ‘germanization’ of English by Microsoft, which continuously puts hyphens in compound nouns.
“A much larger issue is the overall societal damage being done to human agency by social theorists who seek to absolve individuals of individual and social responsibilities. One incontestable example is the government’s Social Determinants of Health. This rhetorical device is continuously used in public policymaking to deny agency as central to individuals’ taking responsibility for protecting their own health.”
Andre Popov, principal software engineer at Microsoft, wrote, “Humans have already outsourced important decision-making in a number of areas, including stock trading and operating machinery/vehicles. This outsourcing happens wherever possible, as a cost-cutting measure, due to machines making decisions faster or in order to eliminate human error. Autonomous decision-making and improvements in AI further reduce the subset of the population that is needed for the society to operate. These trends make human society even more dependent on and susceptible to complex technology and infrastructure that no one person really understands end-to-end. On one hand, we have complex and therefore inherently fragile systems in charge of basic human needs. On the other hand, computer-assisted humans have less dependency on their own intellectual capabilities for survival.”
Aram Sinnreich, professor and chair of communication studies, American University, commented, “There is neither a political nor a financial incentive for powerful organizations to build systems that allow human beings to intercede into automated decision-making processes. This is already the case in 2022, when decisions about loan approval, bail amounts, healthcare options and other life-changing matters have been delegated to black-box algorithms embedded in unaccountable institutions. Arguably, part of the social function of AI is to serve as a (semi)credible mechanism for powerful institutions to avoid accountability for the consequences of their decisions.”
Cindy Cohn, executive director of the Electronic Frontier Foundation, wrote, “I expect that some humans will likely be in control of important decision-making for themselves. I do not think that this amount of control will be possessed by all humans. As with all technologies, the experience of them and the amount of control that is exercised will be different for different people and will likely track the power those people have in society. Marginalized people will likely be subjected to a range of decisions about their lives that are made by machines, bots and systems, with little control. I expect that this will be the case in situations involving public support, access to health care and necessities such as food, energy and water, law enforcement, and national security.”
Cláudio Lucena, member of the National Data Protection Council of Brazil and professor of law at Paraíba State University, commented, “For the sake of efficiency and agility, most processes will depend upon some extent of automation in 2035. Proper oversight will be demanded by some segments and groups, but their influence will not be strong enough to prevent the broader rollout of automated decision-making. It is possible that a grave, impactful event may somehow shake things up and alter economic, social and political priorities. Incremental steps toward some sort of oversight might be expected if that happens, but the automation path will move further in spite of this, barely embedding mild adjustments.”
Daniel Obam, communications policy advisor and futurist based in Kenya, wrote, “Humans will still be in charge of decision-making in 2035. AI systems will play an assistive role to humans in decision-making. Still, at that time a large percentage of the world’s human population will be unconnected, so there is still more work to be done. Manufacturing and other repetitive decision processes will be automated. Tech-abetted decision-making will change human interactivity by people not needing to have in-person interactivity to benefit.”
Marc Rotenberg, founder and president of the Center for AI and Digital Policy, said, “Over the next decade, laws will be enacted to regulate the use of AI systems that impact fundamental rights and public safety. High standards will be established for human oversight, impact assessments, transparency, fairness and accountability. Systems that do not meet these standards will be shut down. This is the essence of human-centric, trustworthy AI.”
Dan McGarry, journalist, editor and investigative reporter, said, “Human control of all decision-making must be vested in the technology itself as much as in law and regulation. Very little agency should be given to algorithmically-based services. While machine learning is an exceptionally good manner of facilitating interaction with large volumes of data, even apparently trivial decisions may lead to unforeseen negative consequences.
“The challenge we face in spreading the role of machine learning and algorithmically-driven tech is that it’s treated as proprietary ‘secret sauce,’ owned and operated centrally by companies capable of insanely resource-intensive computation. Until that changes, we face a risk of increased authoritarianism, surveillance and control over human behaviour, some of it insidious and unremarked.
“The problem is that machine learning, and especially the training of ML services, requires a kind of input to which most people are unaccustomed. The closest they come to interacting with learning algorithms are the ‘Like,’ ‘Block’ and ‘Report’ buttons. That communication and information exchange will have to involve a great deal more informed consent from individuals. If that happens, then it may become possible to train so-called AIs for numerous tasks. This interaction will, of necessity, take the form of a conversation—in other words, a multi-step, iterative communication allowing a person to refine their request and allowing the ‘AI’ to refine its suggestions. As with all relationships, these will, over time, become based on non-verbal cues as well as explicit instructions.
“Machine learning will, eventually, become affordable to all, and initiate fundamental changes in how people interact with ‘AIs.’ If and when that transpires, it may become possible to expand a person’s memory, their capacity for understanding, and their decision-making ability in a way that is largely positive and affirming, inclusive of other people’s perspectives and priorities. Such improvements could well transform all levels of human interaction, from international conflict to governance to day-to-day living. In short, it will not be the self-driving car that changes our lives so much as our ability to enhance our understanding and control over our minute-to-minute and day-to-day decisions.”
Adam Nagy, senior research coordinator, Berkman Klein Center for Internet and Society, Harvard University, predicted, “Under the upcoming European AI Act, higher-risk use cases of these technologies will demand more robust monitoring and auditing. I am cautiously optimistic that Europe is paving the way for other jurisdictions to adopt similar rules and that companies may find it easier and beneficial to adhere to European regulations in other markets.
“Algorithmic tools can add a layer of complexity and opacity for a layperson, but with the right oversight conditions, they can also enable less arbitrariness in decision-making and data-informed decisions. It is often the case that an automated system augments or otherwise informs a human decision-maker. This does come with a host of potential problems and risks.
“For example, a human might simply serve as a rubber stamp or decide when to adhere to an automated recommendation in a manner that reinforces their own personal biases. It is crucial to recognize that these risks are not unique to ‘automated systems’ or somehow abated by human-led systems. The true risk is in any system that is unaccountable and does not monitor its impacts on substantive rights.”
Cathy Cavanaugh, chief technology officer at the University of Florida Lastinger Center for Learning, predicted, “The next 12 years will be a test period for IT policy. In countries and jurisdictions where governments exert more influence, limitations and requirements on technology providers, humans will have greater agency: they will be relieved of the individual burden of understanding algorithms, data risks and other implications of agreeing to use a technology, because governments will take on that responsibility on behalf of the public, just as they do in other sectors where safety and expert assessment of safety are essential, such as building construction and restaurants.
“In these places, people will feel more comfortable using technology in more aspects of their lives and will be able to allocate more repetitive tasks such as writing, task planning and basic project management to technology. People with this technology will be able to spend more time in interactions with each other about strategic issues and leisure pursuits. Because technology oversight by governments will become another divide among societies, limitations on with whom and in what ways a person uses an application may follow geographic borders.”
Ulf-Dietrich Reips, professor and chair for psychological methods, University of Konstanz, Germany, wrote, “Many current issues with control of important decision-making will in the year 2035 have been worked out, precisely because we are raising the question now. Fundamental issues with autonomous and artificial intelligence will have come to light, and we will know much better if they can be overcome or not. Among the ‘we’ may then actually be some autonomous and artificial intelligence systems, as societies (and ultimately the world) will have to adapt to a more hybrid human-machine mix of decision-making. Decision-making will need to be guided by principles of protection of humans and their rights and values, and by proper risk assessment. Any risky decision should require direct human input, although not necessarily only human input, and procedures for human decision-making based on machine input most certainly need to be developed and adapted. A major issue will be the tradeoff between individual and society. But that in itself is nothing new.”
John Verdon, a Canada-based consultant on complexity and foresight, wrote, “First—as Marshall McLuhan noted—technology is the most human part of us—language and culture are technologies—and this technology liberated humans from the need to translate learnings into genes (genetic code) and enabled learning to be coded into memes (language and behavior that can be taught and shared). This enabled learning to expand, be combined, archived and more. Most of the process of human agency is unconscious.
“The challenge of a civilization sustaining and expanding its knowledge base—its ‘know-how’ (techne)—is accelerating: every generation has to be taught all that is necessary to be fluent in an ever-wider range of know-how. Society’s ‘know-how’ ecology is increasing in niche density at an accelerating rate (the knowledge and know-how domains that enable a complex political-economy), so yes—AI will be how humans ‘level up’ to ensure a continuing flourishing of knowledge fluency and ‘know-how’ agency, much as calculators and computers are used for math and physics.
“The key is ‘accountability’ and response-ability—for that we need all software to shift to open-source platforms—with institutional innovations. For example—Auditor Generals of Algorithms—similar to the FDA or Health Canada (does that algorithm do what it says it does—what are the side effects—what are the approved uses—who will issue ‘recall warnings’—etc.) Humans became humans because they domesticated themselves via techne (know-how) and enabling ‘built environments’. Vigilance with Responsibility is the key to evolving a flourishing world.”
Willie Curry, a longtime global communications policy expert based in Africa, said, “My assumption is that over the medium term, two things will happen: greater regulation of tech and a greater understanding of human autonomy in relation to machines. The combination of these factors will hopefully prevent the dystopian outcome from occurring, or at least mitigate any negative effects. Two factors will operate as countervailing forces, pushing towards dystopian outcomes: the amorality of the tech leaders and the direction of travel of autocratic regimes.”
J. Meryl Krieger, senior learning designer at the University of Pennsylvania, said, “It’s not the technology itself that’s at issue, but who has access to it. Who are we considering to be ‘people?’ People of means will absolutely have control of decision-making relevant to their lives. The disparities in technology access need to be addressed. This issue has been in front of us for most of the past two decades, but there’s still so much insistence on technology as a luxury—to be used by those with economic means to do so—that the reality of it as a utility has still not been sorted out. Until internet access is regulated like telephone access, or power or water access, none of the bots and systems in development or in current use are relevant to ‘people.’ We’re still treating this like a niche market and assuming that this market is all ‘people.’”
Jaak Tepandi, professor emeritus of knowledge-based systems at Tallinn University of Technology, Estonia, commented, “In 2035, we will probably still be living in an era of far-reaching benefits from artificial intelligence. Governments are beginning to understand the dangers of unlimited artificial intelligence, and new legislation and standards are being developed. If people can work together, this era can last a long time.
“The longer term is unstable. Conflicts are an integral part of complex development. History shows that humans have difficulty managing their own intelligence. The relationships between humans and machines, robots and systems are like the relationships between humans themselves. Human conflicts are reflected in artificial intelligence and amplified in large robot networks of both virtual and physical reality. People would do well to defend their position in this process.
“Maintaining control over the physical world and critical infrastructure and managing AI decisions in these areas is critical. Smart and hostile AI can come from sophisticated nation-state-level cyber actors or individual attackers operating anywhere in the world. Those who do not control will be controlled, perhaps not in their best interest.”
John McNutt, professor emeritus of public policy and administration, University of Delaware, wrote, “The technology will clearly make these things possible. I have little doubt that we have the ability to create such machines. Whether we will use our creations like this will depend on culture, social structure and organization and public policy. We have a long history of resistance to tools that will make our lives better. The lost opportunities are often depressing.”
Erhardt Graeff, a researcher at Olin College of Engineering and an expert in the design and use of technology for civic and political engagement, wrote, “Though the vast majority of decisions will be made by machines, the most important ones will still require humans to play critical decision-making roles. What I hope and believe is that we will continue to expand our definition and comprehension of important decisions demanding human compassion and insight. As Virginia Eubanks chronicled in her book ‘Automating Inequality,’ the use of machines to automate social service provision has made less humane the important and complex decision of whether to help someone at their most vulnerable. Through advocacy, awareness and more-sophisticated and careful training of technologists to see the limits of pure machine logic, we will roll back the undemocratic and oppressive dimensions of tech-aided decision-making and limit their application to such questions.”
Jim Fenton, an independent network privacy and security consultant and researcher who previously worked at OneID and Cisco, responded, “I’m somewhat optimistic about our ability to retain human agency over important decisions in 2035. We are currently in the learning stage of how best to apply artificial intelligence and machine learning. We’re learning what AI/ML is good at (picking up patterns that we humans may not notice) and its limitations (primarily the inability to explain the basis for a decision made by AI/ML).
“Currently, AI is often presented as a magic solution to decision problems affecting people, such as whether to deny someone the ability to do something that is seen as fraudulent. But errors in these decisions can have profound effects on people, and the ability to appeal them is limited because the algorithms don’t provide a basis for why the decisions were made. By 2035, there should be enough time for lawsuits about these practices to have been adjudicated and for us as a society to figure out the appropriate and inappropriate uses of AI/ML.”
Jane Gould, founder of DearSmartphone, said, “The next generation, born, say, from 2012 onward, will be coming of age as scientists and engineers by 2035. Evolving these tools to serve human interests well will seem very natural and intuitive to them.”
Ebenezer Baldwin Bowles, an activist and voice of the people who blogs at corndancer.com, wrote, “By keeping the proletariat clueless about the power of technology to directly and intentionally influence the important decisions of life, big money and big government will thrive behind a veil of cyber mystery and deception, designed and implemented to confuse and manipulate.
“To parse the question, it is not that humans won’t be in control, but rather that things won’t ‘easily be in control.’ (That is, of course, if we as a human race haven’t fallen into global civil warfare, insurrection and societal chaos by 2035, which some among us suspect is a distinct possibility.) Imagine yourself to be a non-expert and then see how not easy it becomes to navigate the cyberlands of government agencies, money management, regulatory bodies, medical providers, telecommunications entities and Internet pipelines (among others, I’m sure). Nothing in these realms shall come easily to most citizens.
“Since the early aughts I’ve maintained that there is no privacy on the Internet. I say the same now about the illusion of control over digital decision-making powered by AI. The choices offered, page to page, belong to the makers.
“We are seldom—and ne’er fully—in charge of technology—that is, unless we break connections with electricity. Systems are created by hundreds of tech-team members, who sling together multiple algorithms into programs and then spill them out into the universe of management and masters. We receive them at our peril.”
Jonathan Grudin, principal researcher at Microsoft and affiliate professor at the University of Washington Information School, observed, “People won’t control a lot of important decision-making in the year 2035. We’re already losing control.
- When Google exhibits the editorial control that has long been expected of publishers by removing 150,000 videos, turning off comments on more than 600,000, and removing ads from nearly 2 million videos and more than 50,000 channels, algorithms decide. Overall, this is a great service, but thousands of false alarms will elude correction.
- When an algorithm error dropped a store from Amazon, humans were unable to understand and fix the problem.
- A human resources system that enforces a rule where it shouldn’t apply can be too much trouble for a manager to contest, even if it may drive away valued employees.
- Human agency is undermined by machine learning that finds effective approaches to convince almost any individual to buy something they don’t need and can’t afford.
“Our sense of control is increasingly illusory. Algorithms that support management and marketing decisions in some organizations operate on a scale too extensive for humans to validate specific decisions. Unless the machine stops, this will spread by 2035, and not just a little.”
Marija Slavkovik, professor of information science and AI, University of Bergen, Norway, commented, “When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence?
“It is, of course, impossible to speak with certainty about the future. We can only observe trends and interpret them with the information we have. Automation has always been used to do away with work that people do not want to do. This type of work is typically low-paid, difficult or tedious. In this respect automation supports human agency. It is tempting to automate tasks and decisions related to jobs that are expensive. But, typically, expensive jobs are hard to automate because they require specific difficult-to-acquire skills or flexibility in task description. In some settings, we automate some parts of a job in order to augment the activities of the human expert. This requires that the human is left in control.
“In addition, legislation and regulation are globally moving toward stricter governance of automated decision-making. The goal of that legislation is protecting human agency and values. There is no reason why this trend would stop.”
Kurt Erik Lindqvist, CEO and executive director, London Internet Exchange, wrote, “Absent breakthroughs in the underlying math supporting AI and ML, we will continue to gain from the advances in storage and computing, but we will still have narrow individual applications. We will see parallel AI/ML functions cooperating to create a seamless user experience in which the human interaction will be with the system as a whole, assisted by guidance from each individual automated decision-making function.
“Human decisions will be required where interaction with other humans is at stake. This will be due to regulation rather than technical limits. Through automated decision-making, many routine tasks will disappear from our lives.”
Greg Lindsay, non-resident senior fellow at the Atlantic Council’s Scowcroft Strategy Initiative, commented, “Humans will be out of the loop of many important decisions by 2035, but they shouldn’t be. And the reasons will have less to do with the evolution of the technology than with politics, both big and small. For example, given current technological trajectories, we see a bias toward large, unsupervised models such as GPT-3 or DALL-E 2, trained on data sets riddled with cognitive and discriminatory biases using largely unsupervised methods. This produces results that can sometimes feel like magic (or ‘sapience,’ as one Google engineer has insisted) but will more often than not produce results that can’t be queried or audited.
“I expect to see an acceleration of automated decision-making in any area where the politics of such a decision are contentious—areas where hard-coding and obscuring the apparatus are useful to those with power and deployed on those who do not.
“Seemingly superior results and magical outcomes—e.g., an algorithm trained on historical crime rates to ‘predict’ future crimes—will be unthinkingly embraced by the powers-that-be. Why? First, because the results of automated decision-making along these lines will preserve the current priorities and prerogatives of institutions and the elites who benefit from them. A ‘pre-crime’ system built on the algorithm described above and employed by police departments will not only reproduce those outcomes ad infinitum, it will be useful for police to do so. Second, removing decisions from human hands and placing them under the authority of ‘the algorithm’ will only make it that much more difficult to question and challenge the underlying premises of the decisions being made.”
Brad Templeton, internet pioneer, futurist and activist, chair emeritus of the Electronic Frontier Foundation, wrote, “The answer is both. Some systems will be designed in this way, others will not. However, absent AGI with its own agency, the systems which make decisions will be designed or deployed by some humans according to their will, but that’s not necessarily the will of the person using the system or affected by the system. This exists today even with human customer service agents, who are given orders and even scripts to use in dealing with the public. They are close to robots with little agency of their own—which is why we always want to ‘talk to a supervisor’ who has agency. Expect the work of these people to be replaced by AI systems when it is cost-effective and the systems are capable enough.”
Zizi Papacharissi, professor of communication and political science, University of Illinois-Chicago, responded, “Humans will be in control of important decision-making by 2035, but they will not be alone in that decision-making. They will share agency in decision-making with a number of automated processes that will design options for them to choose from. Our agency will further be defined by the abilities of the technologies we create and the political/economic/socio-cultural boundaries that shape these technologies.
“A metaphor: We have agency or autonomy within our private homes, but this agency/autonomy is defined by what the concept ‘home’ means in a contemporary society shaped by the economics of attention.”
Richard Watson, author of “Digital vs. Human: How We’ll Live, Love and Think in the Future,” commented, “I think 2035 is too soon for machines to be taking control of most or all important human decision-making. It is far more likely that humans will cooperate and collaborate with machines, which we will still largely see as tools as opposed to some kind of higher intellect. In terms of making important decisions, humans will still have oversight and we will still trust human judgment ahead of AIs in most important cases.
“By 2035 it’s possible that we will have delegated most small decisions (e.g., replenishing groceries) and may have fully automated other matters (e.g., hiring and firing, sentencing), but it is a bit early for deeper dependence on machines—2045 is more likely. Also, even if the tech has developed to the point where it’s more reliable than humans (e.g., passenger drones vs piloted helicopters) I still think the historical inertia will be significant and society and government in particular may hold things back (not a bad thing in many cases).
“This isn’t to say that the tech companies won’t try to remove individuals’ agency though, and the work of Shoshana Zuboff is interesting in this context. How might automated decision-making change human society? As Zuboff asks: Who decides? Who is making the machines and to what ends? Who is responsible when they go wrong? What biases will they contain? I think it was Sherry Turkle who asked whether machines that think could lead us to becoming humans who don’t. That’s a strong possibility, and we can see signs of it already.
“The key decisions that will not be fully automated by 2035 are likely to involve matters of human life and death, although autonomous weapons systems could be an exception in some countries. So, for example, in healthcare the final decision of whether to assist in ending a human life will likely remain a human one.”
Jeff Johnson, a professor of computer science at the University of San Francisco who previously worked at Xerox, HP Labs and Sun Microsystems, wrote, “Some AI systems will be designed as ‘smart tools,’ allowing human users to be the main controllers, while others will be designed to be almost or fully autonomous. I say this because some systems already use AI to provide more user-friendly control.
“For example, cameras in mobile phones use AI to recognize faces, determine focal distances and adjust exposure. Current-day consumer drones are extremely easy to fly because AI software built into them provides stability and semi-automatic flight sequences. Current-day washing machines use AI to measure loads, adjust water usage and determine when they are unbalanced. Current-day vehicles use AI to warn of possible obstacles or unintended lane changes. Since this is already happening, the use of AI to enhance ease-of-use without removing control will no doubt continue and increase.
“On the other hand, some systems will be designed to be highly—perhaps fully—autonomous. Some autonomous systems will be beneficial in that they will perform tasks that are hazardous for people, e.g., find buried land mines, locate people in collapsed buildings, operate inside nuclear power plants, operate under water or in outer space. Other autonomous systems will be detrimental, created by bad actors for nefarious purposes, e.g., delivering explosives to targets or illegal drugs to dealers.”
Sam Lehman-Wilzig, author of “Virtuality and Humanity” and professor at Bar-Ilan University, Israel, said, “One can posit that many (perhaps most) people throughout history have been perfectly happy to enable a ‘higher authority’ (God, monarch/dictator, experts, technocrats, etc.) to make important decisions for them (see: Erich Fromm, ‘Escape from Freedom’).
“For many important choices we will not care much that the AI-directed decisions will be autonomous (where we will not be in control)—much as many of today’s governmental ‘nudges’ seem not to be a problem for most people. Thus, once we can be relatively assured that such AI decision-making algorithms/systems have no more (and usually fewer) inherent biases than human policymakers, we will be happy to have them ‘run’ society.
“This will be the case especially in the macro-public sphere. On the private plane we will be in control of the AI (‘partner’)—using it to make decisions within our personal life. On the micro, personal level, AI ‘brands’ will be competing in the marketplace for our use—much like Instagram, Facebook, Twitter, TikTok compete today—designing their AI ‘partners’ for us to be highly personalized, with our ability to input our values, ethics, mores, lifestyle, etc., so that the AI’s personalized ‘recommendations’ will fit our goals to a large extent. In short, the answer to the survey question is ‘no’ (on the macro plane): humans will not be in charge of decisions/policy; and ‘yes’ (on the micro level): we will partner with AI for decision-making that is best attuned to our personal preferences.”
Hari Shanker Sharma, professor of neurobiology at Uppsala University, Sweden, commented, “Yes, I expect people will have some control. Good and evil are integral parts of existence. Technology is neutral, but evil can use it for destruction. Example: nuclear energy can be used for bombs and as a power source. The same is true for AI and the internet. E.g., bitcoin can be used for good and bad. Social media can be used for good and bad. The use of AI-based weapons in recent conflicts is another example.”
David Weinberger, senior researcher at Harvard’s Berkman Klein Center for Internet and Society, commented, “Machine learning models’ interests can and should be regulated and held up for public debate. That could alter our idea of our own autonomy, potentially in very constructive ways, leading us to assume that our own interests likewise affect more than our own selves and our own will. But this assumes that regulators and the public will do their jobs of making machine learning models’ interests—their objective functions—public objects subject to public control.
“Autonomous selves have interests that they serve. Those interests have to be made entirely explicit and measurable when training a machine learning model; they are objects of discussion, debate and negotiation. That adds a layer of clarity that is often (usually?) absent from autonomous human agents.
“There is certainly a case for believing that humans will indeed be in control of making important decisions in the year 2035. I see humans easily retaining decision-making control over things like whom to marry, what career to pursue, whether to buy or rent a home, whether to have children, which college to go to (if any), and so forth. Each of those decisions may be aided by machine learning, but I see no reason to think that machine learning systems will actually make those decisions for us. Even less-important personal decisions are unlikely to be made for us. For example, if an online dating app’s ML models get good enough that the app racks up a truly impressive set of stats for dates that turn into marriages, when it suggests to you that so-and-so would be a good match, you’ll still feel free to reject the suggestion. Or so I assume.
“But not all important decisions are major decisions. For example, many of us already drive cars that slam on the brakes when they detect an obstacle in the road. They do not ask us if that’s OK; they just bring the car to a very rapid halt. That’s a life-or-death ‘decision’ that most of us want our cars to make because the car’s sensors plus algorithms can correct for human error and the slowness of our organic reactions. And once cars are networked while on the road, they may take actions based on information not available to their human drivers, and so long as those actions save lives, decrease travel times, and/or lower environmental impacts, many if not most of us will be OK with giving up a human autonomy based on insufficient information.
“But an uninformed or capricious autonomy has long been understood to be a false autonomy: in such cases we are the puppets of ignorance or short-sighted will. Delegating autonomy can itself be a proper use of autonomy. In short, autonomy is overrated. The same sort of delegation of autonomy will likely occur far more broadly. If smart thermostats keep us warm, save us money and decrease our carbon footprints we will delegate to them the task of setting our house’s temperature. In a sense, we already do that when we set an old-fashioned thermostat, don’t we?
“But there are more difficult cases. For example, machine learning models may well get better at diagnosing particular diseases than human doctors are. Some doctors may well want to occasionally override those diagnoses for reasons they cannot quite express: ‘I’ve been reading biopsy scans for 30 years, and I don’t care what the machine says, that does not look cancerous to me!’ As the machines get more and more accurate, however, ‘rebellious’ doctors will run the risk of being sued if they’re wrong and the machine was right. This may well intimidate doctors, preventing them from using their experience to contradict the output from the machine learning system. Whether this abrogation of autonomy is overall a good thing or not remains to be seen.
“Finally, but far from least important, is to ask what this will mean for people who lack the privileges required to exercise autonomy. We know already that machine learning models used to suggest jail sentences and conditions of bail are highly susceptible to bias. The decisions made by machine learning that affect the marginalized are likely to be a) less accurate because of the relative paucity of data about the marginalized most affected by them; b) less attuned to their needs because of their absence from the rooms where decisions about what constitutes a successful model are made; and c) harder to appeal, because the marginalized have less power to get redress for bad decisions made by those models. Does this mean that the ‘autonomy gap’ will increase as machine learning’s sway increases? Quite possibly. But it’s hard to be certain because while machine learning models can amplify societal biases, they can also remove some elements of those biases. Also, maybe by 2035 we will learn to be less uncaring about those whose lives are harder than our own. But that’s a real longshot.
“As for less-direct impacts of this delegation of autonomy, on the one hand, we’re used to delegating our autonomy to machines. I have been using cruise control for decades because it’s better at maintaining a constant speed than I am. Now that it’s using machine learning, I need to intervene less often. Yay!
“But as we delegate higher-order decisions to the machines, we may start to reassess the virtue of autonomy. This is partly because we’ll have more successful experience with that delegation, and partly because we may come to reassess the concept of autonomy itself. Autonomy posits an agent sitting astride a set of facts and functions. That agent formulates a desire and then implements it. Go, autonomy! But this is a pretty corrupt concept. For one thing, we don’t input information that (if we’re rational) determines our decision. Rather, in the process of making a decision we decide which information to credit and how to weigh it. That’s exactly what machine learning algorithms do with data when constructing a model.”
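To make Weinberger’s point about weighing information concrete, here is a minimal, purely illustrative sketch (an editor’s illustration, not drawn from his comments): a model “decides how much to credit” each input by learning weights from data. It assumes scikit-learn and NumPy are available, and the feature names are invented.

```python
# Illustrative only: a model learns how heavily to weigh each input,
# much as a person decides which information to credit in a decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # three hypothetical inputs
# Synthetic outcome that depends mostly on the first input, a little on the third.
y = (1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)
for name, weight in zip(["income", "tenure", "debt"], model.coef_[0]):
    print(f"{name}: learned weight {weight:+.2f}")
```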
Greg Sherwin, a leader in digital experimentation with Singularity University, predicted, “Decision-making and human agency will continue to follow the historical pattern to date: a subset of people with ownership and control of the algorithms will exert exploitative powers over labor, markets and other humans. They will also operate with a presumption of guilt, treating the lack of an algorithmic flag as a kind of machine-generated alibi. Key decisions that will be heavily automated include outsourced ethics for employment decisions, medical diagnoses and decisions, compliance decisions, some court decisions, criminal law and enforcement.”
Micah Altman, social and information scientist at MIT’s Center for Research in Equitable and Open Scholarship, wrote, “‘The fault, dear Brutus, is not in our stars but in ourselves that we are underlings.’ Decisions affecting our lives are increasingly governed by opaque algorithms, from the temperature of our office buildings to what interest rate we’re charged for a loan to whether we are offered bail after an arrest. More specifically, complex, opaque, dynamic and commercially developed algorithms are increasingly replacing complex, obscure, static and bureaucratically authored rules.
“Over the next decade and a half, this trend is likely to accelerate. Most of the important decisions affecting us in the commercial and government sphere will be ‘made’ by automated evaluation processes. For the most high-profile decisions, people may continue to be ‘in the loop’, or even have final authority. Nevertheless, most of the information that these human decision-makers will have access to will be based on automated analyses and summary scores—leaving little for nominal decision-makers to do but flag the most obvious anomalies or add some additional noise into the system.
“This outcome is not all bad. Despite many automated decisions being outside of both our practical and legal (if nominal) control, there are often advantages to a shift to out-of-control automaticity. Algorithmic decision systems often make mistakes, embed questionable policy assumptions, inherit bias, are gameable and sometimes result in decisions that seem (and for practical purposes, are) capricious. But this is nothing new—other complex human decision systems behave this way as well, and algorithmic decisions often do better, at least in the ways we can most readily measure. Further, automated systems, in theory, can be instrumented, rerun, traced, verified, audited and even prompted to explain themselves—all at a level of detail, frequency and interactivity that would be practically impossible to conduct on human decision systems: This affordance creates the potential for a substantial degree of meaningful control.
“In current practice, algorithmic auditing and explanation require substantial improvement. Neither the science of machine learning nor the practice of policymaking has kept pace with the growing importance of designing algorithmic systems such that they can provide meaningful auditing and explanation.
- Meaningful control requires that algorithms provide truthful and meaningful explanations of their decisions, both at the individual decision scale and at the aggregate policy scale. And, to be actionable, algorithms must be able to accurately characterize the what-ifs, the counterfactual changes in the human-observable inputs and contexts of decisions that will lead to substantially different outcomes. While there is currently incremental progress in the technical and policy fields in this area, it is unlikely to catch up with the accelerating adoption of automated decision-making over the next 15 years.
- Moreover, there is a void of tools and organizations acting directly on behalf of the individual. Instead, most of our automated decision-making systems are created, deployed and controlled by commercial interests and bureaucratic organizations.
- We need better legal and technical mechanisms to enable the creation, control and auditing of AI agents, and we need organizational information fiduciaries to represent our individual (and group) interests in real-time control and understanding of an increasingly automated world.
“There is little evidence that these will emerge at scale over the next 15 years. The playing field will remain slanted.”
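Altman’s call for actionable “what-ifs” can be illustrated with a small counterfactual search. The sketch below is an editor’s illustration under invented assumptions (a toy scoring rule standing in for an opaque model); it simply looks for the smallest change to one input that would flip a denial into an approval.

```python
# Illustrative only: a brute-force counterfactual ("what-if") search over a
# single input of a toy, stand-in decision rule.
def approve(income: float, debt: float) -> bool:
    """Stand-in for an opaque automated decision."""
    return income * 0.4 - debt * 0.9 > 10.0

def income_counterfactual(income: float, debt: float, step: float = 0.5, limit: float = 200.0):
    """Return the smallest income increase that flips a denial into an approval."""
    if approve(income, debt):
        return 0.0
    delta = step
    while delta <= limit:
        if approve(income + delta, debt):
            return delta
        delta += step
    return None  # no actionable change found within the search limit

# 'Your application would have been approved with roughly 6.5 more units of income.'
print(income_counterfactual(income=30.0, debt=5.0))
```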
Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, commented, “This question can be interpreted multiple ways: could there be any technology that allows people to be in control, will some such technology exist, and will most technology be like that? My response is that the technology will exist. It will have been created. But it is not at all clear that we will be using it.
“There will definitely be decisions out of our control, for example, whether we are allowed to purchase large items on credit. These decisions are made autonomously by the credit agency, which may not use autonomous agents. If the agent denies credit, there is no reason to believe that a human could, or even should, be able to override this decision.
“A large number of decisions like this about our lives are made by third parties and we have no control over them, for example, credit ratings, insurance rates, criminal trials, applications for employment, taxation rates. Perhaps we can influence them, but they are ultimately out of our hands.
“But most decisions made by technology will be like those made by a simple technology, for example, a device that controls the temperature in your home. It could function as an autonomous thermostat, setting the temperature based on your health, on external conditions, on your finances and on the cost of energy. The question boils down to whether we could control the temperature directly, overriding the decision made by the thermostat.
“For something simple like this, the answer seems obvious: yes, we would be allowed to set the temperature in our homes. For many people, though, it may be more complex. A person living in an apartment complex, condominium or residence may face restrictions on whether and how they control the temperature.
“Most decisions in life are like this. There may be constraints such as cost, but generally, even if we use an autonomous agent, we should be able to override it. For most tasks, such as shopping for groceries or clothes, choosing a vacation destination, or selecting videos to watch, we expect to have a range of choices and to be able to make the final decisions ourselves. Where people will not have a sufficient range of control, though, is in the choices that are available to us. We are already seeing artificial intelligences used to shape market options to benefit the vendor by limiting the choices the purchaser or consumer can make.
“For example, consider the ability to select what things to buy. In any given category, the vendor will offer a limited range of items. These menus are designed by an AI and may be based on your past purchases or preferences but are mostly (like a restaurant’s specials of the day) based on vendor needs. Such decisions may be made by AIs deep in the value chain; market prices in Brazil may determine what’s on the menu in Detroit.
“Another common example is differential pricing. The price of a given item may be varied for each potential purchaser based on the AI’s evaluation of the purchaser’s willingness to pay. We don’t have any alternatives—if we want that item (that flight, that hotel room, that vacation package) we have to choose between the prices the vendors choose, not all prices that are available. Or you may want heated seats in your BMW, but the only option is an annual subscription—really.
“Terms and conditions may reflect another set of decisions being made by AI agents that are outside our control. For example, we may purchase an e-book, but the book may come with an autonomous agent that scans your digital environment and restricts where and how your e-book may be viewed. Your coffee maker may decide that only approved coffee containers are permitted. Your car (and especially rental cars) may prohibit certain driving behaviours.
“All this will be the norm, and so the core question in 2035 will be: What decisions need (or allow) human input? The answer to this, depending on the state of individual rights, is that they might be vanishingly few. For example, we may think that life and death decisions need human input. But it will be very difficult to override the AI even in such cases. Hospitals will defer to what the insurance company AI says, judges will defer to the criminal AI, pilots like those on the 737 MAX cannot override and have no way to counteract automated systems. Could there be human control over these decisions being made in 2035 by autonomous agents? Certainly, the technology will have been developed. But unless the relation between individuals and corporate entities changes dramatically over the next dozen years, it is very unlikely that companies will make it available. Companies have no incentive to allow individuals control.”
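Downes’s thermostat example describes a common design pattern: an autonomous controller whose output a human can always override. The sketch below is an editor’s illustration with invented class names and policy details, not a description of any real product.

```python
# Illustrative only: an autonomous controller that chooses a setting on its own
# but always yields to a direct human setting.
from typing import Optional

class AutonomousThermostat:
    def __init__(self, default_c: float = 20.0):
        self.default_c = default_c
        self.manual_override_c: Optional[float] = None

    def set_manually(self, temperature_c: float) -> None:
        """Direct human input: always wins over the automated choice."""
        self.manual_override_c = temperature_c

    def clear_override(self) -> None:
        self.manual_override_c = None

    def target_temperature(self, outside_c: float, energy_price: float) -> float:
        if self.manual_override_c is not None:
            return self.manual_override_c  # the human stays in control
        # Automated policy: trim the target when energy is expensive or it is mild outside.
        target = self.default_c
        if energy_price > 0.30:
            target -= 1.0
        if outside_c > 15.0:
            target -= 0.5
        return target

stat = AutonomousThermostat()
print(stat.target_temperature(outside_c=5.0, energy_price=0.40))  # automated: 19.0
stat.set_manually(22.5)
print(stat.target_temperature(outside_c=5.0, energy_price=0.40))  # overridden: 22.5
```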
Michael G. Dyer, professor emeritus of computer science, University of California-Los Angeles, wrote, “The smartest humans create the products of convenience that the rest of us use on a daily basis. A major goal of those smartest humans is to make a product easily usable without the user having to understand how the product works or how it was constructed.
“I turn on a flat-screen TV and use its controls to navigate the internet without having to understand its internal structure or manufacture. I get into a car and drive it in similar fashion. Many extremely important decisions are being made without input from a majority of humans.
“Digital weapons of war (attack drones, future robot warriors under development, cyber weapons) are being developed and deployed by major governments without input from a majority of their citizens and, as we can see in the Ukraine, those weapons have major consequences.
“Heads of major tech companies make key decisions about how their products will affect the public (examples: in terms of surveillance and info gathering on their consumers) without supplying much if anything in the way of human agency. While we will remain in control of products of convenience in 2035 (that’s what makes them convenient), we will continue to lose control in terms of major command-and-control systems of big tech and government.
“In fact, major tech-driven decisions affecting the rest of us are being made by smaller and smaller groups of humans. For autocratic governments this is business as usual. In democracies the law-making branches seem unable to understand the most basic aspects of emerging technologies. It is nearly impossible that they would ever come to craft proper laws to democratize technologies at the systems level. The proof: the democratizing regulation of technology has only been somewhat successful at the convenience-product-use level.”
Neil Davies, co-founder of Predictable Network Solutions and a pioneer of the committee that worked on the UK’s initial networking developments, said, “As someone who has designed and taken to production large-scale systems, I am abundantly aware that the feasibility of executing fully autonomous systems is, for all practical purposes, zero. The main reason is the ontological/epistemological chasm: People forget that machines (and the systems they create) can only ‘know’ what they have ‘experienced’—the things they have been exposed to. They, by definition, cannot reach out to a wider information base and they can’t create ‘ontological’ framing. And that framing is an essential way in which humans—and their societies—make decisions.
“I can see great use for machine-learning tools that look over the shoulders of experts and say, ‘Have you considered X,Y or Z as a solution to your problem?’ But fully autonomous systems cannot be made that detect problems automatically and deal with them automatically. You have to have human beings make certain types of decisions or penalize certain bad behavior. Often behaviour can only be considered bad when intent is included—machines can’t deal in intent.
“The problem is that if you try to build a machine-based system with autonomy, you have to handle not only the sunny-day cases but also the edge cases. There will inevitably be adversarial actors endeavoring to attack individuals, groups or society by using the autonomous system to be nasty. It’s very hard to account for all the dangerous things that might happen and all the misuses that might occur.
“The systems are not god-like and they don’t know the whole universe of possible uses of the systems. There’s an incompleteness issue. That incompleteness makes these systems no longer autonomous. Humans have to get involved. The common problem we’ve found is that it is not feasible to automate everything. The system eventually has to say, ‘I can’t make a sensible/reasoned decision’ and it will need to seek guiding human input.
“One example: I work with companies trying to build blockchain-y systems. When designers start reasoning about what to build, they find that systems of formal rules can’t handle the corner cases. Even when they build systems they believe to be stable—things they hope can’t be gamed—they still find that runs on the bank can’t be ruled out and can’t easily be solved by creating more rules. Clever, bad actors can still collapse the system. Even if you build incentives to encourage people not to do that, true enemies of the system don’t care about incentives and being ‘rational actors.’ They’ll attack anyway. If they want to get rid of you, they’ll do whatever it takes, no matter how irrational it seems.
“The more autonomous you make the system, the more you open it up to interactions with rogue actors who can drive the systems into bad places by hoodwinking the system. Bad actors can collude to make the system crash the stock market, cause you to be diagnosed with the wrong disease, make autonomous cars crash. Think of ‘dieselgate,’ where people colluded, hacking a software system to allow a company to cheat on reporting auto emissions. Sometimes, it doesn’t take much to foul up the system. There are frightening examples of how few pixels you need to change to make a driverless car’s navigation system misread a stop sign or a speed-limit sign.
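The adversarial-example phenomenon Davies refers to, in which a handful of pixel changes flips a classifier’s output, can be shown with a deliberately simple toy. The sketch below is an editor’s illustration only: a random linear “classifier” on a flattened image, not a real vision model, perturbed greedily at its most influential pixels.

```python
# Illustrative only: flipping a toy linear classifier's label by changing
# as few of its most influential "pixels" as possible.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)             # weights of a toy linear sign classifier
image = rng.uniform(0, 1, size=100)  # a flattened stand-in for a "stop sign" image

def label(x):
    return "stop" if x @ w > 0 else "speed limit"

wanted = "speed limit" if label(image) == "stop" else "stop"
adv = image.copy()
changed = 0
for i in np.argsort(-np.abs(w)):     # most influential pixels first
    if label(adv) == wanted:
        break
    # Push this pixel to whichever extreme moves the score toward the wanted label.
    adv[i] = 1.0 if (w[i] > 0) == (wanted == "stop") else 0.0
    changed += 1

print(f"{label(image)} -> {label(adv)} after changing {changed} of {image.size} pixels")
```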
“Another example of a problem: Even if you build a system where the rules are working well by reading the same environment and making the same decisions, you can run into a ‘thundering herd problem.’ Say, everyone gets rerouted around a traffic problem to the same side streets. That doesn’t help anyone.
“In the end, you don’t want to give systems autonomy when it comes to life-and-death decisions. You want accountability. If a battlefield commander decides it’s necessary to put troops at risk for a goal, you want to be able to court martial the commander if it’s the wrong choice for the wrong reasons. If an algorithm has made that catastrophic command decision, where do you go to get justice?
“Finally, I am pessimistic about the future of wide-scale, ubiquitous, autonomous systems because no one is learning from the collective mistakes. One of the enduring problems is that many big companies (as well as others, such as researchers and regulators) do not disclose what didn’t work. Mistakes get buried and failures aren’t shared, yet sharing them is a prerequisite for people to learn from them.
“In the large, the same mistakes get made over and over as the collective experience and knowledge base is just not there (as, say, would be the case in the aircraft industry).
“There is a potential light at the end of this tunnel: the insurance system. Insurers will have a lot to say about how autonomous decision-making rolls out. Will they underwrite any of these things? Clearly not where an autonomous system can be arbitrarily forced into a failure mode. Underwriting abhors correlations; the resulting correlated claims are an existential risk to their business.
“The battle over who holds the residual risks extant in autonomous systems is already being played out between the judicial, commercial, insurance and political spheres. Beware the pressure for political expediency, either dismissing or capping the consequences of failure. It may well be that the insurance industry is your greatest ally. Their need to quantify the residual risks for them to underwrite could be the driver that forces the whole industry to face up to issues discussed here.”
Seth Finkelstein, principal at Finkelstein Consulting and Electronic Frontier Foundation Pioneer Award winner, wrote, “These systems will be designed to allow only a few people (i.e., the ruling class, and associated managers) to easily be in control of decision-making, and everyone else will not be in charge of the most relevant parts of their own lives and their own choices.
“There’s an implicit excluded middle in the phrasing of the survey question: either we turn the keys over to technology, or humans remain the primary input in their own lives. It doesn’t consider the case of a small number of humans controlling the system so as to be in charge of the lives and choices of all the other humans.
“There’s not going to be a grand AI in the sky (Skynet) which rules over humanity. Various institutions will use AI and bots to enhance what they do, with all the conflicts inherent therein.
“For example, we don’t often think in the following terms, but for decades militaries have mass deployed small robots which make autonomous decisions to attempt to kill a target (i.e., with no human in the loop): landmines. Note well: the fact that landmines are analog rather than digital and they use unsophisticated algorithms is of little significance to those maimed or killed. All of the obvious problems—they can attack friendly fighters or civilians, they can remain active long after a war, etc.—are well-known, as are the arguments against them. But they have been extensively used despite all the downsides, as the benefits accrue to a different group of humans than pays the costs. Given this background, it’s no leap at all to see that the explosives-laden drone with facial recognition is going to be used, no matter what pundits wail in horror about the possibility of mistaken identity.
“Thus, any consideration of machine autonomy versus human control will need to be grounded in the particular organization and detailed application. And the bar is much lower than you might naively think. There’s an extensive history of property owners setting booby-traps to harm supposed thieves, and laws forbidding them since such automatic systems are a danger to innocents.
“By the way, I don’t recommend financial speculation, as the odds are very much against an ordinary person. But I’d bet that between now and 2035 there will be an AI company stock bubble.”
Vian Bakir, professor of journalism and political communication, Bangor University, UK, responded, “I am not sure if humans will be in control of important decision-making in the year 2035. It depends upon regulations being put in place and enforced, and everyone being sufficiently digitally literate to understand these various processes and what it means for them.
“When it comes to decision-making and human agency, what will the relationship look like between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence? It greatly depends upon which part of the world you are considering.
“For instance, in the European Union, the proposed European Union AI Act is unequivocal about the need to protect against the capacity of AI (especially that using biometric data) for undue influence and manipulation. To create an ecosystem of trust around AI, its proposed AI regulation bans use of AI for manipulative purposes; namely, that ‘deploys subliminal techniques … to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm’ (European Commission, 2021, April 21, Title II Article 5).
“But it’s not yet clear what current applications this might include. For instance, in April 2022, proposed amendments to the EU’s draft AI Act included the proposal from the Committee on the Internal Market and Consumer Protection, and the Committee on Civil Liberties, Justice and Home Affairs, that ‘high-risk’ AI systems should include AI systems used by candidates or parties to influence, count or process votes in local, national or European elections (to address the risks of undue external interference, and of disproportionate effects on democratic processes and democracy).
“Also proposed as ‘high-risk’ are machine-generated complex text such as news articles, novels and scientific articles (because of their potential to manipulate, deceive, or to expose natural persons to built-in biases or inaccuracies); and deepfakes representing existing persons (because of their potential to manipulate the persons that are exposed to those deepfakes and harm the persons they are representing or misrepresenting) (European Parliament, 2022, April 20, Amendments 26, 27, 295, 296, 297). Classifying them as ‘high-risk’ would mean that they would need to meet the Act’s transparency and conformity requirements before they could be put on the market: these requirements, in turn, are intended to build trust in such AI systems.
“We still don’t know the final shape of the draft AI Act. We also don’t know how well it will be enforced. On top of that, other parts of the world are far less protective of their citizens’ relationship to AI.
“What key decisions will be mostly automated? Anything that can be perceived as saving corporations and governments money and that is permissible by law.
“What key decisions should require direct human input? Any decision where there is capacity for harm to individuals or collectives.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? If badly applied, it will lead to us feeling disempowered, angered by wrong decisions, and distrustful of AI and those who programme, deploy and regulate it.
“People generally have low digital literacy, even in highly digitally literate societies. I expect that people are totally unprepared for the idea of AI making decisions that affect their lives; most are not equipped to challenge this.”
Daniel Rothenberg, professor of politics and global studies and co-director of the Center on the Future of War, Arizona State University, wrote, “It is unlikely that major life decisions will be ‘mostly automated.’ That is, who one marries, what job one pursues, what one believes, how one manages major choices in life, will not be automated. That said, there are areas—such as reading medical tests and scans and aiding with data-driven treatment choices—that will not be automated but will be deeply impacted by AI. The same will be true of news stories on basic factual reporting, like sports scores, stock prices, etc. We are likely to see significant auto-generated content and guided information, as well as all sorts of devices and machines that use AI. This will impact some elements of ‘life decisions,’ but not the most profound choices we make.”
Jon Lebkowsky, CEO, founder and digital strategist at Polycot Associates, wrote, “At the levels where AI is developed and deployed, I believe there’s an understanding of its limitations. I believe that the emphasis going forward, at least where decisions have critical consequences, will be on decision support vs. decision-making. Anyone who knows enough to develop AI algorithms will also be aware of how hard it is to substitute for human judgment. I submit that we really don’t know all the parameters of ‘good judgment,’ and the AI we develop will always be limited in its ability to grasp tone, nuance, priority, etc.
“We might be able to effectively automate decisions about market selection, cosmetics, program offerings (but less so selection) etc. But consequential decisions that impact life and health, that require nuanced perception and judgment, will not be offloaded wholly to AI systems, however much we depend on their support. The evolution of digital tech’s ‘broadening and accelerating rollout’ will depend on the evolution of our sophistication about and understanding of the technology. That evolution could result in disaster in cases where we offload the wrong kinds of decisions to autonomous technical systems.”
Gary Grossman, senior vice president and global lead of the Edelman AI Center for Excellence, previously with Tektronix, Waggener Edstrom and Hewlett-Packard, observed, “The U.S. National Security Commission on Artificial Intelligence concluded in a 2021 report to Congress that AI is ‘world-altering.’ AI is also mind-altering, as the AI-powered machine is now becoming the mind. This is an emerging reality of the 2020s. As a society, we are learning to lean on AI for so many things that we could become less inquisitive and more trusting of the information we are provided by AI-powered machines. In other words, we could already be in the process of outsourcing our thinking to machines and, as a result, losing a portion of our agency.
“Most AI applications are based on machine learning and deep learning neural networks that require large datasets. For consumer applications, this data is gleaned from personal choices, preferences and selections on everything from clothing and books to ideology. From this data, the applications find patterns, leading to informed predictions of what we would likely need or want or would find most interesting and engaging. Thus, the machines are providing us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps appear useful—or, at worst, benign. However, we should be paying more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know they diminish our privacy. And if they also diminish our human agency, that could have serious consequences. For example, if we trust an app to find the fastest route between two places, we are likely to trust other apps as well, with the risk that we will increasingly live our lives on autopilot.
“The positive feedback loop presented by AI algorithms regurgitating our desires and preferences contributes to the information bubbles we already experience, reinforcing our existing views, adding to polarization by making us less open to different points of view, leaving us less able to change, and turning us into people we did not consciously intend to be. This is essentially the cybernetics of conformity, of the machine becoming the mind while abiding by its own internal algorithmic programming. In turn, this will make us—as individuals and as a society—simultaneously more predictable and more vulnerable to digital manipulation.
“Of course, it is not really AI that is doing this. The technology is simply a tool that can be used to achieve a desired end, whether to sell more shoes, persuade us toward a political ideology, control the temperature in our homes or talk with whales. There is intent implied in its application. To maintain our agency, we must insist on an AI Bill of Rights as proposed by the U.S. Office of Science and Technology Policy. More than that, we need a regulatory framework soon that protects our personal data and our ability to think for ourselves.”
Steve Stroh, technology journalist, commented, “I think that Apple will lead by example in empowering people to make their own decisions, as it did with ad tracking. One decision that will be automated, for the better, is attention requests—an evolution of spam filtering. I always liked Arthur C. Clarke’s vision of a Pocket Secretary. Another will be scheduling meetings. I shouldn’t have to be involved in setting them up—just in knowing when they are scheduled. For example, medical appointments should be auto-negotiated for availability between the doctor and me. I shouldn’t have to order basic supplies—just tap an app noting that I’m almost out of toilet paper or bananas, and it’s waiting for me on the doorstep the next day. Anything that legally obligates requires human input. It’s going to require much more ‘training’ for people to understand that decisions are being made on their behalf by AIs and that they don’t have to accept the default decision-making.”
Stefaan Verhulst, co-founder and director of the Data Program of the Governance Laboratory at New York University, wrote, “We need digital self-determination (DSD) to ensure humans are in the loop for data action by 2035. Humans need new ways to enforce principles of digital self-determination in order to reclaim agency and autonomy that have been lost in the current data era. Increased datafication, combined with advances in analytics and behavioral science applications, has reduced data subjects’ agency to determine not only how their data is used, but also how their attention and behavior are steered, and how decisions about them are made.
“These dangers are heightened when vulnerable populations, such as children or migrants, are involved. Together with a coalition of partners in the DSD Network we are working to advance the principle of digital self-determination in order to bring humans back into the loop and empower data subjects. DSD, in general, confirms that a person’s data is an extension of themselves in cyberspace. We must consider how to give individuals or communities control over their digital selves, particularly those in marginalized communities whose information can be used to disenfranchise them and discriminate against them.
“The DSD principle extends beyond obtaining consent for data collection. DSD is centered on asymmetries in power and control among citizens, states, technology companies and relevant organizations. These imbalances distinguish between data subjects who willingly provide data and data holders who demand data. To account for these nuances, we center DSD on 1) agency (autonomy over data collection, data consciousness and data use); 2) choice and transparency (regarding who accesses and uses data, and how and where); and 3) participation (those empowered to formulate questions and access the data).
- The DSD principle should be present throughout the entire data lifecycle—from collection to collation to distribution. We can identify critical points where data needs to be collected for institutional actors to develop policy and products by mapping the data lifecycle experience for different groups, for example, for migrants, for children and others. To accomplish this we must examine policy, process and technology innovations at different stages of the data lifecycle.
- Policies must be adopted and enforced in order to ensure that the DSD principle is embedded and negotiated in the design and architecture of data processing and data collection in order to avoid function/scope creep into other areas, as well as to outline robust protections and rights for vulnerable populations in order for them to reclaim control over their information.
- DSD implementation processes have to be iterative, adaptive and user-centered in order to be inclusive in co-designing the conditions of access and use of data.
- Technologies can be used to aid in self-determination by enabling selective disclosure of data to those that need it. Such tools can perform many tasks, for example reducing the administrative burden and other problems experienced by vulnerable groups, or in the establishment of a ‘right to be forgotten’ portal as a potential solution to involuntary and/or unnecessary data collection.
“DSD is a nascent but important concept that must develop in parallel to technological innovation and data protection policies to ensure that the rights, freedoms, and opportunities of all people extend to the digital sphere.”
Eileen Rudden, co-founder of LearnLaunch, said, “Workflows will be more automated. Translations and conversions will be automated. Information summarization will be automated. Many decisions requiring complex integration of information may be staged for human input, such as the potential for prescribed drugs to interact. Other complex decisions may be automated, such as what learning material might be presented to a learner next, based on her previous learning or future objective, and the ability to scan immense databases for legal precedents. In general, any process that yields significant data will be analyzed for patterns that can be applied to other participants in that process. This will include hiring and promotion systems. If professionals get comfortable with the new systems, they will be expanded.
“What key decisions are likely to require direct human input? What to buy. What job to take. What to read. What to watch. Who to talk to. Who to promote. Final decisions proposed by medical, legal and coding systems. The establishment of real relationships with other humans (e.g., between teachers and students, teammates, etc.). Decisions by small- and medium-sized businesses without access to the more-sophisticated systems.
“What sorts of worries fall into view?
- Tech-savviness will become even more important as more-advanced systems become more prevalent. There will be a risk of social breakdown if the inequality that has resulted from the last 40 years of the information age is allowed to accelerate.
- We need to understand the power and dignity of work and make sure all people are prepared for change and feel they have value in society.
- It is also important for society to be able to understand the real sources of information in order to maintain democracy.”
Kenneth A. Grady, futurist and consultant on law and technology and editor of The Algorithmic Society newsletter, observed, “We have already reached a point where humans have relinquished important aspects of decision-making to computers. This has happened, and will continue to happen, for the simple reason that we have no consensus on what aspects of decision-making should remain with humans.
“Designers are left to decide for themselves where to draw the line. What one designer considers worthy of human consideration another may consider unworthy. A simple, yet oft-repeated example comes from resume review software. A human might consider frequent job changes and movement from industry to industry as evidence of flexibility, ability to pick up new skills quickly, and a wide range of knowledge. A designer might build into an algorithm a negative score indicating the individual has an unstable employment history. Computers struggle with the unique, preferring patterns. Yet the unique often drives progress whereas pattern behavior retards it.
“Which decisions will be automated or should be automated must vary by context. Even assuming ‘rote’ decisions warrant automation risks overlooking the unique element that should drive a conclusion.
“By broadening and accelerating the rollout of decision-making through computers rather than humans, we risk accelerating society’s movement towards the mean on a range of matters. We will drive out the unique, the outlier, the eccentric in favor of pattern behavior. The irony of this approach lies in its contradiction of nature and what got us to this point. Nature relies on mutations to drive adaptation and progress. We will retard our ability to adapt and progress. We have seen already the early indications of this problem. As we turn over more decisions to computers, we have seen choice diminish. Variety no longer is the spice of life.”
George Onyullo, an environmental-protection specialist at the U.S. Department of Energy and Environment, said, “The broadening and accelerating rollout of tech-abetted, often autonomous decision-making may change human society by increasing human suspicion and decreasing trust. The relationship between humans and machines, bots and systems (powered mostly by autonomous and artificial intelligence) will likely be more complex than it is currently. The more machines are allowed to get into the decision-making spaces that are relevant to people’s lives, the more people will interrogate the ability of machines to make those decisions.”
Scott Johnston, an Australia-based researcher and educator, said, “The social structures which imbue the very few with the vast majority of decision-making power will persist to 2035 and beyond. AI systems are expensive to create, train and deploy, and the most effective of them will be ones created at the behest of the highly resourced ruling elite. Dominant AIs will be created so as to enhance the power of this elite.
“Because AIs are connected to extensive web-based ‘sensory systems’ and act on the basis of the rules created by their makers, their activities will be extraordinarily difficult to oversee. And as we have seen recently, ruling elites have the capacity to dominate the lens through which we are able to oversee such changes. The limits to the agency of the world’s population will not be imposed by AI technologies as such; they are just another of our technological tools. The limits will be imposed, as usual, by the demands of corporate empire building and protection.”
James S. O’Rourke IV, professor of management, University of Notre Dame, author of 23 books on communication, commented, “AI-aided decision and control will be far more dominant than it is today after it is enhanced by several fundamental breakthroughs, probably sometime after 2035. At that point ethical questions will multiply. How quickly can a machine program learn to please its master? How quickly can it overcome basic mistakes? If programmers rely on AI to solve its own learning problems, how will it know when it has the right answer, or whether it has overlooked some fundamental issue that would have been more obvious to a human? A colleague in biomechanical engineering who specializes in the use of AI to design motility devices for disabled children told me not long ago, ‘When people ask me what the one thing is that most folks do not understand about AI, I tell them: How really bad it is.’”
Glenn Grossman, North American entrepreneur and business leader, said, “What we consider to be autonomous may be the challenge. Two factors hinder the growth of AI making decisions for humans: ethical AI and our desire for control. Explainable and ethical use of AI will limit the usage of AI in certain areas. Second, as humans we like our freedom, and I believe we will not want decisions to be taken away from us.
“Granted, some decisions are probably better made with AI influence, but not fully made by it. In healthcare we can improve quality with AI assistance, and the same is true of financial decision-making. Education could benefit to some degree, too. However, the final decision should be human.”
Heather Roff, nonresident fellow in the law, policy and ethics of emerging military technologies at Brookings Institution and senior research scientist at the University of Colorado-Boulder, wrote, “In the present setting, companies are obtaining free data on human users and then using such data to provide them the use of intellectual property-protected applications.
“To crack this open so that users have ‘control’ over ‘important decision-making’—whatever that really means—is to provide users with not merely access to their data, but also the underlying ‘how’ of the systems themselves. That would undermine IP, and thus profits. Additionally, even with some privacy-control tools, users still have very little control over how their data is used or reused, or how that data places them in certain classes or ‘buckets’ for everything from something as simple as shopping histories to predictive policing, predictive intelligence, surveillance and reconnaissance, etc. The tools themselves are usually built to allow for minimal levels of control while protecting IP.
“Finally, most users are just not that fluent in AI or how autonomous systems utilizing AI work, and they don’t really care. Looking at the studies on human factors, human systems integration, etc., humans become pretty lazy when it comes to being vigilant over the technology. Humans’ cognitive systems are just not geared to ‘think like’ these systems. So, when one has a lack of literacy and a lazy attitude toward the use of such systems, bad things tend to happen. People put too much trust in these systems; they do not understand the limitations of such systems and/or they do not recognize how they actually may need to be more involved than they currently are.
“As to key decisions, I’m not sure I agree with the wording of the question. For example, a key decision may be whether I leave the house today, since if I did I might end up in a horrible car accident. Or perhaps it is the decision just before that, to take the left lane rather than the middle. Such decisions are clearly key. Even decisions over life and death are not as simple as a one-shot in time. Rather, we have categories of decisions over classes of activities. And machines are only good at accomplishing particular tasks, within a class of activities, that contribute to decisions.
“Thus, we are not really talking about ‘decisions’ when it comes to AI, but about the completion of tasks. And AI is merely making an estimation based on whatever probability distribution it is trained on (in the case of machine learning, for example). If we tell it to maximize the task given a probability of X, it will do that. But the notion of ‘decision’ is pretty limited. Even in the foreseeable future, we are not talking about a broad or deep notion of agency. The trouble, then, is when we have many AI applications undertaking these tasks simultaneously, with little understanding of what the optimization function is, and that they may be contradictory or compounding.”
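Roff’s distinction between “decisions” and task completion under a probability estimate can be shown in a few lines. The sketch below is an editor’s illustration with an invented stand-in model and threshold: the system only estimates a probability, and a fixed rule chosen by a human, not the model, turns that estimate into an action or a hand-off.

```python
# Illustrative only: an AI component estimates a probability; a human-chosen
# rule, not the model, turns that estimate into an action.
def estimated_probability(features: dict) -> float:
    """Stand-in for a trained model's output on hypothetical inputs."""
    return 0.73

def act(features: dict, threshold: float = 0.8) -> str:
    p = estimated_probability(features)
    return "complete the task" if p >= threshold else "defer to a human"

print(act({"example_input": True}))  # "defer to a human", because 0.73 < 0.8
```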
Irina Raicu, director of the internet ethics program, Markkula Center for Applied Ethics, Santa Clara University, commented, “The answer will vary in different countries with different types of governments. Some autocratic governments are deploying a lot of technologies without consulting their citizens, precisely in order to limit those citizens’ decision-making abilities. It’s hard to know whether activists in such countries will be able to keep up, to devise means by which to circumvent those technological tools by 2035 in order to maintain some freedom of thought and action. But rights will be stunted in such circumstances, in any case.
“In other countries, such as the U.S. and various countries in the EU, for example, we might see humans being more in control in 2035 than they are now—in part because by then some algorithmic decision-making and some other specific tech applications might be banned in some key contexts. As more people understand the limitations of AI and learn where it works well and where it doesn’t, we will be less likely to treat it as a solution to societal problems.
“Other forces will shape the tech ecosystem, too. For example, the Supreme Court decision in Dobbs is already prompting a reevaluation of some data-collection, use, and protection decisions that had been seen (or at least presented by some companies) as generally accepted business practices. Redesigning the online ecosystem in response to Dobbs might strengthen human agency in a variety of contexts that have nothing to do with abortion rights.
“Whether the broad rollout of tech-abetted, often autonomous decision-making will continue is up to us. It depends on the laws we support, the products we buy, the services we access or refuse to use, the way in which we educate people about the technology and the way in which we educate future technologists and lawmakers in particular.”
Gus Hosein, executive director of Privacy International, commented, “On top of all the great analyses out there I just want to offer something different and separate. We need to also step away from the consumer and individual frame, where the company is designing something that will shape what they do based on some product or service the individual is seeking, or some device in their midst.
“After all, the consumer-innovation model is running out of steam anyway, at least for now. (I would be happy to be proven wrong about that.) Rather look at all the areas of our lives where we already have centralized control and very little autonomy. Look at all the institutions that make unaccountable procurement and deployment decisions, the institutions that already have access to our bodies and our lives. With their lack of accountability and their inappropriately near-unlimited resources, they are going to be the institutions developing and deploying the systems that matter most to individuals. And (whether willfully or not) they will deploy these systems in ways that undermine the autonomy and dignity of people. Border agencies. Welfare systems. Employers. Policing. Credit. Cost-limited healthcare. Schooling. Prisons.”
Jill Walker Rettberg, professor of digital culture at the University of Bergen in Norway and principal investigator of the project Machine Vision in Everyday Life, replied, “In 2035 we will see even greater differences between different countries than we do today. How much agency humans will have is entirely dependent upon the contexts in which the technologies are used.
“In the U.S., technologies like smart surveillance and data-driven policing are being implemented extremely rapidly as a response to crime. Because machine learning and surveillance are less regulated than guns or even traffic-calming measures (like adding cul-de-sacs to slow traffic instead of installing ALPRs), these technologies are an easy fix, or simply the only possible action that can be taken to try to reduce crime given the stalled democratic system and deep inequality in the U.S. In Europe, these technologies are much more regulated, and people trust each other and government more, so using tech as a Band-Aid on the gaping wound of crime and inequality is less attractive.
“Another example is using surveillance cameras in grocery stores. In the U.S., Amazon Fresh has hundreds of cameras in stores that are fully staffed anyway and the only innovation for customers is that they don’t have to manually check out. In Norway and Sweden, small family-owned grocery stores in rural villages are using surveillance cameras so the store can expand its opening hours or stay in business at all by allowing customers to use the store when there are no staff members present. This is possible without AI through trust and a remote team responding to customer calls.
“The same is seen with libraries. In Scandinavia, extending libraries’ opening hours with unstaffed time is common. In the U.S., it’s extremely rare because libraries are one of the few public spaces homeless people can use, so they are a de facto homeless shelter and can’t function without staff.”
Mike Nelson, director of the Carnegie Endowment’s technology and international affairs program, predicted, “Machines will make MANY more decisions for us in 2035 than they do today—but most of them will be small, low-impact decisions like, ‘What route should my autonomous car take?’ or, ‘How can my home heating and air conditioning system save energy and money?’ Machines might even decide menus for my meals and make sure the meals (or the ingredients needed to make them) get delivered to my home. And robots will definitely help us keep our homes cleaner and more organized.
“In general, we’ll be more comfortable letting machines serve the roles that butlers, maids, personal secretaries and accountants used to do for wealthy Americans a hundred years ago. Machines will make decisions that require learning and knowing our preferences and history.
“The results should be that we have more time to devote to the things that matter most: family, fun, learning and caring for each other. If we have fewer mundane tasks (and decisions) to worry about, we should have more time for the big decisions like ‘Where should I invest?’ ‘How can I improve my health and attitude?’ or ‘How can I help my family and friends?’
“On the other hand, big life decisions that have long-term impacts (regarding investing, career, education, or medical care, for instance) will still be made by humans (often with ‘advice’ from machine learning and other algorithms). Thanks to customized search and knowledge tools (and perhaps even ‘wisdom tech’) we’ll be able to better understand our world and the factors that will affect our future.”
John Laudun, professor of social information systems at the U.S. Army Combined Arms Center, said, “As a folklorist, I have spent much of my career studying how people build the worlds in which they live through everyday actions and expressions. Words said here move across a community and then across communities, leaving one mouth and entering another ear, receptivity most often being established by ready transmission. This is how information spreads.
“We construct ourselves using the information available to us. Our social relations are determined by the information we share; good friends are good because we share certain information with them and not with others. Acquaintances are just that because we share a lot less with them. This is a dynamic situation, of course, and friends and acquaintances can swap places but what does not change is how we construct those relationships: through informational interchange. The self in this view is distributed, not bounded, and thus less in control of itself than it might imagine—or how much it has been told it is in control during its time in formal schooling, which requires the self to be regulated and regimented so it can be assessed, graded and validated.
“My view of this situation is unchanged, having now glimpsed into the information operations that lie behind the competition-crisis-conflict continuum engaged in by global actors, both nations and not. What those actors have done is simply harness the algorithmic and gamified nature of media platforms, from social media to social games. Their ability to create an addictive experience highlights rather well, I think, how little we control ourselves, both individually and collectively, at present.
“Despite the howls of concern coming from all corners of modern democracies, I see little hope that either the large corporations profiting, literally, from these infrastructures, or the foreign entities who are building up incredibly rich profiles on us will be willing to turn down their efforts to keep us firmly in their sway. The political will required for us to mute them does not appear to be present, though I hope to be proven wrong in the next few years, not so much for my sake but for the sake of my child and her friends.”
Daniel Wyschogrod, senior scientist at BBN Technologies, commented, “Systems are not likely to be designed to allow people to easily be in control over most tech-aided decision-making. Decisions on credit-worthiness, neighborhoods that need extra policing, etc., are already made today based on deep learning. This will only increase. Such systems’ decisions are based on training corpora that are heavily dependent on the availability of data resources, with possible biases in collection.”
Frank Odasz, director at Lone Eagle Consulting, said, “Increasing AI manipulation of beliefs or media (such as deepfake videos) can be expected in the future. Simple AI tools based on facts will become more useful. I see a two-tiered society: 1) those who learn to smartly use AI tools without allowing themselves to be manipulated, and 2) those who allow themselves to believe that they can justify ‘believing anything they want.’ The big question is: In the future, which tier will be dominant in most global societies?
“My 39-year history as an early adopter and promoter of smart online activities such as e-learning, remote work and building community collaborative capacity began in 1983, and I’ve presented nationally and internationally. Top leaders in DC in the early days didn’t have a clue what would be possible ‘being online’—at any speed. Americans’ misuse of social media and persuasive design at all levels has increasingly resulted in artificial intelligence-spread political manipulation promoted as factual truth by radicals lacking any level of conscience or ethics.
“Various people have proposed that there are seven main intelligences (though a Google search will show different listings of the seven). ‘Intelligence’ and ‘agency’ are related to basing decisions smartly on factual truths, yet many people do not base decisions on proven facts, and many say, ‘I can believe whatever I want.’ Hence the global growth of the Flat Earth Society, rejecting the most obvious of facts: that the Earth is a round planet.
“Many people choose to believe ideas shared by those of a particular religious faith, trusting them despite proven facts. There are those who are anti-literacy, and there are deniers who reject proven facts; they represent a growing demographic of followers who are not thinkers. We also have those who will routinely check facts and have a moral compass dedicated to seeking out facts and truth. Eric Hoffer said, ‘In times of change, learners inherit the Earth.’
“Automated data via Facebook and persuasive design helped autocratic candidates win major leadership positions in more than 50 national elections in the past few years, spreading misinformation designed to sway voters and then automatically deleting the convincing false messages to hide them after they had been read.”
Federico Gobbo, professor of interlinguistics, University of Amsterdam, said, “Humans are losing control of the proliferation of abstract machines. Most of the current systems are so complex that no single human can understand their complexity, both in terms of coding (the well-known ‘many hands problem’) and in terms of the tower of interfaces. In fact, most communication now is no longer human-machine but machine-machine. Autonomous systems are prone to errors, but this is not the main issue. The crucial point is accountability: Who is responsible for errors?”
Johnny Nhan, expert in law, cybercrime and policing and associate dean of graduate studies at Texas Christian University, wrote, “Automation and decision-making are already happening, but at this point, it’s not too smart given that decisions are drawn from predefined choices pre-inputted by humans. With AI and machine learning, we’re seeing decisions that do not require forethought by humans. In a way, this makes our lives easier not having to deal with simple transactions. For example, we have seen this technology heavily used in customer service, retail, etc.
“The complicated parts that humans still need to deal with are the ‘big-picture’ decisions that may require rule-bending. For example, police officers are given the discretion to take into account the totality of circumstances in making decisions to not ticket or arrest someone. Moreover, when there are elements of morality over efficiency, the nuance of human decision-making is still difficult to replicate with AI.
“The changes in society can be profound with AI decision-making. On the plus side, we can have things like automated transportation. On the downside, we will continue to have work displacement for low-skilled workers, justification for discrimination, etc. Due to these considerations, I think the rollout of AI will be slower because it will be politicized.”
Gary Arlen, principal at Arlen Communications, commented, “AI—especially AI designed by earlier AI implementations—may include an opt-out feature that enables humans to override computer controls. Regulations may be established in various categories that prioritize human vs. machine decisions. Primary categories will be financial, medical/health, education … maybe transportation. Human input will be needed for moral/ethical decisions, but (depending on the political situation) such choices may be restricted. ‘Change in human society?’ That all depends on which humans you mean. Geezers may reject such machine interference (by 2035). Younger citizens may not know anything different than machine-controlled decisions. In tech, everything will become political.”
John L. King, professor of information studies and former dean, University of Michigan, said, “The issue is not that people cannot exercise some level of agency but instead that they usually will not when given a choice. Today, using button-click permission structures such as click-wrap, people simply give away control without thinking about it. Most users will do what is necessary to avoid extra work and denial of benefits. This is illustrated by current systems that allow users to prohibit cookies. Users face two choices: allow cookies to get the goodies or prohibit cookies and get less. It’s hard to tell how much less. Users who remain in control get extra work. Most users will take the low-energy-cost path and opt for letting the system have its way as long as it appears not to hurt them directly. Those who benefit will make the transfer of power to them easy and make it easy for the end user from then on. Users, like Pavlov’s dogs, click whenever they see the word ‘Accept.’ Those who benefit from this will push back on anything that makes it easier for users to be in control.”
Doc Searls, internet pioneer and co-founder and board member at Customer Commons, observed, “Human agency is the ability to act with full effect. We experience agency when we put on our shoes, walk, operate machinery, speak and participate in countless other activities in the world. Thanks to agency, our shoes are on, we go where we mean to go, we say what we want and machines do what we expect them to do.
“Those examples, however, are from the physical world. In the digital world of 2022, many effects of our intentions are less than full. Search engines and social media operate us as much as we operate them. Search engines find what they want us to want, for purposes which at best we can only guess at. In social media, our interactions with friends and others are guided by inscrutable algorithmic processes. Our Do Not Track requests to websites have been ignored for more than a decade. Meanwhile, sites everywhere present us with ‘your choices’ to be tracked or not, biased to the former, with no record of our own about what we’ve ‘agreed’ to. Equipping websites and services with ways to obey privacy laws while violating their spirit is a multi-billion-dollar industry. (Search for ‘GDPR+compliance’ to see how big it is.)
“True, we do experience full agency in some ways online. The connection stays up, the video gets recorded, the text goes through, the teleconference happens. But even in those cases, our experiences are observed and often manipulated by unseen and largely unknowable corporate mechanisms.
“Take shopping, for example. While a brick-and-mortar store is the same for everyone who shops in it, an online store is different for everybody, because it is personalized: made ‘relevant’ by the site and its third parties, based on records gained by tracking us everywhere. Or take publications. In the physical world, a publication will look and work the same for all its readers. In the digital world, the same publication’s roster of stories and ads will be different for everybody. In both cases, what you see is not personalized by you. ‘Tech-aided decision-making’ is biased by the selfish interests of retailers, advertisers, publishers and service providers, all far better equipped than any of us. In these ‘tech-aided’ environments, people cannot operate with full agency. We are given no more agency than site and service operators provide, separately and differently.
“The one consistent experience is of powerlessness over those processes.
“Laws protecting personal privacy have also institutionalized these limits on human agency rather than liberating us from them. The GDPR [The European Union’s General Data Protection Regulation] does that by calling human beings mere ‘data subjects,’ while granting full agency to ‘data controllers’ and ‘data processors’ to which data subjects are subordinated and dependent. The CCPA [California Consumer Privacy Act] reduces human beings to mere ‘consumers,’ with rights limited to asking companies not to sell personal data and to asking companies to give back data they have collected. One must also do this separately for every company, without standard and global ways of doing so. Like the GDPR, the CCPA does not even imagine that ‘consumers’ would or should have their own ways to obtain agreements or to audit compliance.
“This system is lame, for two reasons. One is that too much of it is based on surveillance-fed guesswork, rather than on good information provided voluntarily by human beings operating at full agency. The other is that we are reaching the limits of what giant companies and governments can do.
“We can replace this system, just like we’ve replaced or modernized every other inefficient and obsolete system in the history of tech.
“It helps to remember that we are still new to digital life. ‘Tech-aided decision-making,’ provided mostly by Big Tech, is hardly more than a decade old. Digital technology is also only a few decades old and will be with us for dozens or thousands of decades to come. In these early decades, we have done what comes easiest, which is to leverage familiar and proven industrial models that have been around since industry won the industrial revolution, only about 1.5 centuries ago.
“Human agency and ingenuity are boundlessly capable. We need to create our own tools for exercising both. Whether or not we’ll do that by 2035 is an open question. Given Amara’s Law (that we overestimate in the short term and underestimate in the long), we probably won’t meet the 2035 deadline. (Hence my ‘No’ vote on the research question in this canvassing.) But I believe we will succeed in the long run, simply because human agency in both the industrial and digital worlds is best expressed by humans using machines. Not by machines using humans.
“The work I and others are doing at Customer Commons is addressing these issues. Here are just some of the business problems that can be solved only from the customer’s side:
“Identity: Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
“Subscriptions: Nearly all subscriptions are pains in the butt. ‘Deals’ can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
“Terms and conditions: In the world today, nearly all of these are ones that companies proffer, and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring ‘consent’ to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a ‘legitimate interest.’ The solution here is terms individuals can proffer and organizations can agree to. The first of these is #NoStalking, which allows a publisher to do all the advertising they want, so long as it’s not based on tracking people. Think of it as the opposite of an ad blocker. (Customer Commons is also involved in the IEEE’s P7012 Standard for Machine Readable Personal Privacy Terms.)
“Payments: For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to support), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called Emancipay.
“Intentcasting: Advertising is all guesswork, which involves massive waste. But what if customers could safely and securely advertise what they want, and only to qualified and ready sellers? This is called intentcasting, and to some degree it already exists. Toward this, the Intention Byway is a core focus of Customer Commons. (Also see a list of intentcasting providers on the ProjectVRM Development Work list.)
“Shopping: Why can’t you have your own shopping cart, one that you can take from store to store? Because we haven’t invented one yet. But we can. And when we do, all sellers are likely to enjoy more sales than they get with the current system of all-siloed carts.
“Internet of Things: What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all siloed in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent operators, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be or have persistent compute objects, or ‘picos.’)
“Loyalty: All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in a position to truly and fully express it. We should have our own loyalty programs to which companies are members, rather than the reverse.
“Privacy: We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Online, instead, all we have are unenforced promises by others not to watch our naked selves, or to report what they see to others. Or worse, coerced urgings to ‘accept’ spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
“Customer service: There are no standard ways to call for service yet, or to get it. And there should be.
“Regulatory compliance: Especially around privacy. Because really, all the GDPR and the CCPA want is for companies to stop spying on people. Without any privacy tech on the individual’s side, however, responsibility for everyone’s privacy is entirely a corporate burden. This is unfair to people and companies alike, as well as insane, because it can’t work. Worse, nearly all B2B ‘compliance’ solutions only address companies’ felt need to obey the letter of these laws while ignoring their spirit. But if people have their own ways to signal their privacy requirements and expectations (as they do with clothing and shelter in the natural world), life gets a lot easier for everybody, because there’s something there to respect. We don’t have that yet online, but it shouldn’t be hard.
“Real relationships: ones in which both parties actually care about and help each other, and good market intelligence flows both ways. Marketing by itself can’t do it. All you get is the sound of one hand slapping. (Or, more typically, pleasuring itself with mountains of data and fanciful maths first described in Darrell Huff’s ‘How to Lie With Statistics,’ written in 1954). Sales can’t do it either, because its job is done once the relationship is established. CRM can’t do it without a VRM hand to shake on the customer’s side. An excerpt from Project VRM’s What Makes a Good Customer: ‘Consider the fact that a customer’s experience with a product or service is far more rich, persistent and informative than is the company’s experience selling those things or learning about their use only through customer service calls (or even through pre-installed surveillance systems such as those which for years now have been coming in new cars). The curb weight of customer intelligence (knowledge, know-how, experience) with a company’s products and services far outweighs whatever the company can know or guess at. So, what if that intelligence were to be made available by the customer, independently, and in standard ways that work at scale across many or all of the companies the customer deals with?’
“Any-to-any/many-to-many business: a market environment where anybody can easily do business with anybody else, mostly free of centralizers or controlling intermediaries (with due respect for inevitable tendencies toward federation). There is some movement in this direction around what’s being called Web3.
“Life-management platforms: KuppingerCole has been writing and thinking about these since not long after they gave ProjectVRM an award for its work, way back in 2007. These have gone by many labels: personal data clouds, vaults, dashboards, cockpits, lockers and other ways of characterizing personal control of one’s life where it meets and interacts with the digital world. The personal data that matters in these is the kind that matters in one’s life: health (e.g., HIEofOne), finances, property, subscriptions, contacts, calendar, creative works and so on, including personal archives for all of it. Social data out in the world also matters, but is not the place to start, because that data is less important than the kinds of personal data listed above—most of which has no business being sold or given away for goodies from marketers. (See ‘We Can Do Better Than Selling Our Data.’)
“The source for that list (with lots of links) is at Customer Commons, where we are working with the Ostrom Workshop at Indiana University on the Bloomington Byway, a project toward meeting some of these challenges at the local level. If we succeed, I’d like to change my vote on this future of human agency question from ‘No’ to ‘Yes’ before that 2035 deadline.”
Emmanuel R. Goffi, co-founder and co-director of the Global AI Ethics Institute, noted, “By 2035, in most instances where fast decision-making is key, AI-fitted systems will be naturally granted autonomy. There is a good chance that by 2035 people will be accustomed to the idea of letting autonomous AI systems do most of the work of their decision-making. As any remaining reluctance to use machine autonomy weakens, autonomous systems will grow in number. Many industries will promote its advantages in order to make it the ‘new normal.’
“You should know that many in the world see the idea of human control/oversight as a myth inherited from the idea that human beings must control their environment. This cosmogony, in which humans are at the top of the hierarchy, is not universally shared, but it greatly influences the way people in the Global North understand the role and place of humans. Practically speaking, keeping control over technology does not mean anything. The question of how decisions should be made should be addressed on a case-by-case basis. Asserting general rules outside of any context is pointless and misleading.”
Ken Polsson, longtime author of the online information site Polsson’s Web World, said, “Through democracy, the voting public will not allow AI to take over decision-making functions.”
Steve King, partner at Emergent Research, wrote, “In 2035 humans will continue to control important decision-making; 2035 is only 13 years away. Even if AI technology has advanced enough by 2035 to out-perform humans at highly complex decision-making—which we think is unlikely—we believe humans will be unwilling to cede control over most of their important decisions to machines by then. As ‘Star Trek’s’ Spock said, ‘Computers make excellent servants, but I have no wish to serve under them.’”
Anthony Patt, professor of policy at the Institute for Environmental Decisions at ETH Zürich, a Swiss public research university, said, “I am generally optimistic that when there is a problem, people eventually come together to solve it, even if the progress is uneven and slow. Since having agency over one’s life is absolutely important to life satisfaction, we will figure out a way to hold onto this agency, even as AI becomes ever more prevalent.
“I can imagine that the core questions of what problems we want to solve, what we want to do in life, where we want to live, and whom we want to have relationships with will be maintained within our own agency. The means to accomplish these things will become increasingly automated and guided by AI. For example, my family and I recently decided that we wanted to go to England this summer for a holiday. We are going to drive there from our home in Switzerland. These choices will stay within our control. Until recently, I would have had to figure out the best route to take to get there. Now I hand this over to AI, the navigation system in our car. That navigation system even tells me where I need to stop to charge the battery, and how long I need to charge it for. That’s all fine with me. But I wouldn’t want AI to tell me where to go for holiday, so that’s not going to happen. OK, I know, some people will want this. They will ask Google, ‘Where should I go on holiday?’ and get an answer and do this. But even for them, there are important choices that they will maintain control over.”
Nicholas CL Beale, futurist and consultant at Sciteb, said, “The more-positive outcome will happen if and only if the people responsible for developing, commercialising and regulating these technologies are determined to make it so. I’m optimistic—perhaps too much so. Based upon present trends I might be much less sanguine, but I think the tech giants have to adapt or die.”
Mark Henderson, professor emeritus of engineering, Arizona State University, wrote, “Science fiction has predicted that technology will surreptitiously take charge of decisions. I see that as a fear-based prediction. I have confidence in human intelligence and humane anticipatory prevention of takeover by either technology or those who want to cause harm. I think most humans would be very troubled by the prospect of machines making decisions over vital human interests such as how health care or other societal goods are allocated. There will undoubtedly be pressure to grant greater decision-making responsibility to machines under the theory that machines are more objective, accurate and efficient. I hope that humans can resist this pressure from commercial and other sources, so that privacy, autonomy and other values are not eroded or supplanted.”
Gary M. Grossman, associate professor in the School for the Future of Innovation, Arizona State University, said, “Market conditions will drive accessibility in AI applications. In order to be marketable, they will have to be easy enough for mass engagement. Moreover, they will have to be broadly perceived to be useful. AI will be used as much as possible in routine activities, such as driving, and where minimizing human efforts is seen to be beneficial. All of this will change society profoundly as it has in every major occurrence of widespread technological change. The key question is whether that change is ‘better.’ This depends on one’s perspective and interests.”
Gary Marchionini, University of North Carolina-Chapel Hill, commented, “The forced-choice question speaks to whether systems will be designed to give people control, rather than whether people will be in control. I answered No to the design question, but I would answer ‘yes’ to the human control question. There is little incentive for systems to be designed, marketed and improved unless there are strong economic payoffs for the tech companies. That is why I doubt the design advances for human agency will be significant.
“The advances will come from smaller companies or niche services that afford individuals control. For example, the Freedom app gives people control by blocking selected services. Full control would give people much more agency at the infrastructure level (e.g., to be able to manage the full range of data flows in one’s home or office or while mobile), but such control requires companies like AT&T, Spectrum, Apple, Alphabet, etc., to give people the ability to limit the fundamental profitability underlying the surveillance economy. So, I see little incentive for the platforms to give up such control without regulation.
“On the question of human agency, I am optimistic that reflective individuals will continue to demand control of key life decisions. I don’t want to control the antilock braking system on my car because I believe the engineering solution is superior to my body-mind reflexes. But I do want to be able to talk to my physician about what kind of treatment plan is best for medical conditions. The physician may use all kinds of tools to give me scenarios and options but the decision on whether to do surgery, chemotherapy, or nothing should (and I believe will) continue to rest with the individual. Likewise with financial decisions, whom I choose to love, and how I advise my children, grandchildren and students. It is more difficult to imagine personal agency in decisions that affect society. I can choose who to vote for but only among two or possibly three candidates—how those candidates rise to those positions may be strongly influenced by bots, trolls and search engine optimization and marketing algorithms and this is indeed worrisome.”
Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” focused his response on the choices people might or might not have in a farther-out future world in virtual reality, writing, “Every indication of the design of the metaverse is that humans will have less agency once they enter the virtual world. The very presentation of what you see will be driven by Big-Tech algorithms, which in turn will be determined by advertisers bidding on the opportunity to present their product to the person with the VR gear on. The only decisions that will require human input will be the decision to purchase (with crypto) some product or experience. All of this will accelerate the move towards a transhumanist future, a future that Francis Fukuyama has called ‘the world’s most dangerous idea.’”
John Loughney, a technology product manager, wrote, “I believe that AI will continue to develop to aid decision-making, but it will be designed to have humans make decisions that are biased by the maker of the AI. As an example, on web pages that give you the choice to accept tracking cookies, the options to decline tracking cookies are often obscured or confusing, leading people to ‘accept all cookies.’”
Eduardo Villanueva-Mansilla, associate professor at Pontificia Universidad Católica del Perú and editor of the Journal of Community Informatics, wrote, “Humans’ experiences will depend on the level of control that large corporations have over machines driven by artificial intelligence. As experience thus far indicates, without regulation, profit-driven design will determine how much and for which services these systems are deployed, and what social benefits, if any, result.”
Jesse Drew, associate professor of digital media, University of California-Davis, said, “The outcome is dependent upon the social and political circumstances that will predominate over whatever tech may be in place.”
Lauren Wagner, a post-disciplinary social scientist and expert in linguistic anthropology, predicted, “Based on where we are today, where there is limited or no algorithmic transparency and most of the AI that impacts our day-to-day lives is created inside large technology platforms, I do not believe that by 2035 we will be in a place where end users are in control of important decision-making regarding how AI works for them. To accomplish this would require advanced thinking around user experience, up-leveling of user education around AI (how it works and why users should care about controlling it) and likely government-mandated regulation that requires understandable algorithmic transparency and user controls. The broadening and accelerating rollout of tech-abetted, often autonomous decision-making trained on Internet data may lead to biased models that shift dialogue and behaviors in a way that reinforces problematic cleavages within society. An agency that audits the data that models are trained on should be created, so that there is better oversight of AI development.”
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, observed, “The progress of technology, and particularly artificial intelligence, inexorably moves forward largely unfettered. Faster machines and a seemingly endless supply of storage mean that progress in many areas will continue. By 2035, access to truly effective quantum computing will further fuel the effectiveness and efficiency of AI.
“Society has a history of accepting and embracing the advance of technology, even when consequences seem horrific such as in the case of instruments of war. Far from those cases, the advance of AI and associated technologies promise to enhance and advance our lives and environment, making work more efficient and society more effective overall.
“Artificial intelligence has the potential to shine a bright light on massive predictive analytics and projections of the impact of practices and effects. Advanced machine learning can handle increasingly large databases, resulting in deeply informed decision-making. Most impactful may be the advances in health care and education. Yet day-to-day improvements in commerce and the production of custom products to meet the needs and desires of individuals will also be attained.
“The question at hand is whether this deep analysis and these projected predictions will be autonomously enforced or will instead be used to inform human decisions. Certainly, in some cases, such as autonomous vehicles, the AI decisions will be instant-by-instant, so, while the algorithm may provide for human override of those decisions, practically, little can be done to countermand a decision made in 1/100th of a second to avoid a collision. In other cases—such as human resources employment decisions, selecting from among medical treatment alternatives and approval of loans—AI may be tempered by human approvals or subject to human appeals of preliminary machine-made decisions.
“We are now at the important inflection point that demands that governance of AI be implemented on a wide-scale basis. This will come through legislation, industry rules of practice and societal norms, such as the norm by which we do not allow children to operate cars.
“That is not to say that no rules asserting the rights of artificial intelligence will be determined in the near term. For example, the current question of whether AI can hold a patent may afford some rights to AI or the creator of an algorithm.
“I do not expect us to see a truly sentient AI by 2035, though that, too, may be close behind. When that level of cognition is achieved, we will need to reconsider the restrictions that will have been placed in the intervening years.”
Micheal Kleeman, a senior fellow at the University of California-San Diego who previously worked for Boston Consulting and Sprint, commented, “There is a fundamental assumption here that AI is a neutral technology, and it is not and, perhaps, cannot be. The algorithms and underlying data are inherently biased, so the decision-making will carry that bias, and there is then the issue of lack of transparency of the decision process. But there will be some simple decisions that can be made based on observation of preferred behaviors (environmental settings, etc.).
“I find it hard to see higher-level decisions being made autonomously. If the rollout is expanded it will be asymmetrical (income-related, etc.), but its impacts may be double-edged. Ideally it can free humans from some tasks, but at higher levels it can limit perspectives and reduce innovation. As a note, we obviously have some of this today, and at the lower levels it does help optimize resource use and time in some functions. And it can be adaptive to individuals.
Llewellyn Kriel, retired CEO of a media services company based in Johannesburg, South Africa, wrote, “The future in this context looks bleaker by the day. This is primarily due to a venal confluence of cybercrime, corporate bungling and government ignorance. This has meant and will continue to mean that individuals (‘end users’) will inevitably be overlooked by programmers. The digital divide will see parts of Africa plunge further and further behind, as intractable corruption entrenches itself as a lifestyle and no longer merely an identifying oddity. The continent is already a go-to haven of exploitation in which the only winners are corruptocrats, some outside nation-states and a handful of mega corporations (especially banks, insurance, medical and IT).”
Monique Jeanne Morrow, senior distinguished architect for emerging technologies at Syniverse, a global telecommunications company, said, “The digital version of ‘do no harm’ translates to valuing human safety. Understanding the potential for harm and mitigation is a starting point. Perhaps a new metric should be created that measures a tech development’s likely benefits to society and also indicates that some degree of human agency must always be in the loop. An example of perceived misuse, though cultural and geopolitical in nature, can be found in the recently reported news that ‘Scientists in China Claim They Developed AI to Measure People’s Loyalty to the Chinese Communist Party.’ There should be embedded ethics and attention to environmental, social and governance concerns as part of the tech development process. Automation is needed to remove friction; however, this tech should have ‘smart governance’ capability, with defined and agreed-upon ethics (understanding that the latter is highly contextual).”
Lea Schlanger, a senior business intelligence analyst based in North America, commented, “Absent a major shift in practice and policies, tech companies will keep churning out technologies designed primarily for their own agency as long as they are profitable (and don’t generate an egregious amount of bad PR). Based on the current state of the tech industry and American policies specifically, the main reason(s) individuals will not be in control are:
- Advancements in AI and Machine Learning automation are currently happening faster than research on the impacts they’ll have on society as a whole.
- Not enough research into how new technologies will impact society is being conducted as part of the technology development process (see the issues with facial recognition AI being trained only on data that is skewed towards white men).
- Our most recent and current legislative bodies are so out of touch with how current technology works or is viewed that not only are they barely working through policies around them, but they are also more likely to use the talking points or full-on policy drafts from lobbyists and their political parties when it comes time to create and vote on legislation.”
John Lazzaro, retired professor of electrical engineering and computer science at the University of California, Berkeley, said, “It is tempting to believe that we can outsource the details that determine our interactions to a machine while maintaining high-level control. But I would argue granular decisions are where the true power of being human lies. When we delegate nuanced choices away, we surrender much of our influence.
“We can see this dynamic in play whenever we compose using the Gmail web client. If you turn on the full suite of machine-learning text tools (smart compose, smart reply, grammar suggestions, spelling suggestions, autocorrect), preserving your personal voice in correspondence is a struggle. The medium quickly becomes the message, as you find yourself being prodded to use words, phrases and constructions that are entirely not your own.
“We also see this dynamic at play in the computational photography tools at the heart of the modern smartphone camera experience. Schematically, an algorithm recognizes that the photographer is taking an outdoor photo with a sky and uses machine-trained (or hand-crafted) models to fashion a sky that ‘looks like a sky should look.’ But on September 9, 2020, in the San Francisco Bay Area, when fire ash created an ‘apocalypse’ red-orange sky, computational photography models made it impossible to capture a sky that was ‘what a sky should never look like.’”
Claudia L’Amoreaux, principal at Learning Conversations—a global internet consultancy—and former director of education programs at Linden Lab (developers of Second Life) wrote, “The two words that stand out in your top-level question are ‘designed’ and ‘easily.’ In designing for human agency and decision-making, we do have a choice. Looking at how the EU handled data protection with the GDPR privacy legislation vs. how the U.S. has pretty much continued business as usual shows that we do have a choice…
“However, I am extremely skeptical that choices will be made in favor of human agency here in the U.S. Where’s the incentive? As long as tech companies’ profits are based on separating users from as much of their personal data as possible—for ad targeting, self-serving recommendations that maximize engagement, and resale—this situation will not improve. Broader, more-sophisticated applications of AI will only accelerate what is already happening today.
“And as regulations around privacy and data extraction do tighten in the U.S., however slightly, companies in the AI industry are and will continue to exploit the data of people in the less-developed world, as Karen Hao lays out so well in the AI Colonialism series in MIT Technology Review.
“I’ll share two examples that fuel my skepticism about human agency and decision-making. The first example regards the UK Biobank’s transfer of genetic data of half a million UK citizens in a biomedical database to China (reported in The Guardian). The sharing of sensitive genetic data in the UK Biobank project, launched as an ‘open science project’ in 2016, is based on a relationship of trust that is eroding as West/China relations transform. Sharing is not reciprocal. Motives aren’t parallel. The 500,000 humans with their DNA data in the Biobank are asked to trust that researchers will do a good job ‘managing risk.’ Is their agency and decision-making being prioritized in the conversations taking place? I don’t think so.
“A second example is the massive surveillance model employed by China that they are now exporting to countries that want to follow in their footsteps. With large infrastructure projects underway already through China’s Belt and Road Initiative, surveillance tech has become an add-on.
“Regarding the use of the term easily in your question—will people ‘…easily be in control of most tech-aided decision-making that is relevant to their lives…’—it’s not looking good for 2035.
“What key decisions will be mostly automated? To start to understand what key decisions will be mostly automated, we can look at what’s mostly automated today (and how quickly this has occurred). Let’s look at two related examples—higher education and hiring.
“Many universities have moved to automating the college admissions process for a variety of reasons. Increasing revenue is an obvious one, but some schools claim the move helps reduce bias in the admissions process. The problem with this, which has become very clear across many domains today, is that it all depends on the data sets used. Introducing AI can have the opposite effect, amplifying bias and widening the equity gap. Students most at risk of paying the price of increased automation in the admissions process are lower-income and marginalized students.
“Once students do make it into a university, they are likely to encounter predictive analytics tools making academic decisions about their futures that can narrow their options. The education podcast by APM Reports did a good piece on this, titled ‘Under a Watchful Eye.’ While most elite universities are keeping a hands-on approach for now, colleges that serve the majority of students are adopting predictive analytics to point students to what the universities deem a successful path from their perspective: continued tuition and graduation. This approach benefits the schools but not necessarily the students.
“Don’t get me wrong—identifying students at risk of failing early on and offering support and options to help them graduate is a good thing. But if the school’s priority is to ensure continuing tuition payments and maximize graduation rates, this can actually co-opt student agency. Once again, predictive analytics relies on historical data, and we know historical data can carry extensive baggage from long-term, systemic bias. Students of color and low-income students can find themselves pushed in different directions than they set out … when an alternative approach that prioritizes student agency might help them actually succeed on the path of their choice. In this case, predictive analytics helps the schools maintain their rankings based on graduation rates but sacrifices student preferences.
“Then there’s hiring. Hiring is already so heavily automated that job seekers are advised to redesign their CVs to be read by the algorithms. Otherwise, their application will never even be seen by human eyes. These are just a few examples.
“What key decisions should require direct human input? In the military, the use of autonomous lethal weapons systems should be banned. In the 2021 Reith Lecture series Living With Artificial Intelligence, Lecture 2—AI in Warfare, Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, explains, ‘At the moment we find ourselves at an unstable impasse, unstable because the technology is accelerating … we have about 30 countries who are in favor of a ban, as well as the EU parliament, the United Nations, the non-aligned movement, hundreds of civil society organizations, and according to recent polls, the great majority of the public all over the world.’
“I stand with Russell—who supports a ban—along with leaders in 30 countries and the majority of people around the world. But, as Russell says in his Reith Lecture…
‘On the other side, we have the American and Russian governments, supported to some extent by Britain, Israel and Australia, arguing that a ban is unnecessary…Diplomats from both the UK and Russia express grave concern that banning autonomous weapons would seriously restrict civilian AI research…I’ve not heard this concern among civilian AI researchers. Biology and chemistry seem to be humming along, despite bans on biological and chemical weapons.’
“The U.S. and Russian positions do not speak well for the future of human agency and decision-making, although Russell said he is encouraged by decisions humanity has made in the past to ban biological and chemical weapons, and landmines. It is not impossible, but we have a long way to go to ban autonomous lethal weapon systems. Am I encouraged? No. Hopeful? Yes.
“Because of a long history of structural racism and its encoding in the major databases used for training AI systems (e.g., ImageNet), the justice system, policing, hiring, banking (in particular, credit and loans), real estate and mortgages, and college applications and acceptance all involve life-changing decisions that should require direct human input. And in the medical domain, considering possible life-and-death decisions, we’ve seen that the use of image identification for skin cancer that has been trained predominantly on white skin may misidentify skin cancers. This is just one example in health care. Until we rectify the inherent problems with the training sets that are central to AI solutions, key decisions about life and death should require direct human input.
“How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? Because this is already happening now, it’s not hard to see how it will be and is changing human society. For one, it is creating two distinct classes with a huge gap in between—a techno-savvy class, and let’s call it a techno-naive class. Techno-savvy humans understand the basics of AI, algorithms, etc. They have the knowledge and the ability to protect their privacy (as much as is possible), opt out, assess validity and sources of content, detect fakes or at least understand that fakes are proliferating, etc. Techno-naive humans are currently and will be easily duped and taken advantage of—for their data, for their eyeballs and engagement metrics and for political gain by the unscrupulous groups among the techno-savvy.
“And whether savvy or naive, people (especially people of color) will find themselves at the mercy of and in the cross hairs of autonomous decision-making—e.g., misidentification, biases embedded in data sets, locked out of jobs, education opportunities, loans, digitally red-lined.
“The uses of AI are so vast already, with so little scrutiny. The public’s knowledge of AI is so minimal, agency is already so eroded, people are too willing to trade agency for convenience, most not even realizing that they are making a trade. Sure, I can find out what data Facebook, etc., has on me, but how many people are going to 1) take the time to do it, 2) even know that they can and 3) understand how it all works?
“I’ve made it clear I think we have serious work to do at international and national levels to protect privacy, human agency, access and equity.
“But we also need to make serious efforts in 1) how we teach young people to regard these technologies, and 2) in how we put these technologies to work in the preK-12 education systems and higher education. Education will play a major role in future outcomes around technology, decision-making, and human agency.
“I am encouraged by the efforts of organizations like UNICEF’s AI for Children project, the Harvard Berkman Klein Center’s Youth and AI project, MIT’s Responsible AI for Social Empowerment and Education (RAISE) project, to name a few. I think these projects are exemplary in prioritizing human agency and decision-making. I especially appreciate how they go out of their way to include youth voices.
“The next horizon is already upon us in education. The choices we make in AI-enabled teaching and learning will play a tremendous role in future outcomes around human agency and decision-making. China is currently pushing hard on AI-enabled teaching and adaptive learning with a focus towards helping students perform on standardized testing. And school systems in the U.S. are looking at their success.
“I understand and appreciate the role for adaptive learning systems like Squirrel AI, a dominant tutoring system in China today. But I lean in the direction of educators like Harvard professor Chris Dede, an early innovator of immersive learning, who emphasizes the necessity for an education system that prioritizes creativity, innovation, directed by student interest and passion. To become adults who value human agency and decision-making, young people need to experience an educational system that embodies and models those values. They need opportunities to develop AI literacy that presents a much wider lens than coding—offering opportunities to explore and engage algorithmic justice, biases, ethics, and especially, building and testing AI models themselves, from a young age.
“Despite my rather bleak answer of ‘No’ to the primary question, this is where I find encouragement and the possibility of ‘Yes’ for the year 2035. The children in kindergarten today who are training and building robots with constructivist platforms like Cognimates will be entering college and/or the workforce in 2035.
“In the 2019 post ‘Will AI really transform education?’ in The Hechinger Report, writer Caroline Preston reports on a conference on AI in education that she attended at Teachers College, Columbia University. Stefania Druga, who created the Cognimates platform, spoke at the conference, and Preston summarized: ‘In her evaluations of Cognimates, she found that students who gained the deepest understanding of AI weren’t those who spent the most time coding; rather, they were the students who spent the most time talking about the process with their peers.’”
Neil McLachlan, consultant and partner, Co Serve Consulting, predicted, “Highly tailored decision-support systems will be ubiquitous, but I expect that a great deal of decision-making—especially regarding ‘life-and-death’ matters—will remain largely the domain of humans.
“From an individual human perspective there may continue to be scope for some ‘fully’ automated decision-making in lower stakes areas such as when to service your car. Greater degrees of automation may be possible in highly controlled but technology-rich environments such as the higher-level implementations of rail traffic management utilising the European Train Control System. Machines and other systems, whether utilising artificial intelligence or not, will remain in decision-support roles.
“Eventually there will be changes to total employment and employment relations due to the broadening and accelerating rollout of technology and automated systems. Again, ultimate decision-making will remain human-focussed.”
Marydee Ojala, editor-in-chief of Online Searcher, Information Today, said, “At what point will the ‘human in the loop’ be much more able to affect autonomous decision-making in the future? Will we only expand upon our reliance on algorithms we don’t understand? ‘Data-driven’ decisions are becoming more and more prevalent, but this type of decision-making is often a numbers game that ignores the human implications of its decisions.
“Example: If research and development were totally data-driven in the pharma industry, decisions about which diseases to research and fund a cure for would concentrate only on diseases that are the most widespread and reasonably common (and profitable?) at the expense of addressing the damage caused by lesser-known diseases affecting a smaller number of people.
“While the research into COVID that resulted in vaccines was stellar, with huge ramifications for immunity worldwide to a deadly disease, would AI-based decision-making, in the future, discount doing the same type of research if the disease affected only a small number of people rather than a larger population? Might we no longer work to develop ‘orphan drugs’?
“Another example is the selection of content included in libraries. If AI determines which materials go into a collection, would bias be a factor depending on how the algorithm was constructed? If the algorithm takes popularity as one criterion, will a library be unable to buy professional development materials that would be read only by its staff?”
James Hanusa, futurist, consultant and co-founder at Digital Raign, commented, “I want to be an optimist, but based on my experience in the field to date, I’m afraid society will have declining input on decision-making. We are in the first years of a 20-year wave of automation and augmented intelligence that I would say has just begun. Most of the past major computing cycles have also run about 20 years: mainframes, 1960s-’80s; personal computers, 1980s-2000s; internet/mobile, 2000s-2020.
“Looking at the advances of the web’s impact on society over 20 years and the direction it is being driven by emerging tech, I can only imagine that the business models, machine-to-machine interoperability and convenience of use will put most people’s lives on ‘autopilot.’ Some current inputs that lead to this conclusion include the Internet of Everything, quantum computing and artificial general intelligence moving towards artificial superintelligence, which leading computer scientists have predicted could occur between 2030 and 2050.
“Another factor that I believe is important here is the combination of big tech, the value of data and trust factors in society. The most valuable companies in the world are tech companies, often rivaling countries in their market cap and influence. Their revenue generation is more and more the result of data becoming the most valuable commodity. It will be nearly impossible to change the trajectory of these companies developing autonomous systems at lower costs, especially when AIs start programming AIs. Societal trust in institutions is extremely low, but computers, though not technology companies, appear to be regarded as de facto highly reliable.
“The final point I would submit is based on this observation from Mark Weiser of Xerox PARC—‘The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.’
“Think of the complex yet simple developments of our current technology, for example, GPS telling us where to go or recommendation engines showing us what to watch and buy. What sophistication might we ‘outsource’ just based on those examples? Autonomous cars, medical decisions, mate selection, career paths?
“The key decisions that I believe should have direct human input or an override function include life-termination, birth, death and nuclear missile launch.
“A real fear I hold is that in the next 30 years, as the world moves toward a population of 10 billion and integrated exponential technologies simultaneously have a greater impact on society, a large part of humanity will become ‘redundant.’ The advances in technology will be far greater and longer-lasting than those of the industrial revolution, and they are something that capitalism’s creative destruction cannot overcome.
“Humans have unique capacities for creativity, community and consciousness. Those are the areas I believe our education systems should focus on developing in the populace. Computers will surpass us in intelligence in almost everything by 2035.”
Michel Grossetti, director of sociological research at the French National Center for Scientific Research (CNRS), said, “In the far future it is not impossible that automata will reach a level of realism and an autonomy in their behavior that leads many to consider them as an ‘other’ kind of ‘people.’ This could lead the social sciences to have to define ‘artificial persons.’ But automatons will always be caught in the relationships of power and domination between humans.”
Leiska Evanson, a Caribbean-based futurist and consultant, observed, “Machines allow ‘guilt-free decision-making’ along the lines of what the Nuremberg trials revealed about armies’ chains of command. Many will revel in such freedom from decision burden and happily blame ‘the machine’ instead of their choice to trust the machine—much as they have blamed television, social media and videogames for human failings. Programmers and computer scientists do not trust humans. Humans do not trust humans. Very simply, human programming of AI currently relies on reducing specific human input points to reduce the fallibility of ‘organic beings’—twitches, mistakes, miscalculations or bias. It has been known for at least a century that cameras, infrared and other visual/light-based technologies do not capture darker skin tones well, yet this technology is being used in oxygen sensors, security cameras and facial recognition, yielding the same mistakes and leading to wrongful incarceration, poor medical monitoring and death.”
Peter Lunenfeld, professor and vice chair of design and media arts, UCLA, predicted, “Humans will not be in control of important decision-making in the year 2035. They are not in charge of those decisions now, and in fact rarely or never have been throughout human history. AI and smart systems are less likely to ‘take control’ autonomously than they are to be taken control of from the start by already existing power structures and systems.
“We already have algorithms controlling access to healthcare, economic metrics impeding social action on climate change and social media targeting propaganda to influence or even dismantle democratic institutions. If the first two decades of the 21st century haven’t been able to dim 1990s techno-positivism, I’m not sure what will.
“AI and smart systems could conceivably be integrated into self-monitoring systems—think advanced Fitbits—and allow people to actually understand their own health and how to contribute to the development of a healthier society. Likewise, such advances in machine intelligence could be harnessed to clarify complex, data-driven decision-making that true citizenship increasingly demands.
“But as long as the long tail of neo-liberalism is driven by profit rather than concerns for the greater good, and our society’s most powerful stakeholders benefit personally and professionally from interacting with avid consumers rather than informed citizens, AI and smart systems are likely to reduce rather than increase human flourishing.”
Marc Brenman, managing partner at IDARE LLC, commented, “Humans already make many bad decisions based on wrong assumptions with poor or no inference, logic, evidence or critical thinking. Often, researchers and thinkers compare machines to the best of humans, instead of humans as we are. Machines are already making better decisions, even simple machines like toasters, ovens and microwaves. In addition, humans are already becoming more bionic and artificial, for example through implants to aid hearing and heartbeat and to reduce Parkinson’s disease and epilepsy; metal knees, hips and shoulders; dental implants; pins and screws; prostheses; etc. Our cars already make many decisions for us, such as automatic stopping and lane-keeping. GPS systems tell us where to go.”
Mark Perkins, co-author of the International Federation of Library Associations “Manifesto on Transparency, Good Governance and Freedom from Corruption,” commented, “Those with tech knowledge/education will be able to mitigate the effects of the ‘surveillance economy,’ those with financial means will be able to avoid the effects of the ‘surveillance economy,’ while the rest will lose some agency and be surveilled. For most humans—‘the masses’—key decisions such as creditworthiness, suitability for a job opening, even certain decisions made by tribunals, will be automated by autonomous and artificial intelligence, while those with the financial means will be able to get around these constraints. Unlike in the case of privacy settings, however, I think technical workarounds for retaining control/agency will be much less available/effective.”
Jim Spohrer, board member of the International Society of Service Innovation Professionals, previously a longtime IBM leader and distinguished technologist at Apple, responded, “People will likely have a form of personal-private cognitive mediator by 2035 that they rely on for certain decisions in certain realms. The key to decision-making in our own lives is not so much individual control as it is a process of interaction with trusted others. Are people today in control of important decisions? The short answer is ‘no.’ Instead, they rely on trusted mediators: trusted organizations, trusted experts or trusted friends and family members. Those trusted mediators help make our decisions today and they will continue to do so in 2035. The trusted mediators will likely be augmented by AI.”
Janet Salmons, consultant with Vision2Lead, said, “The accelerating rollout of tech-abetted, often autonomous decision-making will widen the divide between haves and have-nots, and further alienate people who are suspicious of technology. And those who are currently rejecting 21st century culture will become more angry and push back—perhaps violently.”
Robin Cooper, emeritus professor of computational linguistics at the University of Gothenburg, Sweden, commented, “I am pessimistic about the likelihood that humans will have come around to understanding the limits of technology and the right way to design appropriate interfaces by 2035. We will still be in the phase of people believing that AI techniques are more powerful than they really are because we can design machines that appear to behave sensibly. Any key decision should require direct human input, with assistance from knowledge gained from AI technology if appropriate. Given current AI technology trends, some major consequences of the broadening of autonomous AI decision-making by 2035 could include:
- Major disasters caused by absurdly wrong predictions or interpretations of events (e.g., early-warning systems for nuclear attack).
- A perpetuation of discriminatory behaviour based on previous data (e.g., systems that screen job applicants).
- A stifling of humans’ capabilities for change and creativity (again because current AI techniques are based on past behaviour rather than on reasoning about the future).”
Steven Miller, former professor of information systems at Singapore Management University, responded, “Just as the use of electricity is pervasive in most economies above a certain level of economic development, and just as the use of IT applications and computing is pervasive (again, in many but not all economies), using the wide variety of AI methods and always increasing amounts of available data to create ever-improving methods for better predictions, recommendations, classifications, mathematically optimized choices and decisions, and learning-based problem solving will become so pervasive as to be commonplace.
“This is clearly already in the process of happening in a relatively small but rapidly growing number of organizations, and this diffusion will continue to expand across a broader range of organizations for decades to come.
“As such, it is not possible to summarize the subset of ‘key decisions’ that will be AI-enabled through whatever combination of full automation or machine-human augmentation, as the deployment and usage of these capabilities will be so intertwined with ANY application where computing, IT and web access are part of how work is done and how goods and services and experiences are delivered.
“Related to this, there is no Yes-vs.-No dichotomy as to whether smart machines, bots and systems powered by AI will be designed (Yes) or will not be designed (No) to allow people to more easily be in control of the most tech-aided decision-making that is relevant to their lives. Both approaches will happen, and they will happen at scale. In fact, both approaches are already happening.
“There is a growing recognition of the need for ‘human-centered AI’ as per the principles enunciated in Ben Shneiderman’s 2022 book on this, as illustrated by the advocacy and research of Stanford’s Institute for Human-Centered AI, and as demonstrated by growing participation in AI Fairness, Accountability and Transparency (FAccT) communities and efforts, and many other initiatives addressing this topic. These types of efforts are generating a growing following for designing and deploying AI-enabled systems for both augmentation and automation that are human-centered and that adhere to principles of ‘responsibility.’
“We are already observing this dynamic tension between these Yes-vs.-No approaches, and we already see examples of the negative power of not designing AI-enabled systems to allow people to more easily be in control of their lives.
“As we proceed to the year 2035, there will be an increasingly strong dynamic tension between institutions, organizations and groups explicitly designing and deploying AI-enabled systems to take advantage of human ‘ways’ and limitations to indirectly influence or overtly control people versus those that are earnestly trying to provide AI-enabled systems that allow people to more easily be in control of not only tech-aided decision-making, but nearly all aspects of decision-making that is relevant to their lives.
“No one knows how these simultaneous and opposing forces will interact and play out. It will be messy. It will be dynamic. The outcome is not pre-determined. There will be a lot of surprises and new scenarios beyond what we can easily imagine today. Actors with ill intent will always be on the scene and will have fewer constraints to worry about. We just need to do whatever we can to encourage and enable a broader range of people involved in creating and deploying AI-enabled systems—across all countries, across all political systems, across all industries—to appropriately work within their context and yet to also pursue their AI efforts in ways that move in the direction of being ‘human-centered.’ There is no one definition of this. AI creators in very different contexts will have to come to their own realizations of what it means to create increasingly capable machines to serve human and societal needs.”
Deirdre Williams, an independent internet governance consultant, said, “I should begin by stating that my part of the world is different from your part of the world, and my reality is different from your reality. I live on a small island in the Caribbean and consider myself to be one among the technologically disadvantaged majority of the population of the world. We might consider ‘technologically disadvantaged’ as another synonym for ‘third world’ and Global South. People like me live on every continent, North and South, in huge cities and out in the countryside. And I think there are currently more of us than there are of you. And I have been thinking about the acceptance by the general population of ‘tech-aided decision-making’ and not about the degree to which developers will factor concern about human agency into what they create.
“In my experience, the emphasis for the developers is on how ‘clever’ and autonomous the software can become, and I don’t see that changing any time soon, but the emphasis for the creators does not necessarily correlate with the will of the end-user. If the will of the end user or his/her financial, geographic or other challenges mean that the software isn’t used, then human autonomy survives—except possibly in the technologically advantaged parts of the world.
“In the technologically disadvantaged parts of the world, we are not very good at collecting data or handling it with accuracy. Decision-making software needs good data. It doesn’t work properly without it. When it doesn’t work properly it makes decisions that hurt people and people notice and push back. There is a tendency to forget or not to acknowledge that data is history not prophecy; that it is necessary to monitor ALL of the variables, not just the ones humans are aware of—to note that patterns shift and things change, but not necessarily on a correctly-perceived cycle. So, I don’t believe that humans will NOT be in control of important decision-making, but I also don’t believe that the behaviour of designers is likely to change very much.”
Steve Jones, professor of communication, University of Illinois-Chicago, wrote, “Unfortunately I think we have to look at this—to borrow from the film ‘All the President’s Men’—in ‘follow the money’ fashion. Who would benefit from designing AI etc. in a way that gives people agency and who would benefit from not giving them agency? I expect few companies will want to give users much control, for a variety of reasons, not the least of which is that they will want to constrain what users can do within parameters that are a) profitable and b) serviceable (that is, achievable given the capabilities of the software and hardware). This is also going to be a delicate dance between what uses designers envision the technology being put to and what uses people are going to find for it, just as it has been for most every technology used widely by people. It will be the unanticipated uses that will surprise us, delight us and frighten us.”
Sean Mead, CEO at Ansuz Strategy, predicted, “By 2035, human moderation of AI and augmented technology will rarely be available in any significant manner in most settings. Cost-control, speed and reduction of ambiguity in response will drive cutting humans out of the decision loop in most circumstances. One of the exceptions will be combat robots and drones deployed by the U.S. which will maintain humans in the loop at least as far as approval of targets; the same will not be true for Russian and Chinese forces. The improved automation will threaten economic security for wide swaths of today’s employees as the creation of new jobs will fall far behind automated replacement of jobs.”
Ramon F. Brena, a longtime computer scientist and professor based in Mexico, commented, “A large percentage of humans will not be in control of many decisions that will impact their lives because the primary incentive for Big Tech is to make things that way. The problem is not in the technology itself but in the incentives for large tech companies like Meta, Google, Tesla and so on. Much of the relationship between people and digital products is shaped by marketing techniques like the ones described in the book ‘Hooked.’ Tech design is centered on making products addictive, thus driving people to make decisions with very little awareness about the likely consequences of their interactions with them. Digital products appear to make life ‘easy,’ but there is a hidden price. There is an implicit struggle between people’s interests and big companies’ interests. They could be aligned to some degree, but Big Tech companies choose to follow their own financial goals.”
Sarita Schoenebeck, associate professor and director of the Living Online Lab at the University of Michigan, said, “Some people will be in charge of some automated decision-making systems by 2035, but I’m not confident that most people will be. Currently, people in positions of power have control over automated decision-making systems and the people whose lives are affected by such systems have very little power over them. We see this across industries: tech, healthcare, education, policing, etc. I believe that the people and companies building automated systems will recognize that humans should have oversight and involvement in those systems, but I also believe it is unlikely that there will be any meaningful redistribution in regard to who gets to have oversight.”
Ruben Nelson, executive director of Foresight Canada, predicted, “My sense is that the slow but continuous disintegration of modern techno-industrial (MTI) cultures will not be reversed. Even today many people, if not yet a majority, are unable to trust their lives to the authority claimed by the major institutions of the culture—science, religion, the academy, corporate business.
“Over the last 30 to 40 years, more and more folks—still a minority but more than a critical mass—have, quietly, withdrawn their trust in such institutions. They no longer feel a deep sense of safety in their own culture. The result is a great fracturing of what used to be a taken-for-granted societal cohesion. One result is that many no longer trust the culture they live in enough to be deferential and obedient enough to enable the culture to work well. This means that those who can get away with behaviours that harm the culture will have no capacity for the self-limitation required for a civil society to be the norm.
“I expect greater turmoil and societal conflict between now and 2035, and many with power will take advantage of the culture without fear of being held accountable. So, yes, some AI will be used to serve a deeper sense of humanity, but minorities with power will use it to enhance their own game without regard for the game we officially say we are playing—that of liberal democracy. Besides, by 2035 the cat will be out of the bag that we are past the peak of MTI ascendancy and into a longish decline into greater incoherence. This is not a condition that will increase the likelihood of actions which are self-sacrificial.”
Tom Wolzein, inventor, analyst and media executive, wrote, “Without legislation and real enforcement, the logical cost-savings evolution will be to remove the human even from systems built with a ‘human intervention’ button. Note ‘with a human decision in the loop’ in this headline from a 6/29/2022 press release from defense contractor BAE Systems: ‘BAE Systems’ Robotic Technology Demonstrator successfully fired laser-guided rockets at multiple ground targets, with a human decision in the loop, during the U.S. Army’s tactical scenario at the EDGE 22 exercise at Dugway Proving Ground.’ Think about how slippery the slope is in just that headline. There is a more fundamental question, however. Even if there is human intervention to make a final decision, if all the information presented to the human has been developed through AI, then even a logical and ethical decision by a human based on the information presented could be flawed.”
Lenhart Schubert, a prominent researcher in the field of commonsense reasoning and professor of computer science at the University of Rochester, commented, “We will not have AI personal assistants that make new technology easy to access and use for everyone by 2035. Thirteen more years of AI development will not be enough to shift from the current overwhelming mainstream preoccupation with deep learning—essentially mimicry based on vast oceans of data—to automated ‘thinking,’ i.e., knowledge-based reasoning, planning and acquisition of actionable new knowledge through reading, NL interaction and perceptual experience.”
Simeon Yates, a professor expert in digital culture and personal interaction at the University of Liverpool and research lead for the UK government’s Digital Culture team, said, “I do not think humans will be in meaningful control of many automated decision-making activities in the future. But we need to put these in two categories. First, those decisions that are better made by well-designed automated systems—for example in safety-critical/time-critical environments where appropriate decisions are well documented and agreed upon and where machines can make decisions more quickly and accurately than people.
“Second, decisions that are based on data analytics and what is often erroneously called AI. Many, many systems described as AI are no more than good statistical models. Others are bad models or simply rampant empiricism linking variables. These are bad enough when applied to areas with little ethical implication, but many are applied to social contexts. See, for example, the following reports on bias in predictive algorithms for law enforcement and AI supposedly predicting crime. Whatever the methodological issues one might raise with the research into AI for law enforcement, its conclusion is a recommendation to use the modelling to highlight bias in the allocation of police resources away from deprived areas. The news article focuses on predictive analytics and calls it ‘AI.’ The poor empiricist reading is that the AI can help decide where to allocate policing resources. If implemented that way, then human agency is taken out of a very important set of decisions.
“I predict there will be thousands of such models and approaches sold as AI solutions to cash-strapped municipalities, or to companies, medical-care providers and others. After which humans will not have a clear role in these decisions. Nor will human agents—and that means citizens who have rights (digital or other)—be able to see or understand the underlying models. Why do I think this will be the case? Because it already is, and it is just creeping ever-onward.
“There should be much greater debate over things that fall into my first category. Where the removal of human agency is ethically beneficial—the plane does not crash, the reactor is safe, and the medicine dose is checked. As regards the second category (where there is a serious question over the ethics of passing decision-making to algorithms), we need debates and regulations on the transparency of AI/data-driven decision-making and areas where this is socially acceptable or not, and we need much greater data-use transparency.
“We also must educate our computer science colleagues about ethics and responsible innovation in this domain. Bad analyses and bad social science seem to me to come too often from the unthinking application of data analytics to social questions, especially where underpinned by naive understandings of social and political processes. This goes beyond bias in data.”
Scott Santens, author of “Let There Be Money” and editor of @UBIToday, said, “Although it is entirely true that technology can liberate people and increase human agency and enable everyone to be more involved in the collective decision-making process of how to implement technology in a way that benefits everyone, the status quo is for it to benefit only some, and that will remain until people start thinking differently.
“Humankind has a trust problem. Society today seems to be built on distrust—we must assume by default we can’t trust each other. We are witnessing a continuing decline of trust around the world. The trend is not toward greater trust over time. Will that change because we have more technology? Technology is playing an active role in the decline of trust as it is used to spread misinformation and disinformation and create polarization. More technology seems unlikely to resolve this. It is more likely that as technology advances, those in power will prefer to sustain that power by avoiding putting more control into the hands of humans. They will, instead, choose to utilize the opportunity tech provides to reduce the agency of humans. If humans can’t be trusted, trust machines instead. The public seems likely to go along with that as long as trust in each other remains low.
“If social changes can be made through social advances like universal basic income that might increase trust, then that’s when we’ll start seeing decision-making become less centralized and instead more widely distributed.”
William Lehr, an economist and tech industry consultant who was previously associate director of the MIT Research Program on Internet and Telecoms Convergence, wrote, “Bots, agents, AI (and still mostly non-AI ICTs) are already far more advanced than most folks recognize—your car, your appliances and the way companies make decisions already are heavily ICT-automated and AI in multiple forms is part of that. Most folks are incapable of figuring out how their gadgets work even if they have lots of old-fashioned common sense and hands-on (non-tech) savvy and skills. When your car/washing machine/stove breaks today, it is often due to failure in a control board that requires specialized software/IT tools to diagnose.
“By 2035 we will have lots more AI to obscure and address the complexity of ICT automation. For many folks that will make life easier (fewer decisions requiring human, real-time cognition), although lots more human-focused decisions will be forced on people as a matter of policy (e.g., to enable realistic end-user ‘consent’ to data usage as a byproduct of privacy/security/data-management policies being crafted, and so on).
“Yes, this means that we will substitute one type of problem for another: an ‘I have to think about my tech’ problem in place of ‘it just works.’ However, that is necessary. So, in the end will it really be ‘easier’? I doubt it. Look at the arrival of word-processing software. Has it become easier to write papers? I wrote papers just fine in college in the days of manual typewriters, and today I spend much more time revising and many more revisions are possible. Is the end product that much better? Not really, but it sure is different.
“‘Who is in control?’ is the big-bucks question: Who/what/how is control sustained? AI will have to be part of the solution because it will certainly be part of the problem.”
Peter Dambier, an internet pioneer based in Europe, wrote, “I am afraid technology might be misused by somebody else controlling my life. That is why I prefer only the assistance I ask for.”
Jim Dator, well-known futurist, director of the Hawaii Center for Futures Studies and author of the fall 2022 book “Beyond Identities: Human Becomings in Weirding Worlds,” wrote a three-part response tying into the topics of agency, identity and intelligence.
“I. Agency – In order to discuss the ‘future of human agency and the degree to which humans will remain in control of tech-aided decision-making’ it is necessary to ask whether humans in fact have agency in the way the question implies, and, if so, what its source and limits might be.
“Human agency is often understood as the ability to make choices and to act on behalf of those choices. Agency often implies free will—that the choices humans make are not predetermined (by biology and/or experience, for example) but are made somehow freely.
“To be sure, most humans may feel that they choose and act freely, and perhaps they do, but some evidence from neuroscience—which is always debatable—suggests that what we believe to be a conscious choice may actually be formulated unconsciously before we act; that we do not freely choose, rather, we rationalize predetermined decisions. Humans may not be rational actors but rather rationalizing actors.
“Different cultures sometimes prefer certain rationalizations over others—some say God or the devil or sorcerers or our genes made us do it. Other cultures expect us to say we make our choices and actions after carefully weighing the pros and cons of action—rational choices. What we may actually be doing is rationalizing not reasoning.
“This is not just a picayune intellectual distinction. Many people reading these words live in cultures whose laws and economic theories are based on assumptions of rational decision-making that cause great pain and error because those assumptions may be completely false. If so, we need to rethink (!) the foundations of our political economy and base it on how people actually decide instead of how people 300 years ago imagined they did and upon which they built our obsolete constitutions and economies. If human agency is more restricted than most of us assume, we need to tread carefully when we fret about decisions being made by artificial intelligences. Or maybe there is nothing to worry about at all. Reason rules! I think there is reason for concern.
“II. Identity – The 20th century may be called the Century of Identity, among other things. It was a period when people, having lost their identity (often because of wars, forced or voluntary migration, or cultural and environmental change), sought either to create new identities or to recapture lost ones. Being a nation of invaders, slaves and immigrants, America is currently wracked with wars of identity. But there is also a strong rising tide of people rejecting identities that others have imposed on them, seeking to perform different identities that fit them better. Most conspicuous now are diverse queer, transexual, transethnic and other ‘trans’ identities, as well as biohackers and various posthumans, existing and emerging.
“While all humans are cyborgs to some extent (clothes may make the man, but clothes, glasses, shoes, bicycles, automobiles and other prostheses actually turn the man into a cyborg), true cyborgs in the sense of mergers of humans and high technologies (biological and/or electronic) already exist with many more on the horizon.
“To be sure, the war against fluid identity is reaching fever pitch and the outcome cannot be predicted, but since identity-creation is the goal of individuals struggling to be free and not something forced on them by the state, it is much harder to stop and it should be admired and greeted respectfully.
“III. Intelligence – For most of humanity’s short time on Earth, life, intelligence and agency were believed to be everywhere, not only in humans but in spirits, animals, trees, rivers, mountains, rocks, deserts, everywhere. Only relatively recently has intelligence been presumed to be the monopoly of humans who were created, perhaps, in the image of an all-knowing God, and were themselves only a little lower than the angels.
“Now science is (re)discovering life, intelligence and agency not just in homo sapiens, but in many or all eukarya [plants, animals, fungi and some single-celled creatures], and even in archaea and bacteria as well as Lithbea—both natural and human-made—such as xenobots, robots, soft artificial-life entities, genetically engineered organisms, etc. See Jaime Gómez-Márquez, ‘Lithbea, A New Domain Outside the Tree of Life,’ Richard Grant’s Smithsonian piece ‘Do Trees Talk to Each Other?’ Diana Lutz’s ‘Microbes Buy Low and Sell High’ and James Bridle’s essay in Wired magazine, ‘Can Democracy Include a World Beyond Humans?’ in which he suggests, ‘A truly planetary politics would extend decision-making to animals, ecosystems and—potentially—AI.’
“Experts differ about all of this, as well as about the futures of artificial intelligence and life. I have been following the debate for 60 years, and I see ‘artificial intelligence’ to be a swiftly moving target. As Larry Tesler has noted, ‘intelligence is what machines can’t do yet.’ As machines become smarter and smarter, intelligence always seems to lie slightly ahead of what they just did. The main lesson to be learned from all of this is not to judge ‘intelligence’ by 21st century Western, cis male, human standards. If it helps, don’t call it ‘intelligence.’ Find some other word that embraces them all and doesn’t privilege or denigrate any one way of thinking or acting. I would call it ‘sapience’ if that term weren’t already appropriated by self-promoting homo. Similarly, many scientists, even those in artificial life (or Alife), want to restrict the word ‘life’ to carbon-based organic processes. OK, but they are missing out on a lot of processes that are very, very lifelike that humans might well want to adapt. It is like saying an automobile without an internal combustion engine is not an automobile.
“Humanity can no longer be considered to be the measure of all things, the crown of creation. We are participants in an eternal evolutionary waltz that enabled us to strut and fret upon the Holocene stage. We may soon be heard from no more, but our successors will be. We are, like all parents, anxious that our prosthetic creations are not exactly like us, while fearful they may be far too much like us after all. Let them be. Let them go. Let them find their agency in the process of forever becoming.”
Peter Levine, professor of citizenship and public affairs at Tufts University, commented, “Let’s look at three types of agency. One is the ability to make choices among available options, as in a supermarket. AI is likely to accommodate and even enhance that kind of agency, because it is good for sales. Another kind of agency is the ability to construct a coherent life that reflects one’s own thoughtful principles. Social systems both enable and frustrate that kind of agency to varying degrees for various people. I suspect that a social system in which AI is mostly controlled by corporations and governments will largely frustrate such agency. Fewer people will be able to model their own lives on their own carefully chosen principles. A third kind of agency is collective: the ability of groups to deliberate about what to do and then to implement their decisions. AI could help voluntary groups, but it tends to make decisions opaque, thus threatening deliberative values.
“The survey asks about the relationship between individuals and machines. I would complicate that question by adding various kinds of groups, from networks and voluntary associations to corporations and state agencies. I think that, unless we intervene to control it better, AI is likely to increase the power of highly disciplined organizations and reduce the scope of more-democratic associations.”
Susan Aaronson, director of the Digital Trade and Data Governance Hub at George Washington University, wrote, “Governmental agencies should not deploy automated decision-making to address questions related to human rights (access to credit, education, healthcare).”
Wendell Wallach, bioethicist and director of the Artificial Intelligence and Equality Initiative at the Carnegie Council for Ethics in International Affairs, commented, “I do not believe that AI systems are likely by 2035 to have the forms of intelligence necessary to make critical decisions that affect human and environmental well-being. Unfortunately, the hype in the development of AI systems focuses on how they are emulating more and more sophisticated forms of intelligence and, furthermore, on why people are flawed decision-makers. This will lead to a whittling away of human agency in the design of systems, unless or until corporations and other entities are held fully responsible for harms that the systems are implicated in.
“This response is largely predicated on how the systems are likely to be designed, which will also be a measure, uncertain at this time, as to how effective the AI ethics community and the standards it promulgates are upon the design process. In the U.S. at the moment, we are at a stalemate in getting such accountability and liability beyond what has already been codified in Tort. However, this is not the case in Europe and in other international jurisdictions. Simultaneously, standards-setting bodies, such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization are making it clear that maintaining human agency should be central.
“Nevertheless, we are seeing the development and deployment of autonomous weapons systems and other autonomous artifacts in spite of the fact that meaningful human control is often either an illusion or near-impossible to implement. We probably will need a disaster before we can create sufficient popular pressure that focuses on upgrading of our laws and regulatory bodies to reinforce the importance of human agency when deploying AI systems.”
Steven Marsh, an associate professor at Ontario Tech University, a computational philosopher expert in human norms, wrote, “To begin with, the question presupposes that the systems we will be using will indeed be ‘smart’ or ‘intelligent’ enough to be in control. I see no reason why this should be the case. It is indeed the case that we have plenty of machines now that we put ‘in control’ but they’re not smart and, as liminal creatures, humans are able to deal with the edge cases that systems cannot much better than the machines. I believe this is likely to continue to be the case for some time. The danger is when humans relinquish that ability to step in and correct. Will this be voluntary? Perhaps. There are organizations that are active in trying to ensure we can, or that systems are less opaque (like the Electronic Frontier Foundation, for instance), and this is going to be necessary.
“My own take on where humans might remain in the loop is in the area of ‘slow’ computing, where when systems do reach edge cases, situations they haven’t experienced or don’t know how to deal with, they will appropriately and obviously defer to humans. This is especially true where humans are present. There are plenty of philosophical problems that are present here (trust, for one) but if handled properly conundrums like the trolley problem will be seen to be the fallacy that they are.”
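To make the ‘slow’ computing idea Marsh describes concrete, the sketch below is an editorial illustration only (not code from Marsh): the automated system acts on its own only when its confidence clears a threshold, and otherwise hands the case, visibly, back to a person. The function names, the confidence threshold and the classifier are all hypothetical.

```python
# A minimal, hypothetical sketch of "slow computing" deferral:
# the machine decides only when confident; edge cases go to a human.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "machine" or "human"

def decide(case: dict,
           model: Callable[[dict], Tuple[str, float]],
           ask_human: Callable[[dict], str],
           threshold: float = 0.9) -> Decision:
    """Return the model's decision only when it is confident; otherwise defer."""
    label, confidence = model(case)
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="machine")
    # Edge case the system does not know how to handle: defer, openly, to a person.
    return Decision(ask_human(case), confidence, decided_by="human")
```

In this pattern the handoff is explicit and recorded, which keeps the human step visible rather than quietly relinquished.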
Rich Miller, CEO and managing director at Telematica and chair at Provenant Data, said, “For the next 10 to 12 years, the use of AI will not be totally autonomous; rather, it will provide the sense to the end user that the action being taken is ‘assisted’ by the AI and that the human is supervising and directing the offering. The illusion of control may be provided and important for adoption and continued use, but it will be illusion, nonetheless. The sources, the intent and the governance of the AI are as much a factor as any other one could name in where this all will go. The intent of the provider/developer/trainer of the AI must be considered as the real locus of ‘control of important decision-making.’ Because the intent and objectives of these AI offerings are more than likely to be related to impacting the end-user’s behavior (whether consuming media, purchasing merchandise or services, voting, or managing the navigation of an automobile), it is unlikely that even well-intentioned attempts by government to regulate these offerings will be very effective.”
Rich Salz, senior architect and principal engineer at Akamai Technologies, wrote, “No, by 2035 machines, bots and systems powered by AI will not be designed to allow people to easily be in control over most tech-aided decision-making. Commerce will deploy things before government can catch up. Commerce has money as its focus, not individuals. (Not sure government does either, but it’s better than commerce.)”
Richard Barke, associate professor of public policy at Georgia Tech, said, “I can imagine some enterprises that remove humans entirely from the decision process, especially in the commercial sector. It will be much more controversial and difficult to do that in areas such as healthcare, law enforcement, justice and other areas where participation is involuntary. In some areas, perhaps education, we are more likely to see a blend, but probably not total ‘control.’”
Randall Mayes, technology futurist and journalist, commented, “One way to categorize tasks is routine/non-routine and mechanical/cognitive. Cognitive, routine tasks require different skills than mechanical, non-routine tasks, and so on. Because of the current limitations of deep learning, robots still have trouble folding towels and opening doors, and AI used for tasks that require cognitive skills has trouble with understanding cause and effect and with reasoning.
“Currently, a number of brain initiatives, DARPA projects, Google, Elon Musk, and start-up AI researchers are attempting to overcome deep learning’s limitations. While the U.S. and China are considered the world leaders in AI, arguably the two biggest AI breakthroughs in the past two decades were from other countries. Great Britain’s DeepMind conquered chess, Go, and protein folding, and Canadian researchers tweaked the backpropagation algorithm to make it efficiently determine the weights of the nodes in deep learning algorithms. So, where and when the next big breakthrough will occur is a difficult forecasting problem.
“MIT’s study of the trolley problem and autonomous vehicles revealed that there was no consensus when making ethical decisions regarding whom to kill. Decisions can vary by age, gender, culture, religion, etc. So, automated decisions could potentially further divide citizens nationally and globally.”
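For readers unfamiliar with the mechanism Mayes mentions, the toy sketch below shows what “determining the weights of the nodes” means in practice: backpropagation pushes a network’s error backward through its layers and adjusts every weight down the resulting gradient. This is an editorial illustration of the general technique only, not the specific algorithmic refinement the comment alludes to.

```python
# Toy backpropagation: a two-layer network learns XOR by repeatedly
# propagating its error backward and nudging every weight down the gradient.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden-layer weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error gradients for every layer's weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update: this is how the weights get "determined"
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# With sufficient training the outputs should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```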
Roger K. Moore, editor of Computer Speech and Language and professor at the University of Sheffield, responded, “In some sense the genie was released from the bottle during the industrial revolution, and human society is on a track where control is simply constantly reducing. In my view, unless there is massive investment in understanding this, the only way out will be that we hit a global crisis that halts or reverses technological development (with severe societal implications). I am basing my decision on the history of automation thus far. Already, very few individuals are capable of exerting control over much of the technology in their everyday environment, and I see no reason for this trend to be reversed. Even accessing core technologies (such as mending a watch or fixing a car engine) is either impossible or highly specialised. This situation has not come about by careful societal planning; it has simply been an emergent outcome from evolving technology—and this will continue into many areas of decision-making.”
Fred Zimmerman, publisher at Nimble Books, said, “Big Tech has one primary objective: making money. The degree to which ‘AI’ is available to enhance human decision-making will be primarily a function of revenue potential.”
Vincent Alcazar, a retired U.S. military strategist experienced in global intelligence, observed, “The limitations here lie less with AI technology than with human science, especially in the domain of cognitive science, whose advancements form the knowledge boundaries needed by AI to expand its capabilities enough to satisfy the question.”
Mike Silber, South African attorney and head of regulatory policy at Liquid Intelligent Technologies, wrote, “A massive digital divide exists across the globe. Certainly, some people will have tech-abetted decision-making assist them, others will have it imposed on them by third-party decision makers (governments, banks, network providers) and yet others will continue to remain outside of the technology-enabled space.”
Jeremy Pesner, senior policy analyst, Bipartisan Policy Center, Georgetown University, responded, “We can’t become literate about our data and information if we don’t even know exactly what they look like! At the end of the day, it’s important that we know how the machines think, so that we never come to see them as inscrutable or irreproachable. When it comes to the public making data-based decisions, a challenge is that some of the biggest are made by people who are not especially data-literate. They’re going to rely on machines to analyze whatever data they have and either follow the machine’s advice or disregard it and go with their gut. The best collaborations between man and machine on decision-making will always revolve around humans who could analyze the data manually but know how to program machines to do it for them. In such a case, if there’s some kind of error or suspicious output, those humans know how to recognize it and investigate.
“Many automated decisions will be based on which data to capture (so it can be mined for some kind of preferencing algorithm), what suggestions to then offer consumers and, of course, what ads to show them. When it comes to issues involving health and legal sentencing and other high-risk matters, I do expect there to be a human in the mix, but again, they’ll need to be data-literate so they can understand what characteristics about a person’s data led the machine to make that decision. Europe’s AI Act, which puts restrictions on different types of AI systems according to their risk, will hopefully become the de facto standard in this regard, as people will come to understand that machines can always be second-guessed.
“Then again, I’m concerned that much of the technical information and many of the details—which are what determine any given decision a machine will make—will remain largely masked from users. Already, on smartphones, there is no way to determine the memory allocation of devices or examine their network traffic without the use of third-party, often closed-source apps. With more and more out-of-the-box standalone IoT devices that have sleek smartphone interfaces, it will be extremely difficult to actually know what many of our devices are doing. This is only more true for centralized Internet and social media services, which are entirely opaque when it comes to the use of consumer data. Even the cookie menu options that resulted from GDPR only describe data use in broad terms, like ‘necessary cookies’ and ‘cookies for analytics.’”
Sebastian Hallensleben, head of digitalisation and artificial intelligence at VDE Association for Electrical, Electronic and Information Technologies, said, “In my view, the vast majority of humans who are affected (positively or negatively) will indeed not be in control of decisions made by machines. They will lack access to sufficient information as well as the technical competence to exert control. However, a small minority of humans will be in control. In a rosy future, these would be regulatory bodies ultimately shaped through democratic processes. In a less rosy (but probably more likely) future, these will be the leadership tiers of a small number of large global companies.”
Tyler Anderson, a senior user-experience designer at Amazon, commented, “While there are a few highly-visible areas over which humanity will retain control (think self-driving cars as the most present example), many of the more important areas of our lives are already slipping solely into the hands of AI and related IT. The types of information we consume on a daily basis and the long arc of how our popular culture reflects our world are influenced by algorithms over which we have no control. This will only increase and spread to more areas of our lives.
“Our healthcare system, already heavily commoditized and abused, is soon to be invaded by big tech companies looking to automate the caregiving process. There still may be doctors present to add a human touch to medical visits, but behind the scenes all patients will have their anonymized healthcare data put into an information thresher and algorithmic diagnoses will be tested based on reported symptoms and demographic details. This is but one example of the critical areas of the human experience into which AI will be further integrated, and the results of this integration are likely to be uneven at best.
“It is in the day-to-day activities of society where the concern really lies, where it’s quite likely that human input won’t even be considered. Hiring processes, scripting on television shows, political campaigns—all of these are areas that have a direct impact on human lives, and already we’re seeing AI-fueled predictive algorithms affecting them, often without the proper knowledge of what’s going on behind the scenes to generate the decisions that are being put into action. If these aspects of decision-making have already lost the benefit of human input, how can we hope things will get better in the future?
“All of these and many more will one day be completely run by AI, with less human input (because just think of the savings you get by not having to employ a human!). As this continues to proliferate across industries and all aspects of our lives, the damage to our society could become irreparable. The influence of algorithmic AI is permanently changing the ways in which we view and interact with the world. If this continues unchecked, it could result in a new form of AI-driven fascism that may further decimate culture and society.”
George Barnett, a professor and researcher expert in the roles of internet communications in social and economic development and cultural change, University of California-Davis, said, “What key decisions will be mostly automated? Financial. What key decisions should require direct human input? Interpersonal relations. How might the broadening and accelerating rollout of tech-abetted, often autonomous decision-making change human society? It will allow humans to engage less in the daily mundane and to spend more time doing creative things and engaging in social interaction.”
Laurie Orlov, principal analyst at Aging and Health Technology Watch, said, “Straightforward decisions that make interactions more efficient (i.e., less searching, selecting and typing) will more easily be accepted and are therefore possible. This is already happening today. That includes the setting of defaults for multiple choices while enabling override by people.
“Retail/commercial selections will increasingly offer smarter defaults based on prior interactions, assuming that those defaults are based on actual behavior and not primarily for marketing purposes.
“Healthcare decisions will not be automated to this degree by 2035—as they are barely assisted today by software solutions. And distrust of technology-enabled decision-making will persist (and perhaps worsen, especially among an aging but tech-knowledgeable population). See the Meta Pixel use in hospital portals that revealed personal information and the associated lawsuit, as well as privacy issues increasingly visible in the media. Also see legislative efforts to rein in big tech. Just because something is possible does not imply that it will actually happen.”
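The “smarter defaults with human override” pattern Orlov describes can be illustrated with a small sketch. This is an editorial example with hypothetical names, not anything from Orlov: the system proposes whatever the person has actually chosen most often before, and the person’s explicit choice always wins.

```python
# Hypothetical sketch of a "smart default with human override" selection:
# propose a default drawn from the user's own prior behavior, but let an
# explicit user choice replace it every time.
from collections import Counter
from typing import Optional, Sequence

def suggest_default(prior_choices: Sequence[str]) -> Optional[str]:
    """Propose the option the user has picked most often before."""
    if not prior_choices:
        return None
    return Counter(prior_choices).most_common(1)[0][0]

def make_selection(prior_choices: Sequence[str],
                   user_override: Optional[str] = None) -> str:
    """The user's explicit choice always wins over the suggested default."""
    if user_override is not None:
        return user_override
    default = suggest_default(prior_choices)
    return default if default is not None else "ask_user"

# Example: the default follows prior behavior unless the person overrides it.
history = ["oat milk", "oat milk", "whole milk"]
assert make_selection(history) == "oat milk"
assert make_selection(history, user_override="soy milk") == "soy milk"
```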
Mahendranath Busgopaul of the Halley Movement Coalition, based in Africa, said, “Most decisions will be taken through AI. Most decisions.”
To read the full survey with analysis, please click here.
To read anonymous responses to the report, please click here.
To download a printable version of the report, please click here.