Elon University

Survey X: Artificial Intelligence and the Future of Humans (Credited Responses)

Results released in December 2018: To illuminate current attitudes about the potential impacts of digital life in the next decade and assess what interventions might emerge to help resolve challenges, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate and public practitioners and other leaders in summer 2018, asking them to share their answer to the following query:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

“Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030. Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment. Why? What is your hope or fear? What actions might be taken to assure the best future?”

In answer to Question One:

  • About 63% of these respondents said most people will be mostly better off.
  • About 37% said people will not be better off.
  • 25 respondents chose not to select either option.

Among the key themes emerging in the December 10, 2018 report from 979 expert respondents’ overall answers were:

CONCERNS

  • Human Agency: Decision-making on key aspects of digital life is automatically ceded to code-driven, “black box” tools. People lack input and do not learn the context of how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
  • Data Abuse: Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people’s decisions for them. These systems are globally networked and not easy to regulate or rein in.
  • Job Loss: The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
  • Dependence Lock-in: Many see AI as augmenting human capacities, but some predict the opposite – that people’s deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
  • Mayhem: Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals’ reach into economic systems.

POTENTIAL REMEDIES

  • Global Good Is #1: It is vital to improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity’s best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements – to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
  • Values-Based Systems: Develop policies to assure AI will be directed at the common good. Adopt a ‘moonshot mentality’ to build inclusive, decentralized intelligent digital networks ‘imbued with empathy’ that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
  • Prioritize People: Alter economic and political systems to better help humans “race with the machines.” Direct energies to radical human improvement. Reorganize economic and political systems toward the goal of expanding humans’ capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.

BENEFITS OF AI BY 2030

  • New Life and Work Efficiencies: AI will be integrated into most aspects of life, producing new efficiencies and enhancing human capacities. It can optimize and augment people’s life experiences, including the work lives of those who choose to work.
  • Health Care Improvements: AI can revolutionize medical and wellness services, reduce errors and recognize life-saving patterns, opening up a world of opportunity and options in health care.
  • Education Advances: Adaptive and individualized learning options and AI “assistants” might accelerate targeted, effective education, expanding the horizons of all.


Written elaborations by respondents who took credit for their remarks

Following are full responses made by study participants who chose to make their names public along with their remarks. Some people chose not to provide a written elaboration. Some of these are the longer versions of responses that are contained in shorter form in the survey report. These responses were collected in an opt-in invitation to more than 10,000 people.

Their predictions:

Peter Stone, professor of computer science at the University of Texas – Austin and chair of the first study panel of the 100-Year Study on Artificial Intelligence (AI100), responded, “As chronicled in detail in the AI 100 report, I believe that there are both significant opportunities and significant challenges/risks when it comes to incorporating AI technologies into various aspects of everyday life. With carefully crafted industry-specific policies and responsible use, I believe that the potential benefits outweigh the risks. But the risks are not to be taken lightly.”

Judith Donath, author of “The Social Machine: Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots — intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological wellbeing, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us — and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: programs will compose many of our messages and our online/AR appearance will be computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: if they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them — to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Eugene H. Spafford, internet pioneer and founder and executive director emeritus of the Center for Education and Research in Information Assurance and Security, commented, “Without active controls and limits, the primary adopters of AI systems will be governments and large corporations. Their use of it will be to dominate/control people, and this will not make our lives better.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “In 2018 a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another. While today people provide ‘consent’ for their data usage, most people don’t understand the depth and breadth of how their information is utilized by businesses and governments at large. Until every individual is provided with a sovereign identity attached to a personal data cloud they control, information won’t truly be shared – just tracked. By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm. The other, rarely discussed issue underlying the ‘human-AI augmentation’ narrative is the economic underpinnings driving all technology manufacturing. Where exponential-growth shareholder models are prioritized, human and environmental well-being diminishes. Multiple reports from people like Joseph Stiglitz point out that while AI will greatly increase GDP in the coming decades, the benefits of these increases will favor the few versus the many. It’s only by adopting ‘Beyond GDP’ or triple-bottom-line metrics that ‘people, planet and profit’ will shape a holistic future between humans and AI.”

Andrew Wycoff, the director of OECD’s directorate for science, technology and innovation, and Karine Perset, an economist in OECD’s digital economy policy division, commented: “Twelve years ago, we could already glimpse our connected, mobile future, although few anticipated that the number of people online would jump from 1 billion in 2004 to 4 billion today, or that we would exceed 100% mobile phone subscribership in many countries, with access being achieved through pocket computers. The pace of progress looking forward is set to keep accelerating. Research in artificial intelligence and the speed of its deployment have already dramatically shrunk the time lag and distinction between research and its real-world impact. Twelve years from now, we will benefit from radically improved accuracy and efficiency of decisions and predictions across all sectors. Machine learning systems will actively support humans throughout their work and play. This support will be unseen but pervasive – like electricity. As machines’ ability to sense, learn, interact naturally and act autonomously increases, they will blur the distinction between the physical and the digital world. AI systems will interconnect and work together to predict and adapt to our human needs and emotions. The growing consensus that AI should benefit society at large leads to calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure these systems are transparent and explainable and respect human rights, democracy, culture, non-discrimination, privacy and control, safety and security. Given the inherently global nature of our networks and the applications that run across them, we need to improve collaboration across countries and stakeholder groups to move toward common understanding and coherent approaches to key opportunities and issues presented by AI. This is not too different from the post-war discussion on nuclear power. We should also tread carefully toward Artificial General Intelligence and avoid current assumptions on the upper limits of future AI capabilities.”

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I am hesitant to choose one way or the other. I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition. We could start with owning our own digital data and the data from our bodies, minds and behavior, and then by correcting our major tech companies’ incentives away from innovation for everyday convenience and toward radical human improvement. As an example of what tech could look like when aligned with radical human improvement, cognitive prosthetics will one day warn individuals away from potential cognitive biases and help correct them – much as cars today have sensors that let you know when you drift off to sleep or change lanes without signaling. This could lead to better behaviors at school, home and work, and encourage people to make better health decisions.”

Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, Canada, commented, “I am confident that in 2030 both arms of this query will be true: AI-driven algorithms will substantially enhance our abilities as humans, and human autonomy and agency will be diminished. Whether people will be better off than they are today is a separate question, and the answer depends to a substantial degree on how looming technological developments unfold. On the one hand, if corporate entities retain unbridled control over how AI-driven algorithms interact with humans, I believe that people will be less well off, as the loss of autonomy and agency will be largely to the benefit of the corporations. On the other hand, if ‘we the people’ demand that corporate entities deploy AI-driven algorithms in a manner that is sensitive to the issues of human autonomy and agency, then there is a real possibility for us to be better off – enhanced by the power of the AI-driven algorithm and yet not relegated to an impoverished seat at the decision-making table. I think one could even parse this further – anticipating that certain decisions can be comfortably left in the hands of the AI-driven algorithm, with other decisions either falling back on humans or arrived at through a combination of AI-driven algorithmic input and human decision making. If we approach these issues skillfully – and it will take quite a bit of collaborative work between ethicists and industry – we can have the best of both worlds. On the other hand, if we are lax in acting as watchdogs over industry, we will be functionally rich and decisionally poor.”

Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, said, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all reemerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit – and who will be disadvantaged – in this new world depends on how broadly we analyze these questions, today, for the future.”

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. But like all technologies, AI and machine learning are tools. As I wrote in a number of places, including my book with Andy McAfee, ‘The Second Machine Age,’ more-powerful tools give humans more power to change the world. Collectively, we will have more choices, not fewer. I’m a mindful optimist, meaning that I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable so the right question is not ‘what will happen?’ but ‘what will we choose to do?’ We need to work aggressively to make sure technology matches our values.  This can and must be done at all levels, from government, to business, to academia, and to individual choices. The MIT Initiative on the Digital Economy that I direct and the MIT Inclusive Innovation Challenge are seeking to contribute to these goals.”

Thomas Schneider, head of International Relations Service and vice-director at the Federal Office of Communications (OFCOM) in Switzerland, said, “AI will help mankind be more efficient, live safer and healthier lives and manage resources like energy and transport more efficiently. At the same time, there are a number of risks that AI may be used by those in power to manipulate, control and dominate others. (We have seen this with every new technology: It can and will be used for good and bad…) Much will depend on how AI is governed: If we have an inclusive and bottom-up governance system of well-informed citizens, then AI will be used for improving our quality of life. If only a few people decide how AI is used and what for, many others will be dependent on the decisions of these few and risk being manipulated by them. The biggest danger in my view is that there will be greater pressure on all members of our societies to live according to what ‘the system’ tells us is ‘best for us’ to do and not to do, i.e., that we may lose the autonomy to decide for ourselves how we want to live our lives, to choose diverse ways of doing things. With more and more ‘recommendations,’ ‘rankings’ and competition through social pressure and control, we risk a loss of the individual fundamental freedoms (including but not limited to the right to a private life) that we have fought for in recent decades and centuries.”

Wendy Hall, professor of computer science at the University of Southampton, U.K., and executive director of the Web Science Institute, said, “By 2030 I believe that human-machine/AI collaboration will be empowering for human beings overall. Many jobs will have gone, but many new jobs will have been created and machines/AI should be helping us do things more effectively and efficiently both at home and at work. It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity. We may not have all the answers by 2030 but we need to be on the right track by then.”

Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale, previously deputy chief technology officer of the United States for President Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations, but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Ken Goldberg, distinguished chair in engineering, director of AUTOLAB and CITRIS “People and Robots” initiative, and founding member, Berkeley AI Research Lab, University of California – Berkeley, said, “As in the past 50+ years, AI will be combined with IA (intelligence augmentation) to enhance humans’ ability to work. One example might be an AI-based ‘Devil’s Advocate’ that would challenge my decisions with insightful questions (as long as I can turn it off periodically).”

Brad Templeton, chair for computing at Singularity University, software architect and former president of the Electronic Frontier Foundation, responded, “While obviously there will be good and bad, the broad history of automation technologies is positive, even when it comes to jobs; there is more employment today than ever in history.”

Martijn van Otterlo, author of “Gatekeeping Algorithms with Human Ethical Bias” and assistant professor of artificial intelligence at Tilburg University, Netherlands, wrote, “Even though I see many ethical issues, potential problems and especially power-imbalance/misuse issues with AI (not even getting into singularity issues and out-of-control AI), I do think that AI will change most lives for the better, especially looking at the short horizon of 2030 – even more so because even bad effects of AI can be considered predominantly ‘good’ by the majority of people. For example, the Cambridge Analytica case has shown us the huge privacy issues of modern social networks in a market economy, but, overall, I think people value the extraordinary services Facebook offers to improve our communication opportunities, our sharing capabilities and so on.”

William Dutton, Oxford Martin Fellow at the Global Cyber Security Capacity Centre and founding director of the Oxford Internet Institute, commented, “Advances in AI will be more incremental than suggested by current hype, but the societal impact more profound and overwhelmingly to the benefit of individuals and institutions that adapt to this change.”

Panagiotis T. Metaxas, author of “Technology, Propaganda and the Limits of Human Intellect” and professor of computer science at Wellesley College, responded, “The benefits of H-M/AI collaboration will be tremendous, in the form of less effort in physical and mental labor, production of goods, comfort in daily activities and improved health care. There will be a lot of wealth that AI-supported devices will be producing. The new technologies will make it easier and cheaper to produce food and entertainment massively (‘bread and circuses’). This wealth will not be distributed evenly, increasing the financial gap between the top small percentage of people and the rest. Even though this wealth will not be distributed evenly, the (relatively small) share given to the vast majority of people will be enough to improve their (2018) condition. In this respect, the majority of people will be ‘better off’ than they are today. They might not feel better off if they were aware of the inequalities compared to the top beneficiaries, but they will not be aware of them due to controlled propaganda. Unfortunately, there will not be much they could do about the increased inequalities. Technologies of police enforcement by robots and lack of private communication will make it impossible for them to organize, complain or push for change. They will not be valued as workers, citizens or soldiers. The desire for democracy as we know it today will be coming to an end. Many will feel depressed, but medical products will make it easy for them to increase pleasure and decrease pain.”

Steve Crocker, CEO and co-founder of Shinkuro Inc. and Internet Hall of Fame member, responded, “AI and human-machine interaction have been under vigorous development for the past 50 years. The advances have been enormous. The results are marbled through all of our products and systems. Graphics, speech and language understanding are now taken for granted. Encyclopedic knowledge is available at our fingertips. Instant communication with anyone, anywhere exists for about half the world at minimal cost. The effects on productivity, lifestyle and reduction of risks, both natural and man-made, have been extraordinary and will continue. As with any technology, there are opportunities for abuse, but the challenges for the next decade or so are not significantly different from the challenges mankind has faced in the past. Perhaps the largest existential threat has been the potential for nuclear holocaust. In comparison, the concerns about AI are significantly less.”

Theodore Gordon, futurist, management consultant and co-founder of the Millennium Project, responded, “There will be ups and downs, surely, but the net is, I believe, good. The most encouraging uses of AI will be in early warning of terror activities, incipient diseases and environmental threats and in improvements in decision making.”

Matt Mason, a roboticist and the former director of the Robotics Institute at Carnegie Mellon University, wrote, “AI will present new opportunities and capabilities to improve the human experience. While it is possible for a society to behave irrationally and choose to use it to its detriment, I see no reason to think that is the more likely outcome.”

Bob Frankston, software innovation pioneer and technologist based in North America, wrote, “It could go either way. AI could be a bureaucratic straitjacket and tool of surveillance. I’m betting that machine learning will be like the X-ray in giving us the ability to see new wholes and gain insights.”

Jay Sanders, president and CEO of the Global Telemedicine Group, responded, “AI will bring collective expertise to the decision point, and in health care bringing collective expertise to the bedside will save many lives now lost by individual medical errors.”

Geoff Arnold, CTO for the Verizon Smart Communities organization, said, “One of the most important trends over the next 12 years is the aging population and the high costs of providing them with care and mobility. AI will provide better data-driven diagnoses of medical and cognitive issues, and it will facilitate affordable AV-based paratransit for the less mobile. It will support, not replace, human caregivers.”

James Kadtke, expert on converging technologies at the Institute for National Strategic Studies, U.S. National Defense University, wrote, “Barring the deployment of a few different radically new technologies, such as general AI or commercial quantum computers, the internet and AI [between now and 2030] will proceed on an evolutionary trajectory. Expect internet access and sophistication to be considerably greater, but not radically different, and also expect that malicious actors using the internet will have greater sophistication and power. Whether we can control both these trends for positive outcomes is a public policy issue, in my opinion, more than a technological one.”

Craig Mathias, principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing, commented, “Many if not most of the large-scale technologies that we all depend upon – such as the internet itself, the power grid and roads and highways – will simply be unable to function in the future without AI, as both solution complexity and demand continue to increase.”

Jennifer J. Snow, an innovation officer with the U.S. Air Force, wrote, “AI will be a tool that will help improve medicine, public safety, education and the workforce, but as with all technologies, it is dual-use and will also have negative aspects and impacts. As with the internet, the overall end result will be positive. But there will be facets including weaponized information, cyber bullying, privacy issues and other potential abuses that will come out of this technology and will need to be addressed by global leaders.”

Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University’s Software Engineering Institute, said, “Better and worse outcomes will appear in a ratio of about 4:1, with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work (harvesting, cleaning) or expect to be well-paid for it (police, legal). Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health to warnings about impending heart/stroke events to automated health care for the under-served (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, un-engaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

John Markoff, fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University and author of “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” wrote, “I was completely torn by the first question. There are expected and unexpected consequences to ‘AI and related technologies.’ It is quite possible that improvements in living standards will be offset by the use of autonomous weapons in new kinds of war.”

Wendy Seltzer, strategy lead and counsel at the World Wide Web Consortium, commented, “I’m mildly optimistic that we will have devised better techno-social governance mechanisms, such that if AI is not improving the lives of humans, we will restrict its uses.”

Frank Tipler, a mathematical physicist at Tulane University, commented, “I do not expect human-level AI to be developed by 2030, but I do expect this to occur by the end of this century. Subhuman AI will merely improve per capita GDP.”

Michael M. Roberts, internet pioneer, first president and CEO of ICANN and Internet Hall of Fame member, responded, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff: more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

Daniel A. Menasce, professor of computer science, George Mason University, commented, “AI and related technologies, coupled with significant advances in computing power and decreasing costs, will allow specialists in a variety of disciplines to perform more efficiently and will allow non-specialists to use computer systems to augment their skills. Some examples include health delivery, smart cities and smart buildings. For these applications to become reality, easy-to-use, or better yet transparent, user interfaces will have to be developed.”

Martin Geddes, a consultant specializing in telecommunications strategies, said, “The unexpected impact of AI will be to automate many of our interactions with systems where we give consent, and to enable a wider range of outcomes to be negotiated without our involvement. This requires a new presentation layer for the augmented reality metaverse, with a new ‘browser’ – the Guardian Avatar – that helps to protect our identity and our interests.”

Lawrence Roberts, designer and manager of ARPANET, the precursor to the internet, and Internet Hall of Fame member, commented, “AI voice or text recognition with strong context understanding and response will allow vastly better access to websites, program documentation and voice-call answering, and all such interactions will greatly relieve user frustration with getting information. It will mostly provide service where little or no human support is available today, rather than replacing existing support – for example, in finding and/or using a new or unused function of the program or website one is using. Visual 3-D space-recognition AI will support better-than-human robot activity, including vehicles, security surveillance, health scans and much more.”

Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, wrote, “I see AI (and machine learning) as augmenting human cognition a la Douglas Engelbart. There will be abuses and bugs, some harmful, so we need to be thoughtful about how these technologies are implemented and used, but, on the whole, I see these as constructive.”

Christopher Yoo, a professor of law, communication and computer and information science at the University of Pennsylvania Law School, responded, “AI is good at carrying out tasks that follow repetitive patterns. In fact, AI is better at them than humans. Shifting these functions to machines will improve performance. It will also allow people to shift their efforts to higher-value-added and more-rewarding directions, an increasingly critical consideration in countries where population is declining. Research on human-computer interaction (HCI) also reveals that AI-driven pattern recognition will play a critical role in expanding humans’ ability to extend the benefits of computerization. HCI once held that our ability to gain the benefit from computers would be limited by the total amount of time people can spend sitting in front of a screen and inputting characters through a keyboard. The advent of AI-driven HCI will allow that to expand further and will reduce the amount of customization that people will have to program in by hand. At the same time, AI is merely a tool. All tools have their limits and can be misused. Even when humans are making the decisions instead of machines, blindly following the results of a protocol without exercising any judgment can have disastrous results. Future applications of AI will thus likely involve both humans and machines if they are to fulfill their potential.”

Angelique Hedberg, senior corporate strategy analyst at RTI International, said, “The most beneficial human-machine/AI collaborations will be the ones we don’t see or feel, because they will augment our lives in the ways we want and need the augmentation. The greatest advancements and achievements will be in health – physical, mental and environmental. The improvements will have positive trickle-down impacts on education, work, gender equality and reduced inequality. AI will redefine our understanding of health care, optimizing existing processes while simultaneously redefining how we answer questions about what it means to be healthy, bringing care earlier in the cycle due to advances in diagnostics and assessment – i.e., in the future, preventive care will identify and initiate treatment for illness before symptoms present. The advances will not be constrained to humans; they will include animals and the built environment. This will happen across the disease spectrum. Advanced ‘omics’ will empower better decisions. There will be a push and a pull by the market and individuals. This is a global story, with fragmented and discontinuous moves being played out over the next decade as we witness wildly different experiments in health across the globe. This future is full of hope for individuals and communities. My greatest hope is for individuals currently living with disabilities. I’m excited for communities and interpersonal connections, as the work in this future will allow for and increase the value of human-to-human experiences. Progress is often only seen in retrospect; I hope the speed of exponential change allows everyone to enjoy the benefits of these collaborations.”

Benjamin Kuipers, a professor of computer science at the University of Michigan, wrote, “We face several critical choices between positive and negative futures, and I choose to believe that we will muddle through, generally toward the positive ones. This could take significantly longer than the next 12 years. Advancing technology will provide vastly more resources; the key decision is whether those resources will be applied for the good of humanity as a whole or whether they will be increasingly held by a small elite. Advancing technology will vastly increase opportunities for communication and surveillance; the question is whether we will find ways to increase trust and the possibilities for productive cooperation among people, or whether individuals striving for power will try to dominate by decreasing trust and cooperation. In the medium term, increasing technology will provide more powerful tools for human, corporate or even robot actors in society. The actual problems will be about how members of a society interact with each other. In a positive scenario, we will interact with conversational AIs for many different purposes, and even when the AI belongs to a corporation we will be able to trust that it takes what in economics is called a ‘fiduciary’ stance toward each of us. That is, the information we provide must be used primarily for our individual benefit. Although we know, and are explicitly told, that our aggregated information is valuable to the corporation, we can trust that it will not be used for our manipulation or our disadvantage.”

Josh Calder, a partner at the Foresight Alliance, commented, “The best outcome will be if automation is used to support people doing the most-human work – creating and caring, for instance. This will likely require adjusting economic systems so that that kind of work is rewarded. The biggest danger is that workers are displaced on a mass scale, especially in emerging markets.”

Brock Hinzmann, a partner in the Business Futures Network who worked for 40 years as a futures researcher at SRI International, said, “Most of the improvements in the technologies we call AI will involve machine learning from big data to improve the efficiency of systems, which will improve the economy and wealth. It will improve emotion and intention recognition, augment human senses and improve overall satisfaction in human-computer interfaces. There will also be abuses in monitoring personal data and emotions and in controlling human behavior, which we need to recognize early and thwart. Intelligent machines will recognize patterns that lead to equipment failures or flaws in final products and be able to correct a condition or shut down and pinpoint the problem. Autonomous vehicles will be able to analyze data from other vehicles and sensors in the roads or on the people nearby to recognize changing conditions and avoid accidents. In education and training, AI learning systems will recognize learning preferences, styles and progress of individuals and help direct them toward a personally satisfying outcome. However, governments or religious organizations may use AI to monitor people’s emotions and activities, to direct them to ‘feel’ a certain way and to punish them if their emotional responses at work, in education or in public do not conform to some norm. Education could become indoctrination; democracy could become autocracy or theocracy.”

Seth Finkelstein, consulting programmer at Finkelstein Consulting, commented, “Imagine if back in the past we were asked if ‘advancing the engine and related technology systems will enhance human capacities and empower them?’ How many people nowadays would seriously deny the net-positive benefit of the internal combustion engine? But with these sorts of questions, an overall positive shouldn’t be used to deny there are winners and losers, which are profoundly shaped by politics. The simplistic phrasing tends to create an embedded rhetorical trap. An answer of ‘worse’ is reactionary. Yet replying ‘better’ leaves one open to a torrent of apologism from propagandists of plutocracy, who want to portray their favored social policies as technologically determined. AI depends on algorithms and data. Who gets to code the algorithms, and to challenge the results? Is the data owned as private property, and who can change it? As a very simple example, let’s take the topic of algorithmic recommendations for articles to read. Do they get tuned to produce suggestions which lead to more informative material – which, granted, is a relatively difficult task, and fraught with delicate determinations? Or are they optimized for ATTENTION! CLICKS! *OUTRAGE*!? To be sure, the latter is cheap and easy – and though it has its own share of political problems, they’re often more amenable to corporate management (i.e., what’s accurate vs. what’s unacceptable). There’s a whole structure of incentives that will push toward one outcome or the other.”

Rich Ling, a professor of media technology at Nanyang Technological University, Singapore, responded, “It will not be simply better off or worse off, but there will be domains that will be more positive and others that are less so. On the whole, however, the ability to address complex issues and to better respond to and facilitate the needs of people will be the dominant result of AI.”

Steven Miller, vice provost and professor of information systems at Singapore Management University, said, “It helps to have a sense of the history of technological change over the past few hundred years (even longer). Undoubtedly, new ways of using machines and new machine capabilities will be used to create economic activities and services that were either a) not previously possible, or b) previously too scarce and expensive, and now can be plentiful and inexpensive. This will create a lot of new activities and opportunities. At the same time, we know some existing tasks – and jobs with a high proportion of those tasks – will be increasingly automated. So we will simultaneously have both new opportunity creation as well as technological displacement. Even so, the long-term track record shows that human societies keep finding ways of creating more and more economically viable jobs. Cognitive automation will obviously enhance the realms of automation, but even with tremendous progress in this technology, there are and will continue to be limits. Humans have remarkable capabilities to deal with and adapt to change, so I do not see the ‘end of human work.’ The ways people and machines combine together will change – and there will be many new types of human-machine symbiosis. Those who understand this and learn to benefit from it will prosper.”

Ebenezer Baldwin Bowles, author, editor and journalist, responded, “If one values community and the primacy of face-to-face, eye-to-eye communication, then human-machine/AI collaboration in 2030 will have succeeded in greatly diminishing the visceral, primal aspects of humanity. Every expression of daily life, whether civil or professional or familial or personal, will be diminished by the iron grip of AI on the fundamental realities of interpersonal communications. Already the reliance on voice-to-text technology via smartphone interface diminishes the ability of people to write with skill and cogency. Taking the time to ring up another and chat requires too much psychic energy, so we ‘speak’ to one another in text-box fragments written down and oft altered by digital assistants. The dismissive but socially acceptable ‘TL;DR’ becomes commonplace as our collective attention span disintegrates. Yes, diagnostic medicine and assembly-line production and expanded educational curricula will surely be enhanced by cyber-based, one-and-zero technologies, but at what cost to humanity? Is it truly easier and safer to look into a screen and listen to an electronically delivered voice, far away on the other side of an unfathomable digital divide, instead of looking into another’s eyes, perhaps into a soul, and speaking kind words to one another, and perhaps singing in unison about the wonders of the universe? We call it ‘artificial intelligence’ for good reason.”

Batya Friedman, a human-computer interaction professor at the University of Washington Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is, our ability to use wisely and humanely the knowledge and tools that we develop. For example, we may develop robots that can care for the elderly in the sense of keeping elders physically safe, but such interactions will likely lack the care, compassion and mutual companionship that come with another person in the caregiving role. Automated warfare – in which autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life, or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Aneesh Aneesh, author of “Global Labor: Algocratic Modes of Organization” and professor at the University of Wisconsin–Milwaukee, responded, “There are two issues to be considered, economic and social – and no, the two are not one and the same. Economically, I don’t see how AI systems will aid in reducing inequality, which is one of the major problems today. If anything, they may exacerbate it. Just as automation left large groups of working people behind even as the United States got wealthier as a country, it is quite likely that AI systems will automate the service sector in a similar way. Unless the welfare state returns with a vengeance, it is difficult to see the increased aggregate wealth resulting in any meaningful gains for the bottom half of society. Socially, AI systems will automate tasks that currently require human negotiation and interaction. Unless people feel the pressure, institutionally or otherwise, to interact with each other, they – more often than not – choose not to interact. The lack of physical, embodied interaction is almost guaranteed to result in social loneliness and anomie, and associated problems such as suicide, a phenomenon already on the rise in the United States.”

Danny Gillane, a netizen from Lafayette, La., commented, “Technology promises so much but delivers so little. Facebook gave us the ability to stay in touch with everyone but sacrificed its integrity and our personal information in pursuit of the dollar. The promise that our medical records would be digitized and more easily shared and drive costs down still has not materialized on a global scale. The chief drivers of AI innovation and application will be for-profit companies who have shown that their altruism only extends to their bottom lines. Like most innovations, I expect AI to leave our poor even poorer and our rich even richer, increasing the numbers of the former while consolidating power and wealth in an ever-shrinking group of currently rich people.”

Alan Mutter, a longtime Silicon Valley CEO, cable TV executive and now a teacher of media economics and entrepreneurism at the University of California–Berkeley, said, “Although AI will accomplish many rote tasks efficiently, we will be a long way from the point that AI can be a substitute for human sensitivity, intuition and judgment. Yes, AI can apply Van Gogh’s style to a classic Rembrandt portrait, but that is no substitute for human creativity. The danger is that we will surrender thinking, exploring and experimentation to tools that hew to the rules but can’t color outside the lines. Would you like computers to select the president or decide if you need hip surgery?”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, was among those who are concerned that AI will not improve the lives of individuals. She commented, “The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort. The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI.”

Anita Salem, systems research and design principal at SalemSystems, wrote, “Human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in ‘humanness.’ For instance, AI in the medical field will aid more precise diagnosis, will increase surgical precision and will increase evidence-based analytics. If designed correctly, these systems will allow humans to do what they do best – provide empathy, use experience-based intuition and utilize touch and connection as a source of healing. If human needs are left out of the design process, we’ll see a world where humans are increasingly irrelevant and more easily manipulated. We could see increasing underemployment leading to larger wage gaps, greater poverty and homelessness and increasing political alienation. We’ll see fewer opportunities for meaningful work, which will result in increasing drug and mental health problems and the further erosion of the family support system. Without explicit efforts to humanize AI design, we’ll see a population that is needed for purchasing, but not creating. This population will need to be controlled, and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, cultural homogeneity through synchronized messaging, election systems optimized from big data and a geopolitical system dominated by corporations that have benefited from increasing efficiency and lower operating costs.”

Baratunde Thurston, futurist, former director of digital at The Onion and co-founder of comedy/technology start-up Cultivated Wit, said, “For the record, this is not the future I want, but it is what I expect given existing default settings in our economic and socio-political system preferences… The problems to which we are applying machine learning and AI are generally not ones that will lead to a ‘better’ life for most people. That’s why I say that in 2030, most people won’t be better off due to AI. We won’t be more autonomous; we will be more automated as we follow the metaphorical GPS line through daily interactions. We won’t choose our breakfast or our morning workouts or our route to work. An algorithm will make these choices for us in a way that maximizes efficiency (narrowly defined) and probably also maximizes the profitability of the service provider. By 2030, we may cram more activities and interactions into our days, but I don’t think that will make our lives ‘better.’ A better life, by my definition, is one in which we feel more valued and happy. Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled. To create a different future, I believe we must unleash these technologies toward goals beyond profit maximization. Imagine a mapping app that plotted your work commute through the most beautiful route, not simply the fastest. Imagine a communications app that facilitated deeper connections with people you deemed most important. These technologies must be more people-centric. We need them to ask us, ‘What is important to you? How would you like to spend your time?’ But that’s not the system we’re building. All those decisions have been hoarded by the unimaginative pursuit of profit.”

James Hendler, professor of computer, web and cognitive sciences and director of the Rensselaer Polytechnic Institute for Data Exploration and Application, wrote, “I believe 2030 will be a point in the middle of a turbulent time when AI is improving services for many people, but it will also be a time of great change in society based on changes in work patterns that are caused, to a great degree, by AI. On the one hand, for example, doctors will have access to information that is currently hard for them to retrieve rapidly, resulting in better medical care for those who have coverage, and indeed in some countries the first point of contact in a medical situation may be an AI, which will help with early diagnoses/prescriptions. On the other hand, over the course of a couple of generations, starting in the not-too-distant future, we will see major shifts in the workforce, with not just blue-collar jobs but also many white-collar jobs lost. Many of these will not be people ‘replaced’ by AIs, but rather the result of a smaller number of people being able to accomplish the same amount of work – for example, in professions such as law clerks, physician assistants and many other currently skilled positions, we would project a need for fewer people (even as demand grows).”

Kevin Gross, independent technology consultant, commented, “The gap between the haves and have-nots will not be resolved by 2030. The haves will benefit from AI in the form of cool gadgets and applications and the have-nots will be subjected to it in the form of excursions through personal information in linked databases.”

Helena Draganik, a professor at the University of Gdansk, Poland, responded, “AI will not change humans. It will change the relations between them because it can serve as an interpreter of communication. It will change our habits (as an intermediation technology). AI will be a great commodity. It will help in cases of health problems (diseases). It will also generate a great ‘data industry’ (big data) market, and a lack of anonymity and privacy. Humanity will more and more depend on energy/electricity. These factors will create new social, cultural, security and political problems.”

John Leslie King, a computer science professor at the University of Michigan, and a consultant on cyberinfrastructure for the NSF CISE and SBE directorates for several years, commented, “It’s hard to tell what will happen with AI. The future is hard to see. The hype about AI has been so strong many times. But if there are evil things to be done with AI, people will find out about them and do them. There will be an ongoing fight like the one between hackers and IT security people.”

Henning Schulzrinne, co-chair of the Internet Technical Committee of the IEEE Communications Society, professor at Columbia University and Internet Hall of Fame member, said, “Human-mediated education will become a luxury good. Some high school- and college-level teaching will be conducted partially by video and AI-graded assignments, using similar platforms to the MOOC models today, with no human involvement, to deal with increasing costs for education (‘robo-TA’).”

Eliot Lear, principal engineer at Cisco, said, “AI and tech will not leave most people better off than they are today. As always, technology outpaces our ability to understand its ramifications so as to properly govern its use. I have no reason to believe that we will have caught up by 2030.”

Mark Surman, executive director of the Mozilla Foundation and author of “Commonspace: Beyond Virtual Community,” responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”

Luke Stark, a fellow in the department of sociology at Dartmouth College and at the Berkman Klein Center for Internet & Society at Harvard University, wrote, “AI technologies run the risk of providing a comprehensive infrastructure for corporate and state surveillance more granular and all-encompassing than any previous such regime in human history.”

Tracey P. Lauriault, assistant professor of critical media and big data in the School of Journalism and Communication at Carleton University, commented, “The question leaves out the notion of regulatory and policy interventions to protect citizens from potentially harmful outcomes, or anything about AI auditing, oversight, transparency and accountability. Without some sort of principled, systems-based framework to ensure that AI remains ethical and in the public interest, in a stable fashion, I must assume that it will impede agency and could lead to AI decision-making that is harmful, biased, inaccurate and unable to change dynamically with changing values. There needs to be some sort of accountability.”

Sam Gregory, director of WITNESS and digital human rights activist, responded, “Here is a set of related concerns. Trends in AI suggest it will enable more individualized, personalized creation of synthetic media filter-bubbles around people, including use of deepfakes and related individualized synthetic audio and video, micro-targeting based on personal data and trends in using AI-generated and directed bots. These factors may be controlled by increasing legislation and platform supervision, but by 2030 there is little reason to think that most people’s individual autonomy and ability to push back to understand the world around them will have improved. Additionally, we should assume all AI systems for surveillance and population control and manipulation will be disproportionately used and inadequately controlled by authoritarian and non-democratic governments. These governments and democratic governments will continue to pressure platforms to use AI to monitor for content, and this monitoring, in and of itself, will contribute to the data set for personalization and for surveillance and manipulation. To fight back against this dark future we need to get the right combination of attention to legislation and platform self-governance right now, and think about media literacy to understand AI-generated synthetic media and targeting. We should also be cautious about how much we encourage the use of AI as a solution to managing content online and as a solution to, for example, managing hate speech.”

Sasha Costanza-Chock, associate professor of civic media at MIT, said, “Unfortunately it is most likely that AI will be deployed in ways that deepen existing structural inequality along lines of race, class, gender, ability and so on. A small portion of humanity will benefit greatly from AI, while the vast majority will experience AI through constraints on life chances. Although it’s possible for us to design AI systems to advance social justice, our current trajectory will reinforce historic and structural inequality.”

Peter Levine, associate dean for research and Lincoln Filene Professor of Citizenship & Public Affairs in Tufts University’s Jonathan Tisch College of Civic Life, wrote, “My work focuses on civic engagement and citizenship. Being a fully-fledged citizen has traditionally depended on work. I’m worried that rising levels of non-employment will detract from civic engagement. Also, AI is politically powerful and empowers the people and governments that own it. Thus, it may increase inequality and enhance authoritarianism.”

Susan Etlinger, an industry analyst for Altimeter Group and expert in data, analytics and digital strategy, commented, “In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity. AI technologies have the potential to do so much good in the world: Identify disease in people and populations; discover new medications and treatments; make daily tasks like driving simpler and safer; monitor and distribute energy more efficiently; and so many other things we haven’t yet imagined or been able to realize. And – like any tectonic shift – AI creates its own type of disruption. We’ve seen this with every major invention from the Gutenberg press to the semiconductor. But AI is different. Replication of some human capabilities using data and algorithms has ethical consequences. Algorithms aren’t neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and means to use them rest in the hands of a select few organizations, at least today.”

Michiel Leenaars, director of strategy at NLnet Foundation and director of the Internet Society’s Netherlands chapter, responded, “Achieving trust is not the real issue; achieving trustworthiness and real empowerment of the individual is. As the technology that to a large extent determines the informational self disappears – or in practical terms is placed out of local control, going ‘underground’ under the perfect pretext of needing networked AI – the balance between societal well-being and human potential on the one hand and corporate ethics and opportunistic business decisions on the other stands to be disrupted. Following the typical winner-takes-all scenario the internet is known to produce, I expect that different realms of the internet will become even less transparent and more manipulative. For the vast majority of people (especially in non-democracies) there already is little real choice but to move and push along with the masses.”

Joël Colloc, professor at Université du Havre Normandy University and author of “Ethics of Autonomous Information Systems: Towards an Artificial Thinking,” responded, “When AI supports human decisions as a decision-support system it can help humanity enhance life, health and well-being and supply improvements for humanity. See Marcus Flavius Quintilianus’s principles: Who is doing What, With What, Why, How, When, Where? Autonomous AI is a power that can be used by powerful persons to control people and put them in slavery. Applying the Quintilian principles to the role of AI… we should propose a code of ethics for AI to verify that each type of application is oriented toward the well-being of the user: 1) Do not harm the user; 2) Benefits go to the user; 3) Do not misuse her/his freedom, identity and personal data; 4) Decree as unfair any clauses alienating the user’s independence or weakening his/her rights of control over privacy in use of the application. The sovereignty of the user of the system must remain total.”

Devin Fidler, futurist and founder of Rethinkery Labs, commented, “If earlier industrialization is any guide, we may be moving into a period of intensified creative destruction as AI technologies become powerful enough to overturn the established institutions and the ordering systems of modern societies. If the holes punched in macro-scale organizational systems are not explicitly addressed and repaired, there will be increased pressures on everyday people as they face not only the problems of navigating an unfamiliar new technology landscape themselves, but also the systemic failure of institutions they rely on that have failed to adapt.”

David Golumbia, an associate professor of digital studies at Virginia Commonwealth University, wrote, “The question is not stated precisely enough to be very useful. ‘AI,’ in ordinary usage, does not name a clearly enough defined technology, or even a clear idea, to be useful. All tech has benefits and drawbacks. So as phrased, the answer has to be ‘yes and no and who knows?’”

Erik Huesca, president of the Knowledge and Digital Culture Foundation, based in Mexico City, said, “There is a concentration of places where specific AI is developed. It is a consequence of the capital investment that seeks to replace expensive professionals. Universities have to rethink what type of graduates to prepare, especially in areas of health, law and engineering, where the greatest impact is expected, since the labor displacement of doctors, engineers and lawyers is already a reality with the incipient systems developed so far.”

Brian Behlendorf, executive director of the Hyperledger project at the Linux Foundation and expert in blockchain technology, wrote, “I am concerned that AI will not be a democratizing power, but will further enhance the power and wealth of those who already hold it. This is because more data means better AI, and data is expensive to acquire, especially personal data, the most valuable kind. This is in contrast to networking technologies, whose benefits were shared fairly widely as the prices for components came down equally fast for everyone. One other reason: AI apps will be harder to debug than ordinary apps, and we already see hard-to-debug applications leading to disenfranchisement and deterioration of living standards. So, I do not take as a given that AI will enrich ‘most’ people’s lives over the next 12 years.”

Brian Harvey, lecturer on the social implications of computer technology at the University of California – Berkeley, said, “The question makes incorrect presuppositions, encapsulated in the word ‘we.’ There is no we; there are the owners and the workers. The owners (the 0.1%) will be better off because of AI. The workers (bottom 95%) will be worse off, as long as there are owners to own the AI, same as for any piece of technology.”

Dalsie Green Baniala, CEO and regulator for telecommunications for Vanuatu, wrote, “For small-island developing countries, where the topography is unique (small islands separated by big ocean/waters), it may take more than 10 years (15 or so) for people to realize AI’s benefits and accept it. This is due to contributing factors such as a lack of power generation and high levels of illiteracy. With the introduction of the Internet of Things, human senses are in decline. In addition, machine decisions often do not produce an accurate result; they do not meet expectations or specific needs. For example, applications are usually invented to target the developed-world market. They may not work appropriately for countries like ours – small islands separated by big waters.”

Christine Boese, digital strategies professional, commented, “I believe it is as William Gibson postulated: ‘The future is already here, it’s just not very evenly distributed.’ What I know from my work in user-experience design and in exposure to many different Fortune 500 IT departments working in big data and analytics is that the promise and potential of AI and machine learning are VASTLY overstated. There has been so little investment in basic infrastructure that entire chunks of our systems won’t even be interoperable. The AI and machine learning code will be there, in a pocket here, a pocket there, but system-wide, it is unlikely to be operating reliably as part of the background radiation against which many of us play and work online.”

Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University, commented, “While I do believe human-machine collaboration will bring many benefits to society over time, I fear that we will not have made enough progress by 2030 to ensure that benefits will be spread evenly or to protect against downside risks, especially as they relate to bias, discrimination and loss of accountability.”

Adam Nelson, a software developer for one of the “big five” global technology companies, said, “Human-machine/AI collaboration will be extremely powerful, but humans will still control intent. If human governance isn’t improved, AI will merely make the world more efficient. But the goals won’t be human welfare. They’ll be wealth aggregation for those in power.”

Betsy Williams, a researcher at the Center for Digital Society and Data Studies at the University of Arizona, wrote, “AI’s benefits will be unequally distributed across society. Few will reap meaningful benefits. Large entities will use AI to deliver marginal improvements in service to their clients, at the cost of requiring more data and risking errors. Employment trends from computerization will continue. AI will threaten medium-skill jobs. Instead of relying on human expertise and context knowledge, many tasks will be handled directly by clients using AI interfaces or by lower-skilled people in service jobs, boosted by AI. AI will harm some consumers. For instance, rich consumers will benefit from self-driving cars, while others must pay to retrofit existing cars to become more visible to the AI. Through legal maneuvering, self-driving car companies will avoid many insurance costs and risks, shifting them to human drivers, pedestrians and bicyclists. In education, creating high-quality automated instruction requires expertise and money. Research on American K-12 classrooms suggests that typical computer-aided instruction yields better test scores than instruction by the worst teachers. By 2030, most AI used in education will be of middling quality (for some, their best alternative). The children of the rich and powerful will not have AI used on them at school; instead, they will be taught to use it. For AI to significantly benefit the majority, it must be deployed in emergency health care (where quicker lab work, reviews of medical histories or potential diagnoses can save lives) or in aid work (say, to coordinate shipping of expiring food or medicines from donors to recipients in need).”

Mai Sugimoto, an associate professor of sociology at Kansai University, Japan, responded, “AI could amplify one’s bias and prejudice. We have to make data unbiased before putting it into AI, but it’s not very easy.”

Alper Dincel of T.C. Istanbul Kultur University, Turkey, wrote, “I believe personal connections will continue to decline, as they are in today’s world. We are going to have more interest in fiction than in reality. These issues will affect human brain development as a result. Also, unqualified people won’t find jobs as machines and programs take over easy work in the near future. Machines will also solve performance problems. There is no bright future for most people if we don’t start trying to find solutions.”

Andrew Whinston, computer science professor and director of the Center for Research in Electronic Commerce, University of Texas–Austin, said, “There are several issues. First, security problems do not get the attention needed. Secondly, there may be use of the technology to control the population – as we see developing in China. AI methodology is focused on prediction, at least so far, so methods to improve health or general welfare are lacking. Deep learning, which is getting the big hype, does not have a clear foundation. That makes it scientifically weak.”

Wout de Natris, an internet cybercrime and security consultant based in Rotterdam, Netherlands, wrote, “Hope: advancement in health care, education, decision-making, availability of information, higher standards in ICT-security, global cooperation on these issues, etc. Fear: Huge segments of society, especially the middle classes who carry society in most ways, e.g., through taxes, savings and purchases, will be rendered jobless through endless economic cuts by industry, followed by governments due to lower tax income. Hence all of society suffers. Can governments and industry refrain from an overkill of surveillance? Otherwise, privacy values will keep declining, leading to a lower quality of life.”

Dan Geer, a respondent who provided no identifying details, commented, “If you believe, as do I, that having a purpose to one’s life is all that enables both pride and happiness, then the question becomes whether AI will or will not diminish purpose. For the irreligious, AI will demolish purpose, yet if AI is truly intelligent, then AI will make serving it the masses’ purpose. Ergo…”

Andrian Kreye, a journalist and documentary filmmaker based in Germany, said, “If humanity is willing to learn from its mistakes with low-level AIs like social media algorithms, there might be a chance for AI to become an engine for equality and progress. But since most digital development is driven by venture capital, experience suggests that automation and abuse will be the norm.”

Olivia Coombe, a respondent who provided no identifying details, wrote, “Children learn from their parents. As AI systems become more complex and are given increasingly important roles in the functioning of day-to-day life, we should ask ourselves what we are teaching our artificial digital children. If we conceive and raise them in a world of individual self-interest, will they just strengthen these existing, and often oppressive, systems of capitalist competition? Or could they go their own way, aspiring to a life of entrepreneurship or collaboration? Worse yet, will they see the reverence we hold for empires and seek to build their own through conquest?”

Daniel Berninger, an internet pioneer who led the first VoIP deployments at Verizon, HP and NASA, currently founder at VCXC – Voice Communication Exchange Committee, said, “The luminaries claiming artificial intelligence will surpass human intelligence and promoting robot reverence imagine exponentially improving computation pushes machine self-actualization from science fiction into reality. The immense valuations awarded Google, Facebook, Amazon, Tesla, et al., rely on this machine-dominance hype to sell infinite scaling. As with all hype, pretending reality does not exist does not make reality go away. Moore’s Law does not concede the future to machines, because human domination of the planet does not owe to computation. Any road map granting machines self-determination includes ‘miracle’ as one of the steps. You cannot turn a piece of wood into a real boy. AI merely ‘models’ human activity. No amount of improvement in the development of these models turns the ‘model’ into the ‘thing.’ Robot reverence attempts plausibility by collapsing the breadth of human potential and capacities. It operates via ‘denialism,’ with advocates disavowing the importance of anything they cannot model. In particular, super AI requires pretending human will and consciousness do not exist. Human beings remain the source of all intent and the judge of all outcomes. Machines provide mere facilitation and mere efficiency in the journey from intent to outcome. The dehumanizing nature of automation and the diseconomy of scale of human intelligence are already causing headaches that reveal another AI Winter arriving well before 2030.”

Justin Reich, executive director of MIT Teaching Systems Lab and research scientist in the MIT Office of Digital Learning, responded, “Systems for human-AI collaborations will be built by powerful, affluent people to solve the problems of powerful, affluent people. In the hands of autocratic leaders, AI will become a powerful tool of surveillance and control. In capitalist economies, human-AI collaboration will be deployed to find new, powerful ways of surveilling and controlling workers for the benefit of more affluent consumers.”

Benjamin Shestakofsky, an assistant professor of sociology at the University of Pennsylvania specializing in digital technology’s impacts on work, said, “It is difficult to make general statements about whether and how humans will be better or worse off as AI advances. There is nothing inevitable about the social implications of the future of AI. The answer to this question will depend on choices made by citizens, workers, organizational leaders and legislators across a broad range of social domains. For example, algorithmic hiring systems can be programmed to prioritize efficient outcomes for organizations or fair outcomes for workers. The profits produced by technological advancement can be broadly shared or can be captured by the shareholders of a small number of high-tech firms. Policymakers should act to ensure that citizens have access to knowledge about the effects of AI systems that affect their life chances and a voice in algorithmic governance.”

Dave Gusto, professor of political science and co-director of the Consortium for Science, Policy and Outcomes at Arizona State University, said, “The question asked about ‘most people.’ Most people in the world live a life that is not well regarded by technology, technology developers and AI. I don’t see that changing much in the next dozen years.”

Chris Newman, principal engineer at Oracle, commented, “As it becomes more difficult for humans to understand how AI/tech works, it will become harder to resolve inevitable problems. A better outcome is possible with a hard push by engineers and consumers toward elegance and simplicity (e.g., Steve Jobs-era Apple).”

David Bray, executive director for the People-Centered Internet Coalition, commented, “Hope: Human-machine/AI collaborations extend the abilities of humans while we (humans) intentionally strive to preserve values of respect, dignity and agency of choice for individuals. Machines bring together different groups of people and communities and help us work and live together by reflecting on our own biases and helping us come to understand the plurality of different perspectives of others. Big concern: Human-machine/AI collaborations turn out to not benefit everyone, only a few, and result in a form of ‘indentured servitude’ or ‘neo-feudalism’ that is not people-centered and not uplifting of people. Machines amplify existing confirmation biases and other human characteristics, resulting in sensationalist, emotion-ridden news and other communications that get page views and ad-clicks yet lack nuance of understanding, resulting in tribalism and a devolution of open societies and pluralities to the detriment of the global human condition.”

Annalie Killian, futurist and vice president for strategic partnerships at Sparks & Honey, New York, commented, “More technology does not make us more human; we have evidence of that now, within 10 years of combining the smartphone device with persuasive and addictive designs that shape and hijack behaviour. Technologists who are using emotional analytics, image modification technologies and other hacks of our senses are destroying the fragile fabric of trust and truth that is holding our society together at a rate much faster than we are adapting and compensating – let alone comprehending what is happening. The sophisticated tech is affordable and investible in the hands of very few people who are enriching themselves and growing their power exponentially, and these actors are NOT acting in the best interest of all people.”

Jenni Mechem, a respondent who provided no identifying details, said, “My two primary reasons for saying that advances in AI will not benefit most people by 2030 are, first, that there will continue to be tremendous inequities in who benefits from these advances, and second, that if the development of AI is controlled by for-profit entities there will be tremendous hidden costs and people will yield control over vast areas of their lives without realizing it. One area of benefit will be prosthetic limbs and assistive technology for people with disabilities and ways of enhancing human ability. These are welcome, but I see little to no way that these will be distributed equitably in the next 12 years. The examples of Facebook as a faux community commons bent on extracting data from its users and of pervasive internet censoring in China should teach us that neither for-profit corporations nor government can be trusted to guide technology in a manner that truly benefits everyone. Democratic governments that enforce intelligent regulations as the European Union has done on privacy may offer the best hope.”

Cristobal Young, an associate professor of sociology at Cornell University specializing in economic sociology and stratification, commented, “I mostly base my response [that tech will not leave most people better off than they are today] on Twitter and other online media, which were initially praised as ‘liberation technology.’ It is clear that the internet has devastated professional journalism, filled the public sphere with trash that no one believes and degraded civil discourse. This isn’t about robots, but rather about how humans use the internet. Donald Trump himself says that without Twitter, he could never have been elected, and Twitter continues to be his platform for polarization, insult and attacks on the institutions of accountability.”

Estee Beck, assistant professor at the University of Texas and author of “A Theory of Persuasive Computer Algorithms for Rhetorical Code Studies,” responded, “Tech design and policy affect our privacy in the United States so much so that most people do not think about the tracking of movements, behaviors and attitudes from smartphones, social media, search engines, ISPs and even Internet of Things-enabled devices. Until tech designers and engineers build privacy into each design and policy decision for consumers, any advances with human-machine/AI collaboration will leave consumers with less security and privacy.”

Adam Popescu, a writer who contributes frequently to the New York Times, Washington Post, Bloomberg Businessweek, Vanity Fair and the BBC, wrote, “We put too much naive hope in everything tech being the savior. Let’s take journalism. An AI smart learning algorithm isn’t what we need to select stories or, god forbid, to write anything with meaning or emotion. There are too many intangibles for the arts ever to be co-opted by anything farted out of a computer.”

David A. Banks, an associate research analyst with the Social Science Research Council, said, “AI will be very useful to a small professional class but will be used to monitor and control everyone else.”

Bernie Hogan, senior research fellow at Oxford Internet Institute, wrote, “The current political and economic climate suggests that existing technology, especially machine learning, will be used to create better decisions for those in power while creating an ever more tedious morass of bureaucracy for the rest. We see few examples of successful bottom-up technology, open-source technology and hacktivism relative to the encroaching surveillance state and attention economy.”

Alexey Turchin, existential risks researcher and futurist, responded, “There are significant risks of AI misuse before 2030, in the form of swarms of AI-empowered drones or even non-aligned human-level AI.”

David Brake, senior lecturer in communications at the University of Bedfordshire, U.K., said, “Like many colleagues I fear that AI will be framed as ‘neutral’ and ‘objective’ and thereby used as cover to make decisions that would be considered unfair if made by a human. If we do not act to properly regulate the use of AI we will not be able to interrogate the ways that AI decision-making is constructed or audit them to ensure their decisions are indeed fair. Decisions may also be made (even more than today) based on a vast array of collected data and if we are not careful we will be unable to control the flows of information about us used to make those decisions or to correct misunderstandings or errors which can follow us around indefinitely. Imagine being subject to repeated document checks as you travel around the country because you know a number of people who are undocumented immigrants and your movements therefore fit the profile of an illegal immigrant. And you are not sure whether to protest because you don’t know whether such protests could encourage an algorithm to put you into a ‘suspicious’ category which could get you harassed even more often….”

Gerry Ellis, founder and digital usability and accessibility consultant at Feel The BenefIT, responded, “Technology has always been far more quickly developed and adopted in the richer parts of the world than in the poorer regions where new technology is generally not affordable. AI cannot be taken as a stand-alone technology but in conjunction with other converging technologies like augmented reality, robotics, virtual reality, the Internet of Things, big data analysis, etc. It is estimated that around 80% of jobs that will be done in 2030 do not exist yet. One of the reasons why unskilled and particularly repetitive jobs migrate to poor countries is cheap labour costs, but AI combined with robotics will begin to do many of these jobs. For all of these reasons combined, the large proportion of the earth’s population that lives in the under-developed and developing world is likely to be left behind by technological developments. Unless the needs of people with disabilities are taken into account when designing AI-related technologies, the same is true for them (or I should say ‘us,’ as I am blind).”

Jennifer King, director of privacy at Stanford Law School’s Center for Internet and Society, said, “Unless we see a real effort to capture the power of AI for the public good, I do not see an overarching public benefit by 2030. The shift of AI research to the private sector means that AI will be developed to further consumption, rather than extend knowledge and public benefit.”

Jason Abbott, professor and director at the Center for Asian Democracy, University of Louisville, said, “AI is likely to create significant challenges to the labor force as previously skilled (semi-skilled) jobs are replaced by AI – everything from AI in trucks and distribution to airlines, logistics and even medical records and diagnoses.”

Kenneth R. Fleischmann, an associate professor at the University of Texas – Austin School of Information, responded, “In corporate settings, I worry that AI will be used to replace human workers to a disproportionate extent, such that the net economic benefit of AI is positive, but that economic benefit is not distributed equally among individuals, with a smaller number of wealthy individuals worldwide prospering, and a larger number of less wealthy individuals worldwide suffering from fewer opportunities for gainful employment.”

Hume Winzar, associate professor and director of the business analytics undergraduate program at Macquarie University, Sydney, Australia, wrote, “In the Western world, most people could be better off with more AI, but in the rest of the world we are more likely to see increased distance between those with opportunity and education and those without. In the Western world again, the extent to which we will be better off will depend on issues of privacy and the risks of a surveillance state.”

João Pedro Taveira, embedded systems researcher and smart grids architect for INOV INESC Inovação, Portugal, wrote, “Basically, we will lose several degrees of freedom. Are we ready for that? When we wake up to what is happening it might be too late to do anything about it. Artificial intelligence is a subject that must be studied philosophically, in open-minded, abstract and hypothetical ways. Using this perspective, the issues to be solved by humans include (but are not limited to) AI, feelings, values, motivation, free will, solidarity, love and hate. Yes, we will have serious problems. Dropping the ‘artificial’ off AI, look at the concept of intelligence. As a computer-science person, I know that so-called ‘AI’ studies how an agent (a software program) increases its knowledge base using rules defined by pattern-recognition mechanisms. No matter which mechanisms are used to generate this rule set, the result will always be behavioral profiling. Right now, everybody uses and agrees to use a wide set of appliances, services and products without a full understanding of the information that is being shared with enterprises, companies and other parties. There’s a lack of needed regulation and audit mechanisms on who or what uses our information, how it is used and whether it is stored for future use. Governments and others will try to access this information using these tools by decree, arguing national security or administration efficiency improvements. Enterprises and companies might argue that these tools offer improvement of quality of service, but there’s no guarantee about individuals’ privacy, anonymity, individual security, intractability and so on.”

Joshua Loftus, assistant professor of information, operations and management sciences at New York University and co-author of “Counterfactual Fairness in Machine Learning,” commented, “It’s just another technology. How have new technologies shaped our lives in the past? It depends on the law, market structure and who wields political power. In the present era of extreme inequality and climate catastrophe, I expect technologies to be used by employers to make individual workers more isolated and contingent, by apps to make users more addicted on a second-by-second basis, and by governments for surveillance and increasingly strict border control.”

Joseph Potvin, executive director at the Xalgorithms Foundation – creating specifications and components for an “Internet of Rules” – responded, “I responded that ‘In 2030, advancing AI and tech will not leave most people better off than they are today’ only because the options did not include an ambiguous response, which is how I’d have preferred to answer. AI and tech have the emergent outcomes shaped by choices made along the way. The reason I answered in the negative is because a positive outcome can neither be assumed nor expected. To obtain a positive outcome for people and societies is a design challenge. It’s always easier to botch something than to create an elegant result. I’d have answered: ‘It is possible, but cannot be taken for granted, that by 2030 advancing AI and tech will leave most people better off than they are today.’ So I’m a conditionally optimistic active agent, cognizant of the challenges: Alvesson, M., & Spicer, A. (2012). A Stupidity-Based Theory of Organizations. Journal of Management Studies, 49(7), 1194–1220. http://doi.org/10.1111/j.1467-6486.2012.01072.x.”

Luis German Rodriguez Leal, teacher and researcher at the Universidad Central de Venezuela and consultant on technology for development, said, “Humankind is not properly addressing the issue of educating people about the possibilities and risks of human-machine/AI collaboration. One can observe today the growing problems of ill-intentioned manipulation of information and technological resources. There are already plenty of examples of how decision-making is biased using big data, machine learning, privacy violations and social networks (just to mention a few elements), and one can see that the common citizen is unaware of how much of his/her will does not belong to him/her. This fact has a meaningful impact on our social, political, economic and private life. We are not doing enough to attend to this issue, and it is getting very late.”

Llewellyn Kriel, CEO of TopEditor International, a media services company based in Johannesburg, South Africa, wrote, “Current developments do not augur well for the fair growth of AI. Vast swaths of the population simply do not have the intellectual capacity or level of sophistication to understand 1) the technology itself and 2) the implications of its safe use. This entrenches and widens the digital divide in places like Africa. The socio-political implications of this breed deep primitive superstition, racial hatred toward whites and Asians who are seen as techno-colonialists and the growth of kleptocracies amid the current mushrooming of corruption.”

Manoj Kumar, manager at Mitsui Orient Lines, responded, “The advancement in AI technologies will replace human thinking and the human element. The choices and decisions will be more controlled. A previous example is advancements in food cloning and engineering – although they have met the goal of providing for basic needs, the degradation in food quality and rise in control are more than evident.”

John Sniadowski, a director for a technology company, wrote, “As technology is currently instantiated it simply concentrates power into a smaller number of international corporations. That needs fixing for everyone to gain the best from AI.”

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Whether or not AI will improve society or harm it by 2030 will depend on the structures governing societies of the era. Broadly democratic societies with an emphasis on human rights might encourage regulations that push AI in directions that help all sectors of the nation. Authoritarian societies will, by contrast, set agendas for AI that further divide the elite from the rest and use technology to cultivate and reinforce the divisions. We see both tendencies today; the dystopian one has the upper hand especially in places with the largest populations. It is critical that people who care about future generations speak out when authoritarian tendencies of AI appear.”

John Laudun, a respondent who provided no identifying details, commented, “I chose the darker answer because I fear that the underlying promise of the internet, to make the distribution of knowledge possible (and thus distribute creativity), is being undermined in the current moment by the way that various forms of ML and AI are being implemented – that is, most implementations, and thus most development, is in the hands of very powerful organizations (some states, some corporations).”

Jonathan Swerdloff, consultant and data systems specialist for Driven Inc., wrote, “The more reliant on AI we become, the more we are at the mercy of its developers. While AI has the ability to augment professionals and to make decisions, I have three concerns which make me believe it will not leave us better off by 2030. This does not address fears that anything run via AI could be hacked and changed by bad-faith third parties. 1) Until any sort of self-policed AI sentience is achieved, it will suffer from a significant GIGO [garbage-in, garbage-out] problem. As AI as currently conceived only knows what it is taught, the seed sets for teaching must be thought out in detail before the tools are deployed. Based on the experience with Microsoft’s Tay and some responses I’ve heard from the Sophia robot, I am concerned that AI will magnify humanity’s flaws. 2) Disparate access. Unless the cost of developing AI drops precipitously – and it may, since one AI tool could be leveraged into building further less expensive AI tools – access to whatever advantages the tools will bring will likely be clustered among a few beneficiaries. I view this as akin to high-frequency trading on Wall Street. Those who can, do. Those who can’t, lose. 3) Tool of control. If AI is deployed to make civic or corporate decisions, those who control the algorithms control everything. In the U.S. we’ve recently seen Immigration and Customs Enforcement change its bond algorithm to always detain in every case.”
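
Swerdloff’s GIGO concern can be made concrete with a small illustration. The sketch below is a minimal, hypothetical example (synthetic data and invented variable names, assuming NumPy and scikit-learn are available): a classifier is trained on a “seed set” of past decisions that penalized one group, and it faithfully reproduces that bias for new, otherwise-identical cases.

```python
# A minimal garbage-in, garbage-out sketch: a model trained on biased
# historical decisions reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: one legitimate signal and one group attribute.
skill = rng.normal(size=n)          # what decisions *should* be based on
group = rng.integers(0, 2, size=n)  # 0 or 1; should be irrelevant

# Historical "seed set" labels: skill matters, but past decision-makers
# also penalized group 1 -- the garbage going in.
past_decisions = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, past_decisions)

# Audit: two applicants with identical skill, different group membership.
same_skill = [[0.0, 0], [0.0, 1]]
print(model.predict_proba(same_skill)[:, 1])
# The group-1 applicant scores markedly lower despite identical skill --
# the garbage coming out.
```

Nothing in this pipeline is malicious; the bias enters entirely through the training labels, which is exactly why the seed sets Swerdloff describes have to be scrutinized before deployment.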

Jerry Michalski, founder of the Relationship Economy eXpedition, said, “We’re far from tipping into a better social contract. In a more-just world, AI could bring about utopias. However, many forces are shoving us in the opposite direction. 1) Businesses are doing all they can to eliminate full-time employees, who get sick and cranky, need retirement accounts and raises, while software gets better and cheaper. The Precariat will grow. 2) Software is like a flesh-eating bacterium: tasks it eats vanish from the employment landscape. Unlike previous technological jumps, this one unemploys people more quickly than we can retrain and reemploy them. 3) Our safety net is terrible and our beliefs about human motivations suck. 4) Consumerism still drives desires and expectations.”

Jeff Johnson, computer science professor at the University of San Francisco, previously with Xerox, HP Labs and Sun Microsystems, responded, “It’s a multivariate problem, with AI just one factor – albeit a slightly negative one. Whether most people will be better off in 2030 depends on many factors, only one of which is advances in AI. I believe advances in AI will leave many more people without jobs, which will increase the socio-economic differences in society, but other factors could help mitigate this, e.g., adoption of guaranteed income.”

Ian Peter, pioneer internet activist and internet rights advocate, said, “Personal data accumulation is reaching a point where privacy and freedom from unwarranted surveillance are disappearing. In addition, the algorithms that control usage of such data are becoming more and more complex, leading to inevitable distortions. Henry Kissinger may not have been far off the mark when he described artificial intelligence as leading to ‘The End of the Age of Enlightenment.’”

Leonardo Trujillo, a research professor in computing sciences at the Instituto Tecnológico de Tijuana, Mexico, responded, “The AI community is overstating the reach of these technologies. There are far-greater issues of importance to people’s lives, such as access to water, land and food. Given the inevitable ecological crisis that is looming, AI will have but a marginal effect for most people in the world, particularly since population growth is happening in underdeveloped countries.”

Karen Oates, director of workforce development and financial stability for La Casa de Esperanza, commented, “Ongoing increases in the use of AI will not benefit the working poor and low-to-middle-income people. Having worked with these populations for 10 years, I’ve already observed many of these people losing employment when robots and self-operating forklifts are implemented. Although there are opportunities to program and maintain these machines, realistically, people who have the requisite knowledge and education will fill those roles. The majority of employers will be unwilling to invest the resources to train employees unless there is an economic incentive from the government to do so. Many lower-wage workers won’t have the confidence to return to school to develop new knowledge and skills when they were unsuccessful in the past. As the use of AI increases, low-wage workers will lose the small niche they hold in our economy.”

Steven Thompson, an author specializing in illuminating emerging issues and editor of “Androids, Cyborgs, and Robots in Contemporary Culture and Society,” wrote, “The keyword from the query is ‘dependence’ and I published pioneering quantitative research on internet addiction and dependency in 1996, and followed up 15 years later with a related, updated research talk on the future of AI and internet dependency at a UNESCO-sponsored conference on information literacy in Morocco. My expertise is in ethical and technological issues related to moving the internet appliance into the human body. I have edited two reference books to this effect: one on global concerns and ethics in human enhancement technologies, and one on cyborgs and robots in contemporary culture and society. You can find all of the research items referenced such as ‘Endless Empowerment and Existence: From Virtual Literacy to Online Permanence in Presence’ and other work of mine topically addressing the future of the internet, since the early- to mid-1990s. Suffice it to say the internet is moving into the human body, and in that process, societal statuses are altered, privileging some while abandoning others in the name of emerging technologies, and the global order is restructuring to the same effect. Think of net neutrality issues gone wild, corporately and humanly sustained with the privileges such creation and maintenance affords some members of society. Now think of the liberty issues arising from those persons who are digital outcasts, and wish to not be on the grid, yet will be forced to do so by society and even government edicts.”

Grace Mutung’u, co-leader of the Kenya ICT Action Network, responded, “New technologies will more likely increase current inequalities unless there is a shift in world economics. From the experience of the UN work on Millennium Development Goals, while there has been improvement with the quality of life generally, low- and middle-income countries still suffer disparate inequalities. This will likely lead to governance problems. In any case, governments in these countries are investing heavily in surveillance which will likely have more negative effects on society.”

Robert M. Mason, a professor emeritus in the Information School at the University of Washington, responded, “Technologies, including AI, leverage human efforts. People find ways to apply technologies to enhance the human spirit and the human experience, yet others can use technologies to exploit human fears and satisfy personal greed. As the late Fred Robbins, Nobel Laureate in Physiology/Medicine, observed (my paraphrase when I asked why he was pessimistic about the future of mankind): ‘Of course I’m pessimistic. Humans have had millions of years to develop physically and mentally, but we’ve had only a few thousand years – as the world population has expanded – to develop the social skills that would allow us to live close together.’ I understand his pessimism, and it takes only a few people to use AI (or any technology) in ways that result in widespread negative societal impacts.”

Wangari Kabiru, author of the MitandaoAfrika blog, based in Nairobi, commented, “In 2030, advancing AI and tech will not leave most people better off than they are today because our global digital mission is not strong enough and not principled enough to assure that ‘no, not one is left behind’ – perhaps intentionally. The immense positive-impact potential for enabling people to achieve more in nearly every area of life – the full benefits of human-machine/AI collaboration – can only be experienced when academia, civil society and other institutions are vibrant, enterprise is human-values-based, and governments and national constitutions and global agreements place humanity first. Of particular note is education, and specifically digital literacy, which for African nations and globally MUST shift from beyond the classroom basics into the entire community ecosystem. Why? Because there is a shift in the future of LIFE! In addition to the elite engineering innovation spaces, there is need for discovery spaces that excite people and allow them to interact, innovate and invent for themselves. Engineering should serve humanity, and never should humanity be made to serve the exploits of engineering. More people MUST be creators of the future of LIFE – the future of how they live, the future of how they work, the future of how their relationships interact and overall how they experience life. Beyond the co-existence of human and machine, this creates synergy.”

Michael Veale, co-author of “Fairness and Accountability Designs Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making” and a technology policy researcher at University College London, responded, “AI technologies will turn out to be more narrowly applicable than some hope. There will be a range of small tasks that will be more effectively automated. Whether these tasks leave individuals with increased ability to find meaning or support in life is debatable. Freed from some aspects of housework and administration, some individuals may feel empowered whereas others might feel aimless. Independent living for the elderly might be technologically mediated, but will it have the social connections and community that makes life worth living? Jobs too will change in nature, but it is not clear that new tasks will make people happy. It is important that all technologies and applications are backed up with social policies and systems to support meaning and connection, or else even effective AI tools might be isolating and even damaging on aggregate.”

Mechthild Schmidt Feist, department coordinator for digital communications and media at New York University, said, “Historical precedent shows that inventions are just as powerful in the hands of criminals or irresponsible or uninformed people. The more powerful our communication, the more destructive it could be. We would need global, enforceable legislation to limit misuse. 1) That is highly unlikely. 2) It is hard to predict all misuses. My negative view is due to our inability to make responsible use of our current online communication and media models. The utopian freedom has become a dystopian battleground.”

Rob Frieden, professor and Pioneers Chair in Telecommunications and Law at Penn State University, said, “Any intelligent system depends on the code written to support it. If the code is flawed, the end product reflects those flaws. An old-school acronym spells this out: GIGO, Garbage In, Garbage Out. I have little confidence that AI can incorporate any and every real world scenario, even with likely developments in machine learning. As AI expands in scope and reach, defects will have ever increasing impacts, largely on the negative side of the ledger.”

Zoetanya Sujon, a senior lecturer specializing in digital culture at University of Arts London, commented, “As the history of so many technologies shows us, AI will not be the magic solution to the world’s problems or to symbolic and economic inequalities. Instead, AI is most benefitting those with the most power. Already, many studies identify AI as learning human biases, particularly against those who are most vulnerable and disempowered.”

Simeon Yates, director of the Centre for Digital Humanities and Social Science at the University of Liverpool, said, “AI will simply increase existing inequalities – it, like the internet, will fail in its emancipatory promise.”

Timothy Graham, a postdoctoral research fellow in sociology and computer science at Australian National University, commented, “I err on the side of caution and critique when it comes to sketching a vision of the human-machine/AI collaboration in 2030. Let us take for example the use of machine learning (ML) in areas of criminal justice and health. There is already an explosion of research into ‘fairness and representation’ in ML (and conferences such as Fairness, Accountability and Transparency in Machine Learning), as it is difficult to engineer systems that do not simply reproduce existing social inequality, disadvantage and prejudice. Deploying such systems uncritically will only result in an aggregately worse situation for many individuals, whilst a comparatively small number benefit. Similarly, in health care, we see current systems already under heavy criticism (e.g., the My Health Record system in Australia, or the NHS Digital program), because they are nudging citizens into using the system through an ‘opt-out’ mechanism and there are concerns that those who do not opt out may be profiled, targeted and/or denied access to services based on their own data. This is not to say that technology/AI in 2030 won’t be beneficial.”
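
The fairness-and-representation work Graham refers to often starts from very simple audits. The sketch below is an illustrative toy, not any particular system’s method (plain Python, invented numbers; the 0.8 threshold is the informal “four-fifths” screening heuristic, not a legal standard): it compares a model’s selection rates across two groups to flag a possible demographic-parity problem.

```python
# Toy fairness audit: compare selection rates across two groups
# (demographic parity). All decisions below are invented.

def selection_rate(decisions):
    """Fraction of cases receiving the favorable outcome (1)."""
    return sum(decisions) / len(decisions)

def parity_ratio(decisions_a, decisions_b):
    """Lower selection rate divided by the higher one; 1.0 means parity."""
    lo, hi = sorted([selection_rate(decisions_a), selection_rate(decisions_b)])
    return lo / hi

# Hypothetical model outputs (1 = approved) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")
if ratio < 0.8:  # informal "four-fifths" screening heuristic
    print("flag: selection rates differ enough to warrant review")
```

An audit like this only detects a symptom; deciding what to do about a flagged disparity is where the engineering difficulty Graham describes begins.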

Patrick Lambe, a partner at Straits Knowledge and president of the Singapore Chapter of the International Society for Knowledge Organization, wrote, “I chose the negative answer not because of a dystopian vision for AI itself and technology interaction with human life, but because I believe social, economic and political contexts will be slow to adapt to technology’s capabilities. The real-world environment and the technological capability space are becoming increasingly disjointed and out of synch. Climate change, migration pressures, political pressures, food supply and water will create a self-reinforcing ‘crisis-loop’ with which human-machine/AI capabilities will be largely out of touch. There will be some capability enhancement (e.g., medicine), but on the whole technology contributions will continue to add negative pressures to the other environmental factors (employment, job security, left-right political swings). On the whole I think these disjoints will continue to become more pronounced until a major crisis point is reached (e.g., war).”

Wendy M. Grossman, author of “net.wars” and technology blogger, wrote, “2030 is 12 years from now. I believe human-machine AI collaboration will be successful in many areas, but that we will be seeing, as we now are with Facebook and other social media, serious questions about ownership and who benefits. It seems likely that the limits of what machines can do will be somewhat clearer than they are now, when we’re awash in hype. We will know by then, for example, how successful self-driving cars are going to be, and the problems inherent in handing off control from humans to machines in a variety of areas will also have become clearer. The big fight is to keep people from relying on experimental systems and turning off the legacy ones too soon – which is our current situation with the internet.”

Nathalie Marechal, doctoral candidate at the University of Southern California Annenberg School for Communication who researches the intersection of internet policy and human rights, said, “Absent rapid and decisive actions to rein in both government overreach and companies’ amoral quest for profit, technological developments – including AI – will bring about the infrastructure for total social control, threatening democracy and the right to individual self-determination.”

Jonathan Taplin, director emeritus at the University of Southern California’s Annenberg Innovation Lab, wrote, “My fear is that the current political class is completely unprepared for the disruptions that AI and robotics applied at scale will bring to our economy. While techno-utopians point to universal basic income as a possible solution to wide-scale unemployment, there is no indication that anyone in politics has an appetite for such a solution. And because I believe that meaningful work is essential to human dignity, I’m not sure that universal basic income would be helpful in the first place.”

Suso Baleato, a fellow at Harvard’s Institute of Quantitative Social Science and liaison for the OECD Committee on Digital Economy Policy, commented, “The intellectual property framework impedes the necessary accountability of the underlying algorithms, and the lack of efficient re-distributive economic policies will continue amplifying the bias of the datasets.”

Dan Schultz, senior creative technologist at Internet Archive, responded, “AI will no doubt result in life-saving improvements for a huge portion of the world’s population, but it will also be possible to weaponize in ways that further exacerbate divides of any kind you can imagine (political, economic, education, privilege, etc.). AI will amplify and enable the will of those in power; its net impact on humanity will depend on the nature of that will.”

Stowe Boyd, founder and managing director at Work Futures, said, “There is a high possibility that unchecked expansion of AI could rapidly lead to widespread unemployment. My bet is that, as a result of unrest, governments will step in by the mid-2020s to regulate the spread of AI and slow the impacts of this phenomenon. That regulation might include, for example, not allowing AIs to serve as managers of people in the workplace, but only to augment the work of people on a task or process level. So, we might see high degrees of automation in warehouses, but a human being would be ‘in charge’ in some sense. Likewise, fully autonomous freighters might be blocked by regulations.”

Tom Slee, senior product manager at SAP SE and author of “What’s Yours is Mine: Against the Sharing Economy,” wrote, “Many aspects of life will be made easier and more efficient by AI. But moving a decision such as health care or workplace performance to AI turns it into a data-driven decision driven by optimization of some function, which in turn demands more data. Adopting AI-driven insurance ratings, for example, demands more and more lifestyle data from the insured if it is to produce accurate overall ratings. Optimized data-driven decisions about our lives unavoidably require surveillance, and once our lifestyle choices become input for such decisions we lose individual autonomy. In some cases we can ignore this data collection, but we are in the early days of AI-driven decisions: By 2030 I fear the loss will be much greater. I do hope I am wrong.”
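
Slee’s point that optimizing a rating function “demands more data” can be seen in a toy simulation. The sketch below is hypothetical (synthetic data, assuming only NumPy): an insurer-style rating model is refit with progressively more lifestyle features, and its in-sample error never increases as features are added, so a pure optimizer always has an incentive to collect one more signal.

```python
# Toy illustration of optimization's appetite for data: each added
# "lifestyle feature" lowers the rating model's error. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, total_features = 2_000, 8

X = rng.normal(size=(n, total_features))           # lifestyle signals
true_w = rng.normal(size=total_features)
risk = X @ true_w + rng.normal(scale=1.0, size=n)  # outcome to predict

for k in range(1, total_features + 1):
    Xk = X[:, :k]                                  # use only first k features
    w, *_ = np.linalg.lstsq(Xk, risk, rcond=None)  # least-squares fit
    mse = np.mean((risk - Xk @ w) ** 2)
    print(f"features used: {k}  mean squared error: {mse:.2f}")
# In-sample error is non-increasing in the number of features, so the
# optimization itself keeps demanding more personal data.
```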

Randy Marchany, chief information security officer at Virginia Tech and director of Virginia Tech’s IT Security Laboratory, said, “AI-human interaction in 2030 will be in its ‘infancy’ stage. AI will need to go to ‘school’ in a manner similar to humans. They will amass large amounts of data collected by various sources but need ‘ethics’ training to make good decisions. Just as kids are taught a wide variety of info and some sort of ethics (religion, social manners, etc.), AI will need similar training. Will AI get the proper training? Who decides the training content?”

Stephanie Perrin, president of Digital Discretion, a data privacy consulting firm, wrote, “There is a likelihood that, given the human tendency to identify risk when looking at the unknown future, AI will be used to attempt to predict risk. In other words, more and deeper surveillance will be used to determine who is a good citizen (purchaser, employee, student, etc.) and who is not. This will find its way into public space surveillance systems, employee vetting systems (note the current court case where LinkedIn is suing data scrapers who offer to predict ‘flight risk’ employees) and all kinds of home management systems and intelligent cars. While this might possibly introduce a measure of safety in some applications, the fear that comes with unconscious awareness of surveillance will have a severe impact on creativity and innovation. We need that creativity as we address massive problems in climate change and reversing environmental impacts, so I tend to be pessimistic about outcomes.”

Sam Ladner, a former UX researcher for Amazon and Microsoft, now an adjunct professor at Ontario College of Art & Design, wrote, “Technology is not a neutral tool, but one that has our existing challenges imprinted onto it. Inequality is high and growing. Too many companies deny their employees a chance to work with dignity, whether it be through providing them meaningful things to do, or with the basic means to live. AI will be placed on top of that existing structure. Those who already have dignified work with a basic income will see that enhanced; those who are routinely infantilized or denied basic rights will see that amplified. Some may slip into that latter category because their work is more easily replaced by AI and machine learning.”

Valarie Bell, a computational social scientist at the University of North Texas, commented, “As a social scientist I’m concerned that never before have we had more ways in which to communicate, and yet we’ve never done it so poorly, so venomously and so wastefully. With devices replacing increasingly higher-order decisions and behaviors, people have become more detached, more uninterested and yet more self-focused and self-involved. If people behave this way now, when – let’s say – medical diagnosis is taken over by machines, computers and robotics, how will stressful prognoses be communicated? Will a hologram or a computer deliver ‘the bad news’ instead of a physician? Given the health care industry’s inherent profit motives it would be easy for them to justify how much cheaper it would be to simply have devices diagnose, prescribe treatment and do patient care, without concern for the importance of human touch and interactions. Thus, we may devolve into a health care system where the rich actually get a human doctor while everyone else, or at least the poor and uninsured, get the robot.”

Ramon Lopez de Mantaras, director of the Spanish National Research Council’s Artificial Intelligence Research Institute, said, “I do not think it is a good idea to give high levels of autonomy to AI systems. They are, and will be, weak AI systems without commonsense knowledge. They will have more and more competence, yes, but this will be competence without comprehension. AI machines should remain at the level of tools or, at most, assistants, always keeping the human in the loop. We should all read or re-read the book ‘Computer Power and Human Reason’ by Joseph Weizenbaum before deciding whether or not to give lots of autonomy to stupid machines.”

Mike O’Connor, a retired technologist who worked at ICANN and on national broadband issues, commented, “I’m feeling ‘internet-pioneer regret’ about the Internet of Shit that is emerging from the work we’ve done over the last few decades. I actively work to reduce my dependence on internet-connected devices and the amount of data that is collected about me and my family. I will most certainly work equally hard to avoid human/AI devices/connections.  I earnestly hope that I’m resoundingly proven wrong in this view when 2030 arrives.”

Ian O’Byrne, an assistant professor at the College of Charleston whose focus is literacy and technology, said, “I believe in human-machine/AI collaboration, but the challenge is whether humans can adapt our practices to these new opportunities.”

Susan Mernit, executive director, The Crucible, co-founder and board member of Hack the Hood, responded, “If AI is in the hands of people who do not care about equity and inclusion, it will be yet another tool to maximize profit for a few.”

Vian Bakir, a professor of political communication and journalism at Bangor University, responded, “I am pessimistic about the future in this scenario because of what has happened to date with AI and data surveillance. For instance, the recent furor over fake news/disinformation and the use of complex data analytics in the U.K.’s 2016 Brexit referendum, and in the U.S. 2016 presidential election, to understand, influence and micro-target people in order to try to get them to vote a certain way is deeply undemocratic. It shows that current political actors will exploit technology for personal/political gains, irrespective of wider social norms and electoral rules. There is no evidence that current bad practices would not be replicated in the future, especially as each new wave of technological progress outstrips regulators’ ability to keep up, and people’s ability to comprehend what is happening to them and their data. Furthermore, and related, the capabilities of mass dataveillance in private and public spaces are ever-expanding, and their uptake in states with weak civil society organs and minimal privacy regulation is troubling. In short, dominant global technology platforms show no signs of sacrificing their business models that depend on hoovering up ever more quantities of data on people’s lives, then hyper-targeting them with commercial messages; and across the world, political actors and state security and intelligence agencies then also make use of such data acquisitions, frequently circumventing privacy safeguards or legal constraints.”

Mario Morino, chairman of the Morino Institute and co-founder of Venture Philanthropy Partners, commented, “While I believe AI/ML will bring enormous benefits, it may take us several decades to navigate through the disruption and transition they will introduce on multiple levels.”

Richard Forno, of the Center for Cybersecurity and Cybersecurity Graduate Program at the University of Maryland – Baltimore County, wrote, “AI is only as ‘smart’ and efficient as its human creators can make it. If AI in things like Facebook algorithms is causing this much trouble now, what does the future hold? The problem is less AI’s evolution and more about how humankind develops and uses it – that is where the real crisis in AI will play out.”

Marilyn Cade, longtime global internet policy consultant, responded, “While many people believe technology is ‘neutral,’ in fact, it often reflects the ethics of its creators, but more significantly, those who commercialize it. Most individuals focus on how they personally use technology. They do not spend time (or even have the skills/expertise) to make judgments about the attributes of the way that technology is applied. Advertisements for how you can monitor your home remotely rarely mention that this also means that video capture of your home is ongoing and may be stored somewhere. Use of facial recognition can be used to apprehend criminals, but it can also capture innocent citizens. Machines are not ‘raised’ with ethical training – which hopefully humans are. AI will not necessarily bring in the ethical considerations. Those advancing technology could create huge benefits, for instance, introducing a structural exoskeleton that allows paraplegics to walk, etc. But there are concerns that robots may replace humans, not just support them. A first step is to engage in discussions and debates about ethical challenges and safeguards. And, we must introduce and maintain a focus on critical thinking for our children/youth, so that they are capable of understanding the implications of a different fully digitized world. I love the fact that my typos are auto-corrected, but I know how to spell all the words. I know how to construct a logical argument. If we don’t teach critical thinking at all points in education, we will have a 2030 world where the elites/scientists make decisions that are not even apparent to the average ‘person’ on the street/neighborhood.”

Michael Kleeman, a senior fellow at the University of California – San Diego and board member at the Institute for the Future, wrote, “The utilization of AI will be disproportionate and biased toward those with more resources. In general, it will reduce autonomy, and, coupled with big data, it will reduce privacy and increase social control. There will be some areas where IA [intelligence augmentation] helps make things easier and safer, but by and large it will be a global net negative.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “We already face an un-granted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of). Of the many that I could choose, including health care, and economic/investment guidance, it is easier to talk about health. AI systems will make quite substantial and important contributions to the ability of health care providers to generate accurate diagnoses of maladies, and threats to my well-being, now and in the future. I can imagine the development and deployment of systems in which my well-being is the primary basis of our relationship. I am less sure about how my access to and use of this resource may be constrained or distorted by the interests of the other actors (humans within profit/power-seeking orientations). I assume that they will be aided by their own AI systems informing them how to best present options to me. I am hopeful that we will have agents (whether private, social, governmental) whose interest and responsibility is in ensuring that my interests govern those relationships.”

Paul Vixie, an Internet Hall of Fame member known for designing and implementing several Domain Name System protocol extensions and applications, wrote, “Understanding is a perfect proxy for control. As we make more of the world’s economy non-understandable by the masses, we make it easier for powerful interests to practice population control. Real autonomy or privacy or unpredictability will be seen as a threat and managed around.”

Philip Elmer-DeWitt, longtime journalist and editor who launched Time Magazine’s computers and technology section, now a blogger covering Apple, commented, “The election of Donald Trump, thanks in no small part to misinformation and voter manipulation through the internet, does not make me sanguine about the future of AI.”

Sam Punnett, research and strategy officer at TableRock Media, wrote, “The preponderance of AI-controlled systems are designed to take collected data and enable control advantage. Most of the organizations with the resources to develop these systems do so to enable advantages in commercial/financial transactions, manufacturing efficiency and surveillance. Self-regulation by industry has already been shown to fail (e.g., social media platforms and Wall Street). Government agencies lag in their will and in their understanding of the technology’s implications, and so cannot effectively implement guidelines to curtail the impacts of unforeseen circumstances. As such, government participation will be reactive to the changes that the technology will bring. My greatest fear is a reliance on faulty algorithms that absolve responsibility while failing to account for exceptions.”

Thad Hall, a research scientist and coauthor of “Politics for a Connected American Public,” rounded up several leading reasons for fears: “AI is likely to have benefits – from improving medical diagnoses to improving people’s consumer experiences. However, there are four aspects of AI that are very problematic. First, it is likely to result in more economic uncertainty and dislocation for people, including employment issues and more need to change jobs to stay relevant. Second, AI will continue to erode people’s privacy as search becomes more thorough. China’s model for monitoring populations illustrates what this could look like in both authoritarian and Western countries, with greater facial recognition used to identify people and affect their privacy. Third, AI will likely continue to have biases that are negative toward minority populations — including groups we have not considered. Given that these algorithms often have identifiable biases (e.g., favoring people who are white or male), they likely also have biases that are less well-recognized, such as biases that are negative toward people with disabilities, older Americans or other groups, and these biases may ripple through society in unknown ways. These biases may also affect the privacy issue noted above, with some groups more likely to be monitored effectively. Finally, AI is creating a world where reality can be manipulated in ways we do not appreciate. Fake videos, audio and similar media are likely to explode and create a world where ‘reality’ is hard to discern. The relativistic political world will become more so, with people having evidence to support their own reality or multiple realities that mean no one knows what is the ‘truth.’”

Michel Grossetti, a sociologist expert in systems and director of research at CNRS, the French national science research center, wrote, “Advances in machine translation, speech recognition and robotics are likely to help change our environments. For some they will produce improvements, for others constraints. As always.”

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin, Milwaukee, commented, “I am increasingly concerned that AI-driven decision-making will perpetuate existing societal biases and injustices, while obscuring these harms under the false belief that such systems are ‘neutral.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities in diverse ways. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community-minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern with AI is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs, hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Nancy Greenwald, a respondent who provided no identifying details, wrote, “Human/AI collaboration will offer tremendous benefits in improved decision-making, processes and information sharing through the analysis of big data, among other things. Time to explore the downsides – one obvious downside is the reduction in personal privacy, but perhaps the primary downside is over-reliance on AI, which 1) is only as good as the algorithms created (how are they instructed to ‘learn’?) and 2) has the danger of limiting independent human thinking. How many Millennials can read a map or navigate without the step-by-step instructions from Waze, Google or their iPhones? And information searches online don’t give you an overview. I once wasted 1.5 billable hours searching for a legal concept when two minutes with the human-based BNA outline got me the result. Let’s be thoughtful about how we use this amazing technology.”

Douglas Rushkoff, a professor of media at City University of New York, responded, “The main reason I believe AI’s impact will be mostly negative is that we will be applying it mostly toward the needs of the market, rather than the needs of human beings. So while AI might get increasingly good at extracting value from people, or manipulating people’s behavior toward more consumption and compliance, much less attention will likely be given to how AI can actually create value for people. Even the most beneficial AI is still being measured in terms of its ability to provide utility, value or increase in efficiency – fine values, sure, but not the only ones that matter to quality of life.”

Marc Rotenberg, executive director of Electronic Privacy Information Center, commented, “The challenge we face with the rise of AI is the growing opacity of processes and decision-making. The favorable outcomes we will ignore. The problematic outcomes we will not comprehend. That is why the greatest challenge ahead for AI accountability is AI transparency. We must ensure that we understand and can replicate the outcomes produced by machines. The alternative outcome is not sustainable.”

Marc Brenman, managing partner at IDARE LLC, said, “We do not know all that machines can do. There is no inherent necessity that they will care for us. We may be an impediment to them. They may take orders from evil-doers. They will enable us to make mistakes even faster than we do now. Any technology is only as good as the morality and ethics of its makers, programmers and controllers. If machines are programmed to care more for the earth than for people, they may eliminate us anyway, since we are destroying the earth.”

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments, we need to exercise caution and oversight in AI’s development.”

R “Ray” Wang, founder and principal analyst at Silicon Valley-based Constellation Research, said, “We have not put the controls of AI in the hands of many. In fact, the experience in China has shown how this technology can be used to take away the freedoms and rights of the individual for the purposes of security, efficiency, expediency and whims of the state. On the commercial side, we also do not have any controls in play as to ethical AI. Five elements should be included in the design – transparency, explainability, reversibility, coachability and human-led processes.”

Peter Asaro, a professor at The New School, philosopher of sci-tech and media who examines artificial intelligence and robotics, commented, “AI will produce many advantages for many people, but it will also exacerbate many forms of inequality in society. It is likely to greatly benefit the small group who design and control the technology, to benefit the fairly large group of the already well-off in many ways while potentially harming them in others, and, for the vast majority of people in the world, to offer few visible benefits and be perceived primarily as a tool of the wealthy and powerful to enhance their wealth and power.”

Luis Pereira, associate professor of electronics and nanotechnologies, Universidade Nova de Lisboa, Portugal, responded, “I fear that more control and influence will be exerted on people, such as has started in China. There will be a greater wealth gap, benefits will not spread to all and a caste system will develop, unless a new social compact is put in place, which is unlikely. Widespread revolt is plausible.”

Juan Ortiz Freuler, a policy fellow at the Web Foundation, wrote, “We believe technology can and should empower people. If ‘the people’ are to continue to have a substantive say in how society is run, then the State needs to increase its technical capabilities to ensure proper oversight of these companies. Tech in general and AI in particular will promote the advancement of humanity in every area by allowing processes to scale efficiently, reducing costs and making more services available to more people (including quality health care, mobility, education, etc.). The open question is how these changes will affect power dynamics. To operate effectively, AI requires a broad set of infrastructure components, which are not equally distributed. These include datacenters, computing power and big data. What is more concerning is that there are reasons to expect further concentration. On the one hand, data scales well: the upfront (fixed) costs of setting up a datacenter are large compared to the cost of keeping it running. Therefore, the cost of hosting each extra datum is marginally lower than the previous one. Data is the fuel of AI, and therefore whoever gets access to more data can develop more effective AI. On the other hand, AI creates efficiency gains by allowing companies to automate more processes, meaning whoever gets ahead can undercut competitors. This cycle fuels concentration. As more of our lives are managed by technology, there is a risk that whoever controls these technologies gets too much power. The benefits in terms of quality of life and the risks to people’s autonomy and control over politics are qualitatively different, and they cannot (and should not) be traded off against each other.”
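
Freuler’s scale argument can be made concrete with a little arithmetic. The toy calculation below (all numbers invented for illustration, not the Web Foundation’s figures) shows how a large fixed cost plus a tiny marginal cost makes the average cost per datum fall as holdings grow, which is the dynamic that favors incumbents:

```python
# Toy illustration of declining average cost per datum.
# F is an invented fixed datacenter cost; c an invented cost per datum.
F, c = 10_000_000, 0.001

for n in [10**6, 10**8, 10**10]:
    avg = (F + c * n) / n   # total cost spread over n data points
    print(f"{n:>12} data points -> average cost per datum: {avg:.6f}")
```

The firm already holding ten billion data points pays orders of magnitude less per datum than a new entrant with a million, which is one way the concentration he describes feeds on itself.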

Alistair Knott, an associate professor specializing in cognitive science and AI at Otago University, Dunedin, New Zealand, wrote, “AI has the potential for both positive and negative impacts on society. [Negative impacts are rooted in] the current dominance of transnational companies (and tech companies in particular) in global politics. These companies are likely to appropriate the majority of advances in AI technology – and they are unlikely to spread the benefit of these advances throughout society. We are currently witnessing an extraordinary concentration of wealth in the hands of a tiny proportion of the world’s population. This is largely due to the mainstreaming of neoliberalism in the world’s dominant economies – but it is intensified by the massive success of tech companies, which achieve huge profits with relatively small workforces. The advance of AI technologies is just going to continue this trend, unless quite draconian political changes are effected that bring transnational companies under proper democratic control.”

Meryl Alper, an assistant professor of communication at Northeastern University and a faculty associate at the Berkman Klein Center for Internet and Society, wrote, “My fear is that AI tools will be used by a powerful few to further centralize resources and marginalize people. These tools, much like the internet itself, will allow people to do this ever more cheaply, quickly and in a far-reaching and easily replicable manner, with exponentially negative impacts on the environment. Preventing this in its worst manifestations will require global industry regulation by government officials with hands-on experience in working with AI tools on the federal, state and local level, and transparent audits of government AI tools by grassroots groups of diverse (in every sense of the term) stakeholders.”

Stavros Tripakis, an associate professor of computer science at Aalto University (Finland) and adjunct at the University of California–Berkeley, wrote, “‘1984,’ George Orwell, police state.”

Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, the founding director of the Loebner Prize Competition in Artificial Intelligence, said, “By 2030, it is likely that AIs will have achieved a type of sentience, even if it is not human-like. They will also be able to exercise varying degrees of control over most human communications, financial transactions, transportation systems, power grids and weapon systems. As I noted in my 2008 book, ‘Parsing the Turing Test,’ they will reside in the ‘InterNest’ we have been building for them, and we will have no way of dislodging them. How they decide to deal with humanity – to help us, ignore us or destroy us – will be entirely up to them, and there is no way currently to predict which avenue they will choose. Because a few paranoid humans will almost certainly try to destroy the new sentient AIs, there is at least a reasonable possibility that they will swat us like the flies we are – the possibility that Hawking, Musk and others have warned about. There is no way, to my knowledge, of stopping this future from emerging. Driven by the convenience of connectivity, the greed that underlies business expansion and the pipedreams of muddle-headed people who confuse machine-like intelligence with biological intelligence, we will continue to build AIs we can barely understand and to expand the InterNest in which they will live – until the inevitable – whatever that proves to be – occurs.”

Martin Shelton, a professional technologist, commented, “There are many kinds of artificial intelligence – some reliant on preset rules to appear ‘smart,’ and some which respond to changing conditions in the world. But because AI can be used anywhere we can recognize patterns, the potential uses for artificial intelligence are pretty huge. The question is, how will it be used? I’m less interested in how consumers will interact with AI on a normal day. I’m more interested in how we use AI, and how access to this technology will be distributed. The consumer uses for AI are entirely different from its institutional uses. Most Americans will be able to leverage AI in ordinary consumer technology, for example, cars making decisions about the best way to get to our destination safely, or assistants built into household items. And while these tools will become cheaper and more widespread, we can expect that – like smartphones or web connectivity – their uses will be primarily driven by commercial interests. We’re beginning to see the early signs of AI failing to make smart predictions in larger institutional contexts. But if Amazon fails to correctly suggest the right product in the future, everything is fine. You bought a backpack once, and now Amazon thinks you want more backpacks, forever. It’ll be okay. But sometimes these decisions have enormous stakes. ProPublica documented how automated ‘risk assessment’ software used in U.S. courtroom sentencing procedures is only slightly more accurate at predicting recidivism than the flip of a coin. Likewise, hospitals using IBM Watson to make predictions about cancer treatments find the software often gives advice that humans would not. To mitigate harm in high-stakes situations, we must critically interrogate how our assumptions about our data and the rules that we use to create our AI promote harm.”

Anupam Agrawal, consultant with Tata Consultancy Services, wrote, “I think AI will benefit the health care sector particularly, leading to longevity of life.”

Anthony Nadler, assistant professor of media and communication studies at Ursinus College, commented, “I am honestly completely torn on this question. Both scenarios are entirely possible. To my mind, the societal and human impacts of future AI development will not be a matter of technological discovery – as if a predestined path toward enhanced AI were just waiting to be discovered by tomorrow’s inventors, programmers and entrepreneurs. Rather, the question has to do with how decisions will be made that shape the contingent development of this potentially life-changing technology. And who will make those decisions? In the best-case scenario, the development of AI will be influenced by diverse stakeholders representing different communities who will be affected by its implementation (and this may mean that particular uses of AI – military applications, medical, marketing, etc. – will be overseen by reflective ethical processes). In the absolute worst-case scenario, unrestricted military development will lead to utter destruction – whether in situations in which the ‘machines take over’ or, more likely, in which weapons of tremendous destruction become all the more readily accessible.”

Dave Burstein, editor and publisher at Fast Net News, said, “There’s far too much second-rate AI that is making bad decisions based on inadequate statistical understanding. For example, a parole or sentencing AI probably would find a correlation between growing up in a single-parent household and the likelihood of committing another crime. Confounding variables, like the poverty of so many single mothers, need to be understood and dealt with. I believe it’s wrong for someone to be sent to jail longer because their father left. That kind of problem (confounding variables and the inadequacy of ‘preponderant’ data) is nearly ubiquitous in AI in practice.”
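
Burstein’s worry about confounders can be shown in a few lines. The sketch below is a minimal, synthetic illustration (invented effect sizes, not a real study): poverty alone drives re-arrest in the simulated population, yet a naive model that omits poverty wrongly assigns risk to family structure.

```python
# Omitted-variable bias in a toy "risk" model: poverty is the hidden driver.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
poverty = rng.binomial(1, 0.3, n)                     # hidden confounder
single_parent = rng.binomial(1, 0.2 + 0.4 * poverty)  # correlated with poverty
# Re-arrest depends on poverty only, not on family structure.
p = 1 / (1 + np.exp(-(-2.0 + 1.5 * poverty)))
rearrest = rng.binomial(1, p)

naive = LogisticRegression().fit(single_parent.reshape(-1, 1), rearrest)
full = LogisticRegression().fit(np.column_stack([single_parent, poverty]), rearrest)

print("naive coef (single_parent):", naive.coef_[0][0])  # spuriously positive
print("full  coef (single_parent):", full.coef_[0][0])   # near zero
print("full  coef (poverty):     ", full.coef_[0][1])    # carries the real effect
```

In this toy setup the naive model’s positive coefficient on family structure is pure confounding; once the hidden driver is controlled for, it collapses toward zero, which is exactly the correction Burstein says deployed systems routinely skip.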

danah boyd, a principal researcher with Microsoft Research and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. As a result, there will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I have every expectation that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Paul Kainen, futurist and director of the Lab for Visual Mathematics at Georgetown University, commented, “Quantum cat here: I expect complex superposition of strong positive, negative and null as typical impact for AI. For the grandkids’ sake, we must be positive!”

Karl M. van Meter, founding editor of the Bulletin of Sociological Methodology and author of “Computational Social Science in the Age of Big Data,” said, “The well-being of the world’s population depends on governments making ‘intelligent’ decisions based on AI or other means. Moreover, environmental change may well be the determining factor for future well-being, with or without ‘intelligent’ decisions by world governments.”

Michael Muller, a researcher in the AI interactions group for a global technology solutions provider, said it will leave some people better off and others not, writing, “For the wealthy and empowered, AI will help them with their daily lives – and it will probably help them to increase their wealth and power. For the rest of us, I anticipate that AI will help the wealthy and empowered people to surveil us, to manipulate us, and (in some cases) to control us or even imprison us. For those of us who do not have the skills to jump to the AI-related jobs, I think we will find employment scarce and without protections. In my view, AI will be a mixed and intersectional blessing at best.”

Marina Gorbis, executive director of the Institute for the Future and author of “The Nature of the Future: Dispatches from the Socialstructed World,” responded, “Like all the previous technologies, AI will enhance human capabilities and it will also present new challenges. For example, while AI will accelerate scientific discovery, expanding our access to knowledge of all kinds (medical, space, materials, etc.), without significant changes in our political economy and data governance regimes it is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. We humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity are the essence of being human.”

Michael H. Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, said, “For those without internet connection now, its expansion will probably be positive overall. For the rest, we will see an increasing arms race between uses of control, destructive anarchism, racism, etc., and ad hoc, from-below efforts at promoting social and environmental good. Organizations and states will seek more control to block internal or external attacks of many sorts. The combined struggles will take up an increasing proportion of the world’s attention, efforts and so forth. I doubt that any very viable, democratic, egalitarian order will emerge over the next dozen years, and even in a larger time frame good outcomes are far from guaranteed.”

Thomas Streeter, a professor of sociology at the University of Vermont, said, “AI refers to a bewildering variety of technologies, not an obvious ‘thing.’ Code is more like law – a collection of socially-shaped tools and relations – than it is like mind. The technology will not determine whether things are better or worse in 2030; social and political choices will.”

Paul Werbos, a former program director at the U.S. National Science Foundation who first described the process of training artificial neural networks through backpropagation of errors in 1974, said, “We are at a moment of choice. The outcome will depend a lot on the decisions of very powerful people who do not begin to know the consequences of the alternatives they face, or even what the substantive alternatives are.”

Fiona Kerr, industry professor of neural and systems complexity at the University of Adelaide, Australia, commented, “The answer depends very much on what we decide to do regarding the large questions: ensuring equality of improved global health; agreeing on what productivity and worth now look like, partly supported by a global wage; fairly redistributing technology profits to invest in both international and national social capital; fostering robust discussion on the role of policy in rewarding technologists and businesses that build quality partnerships between humans and AI; and growing our understanding of the neurophysiological outcomes of human-human and human-technological interaction, which allows us to best decide what not to hand over to technology, when a human is more effective and how to ensure we maximise the wonders of technology as an enabler of a human-centric future.”

Anthony Judge, author, futurist, editor of the Encyclopedia of World Problems and Human Potential, former head of the Union of International Associations, said, “AI will offer greater possibilities. My sense is that it will empower many (most probably 1% to 30%) and will disempower many (if not 99%). Especially problematic will be the level of complexity created for the less competent (notably the elderly) as is evident with taxation and banking systems – issues to which sysadmins are indifferent. For some it will be a boon – proactive companions (whether for quality dialogue or sex). Sysadmins will build in unfortunate biases. Missing will be the enabling of interdisciplinarity – as has long been possible but carefully designed out for the most dubious divide-and-rule reasons. Blinkered approaches and blind spots will set the scene for unexpected disasters – currently deniably incomprehensible (Black Swan effect). Advantages for governance will be questionable. Better oversight will be dubiously enabled.”

Bob Metcalfe, Hall of Fame co-inventor of Ethernet, founder of 3Com, now a professor of innovation and entrepreneurship at the University of Texas-Austin, said, “Better. Pessimists are often right, but they never get anything done. All technologies come with problems, sure, but those generally don’t – in the end – matter. Generally, they get solved. The hardest problem I see is the evolution of work. Hard to figure out. Hmmm. Forty percent of us used to know how to milk cows, but now less than 1% do. We all used to tell elevator operators which floor we wanted, and now we press buttons. Most of us now drive cars and trucks and trains, but that’s on the verge of being over. AIs are most likely not going to kill jobs. They will handle parts of jobs, enhancing the productivity of their humans.”

Shigeki Goto, Asia-Pacific internet pioneer and Internet Hall of Fame member, a professor of computer science at Waseda University, commented, “Better off. AI is already applied to personalized medicine for the individual patient. Similarly, it will be applied to learning or education to realize ‘personalized learning,’ or tailored education. We need to collect data that covers both successful learning and failure experiences, because machine learning requires positive and negative data.”
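
Goto’s closing point, that a learner needs both kinds of outcome, is easy to demonstrate. In the hedged sketch below (the “hours studied” feature and all data are invented), scikit-learn refuses outright to fit a classifier given only the successes, while the two-class fit yields a usable model:

```python
# Why machine learning needs positive AND negative examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
hours_studied = rng.uniform(0, 10, 1000)
passed = (hours_studied + rng.normal(0, 2, 1000) > 5).astype(int)
X = hours_studied.reshape(-1, 1)

try:  # successes only: there is nothing to discriminate against
    LogisticRegression().fit(X[passed == 1], passed[passed == 1])
except ValueError as e:
    print("single-class fit fails:", e)

model = LogisticRegression().fit(X, passed)  # both outcomes: a usable model
print("P(pass | 3 hours) =", model.predict_proba([[3.0]])[0, 1])
```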

Micah Altman, a senior fellow at the Brookings Institution and head scientist in the program on information science at MIT Libraries, wrote, “AI methods are, in a core sense, defined by their ability to engage with the complexity of the human world. Thus, advances in AI represent advances in our ability to design methods and tools that engage with the human world. And as the integration of AI methods into our technologies becomes ubiquitous, our interactions change in two ways: Our interactions with information systems become more natural, and our tools are better able to adapt our interactions with the world to our needs. This is especially important to those who face barriers to access today because of physical or mental disabilities or divergences. By 2030, the AI built into consumer devices and technologies has the potential to enable a blind person to commute to the school of their choice using a self-driving vehicle; to enable a deaf person to readily access speeches, lectures and class discussions using speech recognition in their earbuds; to enable someone on the autism spectrum to recognize faces and emotional expression using face recognition in their smart glasses. Moreover, these technologies will help to adapt learning (and other environments) to the needs of each individual by translating language, aiding memory and providing us feedback on our own emotional and cognitive state, and on the environment. We all need adaptation; each of us, practically every day, is at times tired, distracted, fuzzy-headed or nervous, which limits how we learn, how we understand and how we interact with others. AI has the potential to assist us to engage with the world better – even when conditions are not ideal – and to better understand ourselves.”

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews now doing graduate research at Princeton University, said, “2030 is not very far away. By then we are likely to see AI advance sufficiently to fully integrate into most industries. AI may well replace humans in certain task-based jobs such as manufacturing, and the inevitable loss of such jobs must be anticipated and mitigated. By 2030, AI will likely not have advanced sufficiently to fully replace humans in more-complex tasks involving critical thinking and uncertainty. In these jobs, AI will instead augment human intelligence. In health care, for example, it will help doctors more accurately diagnose and treat disease, and continually monitor high-risk patients through internet-connected medical devices. It will bring health care to places with a shortage of doctors, allowing health care workers to diagnose and treat disease anywhere in the world and to prevent disease outbreaks before they start. The possibilities are exciting, but not without risk. Already, there is an overreliance on AI to make consequential decisions that affect people’s lives. We have rushed to use AI to decide everything, from what content we see on social media to assigning credit scores to determining how long a sentence a defendant should serve. While often well-intentioned, these uses of AI are rife with ethical and human rights issues, from perpetuating racial bias to violating our rights to privacy and free expression. If we have not dealt with these problems through smart regulation, consumer/buyer education and establishment of norms across the AI industry, we could be looking at a vastly more unfair, polarized and surveilled world in 2030.”

Yvette Wohn, director of the Social Interaction Lab and expert on human-computer interaction at New Jersey Institute of Technology, commented, “Artificial intelligence will be naturally integrated into our everyday lives. Even though people are concerned about computers replacing the jobs of humans, the best-case scenario is that technology will be augmenting human capabilities and performing functions that humans do not like to do. One area in which artificial intelligence will become more sophisticated will be in its ability to enrich the quality of life, so that the current age of workaholism will transition into a society where leisure, the arts, entertainment and culture are able to enhance the well-being of society in developed countries and solve issues of water production, food growth/distribution and basic health provision in developing countries. Smart farms and connected distribution systems will hopefully eliminate urban food deserts and enable food production in areas not suited for agriculture. Artificial intelligence will also become better at connecting people and providing immediate support to people who are in crisis situations.”

Fred Davis, mentor at Runway Incubator, San Francisco, responded, “I’m more optimistic about AI than most of my peers. As a daily user of the Google Assistant on my phone and both Google Home and Alexa, I feel like AI has already been delivering significant benefits to my daily life for a few years. Google seems to be quite far ahead of Alexa, Siri, Cortana and Bixby… I’ve used them all. My wife and I are so used to talking with Google and Alexa throughout the day that it has become quite natural. We take having an always-on, omnipresent assistant on hand for granted at this point. Google Home’s ability to tell us apart and even respond with different voices is a major step forward in making computers people-literate, rather than the other way around. There’s always a concern about privacy, but so far it hasn’t caused us any problems. Obviously, this could change, and instead of a helpful friend I might look at these assistants as creepy strangers. Maintaining strict privacy and security controls is essential for these types of services.”

Peng Hwa Ang, professor of communications at Nanyang Technological University and author of “Ordering Chaos: Regulating the Internet,” commented, “AI is still in its infancy. A lot of it is rule-based and not demanding of true intelligence or learning. But even so, I find it useful. My car has lane assistance. I find that it makes me a better driver. When AI is more full-fledged, it would make driving safer and faster. I am using AI for some work I am doing on sentiment analysis. I find that I am able to be more creative in asking questions to be investigated. I expect AI will compel greater creativity. Right now, the biggest fear of AI is that it is a black-box operation – yes, the factors chosen are good and accurate and useful, but no one knows why those criteria are chosen. We know the percentages of the factors, but we do not know the whys. Hopefully, by 2030, the box will be more transparent. That’s on the AI side. On the human side, I hope human beings understand that true AI will make mistakes. If not, it is not real AI. This means that people have got to be ready to catch the mistakes that AI will make. It will be very good. But it will (still) not be foolproof.”
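
Ang’s “percentages of the factors” maps neatly onto what today’s libraries actually expose. A hedged sketch (the features, names and data are invented and this is not Ang’s sentiment system): a random forest reports how much each input contributed to its decisions, yet says nothing about why those inputs matter.

```python
# "Percentages but not whys": feature importances from a black-box model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))   # stand-ins for, e.g., text features
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(["exclamations", "negations", "length"],
                     forest.feature_importances_):
    print(f"{name}: {imp:.0%}")  # the 'percentages' -- with no explanation of why
```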

Kristin Jenkins, executive director of BioQUEST Curriculum Consortium, said, “Like all tools, the benefits and pitfalls of AI will depend on how we use it. A growing concern is the collection and potential uses of data about people’s day-to-day lives. ‘Something’ always knows where we are, the layout of the house, what’s in the fridge and how much we slept. The convenience provided by these tools will override caution about data collection, so strong privacy protection must be legislated and culturally nurtured. We need to learn to be responsible for our personal data and aware of when and how it is collected and used. One of the benefits of this technology is the potential to have really effective, responsive education resources. We know that students benefit from immediate feedback and the opportunity to practice applying new information repeatedly to enhance mastery. AI systems are perfect for analyzing students’ progress, providing more practice where needed and moving on to new material when students are ready. This allows time with instructors to focus on more complex learning, including 21st-century skills.”
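
One classic technique behind the kind of adaptive practice Jenkins describes is Bayesian knowledge tracing, which maintains a running probability that a student has mastered a skill and moves on once it crosses a threshold. A minimal sketch follows; the parameter values are invented for illustration.

```python
# Bayesian knowledge tracing: update mastery estimate from each attempt.
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Posterior probability of mastery given one observed answer."""
    if correct:
        evidence = p_known * (1 - slip) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        evidence = p_known * slip / (
            p_known * slip + (1 - p_known) * (1 - guess))
    return evidence + (1 - evidence) * learn  # chance of learning this step

p = 0.3                                    # prior mastery estimate
for outcome in [True, False, True, True]:  # a short practice sequence
    p = bkt_update(p, outcome)
    print(f"estimated mastery: {p:.2f}")
# A tutor might offer more practice until p exceeds, say, 0.95.
```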

Andreas Kirsch, fellow at Newspeak House, formerly with Google and DeepMind in Zurich and London, wrote, “Higher education outside of normal academia will benefit further from AI progress and empower more people with access to knowledge and information. For example, question-and-answer systems will improve. Tech similar to Google Translate and WaveNet will lower the barrier of knowledge acquisition for non-English speakers. At the same time, child labor will be reduced because robots will be able to perform the tasks far cheaper and faster, forcing governments in Asia to find real solutions.”

Bart Knijnenburg, assistant professor of computer science active in the Human Factors Institute at Clemson University, said, “Whether AI will make our lives better depends on how it is implemented. Many current AI systems (including adaptive content-presentation systems and so-called recommender systems) try to avoid information and choice overload by replacing our decision-making processes with algorithmic predictions. True empowerment will come from these systems supporting rather than replacing our decision-making practices. This is the only way we can overcome choice/information overload and at the same time avoid so-called ‘filter bubbles.’ For example, Facebook’s current post-ranking systems will eventually turn us all into cat-video-watching zombies, because they follow our behavioral patterns, which may not be aligned with our preferences. The algorithms behind these tools need to support human agency, not replace it.”
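
The distinction Knijnenburg draws can be sketched in a few lines. Below, a toy ranker (all items, scores and weights invented) blends the algorithm’s behavioral prediction with interests the user has stated explicitly, so the prediction informs rather than dictates the feed:

```python
# Supporting, not replacing, the user's choice: blend prediction with
# explicitly stated preferences.
items = {
    "cat video":         {"predicted_engagement": 0.95, "topic": "entertainment"},
    "local news":        {"predicted_engagement": 0.40, "topic": "news"},
    "science explainer": {"predicted_engagement": 0.35, "topic": "learning"},
}
user_stated_interest = {"entertainment": 0.2, "news": 0.9, "learning": 0.8}

def score(item, alpha=0.5):
    """alpha trades off behavioral prediction against stated preference."""
    return (alpha * item["predicted_engagement"]
            + (1 - alpha) * user_stated_interest[item["topic"]])

for name in sorted(items, key=lambda n: score(items[n]), reverse=True):
    print(name, round(score(items[name]), 2))
```

With alpha = 1 the system reverts to pure engagement ranking and the cat video wins; lowering alpha hands control back to the user’s stated priorities.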

Perry Hewitt, a marketing, content and technology executive, wrote, “Today, voice-activated technologies are an untamed beast in our homes. Some 16% of Americans have a smart speaker, and yet they are relatively dumb devices: They misinterpret questions, offer generic answers and, to the consternation of some, are turning our kids into assholes. I am bullish on human-machine interactions developing a better understanding of and improving our daily routines. I think in particular of the working parent, often although certainly not exclusively a woman, who carries so much information in their head. What if a human-machine collaboration could stock the house with essentials, schedule the pre-camp pediatrician appointments and prompt drivers about the alternate-side parking/street-cleaning rules? The ability of narrow AI to assimilate new information (the bus is supposed to come at 7:10 but a month into the school year is known to actually come at 7:16) could keep a family connected and informed with the right data, and reduce the mental load of household management.”
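
Hewitt’s bus example is, at bottom, a tiny online-learning problem. Here is a minimal sketch (schedule and observations invented) using exponential smoothing, one common way an assistant could revise its estimate as real arrivals accumulate:

```python
# Revising a scheduled time from observed arrivals via exponential smoothing.
def update_estimate(current, observed, alpha=0.3):
    """Blend each new observation into the running estimate."""
    return current + alpha * (observed - current)

estimate = 7 + 10 / 60                     # scheduled 7:10, held in hours
for observed_minutes in [16, 15, 17, 16]:  # a month of actual arrivals
    estimate = update_estimate(estimate, 7 + observed_minutes / 60)
    h, m = int(estimate), round((estimate % 1) * 60)
    print(f"expected bus time: {h}:{m:02d}")
```

After a few observations the estimate drifts from the printed schedule toward the bus’s actual behavior, which is the quiet assimilation of new information Hewitt describes.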

Dana Klisanin, psychologist, futurist and game designer, predicted, “People will increasingly realize the importance of interacting with each other and the natural world and they will program AI to support such goals, which will in turn support the ongoing emergence of the ‘slow movement.’ For example, grocery shopping and mundane chores will be allocated to AI (smart appliances), freeing up time for preparation of meals in keeping with the slow food movement. Concern for the environment will likewise encourage the growth of the slow goods/slow fashion movement. The ability to recycle, reduce, re-use will be enhanced by the use of in-home 3D printers, giving rise to a new type of ‘craft’ that is supported by AI. AI will support the ‘cradle-to-grave’ movement by making it easier for people to trace the manufacturing process from inception to final product.”

Tim Morgan, a respondent who provided no identifying details, said, “Human/AI collaboration over the next 12 years will improve the overall quality of life by finding new approaches to persistent problems. We will use these adaptive algorithmic tools to explore whole new domains in every industry and field of study: materials science, biotech, medicine, agriculture, engineering, energy, transportation and more. Algorithmic machine learning will be used to explore every field of study that generates data. It will be our intelligence amplifier, exhaustively exploring data and designs in ways humans alone cannot. The world was shocked when IBM’s Deep Blue computer beat Garry Kasparov in 1997. What emerged later was the realization that human and AI ‘centaurs’ could combine to beat anyone, human or AI. The synthesis was more than the sum of the parts. This goes beyond computability into human relationships. AIs are beginning to understand and speak the human language of emotion. The potential of affective computing ranges from productivity-increasing adaptive interfaces, to ‘pre-crime’ security monitoring of airports and other gathering places, to companion ‘pets’ which monitor their aging owners and interact with them in ways that improve their health and disposition. Will there be unseen dangers or consequences? Definitely. That is our pattern with our tools. We invent them, use them to improve our lives and then refine them when we find problems. AI is no different.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR Program, now associate provost at Jackson State University, responded, “While most people may be better off, huge segments of society will be left behind or excluded completely from the benefits of digital advances – many persons in underserved communities, as well as others who are socio-economically challenged. This is because these persons will generally be under-prepared, with little or no digital training or knowledge base. They rarely have access to the relatively ubiquitous internet, except when at school or in the workplace. Clearly, the children of these persons will be greatly disadvantaged. My hope is that the AI/human-machine interface will become commonplace, especially in the academic research and health care arenas. I envision significant advances in brain-machine interfaces to facilitate mitigation of physical and mental challenges. Similar uses of robotics should also assist the elderly. My fear is that priority may be given to military uses, and that it will be most accessible to those with the greatest financial resources. Actions should be taken to make the internet universally available and accessible, and to provide the training and know-how for all users.”

David Klann, consultant and software developer at Broadcast Tool & Die, responded, “AI and related technologies will continue to enhance peoples’ lives. I tend toward optimism; I instinctively believe there are enough activists who care about the ethics of AI that the technology will be put to use solving problems that humans cannot solve on their own. Take mapping, for instance. I recently learned about congestion problems caused by directions being optimized for individuals. People are now tweaking the algorithms to account for multiple people taking the ‘most efficient route’ that had become congested and was causing neighborhood disturbance due to the increased traffic. I believe people will construct AI algorithms to learn of and to ‘think ahead’ about such unintended consequences, and to avoid them before they become problems. Of course, my fear is that money interests will continue to wield an overwhelming influence over AI and machine learning (ML). These can be mitigated through fully disclosed techniques, transparency and third-party oversight. These third parties may be government institutions or non-government organizations with the strength to ‘enforce’ ethical use of the technologies. Open source code and open ML training data will contribute significantly to this mitigation.”

Geoff Livingston, author and futurist, commented, “The term AI misleads people. What we should call the trend is machine learning or algorithms. ‘Weak’ AI, as it is called – today’s AI – reduces repetitive tasks that most people find mundane. This in turn produces an opportunity to escape the trap of the proletariat, being forced into monotonous labor to earn a living. Instead of thinking of the ‘Terminator,’ we should view the current trend as an opportunity to seek out and embrace the tasks that we truly love, including more creative pursuits. If we embrace the inevitable evolution of technology to replace redundant tasks, we can encourage today’s youth to pursue more creative and strategic pursuits. Further, today’s workers can learn how to manage machine learning or embrace training to pursue new careers that they may enjoy more. My fear is that many will simply reject change and blame technology, as has often been done. One could argue that much of the populist uprising we are experiencing globally finds its roots in the current displacements caused by machine learning, as typified by smart manufacturing. If so, the movement forward will be troublesome, rife with dark bends and turns that we may regret as cultures and countries.”

Mark Crowley, an assistant professor, expert in machine learning and core member of the Institute for Complexity and Innovation at the University of Waterloo, Ontario, Canada, wrote, “While driving home on a long commute from work, the human will be reading a book in the heads-up screen of the windshield. The car will be driving autonomously on the highway for the moment. The driver will have an idea to note down and add to a particular document; all this will be done via voice. In the middle of this, a complicated traffic arrangement will be seen approaching via other networked cars. The AI will politely interrupt the driver, put away the heads-up display and warn the driver that they may need to take over in the next 10 seconds or so. The conversation will be flawless and natural, like Jarvis in ‘Avengers,’ even charming. But it will be task-focused, limited to the car, personal events, notes and news.”

Denise N. Rall, a professor of arts and social sciences at Southern Cross University, Australia, responded, “The basic problem with the human race and its continued existence on this planet is overpopulation and depletion of the Earth’s resources. So far, interactions with technology have reduced population growth in the ‘first world’ but not in developing countries, and poverty will fuel world wars. Technology may support robotic wars and reduce casualties for the wealthy countries. The disparity between rich and poor will continue unabated.”

Denis Parra, assistant professor of computer science in the School of Engineering at PUC Chile, commented, “I live in a developing country. Whilst there are potential negative aspects of AI (loss of jobs), for people with disabilities AI technology could improve their lives. I imagine people entering a government office or health facility where people with eye- or ear-related disabilities could effortlessly interact to state their necessities and resolve their information needs.”

Ken Birman, a professor in the department of computer science at Cornell University, responded, “By 2030, I believe that our homes and offices will have evolved to support app-like functionality, much like the iPhone in my pocket. People will customize their living and working spaces, and different app suites will support different lifestyles or special needs. For example, think of a young couple with children, a group of students sharing a home or an elderly person who is somewhat frail. Each would need different forms of support. This ‘applications’ perspective is broad and very flexible. But we also need to ensure that privacy and security are strongly protected by the future environment. I do want my devices and apps linked on my behalf, but I don’t ever want to be continuously spied-upon. I do think this is feasible, and, as it occurs we will benefit in myriad ways.”

Doug Schepers, chief technologist at Fizz Studio, said, “AI/ML, in applications and in autonomous devices and vehicles, will make some jobs obsolete, and the resulting unemployment will cause some economic instability that impacts society as a whole, but most individuals will be better off. The social impact of software and networked systems will grow increasingly complex, so ameliorating that software problem with software agents may be the only way to decrease harm to human lives – but only if we can focus the goal of software on benefiting individuals and groups rather than companies or industries.”

Paul Jones, professor of information science at the University of North Carolina, Chapel Hill, responded, “AI as we know it in 2018 is just beginning to understand itself. Like HAL, it will have matured by 2030 into an understanding of its post-adolescent self and of its relationship to humans and to the world. But, also, humans will have matured in our relationship to AI. Like all adolescent relationships there will have been risk taking and regrets and hopefully reconciliation. Language was our first link to other intelligences, then books, then the internet – each a more intimate conversation than the one before. AI will become our link, adviser and to some extent our wise and loving companion.”

Joseph Konstan, distinguished professor of computer science specializing in human-computer interaction and AI at the University of Minnesota, said, “Widespread deployment of AI has immense potential to help in key areas that affect a large portion of the world’s population, including agriculture, transportation (more efficiently getting food to people) and energy. Even as soon as 2030, I expect we’ll see substantial benefits for many who are today disadvantaged, including the elderly and physically handicapped (who will have greater choices for mobility and support) and those in the poorest parts of the world. Unfortunately, there will also be many losers. 2030 isn’t soon enough for the massive reforms needed to avoid substantial worker displacement (and poverty). I’m very optimistic about 2100, but 2030 will bring many winners and also many losers.”

Jean-Claude Heudin, a professor with expertise in AI and software engineering at the Devinci Research Center at Pole Universitaire Leonard de Vinci, France, wrote, “Natural intelligence and artificial intelligence are complementary. We need all the intelligence possible for solving the problems yet to come. More intelligence is always better.”

Daniel Siewiorek, a professor with the Human-Computer Interaction Institute at Carnegie Mellon University, predicted, “AI will enable systems to perform labor-intensive activities where there are labor shortages. For example, consider recovery from an injury. There is a shortage of physical therapists to monitor and correct exercises. AI would enable a virtual coach to monitor, correct and encourage a patient. Virtual coaches could take on the persona of a human companion or a pet, allowing the aging population to live independently. The downside: isolating people, decreasing diversity, a loss of situational awareness (witness GPS directional systems) and ‘losing the receipt’ of how to do things. In the latter case, as we layer new capabilities on older technologies, if we forget how the older technology works we cannot fix it, and layered systems may collapse, thrusting us back into a more-primitive time.”

Danny O’Brien, international director for a nonprofit digital rights group, commented, “I’m generally optimistic about the ability of humans to direct technology for the benefit of themselves and others. I anticipate human-machine collaboration to take place at an individual level, with tools and abilities that enhance our own judgment and actions, rather than this being a power restricted to a few actors. So, for instance, if we use facial-recognition or predictive tools, it will be under the control of an end-user, transparent and limited to personal use. This may require regulation, internal coding restraints or a balance being struck between user capabilities. But I’m hopeful we can get there.”

Danil Mikhailov, head of data and innovation for Wellcome Trust, responded, “I see a positive future of human/AI interaction in 2030. In my area, health, there is tremendous potential in the confluence of advances in big data analysis and genomics to create personalised medicine and improve diagnosis, treatment and research. Although I am optimistic about human capacity for adaptation, learning and evolution, technological innovation will not always proceed smoothly. In this we can learn from previous technological revolutions. For example, [Bank of England chief economist] Andy Haldane rightly pointed out that the original ‘luddites’ in the 19th century had a justified grievance. They suffered severe job losses and it took the span of a generation for enough jobs to be created to overtake the ones lost. It is a reminder that the introduction of new technologies benefits people asymmetrically, with some suffering while others benefit. To realise the opportunities of the future we need to acknowledge this and prepare sufficient safety nets, such as well-funded adult education initiatives, to name one example. It’s also important to have an honest dialogue between the experts, the media and the public about the use of our personal data for social-good projects, like health care, taking in both the risks of acting – such as effects on privacy – and the opportunity costs of not acting. It is a fact that lives are lost currently in health systems across the world that could be saved even with today’s technology let alone that of 2030.”

Charles Ess, an expert in ethics and professor with the Department of Media and Communication, University of Oslo, Norway, said, “The key to the question is the focus on autonomy and agency. It seems quite clear that evolving AI systems will bring about an extraordinary array of options, making our lives more convenient. But this convenience almost always comes at the cost of deskilling – of our offloading various cognitive practices and virtues to the machines, thereby becoming less and less capable of exercising our own agency, autonomy and, most especially, our judgment (phronesis). Empathy and loving itself, in particular, are virtues that are difficult to acquire and enhance. My worst fears are severe degradation, and perhaps outright loss, of such capacities – and, worst of all, our forgetting they even existed in the first place, along with the worlds they have made possible for us over most of our evolutionary and social history.”

Chao-Lin Liu, a professor at National Chengchi University, Taiwan, commented, “The answer depends on what we mean by ‘most’ and ‘better off.’ AI machines may help us handle many tasks in our lives, making our lives relatively simple. AI machines might also make it a lot harder for people to find jobs. Some claim that new types of jobs will be created. Perhaps. No one can make precise predictions now.”

Ben Shneiderman, distinguished professor and founder of the Human Computer Interaction Lab at the University of Maryland, said, “Automation is largely a positive force, which increases productivity, lowers costs and raises living standards. Automation expands the demand for services, thereby raising employment, which is what has happened at Amazon and FedEx. My position is contrary to those who believe that robots and artificial intelligence will lead to widespread unemployment. Over time I think AI/machine learning strategies will become merely tools embedded in ever-more-complex technologies for which human control and responsibility will become clearer.”

Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, commented, “In order for people, in general, to be better off as AI advances through 2030, a progressive political agenda – one rooted in the protection of civil liberties and human rights and also conscious of the dangers of widening social and economic inequalities – would have to play a stronger role in governance. In light of current events, it’s hard to be optimistic that such an agenda will have the resources necessary to keep pace with transformative uses of AI throughout ever-increasing aspects of society. To course-correct in time, the general public must develop a deep appreciation of why leading ideologies concerning the market, prosperity and security are not in line with human flourishing.”

Dan Buehrer, a retired professor of computer science formerly with National Chung Cheng University, Taiwan, responded, “Statistics will be replaced by individualized models, thus allowing control of all individuals by totalitarian states and, eventually, by socially intelligent machines.”

David Wells, chief financial officer at Netflix, responded, “Technological progression and advancement have always been met with fear and anxiety, giving way to tremendous gains for humankind as we learn to enhance the best of the changes and adapt and alter the worst. Continued networked AI will be no different, but the pace of technological change has increased, which requires us to adapt more quickly. This pace presents challenges for some human groups and societies that we will need to acknowledge and work through to avoid marginalization and political conflict. But the gains from better education, medical care and crime reduction will be well worth the challenges.”

Yeseul Kim, a designer for a major South Korean search firm, wrote, “The prosperity generated by and the benefits of AI will promote the quality of living for most people only if its ethical implications and social impacts are widely discussed and shared within human society, and only if pertinent regulations and legislation are set up to mitigate the misconduct that could be brought about by AI advancement. If these conditions are met, computers and machines can process data at unprecedented speed and at an unrivaled level of precision, and this will improve the quality of life, especially in the medical and health care sectors. It has already been proven and widely shared among medical expert groups that doctors perform better in detecting diseases when they work with AI. Robotics for surgery is also progressing; robots can assist human surgeons, who inevitably face physical limits when conducting surgery, so patients will benefit as well.”

Barry Hughes, senior scientist at the Center for International Futures, University of Denver, commented, “Although AI will be disruptive through 2030 and beyond, meaning that there will be losers in the workplace and growing reasons for concern about privacy and AI/cyber-related crime, on the whole I expect that individuals and societies will make choices on use and restriction of use that benefit us. Examples include self-driving vehicles, which my wife’s deteriorating vision and that of a growing elderly population will make increasingly liberating. I would expect rapid growth in use for informal/non-traditional education as well as some more ambivalent growth in the formal-education sector. Big-data applications in health-related research should be increasingly productive, and health care delivery should benefit. Transparency with respect to its character and use, including its developers and their personal benefits, is especially important in limiting the inevitable abuse. PS: I was one of the original test users of the ARPANET and now can hardly imagine living without the internet.”

Robert Stratton, cybersecurity expert, said, “Currently, while there is widespread acknowledgement in a variety of disciplines of the potential benefits of machine learning and artificial intelligence technologies, progress has been tempered by their misapplication. Part of data science is knowing the right tool for a particular job. As more-rigorous practitioners begin to gain comfort and apply these tools to other corpora it’s reasonable to expect some significant gains in efficiency, insight or profitability in many fields. This may not be visible to consumers except through increased product choice, but it may include everything from drug discovery to driving.”

Daniel Obam, information and communications technology policy advisor, responded, “As we develop AI, the issue of ethical behaviour is paramount. AI will allow authorities to analyse and allocate resources where there is the greatest need. AI will also change the way we work (robots) and travel (autonomous vehicles). It will not be unusual to find robots performing chores even in our own homes. Robots will also do more deliveries and perform cleaning duties in homes and offices. There will be more interactions between humans and machines, with machines/robots being able to understand and interpret what humans are saying or want to do. Digital assistants that mine and analyse data will help professionals make concise decisions in health care, manufacturing and agriculture, among other fields. Smart devices and virtual reality will enable humans to interact with and learn from historical or scientific issues more clearly. Using AI, authorities will be able to prevent crime before it happens, and even when crimes do happen they will be easier to solve. The fear is that it remains unclear whether AI might override human beings and take over the world. Cybersecurity needs to be at the forefront to prevent unscrupulous individuals from using AI to perpetrate harm or evil on the human race.”

Paola Perez, vice president of the Internet Society chapter in Venezuela, and chair of the LACNIC Public Policy Forum, responded, “Humans will be better with AI. Many problems will be solved, but many jobs are going to disappear and there may be more poor people as a result. Families will be dependent on technology, health may be advanced in some ways. Will we see life-extension? Maybe, and maybe not, because our dependence on technology may also be destructive to our health.”

Ed Lyell, longtime internet strategist and professor at Adams State University, predicted, “By 2030, lifelong learning will become more widespread for all ages. The tools already exist, including Khan Academy and YouTube. We don’t have to know as much, just how to find information when we want it. We will have on-demand, 24/7 ‘schooling.’ This will make going to sit-down classroom schools more and more a hindrance to our learning. The biggest negative will be from those protecting current, status-quo education including teacher/faculty, school boards and college administrators. They are protecting their paycheck- or ego-based role. They will need training, counseling and help to embrace the existing and forthcoming change as good for all learners. Part of the problem now is that they do not want to acknowledge the reality of how current schools are today. Some do a good job, yet these are mostly serving already smarter, higher-income communities. Parents fight to have their children have a school like they experienced, forgetting how inefficient and often useless it was. AI can help customize curricula to each learner and guide/monitor their journey through multiple learning activities, including some existing schools, on-the-job learning, competency-based learning, internships and such. You can already learn much more, and more efficiently using online resources than almost all of the classes I took in my public schooling and college, all the way through getting a Ph.D.”

Adam Sah, advisor to technology startups and former tech lead and manager at Google Research, said, “Machine learning is a powerful, broad technology that both enhances existing functions and enables new ones. Assuming humans continue to choose life-improving applications (i.e., non-military), then of course our lives will be improved.”

Tomas Ohlin, longtime professor at Linköping and Stockholm universities in Sweden, responded, “The AI future will be positive for human beings, since AI programming is a human endeavour, where our morals will be built into our systems. Naturally misuse will exist, but humans have survived so far and will do so also in the future. Example: Families will use robots for their daily life, and these robots will be friendly. AI will also be used in development of political programs, so that their proposals will be more realistic and democratic. AI will also support increased citizen influence.”

Christopher Leslie, lecturer in media, science and technology studies at South China University of Technology, wrote, “Technologies that are largely unseen will benefit the citizens of 2030. Although AI may not be very useful to consumers at first, it will help medicine, law, transportation and communication systems that can benefit from machines that learn. The person of 2030 may not necessarily interact with these systems directly, but they will provide indirect benefits.”

Collin Baker, senior AI researcher at the International Computer Science Institute at the University of California – Berkeley, commented, “I fear that advances in AI will be turned largely to the service of nation-states and mega-corporations, rather than used for truly constructive purposes. The positive potential, particularly in education and health care, is enormous, but people will have to fight to make it come about. I hope that human-computer interaction will include both much better natural language understanding and natural language generation, so that the Turing test will be passé. I hope that AI will get much better at understanding Gricean maxims for cooperative discourse, and at understanding people’s beliefs, intentions and plans.”

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia, Spain, responded, “I sincerely believe that advances in AI will generate many more possibilities for a better life in the future. This will come from all the different applications that people will have available in their everyday lives, although these advances will have a noticeable impact on our privacy, since these applications are built on the information we generate through our use of different technologies. For example, in the field of health, many solutions will appear that will allow us to anticipate current problems and discover other risk situations more efficiently. The use of personal gadgets and other domestic devices will allow us to interact directly with professionals and institutions in any situation of danger or deterioration of our health. However, it will be necessary to regulate access to this information and its use in a decisive way.”

Denise Garcia, an associate professor of political science and international affairs at Northeastern University, said, “Humanity will come together to cooperate.”

Charles Geiger, head of the executive secretariat for the UN’s World Summit on the Information Society, commented, “In the next 10-15 years, I do not consider AI as a threat. As long as we have a democratic system and a free press, we may counterbalance the possible threats of AI.”

David Wilkins, instructor in computer science at the University of Oregon, responded, “AI for the very aged will provide mobility and advice, and enable coping with the complexities of nature. AI must be able to explain the basis for its decisions.”

David Zubrow, associate director of empirical research at the Carnegie Mellon Software Engineering Institute, said, “My hope is that people and society will continue to find good ways to use the technology that is being developed. Science, engineering and art will continue to evolve and progress. How the advances are used demands wisdom, leadership and social norms and values that respect and focus on making the world better for all. Education and health care will reach remote and underserved areas, for instance. The fear is that control is consolidated in the hands of a few who seek to exploit people, nature and technology for their own gain. I am hopeful that this will not happen.”

Anirban Sen, a lawyer and data privacy consultant based in New Delhi, India, wrote, “There will be significant changes to both the user interface and the AI working in the background. Machines will offer options and suggestions based on the past, but humans will always make the ultimate decisions on items such as the pros and cons of entering a contract, which dinner menu items haven’t been had in a while, or whether too much time is being spent at work. AI could also arrange for machines not to interrupt life for a specific time unless there’s an emergency.”

Michael Dyer, an emeritus professor of computer science at the University of California–Los Angeles, commented, “As long as GAI (general AI) is not achieved, specialized AI will eliminate tasks associated with jobs but not the jobs themselves. A trucker does a lot more than merely drive a truck. A bartender does a lot more than merely pour drinks. Society will still have to deal with the effects of smart technologies encroaching ever further into new parts of the labour market. A universal basic income (UBI) could mitigate increasing social instability. Later on, as general AI spreads, it will become an existential threat to humanity. My estimate is that this existential threat will not begin to arise until the second half of the 21st century. Unfortunately, by then humanity might have grown complacent, since specialized AI systems do not pose an existential threat.”

Divina Frau-Meigs, professor of media sociology at Sorbonne Nouvelle University, France, and UNESCO chair for sustainable digital development, responded, “The relationship between human-machine/AI collaboration can be perceived differently according to cultures and it will evolve accordingly. In my culture, France, we are very reticent to fuse the borders between human and nonhuman agents, and intelligent/artificial interactions. This will continue and provide diversity in the human/machine collaboration. In learning environments this may lead to fewer robots in the classrooms here than in other cultures, and with robots with very specialized and unique tasks that remove dangerous activities from the human teachers while allowing them to devote more time per student. A lot of ethical and pedagogical thinking is going to go on before the interaction will be accepted, and the sooner the ethics of AI are aligned with human rights tenets the better.”

Dan Ryan, information sociologist at Mills College and author of the Sociology of Information blog, responded, “Negative effects like those described in the question will not be an effect of AI per se but of the same interests and forces that produce negative effects associated with current technologies.”

Rik Farrow, editor of “;login:” for USENIX Association, wrote, “Humans do poorly when it comes to making decisions based on facts, rather than emotional issues. Humans get distracted easily, and I occasionally wake up enough to feel terror while driving a car. There are certainly things that AI can do better than humans, like driving cars, handling finances, even diagnosing illnesses. Expecting human doctors to know everything about the varieties of disease and humans is silly. Let computers do what they are good at.”

E. Ohlson, a respondent who provided no identifying details, commented, “There are of course significant challenges. However, I feel by 2030 AI will be capable of solving many routine-yet-resource-intensive tasks better, more consistently and with higher quality. This will allow humans the mental and physical bandwidth to work on higher-order issues.”

Alex Turner, a respondent who provided no identifying details, said, “By 2030, we will see greater prevalence of ‘narrow AI’ helping people with specific tasks (cleaning, busywork and other mundane attentional sinks). I expect disruptive negative impact, but later than 2030.”

Alf Rehn, a professor of innovation, design and management in the school of engineering at the University of Southern Denmark, commented, “Whilst there will no doubt be some pain along the way, by 2030 the development of AI will in all likelihood have lessened drudgery and enabled more efficient organizations (e.g., less waste and bureaucracy in public organizations), and even though the total number of jobs may have decreased, the overall benefits will still be positive.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, responded, “I hope we will structure AI to enhance our creativity, to boost our learning, to expand our relationships worldwide, to make us physically safer and to remove some drudgery.”

Arthur Bushkin, an IT pioneer who worked with the precursors to ARPANET and Verizon, wrote, “The principal issue will be society’s collective ability to understand, manage and respond to the implications and consequences of the technology.”

Charles Zheng, a researcher into machine learning and AI with the U.S. National Institute of Mental Health, commented, “In the year 2030, I expect AI systems will be more powerful than they currently are, but not yet at human level for most tasks. A patient checking into a hospital will be directed to the correct desk by a wheeled robot. The receptionist will be aided by software that listens to their conversation with the patient and automatically populates the information fields without the receptionist needing to type the information. Another program will cross-reference the database in the cloud to check for errors. The patient’s medical images would first be automatically labeled by a computer program before being sent to a radiologist. My hope is that AI algorithms advance significantly in their ability to understand natural language, and also in their ability to model humans and understand human values. My fear is that the benefits of AI are restricted to the rich and powerful without being accessible to the general public. To ensure the best future, politicians must be informed of the benefits and risks of AI and pass laws to regulate the industry and to encourage open AI research.”
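
Zheng’s check-in scenario is, in effect, a three-stage pipeline: transcribe the conversation, auto-populate the form fields, then cross-check against stored records. The Python sketch below illustrates only that shape; regular expressions stand in for the trained language model a real system would use, and every name, pattern and record is invented for the example.

```python
import re

# Hypothetical transcript of the receptionist-patient conversation.
TRANSCRIPT = "My name is Jane Doe, date of birth March 3, 1980, here to see Dr. Smith."

# Illustrative extraction rules; a deployed system would rely on a
# trained speech/language model, not hand-written patterns.
PATTERNS = {
    "name": r"[Mm]y name is ([A-Z][a-z]+ [A-Z][a-z]+)",
    "dob": r"date of birth ([A-Za-z]+ \d{1,2}, \d{4})",
    "physician": r"see (Dr\. [A-Z][a-z]+)",
}

# Stand-in for the cloud patient database used to cross-check for errors.
RECORDS = {"Jane Doe": {"dob": "March 3, 1980", "physician": "Dr. Smith"}}

def populate_fields(transcript):
    """Auto-populate intake fields from the conversation transcript."""
    fields = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, transcript)
        if match:
            fields[field] = match.group(1)
    return fields

def cross_check(fields):
    """Compare extracted fields against stored records; return discrepancies."""
    record = RECORDS.get(fields.get("name", ""), {})
    return {f: (fields[f], record[f])
            for f in ("dob", "physician")
            if f in fields and f in record and fields[f] != record[f]}

fields = populate_fields(TRANSCRIPT)
print(fields)               # the auto-populated intake form
print(cross_check(fields))  # {} when the transcript agrees with the record
```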

Adam Powell, senior fellow at the USC Annenberg Center on Communication Leadership and Policy, wrote, “Just as the internet quickly became an essential tool for education and business in the 1990s, AI will enhance the power of human/machine interactivity. We already see this happening well before 2020; by 2030 AI tools will be integrated into the broad range of personal, educational and professional life.”

Deana Rohlinger, a professor of sociology at Florida State University, responded, “AI will have both positive and negative consequences for society. I’ll use the workplace as an example. AI helps make production, distribution and customer service more efficient. It will also further reduce deskilled jobs. This will likely be a problem because social institutions (e.g., education system) will be slow to fill this gap.”

Clay Shirky, writer and consultant on the social and economic effects of internet technologies and vice president at New York University, said, “All previous forms of labor-saving devices, from the lever to the computer, have correlated with increased health and lifespan in the places that have adopted them.”

Anthony Picciano, a professor of education in the City University of New York’s Interactive Pedagogy and Technology program, responded, “I am concerned that profit motives will lead some companies and individuals to develop AI applications that will threaten, not necessarily improve, our way of life. In the next 10 years we will see evolutionary progress in the development of artificial intelligence. After 2030, we will likely see revolutionary developments that will have significant ramifications on many aspects of human endeavor. We will need to develop checks on artificial intelligence.”

Dan Robitzski, a reporter covering science and technology for Futurism.com, commented, “Many of the fears surrounding AI advances are related to automation or a far-fetched robotic takeover. Surely, automation needs to be enacted such that workers’ needs are considered, but AI systems are so prevalent nowadays that it’s easy to forget how they can improve things like medical research.”

Clark Quinn, executive director at Quinnovation, wrote, “It’s up to us, but in general, things get better. 2030 is a *long* time off to predict, but hopefully we’ll seize the vision of Intelligence Augmentation (IA).”

Jamais Cascio, research fellow at the Institute for the Future, wrote, “Although I do believe that in 2030 AI will have made our lives better, I suspect that popular media of the time will justifiably highlight the large-scale problems: displaced workers, embedded bias and human systems being too deferential to machine systems. But AI is more than robot soldiers, autonomous cars or digital assistants with quirky ‘personalities.’ Most of the AI we will encounter in 2030 will be in-the-walls, behind-the-scenes systems built to adapt workspaces, living spaces and the urban environment to better suit our needs. Medical AI will keep track of medication and alert us to early signs of health problems; environmental AI will monitor air quality, heat index and other indicators relevant to our day’s tasks; our visual and audio surroundings may be altered or filtered to improve our moods, better our focus or otherwise alter our subconscious perceptions of the world. Most of this AI will be functionally invisible to us, as long as it’s working properly. The explicit human-machine interface will be with a supervisor system that coordinates all of the sub-AI – and undoubtedly there will be a lively business in creating supervisor systems with quirky personalities.”

Amali De Silva-Mitchell, futurist, responded, “Most individuals will be unaware of the applications working to provide service to their lives. The risk is that people get boxed in. Slow, individualized response times to data corrections and clarifications cause people to feel insecure. Poor application of best practice is an issue of ethics. Generally, there is a lot to gain from uses of AI – which may be slower to develop than anticipated due to restrictions on data communications between applications. We need to ensure that everyone, including the disabled and the elderly, has access to good human-computer interfaces.”

Cliff Lynch, director of the Coalition for Networked Information, responded, “My hope is that humans augmented by machine learning will improve the performance of humans alone. My fear is that costs will be cut by replacing humans by machine learning with only modestly worse results than humans alone. I don’t expect ‘general’ AI by 2030.”

Edson Prestes, a professor and director of robotics at the Federal University of Rio Grande do Sul, Brazil, responded, “I always lean toward seeing the brightest side of AI and technology. I understand the fear around the domain. However, we must understand that all domains (technological or not) have two sides: a good one and a bad one. To avoid the bad one we need to create and promote a culture of AI/robotics for good; we need to stimulate people to empathize with others; we need to think about potential issues, even if they have a small probability of happening; we need to be futurists, foreseeing potential negative events and how to circumvent them before they happen; and we need to create regulations/laws (at national and international levels) to handle globally harmful situations for humans, other living beings and the environment. Applying empathy, we should seriously think about ourselves and others – whether the technology will be useful for us and others and whether it will cause any harm. We cannot develop solutions without considering people and the ecosystem as the central component of development. If we do, the pervasiveness of AI/robotics in the future will diminish any negative impact and create a huge synergy between people and the environment, improving people’s daily lives in all domains while achieving environmental sustainability. Fortunately, the global community, e.g., the IEEE Global Initiative, is being very proactive in this, working to mitigate any potential issues or difficulties that might happen in the future.”

Craig Burdett, a respondent who provided no identifying details, wrote, “The question asked ‘will most people’ and ‘most of the time.’ I anticipate ‘most people’ applies to relatively affluent consumers with access to more advanced AI technologies. And, while most AI will probably be a positive benefit, the possible darker side of AI could lead to a loss of agency for some. For example, in a health care setting an increasing use of AI could allow wealthier patients access to significantly more advanced diagnosis agents. When coupled with a supportive care team, these patients could receive better treatment and a greater range of treatment options. Conversely, less-affluent patients may be relegated to automated diagnoses and treatment plans with little opportunity for interaction to explore alternative treatments. AI could, effectively, manage long-term health care costs by offering lesser treatment (and sub-optimal recovery rates) to individuals perceived to have a lower status. Consider two patients with diabetes. One patient, upon diagnosis, modifies their eating and exercise patterns (borne out by embedded diagnostic tools) and would benefit from more advanced treatment. The second patient fails to modify their behaviour, resulting in substantial ongoing treatment that could be avoided by simple lifestyle choices. An AI could subjectively evaluate that the patient has little interest in their own health and withhold more expensive treatment options, leading to a shorter lifespan and an overall cost saving.”

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN GNSO Council, wrote, “In general, machine learning and related technologies have the capacity to greatly reduce human error in many areas where it is currently very problematic, and to make good, appropriately tailored advice available to people to whom it is currently unavailable, in literally almost every field of human endeavour. The greatest fear is that the social disruption due to changing employment patterns will be handled poorly and lead to widespread social issues. This is a social and policy issue not intrinsic to the technology itself.”

Alex Halavais, an associate professor of social technologies at Arizona State University, wrote, “At the individual level, there will be few aspects of our lives that are not influenced by some form of AI. Some elements of this, like conversational agents, will be directly observable in the interfaces we use. Most of this will continue to occur just below the surface of our technology use. On the other hand, AI is likely to rapidly displace many workers over the next 10 years, and so there will be some potentially significant negative effects at the social and economic level in the short run.”

Alex Simonelis, computer science faculty member, Dawson College, Montreal, said, “People will be smart enough, at least in the developed democratic world, to use AI well.”

Barry Chudakov, founder and principal of Sertain Research and author of “Metalifestream,” commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or, using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and timeframes (which, in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. In the learning environment, AI has the potential to finally demolish the retain-to-know (and regurgitate) learning model. Knowing is no longer retaining – machine intelligence does that; it is making significant connections, so connect-and-assimilate becomes the new learning model. My greatest hope is that human-machine/AI collaboration brings about a moral and ethical renaissance – that we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

David J. Krieger, co-director of the Institute for Communication & Leadership in Lucerne, Switzerland, wrote, “The affordances of digital technologies bind people into information networks such that the network becomes the actor and intelligence as well as agency are qualities of the network as a whole and not any individual actors, whether human or non-human. Networks will have access to much more information than do any present-day actors and therefore be able to navigate complex environments, e.g., self-driving cars, personal assistants, smart cities. Typically, we will consult and cooperate with networks in all areas, but the price will be that we have no such thing as privacy. Privacy is indeed dead, but in the place of personal privacy management there will be network publicy governance [‘publicy’ is the opposite of privacy; governance in regard to rights of publicy in an age in which privacy is no longer as available or available at all]. To ensure the use of these technologies for good instead of evil it will be necessary to dismantle and replace current divides between government and governed, workers and capitalists as well as to establish a working global governance.”

Ethem Alpaydin, a professor of computer engineering at Bogazici University, Istanbul, responded, “As with other technologies, I imagine AI will favor the developed countries that actually develop these technologies. AI will help find cures for various diseases and overall improve the living conditions in various ways. For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest. The aim of AI in such countries should be to add skill to the labor force rather than supplant them. For example, automatic real-time translation systems (e.g., Google’s Babelfish) would allow people who don’t speak a foreign language to find work in the tourism industry.”

Emanuele Torti, a research professor in the computer science department at the University of Pavia, Italy, responded, “AI will positively improve the quality of our lives. In particular, vocal assistants such as Bixby, Cortana and Siri will enter our lives as personal assistants. We will interact with AI by voice, with the AI coordinating Internet of Things devices in order to carry out different tasks.”
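
Torti’s scenario amounts to a voice front end dispatching recognized intents to a set of networked devices. A minimal Python sketch of that coordination layer follows; the routine names, devices and actions are purely hypothetical stand-ins for what a real assistant and IoT platform would provide.

```python
# Hypothetical mapping from recognized voice commands ("intents") to
# coordinated device actions; names and actions are invented examples.
INTENTS = {
    "good night": [("lights", "off"), ("thermostat", "eco"), ("locks", "lock")],
    "movie time": [("lights", "dim"), ("tv", "on")],
}

def handle_utterance(utterance):
    """Dispatch one recognized voice command to its device actions."""
    actions = INTENTS.get(utterance.lower().strip())
    if actions is None:
        return ["Sorry, I don't know that routine."]
    return [f"{device} -> {command}" for device, command in actions]

print(handle_utterance("Good night"))
# ['lights -> off', 'thermostat -> eco', 'locks -> lock']
```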

Julian Togelius, researcher working on machine learning and AI at New York University, commented, “Throughout the history of human technology, our culture has co-evolved with technology as we invent it, and that has given us new opportunities. I don’t see how it would be any different with the various technologies we call ‘AI.’”

Gene Crick, director of the Metropolitan Austin Interactive Network and longtime community telecommunications expert, wrote, “To predict AI will benefit ‘most’ people is more hopeful than certain. Health care will improve: objectives are simpler; economic and social interests largely compatible. Education should improve (where encouraged) with easier access, enhanced tools and vastly expanded resources. AI can benefit lives at work and home – if competing agendas can be balanced. Key support for this important goal could be technology professionals’ acceptance and commitment regarding social and ethical responsibilities of our work.”

Hank Dearden, executive director at ForestPlanet Inc., said, “I’m hoping that the AI advancements will be applied to health care, including better monitoring and prevention, as well as allowing for rapid-response mechanisms.”

Kenneth Cukier, author and senior editor at The Economist, commented, “AI will be making more decisions in life, and some people will be uneasy with that. But these are decisions that are more effectively done by machines, such as assessing insurance risk, the propensity to repay a loan or the likelihood of surviving a disease. A good example is health care: Algorithms, not doctors, will be diagnosing many diseases, even if human doctors remain ‘in the loop.’ The benefit is that health care can reach down to populations that are today underserved: the poor and rural populations worldwide.”

Kyung Sin Park, internet law expert and co-founder of Open Net Korea, responded, “Like all technologies, AI is a two-edged sword. It can help humans control the greatest threat (humans themselves) through rationality, or it can accelerate economic polarization of people, depending on their access to the technology. AI consists of software and training data. Software is already being made available on an open source basis. What will decide AI’s contribution to humanity is whether the data used for training AI will be equitably distributed. Data-protection laws and the open data movement will hopefully do the job of making more data available equally to all people. I imagine a future where people can access AI-driven diagnosis of symptoms, which will drastically reduce health care costs for all.”

Lou Gross, professor of mathematical ecology and expert in grid computing, spatial optimization and modeling of ecological systems at the University of Tennessee – Knoxville, said, “I see AI as assisting in individualized instruction and training in ways that are currently unavailable or too expensive. There are hosts of school systems around the world that have some technology but are using it in very constrained ways. AI use will provide better adaptive learning and help achieve a teacher’s goal of personalizing education based on each student’s progress.”

Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “I could substitute ‘book’ for ‘AI’ and the year 1485 (or maybe 1550) for 2030 in your question and it’d hold fairly true. Some thought it would be good, some bad; both end up right. We will figure this out. We always have. Sure, after the book there were wars and other profound disturbances. But in the end, humans figure out how to exploit technologies to their advantage and control them for their safety. I’d call that a law of society. The same will be true of AI. Some will misuse it, of course, and that is the time to identify limits to place on its use – not speculatively before. Many more will use it to find economic, societal, educational and cultural benefit and we need to give them the freedom to do so. What worries me most is worry itself: an emerging moral panic that will cut off the benefits of this technology for fear of what could be done with it. What I fear most is an effort to control not just technology and data but knowledge itself, prescribing what information can be used for before we know what those uses could be.”

Kate Eddens, research scientist at the Indiana University Network Science Institute, responded, “There is an opportunity for AI to enhance human ability to gain critical information in decision-making, particularly in the world of health care. There are so many moving parts and components to understanding health care needs and deciding how to proceed in treatment and prevention. With AI, we can program algorithms to help refine those decision-making processes, but only when we train the AI tools on human thinking, a tremendous amount of real data and actual circumstances and experiences. There are some contexts in which human bias and emotion can be detrimental to decision-making. For example, breast cancer is over-diagnosed and over-treated. While mammography guidelines have changed to try to reflect this reality, strong human emotion powered by anecdotal experience leaves some practitioners unwilling to change their recommendations based on evidence, and advocacy groups reluctant to change their stance based on public outcry. Perhaps there is an opportunity for AI to calculate a more specific risk for each individual person, allowing for a tailored experience amid the broader guidelines. If screening guidelines change to ‘recommended based on individual risk,’ it lessens the burden on both the care provider and the individual. People still have to make their own decisions, but they may be able to do so with more information and a greater understanding of their own risk and reward. This is such a low-tech and simple example of AI, but one in which AI can – importantly – supplement human decision-making without replacing it.”
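
Eddens’s idea of calculating “a more specific risk for each individual person” is commonly modeled with something like a logistic risk score. Below is a toy Python sketch of that approach; the features, coefficients and screening threshold are invented for illustration and have no clinical validity.

```python
import math

# Illustrative coefficients for a logistic risk model; these numbers
# are made up for the sketch and must not be read as clinical values.
COEFFS = {"age": 0.04, "family_history": 1.1, "prior_biopsies": 0.6}
INTERCEPT = -5.0

def individual_risk(features):
    """Logistic model: risk = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(COEFFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def screening_recommendation(features, threshold=0.05):
    """Turn an individualized risk into a tailored recommendation.
    The 5% threshold is an assumption standing in for guideline input;
    the person still makes the final decision, as Eddens emphasizes."""
    risk = individual_risk(features)
    if risk >= threshold:
        return risk, "discuss earlier/more frequent screening"
    return risk, "routine screening per guidelines"

patient = {"age": 52, "family_history": 1, "prior_biopsies": 0}
risk, action = screening_recommendation(patient)
print(f"estimated risk: {risk:.1%} -> {action}")
```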

John Willinsky, professor and director of the Public Knowledge Project at Stanford Graduate School of Education, said, “Uses of AI that reduce human autonomy and freedom will need to be carefully weighed against the gains in other qualities of human life (e.g., driverless cars that improve traffic and increase safety). By 2030, deliberations over such matters will be critical to the functioning of ‘human-machine/AI collaboration.’ My hope, however, is that these deliberations are not framed as collaborations between what is human and what is AI, but will be seen as the human use of yet another technology, with the wisdom of such use open to ongoing human consideration and intervention intent on advancing that sense of what is most humane about us.”

Kate Carruthers, a chief data and analytics officer based in Australia, predicted, “Humans will increasingly interact with AI on a constant basis, and it will become hard to know where the boundaries are between the two. Just as kids now see their mobile phones as an extension of themselves, so too will humans come to see AI. Further, I assume that tracking and monitoring of people will be an accepted part of life and that there will be stronger regulation of privacy and data security. Every facet of life will be circumscribed by AI; it will be part of the fabric of our lives, and asking questions of AI will seem normal. Doctor Google will be replaced by Doctor AI. I fear that the cause of democracy and freedom will be lost by then, so that it might be a darker future than I’ve outlined. To avoid this darker future, one thing we need to do is ensure the development of ethical standards for AI and deal with algorithmic bias. We need to build ethics into our development processes.”

Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems” and research assistant professor at the University of the Virgin Islands, said, “Many of our day-to-day small decisions will be automated and will require minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will be sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of digital network infrastructure ownership and control.”

Frank Kaufmann, president of Filial Projects and founder and director of the Values in Knowledge Foundation, said, “Advancement in technology without exception benefits human life, at all times in all places. It also exacerbates weaknesses, problems, unresolved commitments to personal and social dysfunction and dystopia. This question, regardless of how it is worded, can be simplified to say: ‘Are you an optimist or a pessimist?’ and ‘Do you believe humanity is progressing, or declining?’ I believe that getting better at things is always better, regardless of the version of unresolved, persistent problems that manifest in attendant ways.”

Garland McCoy, founder and chief development officer of the Technology Education Institute, wrote, “I selected the positive outlook box but with foreboding. I am an optimist at heart and so believe that, given a decade-plus, the horror that is unfolding before our eyes will somehow be understood and resolved. That said, if the suicide epidemic we are witnessing continues to build and women continue to opt out of motherhood all bets are off. I do think technology is at the core of both the pathology and choice.”

Leonard Kleinrock, Internet Hall of Fame member, co-director of the first host-to-host online connection and professor of computer science at the University of California – Los Angeles, said, “As AI and machine learning improve, we will see highly customized interactions between humans and their health care needs. This mass customization will enable each human to have her medical history, DNA profile, drug allergies, genetic makeup, etc., always available to any caregiver or medical professional they engage with, and this will be readily accessible to the individual as well. Care will be tailored to each patient’s specific needs, and the very latest advances will be provided rapidly after they are established. The rapid provision of the best medical treatment will provide great benefits. In hospital settings, such customized information will dramatically reduce the occurrence of medical injuries and deaths due to medical errors. My hope and expectation is that intelligent agents will be able to assess the likely risks and benefits of proposed treatments and procedures far better than is done now by human evaluators, as humans, even experts, are typically poor decision-makers in the face of uncertainty. But to bring this about, there will need to be carefully conducted tests and experimentation to assess the quality of the outcomes of AI-based decision-making in this field. However, as with any ‘optimized’ system, one must continually be aware of the fragility of optimized systems when they are applied beyond the confines of their range of applicability.”

Joaquin Vanschoren, assistant professor of machine learning at Eindhoven University of Technology, Netherlands, responded, “Humans are very adaptable. We fulfill our goals using the means that we have. Without those means, we struggle to meet our goals or we dream less big. AI, like many technologies, enables us to solve harder and different problems. Humans will interact with AI technologies to meet new goals. The challenge is to develop AI technologies that align with the best of our own goals. My hope is that the goals we set for ourselves benefit humanity as a whole.”

Larry Lannom, internet pioneer and vice president at the Corporation for National Research Initiatives (CNRI), an expert in digital object architecture, said, “I am optimistic by nature and so I am hopeful that networked human-machine interaction will improve the general quality of life, e.g., greatly improved medical care available to all at low or zero cost. My fears revolve around the social aspects – roughly speaking, will all of the benefits of more powerful artificial intelligence accrue to the human race as a whole or simply to the thin layer at the top of the social hierarchy that owns the new advanced technologies?”

Henry E. Brady, dean, Goldman School of Public Policy, University of California – Berkeley, wrote, “It is already evident that the internet and AI can be extraordinarily helpful in many different ways, including facilitating searches on the internet, finding fraud in consumer data, formulating better medical diagnoses and on and on. At the same time, we’ve found that AI can perpetuate stereotypes or biases in past decision-making (to a large degree because ‘training sets’ for supervised learning contain those kinds of biases), and it can be fooled by those out to manipulate it to, for example, make certain news stories rank at the top of an algorithm even though they are not accurate or true. AI can also replace people in jobs that require sophisticated and accurate pattern matching – driving, diagnoses based upon medical imaging, proofreading and other areas. I am optimistic about the future because I believe that policy responses can be developed that will reduce biases and find a way to accommodate AI and robotics with human lives. There is also the fact that in the past technological change has mostly led to new kinds of jobs rather than the net elimination of jobs. Furthermore, there may be limits to what AI can do: it is very good at pattern matching, but human intelligence goes far beyond that, and it is not clear that computers will be able to compete with humans outside that domain. It also seems clear that even the best algorithms will require constant human attention to update, check and revise them.”

John Lazzaro, retired professor of electrical engineering and computer science, University of California – Berkeley, commented, “When I visit my primary care physician today, she spends a fair amount of time typing into an EMS application as she’s talking to me. In this sense, the computer has already arrived in the clinic. An AI system that frees her from this clerical task – that can listen and watch and distill the doctor-patient interaction into actionable data – would be an improvement. A more-advanced AI system would be able to form a ‘second opinion’ based on this data as the appointment unfolds, discreetly advising the doctor via a wearable. The end goal is a reduction in the number of ‘false starts’ in patient diagnosis. If you’ve read Lisa Sanders’s columns in The New York Times, where she traces the arc of difficult diagnoses, you understand the real clinical problem that this system addresses.”

Gary Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, wrote, “The tremendous potential for AI to be used to engage and adapt information content and computer services to individual users can make computing increasingly helpful, engaging and relevant. However, to achieve these outcomes, AI needs to be programmed with the user in mind. For example, AI services should be user-driven, adaptive to individual users, easy to use, easy to understand and easy for users to control. These AI systems need to be programmed to adapt to individual user requests, learning about user needs and preferences.”
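
As one way to picture Kreps’s criteria – user-driven, adaptive, easy to understand and easy to control – consider this minimal sketch of an assistant whose learned profile the user can inspect and delete at will (the class and method names are hypothetical):

```python
from collections import Counter

class AdaptiveAssistant:
    """A toy user-driven assistant illustrating Kreps's criteria."""

    def __init__(self):
        self.preferences = Counter()       # learned only from user requests

    def handle(self, request: str) -> None:
        # Adaptive: each request nudges the learned preference profile.
        for topic in request.lower().split():
            self.preferences[topic] += 1

    def explain(self) -> dict:
        # Easy to understand: the user can see exactly what was learned.
        return dict(self.preferences.most_common(3))

    def forget(self, topic: str) -> None:
        # Easy to control: the user can delete anything that was learned.
        self.preferences.pop(topic, None)

bot = AdaptiveAssistant()
bot.handle("jazz playlists for studying")
bot.handle("jazz concerts nearby")
print(bot.explain())   # {'jazz': 2, ...}
bot.forget("jazz")     # the user revokes a learned preference
print(bot.explain())
```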

Frank Feather, futurist and consultant with StratEDGY, commented, “AI by 2030 … This is only about a decade away, so despite AI’s continuing evolution, it will not have major widespread effects by 2030. However, with care in implementation, all effects should be positive in social and economic impact. That said, the changes will represent a significant step toward what I call a DigiTransHuman Future, where the utility of humans will increasingly be diminished as this century progresses, to the extent that humans may become irrelevant or extinct, replaced by DigiTransHumans and their technologies/robots that will appear and behave just like today’s humans, except at very advanced stages of humanoid development. This is not going to be a so-called ‘singularity’ and there is nothing ‘artificial’ about the DigiTransHuman Intelligence. It is part of designed evolution of the species.”

James Gannon, global head of eCompliance for emerging technology, cloud and cybersecurity at Novartis, responded, “AI will increase the speed of development and the availability of drugs and therapies for orphan indications. AI will assist in general lifestyle and health care management for the average person.”

Gary Arlen, president of Arlen Communications, wrote, “After the initial frenzy about specific AI applications (such as autonomous vehicles, workplace robotics, transaction processing, health diagnoses and entertainment selections) recedes, new applications will develop – probably in areas barely being considered today. As with many new technologies, the benefits will not apply equally, potentially expanding the haves-and-have-nots dichotomy. In addition, as AI delves into new fields – including creative work such as design and music/art composition – we may see new legal challenges about illegal appropriation of intellectual property (via machine learning). However, the new legal tasks from such litigation may not need a conventional lawyer but could be handled by AI itself. Professional health care AI poses another type of dichotomy. For patients, AI could be a bonanza, identifying ailments, often in early stages (based on early symptoms), and recommending treatments. At the same time, such automated tasks could impact employment for medical professionals. And again, there are legal challenges to be determined, such as liability in the case of a wrong action by the AI. Overall, there is no such thing as ‘most people,’ but many individuals and groups – especially in professional situations – WILL live better lives thanks to AI, albeit with some severe adjustment pains.”

Robert K. Logan, chief scientist at the Strategic Innovation Lab (sLab) at OCAD University and professor emeritus of physics at the University of Toronto, Canada, said, “AI is great for doing tasks that are repetitive – the only danger is when AI researchers believe that a computer can think. They do not think; they only compute. The idea of the Singularity is an example of the over-extension of AI. Computers will never achieve an equivalency to human intelligence. There is no such thing as AW (artificial wisdom). AI as a tool to enhance human intelligence makes sense, but AI to replace human intelligence makes no sense and therefore is nonsense.”

Glenn Grossman, principal consultant for Fair Isaac Corporation (FICO), wrote, “AI could help us handle the tasks we encounter more effectively. It will require humans to focus on more-advanced thinking vs. the routine. If AI could help medical staff detect certain conditions and do the first phase of evaluation, our doctors could actually spend more time on real medical care vs. routines.”

Julian Jones, a respondent who provided no identifying details, said, “I expect that between today and 2030 AI applications will be relatively simple, replicating the responses of ‘expert’ users. Beyond 2030, algorithms are likely to become more complex and may not be well understood. They may also reflect their designers’ prejudices. This may harm society.”

K. Stout, a respondent who provided no identifying details, said, “As with most innovations, I see value and possible difficulties. Scenario: health care. When I visit the doctor, his exam is supplemented by AI, which reviews my health information, surfaces similar issues I’ve encountered and notes which treatments have been effective.”

Lee McKnight, associate professor, School of Information Studies, Syracuse University, commented, “In 2030, we can expect increasing automation – which is the un-fancy way to say ‘artificial intelligence’ – will be even more omnipresent in people’s daily lives, and at the core of artificially intelligent enterprises. This will generally be for individually and socially beneficial reasons we may reasonably expect. But I am afraid the only scenario I can paint for our 2030 future with high confidence is this: There will be good, bad and ugly outcomes from human-machine interaction in artificially intelligent systems, services and enterprises. In addition to good human-machine/AI collaboration, there will be bad and ugly artificially intelligent machines. Poorly designed artificially intelligent services and enterprises will have unintended societal consequences, hopefully not catastrophic, but sure to damage people and infrastructure. Even more regrettably, defending ourselves against evil – or to be polite, bad AI systems turned ugly by humans, or other machines – must become a priority for societies well before 2030, given the clear and present danger. How can I be sure? What are bots and malware doing every day, today? Is there a reason to think ‘evil-doers’ will be less motivated in the future? No. So my fear is that the hopefully sunny future of AI, which in aggregate we may assume will be a net positive for all of us, will be marred by – many – unfortunate events.”

Greg Lloyd, president and co-founder at Traction Software, responded, “By 2030 AIs will augment access and use of all personal and networked resources as highly skilled and trusted agents for almost every person – human or corporate. These agents will be bound to act in accordance with new laws and regulations that are fundamental elements of their construction much like Isaac Asimov’s Three Laws of Robotics, but with finer-grain ‘certifications’ for classes of activities that bind their behavior and responsibility for practices much like codes for medical, legal, accounting and engineering practice. Certified agents will be granted access to personal or corporate resources, and within those bounds will be able to converse, take direction, give advice and act like trusted servants, advisers or attorneys. Although these agents will ‘feel’ like intelligent and helpful beings, they will not have any true independent will or consciousness, and must not pretend to be human beings or act contrary to the laws and regulations that bind their behavior. Think Ariel and Prospero.”

José Estabil, director of entrepreneurship and innovation at MIT’s Skoltech Initiative, commented, “To paraphrase Steve Jobs, the more we free ourselves from routine tasks, the closer we get to freeing human creativity.”

Jan Schaffer, founder and executive director of J-Lab – The Institute for Interactive Journalism, responded, “AI/human-machine advances will help many people, especially seniors and the disabled, navigate their daily lives more independently. It will enable a lot more digital medicine. But the tradeoff will be a loss of privacy and exposure to hacking for evil ends.”

Jean-Daniel Fekete, researcher in information visualization, visual analytics and human-computer interaction at INRIA, France, said, “Humans and machines will integrate further, improving health through monitoring and easing daily life via machine control. Personal data will then become even more revealing and intrusive and should be kept under personal control.”

Jennifer Groff, co-founder of the Center for Curriculum Redesign, an international NGO dedicated to redesigning education for the 21st century, wrote, “The question was black or white, but many conditionals come into play that will ultimately determine what happens – the future is not just a straight-line prediction. The impact on learning and learning environments has the potential to be one of the most positive future outcomes. Learning is largely intangible and invisible, making it a ‘black box’ – and our tools to capture and support learning have to this point been archaic. Think of large-scale assessment. Learners NEED tools that help them understand where they are in a learning pathway, how they learn best, what they need next and so on. We’re only just beginning to use technology to better answer these questions. AI has the potential to help us better understand learning, gain insights into learners at scale and ultimately build better learning tools and systems for them. But as a large social system, it is also prey to the complications of poor public policy that ultimately warps and diminishes AI’s potential positive impact.”

Fred Baker, an independent networking technologies consultant, longtime leader in the Internet Engineering Task Force and engineering fellow with Cisco, commented, “The impact of AI and of the internet has been a net positive, although there have been difficulties. In my opinion, developments have not been ‘out of control,’ in the sense that the creation of Terminator’s Skynet or the HAL 9000 computer might depict them. Rather, we have learned to automate processes in which neural networks have been able to follow data to its conclusion (which we call ‘big data’) unaided and uncontaminated by human intuition, and sometimes the results have surprised us. These remain, and in my opinion will remain, to be interpreted by human beings and used for our purposes. If I see something dark in that, I will note that the role of the 2004 film ‘I, Robot’s’ VIKI is more commonly played by a human user of the information and insight that is developed. Spam and malware are big issues on the internet, but they are created by and serve forces that, however dark, are very human. For example, https://dyn.com/blog/shutting-down-the-bgp-hijack-factory/ details some efforts to shut down a BGP prefix hijacker, someone who abuses the internet to inject large volumes of spam and malware. The target network and its owners are a dark force, but they are people subverting the network to serve their ends.”

Liz Rykert, president at Meta Strategies, a consultancy that works with technology and complex organizational change, responded, “The key for networked AI will be the ability to diffuse equitable responses to basic care and data collection. If bias remains in the programming it will be a big problem. I believe we will be able to develop systems that will learn from and reflect a much broader and more diverse population than the systems we have now.”

Guy Levi, chief innovation officer for the Center for Educational Technology, based in Israel, wrote, “In the field of education, AI will promote personalization, which almost by definition promotes motivation. The ability to move learning forward all the time via a personal AI assistant, which opens learning to new paths, is a game changer. The AI assistants will also communicate with one another and will orchestrate teamwork and collaboration. The AI assistants will also be able to manage diverse methods of learning, such as productive failure, teach-back and other innovative pedagogies.”

Hassaan Idrees, an electrical engineer and Fulbright Scholar active in creating energy systems for global good, commented, “I believe human-machine interaction will be more utilitarian and less fanciful than science fiction puts it. People will not need to see their physicians in person, their automated doctors making this irrelevant. Similarly, routine workplace activities like data processing and financial number-crunching will be performed by AI. Humans with higher levels of intellect can survive this age; those on the lower end of the spectrum of mental acumen would be rendered unnecessary.”

Kenneth Grady, futurist, founding author of The Algorithmic Society blog and adjunct and advisor at the Michigan State University College of Law, responded, “In the next dozen years, AI will still be moving through a phase where it will augment what humans can do. It will help us sift through, organize and even evaluate the mountains of data we create each day. For example, doctors today still work with siloed data. Each patient’s vital signs, medicines, dosage rates, test results and side effects remain trapped in isolated systems. Doctors must evaluate this data without the benefit of knowing how it compares to the thousands of other patients around the country (or world) with similar problems. They struggle to turn the data into effective treatments by reading research articles and mentally comparing them to each patient’s data. As it evolves, AI will improve the process. Instead of episodic studies, doctors will have near-real-time access to information showing the effects of treatment regimes. Benefits and risks of drug interactions will be identified faster. Novel treatments will become evident more quickly. Doctors will still manage the last mile, interpreting the analysis generated through AI. This human-in-the-loop approach will remain critical during this phase. As powerful as AI will become, it still will not match humans on understanding how to integrate treatment with values. When will a family sacrifice effectiveness of treatment to prolong quality of life? When two life-threatening illnesses compete, which will the patient want treated first? This will be an important learning phase, as humans understand the limits of AI.”

Ashok Goel, director of the Human-Centered Computing Ph.D. Program at Georgia Tech, wrote, “Human-AI interaction will be multimodal: We will directly converse with AIs, for example.  However, much of the impact of AI will come in enhancing human-human interaction across both space (we will be networked with others) and time (we will have access to all our previously acquired knowledge). This will aid, augment and amplify individual and collective human intelligence in unprecedented and powerful ways.”

Katja Grace, contributor to the AI Impacts research project and a research associate with the Machine Intelligence Research Institute, said, “There is a substantial chance that AI will leave everyone worse off, perhaps radically so. The chance is less than 50%, so I voted for ‘better off,’ but the downside risk is so large that, in expectation, the world might be worse off because of AI.”

Gianluca Demartini, a senior lecturer in data science at the University of Queensland, Australia, wrote, “AI will support tasks that are currently time-consuming for humans like scheduling meetings, shopping, cooking, driving.”

Gabor Melli, senior director of engineering for AI and machine learning for Sony PlayStation, responded, “Barring the very unlikely event of a nuclear war, I believe that in 12 years, by 2030, our world will be significantly better, in part because of AI technologies. I base this prediction in large part on our ongoing progress toward a better quality of life for most. My hope is that by 2030 most of humanity will have ready access to health care and education through digital agents.”

Timothy Leffel, research scientist, National Opinion Research Center at the University of Chicago, said, “Formulaic transactions and interactions are particularly ripe for automation. This can be good in cases where human error can cause problems, e.g., for well-understood diagnostic medical testing. But it will inevitably lead to lower-wage job losses, e.g., in the service industry.”

John Verdon, retired futurist and consultant, wrote, “Marshall McLuhan noted that, ‘Technology is the most human part of us.’ Once humans invented the technologies of language and culture, we became truly technology-embodied beings. Humans are also profoundly social beings – ever more social as we have evolved to survive through collective efforts. The complexity of our challenges is the motivation for our continued evolution.”

Charlie Firestone, communications and society program executive director and vice president at The Aspen Institute, commented, “I remain optimistic that AI will be a tool that humans will use, far more widely than today, to enhance quality of life in areas such as medical remedies, education and the environment. For example, AI will help us conserve energy in homes and in transportation by identifying the exact times and temperatures we need and identifying the cheapest, most efficient sources of energy. There certainly are dire scenarios, particularly in the use of AI for surveillance, a likely occurrence by 2030. I am hopeful that AI and other technologies will identify new areas of employment as they eliminate many jobs.”

Jonathan Kolber, futurist, wrote, “My fear is that, by generating AIs that can learn new tasks faster and more reliably than people can, the future economy will offer only evanescent opportunities for most people. My hope is that we will begin implementing a sustainable and viable UBI, and in particular Michael Haines’ MUBI proposal. (To my knowledge, it is the only such proposal that is sustainable and can be implemented in any country at any time.) Here is the Haines proposal: https://medium.com/@m.haines_81949/a-universal-basic-income-directly-solves-one-problem-only-bc8a212f3d98. Here is my critique of alternatives: https://ieet.org/index.php/IEET2/more/Kolber20160514. Given that people will no longer need to depend on their competitive earning power in 2030, AI will empower a far better world. If, however, we fail to implement MUBI or something equally effective, vast multitudes will become unemployed and unemployable, without means to support themselves. That is a recipe for societal disaster.”

James Scofield O’Rourke, a professor of management at the University of Notre Dame specializing in reputation management, commented, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Fernando Barrio, director of the law program at the Universidad Nacional de Rio Negro, Argentina, commented, “The interaction between humans and networked AI could lead to a better future for a big percentage of the population. In order to do so, efforts need to be directed not only at increasing AI development and capabilities but also at positive policies to increase the availability and inclusiveness of those technologies. The challenge is not technical; it is socio-political.”

Jen Myronuk, a respondent who provided no identifying details, said, “The optimist’s view includes establishing and implementing a new type of ISO standard – ‘encoded human rights’ – as a functional data set alongside exponential and advancing technologies. Global human rights and human-machine/AI technology can and must scale together. If applied as an extension of the human experience, human-machine/AI collaboration will revolutionize our understanding of the world around us.”

Laurie Orlov, principal analyst at Aging in Place Technology Watch, wrote, “Voice-enabled technologies that allow people to speak to tech (and to others through tech) have been a user-interface breakthrough. AI capabilities will learn patterns and help predict issues, warn of trouble and act as a reassuring connection for those living alone.”

Sumandra Majee, an architect at F5 Networks Inc., said, “AI, deep learning, etc., will become more a part of daily life in advanced countries. This will potentially widen the gap between technology-savvy, economically well-to-do folks and those with limited access to technology. However, I am hopeful that in the field of health care, especially when it comes to diagnosis, AI will significantly augment the field, allowing doctors to do a far better job. Many of the routine aspects of checkups can be done via technology. There is no reason an expert human has to be involved in basic A/B testing to reach a conclusion. Machines can handle those tasks, and human doctors should do only the critical parts. I do see AI playing a negative role in education, where students may not actually do the hard work of learning through experience. It might actually make the overall population dumber.”

Marek Havrda, director at NEOPAS and strategic adviser for the GoodAI project, a private R&D company focusing on the development of artificial general intelligence and AI applications based in Prague, Czech Republic, said, “The development and implementation of artificial intelligence has brought about questions of the impact it will have on employment. Machines are beginning to fill jobs that have been traditionally reserved for humans, such as driving a car or prescribing medical treatment. How these trends may unfold is a crucial question. We may expect the emergence of ‘super-labour,’ labour defined by the super-high added value of human activity due to augmentation by AI. Apart from the ability to deploy AI, super-labour will be characterised by creativity and the ability to co-direct and supervise safe exploration of business opportunities, together with perseverance in attaining defined goals. An example: by using various online AI gig-workers (and maybe several human gig-workers), while leveraging AI to its maximum potential across all aspects from product design to marketing and after-sales care, three people could create a new service and ensure its smooth delivery where a medium-size company would be needed today. We may expect growing inequalities between those who have access to technology and are able to use it and those who do not. However, what matters more than relative inequalities is how big a slice of the AI co-generated ‘pie’ is accessible to all citizens in absolute terms (e.g., enough to finance public services and other public spending), which would make everyone better off than in the pre-AI age.”

Robert Bell, co-founder of Intelligent Community Forum, wrote, “My forecast is based on humanity’s track record. When first confronted with change, we largely fear its downsides because that kept us alive on the African plains long ago. Then we learn the upsides and adopt with furious abandon. Eventually we find ways to deal with the worst negatives and experience tempers our misplaced enthusiasms to produce a net benefit. Forecasters of the future tend to forget that people learn from their experiences.”

Monica Murero, director of the E-Life International Institute and associate professor in sociology of new technology at the University of Naples Federico II, Italy, commented, “My vision regarding the human-machine/AI collaboration is dual, although I tend to see more advantages than disadvantages (except for those who will be somewhat excluded, which is the majority of the world population). For example, in health care I foresee positive outcomes in terms of reducing human mistakes, which currently still cause several failures. Also, I foresee increased development of mobile (remote) 24/7 health care services and personalized medicine thanks to AI and human-machine collaboration applied to the field.”

Nigel Hickson, an expert on technology policy development for ICANN based in Brussels, responded, “I am optimistic that AI will evolve in a way that benefits society by improving processes and giving people more control over what they do. This will only happen, though, if the technologies are deployed in a way that benefits all. My fear is that in non-democratic countries AI will lessen freedom, choice and hope.”

Uta Russmann, professor in the Department of Communication at FHWien der WKW University of Applied Sciences for Management & Communication, said, “Human-machine/AI collaboration will bring some benefits for society as a whole. For instance, life expectancy is increasing (globally), and human-machine/AI collaboration will help older people manage their lives on their own by taking care of them, helping them in the household (taking out the garbage, cleaning up, etc.) as well as keeping them company – just like cats and dogs do, but it will be a much more ‘advanced’ interaction. On the other hand, many people will not benefit from this development, as robots will do their jobs. Blue-collar workers, people working in supermarkets stacking shelves, etc., will be needed less, and the job market will not offer them any other possibilities. The gap between rich and poor will increase, as the need for highly skilled and very well-paid people increases and the need for less-skilled workers decreases tremendously.”

Ross Stapleton-Gray, principal at Stapleton-Gray and Associates, an information technology and policy consulting firm, commented, “Human-machine interaction could be for good or for ill. It will be hugely influenced by decisions on social priorities. We may be at a tipping point in recognizing that social inequities need to be addressed, so, say, a decreased need for human labor due to AI will result in more time for leisure, education, etc., instead of increasing wealth inequity.”

Andrew Odlyzko, professor at the University of Minnesota and former head of its Digital Technology Center and the Minnesota Supercomputing Institute, said, “I expect strong similarities to the interactions people have with domesticated animals. We use them, even though we frequently don’t understand how they ‘work,’ but also watch for bad behavior.”

Michael R. Nelson, a technology policy expert for a leading network services provider who worked as a technology policy aide in the Clinton Administration, commented, “If by artificial intelligence you primarily mean machine learning and autonomous systems, I think it is clear that there will be all sorts of benefits – many almost invisible – that will improve the productivity and safety of individuals. Most media reports focus on how machine learning will directly affect people (medical diagnosis, self-driving cars, etc.) but we will see big improvements in infrastructure (traffic, sewage treatment, supply chain, etc.).”

Mary Chayko, author of “Superconnected: The Internet, Digital Media, and Techno-Social Life” and professor in the Rutgers School of Communication and Information, said, “We will see regulatory oversight of AI geared toward the protection of those who use it. Having said that, people will need to remain educated as to AI’s impacts on them, and to mobilize as needed to limit the power of companies and governments to intrude on their spaces, lives and civil rights. It will take vigilance and hard work to accomplish this, but I feel strongly that we are up to the task.”

Lee Smolin, a professor at Perimeter Institute for Theoretical Physics and Edge.org contributor, responded, “I don’t think there is a simple answer. First of all, so far we have machine-learning algorithms, which are impressive but very far from true AI. They are still just tools, without agency, so it is incorrect to speak of collaboration with them.”

Mícheál Ó Foghlú, engineering director and DevOps Code Pillar at Google, Munich, said, “This trend already started to happen in 2018: AI/ML models in specific domains can outperform human experts (e.g., certain cancer diagnoses based on image recognition of retina scans). I think it is fairly much the consensus that this trend will continue and that many more such systems could aid human experts to be more accurate.”

Yoram Kalman, an associate professor at The Open University of Israel and member of The Center for Internet Research at the University of Haifa, wrote, “In essence, technologies that empower people also improve their lives. I see that progress in the area of human-machine collaboration empowers people by improving their ability to communicate and to learn, and thus my optimism. I do not fear that these technologies will take the place of people, since history shows that again and again people have used technologies to augment their abilities and to be more fulfilled. Although in the past, too, it seemed as if these technologies would leave people unemployed and useless, human ingenuity and the human spirit always found new challenges that could best be tackled by humans. The main risk is when communication and analysis technologies are used to control others, to manipulate them or to take advantage of them. These risks are ever-present and can be mitigated through societal awareness and education, and through regulation that identifies entities that become very powerful thanks to a specific technology or technologies, and which use that power to further strengthen themselves. Such entities – be they commercial, political, national, military, religious or any other – have in the past tried and succeeded in leveraging technologies against the general societal good, and that is an ever-present risk of any powerful innovation. This risk should make us vigilant, but it should not keep us from realizing one of the most basic human urges: the drive to constantly improve the human condition.”

Mike Osswald, vice president of experience innovation at Hanson Inc., commented, “I’m thinking of a world in which people’s devices continuously assess the world around them to keep a population safer and healthier. Think of those living in large urban areas, with devices forming a network of AI input through sound analysis, air quality, natural events, etc., that can provide collective notifications and insight to everyone in a certain area about environmental factors and physical health concerns, and even deny bad actors any quarter through community policing.”

Thomas H. Davenport, distinguished professor of information technology and management at Babson College and fellow of the MIT Initiative on the Digital Economy, responded, “So far, most implementations of AI have resulted in some form of augmentation, not automation. Surveys of managers suggest that relatively few have automation-based job loss as the goal of their AI initiatives. So while I am sure there will be some marginal job loss, I expect that AI will free up workers to be more creative and to do more unstructured work.”

Walid Al-Saqaf, senior lecturer at Sodertorn University, member of the board of trustees of the Internet Society (ISOC) and vice president of the ISOC Blockchain Special Interest Group, commented, “AI can help solve complex problems by using the collective knowledge of the past. The challenge is to ensure that the data used for AI procedures is reliable. This entails the need for strong cybersecurity and data integrity. The latter, I believe, can be tremendously enhanced by distributed ledger technologies such as blockchain. I foresee mostly positive results from AI so long as there are enough safeguards to protect against the automated execution of tasks in areas with ethical considerations, such as decisions that may have life-or-death implications. AI has a lot of potential. It should be used to add to, and not replace, human intellect and judgement.”

Andrew Tutt, an expert in law and author of “An FDA for Algorithms,” which called for “critical thought about how best to prevent, deter and compensate for the harms that they cause,” said, “AI will be absolutely pervasive and absolutely seamless in its integration with everyday life. It will simply become accepted that AIs are responsible for ever-more-complex and ever-more-human tasks. By 2030, it will be accepted that when you wish to hail a taxi, the taxi will have no driver – it will be an autonomously driven vehicle. Robots will be responsible for more-dynamic and complex roles in manufacturing plants and warehouses. Digital assistants will play an important and interactive role in everyday interactions ranging from buying a cup of coffee to booking a salon appointment. It will no longer be unexpected to call a restaurant to book a reservation, for example, and speak to a ‘digital’ assistant who will pencil you in. These interactions will be incremental but become increasingly common and increasingly normal. My hope is that the increasing integration of AI into everyday life will vastly increase the amount of time that people can devote to tasks they find meaningful.”

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, wrote, “The developed world faces an unprecedented productivity slowdown that promises to limit advances in living standards. AI has the potential to play an important role in boosting productivity and living standards.”

Nicholas Beale, leader of the strategy practice at Sciteb, an international strategy and search firm, commented, “All depends on how responsibly AI is applied, e.g., Richard Liu says he’ll retrain his JD.com delivery drivers and robot controllers. AI ‘done right’ will empower. But unless Western CEOs improve their ethics it won’t. I’m hoping for the best.”

Scott Burleigh, software engineer and intergalactic internet pioneer, wrote, “Advances in technology itself, including AI, always increase our ability to change the circumstances of reality in ways that improve our lives. They also always introduce possible side effects that can make us worse off than we were before. Those effects are realized when the policies we devise for using the new technologies are unwise. I don’t worry about technology; I worry about stupid policy. I worry about it a lot, but I am guardedly optimistic; in most cases I think we eventually end up with tolerable policies.”

Olévié Kouami, a participant in global internet policy discussions, based in Togo, wrote, “In my geographic area of the world, much has to be done in terms of massive digital education before this can happen.”

Marshall Kirkpatrick, the product director at Influencer Marketing, responded, “AI is most likely to augment humanity for the better, but it will take longer and not be done as well as it could be. If the network can be both decentralized and imbued with empathy, rather than characterized by violent exploitation, then we’re safe. I expect it will land in between, hopefully leaning toward the positive. For example, I expect our understanding of self and freedom will be greatly impacted by an instrumentation of a large part of memory, through personal logs and our data exhaust being recognized as valuable just like when we shed the term ‘junk DNA.’  Networked AI will bring us new insights into our own lives that might seem as far-fetched today as it would have been 30 years ago to say, ‘I’ll tell you what music your friends are discovering right now.’ Hopefully we’ll build it in a way that will help us be comparably understanding to others.”

Mark Deuze, a professor of media studies at the University of Amsterdam, wrote, “As AI and tech advance, the public debate over their impact grows. It is this debate that will contribute to the ethical and moral dimension of AI, hopefully inspiring a society-wide discussion on what we want from tech and how we will take responsibility for that desire.”

Michael Wollowski, associate professor of computer science and software engineering at Rose-Hulman Institute of Technology and expert in the Internet of Things, diagrammatic systems and artificial intelligence, wrote, “Assuming that industry and government are interested in letting the consumer choose and influence the future, there will be many fantastic advances in AI. I believe that AI and the Internet of Things will bring about a situation in which technology will be our guardian angel. For example, self-driving cars will let us drive faster than we ever drove before, but they will only let us do things that they can control. Since computers have much better reaction times than people, it will be quite amazing. Similarly, AI and the Internet of Things will let us conduct our lives to the fullest while ensuring that we live healthy lives. Again, it is like having a guardian angel that lets us do things, knowing it can save us from our own stupidity.”
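
Wollowski’s “guardian angel” car can be pictured as an envelope protector: it grants any request that stays within what it can physically guarantee. A minimal sketch, assuming simple constant-deceleration stopping physics (the reaction time and braking figures are invented for illustration):

```python
import math

def max_safe_speed(headway_m: float, reaction_s: float = 0.05,
                   decel_ms2: float = 8.0) -> float:
    """Fastest speed (m/s) whose stopping distance fits the headway.

    Stopping distance d = v*t + v^2/(2a); solving v^2/(2a) + v*t - d = 0
    for the positive root gives v = (-t + sqrt(t^2 + 2d/a)) * a.
    """
    t, a, d = reaction_s, decel_ms2, headway_m
    return (-t + math.sqrt(t * t + 2.0 * d / a)) * a

def governed_speed(requested_ms: float, headway_m: float) -> float:
    # The guardian angel: grant the request only up to what it can control.
    return min(requested_ms, max_safe_speed(headway_m))

# A computer's 50 ms reaction time permits roughly 40 m/s with 100 m of
# headway; a request for 70 m/s is quietly capped to that limit.
print(round(governed_speed(requested_ms=70.0, headway_m=100.0), 1))
```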

L. Schomaker, professor at the University of Groningen and scientific director of the Artificial Intelligence and Cognitive Engineering (ALICE) research institute, said, “In the 1990s, you went to a PC on a desktop in a room in your house. In the 2010s, you picked a phone from your pocket and switched it on. By 2030 you will be online 24/7 via miniature devices, such as in-ear assistants offering continuous support, advice and communications.”

Shannon Ellis, a postdoctoral fellow at Johns Hopkins University Bloomberg School of Public Health, said, “I’m pretty close to neutral. There is great power in computing, AI and the internet to improve our society in the future. But that does not come without responsibility. I chose that we will be better off because I have faith in humanity. But I certainly still have my reservations.”

Steve Sawyer, a professor in the school of information studies at Syracuse University, commented, “I can imagine the benefits of many small wins for human/AI collaboration – forms, guides and other ways to assist, not replace, humans. A mature and thoughtful approach would be many small wins.”

Bill Woodcock, executive director at the Packet Clearing House, the research organization behind global network development, commented, “In short-term, pragmatic ways, learning algorithms will save people time by automating tasks like navigation, package delivery and shopping for staples. But that tactical win comes at a strategic loss as long as the primary application of AI is to extract more money from people, because that puts them in opposition to our interests as a species, helping to enrich a few people at the expense of everyone else. In AI that exploits human psychological weaknesses to sell us things, we have for the first time created something that effectively predates our own species. That’s a fundamentally bad idea and requires regulation just as surely as would self-replicating biological weapons.”

Stuart A. Umpleby, a professor and director of the research program in social and organizational learning at George Washington University, wrote, “People who use AI and the internet will have their lives enhanced by these technologies. People who do not use them will be increasingly disconnected from opportunities. As the digital world becomes more complicated and remote from real-world experiences, the need will grow for people and software to make connections. There will be a need for methods to distinguish the real world from the scam world.”

Joly MacFie, president of the Internet Society New York Chapter, commented, “AI will have many benefits for people with disabilities and health issues. Much of the aging baby boomer generation will be in this category.”

Peggy Lahammer, director of health/life sciences at Robins Kaplan LLP and legal market analyst, commented, “AI will continue to change how we work, play and interact with each other. AI will provide us all with better, connected data and make us more dependent on those who hold, control and analyze information and the applications that deliver it. Jobs will continue to change and, as many disappear, new ones will be created. These changes will have an impact on society as many people are left without the necessary skills. I find the impact of AI most interesting in social interactions, as we increasingly have the ability to find, connect to and develop meaningful relationships with people around the globe. Control of the information and applications will bring great power and conflict between governments, private enterprise and individuals – especially those with little power or means to access the information and tools.”

Cliff Zukin, professor of public policy and political science at the School for Planning and Public Policy and the Eagleton Institute of Politics, Rutgers University, said, “Initially I was not sure whether to answer that this will be a force for good rather than the alternative. I think it will ultimately be good because it takes ‘information’ out of the category of a commodity, and I believe that more information makes for better decisions and is democratizing. Education, to me, has always been the status leveler, correcting, to some extent, for birth-luck and social mobility. This will be like Asimov’s ‘Foundation,’ where everyone is plugged into the data-sphere. There is a dark side (later), but overall a positive.”

Stephen McDowell, a professor of communication at Florida State University and expert in new media and internet governance, commented, “Much of our daily lives is made up of routines and habits that we repeat, and AI could assist in these practices. However, just because some things we do are repetitive does not mean they are insignificant. We draw a lot of meaning from things we do on a daily, weekly or annual basis, whether by ourselves or with others.  Cultural practices such as cooking, shopping, cleaning, coordinating and telling stories are crucial parts of building our families and larger communities. Similarly, at work, some of the routines are predictable, but are also how we gain a sense of mastery and expertise in a specific domain. In both these examples, we will have to think about how we define knowledge, expertise, collaboration, and growth and development.”

Hari Shanker Sharma, an expert in nanotechnology and neurobiology at Uppsala University, Sweden, said, “AI has not yet peaked, hence growth will continue, but evil also makes use of new developments. That will bring bigger dangers to mankind. The need will be to balance growth with safety; e.g., social media is both good and bad. The means of protection from evildoers are not sufficient. What is needed is the ability to trace an attacker/evildoer in the global village and to control and punish them. AI will give birth to an artificial human being who could be an angel or a devil. Plan for countering evil at every stage of development.”

John McNutt, a professor in the school of public policy and administration at the University of Delaware, responded, “There are always things that can go wrong and technology that can be misused. Those are worth worrying about and care should be taken. On balance, throwing out technology because there is a potential downside is not how human progress takes place. In public service, a turbulent environment has created a situation where knowledge overload can seriously degrade our ability to do the things that are essential to implement policies and serve the public good. AI can be the difference between a public service that works well and one that creates more problems than it solves.”

Randall Mayes, a technology analyst and author, wrote, “Humans and machines will complement each other’s weaknesses and strengths.”

Steve Farnsworth, chief marketing officer at Demand Marketing, commented, “When information can be captured and shared in an organized way we have always benefited – be it scribes using papyrus, printing presses or the internet. However, we now have more information than we can organize in a meaningful way. Machine learning and AI offer tools to turn that into actionable data. One project using machine learning and big data already was able to predict SIDS correctly 94% of the time. Imagine AI looking at diagnostics, tests and successful treatments of millions of medical cases. We would instantly have a deluge of new cures and know the most effective treatment options using only the data, medicines and therapies we have now. The jump in quality health care alone for humans is staggering. This is only one application for AI.”

Traci Belanger, a licensed clinical mental health counselor and Ph.D. researching media psychology, commented, “I don’t know if better or worse are the correct terms. I believe things will continue to evolve. As part of the evolution, much will depend on how our collective brains can keep up with the changes and how socially we can adapt to becoming more isolated and yet be more positively productive and not just use our downtime to verbally abuse others.”

Surja Sharma, a computational physics expert and senior research scientist at the University of Maryland, responded, “The development of machine learning/AI has been mainly from the engineering perspective. For example, neural networks seek the optimum way to predict given the data but do not address the underlying principles. However, the gap between first-principles and engineering/data-driven modeling is narrowing. For example, the fourth paradigm, or data-enabled science, seeks to reveal the underlying principles from data without a priori assumptions, i.e., without presupposing current laws or principles. The growing convergence between first-principles (science) and data analytics will make machine learning/AI more reliable and trustworthy. A framework for this is complexity science. Such convergence will command the deeper recognition that science in general has enjoyed in human history. This in turn will bring many positive developments in general.”

Miguel Moreno-Muñoz, a professor of philosophy specializing in ethics, epistemology and technology at the University of Granada, Spain, said, “Some improvements in advanced algorithms, natural language processing and computing power are expected to speed up the automation of routine processes and move human workers away from boring, unhealthy or risky tasks. Hybrid environments, where humans deal with the non-automatable parts of certain tasks, will be more or less common. There is a risk of over-reliance on insufficiently tested intelligence-augmentation systems due to pressure to reduce costs. This could lead to major dysfunctions in health care or in the supervision of highly complex processes. A hasty application of management systems based on the Internet of Things could be problematic in certain sectors of industry, transport or health, but its advantages will outweigh its disadvantages. I do believe there may be significant risks in the military applications of AI.”

Nathaniel Borenstein, chief scientist at Mimecast, wrote, “Social analyses of IT [information technology] trends have consistently wildly exaggerated the human benefits of that technology, and underestimated the negative effects. Overall, the observation has been that technology’s effect on human happiness is generally neutral. There are counter-examples like the Green Revolution, but IT has not yet been among them. In particular, I foresee a world in which IT and so-called AI produce an ever-increasing set of minor benefits, while simultaneously eroding human agency and privacy and supporting authoritarian forms of governance. I also see the potential for a much worse outcome, in which the productivity gains produced by technology accrue almost entirely to a few, widening the gap between the rich and poor while failing to address the social ills related to privacy. But if we can find a way to ensure that these benefits are shared equally among the population, it might yet prove to be the case that the overall effect of the technology is beneficial to humanity.  This will only happen, however, if we manage to limit the role of the rich in determining how the fruits of increased productivity will be allocated. Overall, though, I have to agree with Oliver Wendell Holmes Jr., who wrote, ‘Science makes major contributions to minor needs.’”

Michael J. Oghia, a Belgrade-based consultant active in internet governance activities and media-development ecosystems, commented, “Of course there are serious challenges, gaps, inequalities and potential threats from evolving technology… The adoption rate of new technologies is rising as well as cascading down economic levels more quickly. Depending on how the technologies develop, it’s likely they will have a positive impact. At the same time, they could have negative consequences – nothing is ever black and white.”

Tom Hood, an expert in corporate accounting and finance, said, “By 2030, AI will stand for Augmented Intelligence and will play an ever-increasing role working side-by-side with humans in all sectors, adding its advanced and massive cognitive and learning capabilities to critical human domains like medicine, law, accounting, engineering and technology. Imagine a personal bot powered by artificial intelligence working by your side (in your laptop or smartphone), making recommendations on key topics by providing up-to-the-minute research or key pattern recognition and analysis of your organization’s data. One example: a CPA in tax, given a complex global tax situation amid constantly changing tax laws in all jurisdictions, would be able to research and provide guidance on the most complex global issues in seconds. It is my hope for the future of artificial intelligence in 2030 that we will be augmenting our intelligence with these ‘machines.’”

Stephen Abram, principal at Lighthouse Consulting Inc., wrote, “I am concerned that individual agency will be lost in AI and that appropriate safeguards, specified by the individual, should be in place around data collection. I worry that context can be misconstrued by government agencies like ICE, IRS, police, etc. There is a major conversation needed throughout the period during which AI applications are developed, and it needs to be evergreen as innovation and creativity spark new developments. Indeed, this should not be part of a political process but an academic, independent process guided by principles, not by economics and commercial entities.”

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “It is without question that AI systems will continue to develop in ways that automate and improve the lives of those who can afford such systems. Our technology continues to evolve at a growing rate, but our society, culture and economy are not as quick to adapt. We’ll have to be careful that the benefits of AI for some do not further divide those who might not be able to afford the technology. What will that mean for our culture as more jobs are automated? We will need to consider the impact on the current class divide.”

Vassilis Galanos, a Ph.D. student and teaching assistant actively researching future human-machine symbiosis at the University of Edinburgh, commented, “2030 is not that far away, so there is no room for extremely utopian/dystopian hopes and fears. Nonetheless, humans are parts of their environments, and the more they interact with elements found in their habitats, the better their interactions become. We have seen this with every tool. We have also seen that it often takes time for humans to be actually enhanced by the invention of a tool, but past that necessary temporal threshold, the effects of human plus technology are usually fruitful. Given that AI is already used in everyday life (social-media algorithms, suggestions, smartphones, digital assistants, health care and more), it is quite probable that humans will live in a harmonious co-existence with AI as much as they do now – to a certain extent – with computer and internet technologies. I cannot offer a particularly extreme image because, as said, 2030 is not far. However, I can think of augmented versions of what we already have, further dependent on hardware development.”

Steve King, partner at Emergent Research, said, “2030 is less than 12 years away. So, while it is fun to speculate whether AI will greatly enhance or reduce human capacities and autonomy, the most likely scenario is AI will have a modest impact on the lives of most humans over this timeframe. Having said that, we think the use of AI systems will continue to expand, with the greatest growth coming from systems that augment and complement human capabilities and decision-making. This is not to say there won’t be negative impacts from the use of AI. Jobs will be replaced, and certain industries will be disrupted. Even scarier, there are many ways AI can be weaponized. But like most technological advancements, we think the overall impact of AI will be additive – at least over the next decade or so.”

Steven Polunsky, director of the Alabama Transportation Policy Research Center, University of Alabama, wrote, “AI will allow public transportation systems to better serve existing customers by adjusting routes, travel times and stops to optimize service. New customers will also see advantages. Smart transportation systems will allow public transit to network with traffic signals and providers of ‘last-mile’ trips to minimize traffic disruption and inform decision making about modal (rail, bus, mobility-on-demand) planning and purchasing.”

Ray Schroeder, associate vice chancellor for online learning at the University of Illinois, Springfield, wrote, “It is clear that society broadly, and technology leaders most specifically, are well aware of the awesome power and potential of AI. In general, I believe that the public at large will not support AI implementations that degrade the quality of life. I would anticipate boycotts of products, companies and institutions that promote AI applications that – on balance – do not improve life for the majority.”

Edward Tomchin, a retiree, said, “Taking into consideration past major advances in technology, we do seem to abuse any new discovery out of the box, but we eventually get past that abuse and put the technology to good use to our benefit. I have abiding faith in our species.”

Norton Gusky, an education technology consultant, wrote, “By 2030 most learners will have personal profiles that will tap into AI/machine learning. Learning will happen everywhere and at any time. There will be appropriate filters that will limit the influence of AI, but ethical considerations will also be an issue.”

David Sarokin, author of “Missed Information: Better Information for Building a Wealthier, More Sustainable Future,” commented, “2030 isn’t that far away! The promises and problems the internet and AI make possible today will still be with us in a familiar way in the next few decades, but they will be magnified in certain areas. My biggest concern is that our educational system will not keep up with the demands of our modern times. It is doing a poor job of providing the foundations to our students. As more and more jobs are usurped by AI-endowed machines – everything from assembling cars to flipping burgers – those entering the workplace will need a level of technical sophistication that few graduates possess these days.”

Mauro D. Ríos, an adviser to the eGovernment Agency of Uruguay and director of the Uruguayan Internet Society chapter, responded, “In 2030 dependence on AI will be greater in all domestic, personal, work and educational contexts; this will make the lives of many people better. However, it has risks. We must be able to maintain active survival capabilities without AI. Human freedom cannot be lost in exchange for the convenience of improving our living standards. On another front, issues such as those related to science will be significantly improved. We will do things that are not possible today, and we will better understand the world at subatomic levels – we will be able to look through clear windows on life at the nanometer level. But AI must continue to be subject to the rationality and control of the human being.”

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an instructor at Mississippi College, responded, “Human/AI collaborations will allow humans to offload tasks for which humans are poorly suited because the tasks are tedious, subject to cognitive bias or not supported by our innate pattern recognition. They will augment our human abilities and increase the material well-being of humanity. At the same time the concomitant increase in the levels of education and health will allow us to develop new social philosophies and rework our polities to transform human well-being. AI increases the disruption of the old social order, making the new transformation both necessary and more likely, though not guaranteed.”

Lane Jennings, a recent retiree who served as managing editor for the World Future Review from 2009 to 2015, wrote, “I believe it is ‘most likely’ that advances in AI will improve technology and thus give people new capabilities. But this ‘progress’ will also make humanity increasingly vulnerable to accidental breakdowns, power failures and deliberate attacks. Example: Driverless cars and trucks and pilotless passenger aircraft will enhance speed and safety when they work properly, but they will leave people helpless if they fail. Fear and uncertainty could negate positive benefits after even a few highly publicized disasters.”

Sanjiv Das, a professor of data science and finance at Santa Clara University, responded, “AI will enhance search to create interactive reasoning and analytical systems. Search engines today do not know ‘why’ we want some information and hence cannot reason about it. They also do not interact with us to help with analysis. An AI system that collects information based on knowing why it is needed and then asks more questions to refine its search would be clearly available well before 2030. These ‘search-thinking-bots’ will also write up analyses based on parameters elicited from conversation and imbue these analyses with different political (left/right) and linguistic (aggressive/mild) slants, chosen by the human, using advances in language generation, which are already well under way. These ‘intellectual’ agents will become companions, helping us make sense of our information overload. I often collect files of material on my cloud drive that I found interesting or needed to read later, and these agents would be able to summarize and engage me in a discussion of these materials, very much like an intellectual companion. It is unclear to me if I would need just one such agent, though it seems likely that different agents with diverse personalities may be more interesting! As always, we should worry what the availability of such agents might mean for normal human social interaction, but I can also see many advantages in freeing up time for socializing with other humans as well as enriched interactions, based on knowledge and science, assisted by our new intellectual companions.”
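
Das’s “search-thinking-bot” can be read as a retrieval loop conditioned on stated intent plus a user-chosen rhetorical register. A toy, self-contained sketch under those assumptions (the corpus, overlap scoring and slant labels are hypothetical stand-ins, not any real search API):

```python
# Toy sketch of an intent-aware "search-thinking-bot": the query is
# refined with the user's stated purpose, and the write-up takes a
# user-chosen slant. Corpus and scoring are hypothetical stand-ins.

TOY_CORPUS = {
    "transit funding": "Cities debate fare subsidies versus road tolls.",
    "transit safety": "Sensor data is cutting collision rates on bus routes.",
    "transit history": "Streetcars dominated urban travel a century ago.",
}

def retrieve(query):
    """Rank documents by word overlap with the query; return the best match."""
    words = set(query.lower().split())
    key, text = max(TOY_CORPUS.items(),
                    key=lambda kv: len(words & set(kv[0].split())))
    return [text] if words & set(key.split()) else []

def summarize(docs, slant):
    """Frame the same evidence with the requested rhetorical register."""
    prefix = {"mild": "On balance,", "aggressive": "Make no mistake:"}
    return f"{prefix.get(slant, 'On balance,')} {' '.join(docs)}"

def agent(topic, purpose, slant="mild"):
    """One turn: refine the query with stated intent, then write it up."""
    docs = retrieve(f"{topic} {purpose}")
    return summarize(docs, slant) if docs else "Nothing found; refine further."

if __name__ == "__main__":
    # The agent 'knows why' the information is wanted and frames the answer.
    print(agent("transit", "safety record", slant="aggressive"))
```

The point of the sketch is the interface, not the retrieval: the agent takes a purpose and a framing as first-class inputs, which is what distinguishes Das’s companion from a keyword search box.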

John Paschoud, councillor for the London borough of Lewisham, said, “It is possible that advances in AI and networked information will benefit ‘most’ people, but this is highly dependent upon how those benefits are shared, largely by political decision. If traditional capitalist models of ‘ownership of the means of production’ prevail, then the benefits of automated production will be retained by the few who own, not the many who work. Similarly, models of housing, health care, etc., can be equitably distributed, and all can be enhanced by technology.”

George Kubik, president of Anticipatory Futures Group, wrote, “There will be an expansion of choice in all sectors of human life. My fear is the restriction of access and lack of knowledge concerning potential for choice.”

Daniel Riera, a professor of computer science at Universitat Oberta de Catalunya, commented, “Technology has always been an opportunity for human improvement. Each time, the opportunities are bigger, but so are the associated risks. My concern is whether humans will be able to use it in the right way.”

Matthew Henry, chief information officer at LeTourneau University, Longview, Texas, said, “Since the beginning of time, tools have made us better. Is there a difference because we call a tool ‘intelligent’?”

Steve Chenoweth, an associate professor of computer science at the Rose-Hulman Institute of Technology, said, “We tend to foresee outcomes without accounting for all the interrelated events or all the uses of the new opportunities. While the scientific intent will be primarily toward improvement, new uses of technologies become apparent only after they are initially employed.”

Jennifer Jarratt, owner of Leading Futurists consultancy, commented, “There are two separate questions: I answered this one (By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities?) with ‘yes’ because the capabilities are there, but whether people will be empowered is a separate question.”

Francisco S. Melo, an associate professor of computer science at Instituto Superior Técnico, Lisbon, Portugal, responded, “My expectation is that AI technology will allow humans to better interact with services and information in their daily lives. I expect that AI technology will help render several services (in health, assisted living, etc.) more efficient and humane and, by making access to information more broadly available, help mitigate inequalities in society. However, in order for positive visions to become a reality, both AI researchers and the general population should be aware of the implications that such technology can have, particularly in how information is used and the ways by which it can be manipulated. In particular, AI researchers should strive for transparency in their work, in order to demystify AI and minimize the possibility of misuse; the general public, on the other hand, should strive to be educated in the responsible and informed use of technology.”

Pedro U. Lima, an associate professor of computer science at Instituto Superior Técnico, Lisbon, Portugal, said, “As with any past technology breakthrough, the consequences of new applications of AI will be manifold: Some will be used for positive purposes, others for negative purposes (e.g., warfare). Overall, I see AI-based technology relieving us from repetitive and/or heavy and/or dangerous tasks, opening new challenges for our activities. I envisage autonomous mobile robots networked with a myriad of other smart devices, helping nurses and doctors at hospitals in daily activities, working as a ‘third hand’ and providing physical and emotional support to patients. I see something similar happening in factories, where networked robot systems will help workers with their tasks, relieving them of heavy duties.”

Jaak Tepandi, a professor of knowledge-based systems at Tallinn University of Technology, Estonia, commented, “2030 is a short-term perspective. I believe in useful results from AI in that time range.”

John Laird, a professor of computer science and engineering at the University of Michigan, responded, “There will be a continual off-loading of mundane intellectual and physical tasks onto AI and robotic systems. In addition to helping with everyday activities, this will significantly help the mentally and physically impaired and disabled. There will also be improvements in customized/individualized education and training of humans and, conversely, in the customization of AI systems by everyday users. We will be transitioning from current programming practices to user customization. Automated driving will be a reality, eliminating many deaths but also bringing significant societal changes.”

Toby Walsh, a professor of AI at the University of New South Wales, Australia, and president of the AI Access Foundation, said, “I’m pessimistic in the short term – we’re already seeing technologies like AI being used to make life worse for many – but I’m optimistic in the long term that we’ll work out how to get machines to do the dirty, dull, dangerous and difficult, and leave us free to focus on all the more-important and human parts of our lives.”

Alan Bundy, a professor of automated reasoning at the University of Edinburgh, wrote, “A yes/no answer is too simplistic. Some people will be better off and some worse. Unskilled people will suffer because there will be little employment for them. This may create disruption to society, some of which we have already seen with Trump, Brexit, etc. Highly educated people will be needed to orchestrate automated systems, deal with edge cases and act as an interface for them. Their jobs will become more interesting and fulfilling.”

Peter Eachus, director of psychology and public health at the University of Salford, U.K., responded, “Part of the problem with this question is that we cannot begin to imagine how things will change. We start from the assumption that the changes will bring us extensions to what we already do now, so we will have better health care because AI is better at diagnosis than we are. But that is like saying the internet is an extension of a library. Obviously, it is not; the two are fundamentally different. What would a world be like in which there is no illness, no work and few remaining problems to solve?”

Andrea Romaoli Garcia, an international lawyer active in internet governance discussions, commented, “I believe that typical human-machine interaction will make people better than they are today. I don’t believe that machines will completely replace people. AI will improve the way people make decisions in all industries because it allows instant access to a multitude of information. People will require training for this future – educational and technological development… Human reality will be interpreted by machines trained with previously analyzed data. The result will be smart contracts with errors reduced to zero or almost zero, without bureaucracy. But this is a very high level of human development that poor countries don’t have access to. Without proper education and policies, they will not have access to wealth. The result may be a multitude of hungry and desperate people. This may motivate wars or border invasions. Future human-machine interaction (AI) will only be positive if richer countries develop policies to help poorer countries develop and gain access to work and wealth.”

Alex Smith, partner relationship manager at Monster Worldwide, said, “All new technologies are built to make our lives better, and AI is no different in this sense. However, we’re having this debate because of its potential for damaging human relationships. We’re already starting to see machines doing things like cooking our dinner, driving our cars and curating our news. In 2030, we could see a DMV that processes your paperwork automatically without waiting in lines or human interaction, or schools with the ability to create holograms from computerized replicas of the greatest minds in human history. My hope is that we don’t swing the pendulum too far in the direction of no human interaction. In areas like customer service there is still an art to that human interaction. Who likes to talk to the recorded customer-service person anyway!”

David Schlangen, a professor of applied computational linguistics at Bielefeld University, Germany, responded, “If the right regulations are put in place and ad-based revenue models can be controlled in such a way that they cannot be exploited by political interest groups, the potential for AI-based information search and decision support is enormous. That’s a big if, but I prefer to remain optimistic.”

Bert Huang, an assistant professor in the Department of Computer Science at Virginia Tech focused on machine learning, wrote, “As a researcher of AI, I value the importance of uncertainty. There is too much uncertainty to make any reasonable estimate of what 2030 will look like. Instead, I estimate that the net result of AI advances will be positive because I see no evidence of any technology having a net negative result. Weapons technology has also helped medicine, energy, civil engineering, etc. AI will cause harm (and it has already caused harm), but its benefits will outweigh the harm it causes. That said, this pattern of technology being net positive depends on people seeking positive things to do with the technology, so efforts to guide research toward societal benefits will be important to ensure the best future.”

Ian Rumbles, a quality-assurance specialist at North Carolina State University, said, “While I feel strongly that AI will have a significantly positive impact on those in the developed world, the question asked about ‘most people.’ I feel the effect on developing countries will be negative in relative terms: AI will have little or no direct effect on people in these countries, even as it advances the developed world.”

Mike Meyer, chief information officer at Honolulu Community College, commented, “All aspects of human existence will be affected by the integration of AI into human societies. Historically, this type of base paradigmatic change is both difficult and unstoppable. The results will be primarily positive but will produce problems, both in the process of change and in totally new types of problems that result from the ways people adapt the new technology-based processes. Two major areas of change are education and organizational administration. Within these areas, adult education availability and relevance will undergo a major transformation. Community colleges will become more directly community centers for both occupational training and greatly expanded optional liberal arts, art, crafts and hobbies. Classes will, by 2030, be predominantly augmented-reality-based, with a full mix of physical and virtual students in classes presented in virtual classrooms by national and international universities and organizations. The driving need will be expansion of knowledge for personal interest and enjoyment, as universal basic income or equity replaces the subsistence jobs that automation removed from the old system. Social organizations will increasingly be administered by AI/ML systems to ensure equity and consistency in provisioning of services to the population. The steady removal of human emotion-driven discrimination will rebalance social organizations, creating truly equitable opportunity for all people for the first time in human history. People will be part of these systems as censors, in the old imperial Chinese model, providing human emotional intelligence where it is needed to smooth social management.”

Mark Maben, a general manager at Seton Hall University, wrote, “The AI revolution is, sadly, likely to be dystopian. At present, governmental, educational, civic, religious and corporate institutions are ill-prepared to handle the massive economic and social disruption that will be caused by AI. I have no doubt that advances in AI will enhance human capacities and empower some individuals, but this will be more than offset by the fact that artificial intelligence and associated technological advances will mean far fewer jobs in the future. Sooner than most individuals and societies realize, AI and automation will eliminate the need for retail workers, truck drivers, lawyers, surgeons, factory workers and other professions. In order to ensure that the human spirit thrives in a world run and ruled by AI, we will need to change the current concept of work. That is an enormous task for a global economic system in which most social and economic benefits come from holding a traditional job. We are already seeing a decline in democratic institutions and a rise in authoritarianism due to economic inequality and the changing nature of work. If we do not start planning now for the day when AI results in complete disruption of employment, the strain is likely to result in political instability, violence and despair. This can be avoided by policies that provide for basic human needs and encourage a new definition of work, but the behavior to date by politicians, governments, corporations and economic elites gives me little confidence in their ability to lead us through this transition.”

Justin Amyx, a technician with Comcast, said, “In regard to health care, most people will be better off. Access to information and quality of care will rise. Doctors’ resources will expand to assist in the care of the patient. My worry is automation. Automation usually occurs with the mundane tasks that fill low-paying, blue-collar and similar jobs. Those jobs will disappear – lawn maintenance, truck drivers and fast food, to name a few. Those unskilled or low-skilled workers will be jobless. Unless we have training programs to take care of worker displacement, there will be issues. In addition, with all that power, we must have checks and balances to assure privacy and an open internet. With the loss of net neutrality, ISPs may hinder the growth, access and privacy we expect.”
