Elon University

Survey X: Artificial Intelligence and the Future of Humans (Anonymous Responses)

Results released in December 2018 – To illuminate current attitudes about the potential impacts of digital life in the next decade and assess what interventions might emerge to help resolve challenges, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate and public practitioners and other leaders in summer 2018, asking them to share their answer to the following query:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

“Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030. Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment. Why? What is your hope or fear? What actions might be taken to assure the best future?”

In answer to Question One:

  • About 63% of these respondents said most people will be mostly better off.
  • About 37% said people will not be better off.
  • 25 respondents chose not to select either option.

Among the key themes emerging in the December 10, 2018 report from 979 expert respondents’ overall answers were:

CONCERNS

  • Human Agency: Decision-making on key aspects of digital life is automatically ceded to code-driven, “black box” tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
  • Data Abuse: Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people’s decisions for them. These systems are globally networked and not easy to regulate or rein in.
  • Job Loss: The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
  • Dependence Lock-in: Many see AI as augmenting human capacities, but some predict the opposite – that people’s deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
  • Mayhem: Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals’ reach into economic systems.

POTENTIAL REMEDIES

  • Global Good Is #1: It is vital to improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity’s best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements – to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
  • Values-Based Systems: Develop policies to assure AI will be directed at the common good. Adopt a ‘moonshot mentality’ to build inclusive, decentralized intelligent digital networks ‘imbued with empathy’ that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
  • Prioritize People: Alter economic and political systems to better help humans “race with the machines.” Direct energies to radical human improvement. Reorganize economic and political systems toward the goal of expanding humans’ capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.

BENEFITS OF AI BY 2030

  • New Life and Work Efficiencies: AI will be integrated into most aspects of life, producing new efficiencies and enhancing human capacities. It can optimize and augment people’s life experiences, including the work lives of those who choose to work.
  • Health Care Improvements: AI can revolutionize medical and wellness services, reduce errors and recognize life-saving patterns, opening up a world of opportunity and options in health care.
  • Education Advances: Adaptive and individualized learning options and AI “assistants” might accelerate targeted, effective education, expanding the horizons of all.

News release with nutshell version of report findings is available here.

All credited responses to the question on AI and the Future of Humans.

The full survey report with analysis is here.

Written elaborations by anonymous respondents

Following are full responses made by study participants who chose to remain anonymous when making remarks. Some people chose not to provide a written elaboration. Some of these are the longer versions of responses that are contained in shorter form in the survey report. These responses were collected in an opt-in invitation to more than 10,000 people.

Their predictions:

An Internet Hall of Fame member commented, “AI will not leave most people better off than they are today because individuals will not be able to control their lives.”

A principal design researcher at one of the world’s largest technology companies commented, “Although I have long worked in this area and been an optimist, I now fear that the goal of most AI and UX is geared toward pushing people to interact more with devices and less with other people. As a social species that is built to live in communities, reductions in social interaction will lead to erosion of community and rise in stress and depression over time. Although AI has the potential to improve lives as well, those advances will come more slowly than proponents think, due to the ‘complexity brake’ Paul Allen wrote about, among other things. There have been AI summers and AI winters. This is not an endless summer.”

A principal architect for a major global technology company responded, “AI is a prerequisite to achieving a post-scarcity world, in which people can devote their lives to intellectual pursuits and leisure rather than to labor. The first step will be to reduce the amount of labor required for production of human necessities. Reducing tedium will require changes to the social fabric and economic relationships between people as the demand for labor shrinks below the supply, but if these challenges can be met then everyone will be better off.”

A longtime Silicon Valley communications professional who has worked at several of the top tech companies over the past few decades responded, “AI will continue to improve *if* quality human input is behind it. If so, better AI will support service industries at the top of the funnel, leaving humans to handle interpretation, decisions and applied knowledge. Medical data-gathering for earlier diagnostics comes to mind. Smarter job-search processes, environmental data collection for climate-change actions – these applications all come to mind.”

A changemaker working for digital accessibility wrote, “There is no reason to assume that some undefined force will be able to correct for or ameliorate the damage of human nature amplified with power-centralizing technologies. There is no indication that governments will be able to counterbalance power-centralization trends, as governments, too, take advantage of such market failures. The outward dressing of such interactions is probably the least important aspect of it.”

An information-science futurist commented, “I fear that powerful business interests will continue to put profits above all else, closing their eyes to the second- and third-order effects of their decisions. I fear that we do not have the political will to protect and promote the common interests of citizens and democracy. I fear that our technological tools are advancing more quickly than our ability to manage them wisely. I have, however, recently spotted new job openings with titles like ‘Director of Research, Policy and Ethics in AI’ and ‘Architect, AI Ethical Practice’ at major software companies. There are reasons for hope.”

A longtime veteran of a pioneering internet company commented, “Profit motive and AI at scale nearly guarantee suffering for most people. It should be spiffy for the special people with wealth and power, though. Watching how machines are created to ensure addiction (to deliver ads) is a reminder that profit-driven exploitation always comes first. The push for driverless cars, too, is a push for increased profits. In the face of managing resources and warfare – the big issues for AI at scale – the goals are not likely to be sharing and co-existence.”

A strategy consultant wrote, “The problem is one of access. AI will be used to consolidate power and benefits for those who are already wealthy and further surveil, disenfranchise and outright rob the remaining 99% of the world.”

A senior data analyst and systems specialist expert in complex networks responded, “Artificial intelligence software is, after all, SOFTWARE. It will implement the priorities of the entities that funded development of the software. In some cases, this will be a generic service sold to the general public (much as we now have route-planning software in GPS units), and this will provide a definite benefit to consumers. In other cases, software will operate to the benefit of a large company but to the detriment of consumers (for example, calculating a price for a product that will be the highest that a given customer is prepared to pay). In yet a third category, software will provide effective decision-making in areas ranging from medicine to engineering, but will do so at the cost of putting human beings out of work.”

A professor of AI at a university in Italy said, “Development has brought humanity past the boundary, the survival limit. It is too easy to control technology in ways that are dangerous for people.”

An internet pioneer, founder and president and early AI research leader said, “AI is a marketing term. It is what programmers with an IQ over 180 do. They just think differently.”

A journalist and leading internet activist wrote, “Computer AI will only be beneficial to its users if it is owned by humans, and not ‘economic AI’ (that is, corporations).”

An associate professor of computer science at a U.S. university commented, “Machines will be able to do more-advanced work and improve accuracy but this likely will expand manipulation of consumers/voters, and automation may reduce available jobs. Lack of regulation and ethics is key and reflects aspects of U.S. culture that don’t necessarily exist elsewhere (Europe comes to mind).”

An assistant professor of social justice wrote, “Technology magnifies what exists (for good or bad). There is simply more bad than good to be magnified.”

A professor of digital humanities at a major Silicon Valley-area university said, “Given increasing income disparity in much of the world, my fear is that AI will be used to repress the disenfranchised and create even more privilege for the few. If technological advances are not integrated into a vision of holistic, ecologically sustainable, politically equitable social visions, they will simply serve gated and locked communities.”

An anonymous respondent commented, “Great power fragmentation will limit the ability of technologies to reach the economies of scale for this.”

An internet pioneer who has worked as a distinguished engineer and chief scientist at major technology companies commented, “Asymmetry. Most AI systems require a vast volume of training data, which is only available to large actors. These large actors will use AI for their benefit. Individual customers may have some benefits as a side effect, at a cost of lower autonomy.”

A director of free expression for a global digital rights organization commented, “My concern is that human-machine collaboration will leave some of us far better off by automating our jobs, giving us more free and creative time, while doing little to improve the lives of billions of others.”

An author and professor of law at a major U.S. university said, “Read ‘Re-Engineering Humanity,’ a new book published by Brett Frischmann and Evan Selinger. It provides a thorough, reasoned account. The website is www.reengineeringhumanity.com. The authors have published various short pieces, too.”

A professor expert in cultural geography and American studies said, “AI relies on humans to do a great deal of work: deliveries, stocking, sorting, coding and even structuring the data we collect on one another. To presume that AI will improve to a point where we needn’t create, monitor, direct, and, most importantly, ethically lead it is a fool’s errand. By claiming AI is self-reliant and self-perpetuating, there will be no need to pay many people. Given the majority human assumption that capitalism is something worth reproducing, the evacuation of most labor positions by AI would create vast poverty and cruelty by the ruling class.”

The CEO of a foundation based in Germany responded, “2030 is not far away, and even if technical breakthroughs would enable a much more evolved AI-human interaction, I assume that implementation will simply take longer than that. For sure there will be huge changes in the future that are associated with AI, especially in the context of mobile devices, speech recognition, etc. But whether that adds value and quality to human life for the majority remains a big question.”

A lecturer in media studies at a major university in New Zealand wrote, “The automation of large volumes of work by machine learning-based systems is unlikely to lead to an increase in social equity within a capitalist economy.”

A chief marketing officer said, “AI is another ‘leap forward’ in our evolution as a species. It will most certainly make our lives easier and much more efficient. It will be implemented across all industries and applications – current and future – but, and this is a big but, will it help to alleviate the discrepancies of humanity or will it further aggravate them? With the 10-20% of humans (U.S., EU, Canada, Russia, Japan) having a further competitive advantage, one must assume that the poorer nations would become bigger ‘slaves’ to us. Moreover, AI might work against a sustainable and environmental fringe world by demanding further energy and resources. All in all, AI can be of great use, but we need to be vigilant of the repercussions instead of constantly leaping ‘forward’ only to find out later about all of the negatives.”

A program director who was formerly on the start-up team that built one of the most successful online platforms today said, “Wealth distribution will continue to widen as the rich get richer.”

A senior statistician with a data science group said, “As with most technologies, the more affluent countries, regions and individuals will be the first to benefit and profit from AI developments. In the U.S., the blue-collar job wages have been stagnant since the 1970s despite all of the advances with the internet and mobile devices, so I am not optimistic regarding AI, either.”

A longtime telecommunications policy consultant based in Europe commented, “I think it possible that AI will bring general benefits to most humans, but unlikely that this will happen by 2030, given difficulties in attaining even the basic Sustainable Development Goals by then.”

One of the world’s foremost social scientists studying human-technology interactions said, “My chief fear is face recognition used for social control. Even Microsoft has begged for government regulation! Surveillance of all kinds is the future for AI. It is not benign if not controlled!”

A director for a major regional internet registry responded, “Both private industry and government have potentially huge gains from advanced technology (including so-called AI) in gathering and linking datasets. At the same time the ability of government to properly regulate advanced technologies is not keeping up with the evolution of those technologies. This allows many developments to proceed without sufficient notice, analysis, vetting or regulation to protect the interests of citizens (Facebook being a prime example). Whether adverse impacts will be mitigated, or can be, is yet to be seen; but certainly there will be adverse impacts that definitely have the potential to outweigh benefits.”

A professor at a major U.S. university and expert in artificial intelligence as applied to social computing said, “The trends around democratic governance of AI are not encouraging. The big players are U.S.-based, and the U.S. is in an anti-regulation stance that seems fairly durable. Therefore, I expect AI technologies to evolve in ways that benefit corporate interests, with little possibility of meaningful public response. As AI systems take in more data and make bigger decisions, people will be increasingly subject to their unaccountable decisions and non-auditable surveillance practices. To pick just one example, a business could monitor every employee interaction with every customer in real time with a classification system in place to detect any examples of less-than-perfect customer service.”

An online-communities researcher said, “Initially, AI will be used for business intelligence, where those who have money can make more money.”

A fellow at a university center who has worked with a group developing AI for a top-five global technology company commented, “The question doesn’t adequately think about who the populations at risk of benefit and harm are, and how they will be seen to ‘feel’ about today and in 12 years. Most likely those who can remember will feel it is worse, given their interactions. Those younger won’t have a reference point.”

A professor of electronic engineering and innovation studies based in Europe commented, “People will lose control of their lives, which will remain in the hands of a small group of experts or companies.”

A respondent based in Turkey wrote, “Due to the unknown logic of algorithms, we will lose our autonomy over our lives and everyday decisions; humankind is becoming dependent on AI without learning to be algorithmically literate.”

A principal architect for a top-five technology company commented, “AI will enable vicious regimes to track citizens at all times. Mistaken identifications will put innocent people in jail and even execute them with no hope of appeal. In general, AI will only have a positive contribution in truly democratic states which are dwindling in number.”

A professor of public and international affairs based on the U.S. East Coast responded, “Many people will no longer be useful in the labor market. Such rapid economic and social change will leave many frightened and angry.”

An anonymous respondent said, “AI can guide metacognition, which is good for teaching. However, we need to assure that we have individuals who are able to think and problem-solve and monitor that thinking without assistance.”

An anonymous respondent commented, “AI is going to lead to the destruction of entire rungs of the economy, and the best way to boost an economy while holding together a fractured economy is war. The panopticon and invasion of all personal aspects of our lives is already complete. AI will allow greater control by the organized forces of tyranny, greater exploitation by the organized forces of greed and open a Pandora’s box of a future that we as a species are not mature enough to deal with.”

An information administration manager responded, “I chose the negative answer based on the overall direction I am currently seeing in American society, where we cede more and more decision making and policy making to self-interested parties in the private sphere. Our institutions are insufficiently nimble to keep up with the policy questions that arise and attempts to regulate new industries are subverted by corrupt money politics at both the federal and state levels. Specific functions and scenarios are beside the point. You can deploy most any technology in ways that enhance freedom or autonomy and have the opposite effect. The public should have a say in that choice.”

An anonymous respondent said, “There’s less in AI than meets the eye. Present systems are very clever scanners and matchers, but they are anything but intelligent. They can only do things they are taught to do, even the so-called unsupervised systems. Their creators do not understand them. They have their own blind spots and flaws that we are only barely scratching the surface of. Mostly, they’ll be useful tools and interesting toys, but they’re not going to lead to huge improvements or degradations of the human condition, certainly not in a mere 11 years.”

An anonymous respondent commented, “Our ethical capabilities lag far behind our technical capabilities. These drive my concern about where we’ll be in 2030. To assure the best future, we need to ramp up efforts in the areas of decentralizing data ownership, education and policy around transparency. 2030 arrives in less than 15 years. When considering the likely benefits of AI, I am thinking globally. With China aiming to ‘win’ the AI lead, I have serious doubts that any benefits will outweigh the negative effects on human rights for a majority of people. Certainly in the area of health care alone, there will be tremendous benefits – in particular, for those who can afford medicine employing AI. But at the same time, there is an enormous potential for widening inequality, and for abuse. We can see the tip of this iceberg now with health insurance companies today scooping up readily available, poorly protected third-party data that will be used to discriminate.”

An anonymous respondent said, “People will be better off 15 years from now, but I am not sure if AI will be responsible. Until now, AI has not improved that much in our lives (and for some, it has definitely worsened their situation) – at least compared with, say, better public health.”

An internet protocol engineer and researcher wrote, “AI strategic decisions with the most clout are made by corporations, and they do not aim for human well-being in opposition to corporate profitability.”

A distinguished professor of information science and research dean commented, “For most people there will be minimal impact, hence my answer that most people won’t be better off. However, a number of jobs will be supported by AI analyses. In some cases, these will replace human discretion, but in other cases they’ll be a decision aid.”

A policy adviser for the U.S. banking system said, “I don’t think life changes that fast; technology is overrated as a true life-changing event. AI will help in some places but overall in terms of life ‘quality’ we will be about the same.”

A British-American computer scientist commented, “Increasing dependence on AI will decrease societal resilience through centralization of essential systems in a few large companies.”

An anonymous respondent said, “Decisions about what AI will govern and what it will do will be placed in the hands of a very few elite. Few people will understand what the AI is attempting to do and how it’s doing it; regular people without this knowledge will become more like sheep. The gap between rich and poor will continue to grow. Lack of education in AI and of inclusion of individuals in their own decision-making will make most people worse off in 2030.”

An anonymous respondent said, “AI will increasingly allow low-quality but passable substitutes for previously-skilled labor.”

A professional working on the setting of web standards wrote, “Looking ahead 12 years from now, I expect that AI will be enhancing the quality of life for some parts of some populations, and in some situations, while worsening the quality of life for others. AI will still be uneven in quality, and unevenly available throughout different parts of society. Privacy and security protections will be inadequate; data bias will still be common; many technologies and response patterns will be normed to the needs of the ‘common denominator’ user and mis-identify or mis-interpret interactions with people with disabilities or, if appropriately identifying their disability, will expose that information without user consent or control.”

An anonymous respondent wrote, “There are clearly advances associated with AI, but the current global political climate gives no indication that technological advancement in any area will improve most lives in the future. We also need to think ecologically in terms of the interrelationship between technology and other social-change events. For example, medical technology has increased lifespans, but the current opioid crisis has taken many lives in the U.S. among certain demographics.”

An internet pioneer said, “Nothing in our current social, economic or political structures points to a positive outcome. There is no evidence that more AI will improve the lives of most people. In fact, the opposite is likely to be the case. There will be more unemployment, less privacy, etc.”

A chief operating officer wrote, “No doubt in my mind, AI is and will continue to present benefits in simplifying and aiding human activities; however, the net effect is not likely ‘to leave people better off.’ The advances in AI-enabled tools are likely to expand the digital gap in human competencies. This growing gap will decrease the capacity of sizable portions of the population to survive an outage of the technology. This raises humanitarian and national-security concerns.”

An attorney specializing in policy issues for a global digital rights organization commented, “I’m not sure, today, whether the tech advances of the last 12 years have been net positive over the global population. We’ve seen a widening gap between the very rich and everybody else. That is likely bad for democracy. AI seems likely to make the employment/training problem worse in the U.S., and AI may have similar effects in countries that currently provide cheap labor. On the political-governmental side, AI will exacerbate current surveillance and accountability problems. I figure that AI will improve and speed up all biometric pattern recognition as well as DNA analysis and natural language processing. And though we know that much of this is biased, we’re not adequately counteracting the bias we know about. The companies who generate and disseminate AI technology have every incentive to continue, but I’m not optimistic that collective action – at least in the U.S. system – will successfully counter those incentives.”

An anonymous respondent wrote, “A better answer might be ‘it depends.’ It depends on how people adapt to these new changes. If people use advances in AI to be more productive, therefore creating more leisure time, change could be positive. However, with increasing cyberattacks and privacy concerns, AI could connect people to bad actors, which could cause stress and new problems. Even the simplest of attacks/pranks could negatively affect people’s lives. I have concerns about how people are adapting to these new changes and about the continuing disconnection people have due to advances in AI – substituting AI connections for real people, leading to greater depression.”

An anonymous respondent commented, “AI is not intelligent – it is human-made, and therefore biased and unreliable. It cannot do now what it is claimed it can do. Knowing humanity, I assume particularly wealthy, white males will be better off, while the rest of humanity will suffer from it.”

A senior partner at one of the world’s foremost management consulting firms commented, “AI will benefit businesses, the economy and people as consumers, but likely increase income/wage polarization so most people as workers may not benefit.”

The founder of a technology research firm wrote, “Neoliberal systems function to privilege corporations over individual rights, thus AI will be used in ways to restrict, limit, categorize. And, yes, it will also have positive benefits, as do most technologies.”

An engineer and chief operating officer for a project automating code said, “Those with the most money will leverage their position of power through AI; it will lead to possibly cataclysmic wealth disparity. AI will be used to suppress rights.”

A former employee of a pioneering internet company said, “In general people are complacent and do not relish change.”

An anonymous respondent wrote, “I question the efficacy of technology to improve the more serious problems facing humankind.”

A director of a center for digital health and behavior commented, “AI will exacerbate income inequality.”

A longtime economist for a top global technology company predicted, “The decline of privacy and increase in surveillance.”

A senior researcher in AI in a highly ranked university’s engineering program commented, “While I would hope that most people would benefit from technological advances, I suspect that benefitting would be limited to the privileged. All too many people do not even have easy internet access, consistent and strong signal or sufficient learning opportunities to make use of the technological advances.”

An anonymous respondent commented, “In 12 years AI may be more disruptive than enabling, leaving many without work until they retrain and transition.”

An anonymous respondent said, “The question was specifically about ‘most people,’ and I feel that improvements will be unequally distributed. There are significant implications for unskilled or easily-automated tasks on one end of the spectrum, and certain types of analysis on the other, that will be automated away. My concern is that we have no plan for these people as these jobs disappear.”

A digital activist and member of the Pirate Party wrote, “There will be a huge gap between users and admins, which makes a difference in how you will be able to interact with/influence AI-based systems.”

A digital rights activist commented, “AI is already (through racial recognition, in particular) technologically laundering longstanding and pervasive bias in the context of police surveillance. Without algorithmic transparency and transparency into training data, AIs can be bent to any purpose.”

An engineer who is a longtime leader in the IETF and Internet Architecture Board said, “During its initial phase, humans tend to use new technology to achieve the same ends as the old technology served. It takes a much longer time for new ends to emerge. As a result, I think a 12-year time horizon is too short to see a true collaboration emerge that will benefit the majority of humanity.”

An anonymous respondent commented, “My fear is that we will spend even more time with machines than we do talking with each other.”

A research scientist based in North America wrote, “2030 is barely a decade away. The wheels of legislation, which is a primary mechanism to ensure benefits are distributed throughout society, move slowly. While the benefits of AI/automation will accrue very quickly for the 1%, it will take longer for the rest of the populace to feel any benefits, and that’s ONLY if our representative leaders DELIBERATELY enact STRONG social and fiscal policy. For example, AI will save billions in labor costs – and also cut the bargaining power of labor in negotiations with capital. Any company using AI technologies should be heavily taxed, with that money going into strong social welfare programs like job retraining and federal jobs programs. For another example, any publicly funded AI research should be prevented from being privatized. The public ought to see the reward from its own investments. Don’t let AI follow the pattern of Big Pharma’s exploitation of the public-permitted Bayh-Dole act.”

A technology fellow for a global organization said, “I fear that AI will control many background choices with great implicating effects.”

A digital anthropologist for a major global technology company wrote, “The gap between those who benefit from advances in technology and those who do not has widened over the past three decades; I can’t see an easy or quick reversal.”

A leading infrastructure engineer for a social network company commented, “AI will have negative and positive aspects; it may make people’s lives better by making some things easier, but it will likely reduce human value along the way. I expect people to be less able to make decisions, less able to tolerate human interaction, etc.”

A respondent who works at a major global privacy initiative predicted AI and tech will not improve most people’s lives, citing, “Loss of jobs, algorithms run amuck.”

An anonymous respondent wrote, “Our species is being trained in convenience and near-total mediation (technological separation from the actual world, including the ecology and the world of embodied social interaction). Without being too flip here, I suspect several of our sci-fi writers and artists have a good handle on it. I foresee some combo of Philip K. Dick’s ‘Blade Runner’ world and maybe ‘Wall-E’ coming to pass in this century. By 2030, humanoid house assistants, police/security sentries, etc., and rampant environmental degradation will continue unchecked.”

An anonymous respondent wrote, “Data is too controlled by corporations and not individuals, and privacy is eroding as surveillance and stalking options have grown unchecked in the U.S. Europe is leading on privacy protections, but it is not enough.”

An anonymous respondent commented, “My fear is that the increasing ‘datafication’ of work and our lives as a whole will further increase the pressure we feel to reach an unrealistic apex of perfection, that AI will contribute to an ever-widening gap between the privileged/wealthy and the rest of the world.”

An anonymous respondent wrote, “The increasing dependence of humans on computing coupled with the fundamental insecurability of general-purpose computing is going to lead to widespread exploitation.”

A lead project engineer commented, “It is hopeless because most high-end AI knowhow is and will be controlled by a few giant corporations unless government or a better version of the United Nations steps in to control and oversee them.”

An anonymous respondent commented, “The capabilities are not shared equally, so the tendency will be toward surveillance by those with power to access the tools. Verbal and visual are coming together with capacities to sort and focus the masses of data.”

An anonymous respondent wrote, “Provided we are still locked in capitalism, I do not see how technology will help people stay engaged and empowered in our society.”

An anonymous respondent said, “It is essential that policymakers focus on impending inequalities. The central question is for whom will life be better, and for whom will it be worse? Some people will benefit from AI, but many will not. For example, folks on the middle and lower end of the income scale will see their jobs disappear as human-machine/AI collaborations become lower-cost and more efficient. Though such changes could generate societal benefits, they should not be borne on the backs of middle- and low-income people.”

An anonymous respondent commented, “What is important is not the technology but who controls it since this determines the uses to which it is put. AI could be used for empowerment and support for creativity and so on; it could be used for surveillance, Taylorism [also known as scientific management theory – maximizing efficiency] and political control from above. Political change will determine whether AI technologies will benefit most people or not; I am not optimistic due to the current growth of authoritarian regimes and the growing segment of the super-rich elite who derive disproportionate power over the direction of society from their economic dominance.”

An anonymous respondent said, “Mechanisms must be put in place to ensure that the benefits of AI do not accrue only to big companies and their shareholders. If current neo-liberal governance trends continue, the value-added of AI will be controlled by a few dominant players, so the benefits will not accrue to most people. There is a need to balance efficiency with equity, which we have not been doing lately.”

An anonymous respondent wrote, “AI will leave most people worse off than today, potentially with a loss of autonomy and a great bifurcation – there will be the workers who inform how AI works and there will be those at the bottom of the hierarchy of workers and society. There could be a thinning out of the middle – middle management and class.”

A professor of electrical and computer engineering based in Europe commented, “The problem lies in human nature. The most powerful will try to use AI and technology to increase their power and not to the benefit of society. Human-computer interaction will be seamless: People and computers will talk as people talk to each other.”

A network science researcher said, “There is a risk that the difficulties I already see today in interacting with ‘too-smart’ augmented environments around us will push people to reduce their enthusiasm in adopting them. AI is not useful if we do not deploy effective models to interact with the ‘intelligent’ surrounding environments that best couple with our ‘mental models.’ Research is concentrating too much on AI and in my humble opinion not enough on the aspect I pointed out.”

An anonymous respondent commented, “The question you asked is scientifically ridiculous.”

An anonymous respondent said, “AI will help in some ways but decrease autonomy and control in others. And please stop Facebook now from feeding me silly ads.”

A professor of social simulation and director of a policy center in Europe wrote, “It will enhance some people’s lives and diminish others’.”

A European computer science professor expert in machine learning commented, “The social sorting systems introduced by AI will most likely define and further entrench the existing world order of the haves and the have-nots, making social mobility more difficult and precarious given the unpredictability of AI-driven judgements of fit. The interesting problem to solve will be the fact that initial designs of AI will come with built-in imaginaries of what ‘good’ or ‘correct’ constitutes. The level of flexibility designed in to allow for changes in normative perceptions and judgements will be key to ensuring that AI-driven systems support rather than obstruct productive social change.”

An anonymous respondent said, “The problems are about who is in charge of implementing the AI advances and how do people manage the wealth distribution.”

An anonymous respondent said, “The effects of any technologies are mediated through social and economic contexts. If corporations and private interests control the technology, we will be worse off. Public control of these, and other, technologies is necessary.”

An anonymous respondent commented, “AI is likely to differentially contribute to the human experience. Some will benefit, while others will suffer. The bifurcated economy will continue to grow. For those at the top, life will be assisted in many ways including health. Those at the bottom of the ladder will see greater numbers of jobs being taken away by technology.”

An Internet Hall of Fame member based in the U.S. wrote, “I think we’ll find the internet is even more ubiquitous – every lightbulb will have a WiFi hotspot in it – and less and less visible. You’ll talk to your digital assistant in a normal voice and it will just be there – it will often anticipate your needs, so you may only need to talk to it to correct or update it.”

An Internet Hall of Fame member expert in network architecture said, “Given my utter lack of being able to predict what happened in the past 50 years, I am reluctant to make any guesses, but I imagine that the equivalent of the ‘Star Trek’ universal translator will become practical. This will enable travelers to better interact with people in countries that they visit, facilitate online discussions across language barriers, etc.”

An anonymous respondent wrote, “It is impossible to tell, impossible to measure all of the dimensions across all economies and societies to determine what the impact of AI will be. Also, the term ‘AI’ is so vague that it makes little sense outside a newspaper headline. While various deployments of new data science and computation will help firms cut costs, reduce fraud and support decision-making that involves access to more information than an individual can manage, organisations, professions, markets and regulators (public and private) usually take many more than 12 years to adapt effectively to a constantly changing set of technologies and practices. This generally causes a decline in service quality, insecurity over jobs and investments, new monopoly businesses distorting markets and social values, etc. For example, many organisations will be under pressure to buy and implement new services, but unable to access reliable market information on how to do this, leading to bad investments, distractions from core business and labour and customer disputes.”

A co-author of a research study on the future architecture of the Internet of Things wrote, “AI and connection of devices will definitely work for humans’ upliftment. We will have huge data and we shall be generating datasets for learning. Each such learning shall give us the in-depth analysis of human habits as well as human minds. Medical fields shall see the maximum benefits. In far-off places people can directly use an advanced app to diagnose themselves. Medicines, after-effects and response shall be swift. People would be more interested in human-machine interaction. Automated processes shall be evolving systems. We need to balance between human emotions and machine intelligence. Can machines be emotional? That’s the frontier that we have to conquer.”

An anonymous respondent commented, “Overall, AI will help people to manage the increasingly complex world we are forced to navigate. AI might narrow possibilities in certain cases, but it will ultimately empower individuals to not be overwhelmed and find what they are looking for, whether a good product or a physician. I think that particularly in broadly creative endeavors, from art to music to programming to scientific discovery, AI will help creators do their work better, whether it’s suggesting hypotheses or potential designs.”

The director of a cognitive research group at one of the world’s top AI and large-scale computing companies predicted that by 2030, “Smartphone-equivalent devices will support true natural-language dialog with episodic memory of past interactions. Apps will become low-cost digital workers with basic commonsense reasoning.”

An ARPANET and internet pioneer wrote, “I view the kind of AI we are currently able to build as good for data analysis but far, far away from ‘human’ levels of performance. The next 20 years won’t change this, but we will have valuable tools to help analyze and control our world.”

A professor of computer science expert in systems who works at a major U.S. technological university wrote, “The judgment of better or worse depends on the individual and social values of the time. For some settings, e.g., workplace, those value systems may be less changeable and therefore somewhat comparable between now and the future. The pursuit of higher performance and efficiency (more output per person per day) in the workplace has introduced incessant pressure and stress for workers. To some extent, the introduction of robots (and AI in the future) may lessen this pressure, since the repetitive tasks will be performed by robots/AI that have higher efficiency (in those tasks) compared to humans. Of course, the flip side is the issue of job loss, but the global economic gains in the last 50 years seem to indicate better outcomes overall. By 2030, we should expect advances in AI, networking and other technologies enabled by AI and networks, e.g., the growing areas of persuasive and motivational technologies, to improve the workplace in many ways beyond replacing humans with robots. As a concrete example, physiological monitoring devices could detect signals (e.g., a lower heart rate and a decreasing blood sugar level) that indicate lower levels of physical alertness. Smart apps could detect those decaying physical conditions (at an individual level) and suggest improvements to the user (e.g., taking a coffee break with a snack). Granted, there may be large-scale problems caused by AI and robots, e.g., massive unemployment, but the recent trends seem to indicate that small improvements, such as the health monitor apps outlined above, would be more easily developed and deployed successfully.”

A CEO and editor-in-chief wrote, “Humans always find innovation through necessity.”

A director of marketing for a major technology platform company commented, “Deliberate and thoughtful use of AI technology in select industries and tasks will make citizens and society better off.”

A liberal arts professor based at a major university in India responded, “Mentally we always want our work to be done by others, especially physical work. AI fits into this mental framework quite well. We will become more skilled in a different kind of work, more mentally involved.”

The lead QA engineer for a technology group said, “For medicine AI has begun and will continue to support medical personnel in diagnosing and identifying health issues. Without a doubt there are always folks left out of these advances in technology. I can’t imagine how AI will affect the home and workplace. But I do see a positive trajectory. As human nature is, there will be those who use it to the detriment of others. Just thinking of viruses, phishing, etc.”

A researcher and teacher of digital literacies and technical communication at a U.S. university responded, “Overall our lives will be enhanced except that we will give up some of our basic rights, such as privacy, and there will be surveillance.”

A member of the editorial board of the Association of Computing Machinery journal on autonomous and adaptive systems commented, “By developing an ethical AI, we can provide smarter services in daily life, such as collaborating objects providing on-demand highly adaptable services in any environment supporting daily life activities.”

A representative for a nation-state’s directorate of telecommunications wrote, “My hope is that AI will continue to open a new window to daily life at least similar to, if not better than, other innovative windows in ICT. My fear is that humans will become more and more dependent on AI, to the extent that their natural intelligence would be more and more diminished. The concern is that in the absence of AI they may not be able to act in a timely manner.”

A manager with a major digital innovation company said, “While the human mind is capable of storing a large quantity of information, it pales in raw capacity to the storage ability of machines. Couple the information storage with the ever-increasing ability to rapidly search and analyze that data, and the benefits to augmenting human intelligence with this processed data will open up new avenues of technology and research throughout society.”

An expert in technological and science systems for defense and warfare responded, “In general people will have more access to information and the ability to experience new things, which will be better. However, with all new positives there may be some who exploit others with the new capability. Guarding against negative uses of a mostly positive technology (AI and the internet) is a concern.”

A professor expert in AI whose university is connected to one of the major global technology company projects in AI development wrote, “The future is about sustaining our planet, which is a prerequisite to sustaining our human life form. As with the current development of precision health as the path from data to wellness, so too will artificial intelligence improve the impact of human collaboration and decision-making in sustaining our planet. Precision democracy will emerge from precision education, to incrementally support the best decisions we can make for our planet and our species.”

A data analyst for an organization developing marketing solutions said, “Assuming that policies are in place to prevent the abuse of AI and programs are in place to find new jobs for those who would be career-displaced, there is a lot of potential in AI integration. By 2030, most AI will be used for marketing purposes and be more annoying to people than anything else as they are bombarded with personalized ads and recommendations. The rest of AI usage will be its integration into more tedious and repetitive tasks across career fields. Implementing AI in this fashion will open up more time for humans to focus on long-term and in-depth tasks that will allow further and greater societal progression. For example, AI can be trained to identify and codify qualitative information from surveys, reviews, articles, etc., far faster and in greater quantities than even a team of humans can. By having AI perform these tasks, analysts can spend more time parsing the data for trends and information that can then be used to make more informed decisions faster and allow for speedier turn-around times. Minor product faults can be addressed before they become widespread, scientists can generate semiannual reports on environmental changes rather than annual or biannual, teachers looking for new classroom activities can just go to one site with a series of filters rather than search through multiple sites in their spare time.”

A manager with a major African nation’s communications regulatory authority wrote, “People will have technology at their fingertips. In developing countries this will be felt at the office more than in the homes. More systems meant to make jobs easier will be acquired, allowing the job to be done with minimal effort and supervision. Some work will even be done at home more than in the office, as advanced technology makes applications easier and requires less monitoring, since systems will be able to check each other.”

A professor of media studies at a U.S. university commented, “My answer to the question is a bit misleading because I don’t believe that advancing technology/AI will be the primary factor determining whether or not most people will be better off in 2030. Technology will be a material expression of social policy. If that social policy is enacted through a justice-oriented democratic process, then it has a better chance of producing justice-oriented outcomes. If it is enacted solely by venture-funded corporations with no obligation to the public interest, most people in 2030 will likely be worse off.”

A director of a marketing company and futurist based in Europe commented, “There is a need to invest in R&D&I and education for a positive impact by AI.”

The general manager of a top-level internet domain organization based in Africa wrote, “As more people embrace the Internet of Things, artificial intelligence will become the key cornerstone of the Internet of Things. This will be supported by IPv6.”

A founder and president said, “The future of AI is more about the policies we choose and the projects we choose to fund. I think there will be large corporate interests in AI that serve nothing but profits and corporations’ interests. This is the force for the ‘bad.’ However, I also believe that most technologists want to do good, and that most people want to head in a direction for the common good. In the end, I think this force will win out.”

A professor emeritus expert on technology’s impacts on well-being wrote, “AI will anticipate our needs and help provide us extra time that is usually devoted to mundane tasks that can be automated.”

An anonymous respondent commented, “If discernment is used regarding AI resources, it’s good, but not otherwise; one size doesn’t fit all. Some blind people like the idea of driverless cars, but others need a person taking them to the door of a place. I like the voice-to-text for dictation.”

A professor emeritus expert in organizational communication and technology commented, “Most people will not notice the increased use of AI since it will enhance and extend what we already do. Perhaps the most noticeable will be in medicine – diagnosis, surgery, etc. However, there will be unanticipated consequences. We must all be more mindful.”

A professor of computing sciences based in Mexico who is expert in AI said, “AI is a technological tool, just as electricity was at the end of the 19th century. Imagine you would ask the question ‘Will electricity result in empowering people or will they become more dependent on it and will it, in the end, impoverish their lives?’”

A co-founder of a program for liberation technology wrote, “It’s just a new form of automation. The history of technology shows that the number of new roles and jobs created will likely exceed the number of roles and jobs that are destroyed.”

An Australian internet pioneer and lecturer in computer science said, “Most human/AI interaction will be at a small-scale routine level. As an example, AI will read communications addressed to us, suggest routine appointments for us and indicate which messages we should read and which to ignore (an advanced form of the assistants already available).”

A postdoctoral associate at the MIT Media Lab and fellow at Harvard University said, “In the near future, AI will most likely be used to enhance human capabilities. Though AI will inevitably replace a portion of the human work force, causing a short disruption in the economy, in the long term, AI will not be competing with humanity but augmenting it for the better.”

A technical evangelist for a major organization commented, “I believe people will guide decisions to serve the common good.”

The director of a media psychology group responded, “Technology, whether it’s AI or something else, is a tool, not a social agenda. Every tool can be used well or poorly. We choose how to develop and use the potential. AI offers opportunities for personalized interaction, making things like education and training or health care more accessible and effective by providing the appropriate challenge and skill levels to scaffold learners and support behavior change. We make a mistake when we look for direct impact without considering the larger picture – we worry about a worker displaced by a machine rather than focus on broader opportunities for a better-trained and healthier workforce where geography or income no longer determine access not just to information but to relevant and appropriate information paths.”

The director of a center for technology, society and policy studies at a major university in Silicon Valley responded, “I am very cautiously optimistic because I think AI can significantly improve usability and thus access to the benefits of technology. Many powerful technical tools today require detailed expertise, and AI can bring more of those to a larger swath of the population.”

An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life:’ individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians). In other professions, AI will enable greater individualization, e.g., education based on the needs and intellectual abilities of each pupil/student. Of course, there will be some downsides: greater unemployment in certain ‘rote’ jobs (e.g., transportation drivers, food service, robots and automation, etc.).”

An anonymous respondent commented, “Technology has normally made life better for a broad number of people in the past. The internet, I believe, has been broadly beneficial; however, each technological advancement has had corresponding problems. The internet, for all its good, has at times been used for malicious purposes – for theft, exploitation and political manipulation. My fear is that AI will be developed too quickly and that there may be severe repercussions once the genie is out of the bottle. AI holds a lot of promise but a great deal of downside if not properly controlled in its early stages.”

A research professor of international affairs at a major university in Washington, D.C., responded, “We have to find a balance between regulations designed to encourage ethical nondiscriminatory use, transparency and innovation.”

A member of the Japanese Society for Artificial Intelligence commented, “Technologies, although they cause harm through various kinds of misuse, are beneficial over the longer span. I can’t see any reason this should change.”

A director emeritus of a center for technology research in the interest of society commented, “People will use machines in a complementary way.”

An anonymous expert in integrated information technology said, “Although there will be unintended consequences, in general people will be better off with advances in artificial intelligence.”

A distinguished engineer at one of the world’s largest computing hardware companies commented, “Tech will add value, AI not so much. Tech will continue to be integrated into our lives in a seamless way. It will be a slow and steady evolution from where we are today. My biggest concern is responsible gathering of information and its use. Information can be abused in many ways as we are seeing today.”

An expert in knowledge, creativity and support systems said, “If AI can better learn individual preferences and suggest better strategies for addressing them – and, furthermore, can point to other people’s preferences and the strategies used to achieve them, thus broadening people’s horizons and allowing them wider perspectives on the path they have chosen so far and where it can lead them in the future – then it seems like an advancement in individual and perhaps also social achievement.”

A professor of mathematics and statistics commented, “My assumption is that AI will still be governed by human beings who will have the ultimate say in the acceptance/decline of decisions and analyses performed by AI. As when computers and calculators first appeared, life improved for many, but some skills became devalued.”

A lecturer in communications law based in Washington, D.C., wrote, “This may be more hope than anything else, but I expect the net benefits of AI to outweigh the detriments. In particular, I see economic efficiencies and advances in preventive medicine and treatment of disease. However, I do think there will be plenty of adverse consequences.”

A research scientist who works for Google said, “Things will be better, although many people are deeply worried about the effects of AI.”

A senior researcher and programmer for a major global think tank commented, “I expect AI to be embedded in systems, tools, etc., to make them more useful. However, I am concerned that AI’s role in decision making will lead to more brittle processes where exceptions are more difficult than today – this is not a good thing.”

An anonymous respondent wrote, “In health care, for example, I hope AI will improve diagnostics and reduce the number of errors. Doctors cannot recall all the possibilities; they have problems correlating all the symptoms and recognizing the patterns. I hope that in the future patients will be interviewed by computers, which will correlate the described symptoms with results of tests. I hope that with the further development of AI and cognitive computing there will be fewer errors in reports of medical imaging and diagnosis.”

An anonymous respondent said, “My fear is that technology will further separate us from what makes us human and sensitive to others. My hope is that technology would be used to improve the quality of living, not supplant it. Much of the AI innovation is simply clogging our senses, stealing our time, increasing the channels and invasion of adverts. This has destroyed our phones, filled our mailboxes and crowded our email. No product is worth that level of incursion.”

An anonymous respondent said, “Really, the results will be determined by the capacity of political, criminal justice and military institutions to adapt to rapidly evolving technologies. But 12 years from now most innovations will be beneficial and not catastrophic. Yet.”

An anonymous respondent wrote, “2030 is still quite possibly before the advent of human-level AI. During this phase AI is still mostly augmenting human efforts – increasingly ubiquitous, optimizing the systems that surround us and being replaced when their optimization criteria are not quite perfect – rather than pursuing those goals programmed into them whether we find the realization of those goals desirable or not.”

An anonymous respondent wrote, “AI will produce major benefits in the next 10 years, but ultimately the question is one of politics: Will the world somehow manage to listen to the economists, even when their findings are uncomfortable?”

A professor of information science wrote, “There is the possibility that AI will improve our access to information and resources (e.g., health care). I am, at the same time, afraid that systems will be developed that do not protect people’s privacy and security.”

An anonymous respondent said, “The core hope is that AI and the internet continue to be tools to improve communication, education and awareness.”

A policy analyst for a major internet services provider said, “There are many ways in which AI can improve our lives moving forward. Many people think of AI solely in the context of robots and machines with ‘human levels’ of intelligence. However, it also encompasses things such as better pacemakers and health diagnostics tests, as well as systems for better resource allocation. There are many ways in which AI can help improve our lives; we just need to be careful about what data is being used and how.”

An anonymous respondent wrote, “AI is starting to offer solutions not previously seen, particularly in areas of health care and home automation. However, serious problems continue in anticipating human responses, in security and in usefulness of solutions.”

An anonymous respondent said, “AI is just a tool; its usefulness will improve as the tool improves.”

An anonymous respondent wrote, “In the next 12 years we will see medical advances and disaster prevention due to AI. We will become more secure and tend to start trusting AI. After that – more likely in 30-50 years – we will become less secure due to inequity in access and the use of AI by governments to control and instill fear in the public. At some point in the future those governments will lose control of AI, and private organizations and individuals (or AI individuals) will use AI to their advantage. Unless those in control have largesse in mind, the division between haves and have-nots will create walled cities protected by drones to keep out the unwanted. The rest is science fiction coming upon us.”

A digital and interactive strategy manager commented, “When there is a review of the pros and cons of the technology used, based on different factors such as environment, economics, etc., AI can be very beneficial. However, the technology should work in tandem with ethics so that ethical guidelines are followed.”

A technology company founder and CEO said, “‘Most people.’ This is the point: We’re being challenged to go beyond what is human.”

An anonymous respondent wrote, “There will be an explosive increase in the number of autonomous cognitive agents (e.g., robots), and humans will interact more and more with them, unaware, most of the time, whether they are interacting with a robot or with another human. This will increase the number of personal assistants and the level of service.”

An anonymous respondent commented, “I see AI as providing tools that make life more efficient and more convenient. We are now used to accurate text- and voice-recognition. At one point those tasks were goals of AI. Similarly, AI will provide cars and other systems that are safer than today’s. 2030 is only 12 years from now, so I expect that systems like Alexa and Siri will be more helpful but still of only medium utility. The combination of widespread device connectivity and various forms of AI will provide a more pleasant everyday experience but at the expense of an even further loss of privacy.”

An anonymous respondent wrote, “I have two fears: 1) loss of privacy and 2) building a ‘brittle’ system that fails catastrophically after a hacker attack, prolonged power failure or (a long-shot, paranoid worry) a massive solar EMP (Carrington Event).”

An engineer wrote, “As an engineer, much of my work is in the field of numerical control. I strongly believe that an increasing use of numerical control will improve the lives of people in general.”

An anonymous respondent said, “AI will be a useful tool – just as many other technological developments have been useful tools. I am quite a ways away from fearing SkyNet and the rise of the machines.”

A consultant and analyst commented, “The use of technology in education is minimal today due to the existence and persistence of the classroom-in-a-school model. As we have seen over the last 30 years, the application of artificial intelligence in the field of man/machine interface has grown in many unexpected directions. Who would have thought back in the late 1970s that the breadth of today’s online (i.e., internet) capabilities could have emerged? I believe we are just seeing the beginning of the benefits of the man/machine interface for mankind. The institutionalized education model must be eliminated to allow education of each and every individual to grow. The human brain can be ‘educated’ 24 hours a day by intelligent ‘educators’ who may not even be human in the future. Access to information is no longer a barrier as it was 50 years ago. The next step now is to remove the barrier of structured human delivery of learning in the classroom. I know from my own experience earning a Doctor of Philosophy degree when I was 58 years old that I learn differently now that I am older than I learned when I was a child or a college student. I believe that I can educate myself better than any other human could. With unlimited access to the man/machine interface of existing knowledge, I can educate myself best.”

A top research director and technical fellow at a major global technology company said, “There is a huge opportunity to enhance folks’ lives via AI technologies. The positive uses of AI will dominate as they will be selected for their value to people. I trust the work by industry, academia and civil society to continue to play an important role in moderating the technology, such as pursuing understandings of the potential costly personal, social and societal influences of AI. I particularly trust the guidance coming from the long-term, ongoing One Hundred Year Study on AI and the efforts of the Partnership on AI.”

A member of the IETF wrote, “It will be of help mostly in regard to healthcare for people living in poor areas of the world where there are few doctors.”

A cybersecurity strategist said, “There are a lot of things happening in the information and communications technologies sector – some good, others bad. The world has become technologically-oriented and this creates challenges – for example, cybercrime.”

An anonymous respondent said, “The advancement of artificial intelligence will help in solving problems that once required extensive cranking of data. Such an example would be in the field of health and personalized medicine, where data collected with the help of machine learning and AI can aid in diagnoses and in the choice of medication.”

An anonymous respondent commented, “Data can reduce errors – for instance, in clearly taking into account the side effects of a medicine or use of multiple medications. In addition, diagnostic analysis and the merging of data science and AI could benefit strategic planning of the future research and development efforts that should be undertaken by humanity.”

An anonymous respondent wrote, “This is just another technology revolution in which humans gain efficiency and effectiveness. We can expect that automation and AI will enhance humans’ quality of life overall. For example, the use of exoskeletons and autonomous vehicles will help to improve human beings’ mobility. This, in turn, will change the overall environment of various sectors – for example, transportation, caretaking of others and other labor-intensive work. However, as more and more people have AI/automation support in their daily lives, the interactions between people will lessen. People may feel more isolated and less socially interrelated. Social interaction must be carefully maintained and evolved.”

The director of a digital creativity think tank wrote, “Health care: An AI system will be helping my doctor make decisions.”

An anonymous respondent said, “Life will be a lot like it is today. The hope is that it will be seamless and helpful in our day-to-day lives. There will be new threats that come out of it, too.”

An anonymous respondent wrote, “Technology always creates wealth long-term.”

An anonymous respondent commented, “Breakthroughs in medicine and automation will save a lot of people’s lives. Automation will allow elderly and people dependent on others for care to maintain independence longer.”

A director of e-business research at a large data management firm said, “Patient diagnosis of medical conditions will be speeded up and more comprehensive and accurate because of machine learning and predictive analytics.”

A professor of psychology for a human-computer interaction institute commented, “AI will reduce human error in many contexts: driving, workplace, medicine and more.”

A professor of political science and pollster said, “In teaching, it will enhance knowledge about student progress and how to meet their individual needs. Used properly, it will offer guidance options, based on the unique preferences of students, that can guide learning and career goals.”

A post-doctoral fellow studying data and society said, “AI will assist us in doing mechanical procedures such as driving, cooking, making stuff based on our commands, etc.”

An open-source technologist in the automotive industry wrote, “Hopefully human-to-machine interfaces will continue to ‘disappear’ and allow seamless interaction with AI. This presents privacy problems, so we’ll have to have independent AI systems with carefully controlled data access, clear governance and right-to-be-forgotten.”

An anonymous respondent said, “AI will help us navigate choices, find safer routes and avenues for work and play, and help make our choices and work more consistent.”

A writer and editor who documented the boom of the internet in the 1990s wrote, “Unless AI can respond to changing human wishes, it will fail economically. It is inherently no more dominating than the computer itself. In fact, its flexibility makes it less so.”

A senior strategist in regulatory systems and economics for a top global telecommunications firm wrote, “If we do not strive to improve society, making the weakest better off, the whole system may collapse. So, AI had better serve to make life easier for everyone.”

An anonymous respondent said, “AI systems will most likely be humanistic in design and functioning, but we should ensure that values (local or global) and basic philosophical theories on ethics inform the development and implementation of AI systems.”

An anonymous respondent wrote, “Repeatedly throughout history people have worried that new technologies would eliminate jobs. This has never happened, so I’m very skeptical it will this time. Having said that, there will be major short-term disruptions in the labor market and smart governments should begin to plan for this by considering changes to unemployment insurance, universal basic income, health insurance, etc. This is particularly the case in America, where so many benefits are tied to employment. I would say there is almost zero chance that the U.S. government will actually do this, so there will be a lot of pain and misery in the short and medium term, but I do think ultimately machines and humans will peacefully coexist. Also, I think a lot of the projections on the use of AI are ridiculous. Regardless of the existence of the technology, cross-state shipping is not going to be taken over by automated trucks any time soon because of legal and ethical issues that have not been worked out.”

An anonymous respondent said, “If you design the system correctly, human errors in miscellaneous parts of life will be eliminated.”

An anonymous respondent wrote, “The operative word is ‘most.’ It is far from clear what all the elements may be, but some advances in health and transportation would indicate the quality and quantity of life would, on average, get better. Many scenarios, however, indicate this would not come without cost.”

An anonymous respondent said, “These will be market-driven, and only enhancements will prove fruitful.”

The managing director of research in Europe for a major IT infrastructure company said, “Artificial intelligence will change all of our lives similarly to the way in which steam, electricity, computers and mobile communication have done. Lives will be easier, more comfortable and more healthy, but there will also be many people suffering from the dramatic changes that come with new technologies.”

An anonymous respondent wrote, “Technology has already improved our lives dramatically. I can’t think it is going to stop.”

An anonymous respondent said, “The most important place where AI will make a difference is in health care of the elderly. Personal assistants are already capable of many important tasks to help make sure older adults stay in their home. But adding to that emotion detection, more in-depth health monitoring and AI-based diagnostics will surely enhance the power of these tools.”

An anonymous respondent wrote, “Human-machine/AI collaboration will reduce barriers to proper medical treatment through better recordkeeping and preventative measures, improve transportation options for persons and goods through automation, improve educational opportunities broadly by offering tailored curricula, and reduce workplace injuries and accidents through automation.”

An anonymous respondent commented, “What does ‘better’ mean? Longer life expectancy, more understanding of health, population reduction, global cooling? More will be delegated to technology – smartphones, software. People will stop thinking or caring about ‘control’ and delegate to ‘the system,’ which might free up time for person-to-person caring and involvement.”

An anonymous respondent wrote, “Initially AI will augment rather than automate many human tasks and experiences. For example, in a learning environment we already have AI tutors to help students learn, but in the future I envision this becoming more personalized toward individual students. It doesn’t seem yet that AI will replace human teachers, at least in 2030, but many of the classroom functions could be taken over by AI. Syllabi, study material and practice questions might become more personalized. My hope is this will enhance students’ learning experience and even accelerate it. My fear is that if we aren’t careful enough, this technology will only be available to select populations. We need to ensure that essential technology for health care and education is made available to everyone, irrespective of their economic or social status, to ensure that there isn’t a widening divide between different demographic groups because of AI.”

A policy director with the European Commission wrote, “Improved certainty of medical tests, better and more efficient technical and mechanical operations, faster and better access to results.”

An anonymous respondent commented, “Manual labor reduced by robots; less repetitive work due to AI deployment; safer self-driving vehicles; reduced information overload from personal AI assistants.”

A principal researcher for a top global technology company said, “In the long term most technologies have helped more people than they have harmed. AI should be no different.”

An anonymous respondent commented, “Like any trend it can go both ways. I’m optimistic that the benefits will outweigh the risks. There are many places where the use of AI will augment what humans can do. In the health sector, AI can take over many of the administrative tasks current doctors must do, allowing them more time with patients. Provided that we can ensure the safety of data, a patient could carry access to his/her data with them, and work on it together with their doctor. People could use a virtual doctor for information and first-level response; so much time could be saved!”

An anonymous respondent said, “In the next 12 years, it is my belief that the majority of humans will harness the power and functionality of AI to improve their lives by automating mundane tasks, lessening their workload and assisting where they identify a need. The AI assistance will be of a positive nature and take the form of ‘help.’ This will leave them with more time for tasks they deem enjoyable and fun. In the long term, however, further applications of AI will be investigated that are more controversial and possibly more intrusive.”

An artificial intelligence researcher working for one of the world’s most powerful technology companies wrote, “AI will enhance our vision and hearing capabilities, remove language barriers, reduce time to find information we care about and help in automating mundane activities.”

An anonymous respondent wrote, “I believe AI systems will extend human intellectual and cognitive capabilities so that, in particular, we will have found cures to some of the diseases affecting us today.”

An anonymous respondent commented, “Many factors will be at work to increase or decrease human welfare, and it will be difficult to separate them. AI will work together with other disciplines to improve welfare (e.g., medicine, sustainability, security, efficient markets, etc.). I worry (for one) about AI enabling mass surveillance.”

A professor and researcher in AI based in Europe said, “The question should not be binary. Many different aspects should be taken into consideration and some are on the positive side, others on the negative. Using technological AI-based capabilities will give people the impression that they have more power and autonomy. However, those capabilities will be available in contexts already framed by powerful companies and states. No real freedom. For the good and for the bad.”
