Elon University and the Pew Research Center surveyed nearly 1,000 leading technologists about the wide-ranging impact of artificial intelligence on individuals' quality of life.
Experts say the rise of artificial intelligence (AI) will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will, according to a new report by the Pew Research Center and Elon University’s Imagining the Internet Center.
> Read the USA Today story on this report
> Read the story in VentureBeat
These findings are based on an extensive, non-scientific canvassing in which 979 technologists, innovators, developers, business and policy leaders, researchers and activists answered questions about the possible impact of AI developments. The report’s release ties to a presentation at the “Our People-Centered Digital Future” conference in San Jose, CA – an event featuring the pioneers of the internet and officials from the United Nations.
This report, part of a long-running series about the future of the internet, is based on a non-random canvassing of experts conducted from July 4 to August 6, 2018. A total of 979 respondents answered the question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? The survey then asked respondents to explain their answers in an open-ended response. The results are not projectable to any population other than the individuals in this sample.
Overall, 63% of respondents predicted that the majority of individuals will be mostly better off in 2030, and 37% said people will not be better off. Most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. Analysis of these responses surfaced several major themes:
- Human agency: Experts worry that people are losing control over their lives as decision-making in digital systems is automatically ceded to code-driven “black box” tools.
- Data abuse: Respondents are concerned that most AI is controlled by companies or governments whose focus is on profits and power, not on human-centered values and ethics.
- Job loss: While some expect new jobs will emerge, many worry a mass takeover of job skills by autonomous systems may widen economic divides, possibly leading to populist uprisings.
- Dependency lock-in: Many see AI as augmenting human capacities, but some predict the opposite – that people’s deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
- Mayhem: Some say significant damage could be caused by autonomous weapons, cybercrime and the use of weaponized information, lies and propaganda to destabilize human groups.
“These experts point out the vast opportunities and frightening possibilities presented by autonomous systems,” said Lee Rainie, director of internet and technology research at Pew Research Center. “They expect networked artificial intelligence to tremendously amplify human effectiveness, and, at the same time, they point out that these same AI advances raise threats to human autonomy, agency and capabilities.”
Some of the wide-ranging possibilities various experts said they expect to see in the next wave of AI development include:
- AI might soon match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation.
- “Smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.
- Health care is seen as the most likely space in which applications of AI will have the most impact – for instance, in diagnosing and treating patients and in helping senior citizens live fuller and healthier lives.
“A number of these experts focused on possible solutions to the problems they foresee,” noted Janna Anderson, director of the Imagining the Internet Center in the Elon University School of Communications. “They can be distilled to a pretty clear plea: People should join forces to innovate widely accepted approaches aimed at open, decentralized, intelligent networks. They suggest economic and political systems should be reinvented to better help humans ‘race with the robots,’ expanding their capacities and capabilities to heighten human/AI collaboration.”
Following is a sample of thoughts shared by experts through this survey:
- Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, observed, “We can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons… The right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values.”
- Sonia Katyal, co-director of the Berkeley Center for Law and Technology, replied, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights… Questions about privacy, speech, the right of assembly and technological construction of personhood will all reemerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all.”
- danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, responded, “There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability… This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”
- Amy Webb, founder of the Future Today Institute, said, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI… We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work.”
- Susan Etlinger, industry analyst for Altimeter Group, commented, “In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity… Algorithms aren’t neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and means to use them rests in the hands of a select few organizations.”
- David Wells, chief financial officer at Netflix, said, “Technology progression and advancement has always been met with fear and anxiety, giving way to tremendous gains for humankind as we learn to enhance the best of the changes and adapt and alter the worst. Continued networked AI will be no different but the pace of technological change has increased, which is different and requires us to more quickly adapt.”
- Judith Donath, author of “The Social Machine: Designs for Living Online,” observed, “Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us… Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them — to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them.”
- Jerry Michalski, founder of ReX, replied, “Businesses are doing all they can to eliminate full-time employees, who get sick and cranky, need retirement accounts and raises, while software gets better and cheaper. The Precariat will grow. Software is like a flesh-eating bacterium: tasks it eats vanish from the employment landscape. Our safety net is terrible and our beliefs about human motivations suck.”
- Nathaniel Borenstein, chief scientist at Mimecast, said, “I foresee a world in which IT and so-called AI produce an ever-increasing set of minor benefits, while simultaneously eroding human agency and privacy and supporting authoritarian forms of governance. I also see the potential for a much worse outcome in which the productivity gains produced by technology accrue almost entirely to a few, widening the gap between the rich and poor.”
- Wendy Hall, executive director of the Web Science Institute, said, “It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity. We may not have all the answers by 2030 but we need to be on the right track by then.”
- Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale, commented, “Innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations, but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”
- Micah Altman, head scientist in the program on information science at MIT Libraries, wrote, “These technologies will help to adapt learning to the needs of each individual by translating language, aiding memory and providing us feedback on our own emotional and cognitive state, and on the environment… AI has the potential to assist us to engage with the world better – even when conditions are not ideal – and to better understand ourselves.”
- Craig Mathias, principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing, replied, “Many if not most of the large-scale technologies that we all depend upon – such as the internet itself, the power grid and roads and highways – will simply be unable to function in the future without AI, as both solution complexity and demand continue to increase.”
- Benjamin Kuipers, a professor of computer science at the University of Michigan, wrote, “Advancing technology will vastly increase opportunities for communication and surveillance. The question is whether we will find ways to increase trust and the possibilities for productive cooperation among people, or whether individuals striving for power will try to dominate by decreasing trust and cooperation.”
- Lee McKnight, associate professor, School of Information Studies, Syracuse University, commented, “Poorly designed artificially intelligent services and enterprises will have unintended societal consequences, hopefully not catastrophic, but sure to damage people and infrastructure… defending ourselves against evil – or, to be polite, bad AI systems turned ugly by humans or other machines – must become a priority for societies well before 2030.”
- Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “What worries me most is worry itself: An emerging moral panic that will cut off the benefits of this technology for fear of what could be done with it. What I fear most is an effort to control not just technology and data but knowledge itself, prescribing what information can be used for before we know what those uses could be.”
- Steve Crocker, Internet Hall of Fame member, responded, “AI and human-machine interaction has been under vigorous development for the past 50 years. The advances have been enormous… Graphics, speech, language understanding are now taken for granted. Encyclopedic knowledge is available at our fingertips. Instant communication with anyone, anywhere… Effects on productivity, lifestyle and reduction of risks have been extraordinary and will continue.”
- Barry Chudakov, founder and principal of Sertain Research, commented, “AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise… My greatest hope for human-machine/AI collaboration – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us.”
- R “Ray” Wang, founder and principal analyst, Constellation Research, said, “The experience in China has shown how this technology can be used to take away the freedoms and rights of the individual for the purposes of security, efficiency, expediency and whims of the state. On the commercial side, we also do not have any controls in play as to ethical AI. Five elements should be included – transparency, explainability, reversibility, coachability and human-led processes in the design.”
- Bart Knijnenburg, assistant professor of computer science active in the Human Factors Institute at Clemson University, responded, “True empowerment will come from these systems supporting rather than replacing our decision-making practices. This is the only way we can overcome choice/information overload and at the same time avoid so-called ‘filter bubbles’… The algorithms behind these tools need to support human agency, not replace it.”
- Marina Gorbis, executive director of the Institute for the Future, replied, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions.”
- Douglas Rushkoff, professor of media at City University of New York, commented, “AI’s impact will be mostly negative [because] we will be applying it mostly toward the needs of the market, rather than the needs of human beings. So while AI might get increasingly good at extracting value from people, or manipulating people’s behavior toward more consumption and compliance, much less attention will likely be given to how AI can actually create value for people.”
- Annalie Killian, futurist and vice president for strategic partnerships at Sparks & Honey, said, “Technologists who are using emotional analytics, image-modification technologies and other hacks of our senses are destroying the fragile fabric of trust and truth that is holding our society together at a rate much faster than we are adapting and compensating – let alone comprehending what is happening.”
- Brian Harvey, lecturer on the social implications of computer technology at the University of California – Berkeley, commented, “There is no ‘we’; there are the owners and the workers. The owners (the 0.1%) will be better off because of AI. The workers (bottom 95%) will be worse off, as long as there are owners to own the AI, same as for any piece of technology.”
- Paul Vixie, an Internet Hall of Fame member, said, “Understanding is a perfect proxy for control. As we make more of the world’s economy non-understandable by the masses, we make it easier for powerful interests to practice control. Real autonomy or privacy or unpredictability will be seen as a threat and managed around.”
- John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, observed, “Until every individual is provided with a sovereign identity attached to a personal data cloud they control, information won’t truly be shared – just tracked. By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm.”
- Baratunde Thurston, futurist and co-founder of comedy/technology start-up Cultivated Wit, said, “Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled… I believe we must unleash these technologies toward goals beyond profit maximization.”
- Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google, commented, “I see AI and machine learning as augmenting human cognition a la Douglas Engelbart. There will be abuses and bugs, some harmful, so we need to be thoughtful about how these technologies are implemented and used, but, on the whole, I see these as constructive.”
- Leonard Kleinrock, Internet Hall of Fame member, replied, “As AI and machine learning improve, we will see highly customized interactions between humans and their health care needs. This mass customization will enable each human to have her medical history, DNA profile, drug allergies, genetic makeup, etc., always available to any caregiver/medical professional.”
- Thad Hall, coauthor of “Politics for a Connected American Public,” commented, “Fake videos, audio and similar media are likely to explode and create a world where ‘reality’ is hard to discern. The relativistic political world will become more so, with people having evidence to support their own reality or multiple realities that mean no one knows what is the ‘truth.’”
- Ken Birman, a professor in the department of computer science at Cornell University, responded, “By 2030, I believe that our homes and offices will have evolved to support app-like functionality… People will customize their living and working spaces… I do want my devices and apps linked on my behalf, but I don’t ever want to be continuously spied-upon. I do think this is feasible, and, as it occurs we will benefit in myriad ways.”
For more information or to arrange an interview, please contact Dan Anderson, Elon University’s Vice President for Communications.