A new survey by the Imagining the Internet Center and Pew Research looks at best/worst changes in digital life
Experts have deep concerns about the impact of digital evolution on people’s and society’s overall well-being. But they also expect great benefits in healthcare, scientific advances and education, according to a new report from Pew Research Center and Elon University’s Imagining the Internet Center.
This report is part of a long-running series about the future of the internet and is based on a nonscientific canvassing of technology innovators, developers, business and policy leaders, researchers and activists, who were asked to predict the most beneficial and most harmful changes in digital life by 2035. In all, more than 300 respondents submitted more than 200 pages of in-depth responses in which they shared their overall expectations about the changes they foresee:
- 42% of these experts said they are equally excited and concerned about the changes in the “humans-plus-tech” evolution they expect to see by 2035
- 37% said they are more concerned than excited about the changes they expect
- 18% said they are more excited than concerned about expected change
- 2% said they are neither excited nor concerned
- 2% said they don’t think there will be much real change by 2035
“While significant new benefits are widely expected, 79% of these experts said they have some concerns about how digital trends – including the rapid evolution and spread of AI – may influence people’s lives in the near future,” said Janna Anderson, director of the Imagining the Internet Center and professor of communications at Elon.
“There is a mega story in these expert answers and a pixelated story,” said Lee Rainie, director of internet and technology research at Pew Research Center. “The big point they want to make is that if smart steps are not taken now, things will break bad in the coming years as tech trends play out. At the same time, these experts can identify developments in key parts of life that are remarkable, heartening and represent unambiguous progress. They are clear that the outcome is ours to shape.”
The canvassing took place Dec. 27, 2022, to Feb. 21, 2023. The key themes for both the worst and best trends these experts foresee include the following:
The most harmful or menacing changes in digital life that are likely by 2035
Some 37% of these experts said they are more concerned than excited about coming technological change, and 42% said they are equally concerned and excited. They spoke of these fears:
- The future harms to human-centered development of digital tools and systems: The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. These experts worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.
- The future harms to human rights: These experts fear new threats to rights will arise as privacy becomes harder, if not impossible, to maintain. They cite surveillance advances, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, advanced facial-recognition systems, and widening social and digital divides as looming threats. They foresee crimes and harassment spreading more widely, and the rise of new challenges to humans’ agency and security. A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to the loss of jobs, resulting in a rise in poverty and the diminishment of human dignity.
- The future harms to human knowledge: They fear that the best of knowledge will be lost or neglected in a sea of mis- and disinformation, that the institutions previously dedicated to informing the public will be further decimated, and that basic facts will be drowned out by entertaining distractions, bald-faced lies and targeted manipulation. They worry that people’s cognitive skills will decline. In addition, they argued that “reality itself is under siege” as emerging digital tools convincingly create deceptive or alternate realities. They worry that a class of “doubters” will hold back progress.
- The future harms to human health and well-being: A share of these experts said humanity’s embrace of digital systems has already spurred high levels of anxiety and depression and predicted things could worsen as technology embeds itself further in people’s lives and social arrangements. Some of the mental and physical problems could stem from tech-abetted loneliness and social isolation; some could come from people substituting tech-based “experiences” for real-life encounters; some could come from job displacements and related social strife; and some could come directly from tech-based attacks.
- The future harms to human connections, governance and institutions: The experts who addressed these issues fear that norms, standards and regulation around technology will not evolve quickly enough to improve the social and political interactions of individuals and organizations. Two overarching concerns: a trend toward autonomous weapons and cyberwarfare, and the prospect of runaway digital systems. They also said things could worsen as the pace of tech change accelerates. They expect that people’s distrust in each other may grow and their faith in institutions may deteriorate. This, in turn, could deepen already undesirable levels of polarization, cognitive dissonance and public withdrawal from vital discourse. They fear, too, that digital systems will be too big and important to avoid, and that all users will be captives.
The best and most beneficial changes in digital life likely by 2035
Some 18% of these experts said they are more excited than concerned about coming technological change, and 42% said they are equally excited and concerned. They shared their hopes in the following categories:
- The future benefits to human-centered development of digital tools and systems: The experts who cited tech hopes covered a wide range of likely digital enhancements in medicine, health, fitness and nutrition; access to information and expert recommendations; education in both formal and informal settings; entertainment; transportation and energy; and other spaces. They believe that digital and physical systems will continue to integrate, bringing “smartness” to all manner of objects and organizations, and expect that individuals will have personal digital assistants that ease their daily lives.
- The future benefits to human rights: These experts believe digital tools can be shaped in ways that allow people to freely speak up for their rights and join others to mobilize for the change they seek. They hope ongoing advances in digital tools and systems will improve people’s access to resources, help them communicate and learn more effectively, and give them access to data in ways that will help them live better, safer lives. They urged that human rights must be supported and upheld as the internet spreads to the farthest corners of the world.
- The future benefits to human knowledge: These respondents hope to see innovations in business models; in local, national and global standards and regulation; and in societal norms and digital literacy that will lead to the revival and elevation of trusted news and information sources in ways that attract attention and gain the public’s interest. Their hope is that new digital tools and human and technological systems will be designed to ensure that factual information is appropriately verified, highly findable, well-updated and archived.
- The future benefits to human health and well-being: These experts expect that the many positives of digital evolution will bring a health care revolution that enhances every aspect of human health and well-being. They emphasize that achieving full health equality will require equal attention to the needs of all people while also prioritizing their individual agency, safety, mental health, privacy and data rights.
- The future benefits to human connections, governance and institutions: Hopeful experts said society is capable of adopting new digital standards and regulations that will promote pro-social digital activities and minimize antisocial activities. They predict that people will develop new norms for digital life and foresee them becoming more digitally literate in social and political interactions. They said that, in the best-case scenario, these changes could steer digital life toward promoting human agency, security, privacy and data protection.
Among the intriguing predictions from those canvassed:
- Jonathan Grudin spoke of automation: “I foresee a loss of human control in the future. The menace isn’t control by a malevolent AI. It is a Sorcerer’s Apprentice’s army of feverishly acting brooms with no sorcerer around to stop them. Digital technology enables us to act on a scale and speed that outpaces human ability to assess and correct course. We see it already.”
- Catriona Wallace looked ahead to in-body tech: “Embeddable software and hardware will allow humans to add tech to their bodies to help them overcome problems. There will be AI-driven, 3D-printed, fully customised prosthetics. Brain extensions – brain chips that serve as digital interfaces – could become more common. Nanotechnologies may be ingested.”
- Liza Loop observed, “Humans evolved both physically and psychologically as prey animals eking out a living from an inadequate supply of resources. … The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.”
- Matthew Bailey said he expects that “AI will assist in the identification and creation of new systems that restore a flourishing relationship with our planet as part of a new well-being paradigm for humanity to thrive.”
- Judith Donath warned, “The accelerating ability to influence our beliefs and behavior is likely to be used to exploit us; to stoke a gnawing dissatisfaction assuageable only with vast doses of retail therapy; to create rifts and divisions and a heightened anxiety calculated to send voters to the perceived safety of domineering authoritarians.”
- Kunle Olorundare said, “Human knowledge and its verifying, updating, safe archiving by open-source AI will make research easier. Human ingenuity will still be needed to add value – we will work on the creative angles while secondary research is being conducted by AI. This will increase contributions to the body of knowledge and society will be better off.”
- Jamais Cascio said, “It’s somewhat difficult to catalog the emerging dystopia because nearly anything I describe will sound like a more extreme version of the present or an unfunny parody. … Simulated versions of you and your mind are very likely on their way, going well beyond existing advertising profiles.”
- Lauren Wilcox explained, “Interaction risks of generative AI include the ability for an AI system to impersonate people in order to compromise security, to emotionally manipulate users and to gain access to sensitive information. People might also attribute more intelligence to these systems than is due, risking over-trust and reliance on them.”
- Giacomo Mazzone warned, “With relatively small investments, democratic processes could be hijacked and transformed into what we call ‘democratures’ in Europe, a contraction of the two French words for ‘democracy’ and ‘dictatorship.’ AI and a distorted use of technologies could bring mass-control of societies.”
- Christine Boese noted, “Soon all high-touch interactions will be non-human. NLP [natural language processing] communications will seamlessly migrate into all communications streams. They won’t just be deepfakes, they will be ordinary and mundane fakes, chatbots, support technicians, call center respondents and corporate digital workforces … I see harm in ubiquity.”
- Beth Noveck predicted that AI could help make governance more equitable and effective and raise the quality of decision-making, but only if it is developed and used in a responsible and ethical manner, and “if its potential to be used to bolster authoritarianism is addressed proactively.”
- Alejandro Pisanty wrote, “Human connection and human rights are threatened by the scale, speed and lack of friction in actions such as bullying, disinformation and harassment. The invasion of private life available to governments facilitates repression of the individual, while the speed of Internet expansion makes it easy to identify and attack dissidents.”
- Barry K. Chudakov observed, “We are sharing our consciousness with our tools. They can sense what we want, can adapt to how we think; they are extensions of our cognition and intention. As we go from adaptors to co-creators, the demand on humans increases to become more fully conscious. It remains to be seen how we will answer that demand.”