Elon University

The 2017 Survey: The Future of Truth and Misinformation Online (Q2 Credited Responses)

Credited responses to the first follow-up question:
Is there a way to create trusted, unhackable verification systems?

Internet technologists, scholars, practitioners, strategic thinkers and others were asked by Elon University and the Pew Research Center’s Internet, Science and Technology Project in summer 2017 to share their answer to the following query:

What is the future of trusted, verified information online? The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation. The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially-destabilizing ideas?

About 49% of these respondents said the information environment WILL improve in the next decade.
About 51% of these respondents said the information environment WILL NOT improve in the next decade.

Follow-up Question #1 was:
Is there a way to create reliable, trusted, unhackable verification systems? If not, why not, and if so what might they consist of?

Some key themes emerging from among the responses:

– It is probably not possible to create such a system.
– It would be seen as too costly and too work-intensive.
– There is likely to be less profit if such systems are implemented, which is also likely to stifle such solutions.
– It is possible to have commonly accepted, ‘trusted’ systems – it’s complicated because ‘what I trust and what you trust may be very different.’
– Can systems parse ‘facts’ from ‘fiction’ or identify accurately and in a widely accepted manner the veracity of information sources?
– There can be no unhackable large-scale networked systems.
– It’s worth a try to create verification systems; they may work or at least be helpful.
– ‘Verification’ would reduce anonymity, hinder free speech and harm discourse.
– There is hope for possible fixes.

Written elaborations by for-credit respondents

Following are full responses to Follow-Up Question #1 of the six survey questions, made by study participants who chose to take credit when making remarks. Some people chose not to provide a written elaboration. About half of respondents chose to remain anonymous when providing their elaborations to one or more of the survey questions. Respondents were given the opportunity to answer any questions of their choice and to take credit or remain anonymous on a question-by-question basis. Some of these are the longer versions of expert responses that are contained in shorter form in the official survey report. These responses were collected in an opt-in invitation to about 8,000 people.

Their predictions:

Micah Altman, director of research for the Program on Information Science at MIT, commented, “People and the systems they create are always imperfect. Instead of ‘unhackable’ systems we should seek more reliable, more trustworthy, and tamper-resistant (hardened) systems. These systems will be based on transparency of operation (e.g., open source, open algorithms); cryptographic protocols; and distributed operation.”

Brian Cute, longtime internet executive and ICANN participant, said, “There are ways to create reliable, trusted verification systems. Technology solutions exist and could be developed that would design trust into an information system. The question of whether these systems would be unhackable is more difficult to answer. Given the history of the internet to date, hackers will continue to try to hack almost any technology solution and certainly more so those solutions that hold themselves out as being unhackable.”

Steve McDowell, professor of communication and information at Florida State University, replied, “Such systems may not be totally secure, but a reference system (a trusted source vouches for the author or story in question) or a branded system (recognizable and trusted information providers) may reduce the persuasiveness of some false facts. However, there may not be agreement on who are the trusted sources in news and information.”

Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “There are no fool-proof solutions, but a lot depends on (automated) social proof. Algorithms will learn to filter out ‘bad apples.’ This is crucially dependent on having the right incentives: the appeal of ‘virality’ will go away once news consumption is no longer funded by ad consumption.”

Helen Holder, distinguished technologist for HP, said, “First, nothing is ‘unhackable.’ Second, higher reliability of information can be achieved with human and electronic validation of facts, using methods that traditional investigators and journalists are trained to do. Some of those techniques may be enhanced with machine learning to identify common indicators of false information. Third, gaining trust is much harder and requires a long track record of virtually perfect execution. Any failures will be used to discredit such a system. For example, the modern widespread distrust of the reliability of information from major media outlets, despite their being reliable the vast majority of the time, indicates that even low error rates will add to the perception that there are no objective, reliable sources of information. Rapid corrections when new information becomes available will be essential so that no outdated content can be referenced.”

David Weinberger, writer and senior researcher at Harvard’s Berkman Klein Center for Internet & Society, noted, “Reliable, yes. Trusted, yes. Unhackable, no. Reliability and trust are social formations: Reliable and trustworthy enough for some purpose. We will adjust our idea of what is an appropriate degree of reliability and trust. Because we have to.”

Laurel Felt, lecturer at the University of Southern California, said, “Anything that can be built can be unbuilt – that is to say, anything coded can be hacked.”

Jonathan Brewer, consulting engineer for Telco2, commented, “Yes, it’s very possible to create trusted, un-hackable verification systems. Much of the requisite infrastructure exists through DNSSEC. Browser vendors and social media platforms need only integrate and extend DNSSEC to provide a registry of authentic information sources.”
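
A minimal sketch can make Brewer’s DNSSEC suggestion concrete. This is not the registry he envisions, only an illustration, assuming the Python dnspython package and a DNSSEC-validating resolver (Google’s 8.8.8.8 is used here), of how a client can check whether an answer was cryptographically authenticated:

```python
# Hedged sketch: query through a validating resolver and check the
# AD (Authenticated Data) flag, which the resolver sets only when the
# DNSSEC signature chain for the answer verified successfully.
import dns.resolver
import dns.flags

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]        # assumed: a DNSSEC-validating resolver
resolver.use_edns(0, dns.flags.DO, 1232)  # set the DO bit to request DNSSEC data

answer = resolver.resolve("example.com", "A")
authenticated = bool(answer.response.flags & dns.flags.AD)
print("DNSSEC-authenticated:", authenticated)
```

A registry of “authentic information sources,” as Brewer describes, would still need a layer above this: DNSSEC can prove that a record really came from a domain’s owner, not that the owner publishes accurate information.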

Michael R. Nelson, public policy executive with Cloudflare, replied, “No one who works on computer systems would promise that a system can be ‘unhackable.’ But a lot can be done with a system that is ‘good enough’ and upgradable (if vulnerabilities are found). The history of encryption is a good model. Standards have evolved to overcome new attacks.”

Andrew Odlyzko, professor of math and former head of the University of Minnesota’s Supercomputing Institute, observed, “No, because what is accepted as reliable is a social construct, and in most cases does not have an absolute unambiguous answer.”

Glenn Edens, CTO for Technology Reserve at Xerox/PARC, wrote, “Maybe, but it is not clear what an acceptable technology might be. Consumers of information need to take an active role in determining the quality and reliability of information they receive. This can happen via verifiable and trusted sources through subscriptions, certificates and verifiable secure protocols. Of course, this does not solve the problem of the ‘commons’ – the free marketplace.”

David Wood, a UK-based futurist at Delta Wisdom, said, “The goalposts will keep moving. Systems that are strong enough for practical purposes today won’t be strong enough in three years’ time, and so on. Most likely the verification systems of 2027 will use mechanisms that are hardly even imagined today.”

Seth Finkelstein, consulting programmer with Seth Finkelstein Consulting, commented, “The technical issue of verification is irrelevant to the social issue of not valuing truth. That is, a cryptographically signed statement does almost nothing against being quoted in a misleading manner, or just plain lies that people want to believe. The problem with stories ‘too good to check’ isn’t a deficiency of ability, but rather that essentially nobody cares. In discussion forums, when someone posts an article link and mischaracterizes it in an inflammatory way, consider how few people will read the full article versus immediately ranting based on the mischaracterization. That is, we see a prominent failure-mode of not verifying by reading an article often one click away. Given this, it’s hard to see more than a minuscule effect for anything elaborate in terms of an unforgeable chain to a source. It’s worthwhile to compare the infrastructure of online shopping, where a huge amount of money is directly at risk if the system allows for false information by bad actors, i.e. credit card scammers. There, the businesses involved have a very strong incentive to make sure all the various platforms cooperate to maintain high standards. This isn’t an argument to treat getting news like making a purchase. But looking at the overall architecture of a payment system can shed some light on what’s involved in having reliability and trust in the face of distributed threat.”
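
Finkelstein’s point about signed statements can be illustrated with a short sketch, assuming the third-party Python cryptography package; the key pair and statement are hypothetical:

```python
# An Ed25519 signature proves the exact bytes came from the keyholder;
# it does nothing about misleading quotation or willing belief in lies.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = b"The study found no link between X and Y."
signature = private_key.sign(statement)

try:
    public_key.verify(signature, statement)  # raises if bytes or key differ
    print("Valid: the keyholder published this exact text.")
except InvalidSignature:
    print("Invalid: text altered or wrong key.")

# A misleading paraphrase ("Study PROVES X is safe!") never touches the
# signature at all, which is exactly the gap Finkelstein describes.
```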

Deirdre Williams, retired internet activist, replied, “I don’t think so. What human beings create human beings can break.”

Paul Saffo, longtime Silicon Valley-based technology forecaster, commented, “Yes, but it is a never-ending race. The systems will get better, the bad guys will get more sophisticated, creating demand for another rev.”

Geoff Scott, CEO of Hackerati, commented, “Reliable and trusted in whose eyes? It’s technically feasible to create immutable and consensus-based repositories for information, but it is the ‘facts’ themselves that are being doubted and fabricated. What determines if a statement is true or not? Popular consensus only indicates which statements are most believable to a segment of the population. Findings from ‘independent’ investigations are themselves questioned by those who are already inclined to disagree.”

Garth Graham, an advocate for community-owned broadband with Telecommunities Canada, explained, “We can only verify the source, never the information. The question assumes external authority and there is no external authority.”

Edward Kozel, an entrepreneur and investor, replied, “All existing or posited techniques to grade ‘trust’ are subjective. Like reputation, trust is relative and subjective.”

Neville Brownlee, associate professor of computer science at the University of Auckland, said, “There could be, but it will be difficult to make that work without overall regulation.”

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “No, only partial approximations serving specific outlooks are possible. Malicious intent will never go away and will continue to find ways against defenses, especially automated ones; and the increasing complexity of our environments will continue to be way above our ability to keep people educated.”

Erhardt Graeff, a sociologist doing research on technology and civic engagement at the MIT Media Lab, said, “Solutions to misinformation will be more social than technical and will require we redistribute power in meaningful ways. Using the frame of security, there is never going to be such a thing as an unhackable verification system. The weakest links in security are human, which cannot be addressed by technical fixes. Rather, they require that we work on education and support systems, designs that are collaboratively created and adaptive to people’s needs, and ways to respond to hacks and crises that protect and reassure individual users first rather than business interests. Furthermore, conspiracy theorists will always find a way to discredit a system’s reliability and trustworthiness. A more fundamental solution will require that we work on building relationships among diverse communities that foster mutual respect and trust. These networks of people and institutions are what information ecosystems (and democracies, more generally) work through. It’s these webs of relationships that do the lion’s share of the work of verification. We will need to rethink our connections to public information in order to foster respect and trust through consistent engagement in the same way friendships are built. News organizations, platforms, and other media elites will need to operate in more ‘localized’ and participatory ways that allow regular people to have agency in the journalistic process and in how problems like misinformation are addressed. We trust who and what we know in part because we have some control over those relationships closest to us. Ultimately, verification and the larger universe of information problems affecting democracy boil down to relationships and power, which we must take into account in order to make real progress.”

Brian Harvey, teaching professor emeritus at the University of California-Berkeley, said, “Trusted by whom?”

David Conrad, a chief technology officer, replied, “Not systems that can be deployed in a cost-effective fashion for the foreseeable future. ‘Unhackable’ implies a fundamental change in how computer systems and software are implemented and used and this is both expensive and takes time.”

Daniel Wendel, a research associate at MIT, said, “The technology exists to make things reliable and unhackable. However, this does not mean they will be reliable or trusted. At some level, value judgments will be made, and personal preference will be injected into any system that endeavors to report on ‘truth.’ Luckily, having a fully foolproof, trusted and reliable source is not required. In fact, having a public that doubts everything is good. That said, some sources are more reliable than others. People need to begin to understand that being a wary consumer does not mean taking all news as ‘equally fake.’ There is a certain willful self-deception in society now that allows untruthful sources to be perceived as reliable. But social, not technical, innovation is required to overcome that.”

Mercy Mutemi, legislative advisor for the Kenya Private Sector Alliance, observed, “One of the ways to counter fake news is proof of research. Whilst everyone can sit down and type a news article, very few can show that their work is research-backed. This could vary from speaking to a government official to showing official Hansard records. A system that mandates the reporter to make some disclosure of the research they have done before being allowed to post their article online would be a great start. As well, a pre-approval step would discourage busybodies from idle posting while barring bots from automatically reposting stories.”

Larry Keeley, founder of innovation consultancy Doblin, observed, “YES. I’ve had teams working on this in both the design school where I teach graduate and doctoral students and at the Kellogg Graduate School of Management. There are technologies – like blockchain and other forms of distributed ledgers – that make complex information unhackable. There will be other methods that don’t worry so much about being unhackable, and worry more about being resilient and swiftly corrected. There will be many more such emergent capabilities. Most, however, will NOT be instantaneous and real time, so that there will still be considerable asymmetry with information that is compelling, vivid, easily amplified, and untrue. So the coolest systems may need to have an augmented-reality ‘layer’ that provides the confidence interval about the underlying story – and shows how that gets steadily better after a bit of time permits a series of corrective/evaluative capabilities to address the initial story(ies).”
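
The tamper-evidence Keeley attributes to blockchains and distributed ledgers rests on a simple mechanism that can be sketched in a few lines. This is a generic illustration, not any system his teams built: each record commits to the hash of the record before it, so altering history invalidates every later link.

```python
# Minimal hash chain: append-only records whose integrity can be checked.
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for story in ({"headline": "A"}, {"headline": "B"}):
    block = make_block(story, prev)
    chain.append(block)
    prev = block["hash"]

print(verify_chain(chain))                     # True
chain[0]["record"]["headline"] = "A (edited)"  # tamper with history
print(verify_chain(chain))                     # False: tampering detected
```

A real distributed ledger adds replication and consensus on top of this, so that no single party can quietly rewrite the chain.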

Jan Schaffer, executive director of J-Lab, said, “It’s beginning to appear like anything can be hacked, so I don’t see the light at the end of this tunnel. But, again, hopefully some trustworthy news organizations will survive, although it will be harder for them when we have a more ‘normal’ president in office.”

Sandro Hawke, technical staff, World Wide Web Consortium, noted, “A practical goal in security is to make the cost of breaking through the security be higher than any gain from doing so. Once you get to this point, you can feel fairly comfortable saying the system is ‘secure,’ even though, in fact, a sufficiently motivated opponent could still break in. But you can never know with 100% certainty whether there’s someone who developed a new technique, or who for some unknown reason is passionately devoted to breaking in. It’s like keeping the president safe from assassination: if it’s important enough, the country can do a solid job, but still won’t be 100%. To establish trust online, we need to separate a few threats. 1) Impersonation, where I think I’m talking to my cousin Casey, but I’m actually talking to an industrial spy who is pretending to be Casey, is relatively easy to address by technical means. It doesn’t seem to have become a large problem yet in the political arena. Hopefully we can keep it that way. 2) Assessing whether a stranger is to be given the respect commonly granted to fellow members of some community. We currently have very little technology deployed to help with this. We can’t even tell if we’re having a conversation with a bot on Twitter. I don’t think this is hard to solve, if people are motivated, and the platform providers are motivated to help. 3) Assessing whether known and respected sources are conveying accurate information. This is the hardest of these problems. Your sister Sarah tells you about a new study saying that microwaves cause cancer. Or the Wall Street Journal Editorial Board says the best option is to go to war with North Korea because of some reason that seems crazy to you. What can you do about that? Again, we have options, but at this point we’re probably outside the scope of this question. I just wanted to separate three distinct aspects of information security.”

Steve Axler, a user-experience researcher, replied, “We can censor individual systems but when we do, new systems will be created to give an uncensored voice to those desiring that.”

Nick Ashton-Hart, a public policy professional based in Europe, commented, “Creating systemic trust systems is always an arms race between those who want to protect or verify and those who see a value in overcoming those systems. That dynamic will continue.”

Michael Rogers, principal at the Practical Futurist, wrote, “We can get part of the way there. A combination of trained AI at the first level and then human evaluation could filter the most egregious pieces of fake information. It might also be possible to set up a voluntary ‘certification’ process for information providers, perhaps those who agree to embed metatags that lead to original news sources. You could get close to ‘reliable’ but ‘trusted’ might still for some be a matter of opinion. And for hackability, you could at least make it more trouble than it’s worth.”

Scott MacLeod, founder and president of World University and School, replied, “Possibly through a kind of point and counterpoint – or neo-Hegelian – approach. ‘Democracy Now’ (1996) as a scaled ‘news’ version may be an example of a somewhat ‘reliable, trusted, unhackable verification system…’ in English, since the advent of the internet with Graphical User Interface in 1993-94 (with Mozilla). Google, Alphabet and TensorFlow may offer another algorithmic neo-Hegelian approach.”

Morihiro Ogasahara, associate professor at Kansai University, said, “There would not be a robust verification system, but a game of cat and mouse between the fake news ecosystem and the protesters.”

Sahana Udupa, professor of media anthropology at Ludwig Maximilian University of Munich, wrote, “There are ways to create reliable verification systems. Technology companies will begin to respond to government and civil society pressure in the coming years, even as civil society and non-governmental organizations will raise infrastructure to debunk fake news. However, the capacity to combat misinformation will highly depend on the ruling regimes’ political will. Hence, despite the readiness of technology companies and civil society pressure, the real challenge would be to ensure national and subnational governments build a sound regulatory scenario. In many countries, such political will might not exist.”

Ian Peter, internet pioneer, historian and activist, observed, “Verification systems are possible for matters such as financial transactions. However in the area of opinions, facts and exchanges of information it might be possible to verify the source, but not the accuracy of information passed on. There are limited instances in which it might be possible to provide a verification system – where we are dealing with verifiable facts. But very often this is not the case.”

Uta Russmann, a professor whose research is concentrated on political communication via digital methods, noted, “Yes. As software and AI become exponentially better, they will automatically verify each other.”

John Perrino, senior communications associate at George Washington University School of Media and Public Affairs, wrote, “No, not a foolproof system at a mass scale. You would either put First Amendment rights at risk by only allowing verified content from a few sources or expand the verification system to everyone where it is sure to be exploited because it would be created by humans.”

Jon Lebkowsky, web consultant/developer, author and activist, commented, “I’m sure there’s a way to create systems of vetting and verification, but there may not be a will to do so. We might see an information arms race, where such systems emerge but are persistently undermined. And there’s always the question of authority: What sources will be considered authoritative, and how will that authority be established? Verification doesn’t have a solely technical solution, there will inherently be a human element – so a workable solution will rely more on social and political factors than on any specific technology.”

Leah Lievrouw, professor in the department of information studies at the University of California-Los Angeles, observed, “There may be some useful techniques for verification, but historically there’s always been a dynamic in digital technology development where different parties with different views about who or what that technology is for, build and reconfigure systems in a kind of adversarial or ‘argumentative’ cycle of point-counterpoint. That’s the culture of computing; it resists stabilization (at least so far). For me, though, the key thing is that verification isn’t judgment. Fact checking isn’t editing or making a case. It takes people to do these things and the idea that machines or ‘an artificial intelligence’ is going to do this for us is, I think, irresponsible.”

James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “I’m not sure. On the one hand, the application of biometrics prior to posting, and some kind of disaggregation that didn’t carry the ID forward might hit a balance between privacy/free speech and accountability. But the opportunity for abuse, or hacking, remains.”

Stuart Elliott, visiting scholar at the US National Academies of Sciences, Engineering and Medicine, observed, “This strikes me as a situation with a natural equilibrium: We develop verification systems that are good enough with respect to current hacking methods and when the hacking methods improve we improve the verification systems. The system is unlikely to get too far away from an equilibrium with a small rate of successful hacking – verification is expensive so we’ll be willing to tolerate some minimal level of successful hacking but beyond that minimal level it’s worth improving verification.”

Tanya Berger-Wolf, professor at the University of Illinois-Chicago, wrote, “No. There will always be a way to hack any system. We have been trying to create unhackable systems long before computers came to be and they are always hacked in the end. The best we can hope is the ‘Red Queen’ approach: be just ahead of the hackers, never stopping.”

Michael J. Oghia, an author, editor and journalist based in Europe, said, “Of course there are ways, perhaps by using blockchain technologies. I’m not sure, though, that we (society) are keeping pace with how quickly technology is changing.”

Veronika Valdova, managing partner at Arete-Zoe, noted, “Verification of credentials is a major problem for all corporate networks. I would expect verification and blocking of devices that are not pre-defined as part of the network and simultaneous biometric verification of the user. Healthcare information systems may become the cutting edge of innovation due to the obsolete nature of many of them and the pressing need for a substantial overhaul. This may provide an opportunity for the development of a truly innovative set of solutions.”

Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “Unhackable? No. Whether it’s a technological hack or social engineering, we have to operate as if ‘unhackable’ is un-possible. Reliable? Probably. Trusted? Now this is the problem. Trust is a cultural construct (as in, you trust when the source doesn’t violate your norms, put simply). What I trust and what you trust may be very different, and finding something that we both (or all) will trust may be functionally impossible. No matter the power of the technologies, there’s still the ‘analog hole’ – the fact that the human mind has to accept something as reliable and true.”

Peter Jones, associate professor in strategic foresight and innovation at OCAD University, Toronto, commented, “There already are such information services, but they are hated by the political press because they ‘dox’ important politicians and expose leaks. The future will look like a network of commentary and field-level (‘embodied’) tweets supported by well-funded Wikileaks sources that cue journalists and give direction to investigation. Wikileaks is not ‘hackable’ in the way today’s fake news attributes such ‘hacks’ to Russia. False information on a leak site is crowd-analyzed and found out quickly.”

Jerry Michalski, futurist and founder of REX, replied, “Nothing is unhackable, but systems can be very hard to hack. If Mark Zuckerberg wanted to play watchdog, he could turn Facebook, one of the superconductors of unreliable info, into a far better platform. But he’d risk growth and loyalty. A novel platform that is very reliable will have trouble attracting users, unless it is the natural successor to Facebook, Instagram, Snapchat, et cetera.”

Esther Dyson, a former journalist and founding chair at ICANN, now a technology entrepreneur, nonprofit founder and philanthropist, said, “The systems can be unhackable, but they cannot be reliable and trusted any more than *people* can be reliable and trusted.”

Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “Because it is an arms race, with the purveyors of untrustworthy information backed by both state actors and amateurs, I don’t think it is likely that 100% reliable systems will last for long. However, I think a combination of education – teaching people how to critically examine online info and use credibility tools, starting with elementary school children, can augment technical efforts.”

Bob Frankston, internet pioneer and software innovator, said, “No, because the world is inherently ambiguous. If anything the wish for such a system feeds into an authoritarian dystopia.”

Kenneth R. Fleischmann, associate professor at the University of Texas-Austin School of Information, wrote, “The main problem is that there is a cascade of reliance that inherently occurs in these situations – take, for example, perhaps the most famous piece of fake news from the 2016 election, Pizzagate, which was based on an allegation that Hillary Clinton and John Podesta were running a child sex trafficking ring in the basement of Comet Ping Pong, a pizza restaurant in DC. This claim has been roundly debunked by sites and sources that I trust, and is backed up by a fairly significant and case-closing fact, that Comet Ping Pong does not have a basement. However, I am here relying on others; I am not able to make an independent verification of this fact. Even if I traveled to Comet Ping Pong and used a device designed to determine soil density to figure out if there was a secret basement beneath the floor, I would be relying on the experts who had designed and built the device. To really know for certain firsthand, I would need to use a jackhammer. As information comes to us from further and further away, mediated by more and more sources, it seems inevitable that we will have less ability to directly verify the information that we trust as accurate.”

Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “It is certainly possible to create these systems, but the problems include cost (much more expensive than ‘turnkey’ systems) and desire – i.e., the issue right now isn’t so much where information is coming from, but is that source trustworthy in itself.”

J. Cychosz, a content manager and curator for a scientific research organization, commented, “Yes, at the system level, but the implementation always is vulnerable, so the answer is NO.”

Liam Quin, an information specialist with the World Wide Web Consortium (W3C), said, “We’re working on it at W3C, but the boundary between the physical and virtual worlds remains a difficulty.”

Pete Cranston, knowledge management and digital media consultant, replied, “No, human intervention of some kind will always be necessary – both individuals and small and large groups.”

Mark P. Hahn, a chief technology officer, wrote, “No. Even with perfect tools people will make mistakes. Decentralized tools will still allow bad actors to subvert locally. Centralized tools will concentrate power and become both a target and a magnet for bad actors on the inside.”

Taina Bucher, associate professor in the Centre for Communication and Computing at the University of Copenhagen, commented, “There are ways to create reliable verification systems that can be trusted but never in an absolute sense. I don’t think there are any unhackable systems as such.”

Scott Guthrey, publisher for Docent Press, said, “Ultimately the security will depend upon the humans building and using the system. There is no such thing as a ‘reliable, trusted, unhackable’ human being.”

Andrew Dwyer, an expert in cybersecurity and malware at the University of Oxford, commented, “There is no such thing as an ‘unhackable’ system… though they do provide a service in providing assurance. Ultimately trust emerges from contextual information – so that people will be able to use guidemarks, trusted news sources and systems on social media for example to approximate whether something is reliable.”

Michael P. Cohen, a principal statistician, replied, “Even though a current fact-checker web page could be hacked, over time truth will out. The difficulty is getting folks to seek out trustworthy information.”

Dean Willis, consultant for Softarmor Systems, commented, “No such thing as unhackable. The personal web of trust is the best we have, and it is subject to weakest link failures.”

Axel Bruns, professor at the Digital Media Research Centre, Queensland University of Technology, commented, “No. To seek our salvation in technological solutions is a fool’s errand; it’s Silicon Valley-style tech solutionism in the face of a very human problem. Verification systems will only ever be a partial solution because truth isn’t binary: it’s not simply yes/no, but rather there are varying degrees of truthfulness, skew, bias and misrepresentation.”

Sharon Tettegah, professor at the University of Nevada, commented, “Reliability of information has always been a challenge. I believe there are ways to reduce misinformation by channeling information through a few sources. Machines will be able to verify the reliable sources based on mining processes.”

Alladi Venkatesh, professor at the University of California-Irvine, replied, “This is not easy, but we should not give up.”

Allen H. Renear, dean of the School of Information Sciences, University of Illinois-Urbana-Champaign, commented, “Probably. Improved, largely reliable identity and authentication services are emerging now.”

John Lazzaro, a retired electrical engineering and computing sciences professor from the University of California-Berkeley, wrote, “Simple existence proof: no one has successfully impersonated the real Donald Trump on Twitter (yet), and forging tweets from POTUS is the highest value target I can imagine. So, if everyone in the chain takes security seriously (Donald, his IT/network staff, and Twitter in this case), it can be done today.”

Claudia Caro, assistant director, Digital Media and Learning Research Hub, University of California-Irvine, said, “I am not sure that creating an ‘unhackable’ verification system is the right question or path to pursue. Rather, the question is what cultural reputation systems and technologies will be created in the future out of this necessity to represent different frames and viewpoints.”

Paul Hyland, principal consultant for product management and user experience at Higher Digital, observed, “It’s hard to know now, but I have to believe that such systems could be created, although they will never be perfect – and never completely unhackable.”

Scott Shamp, a dean at Florida State University, commented, “I do not believe there is a way to create a robust verification system. Doing so would require greater regulation of information systems and sources. And regulation is almost uniformly undesirable.”

Mike Meyer, chief information officer at University of Hawaii, wrote, “This can be done but it will require the use of formal bio authentication and a very transparent audit trail, probably using blockchain technology.”

Amber Case, research fellow at Harvard Berkman Klein Center for Internet & Society, replied, “I am reminded of early verification systems for sites like HotorNot.com. Each profile submitted to the site would go through a two-person verification by two separate volunteer moderators. If both moderators voted the profile safe (no personal information in it, etc.) the profile would be accepted by the site. If one moderator voted against the profile, the profile would go through a second system of review. If both voted against the profile, the profile would be rejected. This community-based system helped create a scalable moderation community for a site whose founders and employees couldn’t handle the size and scale of the task of verifying submissions.”
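
The decision logic of the two-moderator workflow Case recalls is easy to state precisely. The sketch below is a hypothetical reconstruction, not HotorNot.com’s actual code:

```python
# Two independent volunteer votes per submission: True = safe, False = not.
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"                # both moderators approve: publish
    SECOND_REVIEW = "second_review"  # split vote: escalate to another round
    REJECT = "reject"                # both moderators object: reject

def moderate(vote_a: bool, vote_b: bool) -> Decision:
    if vote_a and vote_b:
        return Decision.ACCEPT
    if not vote_a and not vote_b:
        return Decision.REJECT
    return Decision.SECOND_REVIEW

print(moderate(True, True))    # Decision.ACCEPT
print(moderate(True, False))   # Decision.SECOND_REVIEW
```

Requiring two independent reviewers to agree is the same principle as two-person integrity in security: a single careless or malicious moderator cannot approve a submission alone.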

George Siemens, professor at LINK Research Lab at the University of Texas-Arlington, commented, “It is certainly possible to create trusted verification systems, but the tradeoff is between usability and ease of engagement. Blogging first became popular because it was easy to post rather than going through an intermediary. For me, the important question is whether we can create a *usable* reliable verification system.”

Matt Mathis, a research scientist who works at Google, said, “[One solution is] verifiable citations (resembling science or other fields), pointing back to original sources in lay publications.”

Alfred Hermida, associate professor and journalist, commented, “The question assumes there is an objective ‘truth’ that can be achieved. Who verifies information, and how, is shaped by systemic power structures that tend to privilege sectors of society.”

Charles Ess, professor of media studies at the University of Oslo, wrote, “I don’t think any system is unhackable. The best we can do is develop systems that are profoundly difficult to hack: doing so includes not only attention to the technical dimensions, but also to the human elements of building trusted relationships as well as greater understanding of the risks and dangers of new technological developments.”

Rick Forno, senior lecturer in computer science and electrical engineering at the University of Maryland-Baltimore County, said, “There is no unhackable system for facts or the presentation of reality; people will still believe what they want to believe. Technology can help increase the level of trust and factual information in the world, but ultimately it comes down to the individual to determine what is real, true, fake, misleading, mis-sourced, or flat-out incorrect. That determination is based on the individual’s own critical thinking skills if not the factual and practical soundness of their educational background as well. Unfortunately, I don’t see technology helping overcome *that* particular vulnerability which is quite prevalent in the world – i.e., people making bad judgments on what to believe or trust – anytime soon. I hope your report’s commentary will touch on the importance of not only education (both formal and informal) but especially the development of critical thinking and analysis skills needed to inculcate an informed and capable citizenry – especially ones that allow a person to acknowledge an opposing view even if they disagree with it, and not just brush it off as ‘fake news’ because they don’t like what they’re hearing or seeing. Otherwise, I daresay a misinformed or easily misguided citizenry that remains uncritical and unquestioning will remain a politician’s best friend, and this problem will only get worse in time. ;(”

John Anderson, director of Journalism and Media Studies at Brooklyn College, City University of New York, wrote, “I believe the only way to create such systems is to first lay a foundation of critical media education in which the intricacies and complexities of our media environment are dissected and explored: without this fundamental understanding a purely technological solution will fall short (or be hamstrung by preemptive attacks on its integrity and veracity).”

Stephen Downes, researcher with the National Research Council of Canada, commented, “There may be a way technically – this is an empirical question – but there is probably no way politically. Unhackable systems create too much risk for government, because it creates a source of power they are unable to control.”

Tom Valovic, Technoskeptic magazine, noted, “It may be possible for algorithms to verify certain types of factual data but not to make larger assessments at greater levels of abstraction.”

Philip J. Nickel, lecturer at Eindhoven University of Technology, said, “No, because what matters is whether people will choose to limit themselves to ‘verified’ sources or trust the verification. This is a matter of social epistemology, not technology.”

Miguel Alcaine, International Telecommunication Union Area Representative for Central America, commented, “Incentives are the key. There will be a competition between information and misinformation, each source responding to different incentives. The better the incentives for reliable information, the better the information will be; better still if effective ways are found to disincentivize misinformation. Hacking follows the same logic as to incentives.”

Shane Greenstein, professor at Harvard Business School, noted, “Yes, for those who want it, and for those willing to do the work, it will be possible to access information from those who employ standard professional norms and professional practices to bolster the reputation of those who provide verifiable information. However, the rise of casual and incidental news caters to the majority of users, who do not want to do such work, and this will bypass such systems, and rather easily.”

Fred Davis, a futurist based in North America, wrote, “Nothing is unhackable. Manipulation of the media has been going on for as long as there has been media. It’s naive to think that established media sources are unbiased. I think that Noam Chomsky was right about the mass media’s ongoing use as a propaganda outlet in his book Manufacturing Consent. Chomsky believes that mass media is also self-censoring, often without realizing it, which I agree with. Also, as a former journalist, I often write about the ‘death of journalism.’ Online economic models for media are not able to support high-quality journalism and in-depth investigative reporting as was possible in the past when economic models around print were far more profitable, providing the money for salaries and expenses of journalists. Journalism has been degrading for over 15 years due to the economic model of online media. I don’t see that changing because pay walls have not proven to be as effective as was hoped.”

Giacomo Mazzone, head of institutional relations for the World Broadcasting Union, replied, “I’m afraid there will be no way because the fundamental economic model will not change.”

Ryan Sweeney, director of analytics, Ignite Social Media, wrote, “In short, no. The reason this won’t work is the same reason that Flat Earthers, climate change deniers and anti-vaxxers have a growing voice in mainstream discourse. Scientific proof discrediting all three exists, but Newton’s Third Law is at play; when facts are presented there is an equal and opposite reaction. We are shutting down any willingness to accept information that does not align with our own preconceived worldview. Creating such a system could only further this divide. Instead, we need a more-human approach to discourse that rekindles our curiosity and curbs our egos.”

Jesse Drew, professor of cinema and digital media, University of California-Davis, commented, “No, not really. The best we can do is create a body of educated and critical thinkers.”

Sean Goggins, an associate professor and sociotechnical data scientist, wrote, “Reliable and trusted systems are possible. The idea of a system being ‘unhackable’ is as much folly as claims the Titanic was ‘unsinkable.’”

Isto Huvila, professor of information studies, Uppsala University, replied, “Yes, and the answer is to find and establish commonly enough approved and trusted social systems for verification of facts that are based on reasonably reliable technical platforms, which are uncompromisable enough to work in practice.”

Paul N. Edwards, fellow in International Security, Stanford University, commented, “Moderately reliable, moderately trusted verification systems can be created. Probably none can be made unhackable. Any trusted verification system will require a significant component of attention from trained, reliable, trustworthy human beings. Such systems are labor-intensive and therefore expensive. Further, many people care more about confirming their own biases than about finding trustworthy sources.”

Julia Koller, a learning solutions lead developer, replied, “For every ‘reliable’ system created, some can and will find a way around it. Creating a perfectly secure system is an ideal that will never be reflected in reality.”

danah boyd, principal researcher, Microsoft Research and founder, Data & Society, wrote, “Nothing is unhackable. You also can’t produce trust in a system without having trust in the underlying incentives and social infrastructure. If you want to improve the current ecosystem, it starts by addressing perceptions of inequality.”

O’Brien Uzoechi, a business development professional based in Africa, replied, “There should certainly be a way to create reliable, trusted verification systems. This can certainly be attained through selective and focused application development. I am an app-savvy developer, and with current advances in technology, such as IoT and AI, anything is possible to ensure seamless operation and prevent vicious intrusions.”

Mark Glaser, publisher and founder, MediaShift.org, observed, “There is never an ‘unhackable’ solution from what I’ve seen with so many other technologies. With enough money and desire, all systems can eventually be breached. However, it’s possible to make them more secure, and also more reliable over time with enough effort.”

Susan Etlinger, industry analyst, Altimeter Research, said, “It’s theoretically possible, perhaps with blockchain or similar technology, to create systems of record. But blockchain was built as a system of record for transactions, not for knowledge-sharing. I can’t imagine how complex that would be, and the extent to which it would be vulnerable to political or other agendas.”

Scott Spangler, principal data scientist, IBM Watson Health, wrote, “This will be an ongoing competition between verification technology and hacking technology. It will always be a question of effort needed to create reliability vs. risk of loss.”

Philipp Müller, postdoctoral researcher at the University of Mainz, Germany, replied, “I am skeptical that verification systems can be reliable and trustworthy if their decision is a binary one between true or false. In many instances, truth is a social construction. The ultimately trustworthy answer to many questions would therefore be that there is no ultimate answer but rather different sides to a coin. I believe this logic of uncertainty and differentiated ‘truths’ is hard to implement in technological ecosystems.”

Daniel Alpert, managing partner at Westwood Capital, a fellow in economics with The Century Foundation, observed, “Yes, passive fact-checkers are absolutely a reliable possibility (just as malware and spam filters have developed). Not censoring content but passively showing inaccurate statements in content.”

Rob Atkinson, president, Information Technology and Innovation Foundation, wrote, “If nations copied Estonia’s digital signature model we could have trusted verification systems. These would bring an array of other economic benefits as well.”

John King, professor, University of Michigan School of Information Science, noted, “It’s Whack A Mole. You solve one problem and others pop up. The Internet of Things is going to make this bad beyond belief.”

Marc Rotenberg, president, Electronic Privacy Information Center, wrote, “Yes, but it will require a much greater willingness in the US to pursue antitrust investigations and to support the enactment of data protection laws.”

Sebastian Benthall, junior research scientist, New York University Steinhardt, responded, “Sure. Blockchain technology. Or systems managed by very large monolithic corporations.”

Brooks Jackson of FactCheck.org wrote, “If by ‘systems’ you mean some sort of robotic fact-checking, then no. Because none to date have been reliable, or widely trusted, or unhackable. The ‘system’ that will eventually prevail is an honest news media that does its best to get the facts, and also – this is the important missing element in many cases today – is aware of its own biases and corrects its mistakes and excesses promptly and openly.”

Andrew Nachison, author, futurist and founder of WeMedia, noted, “Reliable, trusted – yes. Unhackable? Of course not. Our current systems are very good at enabling reliable, trusted knowledge. They’re just easily abused by bad actors. And focusing on the systems alone doesn’t address the culture that embraces rumor, disinformation, wildly diverging moral codes and disbelief in reason, science and truth.”

Henning Schulzrinne, professor and chief technology officer for Columbia University, said, “We only need near-perfect, not perfect, systems. Verification systems within limited realms are feasible, both for identifying publishers and individuals.”

Ray Schroeder, associate vice chancellor for online learning, University of Illinois-Springfield, replied, “News reports, statements by public figures, etc. can be vetted by ‘fact checking’ organizations. I anticipate that we will see the advent of a handful of these. Perhaps something akin to the Associated Press will offer a highly-regarded source of fact-checking.”

Davide Beraldo, postdoctoral researcher, University of Amsterdam, noted, “It is difficult to answer yes or no. Any technical system responds to specific interests, and reality is too complex for neutral judgments to be produced. This does not mean that the situation cannot be improved, and a ‘better’ system should probably be based on three principles: algorithmic transparency, digital literacy and non-profit logic.”

David C. Lawrence, a software architect for a major content delivery and cloud services provider whose work is focused on standards development, said, “I’m inclined to answer yes but for ‘unhackable.’ The hackability of a system is an arms race, and what we conceive of as robust security now could fall to new techniques in the future. That does not mean, however, that we can’t create reliable and trusted systems with robust security now.”

Francois Nel, director of the Journalism Leaders Programme, University of Central Lancashire, noted, “Yes, we have the hardware capability and are developing the software. More challenging will be to get agreement on the verification databases and the processes – and the information literacy of people.”

Richard D. Titus, CEO for Andronik and advisor to many technology projects, wrote, “Blockchain technology, in my eyes, is one of the most powerful weapons we have around truth, trust and authenticity. A foundation I advise, PO.ET, which is launching an ICO, can now create verifiable audited trust around an asset’s creation, evolution and publication. But near-term we will have to leverage our FOAF networks and education and verification process.”

Mohamed Elbashir, senior manager for internet regulatory policy, Packet Clearing House, noted, “It’s difficult but not impossible to create a system of checks and balances to ensure the authenticity and accuracy of news and information; it will require collaboration from content providers and social media platforms.”

G. Hite, a researcher, replied, “Hacking will always be a problem as long as people are careless/clueless and the unscrupulous get away with it. For now, I only know to look for a ‘secure’ website.”

Sonia Livingstone, professor of social psychology, London School of Economics and Political Science, replied, “Perfect unhackable systems are unlikely, but reliable and trusted systems – as we already have with banking, for instance – are possible and likely. There’s probably, in the end, more money and power to be gained by building systems the majority trust than in creating widespread distrust and, ultimately, a withdrawal from the internet (or, even, some as-yet hard-to-imagine alternative system being built).”

Robert W. Glover, assistant professor of political science, University of Maine, wrote, “No – nothing is foolproof. At best, verification systems can exist as a check against hacks, but cannot prevent all false or malicious information from reaching the general population.”

Tony Smith, boundary crosser for Meme Media, commented, “Not in an absolute sense. All encoded systems can be gamed. Keys to continuing marginal gains are staying nimble and keeping well-intentioned humans in the loop. Trust-based systems tend to work better as they adapt to experience.”

Steven Polunsky, writer with the Social Strategy Network, replied, “No, but the answer does not lie in moving back to paper processes, it lies in auditing, just as we do with electronic systems today.”

Timothy Herbst, senior vice president of ICF International, noted, “I don’t think there will ever be an ‘unhackable verification system,’ and it would be folly to believe in such a thing.”

Eileen Rudden, co-founder of LearnLaunch, wrote, “We will be able to verify who you are, but will not be able to verify if what you say is true.”

Tim Bray, senior principal technologist for Amazon.com, observed, “I doubt it; people trust people, not systems.”

Joseph Turow, professor of communication, University of Pennsylvania, commented, “Not from major government hackers.”

Brad Templeton, chair emeritus of the Electronic Frontier Foundation, said, “Reliable and trustable, but not unhackable. However, the level of intrusion can be low enough for people to use them.”

Jack Park, CEO, TopicQuests Foundation, noted, “The question is too blue sky; ‘trust,’ ‘unhackable’ and ‘verification’ are three topics far too complex for a single sentence. Trust is earned; you cannot just put up a ‘trusted’ site; nothing is unhackable (strong claim), and who verifies the verification system? Rather, a different worldview needs to emerge, one in which people *participate* in reviewing information resources – a kind of crowd-sourced curation. In doing so, and under the right conditions, you end up with an increase in critical thinking skills that reduces the overall impact of false information. I would suggest looking closely at the scholarship on role-playing games, on the US Navy’s work with MMOWGLI, and think ‘World of Warcraft meets global sensemaking.’ This is not your grandmother’s concept of evening news; it’s a different way to look at global issue resolution; it is an outgrowth of the late Douglas Engelbart’s quest for humans, their knowledge and the communications tools they use all co-evolving to improve capabilities.”

Steve Newcomb of Coolheads Consulting replied, “No. Whatever one human can create, another can subvert.”

Johanna Drucker, professor of information studies, University of California-Los Angeles, commented, “Tracking and trailing points of entry into the discursive field will become increasingly sophisticated. So will ways to subvert and evade them.”

Hazel Henderson, futurist and CEO of Ethical Markets Media Certified B. Corporation, said, “Some will always be based on community and face-to-face trust, others in walled-garden platforms (e.g., Ethical Markets), where bots, trolls and open platforms with too little or no curation can be excluded. Similar trust systems may be used in the cloud. All blockchain platforms and applications still face the questions ‘Who owns this?’ or ‘Who’s in charge?’”

Peng Hwa Ang, an academic researching this topic at Nanyang Technological University, observed, “As we are learning, it is not possible to produce an unhackable system especially if that system is intended to scale. But it is possible to produce a system with a reasonable level of reliability and trust. The reason I say this is that the tech companies such as Google and Facebook now realise that they have the most vested interests to ensure the reliability and trustworthiness of the Internet. Without this reasonable level of reliability and trustworthiness, their business model will collapse. It’s like the US airlines before 9/11 – security was essentially non-existent; after that security is an essential check before the business can, well, fly. It is this awareness that will drive investments by the tech giants in raising reliability, trustworthiness and security on the Internet.”

Kelly Garrett, associate professor in the School of Communication at Ohio State University, said, “The question appears to presume that beliefs are shaped primarily by the communication systems upon which they rely. Human knowledge is finite and will always entail judgments made in the face of uncertainty. Even if we could create systems that consistently reported an unbiased summary of human knowledge on any given topic, it is foolish to imagine that individuals would accept those conclusions unquestioningly. Technology can help, providing tools that help individuals make better informed decisions, but it must be accompanied by social change.”

Susan Price, lead experience strategist at Firecat Studio, noted, “Blockchain offers an example of a trusted verification system. Combinations of distributed verification could also serve well. Human effort based systems show promise.”

Tom Wolzien, chairman of The Video Center and Wolzien LLC, said, “Reliable and trusted are different from unhackable. Never totally unhackable, but reliable and trusted will develop with their own editorial and verification standards.”

Thomas Frey, executive director and senior futurist at the DaVinci Institute, replied, “The question is similar to the question about building a hacker-proof internet. There are no perfect systems, but we should be able to get to levels of 98-99% reliability. Maybe even higher. But it will always be the 1-2% failure rate that most will focus in on.”

Daniel Berleant, author of the book “The Human Race to the Future,” commented, “One way to distinguish trustworthy information from the rest is to recognize trustworthy opinion leaders. Such trustworthiness will likely become a valuable currency, emerging as a distinct, recognized and valued characteristic of public commentators.”

Stephan Adelson, an entrepreneur and business leader, said, “The answer to this question, I believe, is related to net neutrality and the future independence of the major media outlets. The government will, in my opinion, continue to dictate news that is unreliable.”

Jacqueline Morris, a respondent who did not share additional personal details, replied, “I doubt that anything could be ‘unhackable.’ If it’s created, someone will be able to hack it. It may not be quick or easy, but it will be possible. The question is – would the value of the information that could be obtained from the hack be worth the time, effort and cost required? When that answer is ‘yes,’ the hack will be attempted.”

Mike O’Connor, a self-employed entrepreneur, wrote, “Don’t let ‘perfect’ get in the way of ‘pretty good.’ ‘Easy to use’ trumps ‘perfect’ in my book – example: CPanel’s implementation of Let’s Encrypt.”

Katim S. Toray, an international development consultant currently writing a book on fake news, noted, “No, simply because it’s not possible to develop an unhackable system. In the end, I think we should heed the advice (http://www.poynter.org/2017/what-can-fact-checkers-learn-from-wikipedia-we-asked-the-boss-of-its-nonprofit-owner/465634/) of the Wikimedia Foundation’s executive director that we should aim for an ‘approximation of the truth,’ and as much transparency as is possible.”

Wendell Wallach, a transdisciplinary scholar focused on the ethics and governance of emerging technologies, The Hastings Center, wrote, “Probably not, or at least not for anything more than circumscribed purposes. The existing infrastructure is just too porous. Replacing that infrastructure is too costly. In addition, finding a shared value system upon which to build more reliable infrastructure will be difficult, if not impossible.”

Amy Webb, author and founder of the Future Today Institute, wrote, “There is a way to create reliable, trusted verification systems for news, but it would require radical transparency, a fundamental change in business models and global cooperation. Fake news is a bigger and more complicated problem than most of us realize. In the very near future, humanity’s goal should be to build an international, nonpartisan verification body for credible information sources. Within the decade, machine learning can be applied to auditing – randomly selecting stories to fact-check and analyze expert sentiment. In the decade that follows, more advanced systems would need to authenticate videos of leaders as real, monitor augmented reality overlays for hacks and ensure that our mixed reality environments represent facts accurately. The best defense against fake news is a strong, coordinated offense. But it will take cooperation by both the distributors – Facebook, Google, YouTube, Twitter – and the world’s news media organizations. Google and Facebook could take a far more aggressive approach to identifying false or intentionally misleading content and demoting websites, channels and users who create and promote fake news. Twitter’s troll problem could be tackled using variables that analyze tweet language, hashtag timing and the origin of links. YouTube could use filters to demote videos with misleading information. News organizations could offer a nutritional label alongside every single story published, which would list all the ingredients: everyone in the newsroom who worked on the story, all of the data sets used, the sources used, the algorithms used, any software that was used, and the like. Each story that travels digitally would have a snippet of code and a badge visible to viewers. Political stories that are factually accurate but represent liberal or conservative viewpoints would have a verification badge indicating a political slant, while non-political stories would carry a different badge. The easiest way to do this would be to use the existing emoji character system. The verification badge convention is something we’re already familiar with because of Twitter and Facebook. Similarly, stories with verified badges would be weighted more heavily in content distribution algorithms, so they would be prioritized in search and social media. Badges would be awarded based on credible, factual reporting, and that wouldn’t be limited to traditional news organizations. Of course, it’s possible to hack anything and everything, so whatever system gets built won’t be impenetrable.”
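
Webb’s ‘nutritional label’ amounts to structured metadata that travels with each story. A minimal sketch of what such a label might look like, assuming a hypothetical schema (the field names and badge values here are illustrative, not an existing standard):

```python
import json

# Hypothetical "nutrition label" for a single story, following Webb's list of
# ingredients. The schema is an assumption, for illustration only.
story_label = {
    "headline": "Example story",
    "newsroom_contributors": ["reporter", "editor", "data analyst"],
    "data_sets": ["https://example.org/dataset"],
    "sources": ["named official", "court filing"],
    "algorithms_and_software": ["headline-testing tool", "charting library"],
    "badge": "verified-political",  # vs. e.g. "verified-nonpolitical"
}

# Serialized, this is the "snippet of code" that could travel with the story
# for platforms to read when weighting it in distribution algorithms.
print(json.dumps(story_label, indent=2))
```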

Ian O’Byrne, assistant professor at the College of Charleston, replied, “There may be a way to construct reliable, trusted, unhackable verification systems. I believe that the answer to this lies in an openly shared system like blockchain and distributed ledger technologies in which the info is openly shared online for all to review. Open, encrypted and transparent may be the best path.”

David A. Bernstein, a marketing research professional, said, “My fear is that there is no reliable way to determine the ‘truth.’ Unlike math or science, it is difficult to place a stamp of ‘true’ or ‘false’ on an opinion. However, we could score statements with some sort of consistency score based on someone’s prior statements on the same topic.”

Michael Wollowski, associate professor at the Rose-Hulman Institute of Technology, commented, “It’s called the New York Times. We always had the National Enquirer; it is just that now we have many more information sources. If you want to read them, go ahead. If you want trusted information, do what people have been doing for a long time: peruse sources that are known to diligently check their facts. Use sources from several countries/continents.”

Axel Bender, a group leader for Defence Science and Technology (DST) Australia, said, “No. All systems (including, by the way, humans) will have vulnerabilities that can be exploited by a sufficiently educated/sophisticated hacker/attacker. However, I would expect verification systems to improve, especially if they are systems of multiple heterogeneous (verification) agents that complement each other’s vulnerabilities (the ‘Swiss cheese’ model in risk management).”

Monica Murero, a professor and researcher based in Europe, wrote, “Creating reliable, trusted and unhackable (if possible) verification systems would not solve the problem of fake-news creation and circulation, in my opinion. In fact, even now there are reliable sources of online information (famous hospitals like the Mayo Clinic providing reliable health information to anyone online, not-for-profit associations, et cetera). However, millions of people currently trust (more) their ‘social’ friends and acquaintances that feed their personal information spaces with various types of content, online and offline (fake or not fake, from news to romance). I mean, reliable and trusted systems of information and communication are already there; they rely on personal relationships and networks. The problem I foresee is in part due to the ‘technical’ nature of digital information (easy to create and circulate by anyone with minimal ‘tech’ abilities).”

Ned Rossiter, professor of communication, Western Sydney University, replied, “Systems will always be vulnerable to the curiosity and persistence of hackers. To speak of reliable, trusted and unhackable verification systems within digital environments is an oxymoron.”

Andrew Feldstein, an assistant provost, noted, “Not necessarily unhackable but systems can be created to minimize the damage. Perhaps voluntary bio-authentication?”

Giovanni Luca Ciampaglia, a research scientist at the Network Science Institute, Indiana University, wrote, “The success of Wikipedia and of the open-source software model gives hope that we will build dependable verification systems in the future. It will be important, though, to understand their limitations so that we don’t put excessive faith in them.”

Martin Shelton, a security researcher with a major technology company, said, “We can’t build absolutely unhackable verification systems, but we can do a lot better. A good example: using strong cryptography, it’s possible to create systems that do simple things like verifying the legitimacy of an electronic transaction. Much like your own personal signature is difficult to fake, encryption can be used to make a cryptographic signature to attest that an event actually happened. This is how electronic currencies like Bitcoin allow users to check that a transaction took place. One day, I can imagine news organizations providing similar forms of verification to let users know that they actually wrote a story, and that it wasn’t a fraud. And while there’s no such thing as an unhackable system – for verification or otherwise – we can do far better when it comes to helping people reliably verify the source of a document.”
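
Shelton’s analogy can be made concrete with public-key signatures. A minimal sketch using the Python ‘cryptography’ package and an Ed25519 key; the newsroom workflow around it is an assumption, not a description of any existing product:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The news organization keeps the private key; readers use the public key.
newsroom_key = Ed25519PrivateKey.generate()
public_key = newsroom_key.public_key()

story = b"Full text of the published story."
signature = newsroom_key.sign(story)  # attests this newsroom produced this text

public_key.verify(signature, story)  # passes silently: the story is authentic
try:
    public_key.verify(signature, story + b" quiet edit")
except InvalidSignature:
    print("Signature check failed: the text is not what was signed.")
```

Verification of this kind proves only origin and integrity, not truth, which is exactly the limit Shelton and others in this canvassing point out.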

Sandra Garcia-Rivadulla, a librarian based in Latin America, replied, “Probably there is nothing that can’t be hacked over time. As new technologies develop and the Internet is embedded ever deeper in our lives by means of the IoT, wearables and other intrusive technology, it is very important that people can feel confident about the information they let in. Technology like blockchain could be a good way to reach a safer and more trusted sharing of information.”

Peter Lunenfeld, a professor at UCLA, commented, “At many levels, locally, nationally, globally, trust in basic institutions has broken down. It will take a reuniting of these social fabrics before engineers will be able to offer a technological fix to what is a social problem. This doesn’t mean that engineers, the news media, and social entrepreneurs should not strive to create such systems, just that the problem is broader than such tweaks can fix right now.”

Jason Hong, associate professor, School of Computer Science, Carnegie Mellon University, said, “Currently, no, this is well beyond the state of the art. Gene Spafford once said, ‘The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards – and even then I have my doubts.’ That was 1989 and is still true today.”

Avery Holton, professor at the University of Utah, wrote, “In terms of reliable and more trustworthy systems of verification, we’re already seeing a turn toward the public. Having the public involved, à la ‘wiki-style,’ is a good first step. The blending in of algorithmic and computational systems is a solid second one.”

Daniel Menasce, professor of computer science, George Mason University, replied, “There may be a way to create verification systems using machine learning techniques and big data. The real question is: Why would anyone interested in disseminating falsehoods want to use such systems?”

Matt Stempeck, a director of civic technology, noted, “Most verification signals can be misappropriated by third parties, as we’ve seen in the recent spates of sophisticated phishing attacks. More problematic is that many information consumers judge the content based on the person they know that’s sharing it, not a third-party verification system.”

Dave Burstein, editor of FastNet.news, said, “Folks like Dave Clark at MIT have developed systems that could come close, but outright fraud like this is rarely the issue in policy.”

David Manz, a cybersecurity scientist, replied, “Nothing is perfect, but we conduct financial services and shopping online today and accept the risks. We can similarly accept the risks with news tampering IF there is a desire for credible news.”

Hjalmar Gislason, vice president of data for Qlik, noted, “Yes, it is possible using technologies such as blockchain, however I am not convinced such systems will be widely used in this context within the next decade (also, the question is not quite clear as to what you mean by ‘verification system,’ i.e. verification of information, origin or identity).”

Wendy Seltzer, strategy lead and counsel for the World Wide Web Consortium, replied, “No. We should focus on ways to reduce the impact and reach of falsehoods and spoofs, because we won’t be able to stop them entirely. In a combined social-technical system, technical solutions aren’t enough.”

Emmanuel Edet, head of legal services, National Information Technology Development Agency of Nigeria, observed, “It is impossible to create a reliable, trusted, unhackable verification system. This is because the strength of any solution depends on the knowledge of whoever created it, and there will always be a more knowledgeable person to hack such a system. Alternatively, even though artificial intelligence is in its infancy, it may help provide a more secure information environment.”

Joshua Hatch, president of the Online News Association, noted, “Reliable, yes. Trusted – by some. Unhackable, no. I don’t think the solution to this is technical. I think it’s social. It’s about people learning and understanding the consequences of bad information. It’s about people valuing the journalistic process and putting truth ahead of ideology.”

Nathaniel Borenstein, chief scientist at Mimecast, commented, “No. Not without an absolute authority that everyone is required to trust.”

Jameson Watkins, a respondent who shared no additional identifying details, said, “Yes, I think it’s possible, but to do so we need to invest in a national identity structure first. We need to stop treating SSN, which is an identifier, as a verifier. We need an identity card, physical and virtual, assigned at birth.”

Irene Wu, adjunct professor of communications, culture and technology, Georgetown University, said, “There is no perfectly unhackable system. If there were, it could be used for harm as easily as for good. However, we may develop technical ways to improve the quality of news we get. In other arenas, we rely on safety certifications for home appliances, or brand names for fashion clothing. Similarly, other markers could be developed for information. It used to be that we trusted a newspaper; maybe in the future it’s no longer just the newspaper but a certification that reporters can earn, or an industry association of online news sources that adheres to good codes of practice.”

Denis Clements, chief operating officer of PlanetRisk Inc., replied, “The issue is less about hacking or not hacking; trusted information providers will develop reliable systems that have sufficient protections to ensure the integrity of the system and, therefore, of the information provided. It is not a question so much of being unhackable but rather of the ability to detect a hack and provide updates to consumers.”

Alexios Mantzarlis, director of the International Fact-Checking Network based at Poynter Institute for Media Studies, commented, “I am in 100% conflict of interest territory here, but I think the International Fact-Checking Network code of principles is a useful experiment in trying to make verification more rigorous (Process is explained at bit.ly/FCCOPprocess).”

Alan Inouye, director of public policy for the American Library Association, commented, “Not for most people in most situations. Anything that is locked down won’t be usable by most people. The central benefit of digital content and ubiquitous networking derives from sharing and collaborating – which necessitates relatively open systems.”

Scott Amyx, managing partner of Amyx Ventures & Amyx+, wrote, “Some promising areas are blockchain and quantum computing. Integer factorization, which underpins the security of today’s public-key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers that are the product of a few prime numbers. By comparison, a quantum computer could efficiently solve this problem using Shor’s algorithm to find the factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial-time algorithm for solving the problem. This is what China is experimenting with via its quantum communications satellite. The mission is testing quantum entanglement over unprecedented distances and the creation and transmission of hack-proof quantum key distribution. However, these technologies do not address the underlying mechanism of defining what is or is not a reliable fact.”
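
The dependence on factoring that Amyx describes can be seen with toy numbers. A minimal sketch (deliberately insecure sizes; real RSA moduli are hundreds of digits long) showing that whoever can factor the public modulus can rederive the private key, which is the step Shor’s algorithm would make efficient:

```python
# Toy RSA: the security rests entirely on the difficulty of factoring n = p * q.
p, q = 61, 53                  # secret primes (absurdly small, illustration only)
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # computable only with knowledge of p and q
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message  # factoring n yielded d, so we decrypt
```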

Andreas Vlachos, lecturer in artificial intelligence at the University of Sheffield, commented, “No, because any algorithm or model can be fooled, even one operating at a human level (humans are fooled anyway).”

Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “There have been ways, and there will be. The foremost way has been and will be to consult alternative sources for the same information – e.g., online but also in print.”

Evan Selinger, professor of philosophy, Rochester Institute of Technology, wrote, “This is the wrong framing. Asking about perfect solutions – which is exactly what ‘unhackable’ suggests – is like asking if paradise will be created for everyone on Earth. A more realistic way to look at these issues is to think in terms of minimizing probabilities of undesired outcomes and strategizing accordingly. Setting overly-high expectations creates unrealistic confidence.”

Garrett A. Turner, a vice president for global engineering, noted, “I think this system would rely solely on a subjective perspective. There are certain news media that I tend to avoid as a user due to misreporting or unreliable stories. For example, I would not rely on TMZ to provide me information on commodity trading.”

Maja Vujovic, senior copywriter for the Comtrade Group, noted, “In the traditional system of distribution of information, replicated online, there isn’t a fail-proof way to combat deception and ensure information is reliable; anyone can publish and spread any falsehood. This is increasingly abused with the growth of non-traditional sources that the Web has brought forth – forums, blogs, social media et cetera. Still, a combination of new standards can contain this chaos. This calls for more elaborate stamp-like authentication protocols at the source of information, expert vetting along its distribution path, risk-awareness education and the use of distributed ledgers for record keeping. These protocols could allow automatic filtering out of inauthentic information and also secure the vetting role of mass media well into the future.”

Bill Adair, Knight Professor of Journalism and Public Policy at Duke University, commented, “Yes. The fact-checking community is doing that and a variety of groups, including my team at Duke, are bringing people and groups together to create new tools and apps that can combat misinformation.”

Stuart A. Umpleby, professor emeritus, George Washington University, wrote, “The academic system of peer review works pretty well. News outlets with professional editors were common in the past. I assume people will rely on them more in the future, given the recent experiences with alt media. Young people should receive instruction in how to interpret media and messages. Some of this is happening already. Public discussion of common conspiracy theories and why they are false might be helpful.”

Gina Neff, professor, Oxford Internet Institute, said, “Countries like Germany regulate media in very different ways, shaping how social media there evolved. In the US, the laissez-faire approach to media means reliable, trusted and unhackable verification systems are unlikely to be put into action.”

William L. Schrader, a former CEO with PSINet Inc., observed, “Yes, there are ways to tag posts to individuals with names and locations which can be made to be reliable and trusted (to the extent you trust the person or organization posting it). There is no such thing as ‘unhackable;’ so don’t ask that question. All computers and systems can be hacked, but we can detect that hack… and that is what helps us stay alive.”

Matt Armstrong, an independent research fellow working with King’s College, formerly executive director of the US Advisory Commission on Public Diplomacy, replied, “No, and this is the wrong question. It is a poorly informed question that ironically suggests a nanny state is the solution, protecting people from the bad. The question ignores the consumer and the ability to hack the system. Consumers will look for alternatives outside of the verification system, especially if the information outside the system is attractive to them.”

John Klensin, longtime leader with the Internet Engineering Task Force and Internet Hall of Fame member, commented, “‘Reliable’ implies a frame of reference or official version, ‘trusted’ is in the mind of the beholder, and ‘unhackable’ implies this is a technical problem, not a social one – but it has always been a social one in every regard other than, maybe, some identification and authentication issues.”

Don Kettl, professor of public policy at the University of Maryland, said, “Unhackable systems might well be impossible. But increased transparency will improve the reliability of data systems.”

Dan Gillmor, professor at the Cronkite School of Journalism and Communication, Arizona State University, commented, “Short answer is no. Software and people can be hacked, period. But that doesn’t mean we’re helpless. We can do a lot to improve verification. This will start with metadata. It will extend to human transparency (e.g. creating norms that encourage standing behind one’s words rather than being anonymous, while preserving anonymity for rare but crucial situations). We can harden systems against attack. In the end we can make the ecosystem more, but not perfectly, trustworthy.”

James Schlaffer, an assistant professor of economics, commented, “No, there is not. Information has been diffused to the people, and any push to make information held by the people will be viewed as a way to control information and create a narrative.”

Justin Reich, assistant professor of comparative media studies, MIT, noted, “The better question is ‘Will Facebook create a reliable verification system?’ since that platform has achieved unprecedented status as the dominant source of news for Americans. They won’t develop such a system because it’s antithetical to their incentives and technically infeasible. Fake news is the kind of high-throughput, viral content that’s terrific to sell ads against. Moreover, communities really enjoy shared fake news: Judith Donath has important research here suggesting that sharing fake news can provide powerful signals of group affiliation even when people know it’s fake. Spreading fake news is a mechanism for self-expression and for community building – both squarely within the mission of Facebook. It’s also financially lucrative to allow, and politically very difficult to deal with, since the bulk of fake news comes from the Right and it is in political ascendancy. The corrosive effects of fake news on our society are but an unfortunate externality. Compounding the problems with incentives, algorithms can be reverse-engineered and gamed, and crowdsourcing methods will lead to mobilizing ideological crowds versus mobilizing people committed to objective truths. Fake-news verification systems need to be built inside people’s heads.”

Stowe Boyd, futurist, publisher and editor in chief of Work Futures, said, “‘Unhackable’ is a bit strong, but we should be able to create encrypted verification that will be too expensive to hack at scale.”

Larry Diamond, senior fellow at the Hoover Institution and FSI, Stanford University, observed, “I won’t comment on the technical dimensions, but I do think we can get more reliable and trusted information if the digital platforms invest greater human and technical resources in vetting and verification. I definitely don’t want to see governments play this role.”

Scott Fahlman, professor emeritus of AI and language technologies, Carnegie Mellon University, said, “If you are looking for a software system that RELIABLY labels statements as true or false, this is impossible. Not even well-informed humans can do this. But we can detect the most egregious cases and link to other evidence and opinions, and this will improve with better AI methods.”

Fredric Litto, professor emeritus, University of São Paulo, Brazil, wrote, “Without the implementation of sophisticated biometric identification in the logon/logoff and writing/reading processes, which would make trouble-makers identifiable and punishable, there will be no end to increasing insecurity throughout society.”

Kevin Werbach, professor of legal studies and business ethics, the Wharton School, University of Pennsylvania, said, “It’s definitely possible to create robust verification systems, but that doesn’t necessarily solve the problem. What is being verified? And will people prefer the content that goes through those mechanisms?”

Filippo Menczer, professor of informatics and computing, Indiana University, noted, “Yes. We can develop community trust standards backed by independent news and fact-checking organizations, and implemented by Web and social media platforms. It won’t be perfect and abuse will continue to exist, but its harm will be reduced.”

Garland McCoy, president, Technology Education Institute, commented, “Yes, but the platforms don’t have much of a ‘shelf life.’ Take blockchain/public ledgers, for example: they meet the criteria, but they have a flaw that soon renders them useless – you can’t dispose of the mistakes, so both mistakes and corrections march on together, and in time the mistakes accumulate in such a way that they clutter the platform with background confetti, like the ‘snow’ in the background of an old B&W TV set that eventually so disgusts the person watching that a brick finds its way through the TV screen.”

Meamya Christie, user-experience designer with Style Maven Linx, replied, “No; truths and untruths have been a part of the human experience from the beginning of time. The illusion of truth will always be a constant. If I were to say yes, then I would have to wonder who the gatekeepers of this ‘trusted, unhackable verification system’ would be. The thought is complex in terms of who would have access, what it would be used for, what about the poor, etc. So instead I choose to think: yin vs. yang, light vs. dark, evil vs. good. One simply does not exist without the other.”

John Markoff, retired journalist formerly technology reporter for the New York Times, said, “Certainly, but there is a tradeoff in convenience. I don’t think people are willing to make those compromises.”

Vince Alcazar, business owner and retired US military officer, wrote, “At the extreme, devices are paired to a person, and every online identity is linkable to a real person who does not live in a nation beyond the reach of sanctions for untrustworthy online behavior. However, to get to this construct, the world of the next decade must take on numerous Orwellian characteristics. Technologically speaking, there is no cost-accessible method; the poor and the middle classes everywhere would be priced out of the devices and usage.”

Sam Punnett, research officer, TableRock Media, replied, “Verification is a matter of confirmation through the use of trusted sources. The best insurance against misleading information is confirmation by multiple trusted sources. Much of news information is interpreted facts. Sources can be used to verify facts. The brand of a particular news organization or author offers credentials for determining the quality of the interpretation of facts. You can technically verify the source already (such as a registered IP associated with a brand). Verification is a matter of due diligence. For solutions to actual hacking it is always a matter of ‘measure’ vs. ‘counter-measure.’ This will continue for the foreseeable future.”

Anne Mayhew, retired chief academic officer and professor emerita, University of Tennessee, replied, “No, there are no easy answers but we will learn and improve as we go along. This is the story of all regulatory processes.”

Greg Lloyd, president and co-founder of Traction Software, wrote, “Yes, for a reasonable degree of trust, with increasingly high confidence. It will likely be based on each person’s smartphone as a secure token of identity, plus a layer that allows attributed-name, pseudonymous or truly anonymous use of services.”

Luis Martínez, president of the Internet Society’s Mexico chapter, observed, “Yes; more secure systems are arising and encryption techniques are improving.”

Shawn Otto, author of “The War on Science,” observed, “It’s probably a false hope to rely on technology alone to do this. New communication technologies from the printing press forward have always disrupted societies when first introduced, and have often been first adopted by authoritarian players with an agenda. The strongest answer is for mainstream publications to make a commitment to never engage in false balance and to move their editorial mission and value system from presenting all views equally (which elevates extreme views and further partisanship) to holding the powerful accountable to the evidence. As they begin to distinguish themselves as reliable participants in a democracy again, people will come to separate reliable journalism from yellow journalism on their own.”

Danny Rogers, founder and CEO of Terbium Labs, replied, “Not really. Things built by humans will always be breakable by other humans. History has proven that time and again. Still, we can do way better than we’re doing now, and continue to play the cat-and-mouse game effectively to create increasingly better systems.”

Dane Smith, president of the public policy research and equity advocacy group Growth & Justice, noted, “Yes of course there is a way. Wikipedia proves it. Trust is in the eye of the beholder, however, and fundamentalists of all stripes are distrustful of information from sources other than their deity, and we’re not likely going to get past that.”

Susan Hares, a pioneer with the NSFNet and longtime internet engineering strategist, now a consultant, said, “Yes, reliable, trusted, unhackable verification systems are within the range of today’s technology. Public writers and readers can be protected by current cryptography algorithms if new methods for storing and retrieving public information are created. As public outcry over fake news increases, requirements that multiple sources be documented and tested within a program can be met. Academic systems already do cross-checking of academic sources. The real problem today is that it costs money to secure these systems. If the citizens of the United States or other countries want these systems, then public and private money must be invested to create them.”

Mike Gaudreau, a retired IT and telecommunications executive, commented, “The hackers will try and they will gain access. It’s their game. If the US government can be hacked I am sure they can hack anything.”

Louisa Heinrich, founder of Superhuman Ltd, commented, “Maybe. But I think a combined human/technological system is more interesting (and likely to be more resilient) than a purely technological one. In any case, it would need to be a distributed system.”

Michele Walfred, a communications specialist at the University of Delaware, said, “Publishing the date an online website was established, a blue verified check mark if possible, some indication of whether authors/owners are known or anonymous, and country of origin would help – similar to the box most newspapers print showing who the editors and publishers are. Establish some type of journalism stamp of approval, not for ideas but for methods of research and integrity, similar to a ‘Good Housekeeping seal of approval.’”

Jeff Jarvis, professor at the City University of New York Graduate School of Journalism, commented, “Yes, we need a shared fact base and that is why the search for reliable verification systems is so tempting. But the problem with misinformation is much larger than fact-checking can solve. We lack diversity in the news media ecosystem (not just in newsrooms but in the industry as a whole) and that leads to a lack of trust. Radical partisans in faux media (Fox News, Breitbart, et al.) use facts to feed their schemas (one crime by an immigrant is made to look like a crime wave); thus facts alone will not solve this.”

David Sarokin, writer, commented, “Of course there isn’t. If there were, it would exist!”

David Goldstein, researcher and author of the Goldstein Report, noted, “Everything can be hacked. Humans are involved so it will always be fallible.”

Gianluca Demartini, a senior lecturer in data science, observed, “I believe it is impossible to build a 100% secure verification system, but I am confident that such systems will become mainstream and will support both content creators and consumers to assess the reliability of information.”

Richard Jones, a self-employed business owner based in Europe, said, “No way. Opinions, realities and belief systems themselves dictate truth. Ultimately there are no facts in societies without common belief systems. Witness the passionate divisions over the credibility of the BBC. Witness the traditional consciousness of newspapers and other media as the means for moguls/tyrannies to exert control.”

Iain MacLaren, director of the Centre for Excellence in Learning & Teaching, National University of Ireland-Galway, commented, “My concern is that sources which are seen, today, as ‘trusted’ are not themselves reliable providers of unbiased, neutral information. This has certainly been the case with the BBC in recent years, for example.”

Romella Janene El Kharzazi, a content producer, entrepreneur and user activist, said, “Requiring several layers of verification will reduce the likelihood of hacking. However, also using AI for systems to self-diagnose will be an innovation in the future. Finally, AI will be used to identify suspicious users across networks and suspend activities until a human can review.”

Stephen Bounds, information and knowledge management consultant, KnowQuestion, noted, “No system is unhackable. That is not the problem, however. Existing verification systems can be 99.9% effective but require a skeptical mindset before they will be used. Unless people are committed to getting the right answer, any answer from a trusted proxy will be deemed sufficient.”

John Wilbanks, chief commons officer, Sage Bionetworks, replied, “No. Because the weakness of all technical systems is the people involved – the designers, builders, and users. And we’re always going to be hackable. Until we get better (or die off and are replaced by people better able to deal with it) it won’t improve.”

R. Lee Mulberry, managing partner, Northern Star Consulting, said, “No; the news business is primarily a people business. Certainly the medium is becoming mainly electronic, but the primary fix is going to remain human.”

Ed Terpening, an industry analyst with the Altimeter Group, replied, “It’s possible to create verification systems, but society needs unbiased institutions that can be trusted. Since the US government is now blatantly political, new non-profit institutions without bias are needed.”

Basavaraj Patil, principal architect for AT&T, wrote, “Yes, it is possible to create such systems. AI, crowd-sourcing and similar means can be used to create them.”

Paul Jones, director of ibiblio.org, University of North Carolina-Chapel Hill, noted, “Verification for news publishers is almost there with blockchain technology. You could know at least that an article or video claiming to come from the Washington Post actually came from there unaltered. Verifying content is more difficult and is a social and ethical issue.”
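
A minimal sketch of the kind of tamper-evident publisher log Jones is pointing at, using only hashing; a real deployment would add signatures and replication across parties, and this scheme is an illustrative assumption rather than any system an actual publisher operates:

```python
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each entry commits to the previous one, so altering any past record
# breaks every later link in the chain.
chain = [{"prev": "0" * 64, "article_sha256": None, "ts": 0}]  # genesis entry

def publish(article_text):
    chain.append({
        "prev": block_hash(chain[-1]),
        "article_sha256": hashlib.sha256(article_text.encode()).hexdigest(),
        "ts": time.time(),
    })

def verify(article_text):
    digest = hashlib.sha256(article_text.encode()).hexdigest()
    return any(b["article_sha256"] == digest for b in chain[1:])

publish("Example story as published.")
assert verify("Example story as published.")
assert not verify("Example story, quietly altered.")  # the "unaltered" check
```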

Frank Kaufmann, founder and director of several international projects for peace activism and media and information, commented, “No it will not be possible. This is the wrong approach to fixing the ‘news’ problem. I call this the ‘cops and robbers’ approach.”

Sam Lehman-Wilzig, associate professor and former chair of the School of Communication, Bar-Ilan University, Israel, wrote, “Crowdsourcing seems the most efficient way of developing verification systems, with algorithms that ‘rank’ people highly who consistently provide verifiable facts (and/or call out those who don’t), placing them higher on any algorithmic system of ‘news’ dissemination. I also believe that AI will enable other forms of fact verification, without human input (by the end of the human/algorithm learning stage).”
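
One way to read this proposal is as reputation-weighted voting. A minimal sketch with made-up users, scores and an update rule, all of which are illustrative assumptions:

```python
# Weight each user's vote on a claim by a reputation score reflecting how
# often their past calls matched later-verified outcomes.
reputation = {"alice": 0.9, "bob": 0.2}      # assumed prior track records
votes = {"alice": True, "bob": False}        # True = "claim checks out"

def weighted_verdict(votes, reputation):
    score = sum(reputation[u] * (1 if v else -1) for u, v in votes.items())
    return score > 0

print(weighted_verdict(votes, reputation))   # True: alice's record outweighs bob's

def update_reputation(user, was_correct, lr=0.1):
    # Nudge reputation toward 1 after correct calls, toward 0 after wrong ones,
    # so people who consistently provide verifiable facts rank higher over time.
    target = 1.0 if was_correct else 0.0
    reputation[user] += lr * (target - reputation[user])
```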

Bradford W. Hesse, chief of the health communication and informatics research branch of the US National Cancer Institute, said, “Probably not, given the way the question is worded. We will never produce completely ‘unhackable’ verification systems, in the same way that we will not be able to create robbery-proof banks. What we can do is cultivate trust in the systems we use to combat fraud and address violations of the social contract (as we have done with eCommerce).”

Clifford Lynch, director of the Coalition for Networked Information, noted, “I am very skeptical.”

Glenn Grossman, a consultant in the financial services industry, replied, “I am not sure it can be 100%, but some form of verification system would be great.”

Peter Dambier, DNS guru for Cesidian Root, commented, “End-to-end encryption will help a lot, and all means of breaking encryption sabotage it.”

Peter Eckart, director of health and information technology, Illinois Public Health Institute, replied, “I assume that there are and will be ways to create ‘attribution’ systems, but trust implies an understanding of the source deep enough to judge its merit. At the heart of this is an objective assignment of truth or falsehood, and that feels in jeopardy right now.”

Peter Levine, associate dean and professor, Tisch College of Civic Life, Tufts University, observed, “I’d distinguish between hard facts, which may be verified by reliable and trusted systems, and political arguments. Arguments do vary in their reliability, plausibility and level of responsibility, but they are not subject to being verified in the same way as facts. And arguments are much more important and pervasive than raw facts.”

Megan Knight, associate dean, University of Hertfordshire, said, “No. Verification, like objectivity, is a holy grail. Mythical, magical and something to aspire to that will never be achieved. The whole principle of relying on technological systems is faulty – we need to create intelligent, critical and informed people, not systems to substitute for human judgment.”

Dave Kissoondoyal, CEO, KMP Global, replied, “Sooner or later, there will be clearing houses or agencies set up on the Internet whose main task will be to check the veracity of information. The internet community will then only trust information that has been verified by the clearing houses or that comes from verified individuals, entities or organisations.”

Matt Moore, a business leader, observed, “You can see the potential of distributed ledgers like blockchain BUT the key issue is that anything not actually coded into the blockchain requires some trusted connection point to get data on there. These trusted connection points are the weak spots that come under attack.”

Jeremiah Foster, a respondent who shared no additional background details, said, “The same methods that have always been used; human curation, professional editing, critical judgment.”

Carl Ellison, an early internet developer and security consultant for Microsoft, now retired, commented, “Of course. We have authenticated communication channels. Having them won’t solve the fake news problem.”

Adam Powell, project manager, Internet of Things Emergency Response Initiative, University of Southern California Annenberg Center, said, “Yes, we have them, and they have such names as nytimes.com and ap.org.”

David Harries, associate executive director for Foresight Canada, replied, “YES. But each such system must be self-contained, and not connected in any way to the internet at large.”

Michael Marien, senior principal, The Security & Sustainability Guide and former editor of The Future Survey, wrote, “I use trusted sources, like New York Times, which covers most of the news fit to print.”

Paul M.A. Baker, senior director of research for the Center for Advanced Communications Policy, observed, “I can see two different ways of providing a parity check: 1) crowdsourcing, where readers provide comments or votes on the item and a ‘market’ for veracity can occur, and 2) automated AI systems that run algorithms to test and cross-check with other extant or trusted sources.”

Deborah Stewart, an internet activist/user, wrote, “Technology will advance. When there is a need, things evolve.”

Sasa M. Milasinovic, information and communication technology consultant with Yutro.com, replied, “No, because of human influence.”

Jonathan Ssembajwe, executive director for the Rights of Young Foundation, Uganda, commented, “There is a way to create reliable, trusted, unhackable verification if all stakeholders in the internet system – for example, domain companies, internet service providers, organisations working for internet safety and users, among others – work together in ensuring a reliable, trusted internet.”

Bernie Hogan, senior research fellow, University of Oxford, noted, “All systems must work on some web of trust. We live in post-modern times where we have long since departed from a world of absolute truths to one of stable regularities. Our world is replete with uncertainty. To make a system that is certain also makes it rigid and impractical. We already know that one-time pads form excellent unhackable security, but they are completely unrealistic in practice. So I genuinely challenge the question – reliable does not mean perfect or unhackable. We must reject these absolutes in order to create more practical working systems and stop discourses that lead us to false equivalences between different paradigms. Some are still much more reliable than others even if they are not perfect.”
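
Hogan’s one-time-pad reference is worth unpacking: the construction is provably unbreakable, but only under conditions (a truly random key as long as the message, shared in advance and never reused) that make it impractical at scale. A minimal sketch:

```python
import secrets

message = b"meet at dawn"
key = secrets.token_bytes(len(message))              # random pad, same length as message

cipher = bytes(m ^ k for m, k in zip(message, key))  # encrypt: XOR with the pad
plain = bytes(c ^ k for c, k in zip(cipher, key))    # decrypt: XOR again
assert plain == message
# Perfect secrecy holds only if the key never leaks and is never reused;
# distributing such keys to everyone is the impractical part Hogan flags.
```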

Joanna Bryson, associate professor and reader at University of Bath and affiliate with the Center for Information Technology Policy at Princeton University, said, “It’s unlikely that anything used by humans will ever be perfect, since our implementation tends to be erratic (e.g., writing passwords on sticky notes). On the other hand, science is an example of a reliable system – it is occasionally perverted but progresses robustly. We can make other such systems.”

Adrian Schofield, an applied research manager based in Africa, commented, “In theory, yes – provided the developers are reliable, trustworthy and ‘unhackable’ themselves. Any security product is only as good as its weakest link – usually an underpaid or otherwise vulnerable employee.”

Yuri Hohlov, a respondent who shared no additional background details, replied, “The only way to reduce the level of misinformation is to use the Digital ID for authors of the information.”

Alf Rehn, chair of management and organization studies, Åbo Akademi University, commented, “We’ll probably never see a perfectly unhackable singular system, as the incentive to hack such a one would simply be too great. Instead, we will probably move towards something more distributed and mesh-like, possibly utilizing blockchain technologies.”

Riel Miller, an international civil servant who works as team leader in futures literacy for UNESCO, commented, “Reliable, trustworthy and secure ‘verification systems’ are in the eye of the beholder and context. A ‘truth’ vending machine or system is not doable. What is entirely feasible, and is always more or less functional, are systems for assessing information in context and related to need. With the decline in the status and power of the former gatekeepers of ‘good’ knowledge, processes are unleashed to seek alternatives. Mass solutions are not the only way and are likely to be sub-optimal from many perspectives. As new sources and dynamics for countervailing power emerge, so too will fit-for-purpose assessment. It will be messy and experimental; that’s complex evolution.”

Michael Pilos, chief marketing officer, FirePro, replied, “No! That which is man-built can be man-ipulated!”

Bill Jones, chairman of Global Village Ltd., observed, “Yes for a time – not with better systems, although quantum information processing provides a way forward.”

Andrew McStay, professor of digital life at Bangor University, Wales, wrote, “Judging by other digital media content, there are no 100% fail-safes.”

Marcel Bullinga, futurist with Futurecheck, based in the Netherlands, said, “Trusted verification systems are a prime necessity, and yes, they will be created. I envision an AI-backed traffic-light system that shows me in real time: Is this information/this person/this party reliable, yes or no? Is their AI transparent and open, yes or no? Is their way of being financed transparent, yes or no?”

Vian Bakir, professor in political communication and journalism, Bangor University, Wales, commented, “There are a large number of ways to enhance verification – just look at the range of solutions proposed by Facebook, Google and a plethora of mainstream and emergent media following the 2016 election furors in the US and UK.”

Jens Ambsdorf, CEO at The Lighthouse Foundation, based in Germany, replied, “I believe that it is much more efficient, and also more beneficial for society, to strengthen citizens in their capability to use the information available. All systems are hackable, but that should not prevent the use of third-party independent systems.”

Dan Ryan, professor of arts, technology, and the business of design at the University of Southern California, said, “I doubt that it’s possible in the absolute sense. I think we have mathematical proofs of that. But I do think strong improvement is possible. Norms plus technology that would allow strong attribution and source tracking, for example, go a long way toward improving on the status quo. I can imagine a blockchain-like mechanism for accumulating fact reports and interpretations that must be ‘owned’ by authors but would still permit testimony without retribution and other important features of a healthy information order.”

David J. Krieger, director of the Institute for Communication & Leadership, Lucerne, Switzerland, commented, “We should move away from privacy/anonymity by design toward trust by design. Trust is based on information, not on anonymity.”

Rich Ling, professor of media technology, School of Communication and Information, Nanyang Technological University, said, “Reliable news can be produced by insisting on transparency in the production process (direct quotes, verification, et cetera). There will be an increased importance associated with the branding of news outlets that use these methods and can claim to produce and distribute unbiased material. Further, the branding and transparency of news items that are re-posted in social media will need to be obvious.”

Julian Sefton-Green, professor of new media education at Deakin University, Australia, replied, “I don’t know the answer to this but I should imagine it would be possible – the problem is one of trust/belief not technological security.”

Michael Zimmer, associate professor and privacy and information ethics scholar, University of Wisconsin-Milwaukee commented, “Any attempt at a system to ‘verify’ knowledge will be subject to systemic biases. This has been the case since the first dictionary, the evolution of encyclopedias from a roomful of editors to a million contributors, debates over standardized curriculum, et cetera. Technology might make things appear to be reliable or unhackable, but that’s just a facade that obscures latent biases in how such systems might be built, supported and verified.”

Rajnesh Singh, Asia-Pacific director for an internet policy and standards organization, observed, “Blockchain and its derivatives offer some hope. The challenge may be mass adoption.”

Marina Gorbis, executive director of the Institute for the Future, said, “I don’t think it’s possible to create any foolproof systems. I think the focus on purely technological solutions is misguided. We don’t perceive information, true or false, in a vacuum. We filter information to fit our worldview, so solutions to increasing amounts of misinformation have to involve not just technology but also the social level, i.e., helping people construct frameworks and tools so they can make good judgments about what they are seeing.”

Patrick Lambe, principal consultant, Straits Knowledge, noted, “No. All human-designed systems are capable of having their design subverted or overcome.”

Vivienne Waller, senior lecturer, Swinburne University of Technology, replied, “It is not possible to create reliable, trusted, unhackable verification systems for news. It never has been. Although it may be possible to create unhackable verification systems, it is the issue of trust that is crucial. Each perspective is a view from somewhere and the consumer of information needs to know, and think critically about, who has provided that view – or who has funded it. This is not just an issue with online information but has always been an issue. For example, in Australia misinformation about the causes of an energy blackout in South Australia was reported widely in the newspapers by those with a vested interest in fossil fuels (The cause was falsely attributed to wind energy). This example is a clear-cut case of false information, but there are shades of grey. How conflict is reported will depend on the perspective of the reporter. Similarly, communications from marketing departments of organisations will put a particular spin on the activities of their organisation. Unhackability is still an important goal, however, so that consumers can be certain of the author or source of the information that they are consuming.”

Daniel Kreiss, associate professor of communication, University of North Carolina-Chapel Hill, commented, “I doubt that a polarized public where partisanship is akin to religious identification will care about verified information. Members of the public would care about these systems only if the information they purveyed benefited their own partisan team or social identity groups.”

Steven Miller, vice provost for research, Singapore Management University, wrote, “This is a complicated question. Is one talking about news sources? Or is one talking about any information system that holds any type of data? In either case, there will always be some way in which the reliability, trust, hackability and verifiability of a system can be compromised, to some extent, by whatever combination of insiders or outsiders. Yet there are always ways to make extra efforts to check, and double-check, and where necessary more than double-check, that what one is reading, or what one is acquiring (data-wise), or what one is accessing is reliable, trusted and verified. This will all depend on specific contexts.”

Eric Burger, research professor of computer science and director of the Georgetown Center for Secure Communications in Washington, DC, replied, “We can stop obvious gaming, like bots submitting similar items. However, it will be nearly impossible to distinguish attacks that are carried out by humans.”

Barry Wellman, internet sociology and virtual communities expert and co-director of the NetLab Network, noted, “Nothing is unhackable.”

Tom Worthington, honorary lecturer in the Research School of Computer Science at Australian National University, commented, “It is technically possible to create verification systems, but these will only verify who the information is coming from, not that it is true.”

John McNutt, professor, School of Public Policy and Administration, University of Delaware, wrote, “There is no perfect system. On balance, different systems allow verification of each other.”

Greg Shatan, partner, Bortstein Legal Group, based in New York, replied, “Never say never, but it is supremely difficult to verify the truth or falsity of a statement. The proxy for this is validating the identity (or at least the legitimacy) of the entity providing the information. In the future, we can have one or more of the following: verification of the information, verification of identity or verification of personhood. The first is all but impossible, the second runs counter to the privacy-first mentality, and the third is of limited utility.”

Alexander Furnas, Ph.D. candidate, University of Michigan, replied, “No. Nothing is unhackable. Trust and reliability are socially mediated and constructed. I can’t think of any authority/organization/individual in a position to command or build trust across the social graph. We see this problem with fact-checkers already; sufficiently motivated or entrenched people think the fact checkers are themselves fake news.”

Tomslin Samme-Nlar, technical lead, Dimension Data Australia, commented, “The challenging part is the ‘trust’ part. This is because any system used to filter online content can also be used by some governments to suppress ‘real news.’”

Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Alas, not with the existing internet infrastructure. It simply wasn’t designed with security in mind.”

Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years’ experience at the BBC, Ofcom and as a digital consultant, wrote, “There is no way to create entirely unhackable verification systems. But the nascent tools we have to test, validate and iterate verification systems will continue to improve.”

Jim Rutt, research fellow and past chairman of the Santa Fe Institute and former CEO of Network Solutions, replied, “’Unhackable,’ no, but reliable enough to be useful, yes.”

Christian H. Huitema, past president of the Internet Architecture Board, commented, “We can certainly create systems in which articles can be accessed in a reliable way, as in ‘This really was written on April 1st by Jane Smith of the Example Tribune.’ But technology alone cannot decide whether the article is a fair description of what actually happened. We might get some kind of voting system, resulting in ‘this article got five stars and three rotten tomatoes.’ But that’s not ‘unhackable.’”

Amali De Silva-Mitchell, a futurist, replied, “Excellence in verification will be costly. Data correction, a transparent manner of facilitating it, review for defamation and slander prior to publication and other data risk-mitigation strategies are critical. E-waste will cause unintended issues as well. Data cleaning by human or automated processes is important to minimize risk, although I don’t believe there can be full elimination of risk. There must be public education regarding this feature of data, along with good laws that deal with events retroactively to correct errors, with good compensation mechanisms, which can reduce lazy data publications.”

Ayaovi Olevie Kouami, chief technology officer for the Free and Open Source Software Foundation for Africa, said, “I’m not sure that the chance for zero risk exists because perfection is not of this world.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “Unhackable? Yes, through serious encryption and people following information security protocols.”

Alexander Halavais, associate professor of social technologies, Arizona State University, said, “There is no such thing as ‘unhackable.’ We already have trusted verification systems: in banking, in health, and elsewhere. The more interesting question is whether we will develop more generalized measures of social trust for individuals and organizations.”

David Schultz, professor of political science, Hamline University, said, “No. If one can build a system one can hack it.”

Mark Lemley, professor of law, Stanford University, observed, “Nothing is unhackable, but we can certainly create trusted verification systems. We do it already in a variety of contexts, including banking and medical information. The question is what things we want to verify, how quickly, and at what cost.”

Bill Woodcock, executive director of the Packet Clearing House, wrote, “There’s no perfect solution to identifying people. The interface between keyboard and mind is very difficult to authenticate, and identities will always be fluid and subject to the whims of national governments. The best that can be done in this space is digital signature and nonrepudiation linking articles of public speech by the same identity. That doesn’t prevent individuals (or governments) from wielding many identities, which cannot be tied to each other by other parties.”
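
Woodcock’s ‘digital signature and nonrepudiation linking articles of public speech by the same identity’ can be sketched with one persistent key: repeated signatures link statements to a single pseudonymous identity without revealing who holds the key (a sketch assuming the Python ‘cryptography’ package, not a description of any deployed system):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()   # one persistent pseudonymous identity
public_key = identity_key.public_key()

posts = [b"First public statement.", b"Later statement by the same author."]
signatures = [identity_key.sign(p) for p in posts]

# Anyone holding the public key can confirm both posts came from the same
# identity; nothing here ties that key to a legal name or to other keys the
# same person may hold, which is exactly the limit Woodcock describes.
for post, sig in zip(posts, signatures):
    public_key.verify(sig, post)  # raises InvalidSignature if post or sig altered
```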

Jean Paul Nkurunziza, a consultant based in Africa, commented, “Massive adoption of IPv6 could help.”

Mark Patenaude, vice president for innovation, cloud and self-service technology, ePRINTit Cloud Technology, replied, “It needs to be controlled in the cloud infrastructure by a world body that is voted in annually (not every four years). Call it the United Nations of the internet, or use what is already available in Geneva at UNESCO.”

Greg Swanson, media consultant with Itzontarget, noted, “Yes. The Mozilla Foundation and the Reynolds J School are working on this and making great progress. But this is not sufficient. If a US administration is consciously spreading lies, being sure that it is indeed the administration that is lying does not address the right problem. What use is an unhackable system for identifying liars, if the lie is asserted to be true?”

Paul Kyzivat, retired software engineer and Internet standards contributor, noted, “An answer can be found in an ‘open source’ system, along the lines of Wikipedia. This doesn’t guarantee success, but it will provide transparency into the source of the content.”

Clark Quinn, consultant with Quinnovation, said, “The more distributed and transparent, the better. I think it’ll be a continual battle, however.”

Flynn Ross, associate professor of teacher education, University of Southern Maine, said, “Control of the systems also carries the threat of greater authoritarianism, so open information is needed.”

William Anderson, adjunct professor, School of Information, University of Texas-Austin, replied, “I do not know about creating unhackable systems. However, people can develop socio-technical practices to evaluate reliability and trustworthiness.”

Robin James, an associate professor of philosophy at a North American university, wrote, “This question assumes that the problem with ‘fake news’ is a technology problem and not a problem with the society that tech comes from and works in. My research shows that it’s not a tech problem but a social problem at root.”

Tom Birkland, professor of public policy, North Carolina State University, noted, “I’m not sure. This may be a technological problem. Perhaps a good way to start is by educating the public about the history of real journalism and its role in a democracy.”

Jennifer Hassum, a department leader at a nonprofit organization based in North America, commented, “No. The more hurdles you make, the more the system will shut off voices of dissent and critique.”

Alan D. Mutter, media consultant and faculty at the graduate school of journalism, University of California-Berkeley, replied, “In the fullness of time, artificial intelligence filters or some other sort of breakthrough technology might be able to help. But the black hats will be working as feverishly as the white hats to exploit the porous and unregulated information ecosystem for fun, profit and outright malice (see also Trump, Donald).”

Eduardo Villanueva-Mansilla, associate professor, department of communications, Pontificia Universidad Católica del Perú, said, “There are too many actors invested in the alternative, thriving in unverifiable and/or hackable systems. From state actors to individual hackers (black, grey or white hatted ones), the reality is that only a significant reengineering of the underlying systems, or a radical transformation of political norms, may change this situation. Don’t forget the decentralized nature of the Internet: actors at the periphery may be as critical as the most well known ones in central nation-states.”

Tiziano Bonini, lecturer in media studies at the department of social, political and cognitive sciences, University of Siena, noted, “I am not a tech expert. I do not know if there is a way, but I believe the best way is to massively increase the General Intellect of internet users. The best verification system is the global network of informed and skilled people.”

Jane Elizabeth, senior manager American Press Institute, said, “Nothing is unhackable. But generally reliable systems can be built, with enough will and money, relying on vast databases of verified content.”

Nate Cardozo, senior staff attorney, Electronic Frontier Foundation, observed, “Anyone who advertises an ‘unhackable’ system of any kind has no conception of information security.”

Federico Pistono, entrepreneur, angel investor and researcher with Hyperloop TT, commented, “No. But we can get closer to it than we are now.”

David Sarokin of Sarokin Consulting, author of “Missed Information,” said, “Of course there isn’t. If there was, it would exist!”

Paul Gardner-Stephen, senior lecturer, College of Science & Engineering, Flinders University, noted, “It is very difficult to create unhackable verification systems. One of the problems is that state-level actors are major players in this space. Tools like block-chains may allow for consensus-forming and similar web-of-trust schemes; however, they all stumble over the problems of relativity, subjectivity and perspective. We see this today: One man’s bullying is another’s ‘standing up for himself.’ This is a classic tragedy of the commons: The rules that enabled public communications to be productively shared are being undermined by those so desperate to hold onto power that they are willing to degrade the forward-value of the medium. Indeed, for some it is probably an active ploy to neuter public scrutiny in the future by destroying and discrediting the means by which it could occur.”
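
The block-chain tools Gardner-Stephen mentions reduce, at their core, to tamper-evident linking: each record commits to the hash of the record before it, so history cannot be silently rewritten. A minimal hash-chain sketch in Python (standard library only, illustrative rather than a real consensus system) shows both the strength and the limit he points to: the chain proves order and integrity, not truth or fairness.

```python
import hashlib
import json

def entry_hash(content, prev_hash):
    """Digest over a record plus the hash of the record before it."""
    blob = json.dumps({"content": content, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def add_entry(chain, content):
    """Append a record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"content": content, "prev": prev,
                  "hash": entry_hash(content, prev)})

def verify(chain):
    """Recompute every link; a silent edit breaks all later hashes."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(e["content"], prev):
            return False
        prev = e["hash"]
    return True

chain = []
add_entry(chain, "Claim A, as originally recorded")
add_entry(chain, "Claim B, as originally recorded")
print(verify(chain))                        # True: the record is intact
chain[0]["content"] = "Claim A, rewritten"  # tamper with history...
print(verify(chain))                        # ...False: detected
```

Note that a verifier can prove Claim A was altered after the fact, but nothing in the structure says whether Claim A was ever true, which is the relativity-and-perspective problem Gardner-Stephen raises.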

Jonathan Grudin, principal design researcher, Microsoft, said, “Verifying the accuracy of information claims is not always possible, but verifying the source of information seems likely to be tractable. The next step is to build and learn to use reliable sources of information about information sources. This can be done now most of the time, which isn’t to say unforeseen technical challenges won’t arise.”

Richard Rothenberg, professor and associate dean, School of Public Health, Georgia State University, noted, “I think the answer will be no, but close enough. What we need is an AI system that detects hacking and provides an early warning. It won’t be foolproof, but it will be good enough.”

Virginia Paque, lecturer and researcher of internet governance, DiploFoundation, wrote, “I think so, but I think it will involve in-depth searches that might have serious implications for privacy, as the necessary openness will break down privacy expectations. I wonder if information will become ‘ownerless’ and generic as a result of a search for a universal definition of truth?”

Tatiana Tosi, netnographer at Plugged Research, commented, “I believe AI bots are transforming the scenario, even though it is fragile and vulnerable to hacking activities. It will look like a plug-in or app that will be used as a filter in the daily online lifestream.”

Pamela Rutledge, director of the Media Psychology Research Center, noted, “The next war is for cyberspace. There will be continual battle between system security and cyber assaults.”

Richard Lachmann, professor of sociology, State University of New York-Albany, replied, “Yes, artificial intelligence can or soon will be able to do that.”

Diana Ascher, information scholar at the University of California-Los Angeles, observed, “I don’t think verification systems can be ‘unhackable,’ but the public, rightly, has begun to place less trust in uncorroborated news. I suspect many will advocate for the use of artificial intelligence to build trustworthy verification systems. However, always inherent in such systems are the biases and perspectives of their creators. The solution to biased information must come in the form of a recognition on the part of the information seeker that no information is pure fact. It is all interpreted and deployed in context. Systems that present information from a variety of perspectives will be most effective in providing the public with the opportunity to understand the many facets of an issue. And then, of course, most people will accept as true the information that confirms their existing beliefs. In addition, news consumers depend on heuristic information practices to find the information on which they base their decisions. Often, this comes in the form of opt-in communications from emerging thought leaders as trusted sources, as we’re seeing in the resurgence of email digests from individuals and think tanks (e.g., Stat, Nieman, countless others), as well as following trusted entities on social media (e.g., Twitter).”

Noah Grand, a sociology Ph.D., wrote, “No. History has shown us that even when new technologies are unambiguously superior to old ones, trust is slow to develop. People once feared cars. ‘How can it be a good idea to sit on an explosion?’ Now let’s think about what makes a good news story. We probably want some facts. If you don’t want facts, you probably aren’t going to read this. So let’s say Chicago has the highest murder rate in the United States, and a reporter publishes this fact. The fact alone makes for a disappointing news story. We want to know why there are so many murders. However, these kinds of explanations are more ambiguous. Different people will have different theories. The best way for opportunists to manipulate and misinform the public is to capitalize on these ambiguous explanations by sowing doubt and resentment. There is no technical solution to weed out fraudulent explanations for why things happen, and any technical solution would not be trusted.”

Meg Mott, professor of politics at Marlboro College, commented, “We should be most worried when the goal is creating a reliable, trusted, unhackable verification system. Instead we should be developing habits of deliberation and reflection so that we can learn to trust ourselves to make better decisions.”

Dariusz Jemielniak, professor of organization studies in the department of Management In Networked and Digital Societies (MiNDS), Kozminski University, observed, “We need systems that are reliable enough, such as Wikipedia. While unhackable systems are a pipe dream, it is enough to improve the quality of what we have to receive better results. It is possible to use communal control over news generation and propagation, and AI algorithms are already able to cut down on fake news (we only need Facebook and Google to start using them seriously).”

Alexis Rachel, user researcher and consultant, said, “I don’t have the answer to this, but I’m afraid not. And if there is, I fear that it will be sold for a profit – thus ‘the truth’ becomes a commodity, available only to those who can afford it.”

Jennifer Urban, professor of law and director of the Samuelson Law, Technology & Public Policy Clinic at the University of California, Berkeley, wrote, “Reasonably reliable and trusted, yes. Completely unhackable? We have not managed it yet, and it seems unlikely until we can invent a system that, for example, has no vulnerabilities to social engineering. While we should always work to improve reliability, trust, and security on the front end, we must always expect systems to fail, and plan for that failure.”

Judith Donath, fellow at Harvard’s Berkman Klein Center and founder of the Sociable Media Group at the MIT Media Lab, commented, “There’s no single answer – there is and will continue to be a technological arms race, but many of the factors are political and social. Basically, there are two fronts to fighting fake news. The first is identifying it. This can be a technical issue (figuring out the ever more subtle indicators of doctored video, counterfeit documents), a research problem (finding the reliable documentation that backs a story), et cetera. The second, harder one is making people care. Why have so many Americans embraced obvious lies and celebrated the liars? And what can we do to change this? Many feel a deep alienation from politics and power in general. If you don’t think your opinion and input matter, why should it matter how educated you are on the issues? Rethinking news and publishing in the age of the cellphone should not be just about getting the small-screen layout right, or convincing people to ‘like’ a story. It needs to also be about getting people to engage at a local level and understand how that connects with a bigger picture. An authoritarian leader with contempt for the press is, obviously, a great boon for fake news; an authoritarian leader who has the power to control the press and the internet is worse. Socially, the key element is demand for truth – the ‘for-profit-only’ writers of some of last fall’s fake news had little interest in whether their stories were for the right or the left – but found that pro-Trump/anti-Hillary did well and brought them profits, and that there just wasn’t the same appetite on the left. We need to address the demand for fake news – to motivate people across the political spectrum to want reality. This is not simply a matter of saying ‘read critically, it is better for you’ – that is the equivalent of countering a proselytizing Christian telling you to believe in the Gospels because Jesus walked on water by explaining the laws of physics. You may be factually right, but you won’t get anywhere. We need to have leaders who appeal to authoritarian followers AND also promote a fact- and science-based view of the world, a healthy press ecology, etc. That said, the technology – the internet, AI – has changed the dynamics of fake news. Many people now get their news as free-floating stories, effectively detached from their source publication. So one issue is how to make news that is read online have more identity with its source, with the reasons why people should believe it or not. And the answers can’t be easy fixes, because any cheap signal of source identity can be easily mimicked by look-alike sites. The real answer will come with finding ways to work WITH the culture of online reading and find native ways to establish reliability, rather than trying to make it behave like paper. A key area is the social use of news on a platform like Facebook. We’ve seen the negative side – people happy to post anything they agree with, using news as the equivalent of a bumper sticker, not a source of real information. News and social platforms – both the publishers and the networks – should create tools that help people discuss difficult issues. At the moment, it appears that what Facebook might be doing is separating people – if they disagree politically, showing them less of each other’s feeds. Instead, we need tools to help mediate engagement, tools that help people host discussions among their own friends in a less acrimonious way. Some discussions benefit from interfaces in which people upvote and downvote different responses; some interfaces present the best comments more prominently, etc. While not every social discussion on Facebook should have more structured interfaces and moderation tools, giving people the ability to add structure to certain discussions would be useful. I would like to see newspapers do a better job of using links to back up stories, provide background information and more detailed explanations. While there is some linking in articles today, it is often haphazard – links to Wikipedia articles about a mentioned country, etc., rather than useful background information or explanations or alternative views. The New York Times is doing a great job in adding interactive material – I’d like to see more that helps people see how different changes and rules and decisions affect them personally.”

J. Nathan Matias, a postdoctoral researcher at Princeton University, previously a visiting scholar at the MIT Center for Civic Media, wrote, “Society will never settle on a single, reliably flawless information verification system. We can expect an ongoing contest of technical and social innovation between those who benefit from misinformation and the many groups who advance their interests through public understanding. By working together to understand the ongoing risks and systematically testing our responses to misinformation, we can make measurable progress.”

Barry Chudakov, founder and principal, Sertain Research and StreamFuzion Corp., wrote, “The way to ensure information is trustworthy is to build trust-tools into the information itself. By transparently revealing as much metadata and tracking confirmation of the sources of the information as possible, readers and viewers can verify the information. This not only enhances the value of the information, it fosters confidence and reliability. Many useful organizations such as Check, an open web-based verification tool, FactCheck.org, PolitiFact, the International Fact-Checking Network at the Poynter Institute, Share the Facts, Full Fact, Live – all are tackling ways to institute and evolve trusted verification systems. Fifteen years ago in ‘Making the Page Think Like a Network,’ http://bit.ly/2vyxQ3l, I proposed adding an information balcony to all published information; in this balcony, or level above the information itself, would appear meta-commentary about the utility and accuracy of the information. With tracking tools and metadata – today mostly in the hands of marketers, but useful for the larger public good – we can verify messaging and sources more accurately than ever before, because information now carries – within its digital confines – more miscellaneous data than ever before. A Facebook post, a tweet, notes from a meeting, an audio recording, contemporaneous notes from an anonymous source – all can be combined to create trusted, verifiable content that reveals any hacking or alteration of the content. With meta-information positioned in a balcony above or around the information, readers and viewers will become accustomed to evaluating the reliability of the information they receive. We can no longer behave as though information can operate alone on trust-me. Just as RSA, the security division of EMC, provides security, risk and compliance management solutions for banks, media outlets will need to provide an added layer of information protection that is a visible component of the information presentation itself, whether online or in broadcast and print. Some countries are already doing this. As the Washington Post reported recently, ‘When politicians use false talking points on talk shows that air on RAI, the public broadcast service in Italy, they get fact-checked on air. It’s a recorded segment, and the hosts and politicians debate the data. Politicians frequently revise their talking points when confronted with the facts during interviews.’ Work is already underway to enact better verification. A committee of fact-checkers, under the auspices of the International Fact-Checking Network, developed a code of principles, to which The Washington Post Fact Checker was an inaugural signatory. Knowing that information is dynamic – that it can expand, deepen, change – is essential to creating reliable, trusted, unhackable verification systems.”
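
One concrete reading of Chudakov’s idea of combining sources into “verifiable content that reveals any hacking or alteration” is a published manifest of content hashes: each source item gets a digest, the digests travel with the story as balcony metadata, and any later edit to a source breaks the match. A minimal sketch follows (Python standard library only; the field names and source items are hypothetical):

```python
import hashlib
import json

# Source materials behind a story: a tweet, meeting notes, a recording.
sources = {
    "tweet": b"text of the original tweet",
    "meeting_notes": b"contemporaneous notes from the meeting",
    "audio": b"raw bytes of the audio recording",
}

# The 'balcony' metadata: one digest per source, plus a digest of the set.
manifest = {name: hashlib.sha256(data).hexdigest()
            for name, data in sources.items()}
manifest["combined"] = hashlib.sha256(
    json.dumps(manifest, sort_keys=True).encode()).hexdigest()

# A reader (or a newsroom tool) re-derives each digest from the material;
# an altered source no longer matches its published hash.
def check(name, data):
    return hashlib.sha256(data).hexdigest() == manifest[name]

print(check("tweet", sources["tweet"]))     # True: matches the manifest
print(check("tweet", b"a doctored tweet"))  # False: alteration revealed
```

For the manifest itself to be trustworthy it would need to be published somewhere the outlet cannot quietly revise, for instance signed by the outlet or anchored in a tamper-evident log of the kind sketched earlier.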

Philip Rhoades, retired IT consultant and biomedical researcher with Neural Archives Foundation, said, “I am not sure but if there is, I don’t think it will be enough.”

Sean Justice, assistant professor at Texas State University-San Marcos, said, “This is an incoherent question. ‘Reliable,’ ‘trusted,’ ‘unhackable’ and ‘verification’ are fluid and relational terms – the question is hinged on pre-internet definitions of these concepts even as it points to post-internet emergence. The terms are not commensurate with the expected answer.”

Janet Kornblum, a writer/journalist, investigator and media trainer, replied, “No. I’m a storyteller/writer/journalist, so every time someone tries to constrain a system, it results in the curtailment of free speech. For this type of system to work, we’d all have to agree on the purveyor of such a system. Would you trust the government? The New York Times? Fox News? I think not. I trust newspapers more than most because I believe mistakes are caught. But once you start saying this is The Truth, you’re in dangerous territory. I think the best we can do is hold those who report facts to a community standard that is completely transparent. When I, for instance, write stories I put in as many links to primary material as possible. Really, the issue won’t be resolved by technology. It is a people problem and we need a people solution.”

John Laprise, consultant with the Association of Internet Users, wrote, “No, complex systems always have flaws/vulnerabilities including people.”

Rob Lerman, a retired librarian, commented, “At present the likely counterbalances – the universities and legacy media, for example – have had their credibility severely and deliberately damaged. This makes it challenging for verification sources to be widely trusted.”

Cliff Cook, planning information manager for the City of Cambridge, Massachusetts, noted, “The technical track record is not good. A solution might require a fundamental change to the way the internet operates and that is far easier said than done, given the dispersed nature of control.”

Su Sonia Herring, an editor and translator, commented, “There is no way; as long as there is a human element, a system cannot be ‘unhackable.’”
