Elon University

The 2006 Survey: Scenario Three – Autonomous technology is a danger (Anonymous Responses)

Responses in reaction to the following provocative future scenario were assembled from a select group of internet stakeholders in the 2006 Pew Internet & American Life/Elon University Predictions Survey. The survey allowed respondents to select from the choices “agree” or “disagree” or to leave the scenario unanswered. Respondents were encouraged to provide a written elaboration to explain their answers; they did not always do so, but those who did provided richly detailed predictive material. Some respondents chose to identify themselves with each answer; many did not. We share some – not all – of the responses here. Workplaces of respondents whose reactions are shared below are attributed here only for the purpose of indicating a level of internet expertise; the statements reflect personal viewpoints and do not represent their companies’, universities’, or government agencies’ policies or positions. Some answers have been edited to share more respondents’ replies. This is a selection of the many carefully considered responses to the following scenario.

By 2020, intelligent agents and distributed control will cut direct human input so completely out of some key activities such as surveillance, security, and tracking systems that technology beyond our control will generate dangers and dependencies that will not be recognized until it is impossible to reverse them. We will be on a “J” curve of continued acceleration of change.

Compiled reactions from the 742 respondents:
42% agreed
54% disagreed
4% did not respond

Below are select responses from survey participants who chose to remain anonymous. To read reactions from participants who agreed to be identified with their statements, please click here.

Complex systems always introduce unintentional consequences.

Autonomous technology is already a problem. From hospitals to the highways, from the classroom to the bank, it is already dangerously loose.

We’re only talking about 15 years away – Big Brother won’t be here for at least another 50 years!

Likely, but I don’t see the system as terribly efficient. This makes it even more dangerous.

There will be an apparent trend towards autonomous technology, but in fact that technology will still be very much controlled by large corporations, governments (for which read, the USA), and mega-bureaucracies. The problem will not be autonomous technology so much as faceless human overseers who are not held accountable. We have seen a small example of this in the recent US-government-related illegal-surveillance scandal.

This is possible; we are relying heavily on unmanned spacecraft and the like. I’m worried machines will be perceived as less error-prone than humans.

Technology is never outside of human control. It just depends on what you mean by control.

People seem ready to embrace technology and to use it to automate all sorts of tasks. It seems quite possible that important activities such as security-related activities will become more automated than they already are and less under the control of humans. As it is, when a name gets placed on a “watch list,” it seems very hard for people to prove that they do not belong there. Also, many banking activities are already out of the control of human beings to the point that it becomes very easy for someone to steal a person’s identity and use it to open lines of credit. This can destroy someone’s life, as it is very difficult to undo the damage that is done when someone else easily opens lines of credit in another person’s name.

The Terminator movies will not come to pass.

It could happen, and might happen, just not that soon.

These events are a possibility. However, the systems will have too many errors in them, such that breakdown will occur. Human beings create these systems, so monitoring them is the human problem.

While I believe in the high rate of change, truly autonomous agents won’t be intelligent enough to be a major risk.

The problem will be both over-control, or attempts at over-control, and the lack of control. Further attempts to control will lead to “normal” behaviour being more and more watched over, while the “bad” behaviour that everyone wants to monitor will be invisible or will move offline.

We’re smarter than that. This is scare tactics.

Human judgment will continue to play a major role as mistakes continue to be made in attempts to automate.

To date, humanity has been able to identify these problem trends and work to prevent major catastrophes caused by these dependencies. However, the increasing centralization of power in a few global companies could increase the chances of it occurring.

While I agree that we may not immediately recognize some of the dangers/dependencies, I disagree that we will not be able to reverse or better yet, counter them with other intelligent-agent capabilities.

I disagree only with the “impossible to reverse them” portion. The problems are already appearing now. I do not agree that they will be uncorrectable – partly because competing software/hardware suppliers can use such things as controllable/customizable systems as a point of differentiation to consumers, and partly because some large institutions – governments, corporations, developer groups, hacker groups, educational institutions – have varying levels of “control issues,” for lack of a better term. They will likely provide resistance, whether organized and legal, or otherwise.

This sounds a bit paranoid, as if the human element of computing will suddenly disappear and create a world like “Terminator 2” or “The Matrix,” in which machines develop artificial intelligence and then go to war against their human makers. Intelligent agents and distributed controls will help eliminate human errors in cases where human beings slow down the system, but they won’t eliminate humans or create dangers and dependencies.

The definition of the word “problem” is highly subjective. Is this questionnaire a survey about technology or philosophy?

Surveillance, security and tracking may become a danger; but I think, over time, it cuts both ways in that these technologies will be better able to uncover illegal and dangerous uses of the technology.

If this means beyond the control of most individuals, then I do agree. Policy-making, surveillance, information gathering, and such are already out of the control of most individuals. As many of these activities as possible will certainly be placed into automated hands if it saves someone money or increases their power.

I think we’ll have time to put on the brakes.

Agree in a general sense but it is a longer-term issue. 40 – 50 years.

Maybe we really are in “The Matrix.”

Made by humans, run by humans, deactivated by humans.

The fear of computers replacing people has been around at least since the 1950s, and the reality continues to be that we have more to fear from humans than from machines.

I agree that there will be changes that we do not recognize now.

The difficulty will be cultural rather than technical, since we’ll be increasingly dependent.

Autonomous control will be increasingly available, but controls and overrides will be available, especially inside corporations where IT experts will still exist. We’ll see more of the automation, however, in consumer applications.

This is already happening.

Intelligent agents and distributed control will expand in irreversible, insinuating ways, but I do not believe human input will be cut out of the loop. My somewhat darker view reflects the culture of central control and repression that seems to dominate power brokers right now. They have every intention of controlling these agents and systems at choke points they are currently building into the systems, invisible to most people, just as the way voting machines are being rigged is also invisible and untraceable. These people have seen what happens when the wild horse runs, and frankly, it terrifies them. We are in the middle of an enormous backlash. On the other hand, this could be a good time for some McLuhan-esque media reversals. Or is the backlash itself the media reversal? I doubt it is anything so innocuous.

Automation is a legitimate tool in security, but humans will be loath to cut human thinking out of monitoring human behavior, which is notoriously difficult to reduce to algorithms.

“Impossible to reverse” won’t happen.

It’s already a mess in some industries, such as the airlines; no reason to believe it will stop there.

Giant technology companies in all parts of the world will endeavor to cut more people out of the loop, to maximize their profits. Machines will begin to create more machines, driven by profit, and humanity will lose control of what is really being produced.

It will be worse for users, since the need to control automatic agents will increase discrepancies. Users’ time will be jeopardized by machines.

We rush to convenience, time and time again. Underestimating the power of convenience and time-saving is a sure-fire way to miss the next wave. This autonomous nature of transactions will be a problem, but we’ll rush into it anyway.

Technology will permit more control or power to be concentrated in the hands of a relative few, but I do not envision technology on its own spinning out of control.

Ever hear of a back door? Programmers can’t resist inserting them. Every system can be exploited. There are no secure systems. And every developer knows that systems must be developed with fail-safes.

Every attempt to automate generates a backlash. These attempts are usually subject to political developments. I don’t think any of these activities will go ignored, especially with all the internet communities that are on watch and ready to subvert such attempts.

This does not mean we will be safer. We will just be more videotaped, documented, and data mined.

We lose human control because we try to create machines that do the procedures for us.

To some degree, I think we’re already there. Just look at the recent NSA/telecom carrier situation where, apparently and allegedly, millions of voice and data communications were tapped and analyzed. The dangers aren’t always physical, but can be financial or emotional, as in the case of identity theft.

Sounds like a Philip K. Dick story…I’m not sure I’m behind this statement. I do think that technology will be enhanced so much by 2020 that human interactions will be even more limited than today, BUT I don’t think it will be as bad as described above.

Too much hype over something that shouldn’t matter, just like Y2K.

It could go this way, but I suspect that governments will be forced by their populations to legislate to prevent this.

Technology is making us increasingly vulnerable.

I am optimistic that the public will reclaim our right to privacy and to space.

Sounds like a good movie but not reality. Machines are and always will be just machines – only as valuable and dangerous as the people who run them.

While we are on a “J-curve” of accelerating change, the timeframe of 2020 is premature. 2050 is more likely.

I think enough people are worried about this type of thing happening that there will be plenty of watchdogs.

Technology is so interconnected that I believe it is humanly impossible to predict all the outcomes of the choices we are making. Civil liberties hang in the balance. I predict a backlash against technology.

I disagree because autonomous technology and the resulting “de-skilling” of humans is already considered a problem in some circles. Also, this sounds like a “Frankenstein” scenario – things would have to proceed perfectly according to plan for this to happen, and given the law of unintended consequences, they almost never do.

Autonomous technology will be somewhat of a problem, although nothing created or tracked electronically is impossible to undo. There will always be loopholes, and the developers of these systems will need to account for that, or the backlash will be incredible. Unless, of course, people just don’t know it’s happening.

There will be minor scares and some moral panics, but we’ll be all right. Won’t we?

Paranoia, pure and simple.

Human intelligence will still be the key driver of the internet, and user control – tags, Web 2.0, etc. – will be the norm.

Yes, as falls in the cost of processing power, monitoring devices, and connectivity will make this increasingly economically feasible.

Every major change in technology has come with the same predictions. The result is that additional opportunities have always opened up.

Too much regulation will completely alter the beauty of the internet, which is its ability to connect people from all around the world, provide information-sharing, promote free speech, all for a low cost. Too much control will alter the WWW for the worse not for the better.

Possibly true, but this looks suspiciously like one of those “overrun by robots” scares.

Always have to be mindful of what the effect might be.

A lot of danger lurks with intelligent agents, data mining, and how information can be used against someone. It is creepy to know that everything can be tracked electronically today and that faster computer processors will be able to compile and comb through mounds of data – and sometimes those smart agents will draw incorrect conclusions. That is the scary part.

What IS likely is that technology will continue to widen the gap between the haves and have-nots. Those with high education will continue to be able to use technology to advance. Those without will benefit from a higher standard of living, but will not be able to take advantage of the technology to its fullest extent. Tedious tasks will be automated, but ultimately, anything that requires a “judgment call” will be left in human hands.

I don’t think that we’ll be subject to intelligent machines. I do think that the ease of surveillance will continue to provide leaders and other resource-rich individuals with too much power and not enough oversight. I am not as worried about machines as I am people using machines.

Such trends will be recognized, but not by the larger public until it is very difficult to rein them in.

There is not a lot of discussion going on regarding the future of technology – for example, what about privacy on the internet? If you look at MySpace.com, you see how irresponsibly children are using this social software, not thinking about the possible future impact on their lives. Technology and personal life are becoming more and more interchangeable.

We are already hyper-dependent on technology we don’t understand. Can any of your friends fix a TV or a mobile phone?

Too many people are control freaks, and time and time again law enforcement and the intelligence community have learned that nothing can truly replace human intuition.

We’re on the lookout and will roll back things that get too dangerous.

Consequently, we need to proceed with caution in designing autonomous technology that compromises integrity for convenience.

The areas in which autonomous technology will be a problem by 2020 will be limited. Regrettably, the signs of society’s willingness to give up privacy through acceptance of the proliferation of security systems that pervade contemporary urban (and increasingly non-urban) areas portend this development.

Free societies still place limits on technology. Repressive ones can’t muster the innovation it takes to implement “big brother.”

While human input will not be involved in the direct actions, it is only the human programming that will direct the issues and opportunities for tracking and surveillance. It will be the human inefficiency and poor planning that will be problematic, rather than any notion of not being able to reverse things put into action.

General awareness of this crossing of the Rubicon will be limited. Top-down control of information will have become greater.

People are creative, for better or worse. Technology can react, but rarely anticipate, human invention.

I agree with the idea, but I disagree with calling it a “problem.” It will be a problem in the U.S., where information is typically misused for political or socially abusive activities.

Already today the most significant security threats to the network are AUTOMATED. There is no reason to suspect it will be any less of a problem in the future, and it will likely be much worse. It’s not much of a leap for an automated process once unleashed on the network to become uncontrollable by its creator, especially if there is malicious intent.

Your “key” activities represent only part of Internet usage.

By 2020, technology will be developing itself, possibly at a rate beyond human capability. The unbridled quest for an edge in technology that we see today will accelerate and result in systems that will lack proper safeguards. It will not be a Y2K or “Terminator” scenario – systems will still be able to be unplugged – but there will be data and security disasters that will dwarf the ChoicePoint and other scandals seen thus far. A major banking system will collapse, or perhaps a military system will fold under pressure. The subsequent public outcry will result in legislation intended to insert safeguards – but it will likely be crippled by political and Johnny-come-lately pressures. Given time, however, the problems will be ironed out, but only at great cost.

We are already there in some regards!

I think we’re farther away from dependable AI than 2020.

This is the nature of system effects.

The trend is based on fear and it’s rampant, at least in the U.S. People are willing to give up civil liberties in order to have the illusion of safety. This will accelerate the move.

Humanity will remain in control of its own technology. But the control might end up in the hands of a relatively closed oligarchy.

Just seeing how worms get out of control, and how some worms have been created by mistake, there’s no doubt that today’s “spider” could be tomorrow’s worm.

Although I am unsure of technology getting out of our control, I do believe that “intelligent agents” will take over many tasks that humans can do.

Science fiction scenarios are always true.

In an increasingly automated world, the human touch will become more valuable. Things that do not need the human touch will be out of sight and out of mind (not unlike the electric company).

Look at the Echelon project and other governmental initiatives to monitor U.S. citizens with little to no human interaction. How successful have they been and how do we know that what successes have been pointed out are not the only successes?

Human bureaucracy will be replaced by e-bureaucracy. Just give me a live person…help!

Tracking systems beyond any individual’s control are already somewhat in place … but we can count on the presence of operator error, an excitable, omnipresent media, and enough privacy-hungry humans to keep us from advancing too far down the J-curve.

It will get better, but I think there’s a loooong way to go before tech is truly autonomous enough to trust it for such key activities.

I believe this will be so. But I can’t refer to specifics other than an underlying sense of the direction our technology is going.

Most of the world will not be a party to this, but it could be a problem for technologically advanced nations, with large intelligence machines. Without adequate checks on these systems, we could set ourselves up for a Judgment Day scenario as played out in the “Terminator” movies.

I agree that autonomous technology will thrive, but not that it will become the problem described. The J curve will hit a turning point, or a newer overarching technology will emerge.

Technology is clearly getting more and more sophisticated and autonomous every day. It is dangerous, but I don’t believe we will ever see the threat of Neo’s “Matrix” or Sarah Connor’s “SkyNet” (but then again, I do believe that people are already inventing and deploying systems that would scare me if I knew about them).

We have a substantial track record of making choices without reckoning the consequences.

These are important advances, but they must be developed with caution taken into account. How does this not become “Big Brother”? How do we ensure the human element always remains a part of the mix?

If by this statement you mean that there will be one or more significant “accidents” due to use of autonomous technologies – yes, I agree.

This is already a problem today, where organizations rely more on machines or software to manage tasks than people.

I agree with the trend, but not that autonomous technology will be irreversible; if we’re smart enough to get it there, we’re smart enough to bring it under control.

Man will continue to advance technology AND control it.

Agree – to the extent that some instances of unanticipated and detrimental consequences will occur.

While change will continue and accelerate, humans seem to have the ability to keep themselves in the loop – either explicitly by preventing technology from becoming self-sustaining, or inadvertently by creating flawed technology.

What we do with the results will be the problem, not the automation itself. We will continue on a J-curve of accelerated change in this space though.

Actually, I think this is already happening.

Agree for the most part – dangers and dependencies will be generated, but not to the extent that they are beyond our control.

Please, we’ll be lucky if our cars have GPS-enabled agents that can find the cheapest gas price within n miles from its current location. That’s possible now – but it’ll probably take more than 15 years for a business model to make it work, get all aspects of society to “plug-in,” etc.

We will become a world of “mere subjects” rather than free citizens due to pervasive surveillance and monitoring. It’s unlikely that this trend can be reversed.

The human factor will remain vital no matter what, because human error will be part of the autonomous technology. Those of us who will not be part of that segment of society will learn how to maximize our protection from all that activity in order not to become victims.

Too many sceptics abound to allow this scenario.

I agree that intelligent agents will control most of certain key activities – disagree that the technology will be “out of control.” And we have been on the J curve for some time.

Advanced technology, as with anything else, brings new issues and makes some jobs obsolete. However, as with everything else, change also brings opportunity. This means that people will need to be willing to adapt and change to keep up with a faster pace. Learning will need to be life-long, and I believe universities will need to teach more about how to learn and adapt, rather than basic facts and skills – which are apt to quickly become obsolete.

It will be possible, though not necessarily easy, to reverse.

I expect technology will be developed that could afford the problematic possibilities described above, although I think that human input will continue to influence the choice to exploit and implement those possibilities.

As more and more IT knowledge is commoditized and taught at ever-lower levels of education (elementary school perhaps) more people will do more things, including the creation of bots or automated processes. Among those will be people of malicious intent.

Beyond our control? Are you being serious!?

No, it will not be impossible to reverse dangers and dependencies. Dangers and dependencies break as we respond and adapt. Any “irreversible dependency” is by definition life-sustaining and will not likely be regarded as a problem. People will still carve out spaces for production, expression, and dissent. While there are certain technologies that put us at risk, there is little experience in history to suggest that our worst fears (or our most fantastic dreams) will all come to fruition. The idea that we can even produce an IT project that is “irreversibly self-sustaining” is a science fantasy concept that no one has ever been able to successfully pull off. No prognosis for this anytime in the near future (certainly not by 2020).

It has already happened. I don’t know about the J-curve stuff, but I agree with the rest.

Clearly, we will see some “beyond our control” technological failures; however they will not be the norm. We will hopefully learn from early failures and ultimately integrate human and technological solutions into a working solution in most cases.

Autonomous technology will be a problem, but not by 2020 – that issue will come about a decade later.

This is a difficult question because of the extremes it projects. I do think some aspects will get out of control – definitely leading to dependencies, but not necessarily to dangers. Automated monitoring, database interoperability, etc., make us visible in ways we have never been before. If you see a danger in that, then you see danger coming.

To a large extent, this already is the case. It’s increasingly harder to get around or undo computer-based decision-making.

I am not a Luddite, so I cannot accept this. Yes, human-machine paradigms will change, but the human brain will adapt and supersede in many ways.

J-curves of technological adoption are rarely followed, and 2020 is too soon to have fully autonomous systems.

I agree to the extent that most people will not be able to influence technological development. A small group of elites, however, will continue to have some influence.

Today, much of our research and many of our applications care only about the outcome and neglect issues such as privacy and human acceptability. This could cause serious problems later.

A system of checks and balances needs to be implemented as technology becomes more widely used.

There will always be a human behind the technology implementation and maintenance.

It won’t be a J curve. It will be sporadic, increasingly erratic ups and downs.

Agree with the principles, but disagree that it will be “impossible” to reverse such changes.

And if the kinds of hooligans now occupying the White House continue to be in power, it will happen much sooner than 2020.

We’ll recognize this happening and take steps to counter it.

We have been on a J-curve since humanity started thousands of years ago. I don’t buy into agents; they were hyped a while ago and went nowhere. It’s important to have human input and control.

There will be dangers and dependencies that will not be anticipated; however, there is little to no historical evidence that matters are ever completely outside the control of humans, and none to suggest that the current nature of technological innovation will be substantively different than earlier periods of innovation.

Not gonna happen as soon as 2020.

The human potential for subversion will make sure that this trend is undermined!

I hope journalists will keep this from happening (by working to illuminate the issues in a timely, accurate fashion).

I disagree with the statement with respect to the complete cutting out of human input. This seems to me overly optimistic, and it also does not take into account that social and political movements might create counter-movements against a total-surveillance scenario.

Wherever there is intrusive technology, there is also the will to bypass it.

Some corporations or individuals may engineer technology to cut direct user input out; however, the norm will be that technology will augment human intelligence and feedback rather than exclude it.

The human aspect of technology will always be important. We might take some shortcuts, but ultimately, we have the control.

Facial-recognition technology will be used for such purposes. However, interpersonally, pseudonymity will emerge in the mainstream, thus allowing for layers of anonymity and recognition – identity control.

This may occur locally, but I also believe that developments will go different ways. Intelligent agents will not be used everywhere; there will be anonymous servers; U.S. regulation will not affect all the globe.

This is already a problem, and as “labor” becomes even more expensive in a relative sense, the problem will be exacerbated. However, I do not believe that it will get as bad as the explanation above seems to portray; there will be counterbalancing factors of a legal nature once this problem is “big enough” to warrant Congress’ attention.

This is partially true but precautions must be taken earlier.

This seems almost unavoidable.

The “Brave New World” syndrome has been mooted before but doesn’t seem to occur as predicted.

There is a pendulum swinging here – and, as likely as it is that some of these activities will swing beyond human control, it is as likely that humans will manage the change after a sufficient segment of the human population recognizes the risks.

The big problem will be adjusting the wrong choices made now. In 2020, the “solution” will point in a different direction.

These systems will be significantly improved, but they will not generate irreversible dangers and dependencies.

We are on a dangerous course regarding surveillance and tracking. The political climate will influence how this evolves in the future.

We have seen these predictions before – there is always a need for human involvement.

This possibility really concerns me.

Science fiction.

Sure, it could happen. It may be happening already now.

This scenario is certainly possible, and elements of it will probably happen, but the level of concern about these problems now is such that I think sufficient safeguards will be implemented, although never completely or perfectly.

Too many different thoughts here… agree with some of them.

We are already on that path.

Humans are always smarter than computers.

Using the word “impossible” biases this question, since nothing is impossible. As phrased, this question should generate a resounding non-confirmation of autonomous technology.

Anything man-made can be reengineered, reversed, and given another useful meaning by human intelligence.

Autonomous technology will be a problem, but not an unmanageable one. We will adjust and the technology will adjust. Intelligent agents are overhyped anyway.

This is a typical catastrophic scenario from bad journalism, which historically never materialises.

There is a clear danger of this happening, but the fact that it is already being identified as a concern, while there is still time to do something about it, suggests that it is far from being a certainty.

There’ll be problems, but this doomsday scenario of irreversibility is too hysterical.

Only those with enough money can afford to not be online. To not be bothered with Internet and mobile devices is the ultimate status symbol.

Their software never seems to work as well as they say, with clever programmers and hackers always finding weaknesses; therefore, human programmers will have to continue to be involved in the updating and evolution of the software needed.

I do have a general fear of relying too much on computers and then being subject to technical errors (as opposed to human error) and vulnerability to technical complications.

Most kinds of surveillance, particularly ones involving, for example, face recognition, are too difficult for technological solutions, and while I do think this prediction may come to pass, I do not think it will happen by 2020.

Irrespective of the policy issues, I am not so optimistic that technology will advance to support this scenario.

For every new technology, there will be technology that will reverse its effect. Take the example of current security devices for which people manage to create a decoder. This vicious circle makes it unlikely that direct human input will ever be cut out of any scenario.

No, we’ve already seen some of these extremes, so future systems will be set up to be governed by human control. Things will change and we will be assisted by automated controls, but not too late to reverse them.

We already have technologies that are (temporarily) out of our control – e.g., automated trading systems that have brought about precipitous stock-market readjustments until their behavior was reined in, and medical systems that give lethal dosages until they are found out. Similar scenarios will undoubtedly continue, and countervailing actions will be necessary. But the situation will be no more dire than it is now.

This is happening already. Surveillance and security mechanisms are increasingly automated, and they can cause “false alarms” or guide human interaction incorrectly.

With new technology come new fences. Those new fences will be overlooked by some, as backups today are overlooked until you lose a year of work. But I see no reason why such isolated mistakes would put the rest of the community in danger; the fact that an agent is autonomous doesn’t make it a better invader than when its actions are controlled.

The nature of technology will show that failures will occur and that the human element will always have a place in the system.

This fear of losing control over the machines will prevent developers from relinquishing that much power to “intelligent agents.” At least by 2020, humans will still have control over these elements.

I agree with the first part about progress and availability of autonomous systems. However, such systems will still be managed by people all over the world.

To the extent we are considering a threat rather than the likely state of affairs in 2020. Modern history is dominated by ill applications of all sorts of technologies.

That scenario could happen only if power (legislation, executive) is taken by non-democratic groups.

Freedom wanes as surveillance increases, eliminating the only freedom, anonymity.

Putting systems online with “dangers and dependencies that will not be recognized until it is impossible to reverse them” does not seem prudent, unless legal precedents are created so that somebody can get away with such a system. Having said this, the increased use of surveillance, security and tracking systems may lead to an alienation of the general population from technology.

There will always be problems with too much delegation or too much distraction. But it will be human error, not computer error. Technology will always give humans more and more control over our world.

Pure fantasy.

I would quarrel with your characterization of “intelligent” agents here. Stupid agents are more likely; but governments and others will cheerfully assign them responsibilities in areas like surveillance and security.

We are pathologically drawn to convenience it would seem. We will mortgage important principles of privacy, security, finance, and morality for convenience – in many instances without even knowing it.

This is a very real danger, and given that we are doing a lot of things so poorly in cyberspace, we are going to be in a world of hurt. My sister just spent considerable time trying to get her real birthday back – the IRS said it was June 31! Computer matching between government agencies is a total disaster – my sister’s real birthday is June 3, but Social Security and the IRS worked together to get it wrong. I worked in the federal government for years – I fear for my grandson.

I take this to be a two-part statement: One, that technology will have been deployed and is being assumed to be working properly; and two, that it’s not working properly. I believe both will be true.