Elon University
The prediction, in brief:

People were very self-confident … They put forward a goal that was ambitious, and that I believe we may never achieve: to build agents that are very intelligent, have common-sense knowledge, and understand why people do things. AI researchers have been trying to do this for 15 or 20 years, and haven’t seen significant results.

Predictor: Maes, Pattie

Prediction, in context:

In a 1995 article for Wired magazine, Scott Berkun, a SI/Usability specialist at Microsoft, interviews artificial intelligence expert Pattie Maes, a leader in intelligent-agent research. Berkun quotes Maes saying: “I think people have taken the wrong approach, especially in the early days of artificial intelligence. People were very self-confident; they were convinced AI [Artificial Intelligence] would be the solution to many problems. They put forward a goal that was ambitious, and that I believe we may never achieve: to build agents that are very intelligent, have common-sense knowledge, and understand why people do things. AI researchers have been trying to do this for 15 or 20 years, and haven’t seen significant results. The idea of agents really isn’t new. There have been people working on agents all along – they just haven’t produced many results yet.

We have a less ambitious target. We don’t try to build agents that can do everything or are omniscient. We try to build agents that help with the more repetitive, predictable tasks and behaviors … The system learns about its user’s habits, interests, and behaviors with respect to that task. It can detect patterns and then offer to automate them on behalf of the user. Recently, we have augmented that task with collaboration – agents can share knowledge they have learned about their respective users. This is helpful to people who work in groups and share habits or interests. So those are the techniques we’ve been exploring: observing user behavior, detecting regularities, watching correlations among users, and exploiting them.

We think it’s important to keep the users in control, or at least always give them the impression they are in control. In all of the systems we build, the users decide whether to give the agent autonomous control over each activity. So it’s the users who decide whether the agent is allowed to act on the users’ behalf, and how confident the agent has to be before it is allowed to do so. Users can also instruct agents, giving them rules for special situations. You can tell the system whether the rule is soft or hard – soft being accepted as a default that can be overwritten by what the agent learns, hard meaning it cannot be overwritten by the agent.”

Biography:

Pattie Maes, a researcher at MIT’s Media Lab, was a founder and board member of Firefly Network, Inc. in Cambridge, Mass. – one of the first companies to commercialize personalization and profiling technology (Firefly was acquired by Microsoft in 1998). She was also a founder and a board member of Open Ratings, Inc., a provider of performance data on businesses for B2B e-commerce. (Research Scientist/Illuminator.)

Date of prediction: January 1, 1995

Topic of prediction: Community/Culture

Subtopic: Human-Machine Interaction

Name of publication: Wired

Title, headline, chapter name: Agent of Change: Pattie Maes Believes Software Agents Are Ready for Prime Time

Quote Type: Direct quote

Page number or URL of document at time of study:
http://www.wired.com/wired/archive/3.04/maes_pr.html

This data was logged into the Elon/Pew Predictions Database by: Anderson, Janna Quitney