Intelligent Agents and Their Applications (information science)

Introduction

An agent, in the traditional use of the word, is a person who acts on behalf of another person or group of persons. In information technology, the term agent broadly describes software that carries out a specific range of tasks on behalf of either a human user or other pieces of software. Such a concept is not new in computing. Similar things have been said about subroutines, reusable objects, components, and Web services. So what makes agents more than just another computer technology buzzword and research fashion?

Background

The idea of intelligent agents in computing goes back several decades. Foner (1993, p. 1) dates the first research on software agents to the late 1950s and early 1960s. However, with the breakthrough of the Internet, intelligent agents have become more intensively researched since the early 1990s. In spite of this long heritage, the uptake of these ideas in practice has been patchy, although the perceived situation may be partly clouded by commercial secrecy considerations. Even today, the many different notions of the term software agent suggest that the computing profession has not yet reached a generally accepted understanding of exactly what an agent is.

Definitions and Classifications

According to Jennings, Sycara, and Wooldridge (1998, p. 8), “An agent is a computer system, situated in some environment that is capable of flexible autonomous action in order to meet its design objectives.” Thus, the determining characteristics of a software agent are:


• Reactivity: An agent has profound knowledge of its environment and has the ability to interact directly with it. It can receive input from the outside and can perform reactions with external effects.

• Autonomy: An agent is in charge of its own internal status and actions. It can perform independently without the explicit interference of any user or other agents.

• Proactivity: An agent has the ability to interpret even minor changes in its environment and can take the initiative to act upon them. It can communicate and interact with entities and can delegate tasks to other agents.

• Intelligence: An agent’s degree of intelligence is determined by its capability to apply methods of AI in order to optimize its action (Meier, 2006, pp. 20-320).
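
The four characteristics can be made concrete with a toy sketch. The class, method names, and thermostat scenario below are our own illustration, not drawn from the cited literature:

```python
class ThermostatAgent:
    """Toy agent illustrating reactivity, proactivity, and autonomy.

    All names here are illustrative and do not come from the cited
    literature.
    """

    def __init__(self, target=21.0):
        self.target = target   # internal goal the agent maintains autonomously
        self.log = []

    def perceive(self, temperature):
        """Reactivity: receive input from the environment."""
        return temperature - self.target

    def act(self, temperature):
        """Proactivity: take the initiative even on minor deviations."""
        error = self.perceive(temperature)
        if error > 0.5:
            action = "cool"
        elif error < -0.5:
            action = "heat"
        else:
            action = "idle"
        self.log.append(action)   # autonomy: the agent manages its own state
        return action


agent = ThermostatAgent(target=21.0)
print(agent.act(25.0))  # cool
print(agent.act(18.0))  # heat
```

A fully "intelligent" agent would replace the fixed thresholds with learned or reasoned behavior; the skeleton of perceive-decide-act stays the same.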

The research literature discusses many different types of agents, carrying out all sorts of functions with what can be termed primary and secondary characteristics. Primary characteristics include autonomy, cooperation, and learning, while secondary characteristics include aspects like multi-functionality, goodwill, or trustworthiness.

A typology of software agents was proposed by Nwana (1996, pp. 7-38):

• Collaborative agents feature a high degree of cooperation and autonomy. They are determined by the idea of distributed artificial intelligence and by the concept of task sharing, cooperation, and negotiation between agents.

• Interface agents focus on the characteristics of learning and autonomy. By collaborating with the user and by sharing knowledge with other agents, they learn a user’s behavior and are trained to take the initiative to act appropriately.

• Mobile agents are not static but have the ability to travel. This entails non-functional benefits such as freeing local resources, showing more flexibility, and enabling an asynchronous work scenario.

• Information or Internet agents emphasize managing enormous amounts of information. Their main task is to know where to search for information, how to retrieve it, and how to aggregate it.

• Reactive agents show a stimulus-response manner as opposed to acting deliberatively. Since they are based in the physical world and only react to present changes, their behavior is not predetermined.

• Hybrid agents comprise more than one agent philosophy and benefit from the combination of different architectures.

Wooldridge and Jennings (1995, pp. 24-30) offer a two-way classification, based on contrasting approaches to building agents. They distinguish the following representative architectures:

• Deliberative agent architecture: This classical agent architecture consists of one definite, symbolic world model, with all decisions being made on the basis of logical reasoning. Challenges of this approach are the translation of the real world into an accurate model and the establishment of efficient reasoning.

• Reactive agent architecture: In contrast to the deliberative agent architecture, this alternative approach lacks an explicit, symbolic model of the world as well as extensive reasoning.

Wooldridge and Jennings (1995) also allow for hybrid agent architectures that are built as a hierarchy of deliberative and reactive agent architecture layers.
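
The contrast between the two architectures can be sketched as follows. The rule set, world model, and use of graph search as a stand-in for symbolic reasoning are our own illustrative assumptions:

```python
from collections import deque


def reactive_agent(percept):
    """Reactive: fixed stimulus-response rules, no world model."""
    rules = {"obstacle": "turn", "clear": "forward"}
    return rules.get(percept, "wait")


def deliberative_agent(world_model, goal):
    """Deliberative: reason over a symbolic model to choose actions.

    Here breadth-first search over a graph of locations stands in
    for logical inference; it returns a plan (path) to the goal.
    """
    start = world_model["position"]
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        node, path = frontier.popleft()
        if node == goal:
            return path
        for nxt in world_model["map"].get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no plan reaches the goal


model = {"position": "A", "map": {"A": ["B"], "B": ["C"], "C": []}}
print(reactive_agent("obstacle"))      # turn
print(deliberative_agent(model, "C"))  # ['B', 'C']
```

A hybrid architecture in the sense of Wooldridge and Jennings (1995) would layer the two: reactive rules handle urgent percepts while the deliberative layer plans ahead.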

Discussion

Four aspects are of particular interest when trying to understand how agents work and could be successfully employed in applications and environments: agent knowledge, agent applications, agent standards, and multi-agent systems.

Agent Knowledge

To operate autonomously, any software agent must build up a collection of knowledge, typically data and rules that enable it to serve the principal it is acting for. According to Maes (1994, pp. 2f), an agent’s knowledge base should be built up gradually by learning from users and other agents. The key issues are competence and trust. To be competent, the agent must have a knowledge base that is comprehensive and flexible enough to adapt to the user’s profile. For an agent to be trusted, a human user must feel comfortable when accepting help from the agent or when delegating tasks to it. Generally, an agent can only learn from its user and other agents if their actions show an iterative pattern. Maes (1994) suggests four different ways of training an agent to build up competence: observation and imitation of the user’s habits, user feedback, training by example, and training by other agents.
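
The gradual build-up of competence and trust can be sketched with a simple confidence mechanism. The threshold mechanics, class name, and increment values below are our own illustration, only loosely in the spirit of Maes (1994):

```python
class LearningAssistant:
    """Sketch of an interface agent that learns from its user.

    Confidence grows as the user's actions show an iterative
    pattern; above a threshold the agent acts autonomously,
    below it the agent merely suggests (trust-building).
    """

    def __init__(self, do_it_threshold=0.8):
        self.confidence = {}              # (situation, action) -> confidence
        self.do_it_threshold = do_it_threshold

    def observe(self, situation, user_action):
        """Learn by observing and imitating the user's habits."""
        key = (situation, user_action)
        self.confidence[key] = min(1.0, self.confidence.get(key, 0.0) + 0.25)

    def suggest(self, situation):
        """Offer, or autonomously perform, the most confident action."""
        candidates = {a: c for (s, a), c in self.confidence.items()
                      if s == situation}
        if not candidates:
            return None, "no suggestion"
        action = max(candidates, key=candidates.get)
        mode = ("do it" if candidates[action] >= self.do_it_threshold
                else "ask user")
        return action, mode


assistant = LearningAssistant()
for _ in range(4):                        # repeated, iterative user behavior
    assistant.observe("mail from boss", "move to urgent folder")
print(assistant.suggest("mail from boss"))  # ('move to urgent folder', 'do it')
```

User feedback, training by example, and training by other agents would all feed the same confidence store through additional update paths.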

However, Nwana and Ndumu (1999, p. 10) have criticized Maes’ approach, claiming that an agent would not only need to know all peculiarities of the deployed operating system, but also must understand all tasks its user is engaged in. Furthermore, the agent would need to be capable of gathering the user’s intent at any time, thus continuously modeling its user. Nwana and Ndumu (1999) identify four main competences for an agent: domain knowledge about the application, a model of its user, strategies for assistance, and a catalog of typical problems that users face in the environment.

Agent Applications

Software agents can be employed in many fields of information technology. One role for agents is to act as an assistant or helper to an individual user who is working with a complex computer system or physical equipment. Examples are:

• Information agents (Davies, Weeks, & Revett, 1996, pp. 105-108) that help a human researcher in finding the most relevant material — for example by additionally taking browsing information into consideration (Sharon, Lieberman, & Selker, 2002).

• Decision support agents that help a user assess alternative courses of action; functions include filtering and summarization of data, optimizing algorithms, heuristics, and so forth.

• E-mail agents (Maes, 1994, p. 5f), which filter spam, allocate incoming mail to folders, and work out addresses to which outgoing mail should be sent.

• Buying and selling agents, which assist a user in finding good deals in Internet marketplaces, or bidding agents (Morris, Ree, & Maes, 2000), which assist participants in auctions (He, Jennings, & Prugel-Bennett, 2006). These agents have characteristics of information agents as well as of decision support agents.
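
A minimal e-mail agent of the kind listed above might route incoming mail with simple rules before any learning is added. The rules, folder names, and addresses below are invented for illustration:

```python
def route_mail(message):
    """Allocate an incoming message to a folder using fixed rules.

    message: dict with 'sender', 'subject', and 'body' strings.
    The rule set here is a toy; a deployed agent would learn and
    refine such rules from user behavior.
    """
    subject = message["subject"].lower()
    sender = message["sender"].lower()
    if "unsubscribe" in message["body"].lower():
        return "spam"
    if sender.endswith("@example-client.com"):   # hypothetical client domain
        return "clients"
    if "invoice" in subject:
        return "accounting"
    return "inbox"


mail = {"sender": "billing@example-client.com",
        "subject": "Invoice 42",
        "body": "Please find attached..."}
print(route_mail(mail))  # clients
```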

A second group of applications is where the agent acts as a coordinator of activities, or “virtual manager.” Any workflow management system could qualify for this category. Other examples include meeting scheduling agents (Kozierok & Maes, 1993, p. 5), and dynamic scheduling agents that are able to reallocate resources to meet the goals of a business process (Lander, Corkill, & Rubinstein, 1999, p. 1ff). Delegation agents are another example in this category, although they could also be regarded as individual support.

A third group of applications is where the agent continually monitors data and rules in an organization, and on that organization’s behalf alerts or sends messages to human recipients. Examples are advertising agents, notification agents, recommendation agents, and selling agents. Such agents are at work when you receive an e-mail from an Internet retailer about a topic that might interest you.

Other agents act as a third party between two humans or pieces of software that need to cooperate. Examples include brokering agents, negotiation agents, mediation agents, and ontology agents (Helal, Wang, & Jagatheesan, 2001; Pivk & Gams, 2000). An area of application is an electronic marketplace. For example, He, Jennings, and Leung (2003) discussed agent-mediated e-commerce, and Loutchko and Teuteberg (2005) suggested an agent-based electronic marketplace.

One specialized task on which many humans, computer systems, and even other agents depend, and which is especially useful in an era of information overload, is categorization. A categorization agent (Segal & Kephart, 2000, pp. 2f) has the task of applying, and where necessary building up, a classification structure for incoming data. This structure may be particular to an individual (e.g., for e-mail filtering and filing) or it may be for an organizational unit.
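
A categorization agent can be sketched with keyword matching. The categories and keyword sets below are illustrative and not taken from Segal and Kephart (2000):

```python
# Toy classification structure; a real agent would build and refine
# this from incoming data rather than hard-code it.
CATEGORIES = {
    "finance": {"invoice", "payment", "budget"},
    "travel": {"flight", "hotel", "itinerary"},
}


def categorize(text):
    """Return every category whose keywords occur in the text.

    Returning a list reflects that an artifact may be classified
    in multiple ways for later retrieval.
    """
    words = set(text.lower().split())
    return sorted(c for c, keywords in CATEGORIES.items() if words & keywords)


print(categorize("Your flight itinerary and hotel invoice"))
# ['finance', 'travel']
```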

Agent Standards

Intelligent agents are intended to function in heterogeneous system environments. To interact smoothly and efficiently in such environments, standardization is essential.

Although agent technology is relatively immature and many researchers still have their own definition of agents, professional bodies have been developing standards for agents since the late 1990s.

These organizations include (Dickinson, 1997):

• ARPA Knowledge Sharing Effort (KSE)

• Agent Society

• OMG Mobile Agent System Interoperability Facility (MASIF)

• The Foundation for Intelligent Physical Agents (FIPA)

The FIPA and MASIF standards are regarded as of special importance for intelligent agents. While the FIPA standard has its origins in the intelligent agent community and has been influenced by the KQML (Knowledge Query and Manipulation Language), MASIF deals primarily with agent mobility.

The FIPA 2000 standard (www.fipa.org) specification deals with mobility and tries to integrate MASIF (Milojicic et al., 1998, pp. 50-67). Therefore this specification bridges the gap between the intelligent and mobile agent communities. FIPA’s specification is divided into five main categories: applications, abstract architecture, agent communication, agent management, and agent message transport.

Multi-Agent Systems

Much of the recent literature on agents envisages a system with a community of agents that cooperate in some way to achieve an overall set of goals. According to Jennings et al. (1998, pp. 9, 17f), “Multi-agent systems are ideally suited to representing problems that have multiple problem-solving methods, multiple perspectives and/or multiple problem solving entities.” Since each agent has a restricted view of any problem and only limited information, multi-agent systems provide a flexible infrastructure for solving problems beyond individual capabilities. Thus, the system can benefit from every agent’s expert knowledge. Other characteristics of multi-agent systems are decentralized data, asynchronous computation, and the lack of a central control system.

A major challenge of multi-agent systems is clearly the means of coordination between agents. Nwana, Lee, and Jennings (1997, pp. 33-55) have identified the following key components of such coordination: foreseeable structures of agent interaction, defined agent behavior and social structures, flexibility and dynamics, and the knowledge and reasoning to utilize the above.

Possible coordination techniques are:

• Organizational structuring: The agents’ roles, responsibilities, and their communication chains and paths of authority are defined beforehand.

• Contracting: All tasks and resources distributed among agents are controlled by a contract net protocol.

• Multi-agent planning approach: Agents decide on a detailed, interleaved plan of all activities and goals. Multi-agent planning can be centralized, with one agent reviewing all individual plans and combining them into a single multi-agent plan, or it can be distributed among the agents.
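
The contracting technique can be sketched as a minimal contract net round: a manager announces a task, contractor agents bid, and the contract is awarded to the cheapest bid. Agent names and the cost model below are invented for illustration:

```python
def contract_net(task, contractors):
    """One announce-bid-award round of a contract-net style protocol.

    contractors: mapping of agent name -> bid function (task -> cost).
    Returns the winning agent and its bid. This is a sketch; the
    full contract net protocol also covers refusals, result
    reporting, and renegotiation.
    """
    bids = {name: bid(task) for name, bid in contractors.items()}
    winner = min(bids, key=bids.get)   # award to the lowest bidder
    return winner, bids[winner]


contractors = {
    "agent_a": lambda task: len(task) * 2,   # busy agent, bids high
    "agent_b": lambda task: len(task),       # idle agent, bids low
}
print(contract_net("index documents", contractors))  # ('agent_b', 15)
```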

When agents are interacting in a multi-agent system, they may have to negotiate in order to fulfill their interests. Nwana et al. (1997) suggest two different negotiation theories:

• In the Game Theory-based negotiation approach, each agent holds a utility matrix that lists how much a certain interaction or goal is worth. During the negotiation process, which is defined by a negotiation protocol, the parties exchange bids and counteroffers following their strategies.

• In the Plan-based Negotiation Theory, each agent schedules its actions individually before all plans are coordinated. This is similar to the multi-agent planning coordination approach, but any agent can play the role of central coordinator.
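
A game-theory-style negotiation can be sketched with utility tables and a fixed offer protocol. The acceptance rule, utility values, and outcome names below are our own illustrative assumptions, not a specification from Nwana et al. (1997):

```python
def negotiate(buyer_utility, seller_utility, offers, min_utility=0.5):
    """Walk through a sequence of offers and counteroffers.

    Each agent holds a utility table over possible outcomes; the
    first offer worth at least min_utility to both sides is
    accepted. Returns None if the negotiation fails.
    """
    for offer in offers:
        if (buyer_utility[offer] >= min_utility
                and seller_utility[offer] >= min_utility):
            return offer
    return None


buyer = {"high_price": 0.2, "mid_price": 0.6, "low_price": 0.9}
seller = {"high_price": 0.9, "mid_price": 0.7, "low_price": 0.2}
print(negotiate(buyer, seller, ["high_price", "mid_price", "low_price"]))
# mid_price
```

In the plan-based variant, the exchanged objects would be partial plans rather than price offers, but the accept-or-counter loop has the same shape.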

For recent progress on negotiating agents, the reader is referred to Luo, Jennings, and Shadbolt (2006) or Fatima, Wooldridge, and Jennings (2006).

Future Trends

We believe that support for human users carrying out highly heterogeneous workloads represents a promising area for the development of agent applications (see, e.g., Padgham & Winikoff, 2004, for a practical guide to developing intelligent agent systems). Current support for users who work with a mixture of word processing, e-mail, spreadsheets, databases, digital libraries, and Web search tools is very primitive. The user has to do most of the work in correlating the different sources, and current tools are poor at learning the user’s commonly repeated work patterns. The authors of this article feel that agents are the most promising technology to redress this shortcoming, and we have worked on an architecture for linking agents with tools such as groupware and workflow systems.

Conclusion

In spite of a considerable amount of research, the killer application for intelligent agents is still somewhat elusive. The IT industry still has not reached a consensus about the use of agents now and in the future. Nwana and Ndumu (1999, p. 14) are even more critical, claiming that “not much discernible progress has been made post 1994.” The main reason might be that intelligent agent theory integrates some of the most challenging concepts in science, including artificial intelligence (AI), data mining, and contract theory. Agent technology can be considered another demanding application of these concepts and will succeed or fail depending on progress in these areas. The take-up of agent technology is therefore likely to suffer the same ups and downs that AI has experienced in recent decades. In the longer term, however, there is a large area of opportunity for agents supporting human users of computer systems which has yet to be fully developed.

KEY TERMS

Artificial Intelligence: Computer systems that feature automated human-intelligent, rational behavior and employ knowledge representation and reasoning methods.

Business Process: A process at the business layer of an organization. Since the 1990s, the focus of any business reengineering project and one of the central inputs for IT design. Also called workflow.

Categorization: The process of deducing, from the content of an artifact, the potentially multiple ways in which the artifact can be classified for the purpose of later retrieval from a database, library, collection, or physical storage system.

Contract Theory: Theory dealing with aspects of negotiation and contracting between two or more parties.

Data Mining: Integrating statistics, database technology, pattern recognition, and machine learning to generate additional value and strategic advantages.

Game Theory: Mathematical theory of rational behavior for situations involving conflicts of interest.

Workflow: The automation of a business process, in whole or part, during which documents, information, or tasks are passed from one participant to another for action, according to a set of procedural rules. Also called business process.
