Decision Making in Intelligent Agents (Artificial Intelligence)

INTRODUCTION

There are several ways of building complex distributed software systems, for example in the form of software agents. Regardless of the form, some problems are common to them, having to do with specification versus execution. One such problem is the inherent dynamics of the environment to which many systems are exposed. The properties of the environment are not known with any precision at the time of construction, which renders a specification of the system incomplete by definition. A traditional software agent is only prepared to handle situations conceived of and implemented at compile-time; even though it can operate in varying contexts, its decision-making abilities are static. One remedy is to prepare the distributed components for a truly dynamic environment, i.e., an environment with changing and somewhat unpredictable conditions. A rational software agent needs both a representation of the decision problem at hand and means for evaluating it. AI has traditionally addressed some parts of this problem, such as representation and reasoning, but has hitherto to a lesser degree addressed the decision-making abilities of independent distributed software components (Ekenberg, 2000a, 2000b). Such decision making often has to be carried out under severe uncertainty regarding several parameters. Thus, methods for independent decision-making components should be able to handle uncertainties in the probabilities and utilities involved. Such methods have mostly been studied as means of representation, but are now being developed into functional theories of decision making suitable for dynamic use by software agents and other distributed components. A functional theory of this kind will also benefit analytical decision support systems intended to aid humans in their decision making. Thus, the generic term agent below stands for a dynamic software component as well as for a human or a group of humans assisted by intelligent software.


BACKGROUND

Ramsey (1926/78) was the first to suggest a theory that integrated ideas on subjective probability and utility in presenting (informally) a general set of axioms for preference comparisons between acts with uncertain outcomes (probabilistic decisions). von Neumann and Morgenstern (1947) established the foundations for a modern theory of utility. They stated a set of axioms that they deemed reasonable to a rational decision-maker (such as an agent), and demonstrated that the agent should prefer the alternative with the highest expected utility, given that she acted in accordance with the axioms. This is the principle of maximizing the expected utility. Savage (1954/72) published a thorough treatment of a complete theory of subjective expected utility. Savage, von Neumann, and others structured decision analysis by proposing reasonable principles governing decisions and by constructing a theory out of them. In other words, they (and later many others) formulated a set of axioms meant to justify their particular attitude towards the utility principle, cf., e.g., Herstein and Milnor (1953), Suppes (1956), Jeffrey (1965/83), and Luce and Krantz (1971). In classical decision analysis, of the types suggested by Savage and others, a widespread opinion is that utility theory captures the concept of rationality.

Since Raiffa (1968), probabilistic decision models have commonly been given a tree representation (see Fig. 1). A decision tree consists of a root, representing a decision, a set of event nodes, representing some kind of uncertainty, and a set of consequence nodes, representing possible final outcomes. In the figure, the decision is a square, the events are circles, and the final consequences are triangles. Events unfold from left to right, until final consequences are reached. There may also be more than one decision to make, in which case the sub-decisions are made before the main decision.

Figure 1. Decision tree

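To make the tree representation concrete, the following minimal Python sketch (the class and function names are ours, purely illustrative) evaluates such a tree with precise weights by folding it from the consequences back towards the root; a nested decision is resolved first, mirroring the rule that sub-decisions are made before the main decision.

```python
# A minimal sketch of a decision tree with precise (zero-order) weights.
# Decision = square, Event = circle, Consequence = triangle, as in Fig. 1.
# All names here are illustrative, not taken from the cited literature.

class Consequence:
    def __init__(self, utility):
        self.utility = utility            # value of a final outcome

class Event:
    def __init__(self, branches):
        self.branches = branches          # list of (probability, child) pairs

class Decision:
    def __init__(self, alternatives):
        self.alternatives = alternatives  # dict: alternative name -> child node

def expected_utility(node):
    """Fold the tree from right to left into a single expected utility."""
    if isinstance(node, Consequence):
        return node.utility
    if isinstance(node, Event):
        return sum(p * expected_utility(child) for p, child in node.branches)
    # A nested (sub-)decision is resolved before the main decision.
    return max(expected_utility(child) for child in node.alternatives.values())

# Two alternatives, each leading to an uncertain event over two outcomes.
tree = Decision({
    "A1": Event([(0.4, Consequence(0.7)), (0.6, Consequence(0.2))]),
    "A2": Event([(0.5, Consequence(0.5)), (0.5, Consequence(0.4))]),
})
for name, node in tree.alternatives.items():
    print(name, expected_utility(node))   # A1 ~0.40, A2 ~0.45
```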

In decision trees, probability distributions are assigned in the form of weights (numbers) in the probability nodes as measures of the uncertainties involved. Obviously, such a numerically precise approach puts heavy demands on the input capability of the agent. The shortcomings of this representation are many and have to be compensated for, see, e.g., Ekenberg (2000a). Among other things, the question has been raised whether people are capable of providing the input information that utility theory requires (cf., e.g., Fischhoff et al., 1983). For instance, most people cannot clearly distinguish between probabilities ranging roughly from 0.3 to 0.7 (Shapira, 1995). Similar problems arise in the case of artificial agents, since utility-based artificial agents usually base their reasoning on human assessments, for instance in the form of induced preference functions. The so-called reactive agents, for which this does not hold true, have not been put to use in dynamic domains involving uncertainty (cf., e.g., Russell & Norvig, 1995). Furthermore, even if an agent were able to discriminate between different probabilities, complete, adequate, and precise information is very often missing.

Consequently, recent years of rather intense research activity have seen several alternative approaches emerge. In particular, first-order approaches, i.e., approaches based on sets of probability measures, upper and lower probabilities, or interval probabilities, have prevailed. A main class of such models focuses on expressing probabilities in terms of intervals. In 1953, the concept of capacities was introduced (Choquet, 1953/54). This representation approach was further developed in (Huber, 1973; Huber & Strassen, 1973). Capacities have subsequently been used for modelling imprecise probabilities as intervals (capacities of order 2, Denneberg, 1994). Since the beginning of the 1960s, the use of first-order (interval-valued) probability functions, by means of classes of probability measures, has been integrated into classical probability theory by, e.g., Smith (1961) and Good (1962). Similarly, Dempster (1967) investigated a framework for modelling upper and lower probabilities, which was further developed by Shafer (1976), who provided a representation of belief in states or events. Within the AI community, the Dempster-Shafer approach has received a good deal of attention. However, their formalism seems to be too strong to be an adequate representation of belief (Weichselberger & Pohlman, 1990).

Other representations in terms of upper and lower probabilities have been proposed by, among others, Hodges and Lehmann (1952), Hurwicz (1951), Wald (1950), Kyburg (1961), Levi (1974, 1980), Walley (1991), Danielson and Ekenberg (1998, 2007), and Ekenberg et al. (2001). Upper and lower previsions have also been investigated by various authors. For instance, Shafer et al. (2003) suggest a theory of how to understand subjective probability estimates, based on Walley (1991). A few approaches have also been based on logic, e.g., Nilsson (1986), who develops methods for dealing with sentences involving upper and lower probabilities. This kind of approach has been pursued further by, among others, Wilson (1999).

SECOND-ORDER REPRESENTATIONS

A common characteristic of the first-order representations above is that they typically do not include all of the strong axioms of probability theory, and thus they do not require an agent to model and evaluate a decision situation using precise probability (and, in some cases, value) estimates. An advantage of representations using upper and lower probabilities is that they do not require taking probability distributions into consideration. On the other hand, it is then often difficult to devise a reasonable decision rule that finds an admissible alternative out of a set of alternatives and at the same time fully reflects the intentions of an agent (or its owner). Since the probabilities and values are represented by intervals, the expected value of an alternative will also be an interval. In effect, the procedure retains all alternatives with overlapping expected utility intervals, even if the overlap is very small. Furthermore, such representations do not allow discrimination between different degrees of belief in different values within the intervals.

All of these representations face the same tradeoff. Zero-order approaches (i.e., fixed numbers representing probability and utility assessments) require unreasonable precision in the input data. Even though evaluation and discrimination between alternatives become simple, the results often do not represent the problem well, and sensitivity analyses are hard to carry out for more than a few parameters at a time. First-order approaches (e.g., intervals) offer a remedy to the representation problem by allowing imprecision in probability and utility assessments, reflecting the uncertainty inherent in most real-life decision problems faced by agents. But this permissiveness opens up a can of worms: evaluation and discrimination become much harder because the evaluation results of different options overlap, i.e., the worst case for one alternative is no better than the best case for another (and vice versa), which renders a total ranking order between the alternatives impossible to achieve, as the sketch below illustrates. The trade-off between realistic representation and discriminative power has not been resolved within the above paradigms. For a solution, one must look to second-order approaches, which allow both imprecision in representation and the power of admissible discrimination.
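As an illustration, consider the following minimal Python sketch (the numbers are hypothetical and the function name is ours). For simplicity it bounds the expected utility by treating each interval independently; a consistent first-order model would additionally require the probabilities to sum to one, which tightens the bounds but does not remove the overlap problem.

```python
# First-order (interval) sketch: the expected utility of each alternative
# becomes an interval, and overlapping intervals block a total ranking.
# Simplification: each interval is treated independently; a full model
# would also enforce that the probabilities sum to one.

def eu_bounds(branches):
    """branches: list of ((p_lo, p_hi), (v_lo, v_hi)) with values in [0, 1]."""
    lo = sum(p_lo * v_lo for (p_lo, _), (v_lo, _) in branches)
    hi = sum(p_hi * v_hi for (_, p_hi), (_, v_hi) in branches)
    return lo, hi

A1 = [((0.3, 0.5), (0.6, 0.8)), ((0.5, 0.7), (0.1, 0.3))]
A2 = [((0.4, 0.6), (0.4, 0.6)), ((0.4, 0.6), (0.3, 0.5))]

b1, b2 = eu_bounds(A1), eu_bounds(A2)
print(b1, b2)                               # ~(0.23, 0.61) and ~(0.28, 0.66)
overlap = b1[0] <= b2[1] and b2[0] <= b1[1]
print("overlapping intervals:", overlap)    # True: neither alternative dominates
```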

Approaches extending the interval representation using distributions over classes of probability and value measures have been developed into various hierarchical models, such as second-order probability theory (Gärdenfors & Sahlin, 1982, 1983; Ekenberg & Thorbiörnson, 2001; Ekenberg et al., 2005). Gärdenfors and Sahlin consider global distributions of belief, but restrict themselves to interval representations and only to probabilities, not utilities. Other limitations are that they neither investigate the relation between global and local distributions, nor introduce methods for determining the consistency of user-asserted sentences. The same applies to Hodges and Lehmann (1952), Hurwicz (1951), and Wald (1950). Some more specialized approaches have recently been suggested, such as Jaffray (1999), Nau (2002), and Utkin and Augustin (2003). In general, very few authors have addressed the problems of computational complexity that arise when solving decision problems involving such estimates. Needless to say, it is important for dynamic agents to be able to determine, within a reasonably short time, how various evaluative principles rank the given options in a decision situation.

Ekenberg et al. (2006) and Danielson et al. (2007) provide a framework for how second-order representation can be systematically utilized to put belief information to use, in order to efficiently discriminate between alternatives that evaluate into overlapping expected utility intervals under first-order interval evaluations. The belief information takes the form of a joint belief distribution, specified as marginal belief distributions projected on each parameter. It is shown that regardless of the form of the belief distributions over the originating intervals, the distributions resulting from multiplications and additions have forms very different from their components. This warp of the resulting belief demonstrates that analyses using only first-order information, such as upper and lower bounds, do not take all available information into account. The method takes into consideration the agent's belief, expressed or implied, in different parts of the intervals; it can be said to represent the beliefs in various sub-parts of the feasible intervals. As a result, total lack of overlap is not required for successful discrimination between alternatives. Rather, an overlap by interval parts carrying little belief mass, i.e., representing a small part of the agent's belief, is allowed. The non-overlapping parts can then be thought of as the core of the agent's appreciation of the decision situation, thus allowing discrimination.
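The warp effect is easy to reproduce numerically. The following Monte Carlo sketch (ours, not the cited authors' algorithm; the intervals are hypothetical) draws each parameter uniformly from its interval and shows that the resulting expected utility is far from uniform over its range, with belief mass concentrated around the middle and very little near the endpoints.

```python
# Monte Carlo sketch of the "warp" effect: uniform beliefs over each
# parameter interval do not yield a uniform belief over the expected
# utility, which is built from sums and products of those parameters.
import random

def sample_eu():
    p = random.uniform(0.3, 0.5)    # P(first outcome); second outcome gets 1 - p
    v1 = random.uniform(0.6, 0.8)   # value of first outcome
    v2 = random.uniform(0.1, 0.3)   # value of second outcome
    return p * v1 + (1 - p) * v2

samples = sorted(sample_eu() for _ in range(100_000))
lo, hi = samples[0], samples[-1]
tenth = (hi - lo) / 10

def share(a, b):
    """Fraction of samples falling in [a, b)."""
    return sum(a <= s < b for s in samples) / len(samples)

# Under a uniform resulting belief, each tenth of the range would hold ~10%.
print("lowest tenth of range:", share(lo, lo + tenth))                       # well below 0.10
print("middle tenth of range:", share(lo + 4.5 * tenth, lo + 5.5 * tenth))   # well above 0.10
```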

There are essentially three ways of evaluating (i.e., making a decision in) a second-order agent decision problem. The first way (centroid analysis) is to use the centroid as the best single-point representative of the distributions. The centroid is additive and multiplicative; thus, the centroid of the distribution of the expected utility is the expected utility of the centroids of the projections. A centroid analysis gives a good overview of a decision situation. The second way (contraction analysis) is to use the centroid as a focal point (contraction point) towards which the intervals are shrunk while studying the overlap of the first-order expected utility intervals. The third way (distribution analysis) is more elaborate, involving the analysis of the resulting distributions of expected utility and the calculation of the fraction of belief overlap between the alternatives being evaluated. The first two ways are sketched below.
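The following Python sketch illustrates the first two ways under the simplifying assumption of uniform marginal beliefs, so that the centroid of each interval is its midpoint (the names and numbers are ours). The contraction analysis shrinks every interval linearly towards its midpoint and reports whether the expected utility intervals still overlap at each contraction level.

```python
# Sketch of centroid and contraction analysis, assuming uniform marginal
# beliefs (so each interval's centroid is its midpoint). Illustrative only.

def centroid_eu(branches):
    """Expected utility computed from the interval midpoints (centroids)."""
    return sum(((p_lo + p_hi) / 2) * ((v_lo + v_hi) / 2)
               for (p_lo, p_hi), (v_lo, v_hi) in branches)

def contract(iv, level):
    """Shrink interval iv towards its midpoint; level 0 keeps it, 1 collapses it."""
    lo, hi = iv
    mid = (lo + hi) / 2
    return (lo + level * (mid - lo), hi - level * (hi - mid))

def contracted_bounds(branches, level):
    lo = sum(contract(p, level)[0] * contract(v, level)[0] for p, v in branches)
    hi = sum(contract(p, level)[1] * contract(v, level)[1] for p, v in branches)
    return lo, hi

A1 = [((0.3, 0.5), (0.6, 0.8)), ((0.5, 0.7), (0.1, 0.3))]
A2 = [((0.4, 0.6), (0.4, 0.6)), ((0.4, 0.6), (0.3, 0.5))]

print("centroids:", centroid_eu(A1), centroid_eu(A2))   # ~0.40 vs ~0.45
for level in (0.0, 0.5, 0.9, 1.0):
    b1 = contracted_bounds(A1, level)
    b2 = contracted_bounds(A2, level)
    print(level, "overlap:", b1[0] <= b2[1] and b2[0] <= b1[1])
```

In this hypothetical example, the overlap disappears at a contraction level of about 0.9, so A2 can be singled out without requiring the full precision of single numbers.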

FUTURE TRENDS

During recent years, activity within the area of imprecise probabilities has increased substantially, and special conferences (ISIPTA) and journals (IJAR) are now dedicated to this theme. Second-order theories will in the future be fully developed into functional theories of decision making suitable for dynamic use by distributed software components. Algorithms for efficient evaluation by agents, using at least the first two ways of analysis above, will be developed.

CONCLUSION

In this article, we discuss various approaches to probabilistic decision making in agents. We point out that theories incorporating second-order belief can provide more powerful discrimination to the agent (software agent or human being) when handling aggregations of interval representations, such as in decision trees or probabilistic networks, and that interval estimates (upper and lower bounds) are not complete in themselves. This applies to all kinds of decision trees and probabilistic networks, since they all use multiplications in their evaluations. The key idea is to use all the available information in the efficient evaluation of decision structures. Interval estimates alone often do not provide enough discriminative power for the agent to generate a preference order among the alternatives considered.

Second-order methods are not just nice theories, but should be taken into account to provide efficient decision methods for agents, in particular when handling aggregations of imprecise representations as is the case in decision trees or probabilistic networks.

KEY TERMS

Admissible Alternative: Given a decision tree and two alternatives $A_i$ and $A_j$, $A_i$ is at least as good as $A_j$ iff $E(A_i) - E(A_j) \geq 0$ for all consistent variable assignments for the probabilities and values, where $E(A_i)$ is the expected value of $A_i$. $A_i$ is better than $A_j$ iff $A_i$ is at least as good as $A_j$ and $E(A_i) - E(A_j) > 0$ for some consistent variable assignment.

$A_i$ is admissible iff no other alternative is better.

Centroid: Given a belief distribution $F$ over a cube $B$, the centroid $F_c$ of $F$ is

$$F_c = \int_B x F(x) \, dV_B,$$

where $V_B$ is some $k$-dimensional Lebesgue measure on $B$.

Decision Tree: A decision tree consists of a root node, representing a decision, a set of intermediate (event) nodes, representing some kind of uncertainty, and a set of consequence nodes, representing possible final outcomes. Usually, probability distributions are assigned in the form of weights in the event nodes as measures of the uncertainties involved.

Expected Value: Given a decision tree with $r$ alternatives $A_i$, $i = 1,\ldots,r$, the expression

$$E(A_i) = \sum_{i_1} p_{ii_1} \sum_{i_2} p_{ii_1i_2} \cdots \sum_{i_m} p_{ii_1 \ldots i_m} v_{ii_1 \ldots i_m},$$

where $p_{ii_1 \ldots i_j}$, $j \in \{1,\ldots,m\}$, denote probability variables and $v_{ii_1 \ldots i_m}$ denote value variables, is the expected value of alternative $A_i$.
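For instance (an illustrative depth-one tree, $m = 1$, with hypothetical numbers): $E(A_1) = p_{11} v_{11} + p_{12} v_{12} = 0.4 \cdot 0.7 + 0.6 \cdot 0.2 = 0.40$.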

Joint Belief Distribution: Let a unit cube be represented by $B = (b_1,\ldots,b_k)$. By a joint belief distribution over $B$, we mean a positive distribution $F$ defined on the unit cube $B$ such that

$$\int_B F(x) \, dV_B = 1,$$

where $V_B$ is some $k$-dimensional Lebesgue measure on $B$.

Marginal Belief Distribution: Let a unit cube $B = (b_1,\ldots,b_k)$ be given and let $F$ be a joint belief distribution over $B$. Then

$$f_i(x_i) = \int_{B'} F(x_1,\ldots,x_k) \, dV_{B'},$$

where $B'$ is the cube spanned by all axes of $B$ except $b_i$, is a marginal belief distribution over the axis $b_i$.

Projection: Let $B = (b_1,\ldots,b_k)$ and $A = (b_{i_1},\ldots,b_{i_m})$, $i_j \in \{1,\ldots,k\}$, be unit cubes. Furthermore, let $F \in BD(B)$, the set of belief distributions over $B$, and let

$$f(x_A) = \int_{A'} F(x) \, dV_{A'},$$

where $A'$ is the cube spanned by the axes of $B$ not in $A$. Then $f$ is the projection of $F$ on $A$. A projection of a belief distribution is also a belief distribution.
