Fig. 5.1 The general RD model
Let me describe the RD model in more detail. A population is a set of
individuals. Individuals are programmed to play one strategy. A strategy is a
complete plan of action for whatever situation might arise; this fully determines
the player's behaviour. A population state is defined as the vector x ( t )
,
x k ( t )), where each component x i ( t ) is the frequency of strategy i in the population at
time t . 2 The replicator dynamics is a function that maps a population state at time
t onto a population state at t + 1. It exists both as a discrete version, in which
x ( t +1)
¼
( x 1 ( t ),
...
f ( x ( t )).
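As a minimal sketch of the discrete version, the update below instantiates f by rescaling each strategy's frequency by its expected payoff relative to the population average. This particular choice of f, the payoff matrix, and the assumption of positive payoffs are illustrative, not fixed by the text.

```python
# Sketch of the discrete-time replicator map x(t+1) = f(x(t)).
# The form of f used here (rescaling by relative expected payoff) is one
# standard instantiation, assumed for illustration; payoffs must be positive.
import numpy as np

def replicator_step(x, U):
    """One discrete RD step.

    x : 1-D array, population state (strategy frequencies summing to 1)
    U : 2-D array, U[i, j] = payoff to strategy i against strategy j
    """
    fitness = U @ x               # expected payoff u(i, x) of each strategy i
    avg = x @ fitness             # population-average payoff
    return x * fitness / avg      # frequencies grow in proportion to payoff

# Hypothetical 2-strategy game (payoff values are not from the text)
U = np.array([[3.0, 1.0],
              [2.0, 2.0]])
x = np.array([0.5, 0.5])
for t in range(10):
    x = replicator_step(x, U)
print(x)  # population state after 10 periods
```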
The RD function relates to the interaction of individuals in the population
through the following five steps. First, a population of individuals is presented, and the variation of strategies in the population is described by the population state.
Second, in each period, every individual is paired at random with another individual
from the population. These individuals play the strategies that they are programmed
to play against each other. Third, a game is specified that members of the population
play with one another. Commonly, this game is a two-player simultaneous-move
game that for each player includes all strategies present in the population state. For
each strategy profile (i, j), a combination of strategy i of one player and strategy j of another player, the game specifies a payoff u_k(i, j) for each player k ∈ {1, 2}.
Fourth, the payoff an individual received from the interaction is interpreted as affecting the replication of this individual: how many individuals will play strategy i in the next period is proportional to how well individuals playing i in this period did vis-à-vis other individuals. Fifth, the proportionality of replication and payoffs leads to differential representation of strategies in the population in the next period. Over many periods, this differential representation may lead to convergence to a stable state, in which the differential representation of traits becomes stable over time, unless disturbed exogenously. Alternatively, the differential representation might change in a regular fashion, for example in oscillations or cycles. Tracking the
outcome of the dynamics over time reveals such stability or regularity results.
Figure 5.1 depicts these five steps graphically. 3
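The same five steps can also be read agent-by-agent. The sketch below assumes a finite population, a hypothetical 2x2 payoff matrix, and replication implemented as payoff-proportional resampling, which is one common way to model the fourth step rather than the book's own specification.

```python
# Schematic agent-based reading of the five steps: individuals programmed with
# strategies, random pairwise matching each period, payoffs from a two-player
# game, and payoff-proportional replication. Population size, payoff matrix,
# and number of periods are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

U = np.array([[3.0, 1.0],           # U[i, j]: payoff to strategy i against j
              [2.0, 2.0]])
N = 1000                            # population size (assumed)
pop = rng.integers(0, 2, size=N)    # step 1: individuals programmed with a strategy

for period in range(50):
    # step 2: pair individuals at random
    order = rng.permutation(N)
    payoff = np.zeros(N)
    for a, b in zip(order[0::2], order[1::2]):
        # step 3: each pair plays the specified two-player game
        payoff[a] = U[pop[a], pop[b]]
        payoff[b] = U[pop[b], pop[a]]
    # step 4: replication proportional to payoff
    probs = payoff / payoff.sum()
    pop = rng.choice(pop, size=N, p=probs)
    # step 5: track the differential representation of strategies over time
    x = np.bincount(pop, minlength=2) / N
    print(period, x)
```

Running the loop and inspecting the printed states is the computational analogue of tracking the outcome of the dynamics over time: the frequencies either settle down or keep moving in a regular pattern.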
Mathematically, these steps are represented as follows. Given a population state x(t), the expected payoff to any pure strategy i in a random match is u(i, x): an average of the payoffs u(i, j) against each strategy j present in the population, weighted by the frequencies x_j(t) with which those strategies occur.
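Written out, under the assumption that a random match draws an opponent playing strategy j with probability x_j(t), this average is

\[
u(i, x) \;=\; \sum_{j=1}^{k} x_j(t)\, u(i, j).
\]

The Python sketches above compute this quantity for all strategies at once via the matrix product U @ x.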
2 The population state is formally identical to a mixed strategy. Its support is the set of strategies
played by individuals in the population.
3 These and the following graphs are schematic representations of models: of the formal RD equation and its respective interpretations. I use these graphs in order to make the comparison between the different models more palpable.