Other imitation rules - as plausible as Schlag's - yield processes different from any
biological ones. (Börgers 1996, p. 1383)
This is even more obvious with respect to belief learning. For example, the choice of
different reasoning principles or heuristics may lead to different beliefs about
strategies, strategy outcomes, etc., even when these are based on the same actual interactions.
This sensitivity of the population dynamic to the specifics of the learning rules
increases the 'idealisation gap' between the biological and the learning models.
Third, and related to the previous point, all learning models have to make the strong
assumption that players do not make mistakes - that they never switch from a better to
a worse strategy. Such mistakes are a real possibility in all learning models - as agents have
to actively identify strategies, associate payoffs with them and choose their
own actions on that basis - while they have no significance in the biological model.
This is usually dealt with by taking expected values. Averaging in this
way over the possible behaviours of an agent idealises the influence of players'
mistakes away: even if there is a positive probability that a player will switch from
better to worse, on average the player will not (cf. Gintis 2000, p. 192).
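A minimal numerical sketch of this averaging move, assuming a two-strategy toy case; the payoff values and the mistake probability are illustrative and not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative payoffs for a two-strategy toy case (A is the better strategy).
PAYOFF = {"A": 3.0, "B": 1.0}
MISTAKE_PROB = 0.1  # hypothetical probability of switching to the worse strategy

def revise(rng):
    """One revision step: the player normally adopts the better strategy,
    but with MISTAKE_PROB mistakenly switches to the worse one."""
    return "B" if rng.random() < MISTAKE_PROB else "A"

# Taking expected values: average the outcome over many possible realisations.
realised = [PAYOFF[revise(rng)] for _ in range(100_000)]
print("average payoff after revision:", np.mean(realised))  # roughly 2.8
print("payoff of the better strategy:", PAYOFF["A"])         # 3.0
# Individual realisations sometimes switch from better to worse, but the
# expected movement is towards the better strategy - which is how the
# learning models idealise players' mistakes away.
```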
Fourth, stochastic fictitious play models face the particular problem of excessive
time horizons. As Sobel starkly puts it,
the long-run predictions [of stochastic fictitious play] only are relevant for cockroaches, as
all other life forms will have long been extinct before the system reaches its limits. (Sobel
2000, p. 253)
To turn the stochastic belief-learning models into representations of social
mechanisms, the time horizons must thus be idealised.
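For a rough sense of why the time horizon matters, here is a minimal sketch of a logit-smoothed fictitious play process in a toy 2x2 coordination game; the payoff matrix, the sensitivity parameter BETA and the horizon are illustrative assumptions only:

```python
import numpy as np

# Minimal sketch of logit-smoothed (stochastic) fictitious play in a
# symmetric 2x2 coordination game.
PAYOFF = np.array([[2.0, 0.0],
                   [0.0, 1.0]])  # row player's payoff matrix (illustrative)
BETA = 5.0                        # hypothetical choice sensitivity

def logit_response(belief, rng):
    """Noisy best response to the opponent's empirical action frequencies."""
    expected = PAYOFF @ belief
    probs = np.exp(BETA * expected)
    probs /= probs.sum()
    return rng.choice(2, p=probs)

rng = np.random.default_rng(0)
counts = np.ones((2, 2))  # counts[i] = player i's observations of the other's play
for t in range(1, 100_001):
    beliefs = counts / counts.sum(axis=1, keepdims=True)
    a0 = logit_response(beliefs[0], rng)
    a1 = logit_response(beliefs[1], rng)
    counts[0, a1] += 1
    counts[1, a0] += 1
    if t in (10, 1_000, 10_000, 100_000):
        # Empirical frequencies move by roughly 1/t per period, so the process
        # slows down drastically: its long-run limit is approached only over
        # horizons far beyond any plausible social interaction.
        print(t, np.round(counts[0] / counts[0].sum(), 3))
```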
Fifth, the imitation learning model faces the particular problem of requiring
interpersonal comparisons of utility (Grüne-Yanoff 2011b). The biological RD
model requires this, too - yet while the requirement is innocuous under the fitness
interpretation, it is highly problematic when payoffs are interpreted as numerical
representations of preferences. Thus, this extra requirement constitutes an important
difference between the belief-learning models and the other models discussed
here.
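A minimal sketch of a proportional imitation rule in the spirit of Schlag's makes the requirement visible; the payoff bounds and example payoffs are illustrative assumptions:

```python
import random

# Proportional imitation sketch: switch to an observed player's strategy with
# probability proportional to the positive part of the payoff difference.
PAYOFF_MAX, PAYOFF_MIN = 4.0, 0.0  # assumed known bounds used for normalisation

def imitate(own_strategy, own_payoff, observed_strategy, observed_payoff):
    """The rule subtracts one player's payoff from another player's payoff,
    which presupposes that payoffs are interpersonally comparable - innocuous
    for fitnesses, problematic for preference representations."""
    switch_prob = max(observed_payoff - own_payoff, 0.0) / (PAYOFF_MAX - PAYOFF_MIN)
    return observed_strategy if random.random() < switch_prob else own_strategy

# Example: my strategy earned 1.0, the observed player's strategy earned 3.0,
# so I switch with probability (3.0 - 1.0) / 4.0 = 0.5.
print(imitate("hawk", 1.0, "dove", 3.0))
```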
Certain substantial idealisations thus need to be made when the RD model is
interpreted biologically, and a different set of substantial idealisations needs to be made
when it is interpreted socially. By making these different idealisations,
we adapt the model for its respective representative uses. This is standard scientific
practice: most, and possibly all, model uses involve idealisations.
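For orientation, the shared formal structure at issue is the replicator equation, which in one standard continuous-time form reads (notation supplied here for reference):

```latex
% Replicator dynamic: the population share x_i of strategy i grows exactly when
% the strategy's expected payoff exceeds the population-average payoff.
\dot{x}_i \;=\; x_i \,\bigl[\, u(e_i, x) - u(x, x) \,\bigr]
```

Here u(e_i, x) is the expected payoff of strategy i against the population state x and u(x, x) is the population-average payoff; under the biological interpretation these payoffs are fitnesses, while under the social interpretations they are read as the payoffs that drive learning or imitation.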
Yet when the same formal structure is employed to construct different, more
specific mechanistic models, and each of these models involves different
idealisations, one has to be careful when inferring purported similarities between
these different mechanisms based on the common formal structure. Like the duck-
rabbit, the RD equation is adapted for its respective representative tasks. In the
course of each adaptation, certain features of the RD are drawn on - others are
accepted as useful or at least harmless idealisations. Which features are drawn on
and which are accepted as idealisations differ with each adaptation. The mechanisms
that the different adaptations of the RD represent are substantially different from one another
and share little or no causal structure with each other. Thus, there is