important at least for combinatorial creativity, which seems to be very easy for humans but very difficult for AI systems [5, 6]. Furthermore, the importance of analogy is of course not limited to creativity, as analogical ability has been proposed as an indispensable component of artificial general intelligence as well [11, 26].

With the above in mind, it makes sense to develop models of analogy, both computational and theoretical. Much work has been done in this direction; a few implementations include SME [17], LISA [29], HDTP [40], and recently our own META-R [32]. On the surface, it seems that the current generation of analogical systems sufficiently captures and explains all of the phenomena commonly associated with analogical reasoning, and that such systems will eventually reach levels characteristic of human cognition. It may well be the case that the most important principles underlying the nature of analogy have already been expressed. But a serious objection has been raised recently which, as will be argued, should be the primary focus of analogical researchers over the next few years, at least if any significant further progress is to be made in the direction of creativity and AGI.
The objection is raised by Gentner and Forbus [24]. They call it the 'tailorability concern' (TC), and it echoes a common criticism of cognitive systems in general: that they operate on toy examples manually constructed in such a way as to guarantee the desired solution. However, though this concern has been stated in many forms throughout the years [36], it lacks, to our knowledge, a formulation clear enough to anchor productive scientific discussion. This ambiguity in turn negatively impacts not only the relevant science, but AI engineering as well: absent a definition of TC, it is difficult to understand precisely what an analogical system must do in order to successfully answer it. In the remainder, we take steps toward addressing this problem as it applies to analogical systems.
5.2 The Tailorability Concern
A frequently appearing criticism of cognitive systems in general is that they are only applied to manually constructed 'toy examples', a problem many researchers in the field themselves acknowledge. Gentner and Forbus [24] refer to the problem as the tailorability concern (TC): "that is, that (whether knowingly or not) the researchers have encoded the items in such a way as to give them the desired results" [24].
Of course, nothing is wrong with toy examples per se. They can be extremely useful in demonstrating key concepts, illustrating particular qualitative strengths or weaknesses of computational models, or helping to get a new model off the ground. Indeed, the present authors plead guilty to using toy examples in these ways; properly done, carefully chosen microcosmic cases can serve well as demonstrations of concept. But we should be careful not to treat such examples as the final proof of a system's worth, since in most of these examples it is not clear that the principles used to solve them generalize to other problems, nor is it clear that such principles can be used to mechanically find useful solutions just as effectively in the absence of human assistance.