Dynamical systems approaches (child development)


Introduction

When a liquid is heated from below, convection patterns may form in which warm currents rise to the surface in the centers of tightly packed hexagons, while the cooler parts of the liquid sink to the bottom at the boundaries of the hexagons. Such patterns are ‘self-organized’ in the sense that they arise from the laws of fluid flow and of heat transport through an instability, in which a small initial fluctuation grows into the full, regular convection pattern. The theory of such pattern-forming systems is based on the mathematics of non-linear dynamical systems.

One origin of dynamical systems approaches to development was an analogy between such forms of self-organization and the emergence of ordered patterns of nervous and behavioral activity in organisms. Although the analogy turned out to hold only superficially, the language of dynamical systems has proven fertile for a new perspective on developmental processes. An entry point was the study of patterns of coordinated movement, from which a dynamical systems approach to the development of motor behavior was initiated (Thelen & Smith, 1994). More recently, these ideas were extended to a dynamic field theory that addresses cognitive aspects of motor behavior and spatial representations (Thelen, Schoner, Scheier, & Smith, 2001). At a more abstract level, analogies between behavioral transitions and mathematical phenomena in catastrophe theory and non-linear dynamics were used to describe processes of change during development (van Geert, 1998). Also, neural network models are formally dynamical systems, a fact that has been made explicit in a number of connectionist models.

This entry first provides a brief tutorial on the relevant mathematical background of dynamical systems theory. The coordination of movement is used to illustrate the dynamical systems approach to the development of motor behavior. Dynamic field theory is illustrated in the context of the Piagetian A-not-B task. Links to other variants of dynamical systems approaches and to connectionism are discussed last.

What are dynamical systems?

The notion of dynamical systems comes from the branch of mathematics that has formed the foundations of most applications of mathematical formalization to the sciences. Through the theory of differential equations, this notion is central to physics and engineering, but is also used in a wide range of other fields.

A system is called ‘dynamical’ if its future evolution can be predicted from its present state. This lawfulness of temporal evolution comes to light only if appropriate state variables (lumped into a vector x) are identified. Given any possible initial state, the future evolution of the state variables is coded into the instantaneous direction and rate of change, dx/dt. This vector points from the initial state to the state in which the system will be found an infinitesimal moment in time later. The dynamical system thus ascribes to every possible value, x, of the state variables, a vector, f(x), that indicates the direction and rate of change from this particular value. This vector field is the dynamical function appearing on the right-hand side of the differential equation that formally defines the dynamical system:

dx/dt = f(x)

In physics and other disciplines, the principal task consists of finding appropriate state variables, x, and identifying the associated dynamical function f(x), which together capture the determinism and predictability of a system.
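To make the formalism concrete, the following sketch integrates a one-dimensional dynamical system of exactly this form with the forward Euler method. The code is illustrative only: the linear dynamical function f and all numerical parameters are assumptions chosen for demonstration, not taken from the literature reviewed here.

```python
import numpy as np

def f(x):
    """An assumed, illustrative dynamical function with a fixed point at x = 0."""
    return -0.5 * x

def simulate(x0, f, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = f(x) from the initial state x0."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * f(x)  # move in the instantaneous direction and rate of change
        trajectory.append(x)
    return np.array(trajectory)

# From any initial state, the state relaxes toward the fixed point at x = 0.
print(simulate(2.0, f)[-1])  # close to 0.0
```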

How might a specific scientific approach arise from a setting as general as this? Dynamical systems approaches to development are based on a much more specific class of dynamical systems, those having attractor solutions. Figure 1 illustrates the idea for the simplest case, in which the state of the system can be captured by a single state variable, x. For any possible initial state, x, the rate of change, dx/dt, determines whether x will increase (positive rate of change) or decrease (negative rate of change). The cross-overs between these two regimes are points at which the rate of change is zero, the so-called fixed points. When the initial state of the system lies at a fixed point, the state does not change further, and the system remains in that state.


Figure 1. The dynamical function, f(x), determines the rate of change, dx/dt, of state variable, x, for every initial value of x. Intersections with the x-axis (marked with filled circles) are fixed points. (A) Fixed points are attractors when the slope of the dynamical function is negative. (B) A positive slope makes the fixed point unstable. (C) A bistable dynamical system has two attractors, separated by an unstable fixed point.

In part (A) of the illustration, the region with positive growth rate lies at smaller values of the state variable than the region with a negative growth rate, so that the dynamical function has a negative slope around the fixed point. Therefore, from a small initial value, the state variable increases as long as the growth rate remains positive, that is, up to the fixed point. From a large initial value, the state variable decreases as long as the growth rate remains negative, that is, up to the fixed point. Thus, the fixed point attracts nearby initial states. Such a fixed point is an attractor state. An attractor is a stable state in the sense that when any perturbation pushes the state away from the attractor, the system is attracted back to this state. Conversely, if through some change in the system, the attractor state is displaced, the system follows that change, tracking the attractor.

When the arrangement of regions of positive and negative rate of change is reversed (part [B] of the figure), an unstable fixed point emerges at their boundary. Now, the dynamical function, dx/dt = f(x), intersects the x-axis with a positive slope at the fixed point, so that small deviations from the fixed point are amplified. When a perturbation pushes the system away from the fixed point to larger values, the positive rate of change drives the system further up. Analogously, a perturbation to lower values is amplified by the negative rate of change.


Figure 2. Three dynamical functions representing three points in a smooth change of the dynamical function, which consists of shifting the function upwards. (a) Initially, the dynamical function has three fixed points, two attractors (black filled circles) and one unstable fixed point (circle filled in gray). (b) As the function is shifted upward, one attractor and the unstable fixed point move toward each other until they collide and annihilate at the instability. (c) When the function is shifted up more, only one attractor remains.

Unstable solutions separate different attractor states. Part (C) of Figure 1 shows a case in which there are two attractors, one at small values, the other at larger values of the state variable. At each attractor, the slope of the dynamical function is negative. Between the two attractors is an unstable fixed point marking the boundary between the two attractors. Any initial state to the right of the unstable fixed point is attracted to the right-most fixed point and any initial state to the left of the unstable fixed point is attracted to the left-most attractor.

Clearly, a linear dynamical function, f(x), cannot generate multiple fixed points, because a straight line intersects the x-axis only once. Thus, multistability, that is, the co-existence of multiple attractors and associated unstable fixed points, is possible only in non-linear dynamical systems.
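As an illustration of such multistability (a sketch with an assumed cubic dynamical function, not a model from the developmental literature), f(x) = x - x^3 has three fixed points, and the sign of the slope f'(x) = 1 - 3x^2 at each fixed point determines whether it is an attractor, exactly as in part (C) of Figure 1.

```python
def f(x):
    """Assumed bistable dynamical function with fixed points at -1, 0, and +1."""
    return x - x**3

def f_prime(x):
    """Slope of the dynamical function, which determines stability."""
    return 1.0 - 3.0 * x**2

for x_star in (-1.0, 0.0, 1.0):
    kind = "attractor" if f_prime(x_star) < 0 else "unstable fixed point"
    print(f"x = {x_star:+.1f}: {kind}")
# x = -1.0: attractor, x = +0.0: unstable fixed point, x = +1.0: attractor
```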

When a system changes, the dynamical function may be altered. Mathematically, such change can be described through families of dynamical functions that smoothly depend on one or multiple parameters. Most smooth changes of the dynamical function transform the solutions of the dynamical system continuously. There are, however, points at which a smooth change of the dynamical function may lead to qualitative change of the dynamics, that is, to the destruction or creation of attractors and unstable fixed points. Such qualitative changes of a dynamical system are called instabilities. Figure 2 provides an example in which an initially bistable system is changed by increasing the overall rate of change. This pushes the left-most attractor and the unstable fixed point toward each other until they collide and annihilate. Beyond this point, only the right-most attractor remains. Thus, a particular attractor disappears as the dynamical function is changed in a global, unspecific manner. As the instability is approached, the slope of the dynamical function near the doomed attractor becomes flat, so that attraction to this fixed point is weakened. Thus, even before the instability is actually reached, its approach is felt through lessened resistance to perturbations (hence the term ‘instability’). If the system is initially in the left-most attractor, the instability leads to a switch to the right-most attractor.
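The scenario of Figure 2 can be reproduced with the same assumed cubic function by shifting it upward with a parameter a, that is, dx/dt = a + x - x^3. In this sketch (parameter values are illustrative), the left attractor and the unstable fixed point collide and annihilate near a ≈ 0.385, leaving a single attractor beyond the instability.

```python
import numpy as np

# Shifting the assumed cubic dynamical function upward: dx/dt = a + x - x**3.
# The instability (collision of the left attractor with the unstable fixed
# point) occurs at a = 2 / (3 * sqrt(3)), roughly 0.385.
for a in (0.0, 0.38, 0.6):
    roots = np.roots([-1.0, 0.0, 1.0, a])  # zeros of -x**3 + x + a
    fixed_points = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
    print(f"a = {a:.2f}: fixed points at {np.round(fixed_points, 3)}")
# a = 0.00 and a = 0.38 yield three fixed points; a = 0.60 yields only one.
```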

Coordination dynamics

How are such abstract mathematical concepts relevant for understanding behavior and development? That stability is a fundamental concept for understanding nervous function has been recognized since the early days of cybernetics. For any given nervous function or behavior, the large number of components of the nervous system and the complex and ever changing patterns of sensory stimulation are potential sources of perturbation. Only nervous functions that resist such perturbations, at least to an extent, may persist and actually be observable.

What is it though, that needs to be stabilized against potential perturbations? To visualize the ideas think first of the generation of voluntary movement. Most fundamentally, the physical movement of an effector (e.g., a limb, a joint angle, a muscle) must be stabilized against all kinds of forces such as the passive, inertial torques felt at one joint as a result of accelerations at other joints. Stability at that level of motor control may be helped by the physics of the system (such as the viscous properties of muscles that dampen movement), although the nervous system clearly contributes.

In a slightly more abstract analysis, the time courses of the effectors must be stabilized. Dancing to music, for instance, not only requires stable movement, but also the maintenance of a particular timing relationship with the music. When the dancer has fallen behind the beat of the music, she or he must catch up with the rhythm. When the dancer has drifted ahead of the rhythm, he or she must fall back to the correct timing. Similarly, a bimanual reach requires the two hands to arrive at the same time at the object. This is the problem of coordination, either between a movement and an event in the world (e.g., to catch a ball or to keep up with a rhythm), or between different movement effectors (e.g., to coordinate different limbs or to keep an effector on a trajectory through space). The stability of timing is thus the maintenance of temporal alignment between different movement events or between movement events and events in the world.

At an even more abstract level, the overall form of a movement is described by such parameters as direction or amplitude. These parameters must be assigned values to initiate a movement and those values must be stabilized. When, during the initiation of a goal-directed hand movement, for instance, the movement target is displaced, then an automatic adjustment of the movement trajectory brings the hand to the correct target.

Stability is thus a concept that cuts across different levels of neural control of motor behavior. To illustrate the ideas, the rest of this section focuses on a single level and behavior: interlimb coordination during rhythmical movement, perhaps the behavior best studied to date using the concepts of dynamical systems approaches (Schoner, Zanone, & Kelso, 1992). The relative time order of the movement of two limbs can be experimentally isolated from other levels of control by minimizing mechanical demands and mechanical coupling (e.g., in finger movements at moderate frequencies) and by keeping spatial constraints constant. The relative phase between the trajectories of the two limbs can then serve as a state variable (Fig. 3A). It characterizes the relative time order of the two limbs independently of the trajectory shapes and movement amplitudes.
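As a concrete illustration of this measure (an assumed computation on synthetic sinusoidal trajectories, not data from the cited studies), relative phase can be read off as the latency between matching events in the two trajectories, here the minima of position, expressed as a fraction of cycle time, as in Figure 3A.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
period = 1.0                                    # assumed cycle time in seconds
right = np.sin(2 * np.pi * t / period)          # right finger trajectory
left = np.sin(2 * np.pi * t / period - np.pi)   # left finger, moving in alternation

def minima_times(x, t):
    """Times at which a sampled trajectory has local minima."""
    idx = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    return t[idx]

# Latency between matching minima of the two trajectories, as a fraction of
# the cycle time: ~0.5 for alternation, ~0.0 for synchronous movement.
lag = (minima_times(left, t)[0] - minima_times(right, t)[0]) % period
print(f"relative phase = {lag / period:.2f}")
```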

The two fundamental and ubiquitous patterns of relative time order in coordinated rhythmical movement are synchronous movement and alternating movement. Variations of these patterns underlie locomotory gaits, but are also observed in speech articulatory movements, in musical skills, and many other motor behaviors. However, the two patterns are not both available under all conditions. Scott Kelso discovered that when the frequency of the alternating rhythmical movement pattern is increased, the variability of relative phase increases, leading to a degradation of the alternating pattern until it can no longer be consistently performed (Schoner & Kelso, 1988). As illustrated in Figure 3, this leads, under some conditions, to an involuntary shift to the synchronous pattern of coordination. The intention to perform the alternating pattern (as manipulated by instruction) helps stabilize the pattern, but does not make it immune to degradation at higher frequencies.

From a dynamical systems perspective, the two basic patterns of coordination must be attractor states of an effective dynamical system controlling relative timing. Nervous activity of various structures putatively contributes to this effective dynamical system including sensory processes reflecting the position of each effector in its cycle, central processes reflecting movement plans, intentions, attention, and other possible cognitive factors, and finally motor processes reflecting the activation of muscles and effectors. The resulting network has attractors whose stability is a matter of degree. Multiple processes contribute to that stability. When the stability of a pattern is ultimately lost, the performed coordination pattern changes.
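These attractor states and their frequency-dependent stability are captured by the relative-phase dynamics of the Haken-Kelso-Bunz (HKB) model, on which the cited coordination work builds: dphi/dt = -a sin(phi) - 2b sin(2phi), with relative phase phi here in radians (anti-phase at phi = pi) and the ratio b/a decreasing as movement frequency increases. The sketch below (parameter values are illustrative) checks the stability of both patterns from the slope of the dynamics.

```python
import numpy as np

def phi_dot(phi, a, b):
    """HKB relative-phase dynamics: dphi/dt = -a*sin(phi) - 2*b*sin(2*phi)."""
    return -a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi)

def is_attractor(phi, a, b, eps=1e-5):
    """A fixed point is an attractor when the slope of the dynamics is negative."""
    slope = (phi_dot(phi + eps, a, b) - phi_dot(phi - eps, a, b)) / (2 * eps)
    return slope < 0

# b/a shrinks as movement frequency grows; anti-phase loses stability at b/a = 0.25.
for b_over_a in (1.0, 0.2):
    a, b = 1.0, b_over_a
    print(f"b/a = {b_over_a}: in-phase attractor: {is_attractor(0.0, a, b)}, "
          f"anti-phase attractor: {is_attractor(np.pi, a, b)}")
```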


Figure 3. Schematic representation of the instability in rhythmic bimanual coordination. (A) The trajectories of the right (solid) and left (dashed) finger are shown as functions of time. The movement is initially coordinated in phase alternation, but switches to in-phase in the middle of the trial. This transition is induced by an increase in movement frequency. The relative timing of the two fingers can be represented by the relative phase, the latency between matching events in the two fingers’ trajectories (here: minima of position) expressed as a fraction of cycle time. (B) The relative phase as a function of frequency (dashed line) reflects this shift from anti-phase (relative phase near 0.5) to in-phase (relative phase near 0.0). When movement starts out in the in-phase coordination pattern then it remains in that pattern (solid line). (C) That the loss of anti-phase coordination at higher frequencies is due to an instability is demonstrated by the observation that the variability of relative phase increases with increasing frequency in the anti-phase pattern (dashed line), but not in the in-phase pattern (solid line).

Two additional observations are informative. Firstly, when at a fixed movement frequency a switch from synchronous to alternating coordination or back is performed purely intentionally, this process is not immune to the stability of the two patterns. Switching into the less stable pattern takes longer than switching into the more stable pattern. The process of achieving a desired pattern of coordination is helped by the mechanisms of stabilization of that pattern. In fact, from a theoretical perspective the conclusion is even more radical: to achieve a particular pattern, nothing but stabilization is needed. With the appropriate stabilization mechanisms in place, the pattern emerges through the convergence of the dynamical state toward the attractor.

Secondly, when a new coordination skill is being learned, what evolves is not just the performance at the practiced pattern, but also performance at nearby, non-practiced patterns. For instance, after extensive practice at producing an asymmetrical, 90° out-of-phase pattern of rhythmical finger movement, participants are systematically affected when they try to perform similar, unpracticed patterns (e.g., 60° or 120° out-of-phase). They are biased toward the practiced pattern, performing patterns intermediate between the instructed relative phase and 90°. In some individuals, this effect is strong enough to reduce the stability of the basic coordination patterns of synchrony and alternation, leading to instabilities induced by the learning process itself. Thus, what evolves during learning is the entire dynamical function governing the stability of the attractor states, which is reshaped so as to stabilize the practiced pattern. Learning consists of the shaping of the dynamical function from which performance emerges as attractor states.
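A minimal sketch of this view of learning (the functional forms, the learned term, and all parameter values are assumptions for illustration, not the cited learning model): practice at a 90-degree pattern is modeled as an additional attractive force on the relative phase, centered on the practiced value and added to the intrinsic coordination dynamics. Relaxing from instructed, unpracticed patterns then shows the bias toward the practiced pattern.

```python
import numpy as np

A, B = 1.0, 1.0                       # intrinsic coordination dynamics (HKB form)
PHI_LEARNED = np.pi / 2               # practiced 90-degree pattern
STRENGTH, WIDTH = 10.0, 0.5           # assumed strength and range of the learning

def phi_dot(phi):
    """Intrinsic dynamics plus an assumed learned attractive term at 90 degrees."""
    intrinsic = -A * np.sin(phi) - 2.0 * B * np.sin(2.0 * phi)
    d = phi - PHI_LEARNED
    learned = -STRENGTH * d * np.exp(-d**2 / (2.0 * WIDTH**2))
    return intrinsic + learned

def relax(phi0, dt=0.01, steps=20000):
    """Let the relative phase settle into the nearest attractor."""
    phi = phi0
    for _ in range(steps):
        phi += dt * phi_dot(phi)
    return phi

for instructed in (60.0, 120.0):      # unpracticed patterns near the practiced one
    settled = np.degrees(relax(np.radians(instructed)))
    print(f"instructed {instructed:.0f} deg -> settles near {settled:.1f} deg")
# With these assumed parameters both relax toward the practiced pattern (~80 deg).
```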

The conditions under which stable performance of a particular pattern emerges may therefore include both unspecific factors (movement frequency, mechanical load) and specific factors (intention, practice). The landscape of stable states is changed through instabilities. How do these insights impact on our understanding of the development of motor abilities? Three main insights to be gained from a range of studies are the following (Thelen & Smith, 1994). Firstly, at any point in the development of motor behavior, no particular movement pattern can be said to be simply present in or absent from the behavioral repertoire. The effective dynamical function underlying the relevant motor ability may be such that the pattern may emerge under appropriate environmental conditions or with appropriate motivation. Developmental change is thus characterized more adequately in terms of the range of conditions under which the pattern emerges as a stable state. This is clearly an insight linking gradualist thinking (at the level of mechanism, here of effective dynamical functions) and theories of the discontinuous change of abilities (at the level of the absence or presence of an attractor generating a particular action in a particular situation).

Esther Thelen (1941-2004) has shown, for example, that rhythmical stepping movements can be elicited at a much earlier age than the onset of walking, simply by providing mechanical support of the body and by transporting the feet on a treadmill (Thelen & Smith, 1994). More dramatically, when a split treadmill imposes different speeds on either leg, the coordination tendency toward alternation can still be detected. Thus, coordination mechanisms supporting stepping are already in place, waiting to emerge until other behavioral dimensions such as balance and strength change.

Secondly, learning a new motor ability means changing a dynamical function to stabilize the practiced pattern. Developmental processes that lead to the emergence of new motor abilities can therefore be understood as inducing change in the underlying dynamical functions that increase the stability of the new pattern. The theoretical insight is that such gradual stabilization is sufficient for the new pattern to emerge, either continuously or abruptly through an instability. The development of reaching movements in infants is a well-studied exemplary case. A number of studies have established that, during the months over which this ability is developed, the kinematic and kinetic patterns generated by the infant reduce in variance, although the specific patterns onto which this process converges at this stage are highly specific to the individual.

Thirdly, instabilities drive differentiation. If the dynamical function characterizing motor behavior in an early stage of development permits only a small number of attractor states, new states may emerge from instabilities through which these attractors split and multiply, in each case in relation to environmental and internal conditions. The empirical support for this rather broad theoretical conclusion is less direct. One indication is the transition from an early tendency to display stereotypical movements to a capacity later in motor development to generate task-specific movements. Convergent evidence comes from the general tendency for younger infants to have greater difficulty in disengaging from a specific motor activity, gaze direction, or from a particular stimulus, than older infants.

The dynamic field approach

The dynamical systems ideas reviewed up to this point lend themselves naturally to the analysis of overt motor behaviors, for which state variables at different levels of observation can be identified. The evolution of these state variables can be observed continuously in time, and, on that basis, the stability of attractor states can be assessed through the variability in time or from trial to trial.

Even within the motor domain, limitations of this approach can be recognized. When a goal-directed movement is prepared, for example, movement parameters such as direction, amplitude, amount of force to apply, duration, and others are assigned values, which may be updated when the relevant sensory information changes. More generally, however, the assumption that each movement parameter has a unique value at all times that evolves continuously is a strong one, for which there is only limited support. There is, for instance, not always a trace of previous values when a new motor act is prepared. When moving to more abstract forms of cognition, the need for additional concepts becomes clearer still. While spatial memory, for example, can still be conceived of as being about an underlying continuous variable, the quality of having memorized no, one, or multiple spatial locations must also be expressed. In perception, sets of stimuli might be thought to span continuous spaces of potential percepts, but the presence or absence of a particular stimulus and a particular percept must be represented as well.

An important extension of the dynamical systems approach is, therefore, the integration of the concept of activation into its framework. Activation has been used in theoretical psychology and the neurosciences many times to represent information. In connectionism, for instance, computational nodes (i.e., neurons) are activated to the degree to which the information they represent is present in the current input. This is the space code principle of neurophysiology, according to which the location of a neuron in a neural network determines what it is that the neuron represents (i.e., under which conditions the neuron is activated). Activation thus represents the absence (low levels of activation) or presence (high levels of activation) of information about a particular state of affairs, coded for by the neuron.

The link between the notion of activation and the dynamical systems approach is made through the concept of a dynamic field of activation that preserves the continuity of that which is represented, such as the continuity of the space of possible movements or the continuity of memorized spatial locations. At the same time, information about those spaces is likewise represented through continuous values of state variables by introducing continuously valued activation variables for each possible point in the underlying space. The result is activation fields, in which an activation level is defined for every possible state of the represented quantity.

Figure 4 illustrates the different states of affairs such an activation field may represent. To make things concrete, think of the field as representing the direction of a hand movement. A well-established movement plan consists of a peak of activation localized at the appropriate position in the field. In the absence of any kind of information about an upcoming movement, the activation field is flat. More typically, however, there is prior information about possible upcoming movements. Such information may come from the perceptual layout of work space, from the recent history of reaching, from cues, etc., and is represented by graded patterns of activation.


Figure 4. Patterns of activation in an activation field u(x) may represent (A) particular values of the underlying dimension, x, through the location of a peak of activation; (B) the absence of any specific information about that dimension; or (C) graded amounts of information about multiple values of the underlying dimension.

The preparation of a movement then consists of the generation of a peak of activation localized at the appropriate value of the underlying dimension, starting out from a more or less pre-structured pattern of prior activation. This generation is conceived of as the continuous evolution in time of the activation field, as described by a dynamical function that links the rate of change of the activation field to its current state.

In the simplest case, the activation field evolves toward attractors set by input. When, for example, a unique movement goal is specified by the perceptual layout (e.g., a single object is visible in work space), perceptual processes may be assumed to provide input to the movement parameter field that drives activation up at field locations representing movement parameter values appropriate to achieve reaching to that object. This input-output mode of operation requires perceptual analysis, extraction of metric information from the scene, and coordinate transformations to translate spatial information into information about corresponding movement parameter values.

It is easy, however, to encounter situations that inherently go beyond this input-output scheme. Natural environments have rich visual structure in work space so that a form of selection or decision making must occur to prepare a particular movement. The classical Piagetian A-not-B task, for instance, involves a form of such decision making (Thelen et al., 2001). Infants between 7 and 9 months of age are presented with a box into which two wells have been set, each covered by a lid. With the infant watching, a toy is hidden in one well, a delay imposed, and then the whole box is pushed toward the infant, so the lids can be reached for. At the time a reach is initiated, there are two graspable objects in the visual layout, the two lids of the two wells. Almost always, the infant reaches for one of the two lids, and thus makes a decision.

Most commonly, infants reach for the lid to which their attention was attracted when the toy was hidden. Subsequently, they will often recover the toy (although sometimes they enjoy just playing with the lids as well). Occasionally, however, infants may reach to the other lid, under which no toy was hidden. This error becomes quite frequent when the lid under which the toy is hidden is switched, so that after a number of trials in which the infant retrieved the toy under the A lid, the toy is now hidden under the other, B lid. The rate at which the toy is successfully retrieved in such switch trials is much smaller than the rate observed during the preceding A trials.

Older infants do not make such A-not-B errors. Are their motor plans more input-driven? A detailed analysis reveals the contrary. At least three sources of input contribute to the specification of the reaching movement in the A-not-B paradigm. The act of hiding the toy under one lid, together with attention-attracting signaling, provides input that is specific to the location of the hidden toy. This input is present only temporarily before the delay period, after which the infant initiates a reach. In contrast, the lids themselves provide constant input that is informative about the two graspable objects. Finally, the effect of prior reaches can be accounted for by assuming that a memory trace of previous patterns of activation is accumulated over time, biasing the motor representation to maintain the motor habit, that is, to reproduce the previous movement.

In a dynamic field model built on these three types of input (Fig. 5), the A-not-B error arises because input from the memory trace at A dominates over activation at B. Although specific input first induces activation at B on a B trial, this activation decays during the delay. This explains why the A-not-B error occurs less often at short delays than at longer delays. In order to avoid the A-not-B error at longer delays, activation at the cued location must be stabilized against decay and be enabled to win the competition with the activation induced by the memory trace of previous reaches. This requires interaction, that is, the interdependence of the evolution of the field at different field sites. Activation at neighboring field sites belonging to a single peak of activation may be mutually facilitatory, which helps sustain activation even when input is reduced. Activation at field sites that are sufficiently distant to contribute potentially to separate peaks of activation may be mutually inhibitory, so that the field sites in effect compete for activation. When the relative weight of interaction compared to the weight of input increases, an instability occurs.


Figure 5. The temporal evolution of an activation field representing reaching targets in the A-not-B task during a B trial. Perceptual input at the two locations A and B pre-activates the field initially. At the A location, there is additional pre-activation due to the memory trace of prior reaches to A. When the toy is presented at B, activation near that location increases. (A) In the input-driven system modeling younger infants, this activation peak decays during the delay period, so that when the reach is initiated, activation at A is higher. (B) The interaction-driven system modeling older infants self-sustains the peak even as specific input at B is removed. When the reach is initiated, activation at B is higher.  

At low levels of interaction, the field is input-dominated, so that for every input pattern, there is a unique matching activation pattern. At sufficiently large levels of interaction, the field may become interaction-dominated. Now there is no longer a unique mapping from input to activation patterns. New, self-stabilized patterns of activation may arise. One such pattern is self-sustained activation, in which a peak first induced by input remains stable even when the input is removed. Another related pattern is decision making, in which two sites receive input, but only one site develops a peak of activation.
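These regimes can be made concrete with a minimal one-dimensional activation field simulation in the spirit of such models (the specific equation form, the local-excitation/global-inhibition kernel, and every parameter value below are illustrative assumptions): the field obeys tau * du/dt = -u + h + input + w @ sigmoid(u), and with sufficiently strong interaction a peak induced by a transient input sustains itself after the input is removed.

```python
import numpy as np

N = 101                                  # field sites along the represented dimension
x = np.arange(N)
tau, h = 10.0, -2.0                      # time scale and negative resting level
d = x[:, None] - x[None, :]
# Assumed interaction kernel: local excitation plus global inhibition.
w = 2.0 * np.exp(-d**2 / (2.0 * 3.0**2)) - 0.6

def sigmoid(u, beta=4.0):
    """Only sufficiently activated field sites contribute to interaction."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def step(u, inp):
    """One Euler step of tau * du/dt = -u + h + inp + w @ sigmoid(u)."""
    return u + (1.0 / tau) * (-u + h + inp + w @ sigmoid(u))

u = np.full(N, float(h))
cue = 6.0 * np.exp(-(x - 70) ** 2 / (2.0 * 3.0**2))  # transient localized input
for _ in range(200):                     # input present: a peak forms
    u = step(u, cue)
for _ in range(400):                     # input removed: the peak sustains itself
    u = step(u, np.zeros(N))
print("peak location:", x[np.argmax(u)], "| peak activation:", round(float(u.max()), 2))
```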

The hypothesis underlying the dynamic field account of the A-not-B effect stipulates that the field goes through such an instability, transforming itself from an input-driven system at younger ages to an interaction-driven system at older ages. According to this hypothesis, older infants do not make the A-not-B error, because the dynamic field representing planned reaching movements is capable of sustaining activation at the initially cued site and stabilizing this sustained activation in B trials against input from the memory trace of previous A trials.
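The same field sketch can be extended to a schematic B trial with the three inputs named above: constant task input at both lids, a memory trace at A, and a transient hiding cue at B (all forms and parameters are again illustrative assumptions, not the published model). Scaling the interaction strength then moves the field across the instability, reproducing the A-not-B error in the weakly interacting, ‘younger’ regime and the correct reach in the strongly interacting, ‘older’ regime.

```python
import numpy as np

N = 101
x = np.arange(N)
A_LOC, B_LOC = 30, 70                     # the two lid locations
tau, h = 10.0, -2.0

def gauss(center, amp, width=3.0):
    return amp * np.exp(-(x - center) ** 2 / (2.0 * width**2))

task = gauss(A_LOC, 1.0) + gauss(B_LOC, 1.0)   # constant input from the two lids
trace = gauss(A_LOC, 0.8)                      # memory trace of prior reaches to A
cue = gauss(B_LOC, 5.0)                        # transient hiding event at B

def b_trial(c_exc):
    """Simulate a B trial; c_exc scales the strength of field interaction."""
    d = x[:, None] - x[None, :]
    w = c_exc * np.exp(-d**2 / (2.0 * 3.0**2)) - 0.3 * c_exc  # assumed kernel
    sig = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))
    u = np.full(N, float(h))
    for t in range(600):
        inp = task + trace + (cue if t < 150 else 0.0)  # hiding event, then delay
        u = u + (1.0 / tau) * (-u + h + inp + w @ sig(u))
    return "B (correct)" if u[B_LOC] > u[A_LOC] else "A (the A-not-B error)"

print("weak interaction ('younger'):", b_trial(c_exc=0.2))
print("strong interaction ('older'): ", b_trial(c_exc=2.0))
```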

The hypothesis is supported by a wealth of detailed effects that can successfully be predicted or explained. For instance, the rate of spontaneous reaches to B during trials in which the toy was hidden at A is linked by the theory to the rate of A-not-B errors. Before the instability, both spontaneous and A-not-B errors are frequent while beyond the instability both are infrequent.

A number of different factors may put any given dynamic field on either side of the instability. Thus, whether an infant perseverates or not depends on the behavioral and stimulus context. The A-not-B error may be enhanced by providing more opportunity to reach to A first (building up a stronger memory trace there). It is reduced by spontaneous errors when infants reach to the B location on A trials. This happens because a memory trace is built up at the B location as well. Experiments in which the A and B locations are switched several times (maybe even dependent on the infants’ responses) potentially lead to memory traces at both locations reflecting each particular history of reaching, so that conclusions about the underlying representation become tenuous. The rate of A-not-B errors also depends on the perceptual layout (how visually distinct and symmetrical the two lids are), and on the reinforcement received from successful retrieval (e.g., lids that flash and make sounds when lifted up lead to a stronger memory trace than plain lids).

In these kinds of experiments, the locations to which reaches may be directed are always perceptually marked by the visible lids. In the theory, this is reflected by the fact that the perceptual layout pre-activates the field at these two locations. The underlying continuum of possible movements is thus not directly accessible to experimental observation. This is different in the sandbox version of the experiment, which reproduces the A-not-B experiment, except that the toy is hidden by burying it in the sand in one of two locations and then smoothing the sand over, so that no perceptual marker of the hiding location remains (Spencer, Smith, & Thelen, 2001). Toddlers retrieve the toy by digging for it after the imposed delay period. The location at which they begin to search is used to assess the movement plan. After a series of A trials, 2-year-olds show a clear pattern of attraction toward the A location when the toy is first hidden at the B location. Figure 6 illustrates how this attraction effect comes about in the dynamic field model. The peak induced when the toy is hidden at the B location drifts in the direction of the A location, attracted by activation there due to the memory trace. This drift is suppressed in the traditional A-not-B experiment by input at both locations from the perceptual layout. The dynamic field account of this continuous version of the A-not-B error leads to the prediction that the attraction should grow larger the more time passes between the induction of the peak and the execution of the movement.


Figure 6. The temporal evolution of an activation field representing reaching targets in the sandbox task, in which there is no permanent input from the perceptual layout. (A) Thus, on an A trial there is no perceptual pre-activation at location B, so that the peak induced at A is unperturbed. (B) On a B trial, a peak is induced at the B location when the toy is presented. Activation at location A induced by the memory trace of prior reaches begins to attract that peak once the toy is hidden, as there is no input at B that stabilizes the peak’s position.

Such enhanced attraction at longer delays was indeed found.

The dynamic field account of the A-not-B error operated with the underlying continuum of movement plans, graded patterns of activation, and their continuous evolution in time. The toy as an object did not actually play any particular role, other than perhaps modulating the effective strength of the specific input. In fact, Linda Smith and Esther Thelen have demonstrated that the A-not-B error can be observed when the toy is completely removed from the paradigm (Smith, Thelen, Titzer, & McLin, 1999). The hiding of the toy is replaced by a waving of a lid, to which attention is attracted until it is put down over the well. Thus, less embodied forms of cognition, such as representing the hidden toy as an object independently of the associated action plan, are not necessary to understand the error. We may be learning nothing about such forms of cognition from the A-not-B paradigm. Instead, the paradigm informs us about a simple form of embodied cognition, the maintenance of an intention to act that is stabilized against the tendency to repeat a habit. This cognitive ability emerges whenever activation is sufficient to launch neuronal interaction.

In terms of the dynamic field framework, what is it then that develops? Just as in the earlier approach to movement coordination, the answer is that it is the dynamical function, now of the field, that develops. Specifically, the regime of self-sustained activation is enlarged so that the induced activation can be stabilized against the memory trace of previous movements over a wider set of perceptual layouts, specific cues to the hiding location, distractor information, and delays. How this change of the dynamical function is propelled by the ongoing sensory and motor experience of the infant is not yet understood.

Relationships to similar theoretical perspectives

At a conceptual level, connectionist approaches to development overlap broadly with dynamical systems approaches. The notion of distributed representation shares the emphasis on the graded, sub-symbolic nature of representation. Both the notion of activation-carrying network nodes of connectionism and the notion of activation fields are compatible with basic concepts of neurophysiology. Many connectionist networks are, technically speaking, dynamical systems, so that activation patterns in the networks evolve gradually in time under the influence of input and interaction. While there are technical differences in how instabilities are used and analyzed, these are not fundamental and may vanish as both approaches develop.

Dynamical systems approaches have hardly addressed the actual mechanisms of learning, focusing as a first step on an assessment of what it is that evolves during learning. In contrast, the explicit modeling of learning mechanisms has been central to connectionist approaches. One important observation from this work is that characteristic signatures of learning may emerge from simple learning rules. For example, a fixed neuronal learning rule may lead to a time-varying rate at which new vocabulary is acquired (low rates initially, a maximal rate at intermediate levels of competence, with a return to low rates at relatively high levels of competence).

This form of emergence is analogous to the emergence of a particular attractor under appropriate conditions from the dynamical function characterizing a particular function in dynamical systems approaches. In such approaches, the states that emerge when perceptual or task conditions are changed are particular states of behavior or performance, and emergence comes from the dynamical function characterizing behavior. In contrast, in connectionism, the signatures that emerge are properties of the processes of learning, occurring on a longer time scale, while at any fixed time during the learning process, the system is typically characterized by its input-output function.

Linking these two complementary aspects of the two approaches is an obvious next step of scientific inquiry. Thus, dynamical systems approaches must be expanded to include dynamical accounts of the actual processes of learning. Connectionist models must be expanded to address dynamical properties of behavior at any given point during learning processes, including non-unique input-output relationships and the continuous evolution of activation on the fast time scale at which behavior is generated. First steps toward such a fusion of the approaches are now being made.

Perhaps because they were originally developed most strongly in the motor domain, dynamical systems approaches have provided accounts that link behavior quite closely to underlying sensorimotor processes, and thus to their neural and physical substrates. The dynamic field concept is an attempt to extend this thinking to the level of representations, again providing strong links to continuous sensory and motor surfaces. In contrast, connectionist approaches have been particularly strong in the domain of language, and thus were often constructed on the basis of relatively abstract levels of descriptions. Network nodes that represent letters, phonemes, keys to press, or even perceived objects are commonly used as input or output levels. This lack of a close link to actual sensory or motor surfaces weakens the gradualist, sub-symbolic stance of connectionism and gives some of the connectionist models the character of simplified, if exemplary, toy-like models. A second potential line of convergence could arise if connectionist models were scaled up to provide closer links to sensory and motor processes.

There are variants of dynamical systems approaches, represented by authors like Han van der Maas (catastrophe theory) and Paul van Geert (logistic growth models), that do not emphasize this link to sensory and motor processes as much (van Geert, 1998). These approaches are based on a theoretical stance somewhat similar to connectionist thinking. Their point of departure is the discovery of analogies between characteristic signatures of developmental processes such as stages, dependence on individual history, or dependence on context on the one hand, and properties of non-linear dynamical systems such as bifurcations, sensitive dependence on initial conditions, or the existence of structure on multiple scales on the other hand. These analogies are exploited at a relatively abstract level. There is less emphasis on a systematic approach toward identifying the state variables that support such processes, as well as the specific dynamical functions that characterize these processes. These forms of dynamical systems approaches are thus less directed toward maintaining a close link between behavior and motor and sensory processes.

Conclusions

Stabilization is necessary for any behavior to emerge, not only at the level of motor behavior, but also at the level of representation. Conversely, once stabilization mechanisms are in place, behavioral or representational states may emerge under appropriate conditions. Instabilities lead to change of state and are thus landmarks of qualitative shifts in behavior and cognitive capacity. Dynamical systems provide the theoretical language in which these properties of behavior can be understood.

Dynamical systems ideas are impacting on our scientific understanding of development in a variety of ways. The most important implication is, perhaps, that what develops is the dynamical function, from which the various observable behavioral states may emerge as attractors. Thus, appropriate landmarks of development are not these states as such, but rather the range of sensory, behavioral, or environmental contexts in which the states become stable. Development may lead to the stabilization of a particular behavioral or representational state. It may also, however, facilitate the suppression of particular states and the associated inputs through instabilities, leading to flexibility and the differentiation of the dynamical landscape of behavior.
