where p(θ) is a prior distribution. Then, when n is large enough, we have
\[
S_n(p)(\theta) \;\approx\; \frac{k}{2}\log\frac{n}{2\pi e} \;+\; \log\frac{|I(\theta)|^{1/2}}{p(\theta)},
\]
which is made independent of θ by taking \(p(\theta) = |I(\theta)|^{1/2} / \int |I(\theta')|^{1/2}\, d\theta'\). Hence the optimal prior is the Jeffreys prior, which is proportional to \(|I(\theta)|^{1/2}\).
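As a concrete illustration of this result (a standard example, not taken from the discussion above), consider independent Bernoulli(θ) observations. The Fisher information is \(I(\theta) = 1/[\theta(1-\theta)]\), so the Jeffreys prior is
\[
p(\theta) \;\propto\; |I(\theta)|^{1/2} \;=\; \frac{1}{\sqrt{\theta(1-\theta)}},
\qquad\text{i.e.,}\qquad
p(\theta) \;=\; \frac{1}{\pi\sqrt{\theta(1-\theta)}}, \quad 0<\theta<1,
\]
the Beta(1/2, 1/2) distribution, since \(\int_0^1 [\theta(1-\theta)]^{-1/2}\, d\theta = \pi\).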
1.5 Optimal control
Cortical activity related to simple motor movements may be easier to characterize than higher brain functions such as memory and attention [25, 39, 40, 58]. Understanding biological movement control also has great potential for applications in robot control, as reviewed in Chapter 17. Here we present some examples to illustrate optimal control theory and refer the reader to Chapter 17 for more details.
In theory, (stochastic) optimal control is a well-developed area, with wide and successful applications in finance. In general, finding an optimal control signal reduces to solving the Hamilton-Jacobi-Bellman (HJB) equation [52]. However, the HJB equation is usually difficult to solve, even numerically [43]. In the simplest case, i.e., when the control problem is an open-loop control problem, the solution can be obtained analytically (see [71] for some recent results with feedback control).
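One further special case worth noting is the linear-quadratic setting, where the HJB equation reduces to an algebraic Riccati equation that standard numerical libraries solve directly. The following is a minimal sketch of that reduction (our own illustration, not an example from the chapter); the system and cost matrices are arbitrary placeholders, and SciPy's solve_continuous_are routine is used for the Riccati step.

import numpy as np
from scipy.linalg import solve_continuous_are

# Linear dynamics dx = (A x + B u) dt with quadratic cost  integral of (x'Qx + u'Ru) dt.
# In this case the HJB equation reduces to the algebraic Riccati equation
#   A'P + P A - P B R^{-1} B'P + Q = 0,  with optimal feedback u = -R^{-1} B'P x.
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])   # placeholder system matrix
B = np.array([[0.0],
              [1.0]])          # placeholder input matrix
Q = np.eye(2)                  # state cost weight
R = np.array([[1.0]])          # control cost weight

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain
print("Optimal gain K =", K)

With this gain the optimal control is simply u = -K x, illustrating the kind of analytical tractability the linear-quadratic case affords.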
1.5.1 Optimal control of movement
The Model
We consider a simple model of saccadic movement. Let \(x_1(t)\) be the position of the eye (in degrees) and \(x_2(t)\) its velocity (in degrees per second) [24]. We then have
\[
\begin{aligned}
\dot{x}_1 &= x_2\\
\dot{x}_2 &= -\frac{1}{\tau_1\tau_2}\,x_1 - \frac{\tau_1+\tau_2}{\tau_1\tau_2}\,x_2 + \frac{1}{\tau_1\tau_2}\,u
\end{aligned}
\tag{1.16}
\]
where \(\tau_1, \tau_2\) are parameters and u is the input signal as defined below. However, we are more interested in general principles than in numerically fitting experimental data. From now on, we assume that all parameters are in arbitrary units, although a fit to biological data would be straightforward. In matrix form we have
\[
dX = A X\, dt + dU
\tag{1.17}
\]
where
\[
A = \begin{pmatrix} 0 & 1\\[6pt] -\dfrac{1}{\tau_1\tau_2} & -\dfrac{\tau_1+\tau_2}{\tau_1\tau_2} \end{pmatrix}.
\tag{1.18}
\]
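To see how the plant (1.16)-(1.18) responds to a given input, a simple forward-Euler simulation can be used. The sketch below is our own illustration (not code from the chapter); the time constants and the rectangular control pulse are placeholder values in arbitrary units, consistent with the remark above.

import numpy as np

# Forward-Euler integration of the saccade plant (1.16):
#   x1' = x2
#   x2' = [-x1 - (tau1 + tau2) * x2 + u] / (tau1 * tau2)
tau1, tau2 = 0.224, 0.013      # placeholder time constants (arbitrary units)
dt, T = 1e-4, 0.3              # time step and simulation horizon
steps = int(T / dt)

def u(t):
    # placeholder open-loop control: a brief rectangular pulse
    return 1.0 if t < 0.05 else 0.0

x1, x2 = 0.0, 0.0              # initial position and velocity
trajectory = []
for k in range(steps):
    t = k * dt
    dx1 = x2
    dx2 = (-x1 - (tau1 + tau2) * x2 + u(t)) / (tau1 * tau2)
    x1 += dt * dx1
    x2 += dt * dx2
    trajectory.append((t, x1, x2))

print("final position:", x1)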