Biomedical Engineering Reference
In-Depth Information
Substituting Equations (16.32) and (16.36) into (16.37), the signal ε(t) is transformed to

ε(t) = u_fb(t) + χ(t)^T ϕ_e(t) + k(t) e(t)        (16.39)
Using these signals, we then have the following two-D.O.F. adaptive control theorem:
Theorem: For the force-controlled object P(s) with its unknown inverse dynamics given as (16.24)-(16.26), if we adjust the parameter θ(t) of the feedforward controller (16.28)-(16.30) as

dθ(t)/dt = α ϕ(t) [ u_fb(t) + χ(t)^T ϕ_e(t) + k(t) e(t) ] = α ϕ(t) ε(t)        (16.40)

then the force control error e(t) → 0. In addition, if ϕ(t) satisfies the PE (persistent excitation) condition, i.e., the integral of ϕ(t) ϕ(t)^T over any sufficiently long time window is positive definite, then the feedforward controller Q(θ) described by (16.28)-(16.30) tends to P^{-1}(s).
The detailed proof of this theorem is given in Muramatsu and Watanabe (2004).
The adaptation law (16.40) can be interpreted as a combination of feedback error learning and learning control, since θ(t) is adjusted by both the feedback input u_fb(t) and the feedback error e(t). In addition, ϕ_e(t) is also generated from e(t).
Note that the convergence Q(θ) → P^{-1}(s) means that we can realize the time response f(t) exactly as the desired f_d(t) without any feedback-loop delay. However, in order to achieve this convergence, the desired f_d(t) must satisfy the PE condition during the adaptation process.
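As a rough numerical illustration of the idea behind the gradient law (16.40), the sketch below applies it to a deliberately simplified scalar plant y = b·u with unknown gain b. All gains, names, and the plant itself are illustrative assumptions, not the chapter's robot model: the feedback input u_fb acts as the learning signal, and with a persistently exciting (noise-like) desired signal the feedforward parameter converges to the exact inverse gain 1/b.

```python
import numpy as np

# Toy feedback-error-learning sketch (illustrative, not the chapter's scheme):
# plant y = b*u with unknown b; the ideal feedforward parameter is theta = 1/b.
b = 2.0                           # unknown plant gain; exact inverse is 0.5
alpha, kp, dt = 5.0, 4.0, 1e-3    # adaptation gain, feedback gain, step size
theta = 0.0                       # feedforward parameter to be adapted
rng = np.random.default_rng(0)

for _ in range(20000):
    r = rng.standard_normal()     # PE-rich (noise-like) desired signal
    phi = r                       # regressor of the feedforward controller
    # closed loop: y = b*(theta*phi + kp*e), e = r - y  =>  solve for e
    e = (r - b * theta * phi) / (1.0 + b * kp)
    u_fb = kp * e                 # feedback input, used as the learning signal
    theta += alpha * phi * u_fb * dt   # gradient adaptation, cf. (16.40)

print(round(theta, 4))            # -> 0.5, i.e. theta has converged to 1/b
```

Note that if the desired signal is constant rather than noise-like, the single regressor direction is still excited here; in the vector-parameter case of the chapter, richness of f_d(t) is what makes the PE condition hold.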
The convergence of the adaptation can be speeded up further by using the following modification instead of Equation (16.40):

dθ(t)/dt = Γ(t) ϕ(t) ε(t)   and   dΓ(t)/dt = −Γ(t) ϕ(t) ϕ(t)^T Γ(t)        (16.41)
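The pair (16.41) is a least-squares-type update: the gain matrix Γ shrinks along the directions that have already been excited, so early steps are large and later steps are small. The following sketch (illustrative values and a generic linear-regression "plant", not the chapter's robot model) integrates both equations with a simple Euler scheme.

```python
import numpy as np

# Minimal sketch of the least-squares adaptation (16.41); all values assumed.
theta_true = np.array([2.0, -1.0])   # "unknown" parameters to recover
theta = np.zeros(2)                  # adapted estimate
Gamma = 10.0 * np.eye(2)             # initial adaptation-gain matrix
dt = 1e-3
rng = np.random.default_rng(1)

for _ in range(20000):
    phi = rng.standard_normal(2)             # PE regressor (noise-like)
    eps = phi @ (theta_true - theta)         # signal playing the role of eps(t)
    theta += (Gamma @ phi) * eps * dt        # d(theta)/dt =  Gamma phi eps
    Gamma -= Gamma @ np.outer(phi, phi) @ Gamma * dt   # d(Gamma)/dt = -G phi phi^T G

print(np.round(theta, 2))            # close to theta_true
```

Because Γ decays like the inverse of the accumulated excitation, the estimate settles much faster than a fixed-gain gradient law with comparable initial step size, which is the convergence improvement claimed for (16.41).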
16.4.2.2 Application to a Robot's Force Tracking Control
To evaluate the effectiveness of the above two-D.O.F. adaptive tracking control, we performed computer simulations and robotic experiments.
In the simulations, as shown in Figure 16.13, we set the robot's parameters as m_r = 1, d_r = 2, and k_r = 0.5, and the unknown dynamic environmental parameters as m_e = 1, d_e = 2, and k_e = 2,
respectively, at the beginning of the simulation. The simulation results are given in Figure 16.14, where Figure 16.14(a) shows the result for the adaptation law (16.40), and (b) shows the faster convergence obtained with (16.41). We change the environmental viscosity from d_e = 2 to 0.5 at simulation time t = 250 s in Figure 16.14(a) and at t = 15 s in (b). In order for the feedforward
controller (Q(θ) in Figure 16.12) to converge to the inverse of the force-controlled object

P(s) = (m_e s^2 + d_e s + k_e) / [ (m_e + m_r) s^2 + (d_e + d_r) s + (k_e + k_r) ],

we set the desired force f_d(t) as noise during the first 100 s and from 250 s to 350 s in Figure 16.14(a), and during the first 4 s in (b).
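The transfer function above can be sanity-checked numerically with the quoted parameter values. The following sketch (plain NumPy; only the parameter values from the text are used) evaluates P(s) at low and high frequency, confirming the stiffness-ratio DC gain and the mass-ratio high-frequency gain.

```python
import numpy as np

# Parameters quoted in the simulation setup above.
m_r, d_r, k_r = 1.0, 2.0, 0.5    # robot
m_e, d_e, k_e = 1.0, 2.0, 2.0    # environment

num = [m_e, d_e, k_e]                      # m_e s^2 + d_e s + k_e
den = [m_e + m_r, d_e + d_r, k_e + k_r]    # (m_e+m_r) s^2 + (d_e+d_r) s + (k_e+k_r)

def P(s):
    """Evaluate the force-controlled object P(s) at a complex frequency s."""
    return np.polyval(num, s) / np.polyval(den, s)

print(P(0))          # DC gain k_e/(k_e + k_r) = 2/2.5 = 0.8
print(abs(P(1e6j)))  # high-frequency gain -> m_e/(m_e + m_r) = 0.5
```

Since both gains differ from 1, a pure feedback loop alone cannot reproduce f_d(t) without error; this is what the adaptive feedforward inverse Q(θ) → P^{-1}(s) compensates for.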
In both cases, the force tracking error converged so fast that it is hard to distinguish between the desired and reaction forces in these figures. Figure 16.14 also shows that the unknown parameters of the robot and the environment converge to their real values, which means that the feedforward compensation realizes the exact inverse of the force-controlled object P(s). Therefore, even for rectangular desired forces, the control system can exactly realize the desired force response.