position-based visual servoing (PBVS), see e.g. [30]. In IBVS, the feedback error
is defined in the image domain, in particular as the difference between the current
and the desired vectors of coordinates of the object features. In PBVS, the feedback
error is defined in the 3D space, in particular as the relative camera pose between
the current and the desired locations. Starting from these two methods, several
others have been derived, for example proposing the use of a feedback error defined
in both the image domain and the 3D space (see e.g. [25]), partition of the degrees
of freedom (see e.g. [16]), global motion planning via navigation functions (see
e.g. [17]), control invariant with respect to the intrinsic parameters (see e.g. [24,
28]), use of complex image features via image moments (see e.g. [29]), switching
strategies for ensuring the visibility constraint (see e.g. [9, 18]), generation of
circular-like trajectories for minimizing the trajectory length and achieving global
convergence (see e.g. [14]), and path-planning methods that take constraints into
account (see e.g. [26, 27, 1, 10, 6]).
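The distinction between the two feedback errors can be sketched numerically. The following is a minimal illustration, with made-up feature coordinates and camera positions (none of the values come from the chapter):

```python
import numpy as np

# IBVS: the feedback error lives in the image domain, as the difference
# between current and desired feature coordinate vectors (here, two point
# features with pixel coordinates stacked as [u1, v1, u2, v2]).
s_current = np.array([320.0, 240.0, 400.0, 260.0])
s_desired = np.array([310.0, 250.0, 395.0, 255.0])
e_ibvs = s_current - s_desired  # image-domain error (pixels)

# PBVS: the feedback error lives in 3D space, as the relative pose between
# the current and desired camera locations; shown here for the translational
# part only (meters), with the rotational part omitted for brevity.
t_current = np.array([0.1, 0.0, 1.0])
t_desired = np.array([0.0, 0.0, 1.2])
e_pbvs_t = t_current - t_desired  # 3D translational error

print(e_ibvs)
print(e_pbvs_t)
```

Both controllers drive their respective error to zero, but only the IBVS error is directly measurable from the image; the PBVS error requires pose reconstruction.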
In all these methods the goal condition is that the image features in the current
view match their corresponding ones in the desired view. However, this condition
can never be exactly fulfilled due to the presence of image noise, which introduces
unavoidable image measurement errors. Moreover, even when fulfilled, the condition
cannot guarantee that the robot end-effector has reached the sought desired location,
since the available measurements are corrupted. Hence, this problem is of
fundamental importance in visual servoing. See also [23, 22], where the effect of
image noise on the control laws of PBVS, 2 1/2 D visual servoing, and IBVS has
been investigated.
This chapter addresses the estimation of the worst-case robot positioning error
introduced by image measurement errors. Specifically, a strategy for computing an
estimate of the set of admissible values of this worst-case error is proposed. This
estimate is obtained by solving optimization problems over polynomials with linear
matrix inequalities (LMIs) and barrier functions, which provide upper and lower
bounds on the sought worst-case robot positioning error. These optimizations are
built by introducing suitable parametrizations of the camera frame and by adopting the square
matrix representation (SMR) of polynomials. Some examples with synthetic and
real data illustrate the application of the proposed strategy. This chapter extends our
previous results in [11, 15].
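To fix ideas about the SMR, recall that a polynomial p(x) of degree 2m can be written as p(x) = v(x)ᵀ P v(x), where v(x) collects the monomials of degree up to m and P is a symmetric matrix. The sketch below uses an illustrative univariate polynomial chosen for this purpose (it is not an example from the chapter):

```python
import numpy as np

# Square matrix representation (SMR, a.k.a. Gram matrix representation):
# p(x) = v(x)^T P v(x). Example polynomial: p(x) = x^4 + 2*x^2 + 1,
# with monomial vector v(x) = [1, x, x^2]^T.
def p(x):
    return x**4 + 2 * x**2 + 1

def v(x):
    return np.array([1.0, x, x**2])

# One admissible SMR matrix. The SMR is not unique: adding any symmetric L
# satisfying v(x)^T L v(x) = 0 for all x yields another representation of
# the same polynomial, which is what LMI methods exploit.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Verify v(x)^T P v(x) == p(x) at a few sample points.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert np.isclose(v(x) @ P @ v(x), p(x))

# This particular P is positive semidefinite, certifying that p is a sum of
# squares: indeed p(x) = (1 + x^2)^2.
print(np.all(np.linalg.eigvalsh(P) >= -1e-9))
```

Positivity conditions on polynomials can thus be relaxed to LMI feasibility tests on P, which is the mechanism underlying the bound computations of Section 9.3.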
The organization of the chapter is as follows. Section 9.2 introduces the problem
formulation and some preliminaries. Section 9.3 describes the computation of the
upper and the lower bounds. Section 9.4 presents some illustrative examples. Lastly,
Section 9.5 concludes the chapter with some final remarks.
9.2 Preliminaries
In this section we describe the notation adopted throughout the chapter, we provide
the formulation of the problem, and we briefly explain how polynomials can be
represented by using the SMR.