errors by providing an absolute orientation and position. Researchers have improved orientation estimates by merging accelerometer and gyroscope data, but did not test the system under translational motion [9]. Other research has shown that accelerometers and gyroscopes can be combined for accurate position and orientation tracking [26]. In addition, researchers have successfully used Kalman filters to merge accelerometer and gyroscope data [1, 27].
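The Kalman-filter approaches cited above are beyond the scope of a short listing, but the core idea of fusing a drift-prone gyroscope with a noisy but drift-free accelerometer can be sketched with a complementary filter, a simpler fixed-gain relative of those methods. The Python sketch below is a hypothetical illustration: the sample rate, noise levels, bias, and the alpha blending weight are assumptions, not values from the cited systems.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_samples, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a pitch estimate.

    gyro_rates:    (N,) pitch angular rates in rad/s
    accel_samples: (N, 3) accelerometer vectors in m/s^2
    dt:            sample period in seconds
    alpha:         weight on the integrated gyro path (smooth but drifting)
                   versus the accelerometer path (noisy but drift-free
                   when the sensor is not accelerating)
    """
    pitch = 0.0
    estimates = []
    for rate, (ax, ay, az) in zip(gyro_rates, accel_samples):
        # Integrate the gyro rate: accurate short-term, drifts long-term.
        gyro_pitch = pitch + rate * dt
        # Tilt from gravity: no drift, but corrupted by linear acceleration.
        accel_pitch = np.arctan2(-ax, np.hypot(ay, az))
        # Blend: high-pass the gyro path, low-pass the accelerometer path.
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
        estimates.append(pitch)
    return np.array(estimates)

# Synthetic example: a stationary sensor with gyro bias and accel noise.
rng = np.random.default_rng(0)
n, dt = 500, 0.01
gyro = np.full(n, 0.02) + rng.normal(0, 0.01, n)       # biased rate (rad/s)
accel = np.tile([0.0, 0.0, 9.81], (n, 1)) + rng.normal(0, 0.2, (n, 3))
est = complementary_filter(gyro, accel, dt)
print(f"final pitch estimate: {est[-1]:.4f} rad (true value 0)")
```

A Kalman filter generalizes this fixed blend by weighting each source according to its estimated uncertainty at every step, which is what makes the fusion in [1, 27] more robust under motion.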
16.2.2 PlayStation Move
The PlayStation Move system consists of a PlayStation Eye camera and one to four PlayStation Move motion controllers (see Fig. 16.3). The controller is held in one hand and has several buttons on the front and a long analog "T" button on the back. This hybrid device combines the advantages of camera tracking and motion sensing with traditional buttons, and it achieves better results than the Wiimote because of differences in its sensors.
Internally, it has several MEMS sensors similar to the Wiimote's, including a three-axis gyroscope and a three-axis accelerometer (see the previous subsection for details on these sensors). The distinctive feature of the PlayStation Move is the 44 mm diameter sphere on top that houses an RGB LED. The sphere's color can be changed dynamically to enhance interaction, but its primary purpose is to allow the PlayStation Eye to track the controller's 3D position. Because the sphere generates its own light, tracking works very well in a dark room and remains reliable even under non-optimal lighting. The spherical shape also makes the color tracking invariant to rotation, simplifying position recovery and improving precision. Deriving the PlayStation Move state involves two major steps: image analysis and sensor fusion. Though the exact details of these steps are beyond the scope of this chapter, the following overview provides a qualitative understanding of each step.
16.2.2.1 Image Analysis
Conceptually, the image analysis can be broken into two stages: finding the sphere in the image and then fitting a shape model to the sphere's projection. Color tracking is used to find the sphere and involves two steps: segmentation and pose recovery. Segmentation consists of labeling every video pixel that corresponds to the object being tracked. Pose recovery consists of converting the 2D image data into a 3D object pose (position and/or orientation). This can be accomplished robustly for certain shapes of known physical dimensions by measuring the statistical properties of the shape's 2D projection. The approximate size and location in the image are derived from the area and centroid of the segmented pixels (see Fig. 16.4).
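As a concrete illustration of these two steps, the following hypothetical Python sketch segments a synthetic frame by color tolerance, then recovers the image location from the centroid and a depth estimate from the area under a pinhole model. The focal length, color tolerance, and function names are assumptions for illustration; the real system's calibration and its ellipse-based refinement (described next) are more sophisticated.

```python
import numpy as np

# Hypothetical camera/sphere parameters (not from the chapter).
FOCAL_PX = 600.0         # focal length in pixels
SPHERE_DIAMETER = 0.044  # PlayStation Move sphere: 44 mm

def segment_and_locate(frame_rgb, target_rgb, tol=40):
    """Label pixels near the sphere color, then recover an approximate pose.

    frame_rgb:  (H, W, 3) uint8 image
    target_rgb: expected LED color (the Move can set this dynamically)
    Returns (centroid_x, centroid_y, estimated_depth_m) or None.
    """
    # Segmentation: mark every pixel within a color tolerance of the target.
    diff = frame_rgb.astype(np.int32) - np.asarray(target_rgb, np.int32)
    mask = (np.abs(diff) < tol).all(axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    # Centroid of the labeled pixels gives the sphere's image location.
    cx, cy = xs.mean(), ys.mean()
    # Area gives apparent size; treat the projection as a circle of equal
    # area to get a diameter in pixels (the ellipse refinement comes later).
    diameter_px = 2.0 * np.sqrt(xs.size / np.pi)
    # Pinhole model: depth = f * physical_diameter / projected_diameter.
    depth_m = FOCAL_PX * SPHERE_DIAMETER / diameter_px
    return cx, cy, depth_m

# Synthetic frame: a magenta disc on a dark background.
frame = np.zeros((480, 640, 3), np.uint8)
yy, xx = np.mgrid[0:480, 0:640]
frame[(xx - 320) ** 2 + (yy - 240) ** 2 < 30 ** 2] = (255, 0, 255)
print(segment_and_locate(frame, (255, 0, 255)))
```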
It is well known that the 2D perspective projection of a sphere is an ellipse [21], though many tracking systems introduce significant error by approximating the projection as a circle. In theory, fitting such a model to the image data is straightforward.