that, we need to calculate the bed's normal vector first. In order to rotate the camera view to the top of the bed, three points and one rotation-center point need to be specified manually. These 2D points are first projected to 3D points in the real world; after taking the three 3D points on the bed and using the cross product, the system obtains the normal vector of the bed. We then calculate the rotation matrix for the bed's normal vector. Once we have the rotation matrix, we project all 2D points back to the 3D point cloud, apply the rotation, and project the result back to a 2D depth image. However, some information is lost after rotating the camera view, so a median filter is used to fill the empty holes. Fig. 2b shows the original depth image, and Fig. 2c shows the depth image after view transformation.
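As a rough illustration of this view-transformation step, the sketch below (Python with NumPy and OpenCV) estimates the bed normal from three manually picked pixels, builds a rotation that aligns that normal with the camera's z-axis, re-projects the rotated point cloud to a depth image, and fills holes with a median filter. The pinhole model, the intrinsics fx, fy, cx, cy, and the 5x5 hole-filling kernel are assumptions made for the sketch, not details taken from the paper.

import numpy as np
import cv2

def deproject(depth, u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) and its depth value into a 3D camera-space point."""
    z = float(depth[v, u])
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def bed_normal(p1, p2, p3):
    """Normal vector of the bed plane from three 3D points via the cross product."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def rotation_to_z(normal):
    """Rotation matrix that aligns the bed normal with the camera z-axis (Rodrigues formula)."""
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(normal, z)
    s = np.linalg.norm(axis)
    c = float(np.dot(normal, z))
    if s < 1e-8:                                   # already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    axis /= s
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def transform_view(depth, picks, fx, fy, cx, cy):
    """Rotate the point cloud so the camera looks straight down at the bed,
    re-project it to a depth image, and fill the empty holes with a median filter."""
    h, w = depth.shape
    p1, p2, p3 = (deproject(depth, u, v, fx, fy, cx, cy) for (u, v) in picks)
    R = rotation_to_z(bed_normal(p1, p2, p3))

    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64).ravel()
    valid = z > 0                                  # ignore pixels with missing depth
    pts = np.stack([(us.ravel() - cx) * z / fx, (vs.ravel() - cy) * z / fy, z], axis=1)
    rot = pts[valid] @ R.T

    # Re-project to pixel coordinates; keep the nearest surface at each pixel.
    u2 = np.clip(np.round(rot[:, 0] * fx / rot[:, 2] + cx).astype(int), 0, w - 1)
    v2 = np.clip(np.round(rot[:, 1] * fy / rot[:, 2] + cy).astype(int), 0, h - 1)
    out = np.full((h, w), np.inf)
    np.minimum.at(out, (v2, u2), rot[:, 2])
    out[np.isinf(out)] = 0.0
    return cv2.medianBlur(out.astype(np.float32), 5)   # fill empty holes left by rotation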
Cross-Section. We generate several binary images by setting different thresholds, starting from the shallowest point of the depth image down to the depth of the bed; a cross-section is taken every 2 cm from top to bottom. Generally, the distance between the highest point of the human body and the bed is around 18~28 cm, so there are about 9~13 transverse sections of the person from top to bottom. Fig. 1 shows ten cross-sections (red lines) from the highest point (red point) down to the bed.
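A minimal sketch of this slicing step, assuming a top-down depth image measured in centimeters with 0 marking missing data; the function name and arguments are illustrative.

import numpy as np

def cross_sections(depth_cm, bed_depth_cm, step_cm=2.0):
    """Slice the scene into binary cross-sections every 2 cm, from the shallowest
    (highest) point of the depth image down to the depth of the bed.

    depth_cm     -- top-down depth image in centimeters (0 = missing data)
    bed_depth_cm -- depth value of the bed surface
    """
    valid = depth_cm > 0
    top = depth_cm[valid].min()          # shallowest point, i.e. highest point of the body
    sections = []
    level = top + step_cm
    while level <= bed_depth_cm:
        # Everything closer to the camera than this level belongs to the slice.
        sections.append((valid & (depth_cm <= level)).astype(np.uint8))
        level += step_cm
    return sections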
Head and Torso Detection. Using connected-component analysis, the components in each cross-section can be extracted. The idea of this method is to find spheres across the cross-sections: when a circle grows larger from the top section to the bottom section, we assume there might be a sphere there. However, the algorithm may find several such spheres. To decide which sphere has the highest probability, we collect each circle's contribution from every section; more circles at the same location mean a higher probability that a sphere is there. If the sphere candidates lie at n different locations, the probability that there is a sphere at location l is:
P(l) = #_l / Σ_{i=1}^{n} #_{l_i}    (1)

where #_l denotes the number of circles found at location l over all cross-sections.
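As an illustration of how the voting in Eq. (1) could be realized, the sketch below extracts circle centroids from every cross-section with OpenCV's connected-component analysis and bins nearby centroids together. The bin size tol_px, the minimum component area, and the omission of the grow-from-top-to-bottom check are simplifying assumptions.

import numpy as np
import cv2

def circle_centers(section):
    """Centroids of the connected components in one binary cross-section."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(section, connectivity=8)
    # Skip label 0 (background) and tiny components that are likely noise.
    return [tuple(centroids[i]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]

def sphere_probabilities(sections, tol_px=10):
    """Vote circles from every section into location bins; following Eq. (1), the
    probability of a sphere at a location is its circle count over the total count."""
    votes = {}
    for section in sections:
        for (cx, cy) in circle_centers(section):
            key = (round(cx / tol_px), round(cy / tol_px))   # quantize nearby centers together
            votes[key] = votes.get(key, 0) + 1
    total = sum(votes.values())
    return {k: v / total for k, v in votes.items()} if total else {}

def most_likely_head(sections, tol_px=10):
    """Location bin with the highest sphere probability (the head candidate)."""
    probs = sphere_probabilities(sections, tol_px)
    return max(probs, key=probs.get) if probs else None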
In addition to detecting the head from a single depth image, we leverage the video sequence. Hence, we push every head location found in each frame into a queue and then use the same voting idea to re-locate the head-like sphere with the highest probability. This avoids occasional misleading detection failures. Once the head is detected, the next step is to detect the torso's ROI (region of interest). We adopt almost the same approach as for head detection, but this time we track cuboids rather than spheres. However, the pillow might be recognized as a torso; therefore, we reject cuboids that have a head on them. Fig. 2 shows the processing procedure of head and torso detection in this system.
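One possible way to carry the per-frame head locations in a queue and re-vote over the video sequence is sketched below; the queue length of 30 frames and the HeadTracker name are illustrative choices, not taken from the paper.

from collections import Counter, deque

class HeadTracker:
    """Keep the head locations found in recent frames and re-vote to pick the most
    frequent one, which smooths over occasional per-frame detection failures."""

    def __init__(self, maxlen=30):        # queue length is an assumption
        self.history = deque(maxlen=maxlen)

    def update(self, head_location):
        if head_location is not None:
            self.history.append(head_location)
        if not self.history:
            return None
        # Same voting idea as Eq. (1), but over frames instead of cross-sections.
        return Counter(self.history).most_common(1)[0][0]

# Usage per frame: head = HeadTracker().update(most_likely_head(sections))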
Breath Measurement. Once the head and torso are detected, the breathing signal can be extracted from the torso ROI. While the user is inhaling, the chest wall expands and the average depth value of the torso ROI decreases; conversely, the average depth value of the torso ROI increases while the user is exhaling. Therefore, the sequence of average depth values of the torso ROI is taken as the breathing signal, under the premise that the user is sleeping. For breath measurement, a turning-point detection algorithm is proposed. First, a mean filter is used to reduce the noise caused by sensing deviation and body movements. Then, the turning points of the breathing signal are detected using the second-derivative method. Finally, in order to eliminate redundant turning points, a dynamic threshold is applied to
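A rough sketch of this measurement pipeline, assuming the per-frame torso ROIs are available as NumPy arrays: it averages the torso depth per frame, smooths the sequence with a mean filter, and marks turning points from derivative sign changes, using the second derivative to label peaks and valleys. The window size is an assumption, and the dynamic-threshold pruning is not shown here.

import numpy as np

def breathing_signal(torso_roi_frames):
    """Mean depth of the torso ROI per frame; zero pixels (missing depth) are ignored.
    The resulting sequence rises and falls with exhaling and inhaling."""
    return np.array([roi[roi > 0].mean() for roi in torso_roi_frames])

def turning_points(signal, win=5):
    """Smooth with a mean filter, then detect turning points where the first derivative
    changes sign, classified as peak or valley by the sign of the second derivative."""
    kernel = np.ones(win) / win
    smooth = np.convolve(signal, kernel, mode='same')   # mean filter against noise and movement
    d1 = np.diff(smooth)
    d2 = np.diff(smooth, n=2)
    idx = np.where(np.diff(np.sign(d1)) != 0)[0] + 1    # slope changes sign at these samples
    kinds = ['peak' if d2[i - 1] < 0 else 'valley' for i in idx]
    return list(zip(idx, kinds)), smooth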