live-wire algorithm. Our approach is in fact complementary to these works, since we
concentrate on segmentation within a single volume slice, whereas they emphasize
interpolating between slices.
ITK-SNAP [15] provides tools for performing segmentation manually and for initializing
and viewing the results of active contours. The application is excellent for semi-automatic
segmentation. In contrast, our approach tightly couples user input with
the automatic process throughout the segmentation. In [16], the authors implement a
GPU-based level-set solver to make interaction real time. However, they focus on
interactively modifying the parameters of the energy functional, which is likely to
be insufficient for reaching the correct local minimum.
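To make the role of initialization concrete, the following is a minimal Chan-Vese-style level-set sketch in Python. It is an illustration only, not the solver of [16]: the function name, the toy image, and all parameters are assumptions. The point it demonstrates is that the user's rough initialization `phi` determines which local minimum the evolution settles into, which is why a close initialization matters more than parameter tuning.

```python
import numpy as np

def evolve_level_set(image, phi, n_iters=200, dt=0.5, lam=1.0):
    """Minimal Chan-Vese-style level-set evolution (a sketch only).

    phi < 0 marks the inside of the contour.  The region force pulls
    each pixel toward whichever of the two region means it matches,
    so the evolution descends to the local minimum nearest the
    user-supplied initialization."""
    for _ in range(n_iters):
        inside = phi < 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[~inside].mean() if (~inside).any() else 0.0
        # Pixels closer to the inside mean get a positive force, which
        # lowers phi there and grows the segmented region.
        force = lam * ((image - c_out) ** 2 - (image - c_in) ** 2)
        phi = phi - dt * force / (np.abs(force).max() + 1e-8)
    return phi

# Toy example: a bright square, initialized with a rough circle inside it.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
yy, xx = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((yy - 30.0) ** 2 + (xx - 30.0) ** 2) - 8.0  # close initialization
mask = evolve_level_set(img, phi0) < 0  # grows outward to cover the square
```

Because the circle starts inside the bright square, the evolution expands to the square's boundary; an initialization placed over a different structure would converge to that structure instead, regardless of the energy parameters.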
We also timed ourselves using several interactive methods: segmentation of
5 MRI cases was performed and we report average times. Each scan had approximately
200 slices, of which 150 contained bone. Manual segmentation took 90 min per case,
about 35 s per slice. With the live-wire algorithm, one slice took 20 s, or
50 min per case. With the HSC Segmentor, an MRI case was completed in 10 min,
about 6.5 s per slice, and the segmentation is of manual quality.
4 Conclusion
In this work, we phrase interactive segmentation as an HSC task: there is a tight
coupling between user input and automatic segmentation. Consequently, the strengths
of both manual and automatic segmentation are leveraged. The user acts as a supervisor,
providing high-level guidance in the form of a close initialization to an automatic
algorithm; subsequent user input is occasional and local, steering the iterations of an
active contour segmentor.
The work presented in this note is clinically relevant and is being used for a
population study of human skeletal growth. Results of the approach show that a
segmentation qualitatively comparable to manual segmentation can
be achieved in less than a fifth of the time. Future work includes developing
visualization strategies and incorporating shape-prior knowledge to use human input
more efficiently.
Acknowledgements This work was supported in part by grants from AFOSR, ARO, ONR, and
MDA. This work is part of the National Alliance for Medical Image Computing (NA-MIC),
funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant
U54 EB005149. Information on the National Centers for Biomedical Computing can be obtained
from http://nihroadmap.nih.gov/bioinformatics. Finally, this project was supported by grants from
the National Center for Research Resources (P41-RR-013218) and the National Institute of
Biomedical Imaging and Bioengineering (P41-EB-015902) of the National Institutes of Health.