13. RESOURCES
and can be visualized with the new method and exported in different formats (PDF, CSV, etc.).
Finally, a benchmark for co-segmentation, the so-called Shape COSEG Dataset [182], is available at http://web.siat.ac.cn/~yunhai/ssl/ssd.htm (see Figure 13.1). The goal of this work is to provide data for the quantitative analysis of how people consistently segment a set of shapes and for the evaluation of the authors' active co-analysis algorithm. The dataset is a collection of eleven classes of shapes, each with a consistent ground-truth segmentation and labeling. Seven of these sets come from the dataset of Sidi et al. [2011], while a small but challenging class of iron models was created specifically for this benchmark. In addition, to support the labeling of large sets, three larger sets have been added: tele-aliens, vases, and chairs, consisting of 200, 300, and 400 shapes, respectively.
Figure 13.1: The COSEG main page.
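Segmentation benchmarks of this kind are commonly distributed as one ASCII OFF mesh per shape together with a plain-text file containing one integer segment label per face. The following Python sketch shows how such a mesh/label pair might be loaded and sanity-checked before a quantitative evaluation; the file names (shape.off, shape.seg) and the exact layout are assumptions for illustration, not details stated above.

# Minimal sketch: load an ASCII OFF mesh and a per-face ground-truth label
# file, as commonly distributed by segmentation benchmarks such as COSEG.
# Assumes a clean OFF file without comment lines; file names are hypothetical.

def load_off(path):
    """Read vertices and faces from an ASCII OFF file."""
    with open(path) as f:
        tokens = f.read().split()
    assert tokens[0] == "OFF", "not an ASCII OFF file"
    n_verts, n_faces = int(tokens[1]), int(tokens[2])
    idx = 4  # skip header tokens: OFF, n_verts, n_faces, n_edges
    verts = []
    for _ in range(n_verts):
        verts.append(tuple(float(t) for t in tokens[idx:idx + 3]))
        idx += 3
    faces = []
    for _ in range(n_faces):
        k = int(tokens[idx])  # number of vertices in this face
        faces.append(tuple(int(t) for t in tokens[idx + 1:idx + 1 + k]))
        idx += 1 + k
    return verts, faces

def load_face_labels(path):
    """Read one integer segment label per face, one label per line."""
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

if __name__ == "__main__":
    verts, faces = load_off("shape.off")      # hypothetical file names
    labels = load_face_labels("shape.seg")
    assert len(labels) == len(faces), "expect one ground-truth label per face"
    print(f"{len(verts)} vertices, {len(faces)} faces, "
          f"{len(set(labels))} ground-truth segments")

With the faces and their labels in hand, a predicted segmentation can be compared against the ground truth face by face, which is the basis of the quantitative analyses these datasets are meant to support.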
In addition to the benchmarks described here, two further resources deserve mention: the Princeton Shape Benchmark (http://shape.cs.princeton.edu/benchmark/) and the McGill 3D Shape Benchmark (http://www.cim.mcgill.ca/~shape/benchMark/).