13. RESOURCES
The Shape repository of the Visualization Virtual Services (VVs) "VISIONAIR": http://visionair.ge.imati.cnr.it/ontologies/shapes/, which also offers services for preparing digital shapes for visualization purposes, as well as shape search facilities.
13.2.2 BENCHMARKS AND CONTESTS
Enabling the reproduction and evaluation of algorithms is an important issue in computer science. Several benchmarking initiatives exist in the multimedia domain, such as TRECVID [153, 184, 185] and IMAGECLEF [148]. The computer graphics community recognized the opportunity in benchmarking as a means to ensure reproducibility of results: indeed, with the ever-growing number of techniques proposed, it is imperative to provide users with tools to decide which solution is best suited to the application at hand. The creation of standard datasets with a ground truth and a "quality label" paves the road to a fair evaluation of the existing technologies, as well as to the identification of new directions of research. Therefore, benchmarks have been proposed, for example, for surface reconstruction from point clouds [16], 3D retrieval (SHREC [205]), and keypoint detection [200].
In 2011, a world-leading scientific publisher, Elsevier, launched the Executable Papers Grand Challenge, pushing the idea of benchmarking towards new forms of publication. The goal was to address the question of how to reproduce computational results within the confines of the research article. The winner, the Collage Authoring Environment, launched a pilot special issue on 3D Object Retrieval with the journal Computers & Graphics, showcasing executable research results in articles [4]. In an executable paper, authors can embed chunks of executable code and data into their papers, and readers may execute that code within the framework of the research article.
In the following, we focus on benchmarking initiatives addressing two topics, namely shape retrieval and segmentation, which are highly relevant to shape analysis and provide a substantial number of interesting 3D shapes for experimenting with shape analysis techniques.
Retrieval Among others, the Shape Retrieval Contest (SHREC) (http://www.aimatshape.net/event/SHREC/) was proposed with the general objective of evaluating the performance of 3D shape matching and retrieval algorithms [205]. The initial results of the contest provided the first opportunity to analyze a number of state-of-the-art algorithms, their strengths, as well as their weaknesses, using a common test collection that allows for a direct comparison of algorithms. SHREC provides many resources to compare and evaluate 3D retrieval methods, in particular ground truths and statistical measures, such as precision-recall or first and second tier. Started in 2006, SHREC is still ongoing. Indeed, a single test collection necessarily delivers only a partial view of the whole picture, hence the contest quickly moved towards a multi-track organization, with specific tracks for individual aspects of the problem (e.g., global and partial matching) [19, 36, 38, 42], with a distinction among the query representations (e.g., polygon soups, sketches, and watertight models), as well as a number of context-specific benchmarks, for example for mechanical part matching, molecule matching, or 3D face matching.
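To make these evaluation measures concrete, the following is a minimal Python sketch, using a hypothetical ranked result list with made-up model identifiers, of precision and recall at rank k and of the first-tier/second-tier scores. The tier definition used here (top |C| and top 2|C| results, where |C| is the number of models relevant to the query) is one common convention; some benchmarks exclude the query itself and use |C|-1 instead.

# Minimal sketch of the retrieval measures mentioned above (hypothetical data).

def precision_recall_at_k(ranked_ids, relevant_ids, k):
    # Precision and recall computed over the top-k retrieved shapes.
    hits = len(set(ranked_ids[:k]) & relevant_ids)
    return hits / k, hits / len(relevant_ids)

def tier_scores(ranked_ids, relevant_ids):
    # First tier: fraction of the relevant shapes found among the top |C|
    # results, where |C| is the number of relevant shapes; second tier uses
    # the top 2|C| results. (Some benchmarks use |C|-1 to exclude the query.)
    c = len(relevant_ids)
    first = len(set(ranked_ids[:c]) & relevant_ids) / c
    second = len(set(ranked_ids[:2 * c]) & relevant_ids) / c
    return first, second

# Hypothetical ranked list returned for one query; three models are relevant.
ranking = ["m7", "m2", "m9", "m4", "m1", "m5", "m3", "m8"]
relevant = {"m2", "m4", "m8"}
print(precision_recall_at_k(ranking, relevant, k=3))  # (0.333..., 0.333...)
print(tier_scores(ranking, relevant))                 # (0.333..., 0.666...)

Sweeping k from 1 to the size of the collection and plotting the resulting (recall, precision) pairs yields the precision-recall curve typically reported in retrieval benchmarks such as SHREC.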
As an example, five tracks were organized in 2014.