TABLE 4.1: Programs Used for Our Experiments with Number of Produced Runtime Events, Extracted Collaborations (Measured after Applying Collaboration Transformers), and Inferred FSMs

Program     Classes loaded   Runtime events   Collaborations   FSMs
antlr              126          9,359,100          1,172         38
chart              219         28,875,810            490         10
eclipse            795            499,296          7,388        133
fop                231          6,813,674          9,945         54
hsqldb             131         18,618,350          4,164          8
jython             251         54,222,673         59,371         75
luindex            128         19,358,160             69         10
lusearch           118         83,888,242             32          3
pmd                325            295,748            144         20
xalan              244         57,340,996          9,645         56
Sum                           279,272,049         92,420        407
Number and size of inferred protocols. How many protocols does the
analysis infer and how large are they? Producing a reasonable number
of protocols of manageable size is desirable, for instance, to use them as
API usage documentation.
Influence of coverage. How much does the coverage of the API by the
method traces influence the results? Answering this question helps to
decide when it is worthwhile to gather and analyze more traces.
Quality of inferred protocols. Do the inferred protocols show typical API
usage scenarios? This question is important since specification miners
risk producing incidental call sequences that are not representative.
Performance and scalability. How does the analysis perform on large
method traces? Real-world programs produce millions of runtime events,
and only a scalable analysis can process them in reasonable time.
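The kind of protocol mining these questions evaluate can be illustrated with a minimal sketch. The function below builds a simple automaton from method-call traces: each state corresponds to the last observed call, and transitions follow consecutive calls in a trace. The trace data and method names are hypothetical; the actual analysis operates on recorded runtime events and applies collaboration transformers first.

```python
from collections import defaultdict

def mine_fsm(traces):
    """Build a simple protocol automaton from call traces: states are
    the last observed method (plus START), and a transition prev -> call
    is recorded for every pair of consecutive calls in a trace."""
    transitions = defaultdict(set)
    for trace in traces:
        prev = "START"
        for call in trace:
            transitions[prev].add(call)
            prev = call
    return dict(transitions)

# Two hypothetical usage traces of an iterator-like API.
traces = [
    ["init", "hasNext", "next", "hasNext", "close"],
    ["init", "hasNext", "close"],
]
fsm = mine_fsm(traces)
# After "hasNext", the mined protocol allows "next" or "close".
print(sorted(fsm["hasNext"]))  # → ['close', 'next']
```

Even this crude miner shows the risk raised above: a single incidental call order in a trace immediately becomes a transition in the inferred protocol, which is why representativeness of the traces matters.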
4.4.1 Experimental Setup
For our experiments, we use the DaCapo benchmark suite [8]. It contains
real-world Java programs from different application domains and provides
input for each program so that it can be run in a controlled and
reproducible manner. The first three columns of Table 4.1 show the
analyzed programs, how many classes were loaded during their execution,
and how many runtime events we analyzed.
All experiments are done on a 3.16 GHz Intel Core 2 Duo machine with