To generate test cases for routines in specified classes, AutoTest repeatedly performs
the following three steps:
Select routine. AutoTest keeps track of how many times each routine has been tested and randomly selects one of the least tested routines as the next routine under test, so that routines are exercised fairly.
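As a rough illustration (AutoTest itself targets Eiffel; the names below are hypothetical), the selection policy can be sketched in Python as follows:

```python
import random
from collections import defaultdict

class RoutineSelector:
    """Picks the next routine to test, favoring the least tested ones."""

    def __init__(self, routines):
        self.routines = list(routines)
        self.test_counts = defaultdict(int)   # routine -> number of times tested so far

    def next_routine(self):
        # Pick uniformly at random among the routines with the lowest test count.
        least = min(self.test_counts[r] for r in self.routines)
        candidates = [r for r in self.routines if self.test_counts[r] == least]
        chosen = random.choice(candidates)
        self.test_counts[chosen] += 1
        return chosen
```

Since all counts start at zero, early iterations of this sketch cycle through every routine before any routine is selected a second time.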
Prepare objects. To prepare objects needed for calling the selected routine, AutoTest
distinguishes two cases: basic types and reference types.
For each basic type such as INTEGER, DOUBLE, and BOOLEAN, AutoTest maintains a predefined value set. For example, for INTEGER the predefined value set is {0, ±1, ±2, ±10, ±100, maximum and minimum integers}. It then chooses at random either to pick a predefined value or to generate one at random.
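A minimal Python sketch of this choice is given below; the 50% split between predefined and random values, the 32-bit integer bounds, and the DOUBLE and BOOLEAN sets are assumptions made for illustration, since the text only specifies the INTEGER set:

```python
import random
import sys

# Predefined value sets; only the INTEGER entries follow the description above.
PREDEFINED = {
    "INTEGER": [0, 1, -1, 2, -2, 10, -10, 100, -100,
                2**31 - 1, -2**31],                    # assumed 32-bit maximum and minimum
    "DOUBLE":  [0.0, 1.0, -1.0, sys.float_info.max, -sys.float_info.max],
    "BOOLEAN": [True, False],
}

def basic_value(type_name):
    """Return a value of a basic type: either a predefined or a random one."""
    if random.random() < 0.5:                          # assumed probability
        return random.choice(PREDEFINED[type_name])
    if type_name == "INTEGER":
        return random.randint(-2**31, 2**31 - 1)
    if type_name == "DOUBLE":
        return random.uniform(-1e9, 1e9)
    return random.choice([True, False])
```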
AutoTest also maintains an object pool with instances created for all types. When selecting a value of a reference type, it either tries to create a new instance of a conforming type by calling a constructor at random, or it retrieves a conforming value from the object pool. This allows AutoTest to reuse old objects on which many routines may already have been called, resulting in states that would otherwise be unreachable.
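The object pool and the choice between creating and reusing instances could look roughly as follows; the reuse probability and the constructors mapping (type to list of argument-free factories) are assumptions made for the sketch:

```python
import random

class ObjectPool:
    """Stores all instances created so far, so that later calls can reuse
    'old' objects whose state is the result of many previous routine calls."""

    def __init__(self):
        self.instances = []

    def add(self, obj):
        self.instances.append(obj)

    def conforming(self, required_type):
        return [o for o in self.instances if isinstance(o, required_type)]

def reference_value(pool, required_type, constructors):
    """Either reuse a conforming object from the pool or create a fresh one
    by calling a randomly chosen constructor of a conforming type."""
    reusable = pool.conforming(required_type)
    if reusable and random.random() < 0.5:    # assumed reuse probability
        return random.choice(reusable)
    ctor = random.choice(constructors[required_type])
    obj = ctor()                              # constructor arguments omitted for brevity
    pool.add(obj)
    return obj
```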
Invoke routine under test. Finally, the routine under test is called on the selected target object with the selected arguments. The result of the execution, any exceptions raised, and the branch coverage information are recorded for later use.
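The invocation step can be sketched as follows; here a Python exception stands in for an Eiffel contract violation, and branch coverage is only a placeholder, since collecting it requires instrumentation not shown here:

```python
import traceback

def invoke(routine, target, args):
    """Call the routine under test and record the outcome for later analysis."""
    record = {"routine": routine.__name__, "result": None,
              "exception": None, "coverage": None}
    try:
        record["result"] = routine(target, *args)
    except Exception:
        # In AutoTest, a raised exception may indicate a contract violation, i.e. a fault.
        record["exception"] = traceback.format_exc()
    # Branch coverage would be filled in by an instrumentation/coverage layer.
    return record
```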
2.3 Experiment Setup
Class selection. We chose the classes under test from version 5.6 of the EiffelBase library [1]. EiffelBase is production code that provides basic data structures and I/O facilities, and it is used in almost every Eiffel program. The quality of its contracts should therefore be better than that of the average Eiffel library. This is an important point, because we assume the contracts to be correct. To make the test subjects more representative, we tried to pick classes with varied code structure and intended semantics. Table 1 shows the main metrics for the chosen classes. Note that the branches shown in
Table 1. Metrics for tested classes

Class                       LOC  Routines  Contract assertions  Faults  Branches  Branch coverage
ACTIVE LIST                2433       157                  261      16       222              92%
ARRAY                      1263        92                  131      23       118              98%
ARRAYED LIST               2251       148                  255      22       219              94%
ARRAYED SET                2603       161                  297      20       189              96%
ARRAYED STACK              2362       152                  264      10       113              96%
BINARY SEARCH TREE         2019       137                  143      42       296              83%
BINARY SEARCH TREE SET     1367        89                  119      10       123              92%
BINARY TREE                1546       114                  127      47       240              85%
FIXED LIST                 1924       133                  204      23       146              90%
HASH TABLE                 1824       137                  177      22       177              95%
HEAP PRIORITY QUEUE        1536       103                  146      10       133              96%
LINKED CIRCULAR            1928       136                  184      37       190              92%
LINKED LIST                1953       115                  180      12       238              92%
PART SORTED TWO WAY LIST   2293       129                  205      34       248              94%
Average                    1950       129                  192      23       189              93%
Total                     27302      1803                 2693     328      2652              93%