describing naming conventions [21] to define the corresponding automaton as described in Definition 3. Finally, we used the 1654 Enterprise Services as input to the algorithm described in the previous section. The resulting set of detected concepts has been stored as annotations in the form of RDF triples referencing the original Enterprise Service. These annotations form the basis of the analysis below.
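As a hedged illustration, the following Python sketch shows how such concept annotations could be materialized as RDF triples using rdflib; the namespaces, the service URI, and the hasConcept property are our assumptions, since the paper does not name the vocabulary it uses.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Hypothetical namespaces; the paper does not specify the actual vocabulary.
ES = Namespace("http://example.org/enterprise-services/")
ANN = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("es", ES)
g.bind("ann", ANN)

# One triple per detected concept, referencing the original Enterprise Service.
service = URIRef(ES["SalesOrderCreateRequest_In"])  # made-up service name
for concept in ["SalesOrder", "Create", "Request"]:
    g.add((service, ANN.hasConcept, Literal(concept)))

print(g.serialize(format="turtle"))
```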
5.2 Annotation Completeness
The annotation completeness represents the number of Enterprise Services that have been partially or fully annotated. To measure it, we calculated the expected maximal number of annotations for each Enterprise Service operation by exploiting its Camel Case notation. We define annotation accuracy as the ratio of the actual number of generated annotations to this expected number. To determine the annotation completeness, we only considered Enterprise Services with an accuracy greater than zero. As a result, we achieved an overall annotation completeness of 1583 out of 1654 Enterprise Services, which is equivalent to 95.7%. The missing 4.3% stem from Enterprise Service signatures that did not comply with the existing naming conventions.
5.3 Annotation Accuracy
In this part of the evaluation, we only considered the 1583 fully or partially annotated Enterprise Services from above. To determine the accuracy of annotations, we grouped them into categories from 100% down to 40% annotation accuracy, in 10% steps. Annotation accuracy again refers to the ratio of actual to expected annotations from the previous section. We set the lower margin to 40% based on the lowest accuracy observed across all 1583 annotated Enterprise Services; only four Enterprise Services fell into that category. In fact, less than 1% of the Enterprise Services were annotated with less than 50% accuracy. On the other hand, the majority of Enterprise Services, i.e. 73.0%, were fully annotated, as illustrated in Figure 4. For an annotation accuracy of 80% or more, the percentage of annotated Enterprise Services increases to 91.4%. The whole procedure on the entire data set took less than 5 minutes on an Intel(R) Core(TM)2 Duo T7300 processor @ 2 GHz with 3 GB of RAM. These numbers lead to two observations: (i) the naming conventions were largely followed in the tested sample of Enterprise Services, and (ii) our approach delivered effective annotations.
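As one possible reading of this binning, the short sketch below tallies accuracy values into the 10%-step categories; the floor-to-category rule and the placeholder accuracy values are our assumptions rather than the paper's actual procedure.

```python
from collections import Counter

def bucket(accuracy: float) -> int:
    # Floor an accuracy in [0, 1] to its 10% category: 0.87 -> 80, 1.0 -> 100.
    return int(round(accuracy * 100)) // 10 * 10

# Placeholder values; the real distribution is the one shown in Figure 4.
accuracies = [1.0, 1.0, 0.85, 0.72, 0.43]
histogram = Counter(bucket(a) for a in accuracies)

for level in range(100, 30, -10):  # categories from 100% down to 40%
    share = histogram.get(level, 0) / len(accuracies)
    print(f"{level:3d}%: {share:.1%}")
```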
5.4 Annotation Correctness
To the best of our knowledge, there is no obvious way to automatically validate the correctness of a generated annotation; the baseline is therefore manual verification. We first selected a 10% sample of the completely annotated Enterprise Services to evaluate their correctness. Half of these services were strategically selected by a domain expert to cover various applications as well as a variety of design concepts and naming conventions. The other half was randomly selected to avoid any bias in the selection of concepts. In a second step, an independent expert in SOA Governance has been