Table 1. Original OOI RPC vs. conversation-based RPC

             10 RPCs (s)   Overhead
RPC Lib      0.103
No Monitor   0.108         +4%
Monitor      0.122         +13%

Table 2. Conversation execution time for an increasing number of sequential and parallel states

Rec States   NoM (s)   Mon (s)   Overhead     Par States   NoM (s)   Mon (s)   Overhead
10           0.92      0.95      +3.2%        10           0.45      0.49      +8%
100          8.13      8.22      +1.1%        100          4.05      4.22      +4.1%
1000         80.31     80.53     +0.8%        1000         40.16     41.24     +2.7%
In Table 2, “Rec States” gives the number of states passed through sequentially by a
simple recursive protocol (used to parameterise the length of the conversation), and
“Par States” the number of parallel
states in a parallel protocol. Two benchmark cases are compared. The main case “Mon-
itor” (Mon) is fully monitored, i.e. FSM generation and message validation are enabled
for both the client and server. The base case for comparison “No Monitor” (NoM) has
the client and server in the same configuration, but monitors are disabled (messages
do not go through the interceptor stack). As above, we found that the overhead intro-
duced by the monitor when executing conversations of increasing number of recursive
and parallel states is again mostly due to the cost of the initial FSM generation. We
also note that the relative overhead decreases as the session length increases, because
the one-time FSM generation cost becomes less prominent. For dense FSMs, the worst
case results in overhead that grows linearly wrt. the number of parallel branches.
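To illustrate why the one-time generation cost dominates, the following sketch (our own illustration, not the actual monitor implementation; all names such as `generate_fsm` and `Monitor` are hypothetical) compiles a simple recursive protocol into an FSM once, after which validating each message is a single dictionary lookup, i.e. constant cost per message regardless of session length:

```python
# Hypothetical sketch of FSM-based conversation monitoring (not the
# paper's implementation). A recursive protocol with n states is
# compiled once into a transition table; validation is then O(1)
# per message, so the up-front generation cost dominates only for
# short sessions.

def generate_fsm(num_rec_states):
    """One-time FSM generation: states 0..n-1 are linked by 'next'
    transitions, looping back to state 0, with an 'end' transition
    from state 0 to a terminal state."""
    fsm = {}
    for s in range(num_rec_states):
        fsm[(s, "next")] = s + 1 if s + 1 < num_rec_states else 0
    fsm[(0, "end")] = "done"
    return fsm

class Monitor:
    def __init__(self, fsm, start=0):
        self.fsm = fsm      # generated once per conversation
        self.state = start

    def validate(self, label):
        """Per-message check: a dictionary lookup, independent of
        protocol length."""
        key = (self.state, label)
        if key not in self.fsm:
            raise ValueError(
                f"message {label!r} invalid in state {self.state}")
        self.state = self.fsm[key]
        return self.state

# One full pass through a 3-state recursion, then termination.
mon = Monitor(generate_fsm(3))
for label in ["next", "next", "next", "end"]:
    mon.validate(label)
assert mon.state == "done"
```

In this simplified picture, increasing the number of recursive states grows only the size of the table built up front, which matches the observation that relative overhead shrinks as sessions get longer.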
In both of the above tables, the presented figures are the mean time for the client
and server, connected by a single-broker AMQP network, to complete one conversation
after repeating the benchmark 100 times for each parameter configuration. The client
and server Python processes (including the conversation runtime and monitor) and the
AMQP broker were each run on separate machines (Intel Core2 Duo 2.80 GHz, 4 GB
memory, 64-bit Ubuntu 11.04, kernel 2.6.38). Latency between each node was mea-
sured to be 0.24 ms on average (ping 64 bytes). The full source code of the benchmark
protocols and applications and the raw data are available from the project page [35].
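The per-configuration measurement described above can be sketched as a simple timing harness (a simplified stand-in of our own; `run_conversation` and the repetition count are placeholders, not the benchmark code from the project page):

```python
# Simplified benchmark harness sketch: time one conversation per run
# and report the mean over all repetitions, mirroring the 100-run
# averaging described above. The workload is a placeholder.
import time
import statistics

def benchmark(run_conversation, repetitions=100):
    samples = []
    for _ in range(repetitions):
        t0 = time.perf_counter()
        run_conversation()          # one complete conversation
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples)

# Placeholder workload standing in for one client/server conversation.
mean_s = benchmark(lambda: sum(range(1000)), repetitions=10)
print(f"mean time per conversation: {mean_s:.6f} s")
```

A real harness would also need to exclude connection setup (or include it consistently) and account for network jitter between the three machines, which is why the measured 0.24 ms inter-node latency is reported alongside the figures.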
4.3 Use Cases
We conclude our evaluation with some remarks on use cases we have examined. Table 3
features a list of protocols, sourced from both the research community and our industry
use cases, that we have written in Scribble and used to test our monitor implementation
on more realistic protocol specifications. A natural question for our methodology,
which is based on the explicit specification of protocols, is the overhead it imposes
on developers wrt. writing protocols, given that a primary motivation for the
development of Scribble is to reduce the design and testing effort for distributed
systems. Among these use cases, we
found the average Scribble global protocol is roughly 10 LOC, with the longest one at
26 LOC, suggesting that Scribble is reasonably concise.
The main factors that may affect the performance and scalability of our monitor
implementation, and which depend on the shape of a protocol, are (i) the time required
for the generation of FSMs and (ii) the memory overhead that may be induced by the
generation of nested FSMs in case of parallel blocks and interrupts. Table 3 measures