Table 3. The communication migration protocol

    if I am migrating:
        disable sending acks for heartbeats
        migrating ← true
        foreach open π-channel:
            save offset into checkpoint
        flush and close all connections
        perform local state checkpoint
        send checkpoint to the new location

    if a peer is migrating:
        /* reject connections from this list */
        add peer_addr to hold_list
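To make the protocol steps concrete, the following is a minimal sketch in Python. The Client class, its pi_channels and hold_list members, and the checkpoint-transfer details are illustrative assumptions, not the actual implementation:

    import pickle
    import socket

    class Client:
        def __init__(self):
            self.migrating = False
            self.send_heartbeat_acks = True
            self.pi_channels = {}   # channel name -> (connection, current offset)
            self.hold_list = set()  # peers whose connection attempts we reject

        def migrate(self, new_host, new_port):
            # Stop acknowledging heartbeats so peers notice the migration.
            self.send_heartbeat_acks = False
            self.migrating = True

            # For each open pi-channel, record its offset, then flush
            # and close the connection.
            offsets = {}
            for name, (conn, offset) in self.pi_channels.items():
                offsets[name] = offset
                conn.shutdown(socket.SHUT_RDWR)
                conn.close()

            # Checkpoint local state, including the saved offsets.
            checkpoint = pickle.dumps({"offsets": offsets})

            # Ship the checkpoint to the new location.
            with socket.create_connection((new_host, new_port)) as s:
                s.sendall(checkpoint)

        def on_peer_migrating(self, peer_addr):
            # A migrating peer is placed on the hold list so that
            # connections from it are rejected until it resumes.
            self.hold_list.add(peer_addr)

        def accept_connection(self, peer_addr):
            return peer_addr not in self.hold_list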
VPAC is a consortium of universities in Victoria, Australia. Our wide-area testbed uses both Monash and VPAC resources.
π-Server Lookup Performance

We evaluate the request-handling rate of the π-Server, with up to 32 clients concurrently generating lookup requests. Table 5 presents the results of tests conducted on mahar, measuring the execution time of all clients when looking up 25,366 unique but randomly generated π-channel names. Each client performed 40,000 lookups, with no channel read/write operations. Clients were assigned to execute nodes, with the π-Server on the head node. At least 12 runs were performed for each test case, and only the timings from the middle 10 runs were used.

Table 6 presents the timings for lookups over a WAN between Monash and VPAC. Clients ran on mahar compute nodes, with the π-Server running on wexstan's head node, using the same parameters as in the LAN tests. These results indicate that the bottleneck for grid applications will most likely be the high latency between the π-Server and the clients.
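For concreteness, the measurement procedure can be sketched as follows. The lookup(name) call standing in for the client's π-Server request is a hypothetical placeholder; the name generation, per-client loop, and middle-10-of-12 trimming follow the description above:

    import random
    import string
    import time

    def random_channel_names(count, length=16):
        # Unique-looking, randomly generated channel names.
        return ["".join(random.choices(string.ascii_lowercase, k=length))
                for _ in range(count)]

    def run_once(lookup, names, lookups_per_client=40_000):
        # Time one client performing its lookups (no channel reads/writes).
        start = time.perf_counter()
        for i in range(lookups_per_client):
            lookup(names[i % len(names)])
        return time.perf_counter() - start

    def benchmark(lookup, runs=12):
        names = random_channel_names(25_366)
        timings = sorted(run_once(lookup, names) for _ in range(runs))
        # Discard the fastest and slowest runs, keeping the middle 10 of 12.
        middle = timings[1:-1]
        return sum(middle) / len(middle)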
Figure 6. Time diagrams showing concurrent reading of a π-channel from the π-Server and the writer. In (a), the migrated reader resumes reading from the cache; in (b), it also re-establishes its connection with the writer.
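The reader-side resume logic suggested by Figure 6 might look as follows. This is a minimal sketch assuming hypothetical cache and writer handles; cache.length, cache.read, writer.is_active, and writer.stream_from are illustrative names, not the actual API:

    def resume_read(channel, checkpoint_offset, cache, writer):
        """Yield channel data from the checkpointed offset after migration."""
        offset = checkpoint_offset
        # Case (a): replay whatever the pi-Server's cache already holds,
        # starting from the offset saved in the checkpoint.
        while offset < cache.length(channel):
            chunk = cache.read(channel, offset)
            offset += len(chunk)
            yield chunk
        # Case (b): if the writer is still producing data, reconnect to it
        # and continue reading past the cached portion.
        if writer.is_active(channel):
            for chunk in writer.stream_from(channel, offset):
                offset += len(chunk)
                yield chunk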
Table 4. Participating systems in our experiments

    Name                           Location   Processor     OS               #CPUs
    mahar.infotech.monash.edu.au   Monash     Intel P4      Linux 2.4.27-3   50
    edda.vpac.org                  VPAC       IBM Power5    SLES 9 Linux     80
    wexstan.vpac.org               VPAC       AMD Opteron   Red Hat Linux    246
    tango.vpac.org                 VPAC       AMD Opteron   CentOS 5 Linux   760