Table 2. continued

Algorithm of the out-bound thread
75   if forwarding π-channel pipe_id to dest:
76       fd ← open connection to dest
77       foreach segment ∈ local buffer:
78           send segment to fd
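A minimal C sketch of this forwarding step is given below, assuming a simple header-plus-payload wire format. The seg_hdr layout, the segment type, and the forward_pipe() helper are illustrative assumptions rather than the chapter's actual code, and partial writes are ignored for brevity.

    #include <stdint.h>
    #include <unistd.h>

    /* Assumed wire header carried by every pipe segment (see in-bound thread). */
    struct seg_hdr {
        uint32_t type;      /* segment type information        */
        uint32_t length;    /* payload length in bytes         */
        uint64_t offset;    /* position within the π-channel   */
        uint32_t pipe_id;   /* channel this segment belongs to */
    };

    struct segment {
        struct seg_hdr hdr;
        char          *payload;
    };

    /* Table 2, lines 75-78: push every cached segment of a π-channel to dest_fd. */
    static int forward_pipe(const struct segment *buf, size_t nsegs, int dest_fd)
    {
        for (size_t i = 0; i < nsegs; i++) {
            if (write(dest_fd, &buf[i].hdr, sizeof buf[i].hdr) < 0)
                return -1;
            if (write(dest_fd, buf[i].payload, buf[i].hdr.length) < 0)
                return -1;
        }
        return 0;
    }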
1. my_id - a unique ID, implemented as an IP/port pair. The port number is dynamically generated during application startup.
2. id_tab - a local table (#4, #14) associating the pipe_id with open file descriptors (fd_list), channel read-write pointers, and other local state information. This table corresponds to the π-channel internal state component in Figure 5.
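To make this local state concrete, the sketch below shows one plausible C layout for an id_tab entry; the field names, the fixed table size, and the MAX_FDS bound are assumptions made for illustration only.

    #include <stdint.h>

    #define MAX_FDS   8
    #define MAX_PIPES 256

    /* my_id: an IP/port pair identifying this process instance. */
    struct pi_endpoint {
        uint32_t ip;
        uint16_t port;
    };

    /* One id_tab entry: the descriptor returned to the application is simply
       the index of this entry in the table. */
    struct id_tab_entry {
        uint32_t pipe_id;            /* unique id assigned by the π-Server    */
        int      fd_list[MAX_FDS];   /* open connections used by this channel */
        int      nfds;
        uint64_t read_ptr;           /* channel read pointer                  */
        uint64_t write_ptr;          /* channel write pointer                 */
        int      in_use;
    };

    static struct id_tab_entry id_tab[MAX_PIPES];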
During a pi_create(), a put request (#10) is sent to the π-Server, which creates (#58-#65) an entry for this π-channel in a hash table and returns a unique pipe_id. It also replies (#58, #61, #62) with a list (possibly empty) of destinations. If the reader's identity is known, the reader's address appears first, followed by the server's address. The pi_create() establishes (#12, #13) a connection with the destinations and associates the pipe with the file descriptors. It returns as descriptor (#14, #16) the position of the π-channel in id_tab. The pi_attach() sends a get request to the server (#2), which replies (#51, #52) with the unique pipe_id for the π-channel, even if the channel does not yet exist. The server creates an entry for this π-channel, storing the reader's address for use in channel creation.
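A short usage sketch of these calls follows. The exact prototypes are not shown in this excerpt, so the signatures, the channel name argument, and pi_write() are assumptions.

    /* Assumed prototypes, for illustration only. */
    int pi_create(const char *name);                 /* writer: put request to the π-Server */
    int pi_attach(const char *name);                 /* reader: get request to the π-Server */
    int pi_write(int pd, const void *buf, int len);
    int pi_read(int pd, void *buf, int len);

    void writer_side(void)
    {
        int pd = pi_create("results");               /* pipe_id allocated, destinations returned   */
        const char msg[] = "segment payload";
        pi_write(pd, msg, (int)sizeof msg);          /* forwarded to sinks by the out-bound thread */
    }

    void reader_side(void)
    {
        char buf[4096];
        int pd = pi_attach("results");               /* works even if the channel does not exist yet */
        pi_read(pd, buf, (int)sizeof buf);           /* served from the shared buffer, see below     */
    }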
A pi_read() does not read directly from the open connection with the source. Instead, incoming data segments are handled by the in-bound thread (#66-#74), which listens for and accepts TCP operations on behalf of the application. The π-Server manages a thread pool for the same purpose, enabling asynchronous read operations. When the in-bound thread accepts a π-channel, it allocates a buffer (#70) for pipe segments. Each segment (#72-#74) contains type information, length, offset, and pipe_id. The received segments are stored in a shared buffer so that pi_read() can retrieve them (#19, #22). The out-bound thread pushes π-channels to sinks (#54, #75-#78). During reader recovery, these threads send the missed segments to the reader.
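The receive path might look roughly like the C sketch below: read one framed segment from an accepted connection and append it to the shared buffer that pi_read() drains. The framing, read_full(), and shared_buffer_append() are assumed names, not the actual implementation.

    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct seg_hdr {
        uint32_t type;
        uint32_t length;
        uint64_t offset;
        uint32_t pipe_id;
    };

    /* Assumed to copy the payload into the buffer shared with pi_read(). */
    extern void shared_buffer_append(uint32_t pipe_id, uint64_t offset,
                                     const char *data, uint32_t len);

    /* Read exactly n bytes, tolerating short reads on the TCP connection. */
    static int read_full(int fd, void *p, size_t n)
    {
        size_t done = 0;
        while (done < n) {
            ssize_t r = read(fd, (char *)p + done, n - done);
            if (r <= 0)
                return -1;
            done += (size_t)r;
        }
        return 0;
    }

    /* Receive one segment: header first, then the payload it describes. */
    static int recv_segment(int fd)
    {
        struct seg_hdr h;
        char *payload;

        if (read_full(fd, &h, sizeof h) < 0)
            return -1;
        payload = malloc(h.length);
        if (payload == NULL)
            return -1;
        if (read_full(fd, payload, h.length) < 0) {
            free(payload);
            return -1;
        }
        shared_buffer_append(h.pipe_id, h.offset, payload, h.length);
        free(payload);
        return 0;
    }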
We only outline the migration mechanism (Table 3), showing when application state is saved and restored after migration. The idea (#79-#86) is to attempt a graceful connection shutdown before migrating. Since pipes are cached, undelivered data segments can be retrieved from the π-Server. The hold_list (#89, #44) identifies the migrating processes.
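As a rough illustration, a pre-migration step in C might walk the local channel table, ask the π-Server to place the process on the hold_list, and shut the connections down gracefully; all helper names below are assumptions, not the chapter's actual routines.

    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    extern int      id_tab_size(void);                 /* number of id_tab entries      */
    extern uint32_t id_tab_pipe_id(int i);
    extern int      id_tab_nfds(int i);
    extern int      id_tab_fd(int i, int j);
    extern void     server_hold(uint32_t pipe_id);     /* add this process to hold_list */

    /* Before migrating: mark channels as held, then close connections gracefully.
       Undelivered segments stay cached at the π-Server for later retrieval. */
    void prepare_migration(void)
    {
        for (int i = 0; i < id_tab_size(); i++) {
            server_hold(id_tab_pipe_id(i));
            for (int j = 0; j < id_tab_nfds(i); j++) {
                shutdown(id_tab_fd(i, j), SHUT_RDWR);
                close(id_tab_fd(i, j));
            }
        }
    }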
Communication state migration, similar to (Chanchio & Sun, 2004), performs a connection hand-over with the migrated reader (#39-#43). In Figure 6, the migrated peer re-establishes the connection with the writer so that: (1) Seg 2 is retrieved from the π-Server; and, at the same time, (2) Seg 3 is streamed from the writer.
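The recovery sequence of Figure 6 could be sketched in C as follows: the migrated reader reattaches to the channel, pulls the segments it missed from the π-Server in a helper thread, and in parallel resumes streaming live data from the writer. The helpers fetch_cached_segments(), stream_from_writer(), and the offset queries are hypothetical.

    #include <stdint.h>
    #include <pthread.h>

    extern int      pi_attach(const char *name);
    extern uint32_t pipe_id_of(int pd);
    extern uint64_t last_offset_read(int pd);
    extern void     fetch_cached_segments(uint32_t pipe_id, uint64_t from_offset); /* from the π-Server */
    extern void     stream_from_writer(int pd);                                    /* live segments     */

    static void *catch_up(void *arg)
    {
        int pd = *(int *)arg;
        /* (1) e.g. Seg 2: segments sent while the reader was migrating */
        fetch_cached_segments(pipe_id_of(pd), last_offset_read(pd));
        return NULL;
    }

    /* Re-establish the channel after migration and recover in parallel. */
    void reader_recover(const char *name)
    {
        int pd = pi_attach(name);
        pthread_t t;

        pthread_create(&t, NULL, catch_up, &pd);
        stream_from_writer(pd);             /* (2) e.g. Seg 3 arrives directly from the writer */
        pthread_join(t, NULL);
    }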
EXPERIMENTAL RESULTS
Two aspects of the implementation are evaluated. First, we measure the rate at which π-channel lookup operations are handled by the π-Server under two scenarios: (a) the π-Server and clients are on one cluster; and (b) clients perform lookups over a wide-area network. Second, we measure the throughput when communication takes place between two applications over our WAN testbed. This test shows how asynchronous read operations improve bandwidth utilisation. Table 4 lists the resources we used. VPAC (Victorian Partnership for Advanced Computing) is an HPC consortium
 