FIGURE 1.7 A distributed program that corresponds to the sequential program in Figure 1.5a, coded using the message-passing programming model.
As shown, for every send operation, there is a corresponding receive operation. No
explicit synchronization is needed.
Because it relies on explicit messages, the message-passing programming model requires no special support from the underlying distributed system. In particular, the distributed system need not provide the illusion of a single shared address space for tasks to interact. A popular example of the message-passing programming model is the Message Passing Interface (MPI) [50]. MPI is an industry-standard library (more precisely, a specification of what such a library should do) for writing message-passing programs. A popular, high-performance, and widely portable implementation of MPI is MPICH [52]. A common analytics engine that employs the message-passing programming model is Pregel, in which vertices can communicate only by sending and receiving messages, which users/developers must encode explicitly.
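To make the send/receive pairing concrete, the following is a minimal sketch (not from the text) of a two-task MPI program in C, in which the send on rank 0 is matched by a receive on rank 1; the variable names and the transferred value are illustrative only.

/* Minimal MPI sketch: every send is matched by a receive,
 * so no additional synchronization is needed. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;  /* illustrative payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The matching receive blocks until rank 0's message
         * arrives, which also orders the two tasks. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

Run with two processes (e.g., mpiexec -n 2 ./a.out); note that the tasks never share an address space, only messages.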
To this end, Table 1.1 compares the shared-memory and message-passing programming models in terms of five aspects: communication, synchronization, hardware support, development effort, and tuning effort. Shared-memory programs are easier to develop at the outset because programmers need not worry about how data is laid out or communicated. Furthermore, the code structure of a shared-memory program is often not much different from that of its sequential counterpart. Typically, programmers add only directives to specify parallel/distributed tasks, the scope of variables, and synchronization points. In contrast, message-passing programs require a switch in the programmer's thinking: data layout and communication among tasks must be managed explicitly.
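As an illustration of this development-effort contrast (a sketch, not from the text), the loop below is parallelized in the shared-memory style using OpenMP; the function name and array size are assumptions, and the only departure from the sequential code is the single directive line.

/* Shared-memory sketch: the sequential loop gains one OpenMP
 * directive declaring the parallel loop and reduction variable. */
#include <omp.h>

#define N 1000000  /* illustrative array size */

double scale_and_sum(const double *a, double *b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        b[i] = 2.0 * a[i];  /* each iteration is independent */
        sum += b[i];
    }
    return sum;
}

A message-passing version of the same computation would instead require the programmer to partition a and b across tasks, exchange boundary data explicitly, and combine the partial sums with messages.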