FIGURE 6.4: Inter-process communication approaches for a distributed computing environment.
6.2 Parallel Programming Techniques
In this section, we briefly discuss two parallel programming paradigms that are commonly used in distributed computing environments: the message-passing interface (MPI) and graphics processing unit (GPU) programming. These techniques enable processes to communicate in a distributed and parallel manner and allow them to use shared or distributed resources across the entire computational network.
6.2.1 Message-Passing Scheme
Message-passing is a communication procedure through which two or more processes share information. It is a form of inter-process communication in which information is exchanged by means of messages. Message-passing differs from the shared-data approach in that each process sends and receives information directly, rather than accessing a common repository of shared data (see Figure 6.4).
A message-passing system provides a set of message-based inter-process
communication (IPC) protocols, which shield the details of complex net-
work protocols and multiple heterogeneous platforms from programmers. The
system enables processes to communicate by exchanging messages. Message-
passing programs are written using simple communication primitives, such as
send and receive. The message-passing scheme serves as a suitable infrastruc-
ture for building higher level IPC systems, such as RPC (Remote Procedure
Call) and DSM (Distributed Shared Memory).
There are a number of desirable features for a good message-passing scheme.
These include the following:
• Simplicity: The scheme should be easy to understand, without complex communication procedures. The sole purpose of message passing is to ensure that information can be exchanged between two or more processes.