TABLE 1.1
A Comparison between the Shared-Memory and the Message-Passing
Programming Models

Aspect                          Shared-Memory Model   Message-Passing Model
Communication                   Implicit              Explicit
Synchronization                 Explicit              Implicit
Hardware support                Usually required      Not required
Initial development effort      Lower                 Higher
Tuning effort upon scaling up   Higher                Lower
wherein the programmer needs to think a priori about how to partition data across
tasks, collect data, and communicate and aggregate results using explicit messaging.
In addition, scaling up the system entails less tuning (denoted as tuning effort in Table
1.1) of message-passing programs as opposed to shared-memory ones. Specifically,
under a shared-memory model, how data is laid out and where it is stored start
to affect performance significantly. To elaborate, large-scale distributed systems like
the cloud imply non-uniform access latencies (e.g., accessing remote data takes more
time than accessing local data), thus forcing programmers to lay out data close to
relevant tasks. While message-passing programmers think about partitioning data
across tasks at pre-development time, shared-memory programmers do not.
Hence, shared-memory programmers usually need to address the issue
at post-development time (e.g., through data migration or replication). Clearly,
this might dictate a greater post-development tuning effort as compared with the
message-passing case. Finally, synchronization points might further become perfor-
mance bottlenecks in large-scale systems. In particular, as the number of users that
attempt to access critical sections increases, delays and waits on such sections also
increase. More on synchronization and other challenges involved in programming
the cloud are presented in Section 1.5.
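The a priori partitioning, explicit messaging, and explicit aggregation described above can be illustrated with a minimal sketch. Here Python threads stand in for distributed tasks and thread-safe queues carry the messages; the data set, the number of tasks, and the worker function are illustrative assumptions, not part of the text:

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Explicit communication: receive a partition, send back a partial result.
    partition = inbox.get()
    outbox.put(sum(partition))

data = list(range(100))
n_tasks = 4
chunk = len(data) // n_tasks
results: queue.Queue = queue.Queue()

threads = []
for i in range(n_tasks):
    inbox: queue.Queue = queue.Queue()
    # A priori partitioning: the programmer decides how data is split
    # across tasks before the computation starts.
    inbox.put(data[i * chunk:(i + 1) * chunk])
    t = threading.Thread(target=worker, args=(inbox, results))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

# Explicit aggregation of the partial results sent back by the tasks.
total = sum(results.get() for _ in range(n_tasks))
print(total)
```

Note that nothing here is shared between tasks except the message channels themselves; moving a task to a remote machine would change the channel implementation but not the program's structure.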
1.5.3 Synchronous and Asynchronous Distributed Programs
Apart from programming models, distributed programs, whether shared-memory or
message-passing based, can be specified as either synchronous or asynchronous
programs. A distributed program is synchronous if and only if its distributed tasks
operate in lock-step mode. That is, there is some constant c ≥ 1 such that, whenever
any task has taken c + 1 steps, every other task has taken at least 1 step [71]. Clearly, this
entails a coordination mechanism through which the activities of tasks can be syn-
chronized and the lock-step mode accordingly enforced. Such a mechanism usu-
ally has an important effect on performance. Typically, in synchronous programs,
distributed tasks must wait at predetermined points for the completion of certain
computations or for the arrival of certain data [9]. A distributed program that is
not synchronous is referred to as asynchronous. Asynchronous programs impose no
requirement to wait at predetermined points or for the arrival of specific data.
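The lock-step behavior just described can be sketched with a barrier as the coordination mechanism, again using Python threads as stand-ins for distributed tasks; the task count, round count, and logging scheme are illustrative assumptions:

```python
import threading

N_TASKS = 3
N_ROUNDS = 2
barrier = threading.Barrier(N_TASKS)
log = []
log_lock = threading.Lock()

def task(tid: int) -> None:
    for rnd in range(N_ROUNDS):
        # Local computation step for this round (here: just record it).
        with log_lock:
            log.append((rnd, tid))
        # Predetermined synchronization point: no task may start
        # round rnd + 1 until every task has finished round rnd.
        barrier.wait()

threads = [threading.Thread(target=task, args=(i,)) for i in range(N_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because of the barrier, all round-0 entries precede all round-1 entries.
rounds = [rnd for rnd, _ in log]
print(rounds == sorted(rounds))
```

Removing the `barrier.wait()` call turns this into an asynchronous program: tasks proceed at their own pace, and entries from different rounds may interleave.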
 