processor keeps track of the number of consecutive requests from a process that have been rejected. Once that number reaches a predetermined threshold, a state machine is initiated that inhibits other processes from making requests to the main store until the deadlocked process succeeds in gaining access to the resource.
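As a rough illustration of that scheme, the sketch below counts consecutive rejections per requester and, once the threshold is reached, inhibits every other requester until the starved one finally gets access. It is a minimal sketch in plain C; the names, the data structure, and the threshold value are assumptions made for illustration, not details taken from the text.

```c
/* Illustrative sketch of a rejection-counting lockout; all names and the
 * threshold value are hypothetical, not from any real memory controller. */
#include <stdbool.h>
#include <stddef.h>

#define REJECT_THRESHOLD 8          /* assumed limit on consecutive rejections */

struct requester {
    int consecutive_rejects;        /* rejected requests since the last success */
};

static bool store_inhibited = false;       /* true while other requesters are held off */
static struct requester *starved = NULL;   /* the requester that triggered the lockout */

/* Called each time a request to the main store has been arbitrated. */
void on_request_result(struct requester *r, bool granted)
{
    if (granted) {
        r->consecutive_rejects = 0;
        if (starved == r) {          /* the starved requester finally got in */
            store_inhibited = false;
            starved = NULL;
        }
    } else if (++r->consecutive_rejects >= REJECT_THRESHOLD) {
        /* Inhibit all other requesters until this one succeeds. */
        store_inhibited = true;
        starved = r;
    }
}

/* The arbiter consults this before admitting any requester to the main store. */
bool may_request(const struct requester *r)
{
    return !store_inhibited || starved == r;
}
```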
1.4.2 Vector Processing
The next step in the evolution of parallel processing was the introduction of multiprocessing, in which two or more processors share a common workload. The earliest multiprocessing designs followed a master/slave model: one processor (the master) was responsible for all of the tasks to be performed, and it off-loaded work to the other processor (the slave) only when it determined, based on a predetermined threshold, that shifting work would increase performance. This arrangement was necessary because it was not yet understood how to program the machines so that they could cooperate in managing the resources of the system. Vector processing was developed to increase performance by applying the same operation to many data elements at once. Matrix operations were added to computers so that a single instruction could perform arithmetic on two entire arrays of numbers. This was valuable in applications in which the data naturally occurred as vectors or matrices; in applications with less well-formed data, vector processing was less valuable.
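To make the idea concrete, the loop below is the kind of computation vector hardware targets: an element-wise operation over two arrays. This is a plain C sketch for illustration only; the function name and types are assumptions, not anything defined in the text.

```c
/* Element-wise addition of two arrays: the kind of loop a vector processor
 * (or a vectorizing compiler) executes with vector instructions, each of
 * which operates on many elements at once. Illustrative sketch only. */
void vector_add(const double *a, const double *b, double *c, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];   /* conceptually one operation over whole vectors */
}
```

On a vector machine, this loop becomes a short sequence of vector loads, a vector add, and vector stores that each process many elements per instruction, rather than n separate scalar iterations.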
1.4.3 Symmetric Multiprocessing Systems
The next advancement was the development of symmetric multiprocessing (SMP) systems, which addressed the resource-management problem of the master/slave model. In an SMP system, each processor is equally capable and equally responsible for managing the flow of work through the system. The primary goal is sequential consistency; in other words, an SMP system should appear to behave exactly like a single-processor, multiprogramming platform. Engineers discovered that system performance could be increased by 10%-20% by executing some instructions out of order. However, programmers then had to deal with the added complexity and cope with situations in which two or more programs might read and write the same operands simultaneously. This difficulty is limited to a very small number of programs, because it arises only in rare circumstances. To this day, the question of how SMP machines should behave when accessing shared data remains unresolved.
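The simultaneous read/write problem mentioned above can be shown with a small example. The sketch below, written in plain C with POSIX threads (an illustrative choice, not something drawn from the text), lets two threads increment a shared operand without synchronization; on an SMP machine the final value depends on how the hardware orders the conflicting accesses.

```c
/* Two threads updating the same operand without synchronization: the kind
 * of simultaneous read/write described above. Illustrative sketch only. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared operand */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write, not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Frequently prints less than 2000000 because increments are lost. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Run repeatedly, the program tends to print a different total each time, because increments from one processor overwrite those from the other whenever the two read-modify-write sequences interleave.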
 