able to make individual processor cores run faster and faster, we have reached
an era where individual cores are not getting much faster and where speedups
will have to come from parallel processing.
Threads and event-driven programming are important models, but they are
not the only ways to write parallel programs. Other approaches are also highly
effective, particularly for certain classes of applications.
Data parallel programs. Data parallel programming or SIMD (single instruction multiple data) programming models allow a programmer to describe a computation that should be performed in parallel on many different pieces of data. Rather than having the programmer divide work among threads, the runtime system decides how to map the parallel work across the hardware's processors.
For example, to divide each element in an N-element array in half, you might write

    forall (i in 0:N-1) array[i] = array[i] / 2

and the runtime system would divide the array among processors to execute this computation in parallel.
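To make this concrete, the following is a minimal C sketch of the same computation using OpenMP's parallel for construct; the forall notation above is pseudocode, so OpenMP is used here only as one possible stand-in. When the program is compiled with OpenMP enabled (e.g., with -fopenmp), the OpenMP runtime, rather than the programmer, decides how the loop iterations are divided among the processor cores.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double array[N];
        for (int i = 0; i < N; i++)
            array[i] = (double)i;

        /* The OpenMP runtime divides the iterations of this loop among the
           available cores; without OpenMP the pragma is ignored and the
           loop simply runs serially. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            array[i] = array[i] / 2;

        printf("array[2] = %f\n", array[2]);   /* prints 1.000000 */
        return 0;
    }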
Data parallel programming is frequently used in large data-analysis tasks. For example, the Hadoop system is a widely used, open-source system that can process and analyze terabytes of data spread across hundreds or thousands of servers. SQL (Structured Query Language) is a standard language for accessing databases in which programmers specify the database query to perform, and the database maps the query to lower-level operations and schedules those operations on its processors and disks.
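Hadoop itself exposes a Java MapReduce API over a distributed file system, and SQL databases rely on their own query planners, so the sketch below is only a loose, single-machine illustration of the underlying pattern those systems apply at much larger scale: process pieces of the data in parallel and combine the partial results (here using an OpenMP reduction).

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static int data[N];
        for (int i = 0; i < N; i++)
            data[i] = i % 10;

        long long total = 0;
        /* "Map": each core sums its own chunk of the array in parallel.
           "Reduce": the OpenMP runtime combines the per-core partial sums. */
        #pragma omp parallel for reduction(+:total)
        for (int i = 0; i < N; i++)
            total += data[i];

        printf("total = %lld\n", total);       /* prints 4500000 */
        return 0;
    }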
Multimedia streams (e.g., audio, video, and graphics) often have large
amounts of data on which similar operations are repeatedly performed,
so data parallel programming is frequently used for media processing, and
specialized hardware to support this type of parallel processing is common.
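As one concrete form of such hardware support, x86 CPUs provide SIMD instructions (SSE/AVX) that apply one operation to several values at once. The sketch below assumes an x86 machine with SSE and halves four samples per instruction, the kind of per-element operation common in audio and image processing; the sample values are made up for illustration.

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(void) {
        /* Eight hypothetical audio samples. */
        float samples[8] = { 2, 4, 6, 8, 10, 12, 14, 16 };
        __m128 half = _mm_set1_ps(0.5f);

        /* Each SSE multiply halves four samples with a single instruction. */
        for (int i = 0; i < 8; i += 4) {
            __m128 v = _mm_loadu_ps(&samples[i]);
            _mm_storeu_ps(&samples[i], _mm_mul_ps(v, half));
        }

        for (int i = 0; i < 8; i++)
            printf("%.1f ", samples[i]);       /* 1.0 2.0 3.0 ... 8.0 */
        printf("\n");
        return 0;
    }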
Because they can be optimized for regular, data parallel programs, GPUs (Graphics Processing Units) can provide significantly higher rates of data processing. For example, in 2011 a Radeon HD5870 GPU was capable of 544 GFLOPS (billion double-precision floating point operations per second); for comparison, an Intel Core i7 980 XE CPU (a high-end, general purpose processor) was capable of 109 double-precision GFLOPS.
Considerable research and development effort is currently going towards developing and using General Purpose GPUs (GPGPUs), GPUs that have been extended to better support a wider range of programs. It is still not clear which classes of programs can work well with GPGPUs and which require more traditional CPU architectures, but for those programs that can be ported to the more restrictive GPGPU programming model, the performance gains can be dramatic.