Figure 6-1. Comparison of concurrency and parallelism
The goal of parallelism is to reduce the runtime of a specific task by breaking it down into
smaller components and performing them in parallel. This doesn't mean that you won't do as
much work as you would if you were running them sequentially—you are just getting more
horses to pull the same cart for a shorter time period. In fact, it's usually the case that running
a task in parallel requires more work to be done by the CPU than running it sequentially
would.
In this chapter, we're looking at a very specific form of parallelism called data parallelism. In data parallelism, we achieve parallelism by splitting up the data to be operated on and assigning a single processing unit to each chunk of data. If we're to extend our horses-pulling-carts analogy, it would be like taking half of the goods inside our cart and putting them into another cart for another horse to pull, with both horses taking an identical route to the destination.
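To make the chunk-per-processing-unit idea concrete, here is a minimal sketch (not taken from the original text) that splits an array in half and sums each half on its own thread before combining the results. The class name, pool size, and data values are illustrative assumptions.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class ChunkedSum {
    public static void main(String[] args) throws Exception {
        int[] data = IntStream.rangeClosed(1, 1_000_000).toArray();
        int mid = data.length / 2;

        ExecutorService pool = Executors.newFixedThreadPool(2);

        // One "horse" per chunk: each task sums its own half of the data.
        Future<Long> firstHalf = pool.submit(() -> sum(data, 0, mid));
        Future<Long> secondHalf = pool.submit(() -> sum(data, mid, data.length));

        // Compose the per-chunk answers into the final result.
        long total = firstHalf.get() + secondHalf.get();
        System.out.println(total);

        pool.shutdown();
    }

    private static long sum(int[] data, int from, int to) {
        long result = 0;
        for (int i = from; i < to; i++) {
            result += data[i];
        }
        return result;
    }
}

Both threads perform the same operation (summing), just on different portions of the data, which is exactly what distinguishes data parallelism from the task parallelism discussed below.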
Data parallelism works really well when you want to perform the same operation on a lot of data. The problem needs to be decomposed in a way that will work on subsections of the data, and then the answers from each subsection can be composed at the end.
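The streams library can take care of both the decomposition and the composition for you. The following is a hedged sketch using a Java 8 parallel stream; the range and the sum-of-squares operation are assumptions chosen purely for illustration.

import java.util.stream.IntStream;

public class ParallelSumOfSquares {
    public static void main(String[] args) {
        // The same operation (square, then add) is applied to every element;
        // the streams library decomposes the range into chunks, processes them
        // on the common fork/join pool, and composes the partial sums at the end.
        long sumOfSquares = IntStream.rangeClosed(1, 1_000_000)
                                     .parallel()
                                     .mapToLong(i -> (long) i * i)
                                     .sum();
        System.out.println(sumOfSquares);
    }
}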
Data parallelism is often contrasted with task parallelism, in which each individual thread of execution can be doing a totally different task. Probably the most commonly encountered example of task parallelism is a Java EE application container, where each thread not only can be dealing with a different user but also could be performing a different task for that user.
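For contrast, here is a minimal sketch of task parallelism (again, not from the original text): two unrelated tasks are submitted to the same ExecutorService, so each thread can be doing something completely different. The specific task bodies are invented for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskParallelism {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Unlike data parallelism, each thread here runs a completely
        // different task, much like request threads in an application container.
        pool.submit(() -> System.out.println("Handling a login request..."));
        pool.submit(() -> System.out.println("Updating a shopping cart..."));

        pool.shutdown();
    }
}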