resources with a shared address space at one's disposal. In the MPP architecture,
where the address space is not shared among the nodes, parallel processes must
transmit data over a network to access data that other processes update. To that
end, message-passing is employed.
In parallel execution, several programs can cooperate in providing computations
and handling data. Our parallel implementation of the co-volume subjective
surface algorithm, however, uses the so-called SPMD (single program multiple data)
model. In the SPMD model, only one program is built, and each parallel process
runs the same executable while working on a different set of data. Since all the
processes execute the same program, it is necessary to distinguish between them.
To that end, each process has its own rank, and, using the value of the rank, we
can let processes behave differently although they execute one program. In our
case, we split the large number of voxels into several parts, proportional to the
number of processors, and then rewrite the serial program in such a way that each
parallel process handles the correct part of the data and transmits the necessary
information to the other processes.
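As an illustration (the names and the splitting rule are ours, not taken from the authors' code), a process can locate its contiguous share of n data items, say the layers of the voxel grid, as follows:

```c
/* Rank-based data splitting sketch: each of nprocs processes gets a
   contiguous chunk; the first n % nprocs ranks take one extra element,
   so chunk sizes differ by at most one. */
void my_chunk(long n, int rank, int nprocs, long *start, long *count)
{
    long base  = n / nprocs;   /* minimum chunk size          */
    long extra = n % nprocs;   /* leftovers spread over ranks */
    *count = base + (rank < extra ? 1 : 0);
    *start = rank * base + (rank < extra ? (long)rank : extra);
}
```

Each rank then loops only over the indices [start, start + count) and exchanges the necessary boundary data with its neighbors.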
Parallelization should reduce the time spent on computation. If p processes are
involved in the parallel execution, the parallel program could ideally be executed
p times faster than the sequential one. In practice, however, this ideal is not
reached, because the splitting of the data makes transmission between processes
necessary. This drawback of parallelization can be handled in an efficient and
reliable way by so-called message-passing, which is used to consolidate what has
been separated by parallelization.
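In the standard notation (ours, not used in the text above), write T(1) for the run time of the sequential program and T(p) for its run time on p processes; the speedup is then

```latex
S(p) = \frac{T(1)}{T(p)} ,
```

which equals p in the ideal case, while the communication overhead described above keeps measured speedups below this value.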
The Message Passing Interface (MPI) is a standard specifying a portable in-
terface for writing parallel programs that utilize message-passing. It aims at
practicality, efficiency, and flexibility at the same time. The MPI subroutines
cover environment management, point-to-point and collective communication
among processes, construction of derived data types, input/output operations,
etc.
The environment management subroutines, MPI_Init and MPI_Finalize, initiate
and finalize an MPI environment. Using the subroutine MPI_Comm_size, one can
get the number of processes involved in the parallel execution that belong to a
communicator, an identifier associated with a group of processes participating in
the parallel job, e.g., MPI_COMM_WORLD. The subroutine MPI_Comm_rank gives the
rank of a process belonging to a communicator. An MPI parallel program should
include the file mpi.h, which defines all MPI-related parameters (cf. Figure 6).
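For concreteness, here is a minimal, self-contained sketch of these environment-management calls in C (our illustration, not the program referred to as Figure 6):

```c
/* A minimal MPI sketch: initiate the environment, query the
   communicator size and this process's rank, and finalize. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* initiate the MPI environment   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* processes in the communicator  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank: 0..size-1 */

    printf("process %d of %d\n", rank, size);

    MPI_Finalize();                        /* finalize the MPI environment   */
    return 0;
}
```

Built with an MPI compiler wrapper such as mpicc and launched with, e.g., mpirun -np 4, each of the four processes prints its own rank.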
Collective communication subroutines allow one to exchange data among a
group of processes specified by the communicator; e.g., MPI_Bcast sends data from
a specific process, called the root, to all the other processes in the communicator,
and MPI_Allreduce performs reduction operations, such as the summation of data
distributed over all processes in the communicator, and places the result on all
of the processes.
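A minimal sketch of these two collectives (the broadcast parameter and the summed values are illustrative, not taken from the algorithm above):

```c
/* Root (rank 0) broadcasts a parameter to every process; MPI_Allreduce
   then sums one value per process and leaves the total on all ranks. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double tau = 0.0, local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) tau = 0.01;             /* only the root holds the value */
    MPI_Bcast(&tau, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    local = (double)(rank + 1);            /* each rank's partial result    */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: tau = %g, sum = %g\n", rank, tau, global);

    MPI_Finalize();
    return 0;
}
```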
 