Figure 7. Speedups achieved by the grid-enabled versions of the k-NN application (left) and the image
restoration application (right)
from this caching technique too, but this would
have required yet more modifications to the
original application code, and thus it would have
increased TLOC.
With respect to the image application, the plain variant of JGRIM (that is, without the mobility policy) performed better than Satin, even though JGRIM relied on Satin for parallelism in these experiments. This is because JGRIM exploits Satin by extending it so as to avoid Satin's standard handshaking process when cooperatively executing applications. Furthermore, the ProActive version showed acceptable performance levels. In this case, unlike ProActive k-NN, deployment times did not heavily impact performance, since they were not significant with respect to the total execution times.
Moreover, ProActive generated the least amount of WAN traffic. Unlike Satin, and therefore JGRIM, its job scheduling is not subject to random factors: the Satin platform is based on a load-balancing algorithm by which each machine of the underlying Grid, upon becoming idle, randomly asks other nodes for jobs to execute.
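The following is a minimal, illustrative sketch of such an idle-initiated random work-stealing scheme. It does not reproduce Satin's actual implementation; the class and method names are hypothetical and chosen only to make the idea concrete.

import java.util.List;
import java.util.Random;
import java.util.concurrent.ConcurrentLinkedDeque;

class WorkStealingNode {
    private final ConcurrentLinkedDeque<Runnable> localJobs = new ConcurrentLinkedDeque<>();
    private final Random random = new Random();

    void submit(Runnable job) {
        // The owner pushes and pops work at the head of its own deque.
        localJobs.addFirst(job);
    }

    void step(List<WorkStealingNode> peers) {
        Runnable job = localJobs.pollFirst();
        if (job == null && !peers.isEmpty()) {
            // Idle: pick a random peer and ask it for a job, as described above.
            WorkStealingNode victim = peers.get(random.nextInt(peers.size()));
            // Steal from the tail of the victim's deque to limit contention
            // with the victim's own head-side accesses.
            job = victim.localJobs.pollLast();
        }
        if (job != null) {
            job.run();
        }
    }
}

Stealing from the tail of a victim's queue while the owner works from the head is the usual way such schedulers reduce contention; the random choice of victim is what makes the resulting job placement, and thus the generated WAN traffic, nondeterministic.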
Nevertheless, injecting mobility allowed JGRIM to achieve higher performance and to reduce this traffic, and, again, the policy did not affect the original code. Unfortunately, Satin does not let developers explicitly control mobility, whereas ProActive only offers weak mobility, which requires extensive code modifications to manually save and restore the execution state of running computations.
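To illustrate the idea of keeping mobility decisions out of the application code, the sketch below defines a hypothetical policy interface. It is not JGRIM's real API; the names and the size-based rule are invented purely for the example.

// Hypothetical example only: not JGRIM's actual API. It merely shows how a
// mobility decision can be declared outside the application logic, so the
// original gridified code stays untouched when the policy changes.
interface MobilityPolicy {
    /** Decide whether a given operation should be moved to a remote host. */
    boolean shouldMigrate(String operation, long inputSizeBytes);
}

class MoveLargeInputsPolicy implements MobilityPolicy {
    private final long thresholdBytes;

    MoveLargeInputsPolicy(long thresholdBytes) {
        this.thresholdBytes = thresholdBytes;
    }

    @Override
    public boolean shouldMigrate(String operation, long inputSizeBytes) {
        // Move the computation toward the data only when shipping the input
        // over the WAN would likely dominate the execution time.
        return inputSizeBytes > thresholdBytes;
    }
}

Enabling, disabling, or tuning such a policy then amounts to swapping the policy object, which is consistent with the observation above that the mobility policy did not require modifying the original application code.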
To conclude, Figure 7 shows the speedups achieved by the various applications, computed as AET_s/AET, where AET_s is the average execution time of the original codes on a single machine (C.1). Note that, in both graphs, the speedup curves of Satin and JGRIM showed similar behavior, since JGRIM relies on Satin for parallelism; that is, JGRIM inherits Satin's job scheduling scheme. Due to the random nature of the Satin scheduler and the heterogeneity of our Grid setting, some of the larger Satin and JGRIM experiments yielded lower speedups than smaller ones. For example, for k-NN there was a dip in the speedup at 20 instances (Figure 7, left); to a lesser extent, this effect was also present in the restoration application. Furthermore, the ProActive applications appeared to gain speedup linearly as the experiment size increased, but this trend should be further corroborated.
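For reference, the speedup metric used in Figure 7 can be written out explicitly; the numbers in the example below are purely illustrative and are not taken from the experiments.

\[
S \;=\; \frac{AET_s}{AET}\,, \qquad \text{e.g., } AET_s = 1000\,\mathrm{s},\; AET = 125\,\mathrm{s} \;\Rightarrow\; S = 8.
\]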
In summary, the implications of the speedups are twofold. On the one hand, the original codes certainly benefited from being gridified; thus, they were representative Grid applications to experiment with. On the other hand, through the use of policies, JGRIM