[Figure 13.2: Kepler workflow under an SDF Director, with actors Constant, DistributedFluxModel, FluxTableDisplay, DataFilter, and Timed Plotter.]
Figure 13.2 (See color insert following page 224.) Kepler supports execution
of workflows on remote peer nodes and remote clusters. Users indicate which
portions of a workflow should be remotely executed by grouping them in a
distributed composite component (shown in blue in the workflow). The user
selects from a list of available remote nodes for execution (see dialog), and
Kepler calculates a schedule and stages each data token before execution on
one of the set of selected remote nodes.
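The staging step the caption describes, in which Kepler assigns each data token to one of the user-selected remote nodes before execution, can be sketched in miniature. The round-robin policy and all names below are illustrative assumptions, not Kepler's actual scheduler:

```python
from itertools import cycle

def schedule_tokens(tokens, nodes):
    """Assign each data token to one of the selected remote nodes
    in round-robin order. This is a simplified stand-in for the
    schedule Kepler calculates; the real policy is not given here."""
    assignment = {}
    node_cycle = cycle(nodes)
    for token in tokens:
        assignment[token] = next(node_cycle)
    return assignment

# Hypothetical example: four tokens staged across two selected nodes.
plan = schedule_tokens(["t1", "t2", "t3", "t4"], ["nodeA", "nodeB"])
```

Each token would then be staged to its assigned node before the corresponding portion of the workflow executes there.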
workflow and then map part of the workflow to distributed services through
the use of one of the internal scripts, e.g., parallel or pipeline. In this mode,
Triana distributes workflows by using (or deploying on the fly) distributed
Triana services that can accept a Triana task graph as input. In the case
of a task-based workflow, the user can designate portions of the workflow as
compute-intensive, and Triana will send the tasks to the available resources for
execution. It can, for example, use the GAT interface to the Gridlab GRMS
broker [52] to perform resource selection at runtime. Workflows can also be
specified using a number of built-in scripts that map a simple workflow
specification (e.g., a loop) onto multiple distributed resources, simplifying
the orchestration process for distributed rendering. Such scripts can map
subworkflows onto available resources using any of the available
service-oriented bindings, e.g., Web Services Resource Framework (WSRF), Web
services, and peer-to-peer (P2P) services, with built-in deployment services
for each binding.
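The behavior of a distribution script such as parallel can be sketched as a mapping from subworkflows of the task graph onto available services. The function names, the round-robin choice, and the dispatch callback below are hypothetical illustrations, not Triana's API:

```python
def parallel_map(subworkflows, services, dispatch):
    """Hypothetical 'parallel' distribution script: send each
    subworkflow of the task graph to one of the available
    distributed services (round-robin), mirroring how a built-in
    script maps a simple specification onto remote resources."""
    results = []
    for i, sub in enumerate(subworkflows):
        service = services[i % len(services)]
        results.append(dispatch(service, sub))
    return results

# Hypothetical local stand-in for a remote dispatch over one of the
# service-oriented bindings (WSRF, Web services, P2P).
def fake_dispatch(service, sub):
    return f"{service} ran {sub}"

out = parallel_map(["renderA", "renderB", "renderC"],
                   ["svc1", "svc2"], fake_dispatch)
```

In a real deployment the dispatch callback would hand the subworkflow's task graph to a remote Triana service, deploying one on the fly if necessary.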
Workflows specified in DAGMan can be a mixture of concrete and abstract
tasks. When DAGMan is interfaced to a Condor task execution system [37], the