A scatter plot showing the distribution of all polymorphs and pinpointing possible discoveries of crystal structures is produced. The visualizer workflow coordinates everything from the creation of the basic summary page to the dynamic update at the end of each batch of DMAREL executions. A Web service that assists the visualizer, for example by performing XSLT transformations of the data from Chemical Markup Language (CML) to HTML and by storing and merging visualization data, has been developed and orchestrated.
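The CML-to-HTML step might resemble the following minimal XSLT sketch; the CML element layout, the dictRef value, and the table format are illustrative assumptions rather than the project's actual stylesheet:

<?xml version="1.0"?>
<!-- Sketch of a CML-to-HTML transformation; element names, the
     dictRef value, and the table layout are illustrative assumptions. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:cml="http://www.xml-cml.org/schema">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <html>
      <body>
        <table border="1">
          <tr><th>Structure</th><th>Lattice energy</th></tr>
          <!-- one row per structure in the (assumed) CML summary -->
          <xsl:for-each select="//cml:molecule">
            <tr>
              <td><xsl:value-of select="@id"/></td>
              <td><xsl:value-of
                  select="cml:property[@dictRef='energy']/cml:scalar"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>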
The last workflow is the job manager, which is in charge of job submission and monitoring. The core interactions of the workflow with the underlying grid resources are supported by invocations of the middleware service GridSAM. GridSAM is another OMII-managed program product; it mediates the interactions with grid resources managed by Condor, Globus, or other infrastructures that may be bridged into the software architecture by exposing a general interface for job submission, monitoring, and control. Job descriptions provided by the users, expressed in the GGF (Global Grid Forum) standard Job Submission Description Language (JSDL), are translated by GridSAM at run time into the actual submission scripts used by the resource manager in operation.
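A minimal JSDL descriptor of the kind GridSAM consumes might look as follows; the executable path, arguments, and staging URI are illustrative assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal JSDL job description; executable, file names, and the
     staging URI are illustrative only. -->
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/usr/local/bin/dmarel</jsdl-posix:Executable>
        <jsdl-posix:Argument>input.dat</jsdl-posix:Argument>
        <jsdl-posix:Output>dmarel.out</jsdl-posix:Output>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:DataStaging>
      <jsdl:FileName>input.dat</jsdl:FileName>
      <jsdl:CreationFlag>overwrite</jsdl:CreationFlag>
      <!-- staged in via one of the supported protocols, here FTP -->
      <jsdl:Source>
        <jsdl:URI>ftp://data.example.org/structures/input.dat</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>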
A variety of data staging methods, such as FTP, SFTP, and WebDAV, is supported for fetching files on the grid. File paths to be specified in the job description can be determined and composed dynamically in a BPEL process, using appropriate XPath queries and expressions within assign activities, while other information, such as the user account, file names, flags, and operating system requirements, can be assigned statically.
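For instance, a WS-BPEL 2.0-style assign activity might compose a staging URI at run time along these lines; the variable, part, and directory names are illustrative assumptions:

<!-- Sketch of an assign activity composing a data-staging URI at run
     time; variable, part, and directory names are illustrative. -->
<assign name="ComposeStagingURI">
  <copy>
    <!-- build the path from the batch identifier held in a process variable -->
    <from>concat('ftp://data.example.org/runs/',
                 $batchInfo.payload/batchId, '/input.dat')</from>
    <to variable="jsdlRequest" part="payload">
      <query>//jsdl:DataStaging/jsdl:Source/jsdl:URI</query>
    </to>
  </copy>
</assign>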
In any case, the job manager takes any valid JSDL descriptor as its input and builds advanced process logic, such as fault handling, error detection, resubmission, and timing control, around the core orchestration of GridSAM services, so as to guarantee the quality of service provision on which both the MOLPAK and DMAREL workflows heavily rely.
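The resubmission logic wrapped around a GridSAM invocation might be sketched, in WS-BPEL 2.0 style, as follows; the partner link, operation, and variable names are assumptions, and $attempts and $submitted are presumed initialized to 0 and false() earlier in the process:

<!-- Sketch of fault handling and resubmission around a GridSAM job
     submission; names are illustrative assumptions. -->
<while>
  <condition>$attempts &lt; 3 and not($submitted)</condition>
  <scope>
    <faultHandlers>
      <catchAll>
        <!-- count the failed attempt; the loop then resubmits -->
        <assign>
          <copy><from>$attempts + 1</from><to variable="attempts"/></copy>
        </assign>
      </catchAll>
    </faultHandlers>
    <sequence>
      <invoke partnerLink="gridsam" operation="submitJob"
              inputVariable="jsdlRequest" outputVariable="jobId"/>
      <assign>
        <copy><from>true()</from><to variable="submitted"/></copy>
      </assign>
    </sequence>
  </scope>
</while>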
During the parallel job submissions, a vast number of job managers are instantiated, each running independently until its execution completes. A detailed breakdown of the execution progress and status can be examined separately in the ActiveBPEL admin console.
Because the actual number of machines available to the system in the Condor pool at any one time is usually limited by concerns of fairness and administration, the top-level workflow constrains the number of parallel DMAREL submissions to three batches (i.e., 600 jobs handled by three DMAREL workflow instances) at any one time, based on a FIFO policy. Whichever MOLPAK batch completes first, its DMAREL processing is dispatched as soon as the token acquired by one of the previous batches is released.
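One way to realize this three-lane FIFO throttle in WS-BPEL 2.0 is a parallel forEach in which each lane repeatedly acquires the next pending batch, runs a DMAREL workflow instance on it, and releases its token; the coordinator partner link, its operations, and the loop variables are illustrative assumptions:

<!-- Sketch of the three-lane FIFO throttle; $laneId and $batchesRemain
     are presumed maintained from the counter and coordinator responses. -->
<forEach counterName="lane" parallel="yes">
  <startCounterValue>1</startCounterValue>
  <finalCounterValue>3</finalCounterValue>
  <scope>
    <while>
      <condition>$batchesRemain</condition>
      <sequence>
        <!-- blocks until a completed MOLPAK batch is available (FIFO) -->
        <invoke partnerLink="coordinator" operation="acquireBatch"
                inputVariable="laneId" outputVariable="batch"/>
        <invoke partnerLink="dmarelWorkflow" operation="runBatch"
                inputVariable="batch" outputVariable="batchResult"/>
        <invoke partnerLink="coordinator" operation="releaseToken"
                inputVariable="laneId"/>
      </sequence>
    </while>
  </scope>
</forEach>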
Synchronizations have also been carefully wired into the workflows to prevent corrupted updates to shared holder variables, such as the one that accumulates all the DMAREL optimization summaries used in the visualization.
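Such serialized updates map naturally onto WS-BPEL 2.0 isolated scopes, as in the following sketch; the variable names and the concatenation-based merge are illustrative assumptions:

<!-- Sketch: an isolated scope serializes concurrent updates to the
     shared summary variable; a real process would likely merge the
     XML fragments rather than concatenate them. -->
<scope isolated="yes">
  <assign name="MergeBatchSummary">
    <copy>
      <from>concat($allSummaries, $batchResult.payload/summary)</from>
      <to variable="allSummaries"/>
    </copy>
  </assign>
</scope>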
The MOLPAK and DMAREL workflows also rely on another Web service to perform data processing in between executions, for example to extract and transform intermediate