prevent the logical clock from advancing. As a result, no messages are processed on that node while it
is waiting, and all incoming messages are queued. The queuing of incoming messages allows messages
from the future to be delayed, and then processed after a slower node “catches up.”
Any request or reply leaving a node carries the current logical clock value of that node. This value
is used by the receiving node to sort the incoming messages based on their time of release and the
logical clock of the receiving node. If the logical clock of the receiving node is later than that of the
incoming message, then the message is stamped with the value of the logical clock at the receiving
node.
Each “item” (dispatchable or request/reply message) carries its logical execution time, which is
predefined for each item. When an item is ready and most eligible for execution, the clock thread
dequeues it and checks whether it can complete its execution before the earliest more eligible item's
release time. If it can complete its execution before another more eligible item must be processed, the
clock thread enqueues the current item in the appropriate “lane” for “actual” execution on the pro-
cessor. If not, the clock thread “simulates” the partial execution of the current item without “actually”
executing it, by (1) storing the remaining logical execution time in the item itself and (2) enqueuing
the updated item back into the clock thread's queue so that it can compete with other enqueued items
for its next segment of execution eligibility.
A lane can be configured to run a single thread or a pool of worker threads. As described in
Section .., without clock simulation each lane thread is run at its own actual OS priority. In the
simulation environment, time and eligibility are accounted for by the logical clock thread, so all the
lane threads are run at the same actual OS priority. Each lane still maintains its logical priority in
thread-specific storage [], so that the logical priority can be sent with the request messages and
used for eligibility ordering, as it will be in the target system.
Figure . shows an illustrative sequence of events that results from the addition of the logical
clock thread to nORB, assuming for simplicity that all items run to completion rather than being
preempted.
1. When the logical clock advances, the clock thread goes through the list of dispatchables to
see whether any are ready to be triggered. A “ready” dispatchable is one whose next trigger
time is less than or equal to the current logical clock and whose previous invocation has
completed execution. In general, the clock thread determines the earliest time a message
or dispatchable will be run, and marks all items with that time as being ready.
2. Any ready dispatchables are released to the clock thread's queues, according to their
assigned logical priorities.
3. The clock thread selects the most eligible ready item (message or dispatchable) from
among its priority queues. The clock thread then enqueues the selected item in the appro-
priate priority lane of the dispatcher, where it will compete with other messages and
locally released dispatchables.
4. The corresponding lane thread in the dispatcher dispatches the enqueued item. The
resulting upcall might in turn invoke a remote call to a servant object, which we describe
in the following steps.
5. The logical priority of the dispatchable or the message is propagated to the server side.
Currently, the application scheduler uses RMS to decide the logical priority of the dis-
patchable based on its simulated rate. Each lane thread stores its assigned logical priority
in thread-specific storage []. The actual OS priorities of all the lane threads are kept the
same under the clock simulation mechanism.
6. An incoming message is accepted by the server's reactor thread and is enqueued for tem-
poral and eligibility ordering by the clock thread. Note that there is only one reactor
thread, which runs at an actual priority level between the clock and the lane threads' actual
priorities.
 