j in a cycle is obviously equal to E[SJ_j], so that we again find the result (A.81).
Note that in this latter case the future evolution of the process, given that we are at the beginning of a cycle, is independent of both the past history and the present state. The process is thus said to be regenerative with respect to these new cycle starting instants.
The cycle analysis described above can be used to evaluate the steady-state distribution of regenerative stochastic processes: (A.84) gives the average cycle time, and the steady-state probabilities of the process then follow from (A.81).
A.7 Semi-Markov Reward Processes
A semi-Markov reward process is obtained by associating with each state i ∈ S of a semi-Markov process X(t) a real-valued reward rate r_i. The semi-Markov reward process R(t) is then defined as:

R(t) = r_{X(t)}    (A.85)
Assume, for the sake of simplicity, that the state space of the semi-Markov process is finite. Denote by η the steady-state distribution of the SMP, and by r the row vector of the reward rates. The expected reward rate in steady state, R, can be obtained as:
R = Σ_{i∈S} r_i η_i = r η^T    (A.86)
where η^T is the column vector obtained by transposing the row vector η.
The expected reward accumulated over a period of duration τ in steady state can then simply be obtained as Rτ.
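As a minimal numerical sketch of (A.86), the following lines compute R and the accumulated reward Rτ for a hypothetical three-state process; the values of η and r are invented for the example.

    import numpy as np

    eta = np.array([0.5, 0.3, 0.2])   # assumed steady-state distribution (row vector)
    r = np.array([10.0, 4.0, 0.0])    # assumed reward rates r_i, one per state

    # Expected reward rate in steady state: R = sum_i r_i eta_i = r eta^T   (A.86)
    R = r @ eta
    print(R)          # 6.2

    # Expected reward accumulated over a period of duration tau: R tau
    tau = 100.0
    print(R * tau)    # 620.0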
Markov reward processes (MRPs) are obtained in the special case in which
the process X(t) is a continuous-time Markov chain.
The definition of a (semi-)Markov reward process model is often useful, because aggregate performance parameters are typically computed as a weighted sum of the probabilities resulting from the steady-state solution of the stochastic process. These weights can be interpreted as rewards, so that a clear abstract framework can be formulated, as the example below illustrates.
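For instance, in a hypothetical queueing model whose state i counts the customers in the system, assigning reward rate 1 to the busy states yields the server utilization, while assigning reward rate i to state i yields the mean number of customers; both are weighted sums of the steady-state probabilities, as sketched below with invented values for η.

    import numpy as np

    eta = np.array([0.4, 0.3, 0.2, 0.1])   # assumed steady-state probabilities of states 0..3

    # Utilization: reward 1 in the busy states (i >= 1), 0 in the idle state
    r_util = np.array([0.0, 1.0, 1.0, 1.0])
    print(r_util @ eta)                    # 0.6

    # Mean number of customers: reward i in state i
    r_len = np.array([0.0, 1.0, 2.0, 3.0])
    print(r_len @ eta)                     # 1.0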
 
 