5.2 The Q-learning Algorithm with SR
The main steps of the Q-learning algorithm with SR are the same as those of the
Q-learning algorithm without SR; only the state space defined in Section 4
needs to be changed, as follows:
Let S_SR = T × R × WL be the set of states, where:
- let T be a finite set of tasks;
- let R be a finite set of previous resources;
- let WL be the set of workloads of all resources.
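To make the state definition concrete, here is a minimal Python sketch of one possible encoding of S_SR; the class and field names (StateSR, task, prev_resources, workload) are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import FrozenSet, Mapping, Tuple

# Hypothetical encoding of the SR state space S_SR = T x R x WL.
# Names and layout are illustrative; the paper prescribes no data structure.
@dataclass(frozen=True)
class StateSR:
    task: str                       # t in T: the work item being allocated now
    prev_resources: FrozenSet[str]  # r_exs: resources of completed items in the same case
    workload: Tuple[Tuple[str, int], ...]  # wl: workload snapshot of all resources

def make_state(task: str, prev_resources, workload: Mapping[str, int]) -> StateSR:
    # Freeze both collections so states can serve as hashable Q-table keys.
    return StateSR(task, frozenset(prev_resources), tuple(sorted(workload.items())))
```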
For example, consider a particular state ŝ = (t_2, r_exs, ŵl) ∈ S_SR, which
means that the agent is now allocating t_2 and r_exs is the set of previous
resources of the completed work items in the same case. Because of the changed
state space, the Q_n(s, a) function should be changed to:

Q(s, a) = QTable(t, r_exs, r, wl(r))
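Building on the StateSR sketch above, the lookup QTable(t, r_exs, r, wl(r)) might be realized as below; the ε-greedy selection and the α/γ update rule are standard Q-learning ingredients, and their values here are placeholders rather than values from the paper.

```python
import random
from collections import defaultdict

# Q-table keyed as QTable(t, r_exs, r, wl(r)); alpha, gamma and epsilon
# are placeholder hyperparameters, not taken from the paper.
Q = defaultdict(float)

def q_key(state: StateSR, r: str):
    wl_r = dict(state.workload).get(r, 0)  # wl(r): current workload of candidate r
    return (state.task, state.prev_resources, r, wl_r)

def choose_resource(state: StateSR, candidates, epsilon: float = 0.1) -> str:
    # Epsilon-greedy selection over the candidate resources.
    if random.random() < epsilon:
        return random.choice(list(candidates))
    return max(candidates, key=lambda r: Q[q_key(state, r)])

def update(state, r, reward, next_state, next_candidates,
           alpha: float = 0.5, gamma: float = 0.9) -> None:
    # Standard one-step Q-learning backup over the extended state.
    best_next = max((Q[q_key(next_state, c)] for c in next_candidates), default=0.0)
    k = q_key(state, r)
    Q[k] += alpha * (reward + gamma * best_next - Q[k])
```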
6 Experiment
The experiment in this paper analyzes the performance of task allocation with
SR under the assumption that the previous resources influence the candidate
resources. In this section, the performance of Q-learning with SR is therefore
compared with that of Q-learning without SR, and the results are analyzed from
the average flow time and throughput perspectives.
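For reference, the two metrics might be computed as follows, assuming each completed case records its arrival ("start") and completion ("end") timestamps; these field names and helpers are hypothetical, not from the paper.

```python
from datetime import timedelta

# Hypothetical metric helpers; "start"/"end" are assumed datetime fields.
def average_flow_time(cases) -> float:
    # Flow time of a case = completion time minus arrival time (in seconds).
    durations = [(c["end"] - c["start"]).total_seconds() for c in cases]
    return sum(durations) / len(durations)

def throughput(cases, horizon: timedelta) -> float:
    # Throughput = completed cases per second over the observation horizon.
    return len(cases) / horizon.total_seconds()
```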
6.1 Experiment Setting
The event log of a real-life process from the BPI Challenge 2012¹, taken from a
Dutch financial institute, is used for the simulation. About 6078 cases, 5
activities and 55 resources are extracted from the real log. In Fig. 1, the
process model is expressed in terms of a workflow net [16]. The process may
start with W_Afhandelen leads with probability 0.46 or with
W_Completeren aanvraag with probability 0.54.
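As a toy illustration only, the reported start distribution could be simulated as below; sample_start_activity, the fixed seed, and the case count loop are assumptions for the sketch, not part of the paper's experiment setup.

```python
import random

# Toy simulation of case starts matching the reported distribution:
# 46% begin with W_Afhandelen leads, 54% with W_Completeren aanvraag.
START_ACTIVITIES = [("W_Afhandelen leads", 0.46), ("W_Completeren aanvraag", 0.54)]

def sample_start_activity(rng: random.Random) -> str:
    activities, weights = zip(*START_ACTIVITIES)
    return rng.choices(activities, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed only for reproducibility of the sketch
starts = [sample_start_activity(rng) for _ in range(6078)]  # ~6078 cases as in the log
```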
Fig. 1. A Real-life Financial Process
1 http://www.win.tue.nl/bpi/2012/challenge
 