extent allowed by the process model). ARE p shows similar behavior, settling to normal error rates once the probabilistic message states and flow model updates have stabilized. The consistently nonzero error rates are due to the simple activity recommendation algorithm: activities at branching points receive almost equal probabilities, and the log sequences happen to prefer the second-highest choice.
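The effect of near-equal branch probabilities can be illustrated with a minimal sketch (all names and numbers here are hypothetical, not taken from the evaluated system): a recommender that always picks the highest-probability successor keeps making errors whenever the log prefers the second-highest choice.

```python
# Hypothetical sketch: why near-equal branch probabilities keep the
# activity recommendation error above zero.

def recommend(probs):
    """Recommend the next activity with the highest estimated probability."""
    return max(probs, key=probs.get)

# At a branching point the learned probabilities are almost equal ...
branch_probs = {"B": 0.52, "C": 0.48}

# ... but the observed log sequence happens to prefer the second choice.
log_next = ["C", "C", "B", "C"]

errors = sum(1 for actual in log_next if recommend(branch_probs) != actual)
error_rate = errors / len(log_next)
print(error_rate)  # 3 of 4 recommendations are wrong -> 0.75
```

Because the recommender deterministically returns the marginally more probable branch, every log entry that takes the other branch counts as a recommendation error, which keeps the error rate strictly above zero even after the model has stabilized.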
In experiment 2 (Figure 6 b and d), we apply the same sequences but start from an empty process model, i.e., all FlowData annotations are on the first arc towards activity (A). As expected, MCE p and ARE p are high during the first few iterations, but they quickly decrease to low error rates and then settle to the same rates as the evolved process model in experiment 1.
[Figure 6: four panels (a)-(d) plotting error rates over 0-30 iterations, each comparing Log Sequence A, Log Sequence B, and Log Sequence A+B.]
Fig. 6. Overall Message Classification Error MCE p (a,b) and Activity Recommendation Error ARE p (c,d) over 30 process iterations for fixed and alternating log sequences; for the existing (a,c) and empty (b,d) process model
6.3 Discussion
Two important characteristics describe our self-learning approach. First, the process model and state management model allow quick stabilization, which is reflected in the low prediction and recommendation error rates. As a positive side effect, the user is hardly affected by incorrect activity recommendations. Second, the described techniques apply not only to cases of process evolution but also efficiently support a grass-roots approach to process learning. We observe similarly swift adaptation when no message-activity dependencies are known a priori. The applied 30 iterations are sufficient to demonstrate the learning behavior, as our algorithm considers all changes simultaneously. The presented