5.1 The experiment
In the following, we describe the method of our experiment:
Generate the decision repository:
The repository is generated in terms of predicates (decision
points and choices). We generated four sets containing 1000, 5000, 15000, and 20000
choices. Choices are defined as numbers in sequential order; for example, in
the first set (1000 choices) the choices are 1, 2, 3, …, 1000, and in the last set (20000 choices) the
choices are 1, 2, 3, …, 20000. The number of decision points in each set equals the number
of choices divided by five, so each decision point has five choices.
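The generation step above can be sketched as follows. This is a minimal illustration of the described scheme, not the authors' actual generator; the function name and the fact spellings mirror the predicate forms shown in Table 6.

```python
# Sketch of repository generation: choices are sequential integers, and
# each decision point owns five consecutive choices, as described above.
def generate_repository(num_choices, choices_per_dp=5):
    """Return a list of Prolog-style facts for one experiment set."""
    facts = []
    num_dps = num_choices // choices_per_dp
    for dp in range(1, num_dps + 1):
        facts.append(f"type(dp{dp},decisionpoint).")
    for c in range(1, num_choices + 1):
        facts.append(f"type({c},choice).")
        owner = (c - 1) // choices_per_dp + 1  # owning decision point
        facts.append(f"variants(dp{owner},{c}).")
    return facts

repo = generate_repository(1000)
# 1000 choices -> 200 decision points, each with five choices
```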
Define the assumptions:
We make three assumptions: i) each decision point and choice
has a unique name, ii) each decision point is orthogonal, and iii) all decision points have
the same number of choices.
Set the parameters:
The main parameters are the number of choices and the number of
decision points. The remaining eight parameters (common choice, common decision
point, choice requires choice, choice excludes choice, decision point requires decision
point, decision point excludes decision point, choice requires decision point, and choice
excludes decision point) are defined as percentages. Three ratios are used: 10%, 25%,
and 50%. The number of parameters related to choices (common choice,
choice requires choice, choice excludes choice, choice requires decision point, and choice
excludes decision point) is defined as a percentage of the number of choices. The
number of parameters related to decision points (such as decision point requires decision
point) is defined as a percentage of the number of decision points. Table 6 shows a
snapshot of an experiment's dataset, i.e. the decision repository in our experiments.
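Our reading of the scaling rule above can be made concrete with a short sketch (the helper name is ours, not the authors'): choice-related parameters scale with the number of choices, decision-point-related parameters with the number of decision points.

```python
# Illustrative sketch of how the constraint parameters scale with a ratio.
def constraint_counts(num_choices, ratio, choices_per_dp=5):
    num_dps = num_choices // choices_per_dp
    per_choice = int(num_choices * ratio)  # e.g. common choice, requires_c_c
    per_dp = int(num_dps * ratio)          # e.g. requires_dp_dp
    return {"choice_related": per_choice, "dp_related": per_dp}

constraint_counts(20000, 0.25)
# {'choice_related': 5000, 'dp_related': 1000}
```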
Calculate the output:
For each set, we ran thirty experiments and calculated the
average execution time. The experiments were done over the range of 1000-20000
choices and the percentage ratios of 10%, 25%, and 50%.
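The measurement protocol above can be sketched as follows; `run_operation` is a placeholder for any of the analysis operations, and the timing helper is our own illustration, not the authors' harness.

```python
import time

# Minimal sketch of the protocol: run an operation thirty times over a
# repository and report the mean execution time.
def mean_execution_time(run_operation, repository, runs=30):
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        run_operation(repository)
        total += time.perf_counter() - start
    return total / runs
```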
In the following section, the experiments performed for dead decision detection,
explanation, and logical inconsistency detection are discussed. The remaining two operations
(constraint dependency satisfaction, and propagation and delete-cascade) work in a
semi-automatic decision environment, where some decisions are propagated automatically
according to the decisions already made. In a semi-automatic decision environment, scalability is not a
critical issue.
type(dp1,decisionpoint).
type(1,choice).
variants(dp1,1).
common(570,yes).
common(dp123,yes).
requires_c_c(7552,2517).
requires_dp_dp(dp1572,dp1011).
excludes_dp_dp(dp759,dp134).
excludes_c_c(219,2740).
requires_c_dp(3067,dp46).
excludes_c_dp(5654,dp1673).
Table 6. Snapshot of an experiment's dataset
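The facts in Table 6 follow a uniform Prolog-style syntax, so they can be read back generically. A minimal regex-based sketch (the `parse_fact` helper is our own illustration, not part of the authors' tooling):

```python
import re

# Parse Table 6-style facts, e.g. "requires_c_dp(3067,dp46).",
# into a (functor, argument-list) pair.
FACT_RE = re.compile(r"(\w+)\(([^)]*)\)\.")

def parse_fact(line):
    m = FACT_RE.fullmatch(line.strip())
    if not m:
        raise ValueError(f"not a fact: {line!r}")
    functor, args = m.group(1), m.group(2).split(",")
    return functor, [a.strip() for a in args]

parse_fact("requires_c_dp(3067,dp46).")
# ('requires_c_dp', ['3067', 'dp46'])
```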