Table 2. Game patterns in (N, M) = (3, 3) lottery for the last 5,000 turns

a. One QFP vs. two ALs

  φ     λ       Pattern 1   Pattern 2   Pattern 3
  0.1   0.1        32          33          35
  0.1   1.0        67          24           9
  0.1   10.0        5          14          81
  0.1   100.0      44          20          36

b. Two QFPs vs. one AL

  φ     λ       Pattern 1   Pattern 2   Pattern 3
  0.1   0.1        31          35          34
  0.1   1.0        12          27          61
  0.1   10.0       82          12           6
  0.1   100.0      23          61          16
Table 3. Game patterns in (N, M) = (3, 4) lottery for the last 5,000 turns

a. One QFP vs. two ALs

  φ     λ       Pattern 1   Pattern 2   Pattern 3
  0.1   0.1        52          27          21
  0.1   1.0        94           6           0
  0.1   10.0        1          12          87
  0.1   100.0      60           0          40

b. Two QFPs vs. one AL

  φ     λ       Pattern 1   Pattern 2   Pattern 3
  0.1   0.1        22          35          43
  0.1   1.0         0           5          95
  0.1   10.0       76          14          10
  0.1   100.0      15          85           0
With these learning algorithms, we run the computational experiments under
the following conditions:
- Each game has at least one QFP agent and at least one AL agent. Hence,
two kinds of three-person DIY-L and three kinds of four-person DIY-L are
considered:
  Three-person DIY-L:
    Two QFPs vs. one AL
    One QFP vs. two ALs
  Four-person DIY-L:
    Three QFPs vs. one AL
    Two QFPs vs. two ALs
    One QFP vs. three ALs
- Each game lasts 10,000 turns and is iterated 100 times.
- Each player knows the current turn, her previous submission, and the previous
winning integer (if there is no winner, this input is zero) for her decision-
making. That is, she does not directly know the previous submissions of the
others.
- Parameters are as follows: φ_a = φ_f = 0.1,^5 λ_a = λ_f = 0.1, 1.0, 10.0,
100.0, w_{i,k}(0) = 0.0 for k = 1, ..., M, and L_{-i}(0) = 0 and L_i(0) = 0
for i = 1, ..., N. If there are plural QFP or AL agents, they use the same
initial conditions.
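The experimental conditions above can be sketched as follows. This is only a minimal illustration, not the authors' code: the QFP and AL learning rules and the DIY-L payoff are defined elsewhere in the paper, and names such as `init_agent_state` and `observation` are hypothetical.

```python
# Minimal sketch of the experimental conditions (hypothetical names; the
# QFP/AL update rules themselves are defined elsewhere in the paper).

N, M = 3, 3            # three-person DIY-L over the integers 1..M
TURNS = 10_000         # turns per game
ITERATIONS = 100       # independent runs per parameter setting

PHI = 0.1                            # phi_a = phi_f = 0.1
LAMBDAS = [0.1, 1.0, 10.0, 100.0]    # lambda_a = lambda_f

def init_agent_state(m: int) -> dict:
    """Shared initial conditions: w_{i,k}(0) = 0.0 for k = 1..M,
    L_{-i}(0) = 0, and L_i(0) = 0."""
    return {"w": [0.0] * m, "L_self": 0, "L_others": 0}

def observation(turn: int, prev_submission: int, prev_winner) -> tuple:
    """Inputs available to a player each turn: the current turn, her own
    previous submission, and the previous winning integer (0 when there
    was no winner). She never sees the others' submissions directly."""
    return (turn, prev_submission, 0 if prev_winner is None else prev_winner)
```

Plural agents of the same kind start from identical copies of this state, as the conditions require.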
3.2 Result
The results presented in this section are from the last 5,000 turns of the
simulation runs, and we classified the runs into several game patterns
according to the number of wins for each agent.
^5 We also pursued simulations with φ_a = φ_f = 0.5 and 0.9, but due to limited
space we report only the results with φ_a = φ_f = 0.1.
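The classification step could be implemented, in outline, by tallying wins over the last 5,000 turns of each run. The actual mapping from win counts to Patterns 1-3 follows the paper's definitions, which this excerpt does not reproduce, so the sketch below stops at the tally.

```python
from collections import Counter

def tally_wins(win_history, last=5_000):
    """Count wins per agent over the last `last` turns of one run.

    `win_history` is a per-turn list of winner ids, with None on turns
    where the lottery produced no winner. These counts are the basis of
    the game-pattern classification in Tables 2 and 3.
    """
    tail = win_history[-last:]
    return Counter(w for w in tail if w is not None)
```

For example, `tally_wins(["QFP", None, "AL1", "QFP"], last=4)` yields `Counter({"QFP": 2, "AL1": 1})`; turns with no winner are simply skipped.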
 