References
[1] Ballard, B.W.: The *-Minimax Search Procedure for Trees Containing Chance Nodes.
Artificial Intelligence 21, 327-350 (1983)
[2] Baxter, J., Tridgell, A., Weaver, L.: Learning to Play Chess Using Temporal Differences.
Machine Learning 40(3), 243-263 (2000)
[3] Beal, D.F., Smith, M.C.: First Results from Using Temporal Difference Learning in
Shogi. In: van den Herik, H.J., Iida, H. (eds.) CG 1998. LNCS, vol. 1558, pp. 113-125.
Springer, Heidelberg (1999)
[4] Buro, M.: Experiments with Multi-ProbCut and a New High-Quality Evaluation Function
for Othello. Games in AI Research, 77-96 (1997)
[5] Game 1024, http://1024game.org/
[6] Game Threes!, http://asherv.com/threes/
[7] Game 2048, http://gabrielecirulli.github.io/2048/
[8] Knuth, D.E., Moore, R.W.: An Analysis of Alpha-Beta Pruning. Artificial Intelligence 6,
293-326 (1975)
[9] Melko, E., Nagy, B.: Optimal Strategy in Games with Chance Nodes. Acta Cybernetica
18(2), 171-192 (2007)
[10] nneonneo, xificurk (GitHub usernames): Improved algorithm reaching the 32k tile,
https://github.com/nneonneo/2048-ai/pull/27
[11] Overlan, M.: 2048 AI, http://ov3y.github.io/2048-AI/
[12] Pearl, J.: The Solution for the Branching Factor of the Alpha-Beta Pruning Algorithm
and Its Optimality. Communications of the ACM 25(8), 559-564 (1982)
[13] Schaeffer, J., Hlynka, M., Jussila, V.: Temporal Difference Learning Applied to a High-
Performance Game-Playing Program. In: Proceedings of the 17th International Joint
Conference on Artificial Intelligence, pp. 529-534 (August 2001)
[14] Silver, D.: Reinforcement Learning and Simulation-Based Search in Computer Go, Ph.D.
Dissertation, Dept. Comput. Sci., Univ. Alberta, Edmonton, AB, Canada (2009)
[15] Stack Overflow: What is the optimal algorithm for the game 2048?,
http://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048/22674149#22674149
[16] Sutton, R.S., Barto, A.G.: Temporal-Difference Learning. In: Reinforcement Learning:
An Introduction. MIT Press, Cambridge (1998)
[17] Szubert, M., Jaśkowski, W.: Temporal Difference Learning of N-tuple Networks for the
Game 2048. In: IEEE Conference on Computational Intelligence and Games (CIG 2014)
(August 2014)
[18] Taiwan 2048-bot, http://2048-botcontest.twbbs.org/
[19] Tesauro, G.: TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-
Level Play. Neural Computation 6, 215-219 (1994)
[20] Trinh, T., Bashi, A., Deshpande, N.: Temporal Difference Learning in Chinese Chess. In:
Tasks and Methods in Applied Artificial Intelligence, pp. 612-618 (1998)
[21] Wu, K.C.: 2048-c, https://github.com/kcwu/2048-c/
[22] Wu, I.-C., Tsai, H.-T., Lin, H.-H., Lin, Y.-S., Chang, C.-M., Lin, P.-H.: Temporal
Difference Learning for Connect6. In: van den Herik, H.J., Plaat, A. (eds.) ACG 2011.
LNCS, vol. 7168, pp. 121-133. Springer, Heidelberg (2012)
[23] Zobrist, A.L.: A New Hashing Method with an Application for Game Playing. Technical
Report 88, Computer Sciences Department, University of Wisconsin, Madison (April 1970)
 