An Alternative Method for Feature Reduction in Go
By William Mill
Advisor: Dr. Yahdi
2003




Computer programs are now among the best players in the world at most board games, including chess, checkers, Othello, and backgammon. However, despite a million-dollar prize for the first Go program to beat a professional, the best programs still play only at the level of a rank amateur. Although they have shown progress in recent years, the game of Go remains one of the largest open questions in artificial intelligence.

Brute-force methods for playing Go, such as those used so effectively for chess, are not very effective. The large size of the 19-by-19 Go board and the greater number of possible moves, combined with the lack of an effective board evaluation function, render them almost useless.

One of the promising areas of Go research is machine learning. Recent results have shown that it is possible to play a fairly strong game of Go using learning techniques such as neural networks to teach a program to play the game.

A neural network is a mathematical model of the operation of the brain that allows a program to learn to approximate a function. A researcher gives the neural network a series of examples with a definite expected output, and the network learns to approximate that output over time.
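The learn-from-examples idea can be sketched with a single linear "neuron" trained by gradient descent. This is an illustrative toy, not the networks used in the thesis: the target function y = 2x and all parameters here are hypothetical, chosen only to show a weight converging toward the expected outputs.

```python
import numpy as np

# Toy supervised learning: adjust a single weight w so that the
# "network" output w * x approximates the expected output 2 * x.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)
ys = 2.0 * xs                      # expected outputs supplied with each example

w = 0.0
for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = np.mean(2.0 * (w * xs - ys) * xs)
    w -= 0.5 * grad                # gradient-descent step

# After training, w has moved close to the true coefficient 2.0.
```

Real networks stack many such units with nonlinearities, but the training loop has the same shape: compare the output to the expected output, and nudge the weights to shrink the error.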

Again, the large size of the Go board handicaps the use of neural networks with the game. Because the board is so large, a network fed the raw board has trouble finding an approximation of a function to generate moves. Furthermore, it is common to feed not only the board or part of the board into the network, but also various features of the board determined at a stage called pre-processing. Eventually, too many features reduce the performance of a neural network, an effect called the curse of dimensionality.

In my research, I propose a new method of feature-space reduction, which I call absolute principal component analysis (APCA). In traditional principal component analysis (PCA), the eigenvectors with the largest eigenvalues are used to choose the most important features. In APCA, I order the eigenvectors by the magnitude (absolute value) of their eigenvalues instead of by their signed value. I then feed the features generated by PCA and APCA into a neural network to test their relative effectiveness for move prediction. My conjecture is that this will lead to increased move prediction ability.
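The difference between the two orderings can be sketched in a few lines of numpy. The thesis does not specify the matrix APCA is applied to, so this sketch assumes some symmetric feature matrix whose eigenvalues can be negative (for a true covariance matrix the eigenvalues are nonnegative, and the two orderings coincide); the matrix `M` below is a hypothetical example chosen so that the rankings actually differ.

```python
import numpy as np

def top_components(mat, k, use_abs=False):
    """Return the k leading eigenvalues/eigenvectors of a symmetric matrix,
    ranked by signed eigenvalue (PCA-style) or by magnitude (APCA-style)."""
    vals, vecs = np.linalg.eigh(mat)          # eigenvalues in ascending order
    key = np.abs(vals) if use_abs else vals
    order = np.argsort(key)[::-1][:k]         # indices of the k largest keys
    return vals[order], vecs[:, order]

# Hypothetical symmetric "feature" matrix with one large negative eigenvalue.
M = np.diag([3.0, -5.0, 1.0])

pca_vals, _ = top_components(M, 2, use_abs=False)   # -> [3.0, 1.0]
apca_vals, _ = top_components(M, 2, use_abs=True)   # -> [-5.0, 3.0]
```

Here PCA keeps the components for eigenvalues 3 and 1, while the magnitude-based ordering keeps -5 and 3 instead, so the two methods hand different feature sets to the downstream network.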