Artificial Intelligence In Video Games: A Survey





Thomas Klemensen and Wesley Iliff

University of Northern Iowa

genesistk@cfu.net

iliffw@uni.edu












Abstract


Artificial Intelligence research has seen many uses and many dead ends. Video games are one of the newest venues for artificial intelligence research, and many have found them an excellent way to model human-level AI, machine learning, and scripted behavior. In this survey of the work that has been done in the game industry, we examine the past, present, and future of AI research. We examine the primary uses for AI in video games, the research's beginnings in board game systems, as well as modern techniques used to model computer opponents. Included in these techniques are minimax/alpha-beta algorithms, decision trees, scripted behavior, genetic algorithms, and learning agents. We also condense the future outlook of the industry according to researchers and industry heads, outlining the impact of new technologies such as multi-core processors and online gaming.



I. Introduction



Video games have become big business; so large, in fact, that in yearly earnings the video game industry has overtaken the movie industry (Laird & Lent 2001). As the industry grows, more resources are allocated to the development of video games. In the growing field of video game design, artificial intelligence has taken a lead role as a topic of choice for researchers. End users gauge a game's value by the quality of the AI in the game. If the enemies and allies act more human and more complex, the game is more entertaining. This requires walking a line that is rarely addressed in other fields of AI, but often seen in popular culture. AI agents in video games must be smart enough to seem competent and even challenging opponents, but must also operate within the confines of human senses and reaction times, to prevent the notion that the AI opponent is somehow "cheating."


Game AI has several applications and can be broken down into several roles an agent can fulfill (Laird & Lent 2001). Enemies need to act at an individual tactical level, incorporating maneuvers more complex than approaching and attacking the player. Groups of enemies need to work with each other to achieve common goals, flanking the player or working together to trick them. Partners and support characters must work with the player, anticipating their needs and supplying relevant tips and assistance. In sports games commentators must react to uncertain actions, and in role-playing games Non-Player Characters (NPCs) must drive the story forward based on player input. All of these problems and more may be addressed by using a host of AI techniques. The resources that video game development has available allow for AI research that may have implications outside the target field.




Figure 1: Grand Theft Auto IV, featuring a town that
runs

on its own AI routine, had a budget of over $100
million.



In this survey we will explore the history of AI research in games, its current uses, and its possible future applications. By examining the most prevalent tactics for game AI, we can see the growing trends in pursuit of the ultimate game AI goal, human-level AI. This will also demonstrate why video games can be a driving force in AI research not only for video games, but for the field at large.



II. Background



At the start of the interest in AI research, there were no real video games available, so there was a strong push to test a computer's "intelligence" using the popular games that had been played for years and, in some cases, even centuries. Trying to make a machine that could play chess, checkers, bridge, poker, and many other games that would seem to take intelligence to play was too tempting to pass up. The work on computer games has been one of the most successful and visible results of AI research and has resulted in advances in numerous areas of computing (Schaeffer 2001).


One of the first games that grabbed the attention of AI researchers was chess: in 1950, Claude Shannon published his work on how to program a computer to play chess (Schaeffer 2001). It was nearly another 50 years before the hardware and AI programming techniques had advanced far enough for a computer to be able to win against a grandmaster in the game. In 1997, Grandmaster Garry Kasparov lost a 6-game match to the chess computer DEEP BLUE. DEEP BLUE was estimated to analyze 200,000,000 chess positions per second, while a human player analyzes only two per second (Schaeffer 2001). As great a victory for the AI community as this was, it also proved how great the human mind is, and how far AI research has to go to even come close to true intelligence. The final score of the 6-game match was DEEP BLUE: 2 wins, 3 draws, and 1 loss, showing the efficiency of human cognitive power, which could lose by such a small margin while the difference in moves analyzed per second was astronomically huge.


Arthur Samuel worked on a checkers program. But unlike DEEP BLUE, which used tables for opening moves and closing moves and an alpha-beta search pattern, it focused on the use of learning algorithms that learned checkers by playing. It won a single game in an exhibition match against checkers champion Robert Nealey in 1963. In 1970, a Duke University research team wrote a program that beat Nealey in a short match (Schaeffer 2001). In 1989, a University of Alberta research team, led by Jonathan Schaeffer, wrote a checkers-playing program called CHINOOK. In 1990 CHINOOK lost to World Champion Marion Tinsley, 4 games to 2. But in the 1994 rematch, CHINOOK played all 6 games to a draw. CHINOOK uses alpha-beta pruning with iterative deepening, a transposition table, move ordering, search extensions, and search reductions (Schaeffer 2001). CHINOOK has the honor of being the first program to win a human world championship for any game. In 2007, researchers writing in the journal Science declared CHINOOK unbeatable; the best that an opponent can ever hope to achieve is a draw. Checkers is now just a larger version of tic-tac-toe. Players desiring to challenge CHINOOK can do so at: http://www.cs.ualberta.ca/~chinook/


Other games where computers have come to dominate the human competition are Othello, Scrabble, Nine Men's Morris, Connect-4, Qubic, Go Moku, and 8x8 Domineering. After witnessing the Scrabble program MAVEN beat World Champion Adam Logan, Brian Sheppard wrote, "…MAVEN should be moved from the championship caliber class to the abandon hope class" (Schaeffer 2001). Even though a computer simulating a game faces many disadvantages, in some games it has distinct and even unfair advantages. For instance, the Scrabble program has the entire Scrabble dictionary programmed into it as data.


Although great strides have been made in artificial intelligence for every area of game design, there are some games that computers still cannot play at the master level, let alone the grandmaster level. A couple of the reasons for this are that some of the games have a very high branching factor or too much unknown information. A few of the titles that still defy computer AI mastery are Bridge, Go, and Poker (especially Texas Hold'em) (Schaeffer 2001).


A notable phenomenon that arises when a game joins the solved category is that researchers seem to lose interest in it. A been-there-done-that attitude appears, and attention is redirected to the next challenge. This attitude keeps driving AI research ahead and keeps the field fresh and new. As new games are created, they too benefit from all the research that has been done and, in many cases, set the bar even higher.


Video game AI is in its infancy, but great advances are already being made, and a great majority of the recent advances in AI have their roots in game design. As Arthur Samuel, creator of the original checkers program, wrote, "Programming computers to play games is but one stage in the development of understanding … it seems reasonable to assume that these newer techniques will be applied to real-life situations with increasing frequency…" (Schaeffer 2001)



III. Learning Agents



Learning Agents are a focal point of creating human-level AI in video games. If an intelligent agent can observe and modify behavior based on what it sees the player doing, the world becomes more immersive, more challenging, and hopefully more entertaining for the player. The idea of evolving NPC AI has been broached for several topics, but none more prevalently than evolving enemy AI.


The idea of evolving enemies has persisted for decades, owing to the fact that many games increase difficulty simply by making enemies more resilient or increasing their number. This makes the idea of tactically increasing difficulty immediately more appealing. The foremost room for study in this kind of learning is in First Person Shooters (Overholtzer & Levy 2005). Because shooting games can mimic real-life combat tactics in use today, there are many places to get a baseline of behavior. However, scripting this directly can only go so far. While the enemies may react like trained war veterans, the player may act more erratically and disrupt established strategies. This is where an element of learning comes in.


Overholtzer and Levy demonstrated that real-time learning and generational variation can mark a steady increase in agent capabilities and challenge. This was accomplished by implementing a relatively simple genetic learning algorithm in an open source first person shooter called Cube. While the data set used is fairly rudimentary, the advancement of gaming hardware over time shows that this kind of learning is promising and can be applied. In fact, these ideas are used in AI today under the blanket term of ALife (Baker 2002).


ALife is a blanket term used to describe AI that simulates biology. This means agents may choose from multiple actions instead of following a set script, an idea called 'fuzzy logic.' This may also include multiple agents coordinating together, or genetic algorithms wherein the agents evolve based upon a heuristic showing which agents have performed best. These genetic algorithms are an important part of the evolution of video game AI.


Learning for learning's sake does come with a price. While the end goal of a learning agent within a game may be to create a more believable character or a stiffer challenge, many games focus so much on the learning aspect that they become little more than virtual fish tanks. Attempts to incorporate this level of learning into an entertaining, playable game have been slow, but they are progressing.


NERO is an open source strategy game where learning takes a key role in game play (Stanley, Bryant, Karpov, & Miikkulainen 2006). In NERO, players must spend a period of time training their soldier agents using a given toolkit before sending them against each other. The idea is that the better-trained army will win over the inferior one. The system uses a kind of algorithm called a neural network (rtNEAT, specifically) to determine which routines are completing the training optimally and to propagate those traits as best possible. This displays the possibility of a real-time learning agent, but NERO shows neither the graphical fidelity nor the depth of a modern blockbuster game. Until processors can handle both a wealth of content and real-time learning, alternative forms of learning are being investigated.



Figure 2: NERO shows impressive technology, but low graphical quality.



Some researchers are finding that behavior may be learned dynamically, then transformed into a script and implemented statically at runtime (Kelly, Botea & Koenig 2008). In an experiment with the open-ended role-playing game The Elder Scrolls 4: Oblivion, researchers were able to use large-scale planners outside of the resource-heavy implementation required to plan while the game is playing. Instead, basic information is fed into a planning algorithm and the work of creating a script is automated. Creating a unique set of behavior in this way takes, at its longest, thirty seconds. A human doing the scripting manually, however, would take several hours or more depending on the complexity of the behavior desired. This could be widely applied to games currently using scripted behavior and allow for scripts more complex than those we currently have. While this does not allow for learning as the game is played, it allows for agents that have already learned before the game starts, creating a more entertaining experience for the user. This may be preferable to many of the existing techniques used for gaming AI today.
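
To make the idea concrete, here is a minimal Python sketch of that offline plan-then-script workflow. It is an illustration under assumed names (Action, plan_daily_routine, emit_script, and the goal decompositions are all invented), not the planner from the Oblivion experiment:

# Hypothetical sketch of offline plan-to-script generation, in the spirit of
# Kelly, Botea & Koenig (2008). All names and decompositions are invented.
from dataclasses import dataclass

@dataclass
class Action:
    name: str       # e.g. "walk_to"
    target: str     # e.g. "tavern"

def plan_daily_routine(goals):
    """Stand-in for an HTN planner: decompose each goal into primitive actions."""
    decompositions = {
        "eat":   [Action("walk_to", "tavern"), Action("buy", "bread"), Action("eat", "bread")],
        "work":  [Action("walk_to", "smithy"), Action("use", "anvil")],
        "sleep": [Action("walk_to", "home"),   Action("use", "bed")],
    }
    plan = []
    for goal in goals:
        plan.extend(decompositions[goal])
    return plan

def emit_script(plan):
    """Flatten the plan into a static script the game engine runs at runtime."""
    return "\n".join(f"{a.name} {a.target}" for a in plan)

# Offline, at authoring time -- not while the game is running:
print(emit_script(plan_daily_routine(["work", "eat", "sleep"])))

Run offline at authoring time, this prints a flat action script that the engine could then execute verbatim, avoiding any planning cost during play.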


IV. Simple AI Technologies



Video game AI has not always been a complicated ordeal. For years, programmers have relied on simple mechanisms to allow enemies to react to a player. For a long time, "artificial intelligence" for an enemy meant very simple behavior, such as approaching an enemy in a straight or semi-straight pattern. The classic arcade game Space Invaders featured constructs that could only advance toward the player in a short, repeated pattern, and Pong featured a paddle that would simply detect the trajectory of a ball and attempt to intercept it (Wexler 2002).


Until the past decade or so, game AI made few attempts to progress past the charging enemy. Other forms of games, however, had been setting the stage for more complex interactions. Perhaps the most famous example is DEEP BLUE. By calculating hundreds of millions of moves per second to a human's two, DEEP BLUE managed to outwit the champion by sheer brute force (Schaeffer 2001).


The most common form of this algorithm seen in game playing is minimax. In a minimax algorithm, an AI agent recursively simulates an optimal opponent to see what that opponent's best move would be, which in turn simulates the original agent checking its move, until a decision is reached. This creates a tree structure that allows the agent to decide the best move available in its current state. The tree is recalculated each turn, allowing the agent to make the best possible move at all times. This works well in a situation where the agent can observe the entire world and all possible actions, such as in a board game. While this approach has been used in other video games before, the size of the tree created can grow exponentially and the computational time can swiftly get out of hand. As most video games must perform in real time, this algorithm is increasingly out of place in a gaming landscape populated by fast-paced action games and swiftly tactical first person shooters.
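
For concreteness, below is a minimal minimax sketch in Python with alpha-beta pruning, the refinement programs in the DEEP BLUE tradition used to cut off branches an optimal opponent would never permit. The game-state interface (legal_moves, apply, evaluate, is_terminal) is an assumed abstraction for illustration, not any particular engine's API:

# Minimax with alpha-beta pruning over an assumed game-state interface.
# Typical root call: minimax(state, depth=4, alpha=float("-inf"),
#                            beta=float("inf"), maximizing=True)
def minimax(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return state.evaluate()            # heuristic score of this position
    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, minimax(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:              # the opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, minimax(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best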


As technology has only recently found itself up to the task of dynamically computing behavior in real time, the most common form of video game AI for the past two decades has been scripted behavior (Baker 2002). Scripted behavior involves giving a computer opponent a set of possible player inputs and a set of reactions to that input, so that the opponent may know how to react to any given player action. This allows the opponent to react swiftly to stimuli, as it only needs to look up its action in some sort of table.
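
A toy Python sketch of such a table, with invented stimuli and reactions; the direct lookup is what makes a scripted opponent so fast, and the default case hints at the loopholes discussed below:

# Scripted behavior as a plain lookup table. Stimuli and reactions are invented.
SCRIPT = {
    "player_visible":   "shoot",
    "player_reloading": "charge",
    "low_health":       "retreat",
    "grenade_nearby":   "dive_for_cover",
}

def react(stimulus):
    # Unrecognized input falls through to a default -- exactly the kind of
    # gap players learn to exploit in purely scripted opponents.
    return SCRIPT.get(stimulus, "patrol")

print(react("player_reloading"))  # -> "charge"
print(react("player_on_crate"))   # -> "patrol" (an unanticipated situation)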



Figure 3: An example of scripted behavior



Scripted behavior is not without its detriments. As interactions within games get more complex, it becomes more and more difficult to maintain a reaction for every possible player action. As a player finds failings in a script, the player can exploit these reactions and make the game far too easy. In addition, these failings often ruin immersion in a game world. A player who finds an opponent running headfirst into a wall due to an overlooked scripting loophole becomes acutely aware of the fact that they are playing a game with non-human opponents.


This issue can be addressed reliably by incorporating decision trees into existing scripts (Wexler 2002). While scripts are necessary when true learning AI is too taxing on a system, the behavior of an agent can be supplemented by building a tree of behaviors based on the success of previous actions in similar situations. In this way, an opponent can run on scripted behavior, but deem parts of the script a poor idea after it fails to deliver the desired results too often. This allows the heavy computing to be done with a manual script, but incorporates the leeway of a learning system to cover loopholes in logic and inhuman behavior. While these decision trees may be incorporated statically, as with a script, using one in concordance with an existing script has been found to yield greater benefits.
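
A hedged Python sketch of this supplementation, loosely following the idea Wexler (2002) describes; all names and the success-rate bookkeeping are invented for illustration:

# Supplementing a static script with outcome statistics. The script still
# supplies the candidate actions; the statistics demote a scripted choice
# once it has failed to deliver results too often.
from collections import defaultdict

SCRIPT = {"player_visible": ["shoot", "flank", "take_cover"]}
stats = defaultdict(lambda: {"tries": 0, "wins": 0})   # per (stimulus, action)

def choose(stimulus):
    """Prefer the scripted action with the best observed success rate."""
    def success_rate(action):
        s = stats[(stimulus, action)]
        return s["wins"] / s["tries"] if s["tries"] else 0.5   # optimistic prior
    return max(SCRIPT[stimulus], key=success_rate)

def record(stimulus, action, succeeded):
    """Feed results back so failing scripted choices get deemed a poor idea."""
    s = stats[(stimulus, action)]
    s["tries"] += 1
    s["wins"] += int(succeeded)

record("player_visible", "shoot", succeeded=False)
print(choose("player_visible"))   # no longer "shoot" after it failed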



Figure 4: A simple decision table for the game Black & White.



Scripting and decision tables have carried the game industry for the better part of two decades. With limited player inputs and relatively isolated gaming experiences, developers could make easy assumptions about what the player would do and create opponent behavior accordingly. However, games are getting more complex. A player in today's gaming landscape can interact with a game world almost as intimately as they can with the real one. In addition, online play has opened up communication between players, allowing AI exploits to be discovered significantly more easily than in years past. As players find ways to break the intended systems, the need for agents that can learn has grown. One of the approaches at the forefront of the movement for video game learning agents is the concept of genetic algorithms.



V. Genetic Algorithms



Genetic algorithms intend to copy the mechanism by which nature finds the species most likely to survive, creating opponents that grow and learn organically. This allows a computer opponent to replicate the learning undertaken by human players.


The concept for a genetic algorithm is simple, and to demonstrate it we will examine the typical use of a genetic algorithm in a video game. Genetic algorithms involve making several agents compete and allowing traits of the best performers to carry on to the next match, weeding out the worst agents and keeping the best. The algorithm starts out simply, with a sample of agents having little instruction as to what they need to do other than a heuristic on which to judge their performance. In gaming, this may mean several agents with little other than finite state machines mapping their possible actions (Geisler 2005). These agents are given random tweaks to their possible behaviors (for example, how often an agent jumps when presented with a wall). The agents are then set off to perform their tasks; within video games this often means the agents play competitive games against each other. While there is no set method for keeping the best performers, typically the top several stay and some of the worst performers are replaced by a cross-breed of the top performers. This "child" of the best performers of the last round is often given a random, minor change, to allow for the continuing evolution of the best performer. A minimal sketch of this loop appears below.
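
The following Python sketch shows that loop under illustrative assumptions: the genome is just two behavior parameters, and the fitness function is a stand-in for actual match performance:

# Bare-bones genetic algorithm over behavior parameters: compete, keep the
# best, cross-breed, mutate. Genome layout and fitness are invented stand-ins.
import random

def fitness(genome):
    # Stand-in heuristic; in a real game this would be match performance.
    jump_rate, aggression = genome
    return -(jump_rate - 0.85) ** 2 - (aggression - 0.6) ** 2

def evolve(pop_size=20, generations=50):
    population = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]            # weed out the worst
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # cross-breed
            child = tuple(min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                          for g in child)                    # minor random change
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Converges toward a jump rate near 0.85, echoing the roughly
# thirteen-in-fifteen consensus shown in figure 5.
print(evolve())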



Figure 5: A graph showing the evolution of a single behavior for a genetic agent over fifty generations.



Several studies have been conducted showing that, over time, a consensus is found on single behaviors after a point. This usually shows roughly the optimal choice for a behavior. In figure 5, a simple agent in a first person shooter is tasked with determining whether to jump over a wall when approaching an enemy (Overholtzer & Levy 2005). After roughly 40 generations, the algorithm finds that in about thirteen of every fifteen situations, jumping is preferable. Even if this behavior is not integrated directly into the behavior of an agent, this algorithm shows what decision is optimal in a given situation and can be incorporated statically into a script in a finished product.



VI. Neural Networks



In artificial intelligence research and programming, we constantly run into things that are either incredibly difficult or nearly impossible to represent on a computer. Oddly enough, as programmers and software designers, we take this for granted and are surprised when it comes up. What makes this odd is that the problems that arise most often are usually things we do naturally every day without thinking about them. We recognize hundreds or even thousands of faces and read writing in various scripts, even if part of the message is missing or destroyed. We can talk and understand communication in verbal and even non-verbal modes. The question that still needs an adequate answer is, "What is the difference between computers and humans?"


A neural network is modeled after the neurons in the human brain. It can be thought of as a mathematical function approximator where the inputs represent independent variables and the output represents the dependent variable or variables (Bourg & Seemann 2004). The human brain has roughly 10 billion neurons with 60 trillion connections, so the mathematical function model does not do justice to the vastness of the human brain, but it is adequate for the current capabilities of computers.
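
As a minimal illustration of the function-approximator view, here is a single sigmoid neuron in Python. The game-flavored inputs, weights, and their meanings are invented for illustration; a real network would chain many such units and learn the weights rather than hand-pick them:

# One neuron: a weighted sum of independent variables squashed through a
# sigmoid to produce the dependent variable.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Invented example: decide how aggressive an enemy should be from three
# observations (own health, player distance, ammo), each scaled to [0, 1].
aggression = neuron([0.9, 0.2, 0.8], weights=[1.5, -2.0, 1.0], bias=-0.5)
print(f"aggression = {aggression:.2f}")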


In the realm of video games, neural networks are one of the most exciting aspects on the horizon. From a single-player adventure game or first person shooter to a massively multiplayer online role-playing game, the applications for artificial intelligence are almost endless. Learning techniques can increase the longevity of video games as well as decrease their production cost (Stanley, Bryant & Miikkulainen 2005).

Neural networks allow a game designer to set up scripts and game agents that can act in a realistic manner and even challenge the most skilled of gamers. In first person shooters, neural networks can allow the computer-controlled opponents to learn as they go and get better. They will learn from the mistakes they make to avoid repeating them, as well as learning strategies that are more effective against a particular player. One problem that can arise from having an agent actually learn while the game is being played is that the game can become unpredictable. Another problem is that if the game agents are allowed to learn too well or too much, it is likely that the game will become too hard or even "unbeatable" and no one will want to play it. This is why the more common approach to neural networks is to train the agents outside the game and then transfer the now static learned network into the game.



The general algorithms for neural networks are normally not well suited for a real-time game environment, because such an environment demands:

1. A large state/action space: it is difficult to enumerate the many different possible actions and to check the value of every possible action on every game tick.

2. Diverse behaviors: agents should not all settle on the same action, as it would make for a very boring game if every agent did the same thing.

3. Consistent individual behaviors: a player doesn't want to see agents in the game doing something totally random for no reason, because it destroys the illusion that the game is a real world.

4. Fast adaptation: the game agents must be able to adapt quickly, as players would quickly become bored and/or frustrated if it took hours before an agent adapted.

5. Memory of past states: agents should be able to use past events to help choose a better action in the present.

(Stanley, Bryant & Miikkulainen 2005)


In a real-time game environment, the typical neural network algorithms must be tweaked to update at a regular interval, usually corresponding to a set number of game ticks. One implementation of neural networks for real-time video games is rtNEAT (real-time NeuroEvolution of Augmenting Topologies); a sketch of the tick-interval pattern follows.
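
The Python sketch below shows only the tick-interval idea from the paragraph above: an assumed game loop in which cheap per-tick behavior runs every frame, while the expensive learning step is amortized over a fixed number of ticks. The Agent class and its genome operations are simplified placeholders, not rtNEAT's actual reproduction machinery:

# Tying learning updates to game ticks. Everything here is an invented
# stand-in; rtNEAT's real reproduction step evolves network topologies.
import random

class Agent:
    def __init__(self, genome=None):
        self.genome = genome if genome is not None else [random.random() for _ in range(4)]
        self.fitness = 0.0
    def act(self):
        self.fitness += random.random() * sum(self.genome)   # fake reward signal
    def crossover(self, other):
        return Agent([(a + b) / 2 for a, b in zip(self.genome, other.genome)])
    def mutate(self):
        self.genome = [g + random.gauss(0, 0.05) for g in self.genome]
        return self

TICKS_PER_UPDATE = 30   # evolve once every 30 game ticks, not every frame

def game_loop(agents, ticks):
    for tick in range(1, ticks + 1):
        for agent in agents:
            agent.act()                          # cheap per-tick behavior
        if tick % TICKS_PER_UPDATE == 0:         # amortized learning step
            agents.sort(key=lambda a: a.fitness, reverse=True)
            agents[-1] = agents[0].crossover(agents[1]).mutate()

game_loop([Agent() for _ in range(10)], ticks=300)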

NERO (NeuroEvolving Robotic Operatives) is a video game that was specifically written to explore the potential of rtNEAT. The objective of NERO is to train a group of agents in military combat by exposing them to experiences in a controlled environment. The agents begin with no skills at all, and after the agents are trained to the satisfaction of the player they can be pitted against other groups of agents that have been similarly trained. The training is performed in an interface that allows the player to reward or punish different behaviors (see figure 6).




Figure 6: NERO control panel used to set rewards and punishments for all the actions that an agent can make.



The actions that can be trained include approaching enemies, hitting targets, getting hit, following friends, and dispersing. So one could train one's agents to charge the enemy and fire, or to avoid the enemy and shoot from a distance; one could even train half the agents for one behavior and half for the other. The control panel also allows the placement of objects like walls, turrets (stationary guns that shoot at the agents), and static enemies.


One of the experiments done with NERO shows the vast potential of neural networks. In this experiment, the agents, who spawn at one end of the map, were rewarded for going to the other end. During each training session, a wall was put in the path of the agents so they had to go around it. Each time a wall was added, the path around the walls became more and more complex, until a virtual maze was created. The agents were able to traverse the maze and make it to the other side without the aid of path-finding tables (Stanley, Bryant & Miikkulainen 2005).


VII. Future Outlook



While artificial intelligence in video games started as a business venture, some look to video games as an ideal topic for generalized AI research. As video games strive to make opponents that react like humans, they are a natural fit for developing the kind of human-level AI dreamed of in science fiction (Laird & Lent 2001).


There are several reasons some argue that video games are an optimal venue for future AI research. First, funding is abundant. Video games are not research reliant on grants; they are a self-sustaining and highly profitable industry. If only one idea from every year of research gets put into practice, this may still be enough for game companies to continue giving money to AI research departments. Hardware is also an increasing factor. Real-time learning AI requires a large amount of processing power. Luckily, today's video game consoles and personal computers have multiple-core systems, and have taken to using separate hardware to take care of graphical work. This means that entire processors may be dedicated solely to the artificial intelligence algorithms running in games. Some speculate that future consoles will include dedicated AI logic units that can take care of basic AI functions for the programmer (Snider 2007).


The fact that these agents are working with a large sample set (many games today sell millions of copies in America alone) allows for interactions that teach us more about how AI agents work. Damian Isla of developer studio Bungie speaks of his experiences with complex AI systems:

The interaction of all those rules is absolutely unpredictable. There is simply no way that I as a programmer can predict what is going to happen next. What we get is 'emergence,' one of the holy grails of AI: some really complex, interesting results out of very, very simple rules. (Snider 2007)



Emergence is one of the cornerstones of human-level AI. A system that can act in ways that the programmer did not intend shows some degree of independent thought, whether we think of this as consciousness or simply reaction. Players react positively to an unpredictable opponent, provided that some logic can be gleaned from that unpredictable behavior. Many 'god games,' wherein the player takes an omniscient role over subjects they do not directly control, strive for these kinds of reactionary yet independent actions from their agents (Laird & Lent 2001).


One of the signs of a possible research slowdown within the industry, however, is the advent of online gaming. In today's gaming landscape, it is easy for players to use a network-connected personal computer or video game console to play games with or against players at a distance, at any time (Schaeffer 2001). The easy access of real humans to the player lessens the desire to play a multiplayer match against non-human opponents, no matter how realistically they react. While this may slow research somewhat, single-player campaigns remain a staple of any large-budget game. In addition, modes wherein players cooperate online against AI opponents have seen a swell of popularity with recent games such as Epic Games' cooperative Horde mode in Gears of War 2, wherein human players take on increasingly difficult swarms of enemies, and Valve's fully cooperative online survival game Left 4 Dead. These changes in the online landscape may offset the possible loss of AI resources, as research is needed for these cooperative games.


The real future of video game artificial intelligence is in continual growth. While some disciplines need a large breakthrough to continue research, video games give a suitable environment for incremental growth of AI sophistication (Laird & Lent 2001). Game companies may not be able to dedicate large amounts of resources per game to AI research and development, but the number of games released each year is steadily increasing, as are the budgets of those games. The resources going into game development should see some growth in the AI research and development portion, making it an increasingly attractive field for AI researchers.



VIII. Conclusion


For decades of popular culture, humans have been interested in creating artificial humans, or inanimate objects that are seemingly intelligent. Our mythology, writing, movies, and even thoughts are full of references to seemingly magically constructed automatons, talking mirrors, and intelligent robots.

When computers came along, it was only a matter of time before people tried to project our own humanity into them. There are many areas that could be used as a testing ground for artificial intelligence research, but games have quickly become the chosen medium. Luckily, new AI algorithms can be tested in a game space without risking safety or vital systems. This is much safer, and arguably better, than testing in areas where a malformed algorithm could carry a high cost in life and property, as well as financial loss.

A great number of the major advances in artificial intelligence were born in game design before they were used in real-world applications. Alpha-beta algorithms, minimax, and advances in human-level AI are just a few of the most important AI advances of the past several decades, and all were borne of game design. As the industry and its budgets grow, the chances of new breakthroughs grow alongside them.

Many researchers are even calling for more research in the field. Michael Buro argues that real-time strategy (RTS) games have a wealth of research opportunities that are as yet untouched: current levels of RTS intelligence are low, but the number of agents and the number of actions are high (Buro 2004). Laird and van Lent insist that video games are the premier showcase for human-level AI (Laird & Lent 2001). And nearly every algorithm shown in this paper is intended only as an early stage of its kind of research, with source code made available for researchers wishing to expand upon it.

What could lie just over the horizon? What new algorithm is waiting to change the world as we know it? Will the rapid advances in medicine, hardware engineering, and software technology open doors that we can only dream of? While nobody knows for sure, the possibilities are invigorating researchers, and the future will certainly be exciting to see.



References


Baker, Tracy. "Game Intelligence: AI Plays Along." Smart Computing. Jan. 2002. <http://www.smartcomputing.com/editorial/article.asp?article=articles/archive/c0201/39c01/39c01b.asp>.

Bourg, David M. and Glenn Seemann. "Four Cool Ways to Use Neural Networks in Games." On Lamp. 30 Sep. 2004. <http://onlamp.com/pub/a/onlamp/2004/09/30/AIforGameDev.html>.

Buro, Michael. "Call for AI Research in RTS Games." American Association for Artificial Intelligence (2004).

Geisler, Bob. "Integrated Machine Learning For Behavior Modeling in Video Games." Radical Entertainment (2005).

Kelly, John-Paul, Adi Botea and Sven Koenig. "Offline Planning with Hierarchical Task Networks in Video Games." Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference (2008): 60-65.

Laird, John E. and Michael van Lent. "Human Level AI's Killer Application." AI Magazine 22 (2001): 15-26.

Overholtzer, C. Adam and Simon D. Levy. "Evolving AI Opponents in a First-Person-Shooter Video Game." American Association for Artificial Intelligence (2005): 1620-1621.

Schaeffer, Jonathan. "A Gamut of Games." American Association for Artificial Intelligence (2001): 29-46.

Snider, Mike. "AI is A-OK in new games." USA Today. 25 Sep. 2007. <http://www.usatoday.com/tech/gaming/2007-09-24-a-i_N.htm>.

Stanley, Kenneth O., Bobby D. Bryant, Igor Karpov, and Risto Miikkulainen. "Real-Time Evolution of Neural Networks in the NERO Video Game." American Association for Artificial Intelligence (2006): 1671-1674.

Wexler, James. "Artificial Intelligence in Games: A look at the smarts behind Lionhead Studio's 'Black and White' and where it can and will go in the future." University of Rochester (2002).