Step One: Document the Problem


On the Value of Documenting Real Game AI Problems

by baylor, independent game miscreant

Workshop: Challenges in Game AI, AAAI 2004

Agenda

1. What do we know about game AI problems? Not much (2m)
2. Are there problems with game AI? Tons (½m)
3. Is having AI problems a bad thing? Not really (½m)
4. What are the problems? (4m)
5. Why do the problems exist? Hmm… (3m)
6. Can we solve the problems? Easily (½m)
7. Do solutions already exist? A few (3m)
8. Opinion: What should we do next? Document (½m)
9. BadAI.com (3m)
10. Q&C&Y&A (5m)

What Do We Know?

- What is an AI problem?
- What is the difference between an AI problem and a bug?
- How many AI problems are there?
- Is game AI getting better?
- What percent of problems are easy to fix?
- What are the most important problems?
- What percent of problems have solutions?
- Which problems have solutions that are not being used?
- What game AI solutions exist?

Are there game AI problems?

Um, yeah

Hopefully everyone agrees there are lots of them

(note: but i’m not sure i do)

Is that a bad thing?

Hmm, maybe not

After all, people keep buying the games

but…

A game with good AI might sell better (maybe)

And we need better AI for:

- simulators used for training
- simulators used for prediction
- simulators used for decision making
- simulators used for education

What are the problems?

if a picture is worth a thousand words…

AI: “We would never accept such a deal!”

Paving Antarctica

Building railroads increases the value of the land (if the land has value to begin with) and increases unit movement.

This civilization has several central, irrigated squares it has not yet deployed railroads to.

Ravenous Bugblatter Beast of Traal

Instance Of: Does not respond to pain

More generally: Agent does not process important events

Specific Problem: Slave does not “turn on” at appropriate time

Trigger Failure

Also seen in:

Bard’s Tale

Doom

Heretic

Quake

Quake 2

SW:KotOR

Betrayal at Krondor

…but not

Delta Force 1

Delta Force 2

Thief

Baldur’s Gate

Planescape: Torment

HM&M IV
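One common cure for this trigger-failure pattern is to push important events to agents instead of hoping each agent polls for them. A minimal sketch (my illustration, not something from the talk; all names such as `EventBus` and `Agent` are hypothetical):

```python
# Illustrative sketch: route important events through a subscription list
# so an idle agent "turns on" the moment something it cares about happens,
# instead of waiting on a scripted trigger that may never fire.

class EventBus:
    def __init__(self):
        self.subscribers = {}  # event type -> list of interested agents

    def subscribe(self, event_type, agent):
        self.subscribers.setdefault(event_type, []).append(agent)

    def publish(self, event_type, data):
        for agent in self.subscribers.get(event_type, []):
            agent.on_event(event_type, data)

class Agent:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def on_event(self, event_type, data):
        # The agent reacts as soon as a subscribed event arrives.
        if event_type == "took_damage":
            self.state = "fighting"

bus = EventBus()
slave = Agent("slave")
bus.subscribe("took_damage", slave)
bus.publish("took_damage", {"amount": 5})
print(slave.state)  # -> fighting
```

The point is only that "agent does not process important events" is a plumbing problem, not a hard AI problem.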

Certain AI Problems Appear Over and Over

- Ally attracts enemies (Baldur’s Gate, *cough* Daikatana *cough*)
- Ally loses the mission (Delta Force 2, Delta Force: Blackhawk Down)
- Repeatedly use attacks known not to work (Civilization, all 8 Baldur’s Gate games)
- Move to point blank range to use ranged weapon (Civilization, DF:BHD, Star Wars: Knights of the Old Republic)
- Ignore grenades (SW:KotOR, Full Spectrum Command)
- Don’t react to significant changes such as a new enemy appearing (DF:BHD, Full Spectrum Command, SW:KotOR)
- Not predicting outcomes, such as what happens when you shoot a rocket launcher into a post 2 feet in front of you or trying to shoot through a mountain (DF:BHD, SW:KotOR)

Most of these problems have been solved in other games
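To make that claim concrete, here is a minimal sketch (my own, under the assumption that a simple failure counter is enough) of how "repeatedly use attacks known not to work" can be avoided; the `AttackMemory` class is hypothetical:

```python
# Illustrative fix for "repeatedly use attacks known not to work":
# count consecutive failures per attack and stop picking attacks that
# have failed too many times in a row.

from collections import defaultdict

class AttackMemory:
    def __init__(self, max_failures=3):
        self.failures = defaultdict(int)  # attack -> consecutive misses
        self.max_failures = max_failures

    def record(self, attack, hit):
        if hit:
            self.failures[attack] = 0  # a success resets the count
        else:
            self.failures[attack] += 1

    def viable(self, attacks):
        # Prefer attacks that have not repeatedly failed; fall back to
        # everything if we have ruled out all options.
        good = [a for a in attacks if self.failures[a] < self.max_failures]
        return good or list(attacks)

memory = AttackMemory()
for _ in range(3):
    memory.record("fireball", hit=False)  # fireball keeps missing
memory.record("arrow", hit=True)
print(memory.viable(["fireball", "arrow"]))  # -> ['arrow']
```

Nothing here needs a PhD, extra CPU, or an AI co-processor.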

Why do we still have this many AI problems?

i’ve asked this question many times

i’ve heard many answers/opinions/theories

But do these theories match the evidence?

Let’s find out!

Excuse 1: These problems are hard!

AKA

- AI is hard
- Building human intelligence is hard

Problem: Allies decide to run in front of a helicopter-mounted minigun you have been firing non-stop for two minutes

Question: How hard would it be to solve this “AI” problem?

Problem: Allies block your only exit, will not move

Question: Can this AI problem be solved?

“I said, BUMP!!!”

Excuse 2: We can’t find all the problems, there are too many possibilities

AKA

- Players are unpredictable
- We didn’t think to test that
- There are too many different items to track
- Combinatorial complexity

Five minutes in the life of a navigation problem…

“OK everyone, watch where you step”

“Jolee, try not to step on that landmi…”

“Hey Bastila, can you believe Jolee just… Oh, I guess so”

“Bastila, did you just step on another marked poison gas mine?”

“Jolee!!!”

“Don’t you dare…”

“I hate you guys”

Excuse 3: Lack of formal AI training

AKA

- This is AI and AI people have PhDs
- I’m still learning neural networks
- I haven’t finished Russell & Norvig
- I tried reading AI but couldn’t pronounce ∂φ∑θπμfΦγΨασω

“Note to self: Don’t throw grenades at point blank range”

Theory 4: No time!

AKA

- It takes a long time to “create” “AI”
- Time is tight and I need to tweak our bump-mapped, b-splined, ambient occluded, UV-mapped pixel shader
- Did I mention I’m waiting on an AI co-processor?

The Pond Fish

“If anyone tries to invade the swimming pool, we’ll be ready for them!”

Which is older, the game’s AI flaw or its customers?

Theory 5: Not enough CPU time (thanks a lot stupid graphics guys)

AKA

- An A* search of 1,500,000 nodes takes over a second!
- Solving hard problems requires slow, complicated algorithms
- I can’t do that with my current data structures
- I’m waiting until graphics are no longer important. That’s next year, right?
- I’m waiting for the inevitable AI co-processor

How much CPU is needed?

Theory 6: AI isn’t important

- AI can’t be summarized in a marketing bullet point (e.g., “inverse kinematics”)
- AI can’t be turned into a pretty box cover or screen shot (e.g., particle effects)
- No objective benchmark (e.g., frames per second); reviewers and marketing can’t use it for between-game comparisons
- No feature checklist (e.g., bump mapping, dynamic lighting, MIP-mapping); reviewers can’t objectively compare games
- Aside from Black&White, no one has ever bought or avoided a game based on its AI
- No competition; no game has good AI
- Good AI is invisible. Bad AI is a stupid action that takes the player out of the game. Good AI just means a lack of noticeable stupidity. Selling “lack of stupidity” is like selling “our game is stable”; how many games have sold because of bug count?

Are these problems solvable?

Well, yeah…

- As seen in previous slides, solutions are often obvious, simple and lightweight

But are these problems and solutions representative?

How do we know baylor didn’t just cherry-pick the items that supported his position?

- That’s a pretty good question

Wouldn’t it be nice if we had a way to answer this question?

Are there already solutions?

Yes, in:

- Existing games
- CGF research
- Ethological research
- Psychological literature

Example:

- Learning (Rescorla-Wagner, Matching Law, biological preparedness, latent learning, S-(R-O), opponent-process theory, learning by observation, Structure Mapping, etc.)
- Decision Making (Recognition-Primed Decisions, expectation violation, recognition heuristic, Affordance Theory, forward simulation, Elimination By Aspects, Theory of Mind, etc.)
- Information Gathering (Think Aloud protocol, behavior capture via video games)

Matching Law

Problem Area: action selection

AI Type: decision making, learning (secondary), personality (secondary)

Detail Level: mid-level

Technique: matching law

Assumptions: Options are relatively equal

Example Uses:

- sports: choosing a shot type
- FPS: choosing a weapon
- RPG: choosing a spell
- RTS: choosing a build unit type

Explanation
-----------

RA/RB = b(rA/rB)^s

Variables
---------

RA = Rate of response for option A. How often option A is chosen. This is a counter.

RA/(RA+RB) = Relative rate of response for option A.

rA = Rate of reinforcement for option A. The percentage of time choosing option A has led to a good result.

rA/(rA+rB) = Relative rate of reinforcement for option A.

b = Response bias. b > 1 means a bias toward option A.

s = Sensitivity to differences in reinforcement rates.

Game Example
------------

A wizard is 30 meters away from a group of orcs. He has three third-level spells he can use: flamestrike, iceblade and stonestorm.

Question: which spell should the wizard cast?

Assume that the wizard has successfully hit his enemies 3/10 times with flamestrike, 2/4 times with iceblade and 6/7 times with stonestorm.

r(flamestrike) = 3/10 = 0.3 (30%)

r(iceblade) = 2/4 = 0.5 (50%)

r(stonestorm) = 6/7 = 0.86 (86%)

relative r(f) = 0.3/(0.3+0.5+0.86) = 0.18 (18%)

relative r(i) = 0.5/(0.3+0.5+0.86) = 0.30 (30%)

relative r(s) = 0.86/(0.3+0.5+0.86) = 0.52 (52%)

So the wizard would cast stonestorm 52% of the time, iceblade 30% and flamestrike 18%.

Limitations
-----------

- As stated, does not include learning by observation. LBO can be added, however.

Notes
-----

- Bias and sensitivity are hard-coded
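The wizard example above can be sketched in a few lines of Python. This is a minimal sketch of matching-law action selection, not code from the talk; the function name and the uniform bias/sensitivity defaults are my assumptions:

```python
# Sketch of matching-law action selection: each option's reinforcement
# rate (successes/attempts) is scaled by a per-option bias, raised to the
# sensitivity s, and normalized into relative response rates.
import random

def matching_law_probabilities(successes, attempts, bias=None, s=1.0):
    bias = bias or {k: 1.0 for k in successes}
    r = {k: bias[k] * (successes[k] / attempts[k]) ** s for k in successes}
    total = sum(r.values())
    return {k: v / total for k, v in r.items()}

# The wizard example from the slide: 3/10, 2/4 and 6/7 hit rates.
probs = matching_law_probabilities(
    successes={"flamestrike": 3, "iceblade": 2, "stonestorm": 6},
    attempts={"flamestrike": 10, "iceblade": 4, "stonestorm": 7},
)
print({k: round(v, 2) for k, v in probs.items()})
# -> {'flamestrike': 0.18, 'iceblade': 0.3, 'stonestorm': 0.52}

# Pick a spell according to the relative rates.
spell = random.choices(list(probs), weights=list(probs.values()))[0]
```

Updating the success/attempt counters after each cast gives the learning behavior for free; bias and sensitivity stay hard-coded, exactly as noted above.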

Problems with existing solutions

- Game programmers are not teachers
  - Paid to write code, not publish their knowledge
- Historically, little information was shared. Much better today (Charles River, id open source, IGDA, John Laird)
- No known place to get basic info
  - Getting slightly better (AI Programming Wisdom, Game Programming Gems, Gamasutra)
- Too much bad/useless information
  - A*, neural networks, genetic algorithms, Russell & Norvig

Classic Problem: Target Selection

A lot more interesting than Towers of Cannibal Water Jugs

Opinion: What should we do?

Document!


(hey, I never promised it’d be fun)

AI Problem (and solution) Database

- Since I couldn’t find one, I made one
- It’s an actual database (Access today, looking at SQLite)
- Plus an image gallery (considering adding animations)
- Internet searchable
- Semi-open
- Will have user submissions
- Will have forum
- Might use something like Wiki

BadAI.com - Image Gallery

BadAI.com - Database

BadAI.com - Games

reason #20,000,000 why baylor is not allowed to do design

BadAI.com - Problem Categories

A Web page for viewing the problem category hierarchy has not been created yet.

- Originally, AI problem categories were not hierarchical.
- Today, the hierarchy can be any depth but has yet to exceed three levels (see gallery screen for an example)

BadAI.com - Reviews

Issues

- Currently run by one person
  - …who is lazy
  - …and has too many other projects
  - …and doesn’t play that many games
- Currently lacking a home
  - Hosted on ihatebaylor.com today
  - Might give it its own Web site (BadAI.com domain is open)
  - Might move to a school Web site (if I can find a school that is dumb enough to take me)

Questions & Answers

and comments and yelling and…

(looks suspiciously like Mike van Lent)