Game AI Readings

wyomingbeancurd · Artificial Intelligence and Robotics

7 Nov 2013



Rabin, S. AI Game Programming Wisdom. Charles River Media, 2002.



Finite State Machines:

o Section 2.5, Page 71: FSM concept, including hierarchical FSMs.

o Section 6.5, Page 314: Basic FSM implementation.

o Section 6.6, Page 321: More on the implementation.

o See also Section 5.3, which is part of 'Decision Making for Team Behaviours', the next item.
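To keep the FSM readings concrete, here is a minimal sketch of what a basic FSM implementation (in the spirit of Section 6.5) might look like; the states, events, and transition table are invented for illustration, not the book's code:

```python
# Minimal finite state machine sketch; states and events are illustrative,
# not taken from the book.
class FSM:
    def __init__(self, initial, transitions):
        # transitions: {(state, event): next_state}
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical guard behaviour: patrol until an enemy is seen, then attack.
guard = FSM("patrol", {
    ("patrol", "enemy_seen"): "attack",
    ("attack", "enemy_lost"): "patrol",
    ("attack", "low_health"): "flee",
})
guard.handle("enemy_seen")  # -> "attack"
```

A hierarchical FSM (Section 2.5) extends this by letting a state contain a nested machine of its own.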



Decision Making for Team Behaviours

o Check the reference 'Tozour, P. Influence Mapping. Game Programming Gems 2, Charles River Media, 2001'. It is referenced a lot.

o Section 5.1, Page 211: Strategic and Tactical Map Analysis. Includes anticipating the enemy's moves. Interesting fundamental concepts such as the influence map (i.e., essentially a tactical picture), which is a representation of the current and near-future situation (game configuration), line of sight, ...

o Section 5.2, Page 221: Recognizing and Evaluating Strategic Dispositions/Configurations (red and blue) for optimal engagement of the enemy. This is engageability assessment. Very good simple ideas. They can be improved by throwing in some projection/look-ahead (planning) and/or uncertainty reasoning (see Section 5.4, also discussed below).

o Section 5.3, Page 233: Many useful concepts. Force-level "squad AI", as they call it! A squad is a small team. Very nice. Talks about centralized and decentralized approaches. Shows how to use FSMs. Includes fundamental concepts such as: threat, weapon capability, team cohesion (e.g., maintaining line of sight to other team members), mental picture/state, team exchanges, tactics (see an example on Page 244). Opportunity to use LTL to express them.

o Section 5.4, Page 247: Planning manoeuvres. Interesting concepts:

  - Execution monitoring (the squad situation, which monitors and evaluates the execution of the current course of action): opportunity to apply LTL-based execution monitoring.

  - Selecting manoeuvres. There is an example on Page 251. Opportunity to apply behavior-based selection using PLTL progression. Or maintain a manoeuvre selection automaton (or strategy), such that the states give the manoeuvres instead of the actions (that is, the example of Figure 5.4.3, Page 252)! Opportunity to improve this by adding utility and uncertainty a la MDP!

o Section 5.5, Page 261. Essentially one more example of what precedes. Take the strategic decision-making flow chart of Figure 5.5.2 as a manoeuvre strategy automaton. Good for use in the IFT615 simulator.
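The influence-map idea that runs through Sections 5.1-5.3 (and Tozour's Game Programming Gems 2 article) can be sketched as a grid accumulator; the linear decay function, unit positions, and strengths below are invented for illustration:

```python
# Sketch of an influence map on a small grid: each unit projects influence
# that decays linearly with Manhattan distance. Units and strengths are
# invented, not taken from the book.
def influence_map(width, height, units):
    # units: list of (x, y, strength); positive = friendly, negative = enemy
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                dist = abs(x - ux) + abs(y - uy)
                falloff = max(0.0, 1.0 - 0.25 * dist)  # linear decay to zero
                grid[y][x] += strength * falloff
    return grid

# One friendly unit at (0, 0), one enemy unit at (4, 4).
m = influence_map(5, 5, [(0, 0, 4.0), (4, 4, -4.0)])
# m[0][0] > 0 (friendly controlled), m[4][4] < 0 (enemy controlled),
# the middle is contested / out of both units' reach.
```

Reading the sign of a cell gives the "tactical picture" the notes mention: who controls which region of the map right now.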



AI Game Architectures

o Section 6.1, Page 285: intro.

o Section 6.2, Page 291: basic priority-scheduler-based architecture.

o Section 6.4, Page 305: Rule-Based. Check the GOAL primitive. Reminiscent of TLPLAN search strategies!

o Section 6.5, FSM-based. We can do better by throwing in LTL-based strategies.

o Section 8.2, Page 397: AI architecture for RTS games: key concepts.

o Section 8.3, Page 402: adding decision theory to it.

o Section 8.4, Page 411: weapon capability modeling.
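A rough sketch of what a priority-scheduler-based architecture (in the spirit of Section 6.2) might look like; the behaviours, conditions, and priority values are made up for illustration:

```python
# Rough sketch of a priority-based behaviour scheduler: each tick, run the
# highest-priority behaviour whose precondition holds. Behaviours and
# priorities are invented, not the book's code.
class Scheduler:
    def __init__(self):
        self.behaviors = []  # (priority, name, condition, action)

    def add(self, priority, name, condition, action):
        self.behaviors.append((priority, name, condition, action))
        self.behaviors.sort(key=lambda b: -b[0])  # highest priority first

    def tick(self, world):
        for priority, name, condition, action in self.behaviors:
            if condition(world):
                action(world)
                return name  # report which behaviour ran this tick
        return None

sched = Scheduler()
sched.add(10, "flee", lambda w: w["health"] < 3, lambda w: None)
sched.add(5, "attack", lambda w: w["enemy_visible"], lambda w: None)
sched.add(1, "patrol", lambda w: True, lambda w: None)
sched.tick({"health": 9, "enemy_visible": True})  # -> "attack"
```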




Plan Recognition

o Section 7.2, Page 345. Using Bayesian networks! See how he makes allusion to recognizing threat and to engageability assessment on Page 353 (calculating visibility and chance to hit). See also how he refers to detecting an intruder (Page 355). We can try doing better with BNs or with HMMs!
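To make the "try HMMs" idea concrete, here is a minimal forward-algorithm sketch that maintains a posterior over hypothesized plans from observed moves; the plans, observations, and all probabilities are invented for illustration:

```python
# Forward algorithm sketch for recognizing a hidden plan ("attack" vs
# "scout") from observed moves. Every number here is made up for the demo.
def forward(obs, states, start_p, trans_p, emit_p):
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}  # posterior over plans

states = ["attack", "scout"]
start_p = {"attack": 0.5, "scout": 0.5}
trans_p = {"attack": {"attack": 0.9, "scout": 0.1},
           "scout": {"attack": 0.2, "scout": 0.8}}
emit_p = {"attack": {"approach": 0.7, "circle": 0.3},
          "scout": {"approach": 0.2, "circle": 0.8}}
posterior = forward(["approach", "approach"], states, start_p, trans_p, emit_p)
# repeated approaches make "attack" the more probable hidden plan
```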

Rabin, S. AI Game Programming Wisdom 3. Charles River Media, 2005.



RTS Games:

o Section 4.7, Page 321. Interesting ideas on the modeling of a state in an RTS game (influence map): how to calculate the combat value (CV) of a unit (Page 322), and the influence of a player (friendly or enemy) in a region (influence map; refers to the same paper by Tozour in 'Game Programming Gems 2'). Related to utility modeling and preferences.

o Section 4.8, Page 331. Utility theory. Very good. I like his introduction. He models decisions. We can throw this into the PHAT-like algorithm to choose the next action from the current pending set: choose the action with the highest expected utility w.r.t. the current goal. How to hypothesize goals: see the expected value of attacking a village. We have to model the preferences of the opponent. Building inertia: good idea. In our case, model a notion of goal commitment. That is, agents commit to goals as indicated by the mission, and this goal remains relatively valid even though a higher-value unit may appear later on. However, he models reasons to switch from the current goal. Page 335: see how he discusses the issue of additive vs. non-additive utility values (rewards). Page 338: see how he discusses the limits of relying solely on utility-based reaction! He advocates injecting rule-based strategies. We can inject them into our HTNs or automata.

o Section 4.11, Page 369. Nothing special, except that he introduces the notion of context as a combination of the current state of an agent and the environment the agent is in. Effectively, it is important to distinguish between the two, but I am not convinced we need to introduce a new terminology for that.

o Section 6.7. RTS Citizen Unit AI. Shows examples of RTS modeling using FSMs.

o Section 8.4, Preference-Based Player Modeling. Can inspire us on how we model the preferences of the opponents.
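The goal-commitment idea from the Section 4.8 notes can be sketched as expected-utility selection with an additive inertia bonus for the currently committed goal; the goals, success probabilities, and utilities below are invented for illustration:

```python
# Expected-utility goal selection with a commitment bonus: the currently
# pursued goal gets a small additive bonus so the agent does not thrash
# between targets. All numbers here are invented for the demo.
def choose_goal(candidates, current_goal, commitment_bonus=0.2):
    # candidates: {goal: (p_success, utility_on_success)}
    def score(goal):
        p, u = candidates[goal]
        eu = p * u
        if goal == current_goal:
            eu += commitment_bonus  # inertia: stick with the committed goal
        return eu
    return max(candidates, key=score)

candidates = {"raid_village": (0.6, 1.0), "take_outpost": (0.7, 0.9)}
# Without commitment, take_outpost wins (0.63 > 0.60); with commitment to
# raid_village (0.60 + 0.2 = 0.80) the agent stays on its current goal.
choose_goal(candidates, current_goal="raid_village")  # -> "raid_village"
```

Modeling "reasons to switch" then amounts to letting a large enough utility gap overcome the bonus.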



Tactics and Planning

o Section 5.2, Page 389. Dynamic Tactical Positioning. Very good. Figure 5.2.1, Page 392, illustrates how we could represent our agents in a paper -- we do not have permission from StarCraft to use theirs. In our modeling of EU (expected utility) for a unit, take into account the fact that the unit will avoid exposing itself. Also introduce the possibility that it may miss evaluating such exposure. Also introduce the possibility that it may be suicidal -- asymmetric threats. See the performance discussion on Page 395! In StarCraft, how efficient is it to get the positions of surrounding units? Do they also use ray casts? Make sure to incorporate the concepts of line of sight and line of fire into our plan library / action representations. We have to go to such a level of detail.

o Section 5.3, Page 405. Finding Cover in Dynamic Environments. Read it quickly. Contains interesting ideas for modeling search control strategies and tactics.
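The exposure idea from the Section 5.2 notes can be sketched as a position score that subtracts a penalty per enemy with line of sight; the grid geometry and all numbers are invented, and the line-of-sight test is a crude sampled check, not the book's method:

```python
# Sketch: score a candidate position as its goal value minus a penalty for
# each enemy that has line of sight to it. Grid, wall, and numbers are
# invented; the LOS test samples the segment at unit steps (demo only).
def has_line_of_sight(a, b, walls):
    (ax, ay), (bx, by) = a, b
    steps = max(abs(bx - ax), abs(by - ay))
    for i in range(1, steps):
        x = ax + round(i * (bx - ax) / steps)
        y = ay + round(i * (by - ay) / steps)
        if (x, y) in walls:
            return False  # a wall cell blocks the segment
    return True

def score_position(pos, goal_value, enemies, walls, exposure_penalty=0.5):
    exposed = sum(1 for e in enemies if has_line_of_sight(pos, e, walls))
    return goal_value - exposure_penalty * exposed

walls = {(2, 2)}          # one wall cell
enemies = [(4, 2)]        # one enemy
# (0, 2) is covered by the wall; (0, 0) is exposed to the enemy.
score_position((0, 2), 1.0, enemies, walls)  # 1.0 (in cover)
score_position((0, 0), 1.0, enemies, walls)  # 0.5 (exposed)
```

A "suicidal" or exposure-blind unit would simply use a smaller (or zero) penalty, which matches the asymmetric-threat remark above.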



General

o Section 8.3, Page 633. Introduction to HMMs. Nice intro which could be used for IFT615.

o Section 8.8, Page 693. This is NEAT. Get demos from the web and updates to replace old examples in IFT615.


Rabin, S. AI Game Programming Wisdom 4. Charles River Media, 2008.



FSM + Goal-Based AI:

o Section 3.2, Page 222. Complements the previous FSM readings well. Explains the differences and complementarities between FSM-based and goal-based approaches from an expressiveness standpoint. Mentions an example of a hybrid approach (combining FSMs and goals) on Page 236.

o Section 3.10, Page 257. See a definition of hierarchical FSMs.

o Section 3.10, Page 317. FSMs and Harel's Statecharts! Explains the difference between hierarchical FSMs and statecharts (paragraph 1, Page 320).
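A minimal sketch of the hierarchical-FSM idea these sections discuss: a superstate owns a nested sub-machine, and events are offered to the substate first, falling back to the superstate. All states and events are invented for illustration:

```python
# Minimal hierarchical FSM sketch: the "combat" state contains its own
# sub-machine. Events go to the substate first, then to the superstate.
# States and events are invented, not taken from the book.
class HFSM:
    def __init__(self, initial, transitions, substates=None):
        self.state = initial
        self.transitions = transitions    # {(state, event): next_state}
        self.substates = substates or {}  # {state: HFSM}

    def handle(self, event):
        sub = self.substates.get(self.state)
        if sub and (sub.state, event) in sub.transitions:
            sub.handle(event)  # substate consumes the event
        else:
            self.state = self.transitions.get((self.state, event), self.state)
        return self.current()

    def current(self):
        sub = self.substates.get(self.state)
        return (self.state, sub.state) if sub else (self.state,)

combat = HFSM("approach", {("approach", "in_range"): "fire",
                           ("fire", "out_of_range"): "approach"})
npc = HFSM("idle", {("idle", "enemy_seen"): "combat",
                    ("combat", "enemy_dead"): "idle"},
           substates={"combat": combat})
npc.handle("enemy_seen")  # -> ("combat", "approach")
npc.handle("in_range")    # -> ("combat", "fire"), handled by the sub-machine
npc.handle("enemy_dead")  # -> ("idle",), handled by the superstate
```

Statecharts go further than this (orthogonal regions, history states), which is roughly the difference the Page 320 paragraph draws.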



Dialogue Managers

o Section 6.1, Page 531. Good intro.

o Section 6.5, Page 593. Very good intro. Idea: recycle our work on argumentation into games.