CSE 5290: Artificial Intelligence


Artificial Intelligence and Decision Making

Session 3: Intelligent Agents



Reading:

Russell, S.J., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. Read Chapter 2 (pp. 31-52).

2.1 Introduction

2.2 How Agents Should Act

2.2.1 Rational Agents
2.2.2 The ideal mapping from percept sequences to actions
2.2.3 Autonomy

2.3 Structure of Intelligent Agents

2.3.1 Agent Programs
2.3.2 Why not just look up the answers?
2.3.3 An example
2.3.4 Simple reflex agents
2.3.5 Agents that keep track of the world
2.3.6 Goal-based agents
2.3.7 Utility-based agents

2.4 Environments

2.4.1 Properties of environments
2.4.2 Environment programs

2.5 Summary


Agents and AI



Patrick Winston, head of the AI Laboratory at MIT, delimited AI in a manner that allows new concepts of man and machine. His foreword to the text on Actors states:



Artificial Intelligence (AI) studies intelligence using the ideas and methods of computation. A true definition of AI does not appear possible at the present time, since intelligence appears to be a combination of multiple information-processing and information-representation abilities.

AI offers new methodologies to study intelligence while attempting to make computers intelligent and more useful. The purpose of this attempt is to provide a medium of study for understanding the principles and processes of intelligence. The central thesis is that AI is to understand intelligence using methods of computation. The theories of AI are intended to apply to both human and machine intelligence.



Agha (1988) defined actors (intelligent agents) as [Page 8]:



"Actors are computational agents which map each incoming communication to a 3-tuple consisting of:

1. A finite set of communications sent to other actors;

2. A new behavior (which will govern the response to the next communication processed); and

3. A finite set of new actors created."
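As a rough illustration (not from Agha's text), this 3-tuple mapping can be sketched in Python. The names Actor, Message, and counting_behavior below are illustrative assumptions, not part of the actor formalism itself:

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Message:
        target: str       # name of the receiving actor
        content: object   # payload of the communication

    # A behavior maps (actor, incoming message) to Agha's 3-tuple:
    # (communications sent to other actors, a new behavior, new actors created).
    Behavior = Callable[["Actor", Message], Tuple[List[Message], "Behavior", List["Actor"]]]

    @dataclass
    class Actor:
        name: str
        behavior: Behavior

        def receive(self, msg: Message):
            # Apply the current behavior, then install the new behavior
            # that will govern the response to the next communication.
            outgoing, new_behavior, new_actors = self.behavior(self, msg)
            self.behavior = new_behavior
            return outgoing, new_actors

    def counting_behavior(count: int = 0) -> Behavior:
        # Example behavior: reply with how many messages have been processed;
        # the new behavior carries the updated count, and no new actors are created.
        def behave(actor: Actor, msg: Message):
            reply = Message(target="sender", content=f"{actor.name} has seen {count + 1} messages")
            return [reply], counting_behavior(count + 1), []
        return behave

    a = Actor("counter", counting_behavior())
    print(a.receive(Message("counter", "hello")))   # one outgoing reply, no new actors
    print(a.receive(Message("counter", "again")))   # the installed behavior now counts 2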



Intelligent Agents have a background in human notions of reality. Phenomenology is a method in philosophy and cognitive processes that views the appearance of objects as contrasted with the objects themselves. The mind can never see the light of day or know an object itself, because the sensory system provides a filter. Knowledge is restricted to the representation or appearance of objects. This provides a foundation for modern theories of human intelligence and intelligent agents.


Web Sites


The human mind according to artificial intelligence

http://info.greenwood.com/books/0275962/0275962857.html


Sternberg's Triarchic Theory of Intelligence

http://psychology2.semo.edu/PY531/chap8/tsld024.htm




OVERVIEW: AI, INTELLIGENCE AND INTELLIGENT AGENTS


Intelligence is a necessary, but not sufficient, condition for prediction and inference. The quality and quantity of the intelligence affect both the judgment and the problem-solving skills of the decision-making process. Berkowitz (1999) suggested that the compartmentalization of intelligence may lead to bad inferences. Morin (1999) summarized a survey in the Annals of Improbable Research (a humor magazine with eight Nobel Prize winners on the editorial board). The survey asked which field of science has the smartest people. Astronomer Vinay Kashyap of the Harvard-Smithsonian Center for Astrophysics offered this response:

"Speaking of ranking the various disciplines
-

Politicians think they are Economists.

Economists think they are Social Scientists.

Socia
l Scientist thinks they are Psychologists.

Psychologists think they are Biologists.

Biologists think they are Organic Chemists

Organic Chemists think they Physicists.

Physicists think they are Mathematicians

Mathematicians think they are God,

God… ummm… s
o happens that God is an Astronomer."



The Lens model is based on correlation and multiple regression. Multiple regression
involves predicting one variable from many.


INTELLIGENCE


There is no unity in defining intelligence. No definition is acceptable to all psychologists. The definition of intelligence depends on the theorist and their view of what intelligence consists of. The different views of intelligence disagree on whether intelligence is composed of a single (global) entity or made up of multiple factors. Most theorists agree that intelligence represents a person's ability to adapt or adjust to situations or to solve problems.


THEORIES OF INTELLIGENCE


Binet (1857-1911)

1. Alfred Binet, a French psychologist, was the "father of intelligence testing" and the author of the original successful attempt to objectively assess ability in children. His test is called the Stanford-Binet today.

2. His test used the concept of the mental age of a person as a basis for determining whether a person was functioning at a level above or below their actual age. At each age level, the test questions were those that most persons could be expected to answer.

3. Binet defined intelligence as the ability to adjust to and to understand problems in a manner that permitted solving those problems. His four concepts of intelligence were:

a. Comprehension

b. Invention

c. Direction

d. Criticism


Terman

1. Terman had the Binet tests translated and implemented them for American use. He was a professor at Stanford University. Terman took the top 1000 scorers on the test and followed the students for 50 years. His book "A Genetic Study of Genius" is well worth reading. In California these 1000 students were nicknamed "Termites."

2. The scale is verbally biased. Children from foreign or low-income families do not test well.


Spearman

1. Charles Spearman was a statistician who taught in England. He developed "Spearman's Two-Factor Theory of Intelligence" in 1923. The theory contained:

a. "G" Factor. This general reasoning factor comprised the intellectual capacity for all mental processes.

b. "s" Factor. This specific intellectual function relates to specific skills like arithmetic, music, etc.


Wechsler

1. David Wechsler was a psychologist at New York City's Bellevue Hospital. He developed the Wechsler Intelligence Scales in 1949.

2. Wechsler's notion of intelligence was global. He thought intelligence was a unitary global trait, even though intellectual functioning is not.


Thorndike

1. Thorndike was the first educational psychologist.

2. His notion was that intelligence consisted of:

a. Altitude: the ability to perform tasks that are progressively more difficult.

b. Breadth: an assortment of tasks.

c. Speed: rate per unit of time.


Thurstone

1. L.L. Thurstone was a statistician who worked as an electrical engineer for Thomas Edison.

2. Thurstone proposed seven primary mental traits and developed tests to measure these traits:

a. visual or spatial ability

b. logical (verbal) ability

c. memory

d. inductive ability (obtained through the senses)

e. perceptual speed

f. problem solving

g. deductive ability (reasoning)


Guilford

1. Guilford is considered the world's leading statistical researcher. Until his death he was a professor at the University of Southern California in Los Angeles.

2. His notion of intelligence contains three dimensions and over 128 traits. A summary of his Structure of the Intellect (SOI) model is:

a. The Process (Operations)

(1) cognition

(2) memory

(3) convergent thinking (converging on a single answer)

(4) divergent thinking (thinking which branches out from the answer)

(5) evaluation (judgment and decision making)

b. The Material (Content)

(1) figural

(2) symbolic or semantic (abstract intelligence)

(3) behavioral (social intelligence)

c. The Product (Result)

(1) Units

(2) Classes

(3) Relations

(4) Systems

(5) Transformations

(6) Implications


SUMMARY OF INTELLIGENCE


Brody and Brody (1976) suggested that different subsets of individuals might account for positive relationships between different measures of ability by benefit of using the same abilities for different measures. Hyland (1981) suggests that this result may be because theoretical concepts in psychology (to a greater extent than in other disciplines) are often poorly described and therefore add ambiguity to an explanation. Theoretical terms therefore introduce conceptual ambiguity as well as uncertainty in measurement. This ambiguity affects generalization about a particular set of "observables," and it becomes impossible to generalize from the observations to other observations. The additional information present in a theoretical explanation that is correct can provide valuable guidance when attempting to apply an understanding of people gained in the laboratory and elsewhere to problems in the outside world. Inference and prediction are dependent on the intelligence of the decision maker. It is very difficult to specify the exact nature of this relationship at the present time.



Problem finding and alternative generation



Web Sites


Problem Finding Approach to Effective Corporate Planning

http://info.greenwood.com/books/08990302/0899302629.html


Team Problem Finding

http://www.ncrel.org/ncrel/sdrs/areas/issues/educators/


95% of debugging time is spent finding the cause of a problem

http://files.ocs.drexel.edu/courseweb/mcs350-991/lectures


US Army Simulation and Training Command

http://www.stricom.army.mil


Office of Science and Technology Policy

http://www.whitehouse.gov/WH/EOP/OSTP/html/OSTP_Home.html


War Games using real-time strategy

http://www.stargatesoftware.com/html/gamebuy/wargasm.htm


Brookings (Policy Analysis)

http://www.brook.edu/default.htm


Office of Congressional and Government Affairs

http://www4.nas.edu/ocga/reso.nsf


Federally Funded Research and Development Centers (FFRDC)

http://www.dtic.mil/lablink/areas_of_interest/ffrdc.html


RAND

http://www.rand.org/


Lawrence Livermore National Laboratory (LLNL)

http://www.llnl.gov/


Los Alamos: Bomb Builders to Custodians

http://.cnn.com/SPECIALS/coldwar/experience/thebomb/route/02.los.alamos/


Cold War Interactive Game

http://www.cnn.com/SPECIALS/cold.war/games/


Modeling and Simulation Resource Page (Professor Paul Fishwick: U. FL)

http://www.dml.cs.ucf.edu/cybray/fyi_modsim.html


Defense Modeling and Simulation Office

http://www.dmso.mil/dsmo/index.msq/


Advanced Research Projects Agency

http://www.arpa.mil/


Game Theory and Cold War Nuclear Confrontation (Part I)

http://www.cbc.net/~steve/sub1.html


The legacy of Nuclear Testing Respects no Boundaries

http://www.rama_usa.org/nuclear.htm


Conference Panel on Disarmament of Informatics

http://www-diotimath.upatras.gr/mirror/prncyb-1/1191.html








OVERVIEW: PROBLEM FINDING/ALTERNATIVE GENERATION


Thierauf (1987) suggested that Thomas Fuller is credited with saying, "a danger foreseen is half avoided." Problem finding may be conceptualized as solving future problems and identifying future opportunities. The scope of problem finding may be classified as:




Short Range or Operational



Medium Range or Tactical



Long Range or Strategic


The criterion for good problems usually does not vary with the scope of the problem. The criteria are:




Interest



Embedded in Theory



Likely to have Impact



Original in some aspect



Feasible or within conceptual, resource, and institutional limits


There are also several tests that can be applied to problem finding:




The "Goldilocks" test: Is the question so broad that it is untenable or so narrow that it
is dull or is it just ri
ght?



The five
-
year test: A five
-
year old should be able to understand the purpose of the
problem
-
solving project.



The blood test: People besides you blood relative should be interested in the results.


Problem Characteristics [From: Bourne, et al.]


Actual problems do not come in clear-cut categories. Problem attributes provide a general, though inadequate, methodology for conceptualizing the types of problems that theories have been designed to account for and that are the topic of research in cognitive psychology.


Well-Defined and Ill-Defined Problems


In distinguishing between well- and ill-defined problems, effort is directed toward the degree of constraint imposed on the problem solver. For example, a well-defined problem considers a task that sometimes appears on exams [e.g., prove Σx² = ΣX² − (ΣX)²/N]. In this example the problem solver is given a very clear starting point (i.e., the left side of the equation) and a very clear finishing point (i.e., the right side of the equation). The solution to the problem is known.
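The identity in that example can be checked numerically. A minimal sketch, with made-up scores and x denoting the deviation score X − mean(X):

    # Check the identity: sum of squared deviations equals the computational form,
    # i.e.  sum(x**2) == sum(X**2) - (sum(X))**2 / N.
    X = [4.0, 7.0, 6.0, 3.0, 10.0]     # illustrative scores only
    N = len(X)
    mean = sum(X) / N

    left = sum((xi - mean) ** 2 for xi in X)              # direct form from deviations
    right = sum(xi ** 2 for xi in X) - sum(X) ** 2 / N    # computational form

    print(left, right)                 # the two sides agree up to floating-point rounding
    assert abs(left - right) < 1e-9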


Now consider an ill-defined problem. For example, how do you improve the quality of life? In this example the problem source must further define the problem, or the problem solver must define the problem in a manner such that his solution (given the definition) is acceptable.


The two examples indicate the great variation possible in the degree of specification of a problem. For ill-defined problems a critical part of the person's or group's task is to define the problem in a potentially productive manner. Ill-defined problems usually require creative problem solving for their solution. A group methodology for solving ill-defined problems is Brainstorming (BS). BS usually operates under the following rules:


All ideas are acceptable


Criticism is forbidden at this stage


Building on others' ideas is encouraged


Metaphors and analogies are welcome

Problem finding in BS stresses attempting to identify the discrepancies between what is and what should be and trying to narrow the range of possible problems until the group arrives at the real problem. Thinking in problem finding is very fluid. Like raindrops running down a windowpane, intellectual processes come together, separate, and run together again.


Preparation for Thought and Judgment [From: Johnson]


A question well put is half answered.


Everything that paves the way and influences thought may be called preparation for thought. The contribution of past learning is preparation for thought. Preparation in a dynamic sense is a process of getting ready or adopting a preparatory set, based on present conditions as well as past learning. These factors control the subsequent production of relevant responses.


Classifications of Associations


Several attempts have been made to group experimentally the associations of thoughts into a few large classes based on the relationship of the response word to the stimulus word. The relationship between any one response word and its stimulus word may be due to some peculiarity of the stimulus word, but when many stimulus words are used such idiosyncrasies are likely to be balanced out of the totals. Therefore, if the classification procedure is a good one, any trends that appear can be taken as indications of what the subject is prepared to do at the moment the stimulus word is presented to them.


Your instructor was able to influence the association word by pre-exposure instruction sets. The use of positive, neutral, and negative instruction sets was significant in word recognition at p < .0001.


In another study reported by Johnson, researchers made use of the following classification of relationships:


Essential similarity: large-big, rough-rugged


General identification: cabbage-vegetable, hand-arm


Specific identification: ocean-Pacific, friend-Tom


Contingent identification: egg-breakfast, music-roommate


Essential opposition: hot-cold, fill-empty


Contingent opposition: food-hand, house-barn


The researchers had the responses classified by four analysts and calculated the inter-analyst agreement between one pair of analysts and the other pair. When the responses of 100 subjects to 20 words were classified, there was disagreement between the two pairs of analysts on 18 percent of the responses in one study and 20 percent in another study. Correlational analysis showed that the classes are not independent. The 5811 responses of these college students were classified into the above categories of relationship. It was later proven that the number of responses placed in any one category depends on the other categories that are available. It was also shown that the relationships given most frequently were given the fastest.
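The percent-agreement computation described here is simple to sketch. The codings below are invented for illustration and do not reproduce the study's data:

    # Two analyst pairs classify the same ten word-association responses;
    # disagreement is the percentage of responses given different categories.
    pair_a = ["essential similarity", "general identification", "essential opposition",
              "contingent identification", "essential similarity", "specific identification",
              "contingent opposition", "general identification", "essential opposition",
              "essential similarity"]
    pair_b = ["essential similarity", "general identification", "contingent opposition",
              "contingent identification", "essential similarity", "specific identification",
              "contingent opposition", "contingent identification", "essential opposition",
              "essential similarity"]

    agreements = sum(a == b for a, b in zip(pair_a, pair_b))
    disagreement_pct = 100 * (len(pair_a) - agreements) / len(pair_a)
    print(f"disagreement: {disagreement_pct:.0f}%")   # 20% for these invented codings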


Problem Representation


Digital Computers and Knowledge [From: Van Doren]


It is useful (heuristic) to think about computers in a different way to make their role in decision theory clear. Not only are they the 20th century's greatest invention, but they are also a necessary, though not sufficient, condition for decision theory processes today.


A distinction needs to be made between analog and digital computers. It is approximately analogous to the distinction between measuring and counting.


An analog computer is a measuring device that measures (responds to) a continuously changing input. A thermometer is a simple analog computer. A car speedometer is more complicated. Its output device, a needle that moves up and down on a scale, responds to (i.e., measures) continuous change in the voltage output of a generator connected to the drive shaft. Even more complicated analog computers coordinate a number of different changing inputs, for example temperature, fluid flow, and pressure. In this case the computer could be controlling processes in a chemical plant.


The mathematical tool used to solve continuous changes of input to a system is a differential equation. Analog computers are machines, some of them very complicated, that are designed to solve sets of differential equations.


The human brain is probably an analog computer, or it is like one in the sense that an airplane is like a bird (the aerodynamics are the same). The brain processes concurrent signals from the real world and gives directions to the muscles. The brain can solve a large number of differential equations concurrently, in real time, that is, as fast as the situation itself is changing. The brain has 2²⁵ neurons. No machine comes close in capacity to the human brain. Computer scientists call the brain "wetware."


All analog computers made by man have one serious defect: they do not measure accurately enough. The mix in the chemical plant is changing rapidly in several different ways: it is getting hotter or colder; the pressure is increasing or decreasing; the flow is faster or slower. All of these changes will affect the final product, and each calls for the computer to make subtle adjustments in the process. The devices used to measure the changes are therefore crucial. They must record the changes very rapidly and transmit the continuously changing information to the central processor. A very slight inaccuracy in measurement will obviously result in inaccurate results down the line.


The difficulty does not lie in the inherent ability of measuring devices to measure accurately. The difficulty comes from the fact that the devices record continuously. As a result there is a very small ambiguity in their readings. At what precise instant did one device record the temperature as 100 degrees? Was that the same instant that another device recorded the pressure as 1,000 pounds per square inch? When very slight inaccuracies are amplified, as they must be, the result can be errors of several parts per thousand, which is typical of even the best analog process controllers.


A digital computer has no such defect. It is a machine for calculating numbers, not measuring phenomena. An analog signal has continuously valid interpretations from the smallest to the largest value that is received. A digital signal has only a discrete number of valid interpretations. Usually, the number of valid interpretations is two: zero or one, off or on, black or white. The digital signal is therefore always clear, never ambiguous; as a result, calculations can be arranged to deliver exactly correct results.


Digital computers employ the binary number system to process information, although their outputs may be in the decimal system, or in words, or in pictures, or in sounds - whatever you wish. In the binary system there are only two digits, 1 and 0. The number zero is denoted 0. One is 1. Two is 10. Three is 11. Four is 100 (i.e., 2²). Five is 101. Eight is 1000. Sixteen is 10000. The numerals become long very quickly. Multiplication of even quite small numbers (in the decimal system) involves enormous strings of digits (in the binary system). This does not matter since the digital computer works so fast. A hand calculator can compute the result of multiplying two three-digit numbers (in the decimal system) and deliver the answer in the decimal system in much less than a second. It appears almost instantly.
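A short sketch illustrates the binary representations above and why multiplying two three-digit decimal numbers involves long binary strings (the particular numbers are arbitrary):

    # Binary representations of the small numbers discussed in the text.
    for n in [0, 1, 2, 3, 4, 5, 8, 16]:
        print(n, "->", bin(n)[2:])          # e.g. 4 -> 100, 8 -> 1000, 16 -> 10000

    # Multiplying two three-digit decimal numbers produces a result that
    # needs roughly twenty binary digits.
    a, b = 987, 654
    product = a * b
    print(a, "*", b, "=", product, "binary:", bin(product)[2:])
    print("binary length:", product.bit_length(), "bits")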


Because binary numerals are much longer than decimal numerals, the machine is required to perform a very large number of different operations to come up with an answer. Even a small, cheap calculator is capable of performing fifty thousand or more operations per second. Supercomputers are capable of performing a billion or even a trillion operations per second. Obviously, your small calculation does not trouble any of them.


There is a problem, though. Remember, the analog computer measures; the digital computer counts. What does counting have to do with measuring? If the analog device has difficulty measuring a continuously changing natural phenomenon, how does it apparently help to reduce the freedom of the digital signal to the point where it can only give one of two results?



Ancient Greek mathematicians attempted to solve this problem by finding common numerical units between the commensurable and the incommensurable. This is not mathematics. Descartes attempted to solve the problem by inventing analytical geometry to give precise number names to physical things, places, and relationships. This did not solve the problem. Newton solved the hardest part of the problem by inventing differential and integral calculus to deal with these changes. The result of the calculus was the creation of a mathematical system of the world, as he knew it, which worked with astonishing accuracy. Newton used the notion developed by Descartes that you break a large problem down into small steps and solve each of the small steps. This is what calculus does. It breaks down a change or movement into a very large number of steps, and then in effect climbs the steps, each of them very little, one at a time. The more steps a curve is broken down into, the closer the line joining the steps is to the curve.


If you can imagine the number of steps approaching (but never reaching) infinity, then the stepped line can be imagined as approaching the actual continuous curve as closely as you please. Thus the solution of an integration or of a differential equation is never absolutely accurate, but it can always be made as accurate as you please, which comes down to its being at least as accurate as the most accurate of all the other variables in the problem.
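The stepping idea can be sketched numerically: approximate the area under the curve y = x² on [0, 1] with an increasing number of rectangular steps. The function and interval are chosen only for illustration; the exact value from the calculus is 1/3:

    # Left-endpoint rectangles: climb the curve in n small steps.
    def stepped_area(n_steps: int) -> float:
        width = 1.0 / n_steps
        return sum((i * width) ** 2 * width for i in range(n_steps))

    # The more steps the curve is broken into, the smaller the error.
    for n in [10, 100, 1000, 10000]:
        approx = stepped_area(n)
        print(n, "steps:", round(approx, 6), "error:", round(abs(approx - 1 / 3), 6))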


This is an important mathematical idea that is often not understood by non-mathematicians. In dealing with the physical world, mathematics gives up the absolute precision that it enjoys in pure mathematical spaces, for example in geometrical proofs, where circles are absolutely circular, lines absolutely straight, etc. Reality is always slightly fuzzy. Our measurements of the real world are never perfectly precise, and it is our measurements, expressed as numbers, with which the mathematician deals.


The mapping of the real world to the mathematical world and back to the real world creates errors. This is because the mathematical world is more perfect than the real world. When measurements are aggregated and time- or event-stepped into a future state, the disaggregated number in the future state is not the same as the sum of its parts. Davis (1998) discussed the problem of developing models with multiple levels of resolution. Estimator theory from statistics was found useful. The choices of aggregate variables can cause errors. Any one set corresponds to a particular representation of the problem.


Problems in representation and measurement of change [From: Harris]


Procedural decisions in the measurement of change assume, with discouraging insistence, the character of dilemmas, of choices between equally undesirable alternatives. Three basic dilemmas are found in most measurement-of-change problems. They are:


Over-correction vs. under-correction dilemma


Unreliability vs. invalidity dilemma


Physicalism vs. subjectivism dilemma



LEVELS OF MEASUREMENT AS A NECESSARY BUT NOT SUFFICIENT CONDITION FOR A USEFUL DEFINITION OF INTELLIGENCE & AGENTS


Embedded in the measurement problem is the mapping of reality to the measurement. Siegel (1956) developed the notion of the limits of measurement with levels of measurement. The operations allowable limit the use of the data. For example:


Scale: Nominal

Defining Relation(s): Equivalence

Statistics Permitted: Mode; Frequency

Example: Numbers on football jerseys

Formal Mathematical Properties:

1. Reflexive: x = x for all values of x

2. Symmetrical: If x = y, then y = x

3. Transitive: If x = y and y = z, then x = z


Scale: Ordinal

Defining Relations: 1. Equivalence; 2. Greater than

Statistics Permitted: Median; Percentile

Example: A Sergeant (three stripes) is greater than a Corporal (two stripes). Add a stripe to each and the relationship stays the same. Add a constant to the numbers and the relationship stays the same.

Formal Mathematical Properties:

1. Irreflexive: It is not true for any x that x > x

2. Asymmetrical: If x > y, then y is not greater than x

3. Transitive: If x > y and y > z, then x > z


Scale: Interval

Defining Relations: Equivalence; Greater than; Known ratio of two intervals

Statistics Permitted: Mean; Standard Deviation; Product-Moment Correlation/Regression

Example: Thermometers (C/F) [except absolute zero]

Formal Mathematical Properties: Addition, Subtraction, Multiplication, Division


Scale: Ratio

Defining Relations: Equivalence; Greater than; Known ratio of two intervals; Known ratio of any two scale values

Example: Pounds, Meters

Formal Mathematical Properties: Isomorphic to mathematics


What does this mean?


Utility Theory is at an ordinal scale. Intelligence measures (e.g., IQ) are also at an ordinal scale. Engineering measurement of physical objects is at a ratio scale. We do not have robust enough measures of intelligence to map cognitive processes directly to software at this time.
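A small sketch of how the level of measurement limits which statistics are meaningful; the rank codes and weights below are invented for illustration:

    import statistics

    # Ordinal data: military grades have an order but no meaningful unit or zero.
    ranks = {"Private": 1, "Corporal": 2, "Sergeant": 3}
    squad = ["Private", "Corporal", "Corporal", "Sergeant", "Sergeant", "Sergeant"]
    codes = [ranks[r] for r in squad]
    print("median rank code:", statistics.median(codes))   # permissible for ordinal data
    print("mean rank code:", statistics.mean(codes))       # computable, but the value has no unit meaning

    # Ratio data: weights in pounds have equal intervals and a true zero,
    # so means, standard deviations, and ratios are all meaningful.
    weights_lbs = [152.0, 171.5, 160.0, 181.0, 149.5, 176.0]
    print("mean weight:", statistics.mean(weights_lbs))
    print("standard deviation:", round(statistics.stdev(weights_lbs), 2))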


The notions of agents



The notion of an agent developed in object-oriented programming. Object-oriented programming attempted to develop schemes for allowing independent application modules to cooperate in solving particular problems. Tello (1989) suggested that all main modules, or agents, of large application software be capable of cooperating with each other as a hard-wired object system. Cooperation could be effected by:


1. Cooperation beginning when one agent makes a request of the other agent.

2. Cooperation as an essential feature of the way the objects operate, which does not need to be initiated externally.


Korf (1985) further developed the notion using software macro-operators for search. He applied his notion to solving Rubik's Cube. These notions matured in Object-Oriented Programming in Common Lisp (Keene, 1989). Software agents search the Internet to find knowledge; these agents are called "knowbots." Intelligent agents are a work in progress that could profit from sound empirical underpinnings in both theory and methodology.
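A minimal sketch (not Tello's or Keene's code) of the first mode of cooperation, in which cooperation begins when one agent makes a request of another; the Agent class and its request method are illustrative assumptions:

    class Agent:
        def __init__(self, name, skills):
            self.name = name
            self.skills = skills          # map of task name -> function that performs it

        def request(self, other: "Agent", task: str, *args):
            # Cooperation is initiated externally: this agent asks another
            # agent to perform a task the other agent knows how to do.
            if task in other.skills:
                return other.skills[task](*args)
            raise ValueError(f"{other.name} cannot handle task '{task}'")

    # Two cooperating modules: a planner delegates arithmetic to a calculator agent.
    calculator = Agent("calculator", {"add": lambda a, b: a + b})
    planner = Agent("planner", {})
    print(planner.request(calculator, "add", 2, 3))   # 5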








References


Armed Forces Staff College (1993). The Joint Staff Officer's Guide 1993 (AFSC Pub 1). Washington, DC: US Government Printing Office.


Battilega, J.A., & Grange, J.K. (1984). The Military Applications of Modeling. Wright-Patterson Air Force Base, OH: Air Force Institute of Technology Press.


Berkowitz, B.D. (September 5, 1999). Facing the Consequences: As El Shifa Shows, It Takes More than Intelligence to Make Smart Decisions. The Washington Post, B1-B5.


Brody, E.B., & Brody, N. (1976). Intelligence: Nature, Determinants, and Consequences. New York: Academic Press.


Bourne, L.E., Ekstrand, B.R., & Dominowski, R.L. (1971). The Psychology of Thinking. Englewood Cliffs, NJ: Prentice Hall.


Davis, P.K., & Bigelow, J.H. (1998). Experiments in Multiresolution Modeling (MRM). Santa Monica, CA: RAND.


Druzhinin, V.V., & Kontorov, D.S. (1972). Decision Making and Automation: Concept, Algorithm, Decision (A Soviet View). (Translated and published under the auspices of the US Air Force.) Washington, DC: US Government Printing Office.


Harris, C.W. (Ed.) (1963). Problems in Measuring Change. Madison, WI: University of Wisconsin Press.


Hillier, F.S., & Lieberman, G.J. (1974). Operations Research (2nd ed.). San Francisco, CA: Holden Day.


Hughes, W.P. (1989). Military Modeling (2nd ed.). Alexandria, VA: Military Operations Research Society.


Johnson, D.M. (1955). The Psychology of Thought and Judgment. Westport, CT: Greenwood Press.


Keene, S.E. (1989). Object-Oriented Programming in Common Lisp. Reading, MA: Addison-Wesley.


Korf, R.E. (1985). Learning to Solve Problems by Searching for Macro-Operators. Boston, MA: Pitman Advanced Publishing Programs.


Morse, P.M., & Kimball, G.E. (1970). Methods of Operations Research (1st ed., revised). Los Altos, CA: Peninsula Publishing.


National Defense University (1988). Essays on Strategy. Washington, DC: US Government Printing Office.


Roberts, P.A. (1988). Technology Transfer: A Policy Model. Washington, DC: National Defense University Press.


Siegel, S. (1956). Nonparametric Statistics. New York: McGraw-Hill.


Stokey, E., & Zeckhauser, R. (1978). A Primer for Policy Analysis. New York: W.W. Norton & Company.


Tello, E.R. (1989). Object-Oriented Programming for Artificial Intelligence: A Guide to Tools and System Design. Reading, MA: Addison-Wesley.


Thierauf, R.J. (1987). A Problem-Finding Approach to Effective Corporate Planning. Westport, CT: Quorum Books.


Van Doren, C. (1991). A History of Knowledge: Past, Present, and Future. New York: Ballantine Books.


Winston, W.L. (1994). Operations Research: Applications and Algorithms. Belmont, CA: Duxbury Press.


Williams, H.P. (1985). Model Building in Mathematical Programming. New York: Wiley.