
Artificial Intelligence

A Brief History

1

Great Expectations

"It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind can be applied."

"We have invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem."

Herbert Simon, 1957

2

Early Successes


Logic Theorist proved 38 out of 52 theorems of Chapter 2 of Principia Mathematica

Geometry Theorem Prover proved theorems too hard for undergraduate students in mathematics

ELIZA, a computer-based psychotherapist, helped many hypochondriacs

MYCIN, an expert system to diagnose blood infections, was able to perform considerably better than junior doctors

3

Trouble


Solutions developed for "microworlds" did not apply in the real world (computational complexity)

Expert systems could not be extended to broader domains (context)

Fiasco of the automatic translation project (context): "The spirit is willing but the flesh is weak" allegedly came back as "The vodka is good but the meat is rotten"

Fiasco of the planning systems (the frame problem)

4

Planning

[Diagram: initial state S1 with block B stacked on A; blocks C and D clear]

T[On(B,A), S1]

T[Clear(B), S1]

T[Clear(C), S1]

T[Clear(D), S1]

A ≠ B ≠ C ≠ D

Plan a sequence of actions α = <A1,...,An> such that:

T[On(A,C), Result(α, S1)]

T[On(D,A), Result(α, S1)]





5

Planning, cont.

Available actions:


stack: S(x,y)


unstack: U(x,y)


For every atomic action we specify its effects through axioms:

T[Clear(x), S] & T[Clear(y), S] & x ≠ y → T[On(x,y), Result(<S(x,y)>, S)]

T[On(x,y), S] & T[Clear(x), S] → T[Clear(y), Result(<U(x,y)>, S)]







6

Planning, cont.

[Diagram: the successive block configurations produced by applying U(B,A), then S(A,C), then S(D,A) to the initial state S1]
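To check that this plan actually achieves the goals, here is a minimal runnable sketch (my own illustration, not part of the original slides or their logical formalism): a situation is a set of On/Clear facts, and the two actions rewrite it in line with the effect axioms above. Since the axioms leave open where an unstacked block ends up, the sketch simply puts it on the table.

```python
# Minimal blocks-world sketch (illustration only, not the lecture's theorem prover).
# A situation is a set of facts; stack/unstack mirror the effect axioms above.

def stack(state, x, y):
    """S(x,y): if x and y are clear and distinct, put x on y."""
    assert ("Clear", x) in state and ("Clear", y) in state and x != y
    return (state - {("Clear", y), ("On", x, "table")}) | {("On", x, y)}

def unstack(state, x, y):
    """U(x,y): if x sits on y and x is clear, take x off y (here: onto the table)."""
    assert ("On", x, y) in state and ("Clear", x) in state
    return (state - {("On", x, y)}) | {("Clear", y), ("On", x, "table")}

# Initial situation S1: B is on A; A, C, D stand on the table; B, C, D are clear.
s1 = {("On", "B", "A"), ("On", "A", "table"), ("On", "C", "table"),
      ("On", "D", "table"), ("Clear", "B"), ("Clear", "C"), ("Clear", "D")}

# The plan from the slide: U(B,A), S(A,C), S(D,A).
s = stack(stack(unstack(s1, "B", "A"), "A", "C"), "D", "A")
print(("On", "A", "C") in s, ("On", "D", "A") in s)  # True True
```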

7

Planning - proof

T[On(B,A), S1]

T[Clear(B), S1]

T[Clear(A), S2], where S2 = Result(<U(B,A)>, S1)

T[Clear(C), S2]  (but why is C still clear after U(B,A)? nothing proved so far guarantees it)

T[On(A,C), S3], where S3 = Result(<S(A,C)>, S2)

Ad hoc solution: let's add frame axioms for the unstack action:

T[Clear(x), S] → T[Clear(x), Result(<U(y,z)>, S)]

false!

8

The Frame Problem (AI version)

How to formalize changes (and lack thereof) in
the world as a result of our actions.


Adding the frame axioms does not solve the problem:



It is impractical (we would need millions of such axioms)


It is not intuitive (we do not do it!)

It is often false (what should we do when one robot is moving the blocks while another one is painting them?)



9


Default Logic

Commonsense law of inertia: things stay as they are unless we have knowledge to the contrary.

A default rule is written α : β / γ, where α, β, γ are formulas.

Once α has been established and β is consistent with what we know, we conclude γ.

Example: take the generic truth "Birds fly". In Default Logic we write this as:

bird(x) : flies(x) / flies(x)

If we know that Tweety does not fly (because he is an ostrich), the rule will not fire despite the fact that Tweety is a bird.
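The consistency test in this rule is easy to make concrete. Below is a tiny sketch (my own hypothetical illustration; the predicate names are just strings) of a single default rule that fires only when the negation of its justification is not already known:

```python
# One default rule alpha : beta / gamma over ground literals.
# "-p" denotes the negation of "p".

def negate(literal):
    return literal[1:] if literal.startswith("-") else "-" + literal

def apply_default(known, alpha, beta, gamma):
    """Conclude gamma if alpha is known and beta is consistent with what is known."""
    if alpha in known and negate(beta) not in known:
        return known | {gamma}
    return known

# Tweety is a bird, but we also know Tweety does not fly (he is an ostrich):
kb = {"bird(tweety)", "-flies(tweety)"}
kb = apply_default(kb, "bird(tweety)", "flies(tweety)", "flies(tweety)")
print("flies(tweety)" in kb)  # False: the rule is blocked

# A bird about which nothing contrary is known: the default fires.
kb2 = apply_default({"bird(woodstock)"}, "bird(woodstock)",
                    "flies(woodstock)", "flies(woodstock)")
print("flies(woodstock)" in kb2)  # True
```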

10

Default Logic: theory

E is an extension of <W,D> iff there exist E_0, E_1, E_2, ... such that:

E_0 = W

E_(i+1) = Cn(E_i) ∪ { γ | (α : β / γ) ∈ D, α ∈ E_i, ¬β ∉ E }

E = ∪_(i≥0) E_i

11

Default Logic: example

[Diagram: the Nixon diamond: Nixon is a Quaker and a Republican; Quakers are typically pacifists, Republicans typically are not]

W = {R(nixon), Q(nixon)}

D = { Q(x) : P(x) / P(x),  R(x) : ¬P(x) / ¬P(x) }

This theory has two extensions:

E_1 = Cn(W ∪ {P(nixon)})

E_2 = Cn(W ∪ {¬P(nixon)})
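A rough way to see both extensions emerge is to apply the defaults in every possible order, firing a rule only when its justification is still consistent with what has been derived. The sketch below is my own illustration, not a full implementation of Reiter's semantics (it handles ground literals only, has no logical closure Cn, and checks justifications only against facts derived so far), but it is enough to exhibit the two extensions of the Nixon theory:

```python
from itertools import permutations

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def candidate_extensions(W, D):
    """Apply the defaults (alpha, beta, gamma) in every order; fire a rule when
    alpha is present and the negation of beta has not been derived; keep the
    distinct consistent fixed points."""
    results = set()
    for order in permutations(D):
        E = set(W)
        changed = True
        while changed:
            changed = False
            for alpha, beta, gamma in order:
                if alpha in E and negate(beta) not in E and gamma not in E:
                    E.add(gamma)
                    changed = True
        if all(negate(f) not in E for f in E):  # discard inconsistent results
            results.add(frozenset(E))
    return results

W = {"Q(nixon)", "R(nixon)"}
D = [("Q(nixon)", "P(nixon)", "P(nixon)"),      # Quakers are typically pacifists
     ("R(nixon)", "-P(nixon)", "-P(nixon)")]    # Republicans typically are not

for E in candidate_extensions(W, D):
    print(sorted(E))
# One extension contains P(nixon), the other contains -P(nixon).
```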



12

Default Logic: problem

This theory also has two extensions. This time, however, this does not agree with our intuitions.

[Diagram: Hermann is Amish and was born in Pennsylvania (and hence in the USA); Amish typically speak German, while people born in the USA typically do not]

We solved the Frame Problem only to face the problem of relevance.

13

What Next?

17

Path 1: Stay the Course

Project CYC

The problem of AI is commonsense knowledge: let's add it then!

Goals:

30 people are entering data from newspapers, ads, dictionaries, etc.

After 6 years a million assertions have been entered; the goal was 100 million

CYC had its own ontology, representations of causal relationships and simple rules of relevance

The project came to an end in 1994 (after $50 million); its remnants are still around today

enCYClopedia

18

Path 2: Change the Paradigm

Dreyfus’s criticism: AI’s basic assumptions are wrong!



Biological assumption: the brain is a symbol-manipulating device like a digital computer.

Psychological assumption: the mind is a symbol-manipulating device like a digital computer.

Epistemological assumption: intelligent behavior can be formalized and thus reproduced by a machine.

Ontological assumption: the world consists of independent, discrete facts.

19

Path 2: cont.

The philosophical ancestors of AI (according to Dreyfus):

Descartes: all reasoning consists in manipulating symbolic representations composed of simple ideas

Kant: all concepts can be built out of simple elements by means of rules

Frege: rules can be formalized so that they can be applied without any need to understand or interpret them


20

Path 2: cont.

Mind (intelligence) is:

situated in the environment (Heidegger: In-der-Welt-sein)

embodied (Merleau-Ponty: le corps propre)

AI Lab at MIT (Rodney Brooks) builds the first robots following these tenets (e.g. Big Dog).

Dreyfus's views are further developed by: Andy Clark, John Haugeland, Michael Wheeler, Walter Freeman

New trends in cognitive science: embodied cognition, dynamicism, neurophenomenology, neurodynamics...

21

Path 3: Change the Goal


Distinguish between strong and weak AI

Strong AI: we build machines that really think

Weak AI: we build machines that behave as if they were thinking

We are only interested in weak AI

Even weaker version: we build machines that behave rationally

We stay with the logicist (logic-based) approach

22

Path 3: State of the Art




Which of the following can be done at present?




Play a decent game of table tennis



Drive safely along a curving mountain road



Drive safely along Telegraph Avenue



Buy a week’s worth of groceries on the web



Buy a week’s worth of groceries at Berkeley Bowl



Play a decent game of bridge



Discover and prove a new mathematical theorem



Design and execute a research program in molecular biology



Write an intentionally funny story



Give competent legal advice in a specialized area of law



Translate spoken English into spoken Swedish in real time



Converse successfully with another person for an hour



Perform a complex surgical operation



Unload any dishwasher and put everything away

23


AI and Cognitive Science



The approaches to AI, mapped on two dimensions:

              Rationally    Humanly
Thinking      Logic         Cognitive Science
Acting        AI today      AI 50 years ago

The central question in the discussion about the methodology of AI: can AI learn from Cognitive Science?

Has aeronautics learned anything from ornithology?

25