Intro to AI


11-11-2013


Connectionism


Symbolic AI vs. Subsymbolic AI

Artificial Life

Artificial Neural Networks: ANNs

Read: AIMA Chapter 18, Learning from Examples

Return and discuss HW#7

Exam #2, Wednesday, 11/13, 7:00 pm, SC166



“Everything of interest in cognition happens above the 100 millisecond level, the time it takes you to recognise your mother.”

Herb A. Simon

“Everything of interest in cognition happens below the 100 millisecond level, the time it takes you to recognise your mother.”

Douglas R. Hofstadter

Newell & Simon, Turing Award Lecture, 1976

Intelligent activity, in either human or machine, is achieved through use of:

1. Symbol patterns to represent significant aspects of a problem domain.

2. Operations on these patterns to generate potential solutions to problems.

3. Search to select a solution from among these possibilities.


A physical symbol system is a machine that
produces through time an evolving collection of
symbol structures. Such a system exists in a world
of objects wider than just these symbolic
expressions themselves.

The Physical Symbol System Hypothesis


A physical symbol system has the necessary and sufficient means for general intelligent action.

PSS => Intelligence


Well…maybe

Intelligence => PSS

???


Robot (Moravec, 1999)

[Figure: Computers and Humans compared along the Calculate, Sense & Act, and Reason dimensions]
On the fringes:



Humans are slow, error-prone calculators.

Robots sense and act no better (and much slower) than frogs.

The battle for the middle ground:

Deep Blue beat the best human chess player.

But minimax search ≠ “reasoning”.



Should we care?


              Living organisms     Computers
Sense & Act:  10,000,000+ years    15+ years
Reason:       100,000+ years       30+ years
Calculate:    1,000+ years         50+ years


Evolution of reasoning was tightly constrained and influenced by sensorimotor capabilities. Else extinction!

GOFAI systems are often in their own little worlds, making unreasonable assumptions about an independent sensorimotor apparatus.

To achieve AI’s scientific goal of understanding human intelligence, the road from sense-and-act to reasoning via simulated evolution may be the only way.

But, to achieve AI’s engineering goals, both approaches seem important. E.g. Deep Blue (minimax search) for chess, Samuel’s checkers player vs. Blondie24 in checkers, etc.

[Figure: GOFAI and New AI positioned along the Calculate / Sense & Act / Reason spectrum]

GOFAI


Disembodied reasoning systems can’t plug-and-play on robots.

Lack of common sense => no general human reasoning abilities.

New AI

Embodied S&A gives basis for common sense but has not yet scaled up to sophisticated human-like abstract reasoning.


Complex intelligence is better understood and more successfully embodied in artifacts by working up from low-level sensory-motor agents than from abstract cognitive mechanisms of rationality (e.g. logic, means-ends analysis, etc.).

Cognitive Incrementalism: Cognition (and hence common sense) is an extension of sensorimotor behavior.

Brooks, Steels, Pfeifer, Scheier, Beer, Nolfi, Floreano…


[Figure: initial configuration of the Langton Loop, a self-replicating pattern in an 8-state cellular automaton]

Langton Loop

Cellular Automata

Simulated Real Worlds

Simple Robots
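To make the cellular-automaton idea concrete, here is a minimal sketch of a synchronous 2D CA update in Python. The rule used in the example is a made-up placeholder (each cell copies its northern neighbour), not Langton's actual 8-state transition table.

# Minimal synchronous 2D cellular automaton update (illustrative only;
# the example rule is a hypothetical placeholder, not Langton's rule table).

def step(grid, rule):
    """Apply `rule` to every cell of a 2D grid of integer states.

    `rule` maps (center, north, east, south, west) -> new state.
    The grid wraps around (toroidal boundary)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            center = grid[r][c]
            north = grid[(r - 1) % rows][c]
            south = grid[(r + 1) % rows][c]
            west = grid[r][(c - 1) % cols]
            east = grid[r][(c + 1) % cols]
            new[r][c] = rule(center, north, east, south, west)
    return new

# Example placeholder rule: each cell copies its northern neighbour's state.
if __name__ == "__main__":
    grid = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
    grid = step(grid, lambda c, n, e, s, w: n)
    for row in grid:
        print(row)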


Synthetic: Bottom-up, multiple interacting agents

Self-Organizing: Global structure is emergent.

Self-Regulating: No global/centralized control.

Adaptive: Learning and/or evolving

Complex: On the edge of chaos; dissipative

Key focus of Situated & Embodied AI (i.e., Alife AI)


But now, often at the level of simple organisms (ants, flies, frogs, etc.)


Machine Learning (ML) is also a key part of GOFAI.


Alife AI is very interested in subsymbolic ML techniques:


Artificial Neural Networks (ANNs)


Evolutionary Algorithms (EAs)


Learning: agents modify their own behavior (normally to improve performance) in their lifetime.

Evolution: populations of agents change their behavior over the course of many generations.

Both: Evolving populations of learning agents
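A minimal sketch of the “Evolution” idea above, in Python: a population of bit-string agents changes over generations through selection and mutation. The bit-string representation and the count-the-ones fitness function are illustrative assumptions, not any particular published EA.

# Bare-bones evolutionary loop (a sketch, not a specific published algorithm).
import random

def fitness(agent):
    return sum(agent)                      # toy objective: maximise the number of 1s

def mutate(agent, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in agent]

def evolve(pop_size=20, length=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # keep the fitter half, then refill the population with mutated copies
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))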

Why is the ALife approach to AI worth pursuing?

1. Intelligence (rationality, intentionality, cognition) is (often only) in the eye of the observer.

2. Mind, body and environment are very tightly coupled, with cognition built on top of the sensorimotor apparatus. Sensing and acting come first (in both evolution and human development), so our understanding of cognition is enhanced by knowing how it arises from and interacts with sensing and acting.


Also ‘connectionism’, ‘Parallel Distributed Processing’, ‘subsymbolic AI’


AI technique


Analogous to processes in the brain


“Intelligence emerges from the interactions of
large numbers of simple processing units”
(Rumelhart et al., 1986)


Roughly based on brains (some simplification is made)

from Searleman & Searleman, Introduction to Cognition


Excitatory (E) and Inhibitory (I) impulses

(from Searleman & Searleman, Introduction to Cognition)



Dense: Human brain has 10^11 neurons, 10^14 synapses

Highly Interconnected: Human neurons have 10^4 fan-in.

Neurons firing: send action potentials (APs) down the axons when sufficiently stimulated by the SUM of incoming APs along the dendrites.

Neurons can either stimulate or inhibit other neurons.

Synapses vary in transmission efficiency

[Figure: neuron anatomy, labeling the axon, dendrites, and synapses]


Robust: fault tolerant and degrades gracefully

Flexible: can learn without being explicitly programmed

Can deal with fuzzy, probabilistic information

Is highly parallel

Key intuition: Much of intelligence is in the connections between the 10 billion neurons in the human brain.


Neuron switching time is roughly 0.001 second; scene
recognition time is about 0.1 second. This suggests
that the brain is massively parallel because 100
computational steps are simply not sufficient to
accomplish scene recognition.


Development: Formation of basic connection topology

Learning: Fine-tuning of topology + major synaptic-efficiency changes.


The matrix IS the intelligence!


Distributed representational and computational
mechanism based (very roughly) on
neurophysiology.


A collection of simple interconnected processors
(neurons) that can learn complex behaviors &
solve difficult problems.


Wide range of applications:

Supervised Learning

o Function Learning (mapping from inputs to outputs)

    Time-Series Analysis, Forecasting, Controller Design

o Concept Learning

    Standard Machine Learning Classification tasks: Features => Class

Unsupervised Learning

o Pattern Recognition (Associative Memory models)

    Words, Sounds, Faces, etc.

o Data Clustering

    Unsupervised Concept Learning

Characteristics


Large number of simple neuron-like processing elements


Large number of weighted connections between the
elements (the weights encode the knowledge)


Highly parallel, distributed control


Fault-tolerant.


Degrades gracefully.


Inductive learning of internal representation


Weights are tuned automatically


Each unit (node) receives signals from its input
links and computes a new activation level that
it sends along all output links.


Computation is split into two steps:

(1) in_i = Σ_j W_j,i · a_j , the linear step, and then

(2) a_i ← g(in_i), the nonlinear step.
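A minimal Python sketch of the two-step unit computation above; the sigmoid used here as g is just one possible activation function, not the only choice.

# A single ANN unit: weighted sum of incoming activations (linear step),
# then an activation function g (nonlinear step). Sketch only, no library.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def unit_activation(incoming, weights, g=sigmoid):
    """in_i = sum_j W_j,i * a_j ; a_i = g(in_i)"""
    in_i = sum(w * a for w, a in zip(weights, incoming))
    return g(in_i)

# Example: a unit with three input links
print(unit_activation([0.5, -1.0, 0.25], [0.8, 0.2, 1.5]))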

[Figure: a single node (unit): activations a_j arrive on input links with weights W_j,i; the input function computes in_i, the activation function g produces a_i = g(in_i), which is sent along all output links]

[Figure: plots of the step, sign, and sigmoid (logistic) activation functions]

step(x) = 1, if x >= threshold
          0, if x < threshold
(in the figure above, threshold = 0)

sign(x) = +1, if x >= 0
          -1, if x < 0

sigmoid(x) = 1/(1 + e^-x)
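The same three activation functions, written out directly in Python for reference.

# Step, sign, and sigmoid activation functions as defined above.
import math

def step(x, threshold=0.0):
    return 1 if x >= threshold else 0

def sign(x):
    return +1 if x >= 0 else -1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, step(x), sign(x), sigmoid(x))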

Adding an extra input with activation a_0 = -1 and weight W_0,i = t is equivalent to having a threshold at t. This way we can always assume a 0 threshold.

o(x_1, …, x_n) = 1, if Σ_i w_i x_i > 0
                 0, otherwise

[Figure: a threshold unit with inputs x_0 … x_n, weights w_0 … w_n, a weighted sum Σ_i w_i x_i, and output o]

Threshold units
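A small Python sketch combining the threshold unit with the bias trick from the previous slide: the threshold t is folded in as weight w_0 on a fixed extra input of -1, so the unit always compares its weighted sum against 0. The AND example at the bottom is an illustrative assumption.

# Threshold unit with the bias trick: a_0 = -1, w_0 = t.
def threshold_unit(x, w, t):
    """Output 1 if sum_i w_i * x_i exceeds threshold t, else 0."""
    inputs = [-1.0] + list(x)        # prepend the fixed bias input a_0 = -1
    weights = [t] + list(w)          # prepend the threshold as weight w_0 = t
    total = sum(wi * xi for wi, xi in zip(weights, inputs))
    return 1 if total > 0 else 0

# Example: a 2-input unit computing logical AND with threshold 1.5
print(threshold_unit([1, 1], [1.0, 1.0], t=1.5))   # -> 1
print(threshold_unit([1, 0], [1.0, 1.0], t=1.5))   # -> 0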

[Figure: a biological neuron, labeling the cell body, dendrites, axon, and a synapse]