PowerPoint Slides

AI and Robotics

23 Feb. 2014


John Searle

Can Computers Think?

Computational-Representational Understanding of Mind (CRUM)

The central hypothesis of cognitive science
is that thinking can best be understood in
terms of representational structures in the
mind and computational processes that
operate on those structures. (Thagard)

Analogy with a Computer



Data structures +
algorithms = running
programs

Mental representations +
computational procedures
= thinking
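The analogy can be made concrete with a minimal sketch (illustrative only; the choice of list-plus-binary-search is ours, not Thagard's): a data structure paired with an algorithm that operates on it yields a running program, just as, on the CRUM view, representations plus computational procedures yield thinking.

```python
# Illustrative sketch of "data structures + algorithms = running programs":
# a sorted list (data structure) plus binary search (algorithm).

def binary_search(items, target):
    """Algorithm operating on a sorted data structure."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

sorted_data = [2, 3, 5, 7, 11, 13]    # the data structure
print(binary_search(sorted_data, 7))  # algorithm + data = running program; prints 3
```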

Abstract of Argument

1) Intentionality is a product of causal features of the brain:
certain brain processes are sufficient for intentionality.

2) Instantiating a computer program is never by itself sufficient
for intentionality.

3) So, the explanation of how the brain produces intentionality
cannot be that it does it by instantiating a computer
program. (1, 2)

4) Any mechanism capable of producing intentionality must
have causal powers equal to those of the brain. (from 1)

5) Any attempt to create intentionality artificially would
have to duplicate the causal powers of the brain. (2, 4)



Intentionality (df.): the characteristic feature of
cognitive states is that they invariably represent
or are about something beyond themselves.

intentionality = representation = meaning

Primary Intuition

Syntax (formal rules) is not
sufficient for semantics (meaning).


The belief that it is raining is not defined as a
formal shape, but as a certain mental
content with conditions of satisfaction.

“Both consciousness and intentionality are
biological processes caused by lower
neuronal processes in the brain, and neither
is reducible to something else.”

Why Intrinsic?

The mental/non-mental distinction must be
intrinsic to the system; otherwise, it would
be up to any beholder to treat people as
non-mental.

Searle’s Point

Thinking is not a computational process over
purely formally specified elements.
(Thinking is not following a program; it is
not purely formal or abstract.)

Chinese Room

Set Up


Initial large batch of Chinese writing


Second batch of writing plus rules
for correlating with the first


Third batch plus rules
correlating these symbols with first two and
rules saying what symbols to give back
when given third batch.


Answers are indistinguishable from those of
native Chinese speakers. Person in the room
“can pass the Turing test”
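The setup above can be sketched as a program (a hedged sketch; the rulebook entries and function names are our invention, not Searle's). The rules correlate incoming symbol strings with outgoing ones purely by their shapes; nothing in the program assigns the symbols any meaning.

```python
# Minimal sketch of the room's rulebook: purely formal rules that
# correlate input symbol strings with output symbol strings.

RULEBOOK = {                 # "rules saying what symbols to give back"
    "你好吗": "我很好",         # the operator need not know these mean
    "你叫什么名字": "我叫王",     # "How are you?" / "I'm fine", etc.
}

def room_operator(squiggles: str) -> str:
    """Match the input shape against the rules and return the
    correlated shape: pure syntax, no semantics."""
    return RULEBOOK.get(squiggles, "请再说一遍")  # default: "please repeat"

print(room_operator("你好吗"))  # fluent-looking output, zero understanding
```

The point of the sketch is that the lookup is defined entirely over uninterpreted strings: swapping every Chinese character for an arbitrary token would leave the program's behavior unchanged.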

Person in the room does not understand a
word of Chinese.

System as a whole does not understand a
word of Chinese.

Main Claim Against Strong AI

Whatever formal principles you put into the
computer, they will not be sufficient for
understanding, since a human will be able
to follow the formal principles without
understanding anything.

Possible Replies

The Systems Reply

Individual person doesn’t understand the
story, but the system does.


Let the person memorize the rules, etc. so that all
processing is in the person. He still doesn’t
understand Chinese.

If inputs, outputs, and a program in between are
sufficient for cognition, then our stomach, heart,
liver, etc. are all thinking.

The Robot Reply

Suppose we had a robot that
walks, moves about, hammers nails, eats
and drinks, etc.


Robot reply tacitly concedes that cognition is not
solely a matter of formal symbol manipulation.

Suppose we put a person inside the robot who
doesn't know that she is hooked up via the robot.
The robot still doesn't have any intentional states.

The Brain Simulator Reply

Suppose we have a program that simulates
the actual sequence of neuron firings at the
synapses of the brain of a native Chinese
speaker when he understands stories and
gives answers to them.


Strong AI says that we don't need to know how
the brain works to know how the mind works: we
can understand cognition as computational
processes over formal elements.

Suppose a man is hooked up to water pipes that
simulate neural firings and synaptic connections.
Man + pipes doesn't understand Chinese.
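What "simulating the sequence of neuron firings" amounts to can be sketched as follows (a hedged sketch; the weights and thresholds are invented). From the program's point of view, the simulation is just threshold arithmetic on numbers; the same computation could be realized by water levels in Searle's pipes, and nothing in it is about Chinese stories.

```python
# Hedged sketch of a formal neuron-firing simulation: each "neuron"
# fires (returns 1) iff its weighted input sum crosses a threshold.

def neuron(inputs, weights, threshold=1.0):
    """Fire iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A tiny two-layer "net" wired by hand; all parameters are arbitrary.
hidden = [neuron([1, 0], [0.6, 0.9]),   # stays silent (0.6 < 1.0)
          neuron([1, 0], [1.2, -0.4])]  # fires (1.2 >= 1.0)
print(neuron(hidden, [0.5, 0.7]))       # output spike: prints 0 (0.7 < 1.0)
```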

The Combination Reply

Imagine a robot with a brain-shaped
computer lodged in its cranial cavity;
imagine the computer programmed with all
the synapses of a human brain; imagine the
whole behavior of the robot is
indistinguishable from human behavior.


Doesn’t help AI. According to strong AI,
instantiating a formal program with the right
input and output is a sufficient condition of,
indeed is constitutive of, intentionality.

We would not attribute intentionality to it if
we knew it had a formal program.

Other Objections


Other Minds Reply

The only way we know
anything about other minds is by behavior.


Many Mansions Reply

Some day we
will build computers that have these causal powers.

Searle’s View

“It is not because I am the instantiation of a
computer program that I am able to understand
English and have other forms of
intentionality…but as far as we know it is because
I am a certain sort of organism with a certain
biological (i.e., chemical and physical) structure,
and this structure, under certain conditions, is
causally capable of producing perception, action,
understanding, learning, and other intentional
phenomena.” (p. 11)


Could a Machine Think?


We are machines

Could a Man-Made Machine Think?


Only if you exactly duplicate the causes

Could a Digital Computer Think?


We are the instantiation of a number of programs


Could something think, understand, and so on
solely in virtue of being a computer with the right
sort of program?


Formal symbol manipulations by themselves
don’t have any intentionality; they don’t even
symbolize anything; they have syntax but no semantics.

Problem with Multiple Realizations

Distinction between program and
realization has the consequence that the
same program could have lots of crazy
realizations that have no intentionality.
Stones, toilet paper, wind, and water pipes
are the wrong kind of stuff to have
intentionality in the first place.