The Chinese Room



Philosophical vs. Empirical Questions


The kind of questions Turing considers concern what a machine
could, in principle, do.


Could it simulate human conversation so well that a normal
human would mistake it for a person?


Could it simulate altruistic behavior?


Could it commit suicide, i.e. self-destruct?


Philosophy cannot answer these empirical questions!


These are questions for science, in particular computer science


Our question is: If a machine could do such things, would it count as thinking? If a machine could perfectly simulate human behavior, should we understand it as a thinking being?

Searle Against Strong AI



Could a machine think?


On the argument advanced here only a
machine could think, and only very special kinds of machines,
namely brains and machines with internal causal powers equivalent
to those of brains. And that is why strong AI has little to tell us about
thinking, since it is not about machines but about programs, and no
program by itself is sufficient for thinking.


Simulation and Duplication


Strong AI:
The computer is not merely a tool in the study of the
mind: rather, the appropriately programmed computer really is a
mind


Computers may simulate human psychology, in the way that they may simulate weather or model economic systems, but they don't themselves have cognitive states.


The Turing Test is not a test for intelligence

Even if a program could pass the most stringent Turing Test, that wouldn't be sufficient for its having mental states.

Searle isn't arguing against Physicalism!!!



Could a machine think?


My own view is that only a machine could
think, and indeed only very special kinds of machines, namely
brains and machines that had the same causal powers as brains.


The problem isn't that the machines in question are physical (rather than spiritual) but precisely that they are not physical, since they are abstract:


Strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In strong AI (and in functionalism as well) what matters are programs, and programs are independent of their realization in machines…This form of dualism…insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain.


Searle is arguing against functionalism (and, a fortiori, behaviorism)


The Chinese Room thought-experiment is intended to show that passing the Turing Test isn't sufficient for understanding.


Searle isn't concerned with "feely" mental states

Some mental states are "feely": they have intrinsic qualitative character, a phenomenology, a "what it is like" to be in them.


If you don't have them, you don't know what they're like

Locke's "studious blind man" thought red was like the sound of a trumpet


Other mental states are not "feely" in this sense


Believing that 2 + 2 = 4 isn't "like" anything


Referring to Obama when one utters "Obama is President of the United States" rather than just making language-like noises


Speaking, rather than just making noises as in "Mind the gap" or "You have entered…one…two…three…four…five."



Searle argues that machines can't have even these non-"feely" mental states

Searle passes the Turing Test…

…but Searle doesn't understand Chinese!

Symbol-manipulation isn't understanding


From the external point of view…the answers to the Chinese
questions and the English questions are equally good. But in the
Chinese case, unlike the English case, I produce the answers by
manipulating uninterpreted formal symbols…For the purposes of the
Chinese, I am simply an instantiation of the computer program.

/ P ⊃ [Q ⊃ (P · Q)]

1. P                          ACP
2. Q                          ACP
3. P · Q                      1, 2, conj.
4. Q ⊃ (P · Q)                2-3, CP
5. P ⊃ [Q ⊃ (P · Q)]          1-4, CP

This is an example of manipulating uninterpreted formal symbols. Do you understand anything?
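To make the point concrete, here is a minimal sketch (ours, not Searle's; the helper names are invented) of a program that produces the derivation above by pure string-building. It applies the rules correctly while attaching no meaning whatever to "P", "Q", or the connectives:

```python
# A toy symbol-manipulator (illustrative only): it builds the derivation
# above by concatenating strings, consulting nothing but their shapes.

def conjoin(p, q):
    """Conjunction (conj.): from lines p and q, write down '(p . q)'."""
    return f"({p} . {q})"

def conditional_proof(assumption, conclusion):
    """CP: discharge an assumption into a conditional '(assumption > conclusion)'."""
    return f"({assumption} > {conclusion})"

line1 = "P"                                # 1. P                    ACP
line2 = "Q"                                # 2. Q                    ACP
line3 = conjoin(line1, line2)              # 3. (P . Q)              1, 2, conj.
line4 = conditional_proof(line2, line3)    # 4. (Q > (P . Q))        2-3, CP
line5 = conditional_proof(line1, line4)    # 5. (P > (Q > (P . Q)))  1-4, CP

print(line5)  # the "theorem" appears, though nothing here means anything
```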

Intentionality


Searle's complaint is that mere rule-governed symbol-manipulation cannot generate intentionality.


Intentionality

is the power of minds to be about, to represent, or to
stand for, things, properties and states of affairs.


Reference is intentional in this sense: I think (and talk) about things

Perceptions, beliefs, desires, intentions, and many other "propositional attitudes" are mental states with intentionality: they have "content"


Intentionality is directedness, understood by Brentano as the criterion for the mental


Only mental states have intrinsic intentionality: other things have it only in a derivative sense, to the extent that they are "directed" by intelligent beings.





Intrinsic and Derived Intentionality



"Guns don't kill; people do." The directedness of a gun to a target is derived: people aim and direct them to their targets.


Words don't refer; people do.


Computers, Searle argues, don't have intrinsic intentionality: our ascriptions of psychological states and intentional actions to them are merely metaphorical


[F]ormal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything…Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The Mystery of Intentionality


Problem: granting that a variety of inanimate objects we use don't have intrinsic intentionality, what does and why?


Can a car believe that the fuel/air mixture is too rich and adjust accordingly?


My (1969) Triumph Spitfire had a manual choke


My first Toyota Corolla had a carburetor with an automatic choke
that opened up and closed on relatively simple mechanical
principles


My new Toyota Corolla has software I don't understand that does this stuff


Question:
Is there some level of complexity in the

program at which we get intrinsic intentionality?


Searle's Answer: Looking in the program is looking in the wrong place.
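For concreteness, here is a minimal sketch (not the actual Toyota firmware, which is of course unknown to us; the numbers are illustrative) of what such mixture-adjusting software might amount to. On Searle's view, nothing in a bare feedback conditional like this, however elaborated, looks like a belief that the mixture is too rich:

```python
# A toy fuel-mixture controller (illustrative; not real engine firmware).
TARGET_RATIO = 14.7  # stoichiometric air/fuel ratio for gasoline

def adjust_mixture(measured_ratio, injector_pulse_ms):
    """Shorten the injector pulse if the mixture is too rich (ratio below
    target), lengthen it if too lean. Just a conditional on a number."""
    if measured_ratio < TARGET_RATIO:    # too rich
        return injector_pulse_ms * 0.98
    if measured_ratio > TARGET_RATIO:    # too lean
        return injector_pulse_ms * 1.02
    return injector_pulse_ms
```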



Intentionality: The Right Stuff


Precisely that feature of AI that seemed so appealing -- the distinction between the program and the realization -- proves fatal to the claim that simulation could be duplication…the equation, "mind is to brain as program is to hardware" breaks down…[T]he program is purely formal…[M]ental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.


Searle will argue that no matter how complex the software, no
matter what inputs and outputs it negotiates, it cannot be ascribed
mental states in any literal sense and


Neither can the hardware that runs the program since it lacks the
causal powers of human (and other) brains that produce intentional
states.


We may ascribe mental states to them metaphorically, in the way we say the car (which needs an alignment) "wants" to veer left or the jam you just cooked "is trying" to gel.

Argument from Vacuous Opposition


If strong AI is to be a branch of psychology, then it must be able to
distinguish those systems that are genuinely mental from those that
are not…The study of the mind starts with such facts as that
humans have beliefs, while thermostats, telephones, and adding
machines don't. If you get a theory that denies this point you have
produced a counterexample to the theory and the theory is
false…What we wanted to know is what distinguishes the mind from
thermostats and livers.

1. Xs, in our ordinary way of thinking, have P.

2. Zs, in our ordinary way of thinking, don't have P.

3. If, in order to argue that Ys have P, we have to redefine "having P" in such a way that Zs count as having P, then ascribing P to Ys is uninteresting.


Compare the Gaia Hypothesis to "Everybody's beautiful in their own way," or to Lake Wobegon, where all the children are above average.

Objections to Searle's Chinese Room Argument

Searle considers 3 kinds of objections to his Chinese Room Argument:

1. Even if in the original thought experiment Searle wouldn't count as understanding Chinese, a more complicated system that was a machine in the requisite sense would understand Chinese


Systems Reply: add the room, rule book, scratchpads, etc.


Robot Reply: add more elaborate input and output devices


The Brain Simulator Reply: complicate the system so that it
mimics the pattern of brain activity characteristic of
understanding Chinese


The Combination Reply: all of the above

2.
The Other Minds Reply

3.
The Many Mansions Reply: We could duplicate the causal
processes of the brain as well as the formal features of brain activity
patterns


Objections to the Chinese Room Argument

1.
The Systems Reply:
While it is true that the individual person who
is locked in the room does not understand the story, the fact is that
he is merely part of a whole system, and the system does
understand the story. The person has a large ledger in front of him
in which are written the rules, he has a lot of scratch paper and
pencils for doing calculations, he has 'data banks' of sets of
Chinese symbols. Now, understanding is not being ascribed to the
mere individual; rather it is being ascribed to this whole system of
which he is a part.

2.
The Robot Reply:
Suppose we wrote a different kind of program
from Schank's program. Suppose we put a computer inside a robot,
and this computer would not just take in formal symbols as input
and give out formal symbols as output, but rather would actually
operate the robot in such a way that the robot does something very
much like perceiving, walking, moving about, hammering nails,
eating, drinking -- anything you like. The robot would, for example,
have a television camera attached to it that enabled it to 'see,' it
would have arms and legs that enabled it to 'act,' and all of this
would be controlled by its computer 'brain.' Such a robot would,
unlike Schank's computer, have genuine understanding and other
mental states.

Objections to the Chinese Room Argument

3.
The Brain Simulator Reply:
Suppose we design a program that
doesn't represent information that we have about the world, such as
the information in Schank's scripts, but simulates the actual sequence
of neuron firings at the synapses of the brain of a native Chinese
speaker when he understands stories in Chinese and gives answers to
them. The machine takes in Chinese stories and questions about them
as input, it simulates the formal structure of actual Chinese brains in
processing these stories, and it gives out Chinese answers as
outputs…Now surely in such a case we would have to say that the
machine understood the stories; and if we refuse to say that, wouldn't
we also have to deny that native Chinese speakers understood the
stories? At the level of the synapses, what would or could be different
about the program of the computer and the program of the Chinese
brain?

4.
The Combination Reply:
While each of the previous three replies
might not be completely convincing by itself as a refutation of the
Chinese room counterexample, if you take all three together they are
collectively much more convincing and even decisive. Imagine a robot
with a brain-shaped computer lodged in its cranial cavity, imagine the
computer programmed with all the synapses of a human brain,
imagine the whole behavior of the robot is indistinguishable from
human behavior, and now think of the whole thing as a unified system
and not just as a computer with inputs and outputs. Surely in such a
case we would have to ascribe intentionality to the system.

Objections to the Chinese Room Argument

5.
The Other Minds Reply:
How do you know that other people
understand Chinese or anything else? Only by their behavior. Now
the computer can pass the behavioral tests as well as they can (in
principle), so if you are going to attribute cognition to other people
you must in principle also attribute it to computers. [Remember,
Turing argued along these lines too]

6.
The Many Mansions Reply:
Your whole argument presupposes
that AI is only about analogue and digital computers. But that just
happens to be the present state of technology. Whatever these
causal processes are that you say are essential for intentionality
(assuming you are right), eventually we will be able to build devices
that have these causal processes, and that will be artificial
intelligence. So your arguments are in no way directed at the ability
of artificial intelligence to produce and explain cognition.

The Systems Reply


While it is true that the individual person who is locked in the
room does not understand the story, the fact is that he is
merely part of a whole system, and the system does
understand the story.


Searle says:
Let the individual internalize…memorize the rules in
the ledger and the data banks of Chinese symbols, and…[do] all the
calculations in his head. He understands nothing of the Chinese,
and a fortiori neither does the system.


You've memorized the rules for constructing WFFs, the 18 Rules of Inference, and the rules for Conditional and Indirect Proof in Hurley's Concise Introduction to Logic.


Now you can do all those formal derivations in the Propositional Calculus without looking…and get an "A" on your logic exam!


Do you understand what those symbols mean?



The Systems Reply


Could there be a "subsystem" of the man in the room that understands Chinese?


Searle:
The only motivation for saying there must be a subsystem in
me that understands Chinese is that I have a program and I can
pass the Turing test; I can fool native Chinese speakers. But
precisely one of the points at issue is the adequacy of the Turing
test.


Whichever way you cut it, you can't crank semantics (meaning) out of syntax and symbol-manipulation.


Whether it's the man in the room, the room (with rulebook, scratchpad, etc.) or some subsystem of the man in the room, if all that's going on is symbol-pushing, there's no understanding.
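A minimal sketch of that symbol-pushing (the rule-book entries are invented placeholders) shows how little the program consults: string equality, and nothing else to which meaning could attach:

```python
# A toy Chinese Room: a rule book mapping input strings to output strings.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # the operator need not know what
    "你会说中文吗？": "当然会。",    # any of these symbols mean
}

def operate_room(squiggles):
    """Match the incoming symbols against the rule book and copy out the
    listed response; syntax (string equality) is all that is used."""
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # default: "say it again"
```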


The Robot Reply


Suppose we put a computer inside a robot, and this computer
would not just take in formal symbols as input and give out
formal symbols as output, but would rather actually operate the
robot in such a way that the robot does something very much
like perceiving, walking, moving about, etc.


Searle: the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general…Suppose, unknown to me, some of the Chinese symbols…come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms…I don't understand anything…All I do is follow formal instructions about manipulating formal symbols.

The Brain Simulator Reply


Suppose we design a program that doesn't represent information that we have about the world…but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker…At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?


Searle:
The problem with the brain simulator is that it is simulating
the wrong things about the brain. As long as it simulates only the
formal structure of the sequence of neuron firings at the synapses, it
won't have simulated what matters about the brain, namely its
causal properties, its ability to produce intentional states. And…the
formal properties are not sufficient for the causal properties.
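For a sense of what "simulating the formal structure of the sequence of neuron firings" amounts to, here is a minimal sketch (ours, with invented parameters) of a toy integrate-and-fire update. Everything in it is a number being leaked, summed, and compared; on Searle's view it reproduces the form of a firing pattern, not the brain's causal powers:

```python
# A toy integrate-and-fire neuron (illustrative parameters, no biochemistry).
def neuron_step(potential, input_current, threshold=1.0, leak=0.9):
    """Leak the membrane potential, add the input, fire on crossing threshold.
    Returns (new_potential, fired)."""
    potential = potential * leak + input_current
    if potential >= threshold:
        return 0.0, True    # a "spike", then reset
    return potential, False
```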

Block's Chinese Nation Thought Experiment


Suppose that the whole nation of China
was reordered to simulate the workings
of a single brain (that is, to act as a
mind according to functionalism). Each
Chinese person acts as (say) a neuron,
and communicates by special two-way radio in the corresponding way to the
other people. The current mental state
of China Brain is displayed on satellites
that may be seen from anywhere in
China. China Brain would then be
connected via radio to a body, one that
provides the sensory inputs and
behavioral outputs of China Brain.

Thus China Brain possesses all the elements of a functional description
of mind: sensory inputs, behavioral outputs, and internal mental states
causally connected to other mental states. If the nation of China can be
made to act in this way, then, according to functionalism, this system
would have a mind.
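A minimal sketch (ours; the states and stimuli are toy placeholders) of the functionalist picture Block is attacking: a "mind" specified only as a transition table from (state, input) to (next state, output), indifferent to whether neurons, a billion radio-linked citizens, or a Python dictionary realize it:

```python
# A toy functional "mind": (state, stimulus) -> (next state, behavior).
TRANSITIONS = {
    ("calm", "insult"):   ("angry", "frown"),
    ("angry", "apology"): ("calm",  "smile"),
}

def step(state, stimulus):
    """One functional step; the same table could be run by any substrate."""
    return TRANSITIONS.get((state, stimulus), (state, "no response"))

print(step("calm", "insult"))  # ('angry', 'frown')
```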


The Hive Mind


Individual bees aren't too bright, but the swarm behaves intelligently


Is there a hive mind?

The Combination Reply


While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive.


Searle:
Suppose we knew that the robot's behavior was entirely
accounted for by the fact that a man inside it was receiving
uninterpreted formal symbols from the robot's sensory receptors and
sending out uninterpreted formal symbols to its motor mechanisms,
and the man was doing this symbol manipulation in accordance with
a bunch of rules. Furthermore, suppose the man knows none of
these facts about the robot, all he knows is which operations to
perform on which meaningless symbols. In such a case we would
regard the robot as an ingenious mechanical dummy. The
hypothesis that the dummy has a mind would now be unwarranted
and unnecessary, for there is now no longer any reason to ascribe
intentionality to the robot or to the system of which it is a part.

The Combination Reply


Compare to our reasons for ascribing intelligence to animals:


Given the coherence of the animal's behavior and the assumption of
the same causal stuff underlying it, we assume both that the animal
must have mental states underlying its behavior, and that the mental
states must be produced by mechanisms made out of the stuff that
is like our stuff. We would certainly make similar assumptions about
the robot unless we had some reason not to, but as soon as we
knew that the behavior was the result of a formal program, and that
the actual causal properties of the physical substance were
irrelevant, we would abandon the assumption of intentionality.


But some questions here…


Why should the right stuff matter?


What sort of stuff is the right stuff?


And why?

The Right Stuff: A Conjecture


Compare to the water/H2O case


Until recently in human history we didn't know what the chemical composition of water was: we didn't know that it was H2O.


But we assumed that what made this stuff water was something about the stuff of which it was composed: its hidden internal structure.


Once we discover what that internal structure is, we refuse to recognize other stuff that has the same superficial characteristics as water.


Similarly, we don't know what thinking/understanding/intentionality is intrinsically, in terms of its internal workings, but regard that internal structure (whatever it is) as what it is to think/understand.


So, when we discover that something that superficially behaves like a thinking being doesn't have the appropriate internal organization, we deny that it thinks/understands/exhibits intentionality.

The Other Minds Reply


How do you know that other people understand Chinese or
anything else? Only by their behavior. Now the computer can
pass the behavioral tests as well as they can (in principle), so if
you are going to attribute cognition to other people you must in
principle also attribute it to computers.


Searle:

[T]his discussion is not about how I know that other people
have cognitive states, but rather what it is that I am attributing to
them when I attribute cognitive states to them.


Compare to Turing's remarks about solipsism


Searle notes that the issue isn't an epistemic question of how we can know whether some other being is the subject of psychological states but what it is to have psychological states.


The Many Mansions Reply


Your whole argument presupposes that AI is only about
analogue and digital computers. But that just happens to be the
present state of technology. Whatever these causal processes
are that you say are essential for intentionality (assuming you
are right), eventually we will be able to build devices that have
these causal processes, and that will be artificial intelligence.


I really have no objection to this reply save to say that it in effect
trivializes the project of strong AI by redefining it as whatever
artificially produces and explains cognition…I see no reason in
principle why we couldn't give a machine the capacity to understand
English or Chinese… But I do see very strong arguments for saying
that we could not give such a thing to a machine
where the
operation of the machine is defined solely in terms of computational
processes over formally defined elements
…The main point of the
present argument is that
no purely formal model will ever be
sufficient by itself for intentionality because the formal properties are
not by themselves constitutive of intentionality
. [emphasis added]
Note: Searle isn't a dualist!

And now…some questions


What is the "thing that thinks"? How interesting is Searle's thesis?


I do see very strong arguments for saying that we could not give such a
thing [understanding a language] to a machine
where the operation of the machine is defined solely in terms of computational processes over formally defined elements


What

is defined in terms of such computational processes:


The program as an abstract machine (at bottom, a Turing
Machine)?


The hardware (or wetware) that runs the program?


Would we run into the same difficulties in describing how humans operate if we identified the "thing that thinks" with the "programs" they instantiate?


Arguably "minds" don't think and neither do brains: people do.


And computer hardware may have causal powers comparable to
human wetware.

Thought Experiments


Searle relies on a thought-experiment to elicit our intuitions, elaborated in response to objections: how much does this show?


We may be guilty of "species chauvinism": vide Turing on ignoring the appearance of the machine or its capacity to appreciate strawberries and cream


The sequence of elaborations on the original thought experiment may be misleading: suppose we started with the robot, or the brain-simulator?


With the development of more sophisticated computers our intuitions
about what it is to think might change. Compare, e.g.


"Stravinsky (or Wagner, or Berlioz) isn't music but just a lot of noise"


Whales are fish


Marriage is (necessarily) between a man and a woman.

Theory of Mind?


What theories of mind does Searle reject?


behaviorism: passing the Turing Test won't do.


functionalism, at least machine functionalism.


Cartesian dualism: Searle repeatedly remarks that the brain is a machine

To what theories of mind do Searle's arguments suggest he's sympathetic?


The Identity Theory?


Intentionality remains a mystery and it's not clear what Searle's positive thesis, if any, comes to.


Liberalism and Species Chauvinism



Human adults are the
paradigm case
of beings with psychological
states: how similar, and in what respects similar, does something
else have to be in order to count as the subject of psychological
states?


Does the right stuff matter? If so why?



Could a machine think?


The answer is obviously yes…assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours.


Why axons and dendrites? What about Martians, etc.?


Does the right organization matter? If so, at what level of abstraction?


Searle's argument clearly tells against behaviorism, the view that internal organization doesn't count.


And it's meant to tell against functionalism.

Is Intentionality "what matters"?


Or is it consciousness?