Alan Turing on Machine Learning


Prepared by K. Mullen, May 5, 2002


Abstract: We present a précis of Alan Turing's ideas on
machine learning, working from his original, hand-annotated
manuscripts. We give Turing's defense of machine learning
as a theoretical possibility, and outline his vision of how
machine learning would be implemented. We discuss his
criteria for when a machine can be said to exhibit
intelligent behavior, and end with his views on the
implications of the development of machine learning for the
future of mankind.



In 1951 Alan Turing delivered the paper 'Intelligent
machinery, a heretical theory' to the '51 Society' at
Manchester. Newly developed code-breaking machines had
just helped the Allies win the Second World War, proving to
the governments of the world (or at least to their security
offices) the potential intelligence of machinery. But the
phrase "machine learning" was still considered to be an
oxymoron in the world at large.

Turing endeavored to show that this view was wrong;
that machines could learn and exhibit intelligent
behaviour. He had worked on code-breaking machines during
the war, and had previously worked on the formalization of
computing machines, defining the limits of that which is
computable by a mechanical process. Through working with
his paper machines and the machines of the war, he had
developed an intuition regarding the power of machine
learning, and the development of the field to come.

Turing began his 'heretical theory' of intelligent
machinery, read to the '51 Society, as follows:

'You cannot make a machine to think for you'. This is
a commonplace that is usually accepted without
question. It will be the purpose of this paper to
question it. [IH, 1]

This beginning would be a common thread through all of
Turing's publications and lectures on machine learning. He
was faced with first convincing his audience that machine
learning was possible in principle, before being able to
discuss his ideas regarding how such capabilities could be
implemented.

It is not difficult to imagine why Turing faced such
skepticism. He himself reflected, "The very limited
character of the machinery that has been used up to recent
times (e.g. up to 1940) [has] encouraged the belief that
machinery was necessarily limited to extremely
straightforward, possibly even repetitive, jobs" [IM, 1].
But the objections to the possibility of machine learning
and intelligence did not all spring from difficulty in
assimilating the new idea of the digital computer. There
were more substantial objections at hand.

Turing believed such objections against sophisticated
behavior on the part of machines fell into nine general
categories, all centered on the question of whether or not
machines could be said to "think". In Turing's view, to
think meant the capacity to imitate the capabilities of
humans. Hence Turing tackled the question of whether
machines could learn and exhibit intelligent behaviour in
terms of the question "Can machines think?" The objections
as he saw them, along with his refutations of each, are
summarized below.


(1) The Theological Objection


"Thinking is a function
of man's immortal soul. God has given an immortal soul to
every man and woman, but not to any other animal or to
machines. Hence no animal or machine can think"
[CM, 7].
Turing responds, “I am unable to accept any part of this,
but will attempt to reply in theological terms” [CM, 8].
He continues that even within the orthodox view the
objection above is flawed in that it "…implies a serious
restriction of the omnipotence of the Almighty" [CM, 8].
Furthermore, he states that “In attempting to construct
such machines we should not be irreverently usurping His
power of creating souls, any more than we are in the
procreation of children: rather we are, in either case,
instruments of His will providing mansions for the souls
that He creates” [CM, 8].

(2) The 'Heads in the Sand' Objection
"The
consequences of machines thinking would be too dreadful.
Let us hope and believe that they cannot do so"
[CM, 8].
Turing responds, "I do not think that this argument is
sufficiently substantial to require refutation. Consolation
would be more appropriate: perhaps this should be sought in
the transmigration of souls" [CM, 8].


(3) The Mathematical Objection
There are a number of
results of mathematical logic (such as those of Gödel,
Church, Kleene, Rosser, and Turing) which can be used to
show that there are limitations to the powers of
discrete-state machines. These limitations imply machines
cannot think.
Turing responds, "The short answer to this argument
is that although it is established that there are
limitations to the powers of any particular machine, it has
only been stated, without any sort of proof, that no such
limitations apply to the human intellect… We too often give
wrong answers to questions ourselves to be justified in
being very pleased at such evidence of fallibility on the
part of the machines" [CM, 9].

(4) The Argument from Consciousness
In expressing this
objection Turing quotes Professor Jefferson's Lister
Oration for 1949, in which Jefferson remarks, "'Not until a
machine can write a sonnet or compose a concerto because of
thoughts and emotions felt, and not by the chance fall of
symbols, could we agree that machine equals brain - that
is, not only write it but know that it had written it.'"
Turing replies, "According to the most extreme form of this
view the only way by which one could be sure that a machine
thinks is to be the machine and to feel oneself
thinking…Likewise according to this view the only way to
know that a man thinks is to be that particular man. It is
in fact the solipsist point of view. It may be the most
logical view to hold but it makes communication of ideas
difficult." He continues, "I do not wish to give the
impression that I think there is no mystery about
consciousness…But I do not think these mysteries
necessarily need to be solved before we can answer the
question with which we are concerned in this paper" [CM,
9-10].

(5) Arguments from Various Disabilities
These arguments take the form, "I grant you that you can
make machines do all the things you have mentioned but you
will never be able to make one to do X" [CM, 10].

Turing replies, "No support is usually offered for these
statements. I believe they are mostly founded on the
principle of scientific induction. A man has seen thousands
of machines in his lifetime. From what he sees of them he
draws a number of general conclusions…Naturally he
concludes that these are necessary properties of machines
in general" [CM, 10].

(6) Lady Lovelace's Objection
In stating this objection,
Turing quotes a memoir of Lady Lovelace, in which she
writes, "'The Analytical Engine has no pretensions to
originate anything. It can do whatever we know how to
order it to perform'" [CM, 12]. Turing responds, "Who can
be certain that 'original work' that he has done was not
simply the growth of the seed planted in him by teaching,
or the effect of following well-known general principles"
[CM, 12]. He notes that a variant of this objection is "a
machine can never 'take us by surprise'". He writes, "This
statement is a more direct challenge and can be met
directly. Machines take me by surprise with great
frequency. This is largely because I do not do sufficient
calculation to decide what to expect them to do…The view
that machines cannot give rise to surprises is due, I
believe, to a fallacy to which philosophers and
mathematicians are particularly subject. This is the
assumption that as soon as a fact is presented to a mind
all consequences of that fact spring into the mind
simultaneously with it." [CM, 12]

(7) Argument from Continuity in the Nervous System

The nervous system is certainly not a discrete-state
machine, so it cannot be modeled with a discrete-state
system.
Turing responds that experience with simple analog machines
has made clear that one can get the same type of answers
produced by an analog machine using a digital computer.



(8) The Argument from Informality of Behaviour
"It is
not possible to produce a set of rules purporting to
describe what a man should do in every conceivable set of
circumstances." Since computational processes operate in
accord with such rule-tables, this implies their behaviour
will never be intelligent.
Turing responds, "The only way
we know of for finding such laws is scientific observation,
and we certainly know of no circumstances under which we
could say, 'We have searched enough. There are no such
laws'" [CM, 13]. So the premise of this argument is not
supportable.

(9) The Argument from Extra-Sensory Perception

Humans have E.S.P. while machines do not, so machine
thought cannot be as powerful as human thought.

Turing responds that if E.S.P. existed, this would be a
strong argument. However, the fact remains that humans do
many types of thinking without E.S.P., so presumably this
argument does not forbid machines from thinking without
E.S.P.

Turing’s formulation of objections against the
possibility of machine learning, coupled with his ideas on
why these objections were misplaced, provided reason for
academics to reconsider their assumptions regarding the
possibility of machine learning. For whether they believed
intelligent, learning machines possible or not, computer
scientists like Turing were dreaming of ways to make such
machines a reality.

Turing's ideas on creating intelligent machines give
machine learning a central role. Furthermore, he
anticipates major themes of modern machine learning. He
writes,

If we are trying to produce an intelligent machine,
and are following the human model as closely as we can
we should begin with a machine with very little
capacity to carry out elaborate operations or to react
in a disciplined manner to orders (taking the form of
interference) [interference is stimuli for training in
Turing's terminology]. Then by applying appropriate
interference, mimicking education, we should hope to
modify the machine until it could be relied on to
produce definite reactions to certain commands. [IM,
20]

The parallel here with machine learning via trained neural
networks is straightforward.
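The parallel can be made concrete with a small sketch. The
perceptron rule below is a later technique, not Turing's own
construction, and the task (learning logical AND), learning
rate, and epoch count are illustrative assumptions. Near-zero
initial weights play the role of the machine with "very
little capacity", and the error-driven updates play the role
of the teacher's corrective interference.

```python
# A minimal sketch (not Turing's construction): a perceptron
# whose weights start at zero is corrected after each wrong
# reaction until it reliably maps inputs ("commands") to the
# desired responses -- here, logical AND.

def train_perceptron(samples, epochs=20, lr=0.5):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # corrective "interference"
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def react(w, b, x1, x2):
    """The machine's learned reaction to input (x1, x2)."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([react(w, b, x1, x2) for (x1, x2), _ in samples])
# -> [0, 0, 0, 1]
```

After training, the machine "could be relied on to produce
definite reactions to certain commands", which is all the
example is meant to illustrate.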


Turing also predicted that reinforcement learning
would take a place among successful methods for
implementing machine learning. He writes,

The training of the human child depends largely on a
system of rewards and punishments, and this suggests
that it ought to be possible to carry through the
organizing [of an intelligent machine] with only two
interfering inputs, one for 'pleasure' or 'reward' (R)
and the other for 'pain' or 'punishment' (P)…With
appropriate stimuli on these lines, judiciously
operated by the 'teacher' one may hope that the
'character' will converge towards the one desired,
i.e. that wrong behavior will tend to become rare.
[IM, 25-26]
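Turing's two-signal scheme can be sketched directly. The
commands, actions, and desired mapping below are invented
for illustration; the point is only that a table of action
preferences (the "character"), adjusted by nothing but
reward (R) and punishment (P) signals from a teacher,
converges on the desired behaviour.

```python
import random

# A minimal sketch (not Turing's P-type machine): a teacher
# shapes the agent's 'character' (a table of action
# preferences) using only two signals, reward (+1) and
# punishment (-1). Commands and actions are assumptions.
random.seed(0)

desired = {"go": "move", "halt": "stop"}  # teacher's goal
actions = ["move", "stop"]
prefs = {(c, a): 0.0 for c in desired for a in actions}

for _ in range(200):
    command = random.choice(list(desired))
    if random.random() < 0.1:  # occasional exploration
        action = random.choice(actions)
    else:                      # act on current 'character'
        action = max(actions, key=lambda a: prefs[(command, a)])
    signal = 1.0 if action == desired[command] else -1.0  # R or P
    prefs[(command, action)] += signal  # judicious interference

policy = {c: max(actions, key=lambda a: prefs[(c, a)])
          for c in desired}
print(policy)  # -> {'go': 'move', 'halt': 'stop'}
```

Wrong behaviour "tends to become rare" here exactly as
Turing hoped: punished actions lose preference and are
chosen only during exploration.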

He also remarks, foreseeing the field of genetic
algorithms,

Further research into intelligence of machinery will
probably be very greatly concerned with 'searches'…It
may be of interest to mention…other kinds of search in
this connection. There is the genetic or evolutionary
search by which a combination of genes is looked for,
the criterion being survival value. [IM, 32]
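The genetic or evolutionary search Turing gestures at can be
sketched in a few lines. The fitness criterion (counting
1-bits), population size, and mutation scheme below are
illustrative assumptions, not anything in Turing's
manuscripts.

```python
import random

# A toy evolutionary search: combinations of 'genes' (bits)
# are searched for, the criterion being 'survival value'
# (here, simply the number of 1s in the genome).
random.seed(0)

GENES, POP, GENERATIONS = 16, 20, 60

def survival_value(genome):
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives to reproduce.
    population.sort(key=survival_value, reverse=True)
    survivors = population[:POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, GENES)  # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(GENES)] ^= 1  # one-bit mutation
        children.append(child)
    population = survivors + children

best = max(population, key=survival_value)
print(survival_value(best))  # approaches the maximum of 16
```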


Not all of Turing's ideas on machine learning would be
realized. We will not investigate here his archaic methods
of implementing machine learning using Turing machines (his
P-type unorganized machines; see [IM, 27-33]). We will
mention only briefly his comment, "I suggest that the
education of the machine be entrusted to some highly
competent schoolmaster…" [IH, 5], and corresponding belief
that it would soon be as easy for a schoolmaster to school
a machine as it is for him to school a child. This is
certainly not yet the case.


Turing's ideas on creating intelligent machines,
machines that could play games and learn, raise the
questions, 'What separates intelligent, thinking machines
from other machines?' and 'Do all machines think, but some
more than others?' Indeed, Turing designed the first
question as a sort of refutation of the idea that machines
cannot think.

Turing proposed that the question "Can machines
think?" be replaced by the question "Can machines play the
imitation game?" The imitation game "is played with three
people, a man (A), a woman (B), and an interrogator (C) who
may be of either sex. The interrogator stays in a room
apart from the other two. The object of the game for the
interrogator is to determine which of the other two is the
man and which is the woman. He knows them by labels X and
Y, and at the end of the game he says either 'X is A and Y
is B' or 'X is B and Y is A'. The interrogator is allowed
to put questions to A and B" [CM, 1] in writing, to which A
and B may respond in typed writing. The game played with
the machine has the man replaced by the machine, and the
new purpose is to distinguish the woman from the machine.
Should the machine be able to convince the interrogator at
least half of the time that it is in fact the woman, the
machine wins the game. Turing held that in the case that a
machine wins, we have an affirmative answer to the question
"Can machines think?"
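The winning criterion Turing attaches to the game can be
stated as a one-line rule. The function below is an assumed
formalization of the threshold described above, not anything
Turing himself wrote down.

```python
# Assumed scoring rule for the imitation game: over repeated
# rounds, the machine wins if the interrogator judges it to
# be the woman at least half of the time.

def machine_wins(verdicts):
    """verdicts: booleans, True when the interrogator was
    fooled (judged the machine to be the woman)."""
    return sum(verdicts) >= len(verdicts) / 2

print(machine_wins([True, False, True, True]))    # True
print(machine_wins([False, False, True, False]))  # False
```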

This methodology for determining whether machines
think is undesirable in that it does not provide a way to
answer the second question, ‘Do all machines think, but
some more than others?’ Turing recognized this. He also
realized that the concept of intelligence was just as
difficult to pin down.


The last section of
his paper Intelligent
Machinery is subtitled, “Intelligence as an Emotional
Concept”. In it he says,

The extent to which we regard something as behaving in
an intelligent manner is determined as much by our own
state of mind and training as by the properties of the
object under consideration. If we are able to explain
or predict its behaviour or if there seems to be
little underlying plan, we have little temptation to
imagine intelligence. With the same object therefore
it is possible that one man would consider it as
intelligent and another would not; the second man
would have found out the rules of its behaviour. [IM,
37]

And so the question "When is a machine intelligent?" was
left by Turing unanswered, and established as unanswerable
in an objective fashion.


We move now to Turing’s ideas regarding the future of
machine learning. At the time he wrote his papers on the
subject, he was working with paper machines, electronic
computers being unavailable to all but a few computer
scientists. He writes,


I would like to investigate other types of unorganized
machine, and also to try out organizing methods that
would be more nearly analogous to our 'methods of
education'. I made a start on the latter but found
the work altogether too laborious at present. When
some electronic machines are in actual operation I
hope that they will make this more feasible. It
should be easy to make a model of any particular
machine that one wishes to work on within such a
U.D.C.M. (Universal Digital Computing Machine) instead
of having to work with a paper machine as at present.
[CM, 32]

Even though he was constrained to work with paper machines,
Turing believed machine learning was to establish itself
firmly and quickly, and he thought it "probable for
instance that at the end of the century it will be possible
to program a machine to answer questions in such a way that
it will be extremely difficult to guess whether the answers
are being given by a man or by the machine" [CD, 4-5].


Beyond the end of the century, his predictions are of
a different character. He writes, "…it seems probable that
once the machine thinking method had started, it would not
take long to outstrip our feeble powers. There would be no
question of the machines dying, and they would be able to
converse with each other to sharpen their wits. At some
stage therefore we should have to expect the machines to
take control, in the way that is mentioned in Samuel
Butler's 'Erewhon'" [IH, 10]. We have yet to see this
particular prediction realized. Even so, the line between
biological and mechanical has continued to blur. It seems
very reasonable to predict that Turing's prediction will
eventually come true, in one form or another. We can, of
course, hope to become the machines that take control.



References:

All source material taken from the Turing Digital Archive,
http://www.turingarchive.org/, held at King's College,
Cambridge.

[IH] 'Intelligent machinery, a heretical theory', a lecture
given to the '51 Society' at Manchester. 2 versions, one TS
numbered 1-10, the other CTS numbered 96-101. c. 1951.
Paper, 16 sh. in envelope. See also AMT/B/20 for additional
TS version.

[CD] 'Can digital computers think?'. TS with AMS
annotations of a talk broadcast on BBC Third Programme, 15
May 1951. Paper, 8 sh. in envelope. See also letter from
C. Strachey in AMT/D/5.

'Digital computers applied to games'. n.d. AMT's
contribution to 'Faster than thought', ed. B.V. Bowden,
London 1953. TS with MS corrections. R.S. 1953b. Paper, 10
sh. in envelope.

[CM] 'Computing machinery and intelligence'. TS copy of
article published in Mind (Vol. LIX, Oct. 1950). In Mrs
Turing's ring-backed binder with MS title by her. The paper
is paginated pp. 2-40 and also pp. 146-84. There is a TS
footnote added between pp. 159 and 160 with a MS note in
[?AMT's] hand. Paper, 40 sh. in envelope. See AMT/B/19 for
off-print.

[IM] TS, 'Intelligent machinery', with AMS corrections and
additions. Pages numbered 1-37, with 2 un-numbered pages of
references and notes. Page 1 has MS note by R.O. Gandy,
'Turing's typed draft'. n.d. Paper, 40 sh. in envelope.