History of AI



Foundations from related fields



Philosophy (400 B.C.-)


Socrates -> Plato -> Aristotle


Socrates (Socratic definitions): “I want to know what is characteristic of piety which makes all actions pious... that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men” (algorithm)


Aristotle: tried to formulate laws of the rational part of the mind; believed in another part, intuitive reason


Dualism vs. Materialism


Dualism: the belief that mind, consciousness, cognition, and intelligence are separate and distinct from the material world and cannot be explained as purely physical processes.


Rene Descartes was a dualist who believed that the
mind made contact with the physical body through the
pineal gland at the back of the brain.


Materialism: the belief that mind, consciousness, cognition, and intelligence are physical processes that can be explained through normal scientific investigation of the material world.



Strong AI: goal is to produce machines that understand in the sense that humans do


Weak AI: goal is only to produce machines that can act intelligently


Ray Mooney (a current AI researcher):


Strong AI seems to imply materialism


If purely physical machines can be intelligent, then mind is a
physical phenomenon


Materialism seems to imply strong AI


If mind is a physical process and computers can emulate any physical process (strong Church-Turing thesis), then AI must be possible



Philosophy: Source of knowledge


Empiricism (Francis Bacon, 1561-1626)


John Locke (1632-1704): “Nothing is in the understanding which was not in the senses”


David Hume (1711-1776): principle of induction: general rules are acquired by exposure to repeated associations between their elements


Bertrand Russell (1872-1970): logical positivism: all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs


Mathematics


Logic


George Boole (1815-1864): formal language for making logical inference


Gottlob Frege (1848-1925): first-order logic (FOL)


Computability


David Hilbert (1862-1943): is there an algorithm for deciding the truth of any logical proposition involving the natural numbers?


Kurt Gödel (1906-1978): No: undecidability.


Alan Turing (1912-1954): which functions are computable?


Church-Turing thesis: any computable function is computable via a Turing machine


No machine can tell in general whether a given program will return an answer on a given input or run forever (the halting problem; see the sketch below)
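
A minimal sketch of the standard diagonal argument behind this (my own illustration, not from the slides; `halts` is a hypothetical oracle that cannot exist, so `paradox` is only defined, never called):

```python
# Suppose, for contradiction, that halts(program, data) could report whether
# program(data) eventually stops. Then this self-defeating program would exist:

def paradox(program):
    if halts(program, program):   # hypothetical, impossible halting checker
        while True:               # ...if it predicts we halt, loop forever
            pass
    return                        # ...if it predicts we loop, halt immediately

# paradox(paradox) would halt exactly when halts(paradox, paradox) says it does not,
# so no general halting checker can exist.
```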


Mathematics…


Intractability


Polynomial vs. exponential (Cobham 1964; Edmonds 1965)


Reduction of one class of problems to another (Dantzig,
1960; Edmonds, 1962)


NP-completeness (Steven Cook 1971, Richard Karp 1972)


“Electronic Super-Brain”: in the 1950s, many felt computers had unlimited potential for intelligence; intractability results were sobering.


Interesting article: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=00476631

Mathematics…


Probability


Gerolamo Cardano (1501-1576): probability in gambling


Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), James Bernoulli (1654-1705), Pierre Laplace (1749-1827): new methods


Thomas Bayes (1702-1761): rule for updating beliefs given new evidence (see the sketch below)


Decision theory = probability theory + utility theory


John Von Neumann & Oskar Morgenstern 1944


Game theory
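
A minimal sketch (my own illustration, not from the slides) of Bayes’ updating rule, P(H|E) = P(E|H) P(H) / P(E); the numbers below are made up:

```python
def bayes_update(prior, likelihood):
    """prior: {hypothesis: P(H)}; likelihood: {hypothesis: P(evidence | H)}."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())           # P(evidence), the normalizing constant
    return {h: p / total for h, p in unnormalized.items()}

# Example: updating belief in a rare condition after a positive test (made-up numbers).
prior = {"disease": 0.01, "healthy": 0.99}
likelihood = {"disease": 0.95, "healthy": 0.05}  # P(positive test | hypothesis)
print(bayes_update(prior, likelihood))
# -> roughly {'disease': 0.16, 'healthy': 0.84}: the test raises, but does not settle, the belief
```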

Psychology (1879-)


Scientific methods for studying human vision


Hermann von Helmholtz (1821-1894), Wilhelm Wundt (1832-1920)


Introspective experimental psychology


Wundt


Results were biased to follow hypotheses


Behaviorism (prevailed 1920-1960)


John Watson (1878-1958), Edward Lee Thorndike (1874-1949)


Against introspection


Stimulus-response studies


Rejected knowledge, beliefs, goals, reasoning steps

Psychology


Cognitive psychology


Brain possesses and processes information


Kenneth Craik 1943: knowledge-based agent (see the agent-loop sketch below):


Stimulus translated to representation


Representation is manipulated to derive new representations


These are translated back into actions


Widely accepted now


Anderson 1980: “A cognitive theory should be like a computer program”
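
A minimal sketch (my own illustration; all names and the toy rule are hypothetical) of Craik's three-step loop: translate a stimulus into a representation, manipulate it, and translate the result back into action:

```python
def perceive(stimulus):
    """Step 1: translate the stimulus into an internal representation."""
    return {"percept": stimulus}

def reason(representation, knowledge):
    """Step 2: manipulate the representation to derive new representations."""
    derived = dict(representation)
    for condition, conclusion in knowledge:      # knowledge = [(condition, conclusion), ...]
        if derived.get("percept") == condition:
            derived["action"] = conclusion
    return derived

def act(representation):
    """Step 3: translate the derived representation back into an action."""
    return representation.get("action", "do nothing")

knowledge = [("obstacle ahead", "turn left")]
print(act(reason(perceive("obstacle ahead"), knowledge)))   # -> turn left
```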

Computer Engineering


Roman Abacus


Schickard’s adding machine, 1623


Blaise Pascal: Pascaline adding machine, about 1650


Charles Babbage: Analytical and Difference Engines, 1840s


Computer engineering


Abacus (about 7,000 years old)


Pascaline: mechanical adder & subtractor (Pascal; mid-1600s)


Leibniz added multiplication, 1694


Analytical Engine: universal computation; never completed (ideas: addressable memory, stored programs, conditional jumps)


Charles Babbage (1792-1871), Ada Lovelace

Computer engineering…

[See Wired magazine late Fall 1999]


Heath Robinson: digital electronic computer for cracking German codes


Alan Turing 1940, England


Z-3: first programmable computer


Konrad Zuse 1941, Germany


ABC: first electronic computer


John Atanasoff 1940-42, US


ENIAC: first general-purpose, electronic, digital computer


John Mauchly & John Eckert (1946)


IBM 701, 1952: the first computer to yield a profit

Linguistics (1957-present)


Noam Chomsky (against B.F. Skinner’s behaviorist approach to language learning): behaviorist theory does not address creativity in language. Chomsky’s theory was formal enough that it could in principle be programmed. The Chomsky hierarchy of grammars for formal languages is very important in CS (regular grammars, context-free grammars, and so on); a toy context-free grammar is sketched below.


Much of the early work in KR was tied to language
and informed by research in linguistics
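
A minimal sketch (my own toy example, not from the slides) of a context-free grammar of the kind the Chomsky hierarchy classifies; the grammar and vocabulary are made up:

```python
import random

GRAMMAR = {                       # nonterminal -> list of possible right-hand sides
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["theorem"], ["student"]],
    "V":  [["proves"], ["sees"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol by picking one of its productions at random."""
    if symbol not in GRAMMAR:                    # terminal: emit the word itself
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))                      # e.g. "the robot proves the theorem"
```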

History of AI

Birth of AI (1943-56)


Warren McCulloch & Walter Pitts (1943): ANN with on-off neurons


A neuron is triggered (switched on) when sufficiently many of its neighbors are on


Showed that any computable function can be computed by some such network


Logical connectives (AND, OR, NOT) are implementable this way; see the sketch below


Donald Hebb’s 1949 learning rule


Arguably a forerunner of both the logicist and connectionist traditions in AI


Turing & Shannon chess programs, 1950s (not implemented)


SNARC, the first ANN computer, Minsky & Edmonds, 1951 (3000 vacuum tubes and a surplus automatic pilot mechanism from a B-24 bomber)
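
A minimal sketch (my own illustration, not from the slides) of how on-off threshold units of the McCulloch-Pitts kind can implement logical connectives; the weights and thresholds are chosen by hand:

```python
def unit(inputs, weights, threshold):
    """Binary threshold neuron: fires (1) iff the weighted input reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return unit([a, b], [1, 1], threshold=2)
def OR(a, b):  return unit([a, b], [1, 1], threshold=1)
def NOT(a):    return unit([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```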

Birth of AI...


Dartmouth 1956 workshop for 2 months


The term “artificial intelligence” was coined here


Fathers of the field introduced (McCarthy, Minsky, Shannon, Samuel, Selfridge, and Newell & Simon of “Carnegie Tech”)


Logic Theorist: program for proving theorems, by Allen Newell & Herbert Simon

Shakey: SRI, 1966-72

Early enthusiasm (1952-69)


Claims: computers can do X


General Problem Solver, Newell & Simon


Deliberately solved puzzles in a way similar to humans (order of subgoals, etc.)


Geometry Theorem Prover, Herbert Gelernter, 1959


Arthur Samuel’s learning checkers program 1952


LISP, time sharing, Advice taker: McCarthy 1958


Programs for symbolic integration, IQ-test geometry problems, and algebra story problems


Blocks world: vision, learning, NLP, planning


Adalines [Widrow & Hoff 1960], perceptron convergence theorem [Rosenblatt 1962]; see the perceptron sketch below
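
A minimal sketch (my own illustration, with made-up data) of Rosenblatt's perceptron learning rule, the procedure whose convergence theorem is cited above:

```python
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, label) with 0/1 labels; assumes linear separability."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = y - predict(weights, bias, x)          # 0 if correct, +1/-1 if wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the (linearly separable) OR function from its truth table.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])   # -> [0, 1, 1, 1]
```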

Early AI researchers were not shy about claims


“It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” (Herb Simon, 1957)


In 1958, he predicted that within 10 years a computer would be
chess champion, and an important new mathematical theorem
would be proved by machine. Claims such as these turned out to
be wildly optimistic.


Methods were demonstrated on 1 or 2 simple examples. Failed
on others.



A dose of reality (1966-74)


Simple syntactic manipulation did not scale


ELIZA (example rule: if the sentence contains “mother”, then say “tell me more about your family”); see the rule sketch below


However, sometimes such bots (e.g. Julia) can fool humans


“the spirit is willing but the flesh is weak” -> “the vodka is good but the meat is rotten” (simple syntactic transformations and word replacement)


In 1966, a report by an advisory committee found that “there has been no machine translation of
general scientific text, and none is in immediate prospect.” All U.S. government funding for
academic translation projects was cancelled.


Intractability (can’t just add more machines)


Machine evolution (genetic algorithms) did not scale


Failure to come to grips with the “combinatorial explosion” was one of the main criticisms of
AI contained in the Lighthill report (Lighthill, 1973), which formed the basis for the decision
by the British government to end support for AI research in all but two universities.


Perceptrons: book with negative result on representation capability of 1-layer ANNs [Minsky & Papert]


Funding dwindled


Ironically, back-prop for multi-layer networks caused a resurgence in the 1980s. It was actually discovered first in 1969 (Bryson and Ho)
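
A minimal sketch (my own illustration) of the kind of keyword rule ELIZA used, including the “mother” rule quoted above; the patterns and replies here are simplified stand-ins, not Weizenbaum's actual script:

```python
import re

RULES = [
    (re.compile(r"\bmother\b", re.I),              "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"\bI am (.+)", re.I),             "I AM SORRY TO HEAR THAT YOU ARE {0}"),
    (re.compile(r"\bmy (\w+) made me (.+)", re.I), "YOUR {0} MADE YOU {1}"),
]
DEFAULT = "PLEASE GO ON"

def respond(sentence):
    """Return the reply of the first matching rule, filling in any captured words."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups()).upper()
    return DEFAULT

print(respond("Perhaps I could learn to get along with my mother."))
# -> TELL ME MORE ABOUT YOUR FAMILY
print(respond("Well, my boyfriend made me come here."))
# -> YOUR BOYFRIEND MADE YOU COME HERE.
```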

ELIZA

[Weizenbaum 1966]


Men are all alike.


IN WHAT WAY


They're always bugging us about something or other.


CAN YOU THINK OF A SPECIFIC EXAMPLE


Well, my boyfriend made me come here.


YOUR BOYFRIEND MADE YOU COME HERE


He says I'm depressed much of the time.


I AM SORRY TO HEAR THAT YOU ARE DEPRESSED


It's true that I am unhappy.


DO YOU THINK COMING HERE WILL HELP YOU NOT BE UNHAPPY


I need some help, that much seems certain.


WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP


Perhaps I could learn to get along with my mother.


TELL ME MORE ABOUT YOUR FAMILY


My mother takes care of me.


WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU

ELIZA…



HOW DO YOU DO. PLEASE STATE YOUR PROBLEM


I'm wondering whether computers can think.


WHY DO YOU MENTION COMPUTERS


Since you are one I thought I would ask you -- can you think?


DID YOU THINK THEY MIGHT NOT BE ONE YOU THOUGHT YOU WOULD ASK I -- CAN I THINK


Let's talk just about you -- not me. Can you think?


YOU LIKE TO THINK I -- NOT YOU -- DON'T YOU


Pardon?


WHAT DOES THAT SUGGEST TO YOU?


What does it suggest to you?


TELL ME ABOUT WHAT DOES IT SUGGEST TO I


That's nonsense -- which of us do you mean by I?


WHY DO YOU THINK I MEAN BY YOU


The question meant to which of us is the word referring.


SUPPOSE THE QUESTION MEANT TO WHICH OF US WERE NOT THE WORD
REFERRING


I think you are an extremely stupid program.


PERHAPS YOU WOULD LIKE TO BE AN EXTREMELY STUPID PROGRAM

Knowledge-based systems (1969-79)


DENDRAL: molecule structure identification [Buchanan et al.]


Knowledge intensive


Mycin: medical diagnosis [Feigenbaum, Buchanan, Shortliffe]


450 rules; knowledge from experts; no domain theory


Better than junior doctors


Certainty factors (see the sketch below)


PROSPECTOR: drilling site choice [Duda et al.]


Found a large deposit


Domain knowledge in NLP (Winograd’s SHRDLU); Roger Schank and his students; Woods’s LUNAR system.


Knowledge representation: logic, frames...
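
A minimal sketch (my own illustration, with made-up numbers) of how MYCIN-style certainty factors can be combined when two rules independently support the same conclusion; only the both-positive case is shown, and MYCIN's full scheme also handles negative evidence:

```python
def combine_positive(cf1, cf2):
    """Combine two certainty factors in [0, 1] that support the same hypothesis."""
    return cf1 + cf2 * (1 - cf1)

# Two rules suggest the same diagnosis with CF 0.6 and CF 0.4:
print(combine_positive(0.6, 0.4))   # -> 0.76, more confident than either rule alone
```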

AI becomes an industry (1980-88)


R1: first successful commercial expert system; configured computer systems at DEC; saved $40M/year


1988: DEC had 40 expert systems, DuPont 100...


Nearly every major US corporation had its own AI group working on expert systems


1981: Japan’s 5th generation project


Software tools for expert systems: Carnegie Group,
Inference, Intellicorp, Teknowledge


LISP-specific hardware: LISP Machines Inc., TI, Symbolics, Xerox


Industry: a few M$ in 1980 -> $2B in 1988

Return of ANNs (1986-)


Mid-1980s: different research groups reinvented backpropagation (originally from 1969)


Disillusionment with expert systems


Fear of AI winter

Recent events (1987-)


Rigorous theorems and experimental work rather than intuition


Real-world applications rather than toy domains


Building on existing work


E.g. speech recognition


Ad hoc, fragile methods in 1970s


Hidden Markov models now


E.g. planning (unified framework helped progress)

Intelligent Agents (1995-present)


SOAR: complete agent architecture (Newell, Laird, Rosenbloom)


Intelligent agents on the Web: so common that the suffix “-bot” has entered everyday language


AI technologies underlie many Internet tools, such as search engines, recommender systems, and site aggregators.


Can’t work on subfields of AI in isolation. E.g.,
reasoning and planning systems must handle
uncertainty, since sensors are not perfect

Availability of very large data sets (2001-present)


Which data you have is sometimes more important than which algorithm you use


Billions of words, pictures, base pairs of genomic
sequences, …


Yarowsky (1995) showed that a simple bootstrapping approach over a very large corpus could be effective for word sense disambiguation (WSD); see the sketch below


Perhaps the knowledge bottleneck will be solved by learning methods over very large datasets rather than by hand-coding knowledge.
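
A minimal sketch (my own toy illustration, with a made-up six-sentence "corpus") of Yarowsky-style bootstrapping for word sense disambiguation: start from a couple of seed collocations, label whatever the current rules can decide, and promote new collocations learned from the labeled examples:

```python
from collections import Counter, defaultdict

# Contexts for the ambiguous word "plant"; the learner never sees gold labels.
corpus = [
    "the steel plant hired workers",
    "water the plant near the window",
    "the plant produced steel all night",
    "the plant needs sunlight and water",
    "the leafy plant grew in sunlight",
    "steel output at the plant rose",
]
rules = {"steel": "factory", "sunlight": "flora"}    # one seed collocation per sense

def label(sentence, rules):
    for word in sentence.split():
        if word in rules:
            return rules[word]
    return None

for _ in range(3):                                   # a few bootstrapping rounds
    counts = defaultdict(Counter)
    for sentence in corpus:                          # 1. label what current rules can decide
        sense = label(sentence, rules)
        if sense:
            for word in sentence.split():            # 2. count word/sense co-occurrences
                counts[word][sense] += 1
    for word, senses in counts.items():              # 3. promote unambiguous words to rules
        if len(senses) == 1 and word not in rules:
            rules[word] = next(iter(senses))

print(label("the plant hired new workers", rules))   # -> factory
print(label("the plant near the window", rules))     # -> flora (learned in a later round)
```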

Arguments against strong AI


Theological objections


“It’s simply not possible for a machine”


“machines cannot feel emotions” (Why?)


Dreyfus (1972, 1986, 1992): background
commonsense knowledge, the qualification
problem, uncertainty, learning, compiled forms
of decision making. Actually, his work was
helpful. AI has made progress in all of these
areas.


Arguments against strong AI


Theological objections


“It’s simply not possible for a machine”


Gödel’s incompleteness theorem: vast literature. Responses:


R&N: applies only to formal systems that are powerful enough to do arithmetic, such as Turing machines. But TMs are infinite and computers are finite, so any computer can be viewed as a (very large) system in propositional logic, which is not subject to Gödel’s incompleteness theorem.


R&N: Humans were behaving intelligently for thousands of years before they invented mathematics, so it is unlikely that formal mathematical reasoning plays more than a peripheral role in what it means to be intelligent.


R&N: Even if we grant that computers have limitations on what they can
prove, we have no evidence that humans are immune from those
limitations. “It’s impossible to prove that humans are not subject to
Gödel’s incompleteness theorem because any rigorous proof would
require a formalization of the claimed unformalizable human talent, and
hence refute itself. So, we are left with an appeal to intuition that humans
can somehow perform superhuman feats of mathematical insight.”


Arguments against strong AI


Machines just do what we tell them (Maybe
people just do what their neurons tell them
to do?)


Machines are digital; people are analog


The Chinese Room argument …

John Searle’s Chinese Room


R&N p. 1033


“Searle appeals to intuition, not proof, for this part: just look at the room; what’s there to be a mind? But one could make the same argument about the brain: just look at this collection of cells (or of atoms), blindly operating according to the laws of biochemistry (or of physics) -- what’s there to be a mind? Why can a hunk of brain be a mind while a hunk of liver cannot? That remains a great mystery.”