Artificial Intelligence [T1: Introduction]

Artificial Intelligence.
T1: Introduction
Readings:
• CHAPTERS 1 + 2 from Russell + Norvig
• CHAPTERS 1 + 2 from Nilsson
What is AI?
• AAAI: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
• Understand what intelligence is (more general than "human thinking").
• Build intelligent devices.

Origin:
• First AI efforts begin around World War II.
• The name was coined at a 1956 conference (Dartmouth) by John McCarthy [organizers: J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon].

Intelligent behavior:
• Perception, reasoning, learning, communication, interaction with a complex environment (including other agents).

AI encompasses:
• Knowledge representation, automated reasoning, machine learning, natural language processing, perception, computer vision, robotics.
Foundations of AI: Philosophy
• Formal rules to derive valid conclusions
  - Rationality and logos
  - Deductions based on syllogisms (Aristotle, 384-322 BC)
  - Combinatorial reasoning (Llull, c. 1315; Hobbes, 1588-1679)
  - Automated reasoning (Leibniz, 1646-1716)
  - Formal logic.
• Mind / body problem
  - Dualism (Descartes, 1596-1650): human intelligence has its origin in a mind that is not subject to physical laws.
  - Materialism: brains produce minds.
• Origins of knowledge and its relation to reality
  - Idealism.
  - Nominalism.
Foundations of AI: Philosophy II
• Empiricism (Locke, 1632-1704; Hume, 1711-1776)
  Induction: general rules can be learned by repeated association between their elements.
• Logical positivism (Wittgenstein, 1889-1951; Russell, 1872-1970; Carnap, 1891-1970; and the Vienna Circle)
  Knowledge = logical theories connected to "atomic" observation sentences that correspond to sensory inputs.
• Knowledge and action
  - Goal-based analysis:
    - Aristotle: logical connection between goals and the knowledge of the outcome of the action.
    - Antoine Arnauld (1612-1694) [several / no actions to goal]
  - Utilitarianism (1863), John Stuart Mill (1806-1873): actions guided by "rational decisions".
Foundations of AI: Mathematics
• Formal logic
  - Propositional (Boolean) logic (Leibniz; Boole, 1815-1864)
  - First-order logic: includes objects + relations (Frege, 1848-1925)
  - "Principia Mathematica", 1910, Russell + Whitehead.
  - Theory of reference (Tarski, 1902-1983).
  - Incompleteness theorem (Gödel, 1906-1978).
• Algorithms
  - Ancient Greece: Euclid (g.c.d.), Eratosthenes (primes)
  - Al-Khwarizmi (Persia, 9th century)
  - Pascal, Leibniz, Ada Lovelace.
  - Intractability: NP-complete problems (Cook, 1971; Karp, 1972)
• Probability theory
  - Cardano (1501-1576), Fermat (1601-1665), Pascal, Bernoulli (1654-1705), Bayes (1702-1761, Bayes rule), Laplace (1749-1827).
Foundations of AI: Economics
• Adam Smith (1723-1790), "An Inquiry into the Nature and Causes of the Wealth of Nations" (1776)
• Utility theory [Léon Walras, 1834-1910; Frank Ramsey, 1931]
• Decision theory: how to make choices that lead to desired outcomes. Combines utility + probability theory.
• Game theory: "The Theory of Games and Economic Behavior", 1944, John von Neumann and Oskar Morgenstern
• Operations research: rational decisions with delayed payoffs.
  Richard Bellman: Markov decision processes (1956)
• Bounded rationality [H. Simon, 1916-2001]: satisficing rather than optimizing.
• Agent models
Foundations of AI: Computers / Cybernetics
• Computer science
  - Automatic calculus: Pascal, Leibniz, Napier, Babbage (difference engine, analytical engine), Ada Lovelace, von Neumann, Zuse, Turing, modern computers (ABC, Atanasoff + Berry; ENIAC, Mauchly and Eckert).
  - A. Turing: halting problem, Turing machines, Turing test
  - A. Church: lambda calculus, Church-Turing conjecture (1935)
  - Theory of artificial automata (von Neumann)
• Cybernetics / control theory
  - Stable feedback (homeostatic) systems:
    water clocks [Ktesibios of Alexandria, c. 250 BC]
    thermostat [Drebbel, 1572-1633]
    steam engine [Watt, 1736-1819]
  - "Cybernetics" [1948, Norbert Wiener]
  - Control theory: maximize an objective function over time.
Foundations of AI: Neuroscience / Psychology / Linguistics
• Neuroscience: the study of the brain
  - Neurons [Santiago Ramón y Cajal, 1852-1934]
  - Measurements of brain activity
    - Electroencephalograph [EEG, Hans Berger, 1929]
    - Functional magnetic resonance imaging [fMRI, Ogawa et al., 1990]
  - Models of information processing in the brain.
• Psychology
  - Behaviorism: behavior as a set of stimulus-response associations [John Watson, 1878-1958; B. F. Skinner, "Verbal Behavior", 1957].
  - Cognitive psychology: mental models (knowledge, beliefs, intentions, etc.) as scientific objects [William James, 1842-1919; Bartlett, 1886-1960; Craik; Broadbent].
  - Cognitive science: MIT workshop, 1956 [Chomsky, Newell, Simon]
• Linguistics: Noam Chomsky, "Syntactic Structures", 1957
  Generative grammar, computational linguistics
A brief history of AI: Pioneers
• Neural networks
  - Artificial neurons [McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity", 1943].
  - Hebbian learning [Donald Hebb, 1949]
  - SNARC: neural network computer [Marvin Minsky, Dean Edmonds, 1951]
• A. Turing, "Computing Machinery and Intelligence", Mind, 59, 433-460, 1950
  - Turing test
  - Machine learning: reinforcement learning, genetic algorithms.
• Claude Shannon: chess playing as search (1950).
• Dartmouth conference, 1956
  Organizers: J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon.
  Logic Theorist, by A. Newell + H. Simon: a computer program capable of proving theorems from "Principia Mathematica".
A brief history of AI: Early work
[http://www.aaai.org/AITopics/html/history.html]
• General Problem Solver, 1957, Newell and Simon: mimic human reasoning.
• Nathaniel Rochester's group at IBM
  - Programs for checkers, 1952-1962, Arthur Samuel
  - Geometry Theorem Prover, Herbert Gelernter, 1959
• John McCarthy (logicist AI)
  - Lisp, 1958
  - Advice Taker, 1958: representation and reasoning
  - 1963: Stanford AI Lab. Projects to integrate logical reasoning + action
• Resolution method for first-order logic (Robinson, 1965)
A brief history of AI: Early work
• Marvin Minsky (MIT AI Lab since 1958)
  - Importance of heuristics and implementation issues (as opposed to purely formal methods)
  - Microworlds: Slagle's SAINT (1961) for calculus, T. Evans' ANALOGY (1963) for geometry, Daniel Bobrow's STUDENT (1964) for algebra.
  - The blocks world [P. Winston: learning, 1970; D. Huffman: vision, 1971; T. Winograd: natural language understanding, 1972; S. Fahlman: planning, 1974; David Waltz: vision + constraint propagation, 1975]
  - Frames, 1975
  - The Society of Mind, 1987
• Neural networks
  - Adalines (Widrow and Hoff, 1960)
  - Linear perceptron + learning algorithm (F. Rosenblatt, 1962)
  - Neural networks can represent concepts (Winograd + Cowan, 1963)
• Development of PROLOG (1972) by Alain Colmerauer.
A brief history of AI: Crisis + KBS
• Difficulties for the AI program
  - Incorporation of knowledge about the world.
  - Combinatorial explosion (Lighthill report, 1973)
    E.g. difficulties in early machine evolution (genetic algorithm) experiments by Friedberg et al. (1958)
  - Simplistic models
    E.g. the linear perceptron is unable to solve the XOR problem ["Perceptrons", Minsky + Papert, 1969]
    Solution: use a nonlinear hidden layer + backpropagation (Bryson + Ho, 1969; Werbos, 1974; Rumelhart + Hinton + Williams, 1986)
• Knowledge-based systems: use domain-specific knowledge
  - DENDRAL, by Buchanan + Feigenbaum + Lederberg, 1967 (infer molecular structure from a mass spectrogram)
  - SHRDLU, by Winograd, 1971 (natural language understanding)
  - MYCIN, by Buchanan + Feigenbaum + Shortliffe, 1974 (diagnoses blood infections; incorporates "certainty factors").
A brief history of AI: Recent History
• AI as an industry
  - Commercial expert systems (1980s)
  - Data mining (from the mid 1990s)
• Neural networks (the return of): connectionist methods as a complement to symbolic methods.
  - Statistical mechanics methods, Hopfield (1982)
  - Rumelhart + Hinton (mid 80s): neural network models of memory.
  - Topological networks (Kohonen)
• Machine learning (Valiant, Mitchell, Vapnik, Breiman, Quinlan, ...)
• Current trends
  - Bayesian networks (Judea Pearl, 1988)
  - Agents (e.g., SOAR; Newell, Laird, Rosenbloom, 1987-)
  - Web crawlers, collective / distributed intelligence, robot pets, exploration robots, sociable machines, emotional agents, domotics, ...
Definitions of AI
Two dimensions (THINK vs. ACT, HUMANLY vs. RATIONALLY):
1. ACT HUMANLY: create androids.
2. THINK HUMANLY: build machines with minds.
3. THINK RATIONALLY: construct computational models for rational processes.
4. ACT RATIONALLY: design devices that exhibit intelligent behavior.
1. Act like a human: The Turing Test
A. Turing, "Computing Machinery and Intelligence", Mind, 59, 433-460, 1950
Operational definition of intelligence: a system is intelligent if it can persuade another intelligent system (e.g., a human) of its intelligence.
Partial Turing Test
Imitation game: can the AI system make the interrogator think it is the human?
Needs: natural language, knowledge representation, automated reasoning, machine learning.
Total Turing Test:
Remove the interface.
Needs: computer vision (object perception) + robotics (object manipulation and mobility)
[Diagram: a human interrogator communicates through an interface with both a human and an AI system.]
Advantages:
• Avoids entering possibly fruitless debates on "what is intelligence?" or "can machines think?"
• The test is still a relevant measure of the success of AI systems in exhibiting intelligent behavior.
Difficulties:
• Ambiguous, not reproducible.
• Not constructive.
• Not amenable to a mathematical description.
• Current AI efforts are not directed at passing the Turing test.
2. Think like a human: Cognitive models
Cognitive science (experimental psychology + computer science):
Models of the human thinking processes.
Advantages:
• Provides understanding of intelligence + human cognition.
• The models are designed to mirror the workings of the human mind and are therefore intelligible.
Difficulties:
• The fact that a device performs like a human at a task that requires intelligence does not mean that the AI system used in the design of the device is an appropriate model of the corresponding human thinking process.
• The best artificial design for an intelligent system need not mirror the human mind.
3. Think rationally: Logicist AI
Formal logic (mathematics + computer science):
An automatic reasoning procedure in which valid conclusions are deduced from correct premises [representation + axioms + derivation rules].
Advantages:
• Precise, unambiguous notation for statements about objects in the world and relations among them.
• Resolution + a complete search algorithm can, in principle, solve any solvable problem that can be formulated in logical notation; the algorithm might never stop if the problem has no solution. (A minimal propositional sketch follows the difficulties below.)
Difficulties:
• Formalization of informal knowledge.
• Computational (space/time) requirements may render calculations impossible in practice.
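
To make the resolution idea concrete, here is a minimal sketch for the simpler propositional case (not from the slides; the clause representation and the example knowledge base are assumptions for illustration). A clause is a set of literals, and the knowledge base entails a query exactly when adding the negated query lets resolution derive the empty clause.

```python
# Propositional resolution by refutation (illustrative sketch).
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; literals are strings, negation is a leading '~'."""
    resolvents = set()
    for lit in c1:
        neg = lit[1:] if lit.startswith('~') else '~' + lit
        if neg in c2:
            resolvents.add(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

def entails(kb, negated_query):
    """KB entails the query iff KB + negated query yields the empty clause.
    The set of clauses over finitely many symbols is finite, so this terminates."""
    clauses = set(kb) | set(negated_query)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:             # fixpoint: query is not entailed
            return False
        clauses |= new

# Hypothetical example: from (P or Q), (~P or R), (~Q or R), conclude R.
kb = {frozenset({'P', 'Q'}), frozenset({'~P', 'R'}), frozenset({'~Q', 'R'})}
print(entails(kb, {frozenset({'~R'})}))   # True
```

For first-order logic the same refutation scheme applies, but entailment is only semi-decidable, which is why the procedure may run forever when no proof exists.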
4. Acting rationally: Agents
Rational agent:
An autonomous system capable of perceiving and interacting with its environment, of exploration (information gathering), learning and adaptation, and of formulating goals and designing plans to reach those goals.
The agent is rational in the sense that it acts to achieve the best outcome, or best expected outcome, according to a performance measure, conditioned on its knowledge of the world and given (limited) computational resources.
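
As a toy illustration of "best expected outcome" (the actions, probabilities, and performance values below are invented for the example, not taken from the slides), a rational choice picks the action with the highest expected performance under the agent's model of the world:

```python
# Expected-performance action selection (illustrative sketch).

def expected_performance(action, outcome_model, performance):
    """Performance of each possible outcome, weighted by its probability under the model."""
    return sum(p * performance[o] for o, p in outcome_model[action].items())

def rational_choice(actions, outcome_model, performance):
    return max(actions, key=lambda a: expected_performance(a, outcome_model, performance))

# Hypothetical two-action example.
outcome_model = {
    'explore': {'find_goal': 0.3, 'waste_time': 0.7},
    'exploit': {'find_goal': 0.1, 'waste_time': 0.9},
}
performance = {'find_goal': 10.0, 'waste_time': -1.0}
print(rational_choice(['explore', 'exploit'], outcome_model, performance))  # -> 'explore'
```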
Advantages:
• Integrates and extends the previous definitions.
• Limited rationality is more realistic than absolute rationality.
Difficulties: We explore them throughout this course.
Task environment
• Environment
• Perceptions (sensors) / Actions (actuators)
• Performance measure: given a sequence of percepts, a rational agent selects an action that maximizes the expected performance given the evidence (from percepts / internal knowledge) + resources.
Classification of task environments (a small descriptive sketch follows this list):
• Fully observable / partially observable
• Deterministic / strategic (multi-agent system) / stochastic
• Episodic / sequential
• Static / semidynamic (agent) / dynamic (agent + environment)
• Discrete / continuous
• Single agent / multiple agents (cooperation / competition)
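
The dimensions above can be recorded per task. The sketch below is only an illustration; the field names and the chess-with-a-clock example values are assumptions following the usual textbook classification, not the slides.

```python
# Describing a task environment by the dimensions listed above (illustrative sketch).
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    observable: str    # 'fully' or 'partially'
    dynamics: str      # 'deterministic', 'strategic', or 'stochastic'
    episodes: str      # 'episodic' or 'sequential'
    change: str        # 'static', 'semidynamic', or 'dynamic'
    states: str        # 'discrete' or 'continuous'
    agents: str        # 'single' or 'multi'

# Hypothetical example: chess played with a clock.
chess_with_clock = TaskEnvironment(
    observable='fully', dynamics='strategic', episodes='sequential',
    change='semidynamic', states='discrete', agents='multi',
)
print(chess_with_clock)
```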
Agent design
• Simple reflex agent [fully observable environment] (see the sketch after this list):
  1. The agent is in a given state.
  2. The state is updated by incorporating the knowledge obtained through a percept.
  3. The agent selects an action from a table containing condition-action rules.
• Incorporate a model to handle partially observable environments: the agent has a model of the evolution of the world + the results of its actions (model-based agent).
• Incorporate goals to select the best possible action(s) to reach a desirable state (goal-based agent).
• Incorporate utility to select between different actions that achieve (or come close to) the goal (utility-based agent).
• Incorporate learning (learning agent):
  - Critic
  - Reward / penalty
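
A minimal sketch of the simple reflex agent with a condition-action table, in the spirit of the classic two-location vacuum world (the world and its rules are assumptions for illustration, not from the slides):

```python
# Simple reflex agent: the action is chosen by table lookup on the current percept only,
# so it relies on a fully observable environment (illustrative sketch).

# Condition-action rules: (location, status) -> action
RULES = {
    ('A', 'Dirty'): 'Suck',
    ('B', 'Dirty'): 'Suck',
    ('A', 'Clean'): 'Right',
    ('B', 'Clean'): 'Left',
}

def simple_reflex_agent(percept):
    location, status = percept
    return RULES[(location, status)]

# One step of the agent-environment loop.
print(simple_reflex_agent(('A', 'Dirty')))   # -> 'Suck'
```

A model-based, goal-based, or utility-based agent replaces the table lookup with, respectively, an internal state update, a search toward a goal state, or the expected-performance maximization sketched earlier.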
Multiagent systems
Distributed AI:
Intelligent behavior that emerges from the cooperation of autonomous agents.
• Communication
• Coordination
• Negotiation, etc.
Applications:
• E-commerce
• Optimization of industrial production processes.
• Real-time monitoring of telecom networks.
• Simulations
  - Ecology
  - Social interactions.
• Social aspects of intelligence