Introduction to Artificial Intelligence


For Thursday

No reading or homework

Exam 2: take-home, due Tuesday

Read chapter 22 for next Tuesday

Program 4

Any questions?

Hypothesis Space in Decision Tree Induction

Conducts a search of the space of decision trees, which can represent all possible discrete functions.

Creates a single discrete hypothesis consistent with the data, so there is no way to provide confidences or create useful queries.


Algorithm Characteristics

Performs hill-climbing search, so it may find only a locally optimal solution. It is guaranteed to find a tree that fits any noise-free training set, but that tree may not be the smallest.

Performs batch learning. Bases each decision on a batch of examples and can terminate early to avoid fitting noisy data.

Bias

The bias is for trees of minimal depth; however, greedy search introduces the complication that it may not find the minimal tree, instead placing features with high information gain near the top of the tree (see the sketch below).

Implements a preference bias (search bias) as opposed to a restriction bias (language bias) like candidate elimination.
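To make the greedy splitting criterion concrete, here is a minimal sketch (not from the slides) of the information-gain computation used to rank features; the toy data and feature names are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    """Expected reduction in entropy from splitting on `feature`.
    `examples` is a list of dicts mapping feature names to values."""
    n = len(labels)
    remainder = 0.0
    for value in set(ex[feature] for ex in examples):
        subset = [lab for ex, lab in zip(examples, labels) if ex[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Hypothetical toy data: a greedy inducer would split on "color" first.
examples = [{"size": "big", "color": "red"},  {"size": "small", "color": "red"},
            {"size": "big", "color": "blue"}, {"size": "small", "color": "blue"}]
labels = ["+", "+", "-", "-"]
print(information_gain(examples, labels, "color"))  # 1.0
print(information_gain(examples, labels, "size"))   # 0.0
```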


Simplicity

Occam's razor can be defended on the basis that there are relatively few simple hypotheses compared to complex ones; therefore, a simple hypothesis that is consistent with the data is less likely to be a statistical coincidence than a complex consistent one.

However:

Simplicity is relative to the hypothesis language used.

The argument works for any small hypothesis space and holds equally well for a small space of arcane complex hypotheses, e.g., decision trees with exactly 133 nodes where attributes along every branch are ordered alphabetically from root to leaf.

Overfitting

Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization performance, since:

There may be noise in the training data that the tree is fitting.

The algorithm might be making some decisions toward the leaves of the tree that are based on very little data and may not reflect reliable trends in the data.

A hypothesis h is said to overfit the training data if there exists another hypothesis h′ such that h has smaller error than h′ on the training data but h′ has smaller error on the test data than h.
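The same definition written out in symbols (a direct transcription of the sentence above):

```latex
h \text{ overfits the training data} \iff \exists\, h' \in H :\;
\mathrm{error}_{\mathrm{train}}(h) < \mathrm{error}_{\mathrm{train}}(h')
\;\land\;
\mathrm{error}_{\mathrm{test}}(h') < \mathrm{error}_{\mathrm{test}}(h)
```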


Overfitting and Noise

Category or attribute noise can cause overfitting.

Add a noisy instance: <<medium, green, circle>, +> (really −)

Noise can also cause directly conflicting examples with the same description and different classes. It is impossible to fit such data, and the leaf must be labeled with the majority category.

<<big, red, circle>, −> (really +)

Conflicting examples can also arise if the attributes are incomplete and inadequate to discriminate the categories.


Avoiding Overfitting

Two basic approaches:

Prepruning: stop growing the tree at some point during construction when it is determined that there is not enough data to make reliable choices.

Postpruning: grow the full tree, then remove nodes that do not seem to have sufficient evidence.


Evaluating Subtrees to Prune

Cross-validation: reserve some of the training data as a hold-out set (validation set, tuning set) to evaluate the utility of subtrees (see the sketch below).

Statistical testing: perform a statistical test on the training data to determine whether any observed regularity can be dismissed as likely due to random chance.

Minimum Description Length (MDL): determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions.
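A minimal sketch of the hold-out test behind reduced-error postpruning; the dict-based tree representation and the toy data are illustrative assumptions, not from the slides.

```python
def classify(tree, x):
    """A tree is either a class label (leaf) or a (feature, branches) pair."""
    if not isinstance(tree, tuple):
        return tree
    feature, branches = tree
    return classify(branches[x[feature]], x)

def accuracy(tree, data):
    """Fraction of (example, label) pairs the tree classifies correctly."""
    return sum(classify(tree, x) == y for x, y in data) / len(data)

# Hypothetical tree whose "size" subtree may be fitting noise:
tree   = ("color", {"red": ("size", {"big": "+", "small": "-"}), "blue": "-"})
pruned = ("color", {"red": "+", "blue": "-"})   # subtree collapsed to a majority leaf

validation = [({"color": "red", "size": "big"}, "+"),
              ({"color": "red", "size": "small"}, "+"),
              ({"color": "blue", "size": "big"}, "-")]

# Keep the pruned tree iff it does at least as well on the hold-out set.
if accuracy(pruned, validation) >= accuracy(tree, validation):
    tree = pruned
```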


Learning Theory

Theorems that characterize classes of learning problems or specific algorithms in terms of computational complexity or sample complexity, i.e., the number of training examples necessary or sufficient to learn hypotheses of a given accuracy.

The complexity of a learning problem depends on:

Size or expressiveness of the hypothesis space.

Accuracy to which the target concept must be approximated.

Probability with which the learner must produce a successful hypothesis.

Manner in which training examples are presented, e.g., randomly or by query to an oracle.

Types of Results

Learning in the limit: is the learner guaranteed to converge to the correct hypothesis in the limit as the number of training examples increases indefinitely?

Sample complexity: how many training examples are needed for a learner to construct (with high probability) a highly accurate concept?

Computational complexity: how much computational resource (time and space) is needed for a learner to construct (with high probability) a highly accurate concept?

High sample complexity implies high computational complexity, since the learner at least needs to read the input data.

Mistake bound: learning incrementally, how many training examples will the learner misclassify before constructing a highly accurate concept?

Learning in the Limit

Given a continuous stream of examples where the learner predicts whether each one is a member of the concept and is then told the correct answer, does the learner eventually converge to a correct concept and never make a mistake again?

No limit is placed on the number of examples required or on computational demands, but the learner must eventually learn the concept exactly.

By simple enumeration, concepts from any known finite hypothesis space are learnable in the limit, although this typically requires an exponential (or doubly exponential) number of examples and amount of time.

The class of total recursive (Turing-computable) functions is not learnable in the limit.

Learning in the Limit vs. PAC Model

The learning in the limit model is too strong:

Requires learning the exact correct concept.

The learning in the limit model is too weak:

Allows unlimited data and computational resources.

PAC Model:

Only requires learning a Probably Approximately Correct concept: learn a decent approximation most of the time.

Requires polynomial sample complexity and computational complexity.

Cannot Learn Exact Concepts from Limited Data, Only Approximations

(Figure: a learner produces a classifier from limited data; the classifier's positive and negative regions only approximate the target concept's.)

Cannot Learn Even Approximate Concepts from Pathological Training Sets

(Figure: given an unrepresentative training sample, the learner's classifier gets the positive and negative regions badly wrong.)

PAC Learning

The only reasonable expectation of a learner is that, with high probability, it learns a close approximation to the target concept.

In the PAC model, we specify two small parameters, ε and δ, and require that with probability at least (1 − δ) a system learn a concept with error at most ε.

Formal Definition of PAC-Learnable

Consider a concept class C defined over an instance space X containing instances of length n, and a learner L using a hypothesis space H.

C is said to be PAC-learnable by L using H iff for all c ∈ C, all distributions D over X, and all 0 < ε < 0.5 and 0 < δ < 0.5, learner L, by sampling random examples from distribution D, will with probability at least 1 − δ output a hypothesis h ∈ H such that error_D(h) ≤ ε, in time polynomial in 1/ε, 1/δ, n, and size(c).

Example:

X: instances described by n binary features

C: conjunctive descriptions over these features

H: conjunctive descriptions over these features

L: most-specific conjunctive generalization algorithm (Find-S)

size(c): the number of literals in c (i.e., the length of the conjunction).
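Find-S, named above as the learner, is simple enough to sketch; this minimal version (the tuple-of-strings representation with "?" for "don't care" is an illustrative assumption) returns the most specific conjunction consistent with the positive examples.

```python
def find_s(examples):
    """Most-specific conjunction covering the positive examples.
    `examples` is a list of (feature_tuple, label) pairs with labels '+'/'-'.
    In the returned hypothesis, '?' marks an unconstrained feature."""
    h = None
    for x, label in examples:
        if label != '+':
            continue                      # Find-S ignores negative examples
        if h is None:
            h = list(x)                   # start at the first positive example
        else:                             # generalize just enough to cover x
            h = [hi if hi == xi else '?' for hi, xi in zip(h, x)]
    return tuple(h) if h else None

# Hypothetical toy run:
data = [(("big", "red", "circle"), "+"),
        (("small", "red", "circle"), "+"),
        (("big", "blue", "square"), "-")]
print(find_s(data))   # ('?', 'red', 'circle')
```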

Issues of PAC Learnability

The computational limitation also imposes a polynomial constraint on the training set size, since a learner can process at most polynomial data in polynomial time.

How to prove PAC learnability:

First, prove that the sample complexity of learning C using H is polynomial.

Second, prove that the learner can train on a polynomial-sized data set in polynomial time.

To be PAC-learnable, there must be a hypothesis in H with arbitrarily small error for every concept in C; generally C ⊆ H.


Version Space

Bounds on the generalizations of a set of examples.
examples

Consistent Learners

A learner L using a hypothesis space H and training data D is said to be a consistent learner if it always outputs a hypothesis with zero error on D whenever H contains such a hypothesis.

By definition, a consistent learner must produce a hypothesis in the version space for H given D.

Therefore, to bound the number of examples needed by a consistent learner, we just need to bound the number of examples needed to ensure that the version space contains no hypotheses with unacceptably high error.

ε-Exhausted Version Space

The version space VS_{H,D} is said to be ε-exhausted iff every hypothesis in it has true error less than or equal to ε.

In other words, there are enough training examples to guarantee that any consistent hypothesis has error at most ε.

One can never be sure that the version space is ε-exhausted, but one can bound the probability that it is not.

Theorem 7.1 (Haussler, 1988): if the hypothesis space H is finite, and D is a sequence of m ≥ 1 independent random examples for some target concept c, then for any 0 ≤ ε ≤ 1, the probability that the version space VS_{H,D} is not ε-exhausted is less than or equal to:

|H| e^(−εm)
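The slide states Haussler's bound without its (short) justification; the standard argument, spelled out here for completeness, is:

```latex
% A "bad" hypothesis with true error > \varepsilon survives one random
% example with probability at most (1-\varepsilon), hence survives m
% independent examples with probability at most
% (1-\varepsilon)^m \le e^{-\varepsilon m}.
% Union-bounding over the at most |H| bad hypotheses:
P\bigl(VS_{H,D}\ \text{not}\ \varepsilon\text{-exhausted}\bigr)
  \;\le\; |H|\,(1-\varepsilon)^m \;\le\; |H|\,e^{-\varepsilon m}
```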


Sample Complexity Analysis

Let δ be an upper bound on the probability of not exhausting the version space. So:

|H| e^(−εm) ≤ δ
−εm ≤ ln(δ/|H|)
m ≥ (1/ε)(ln|H| + ln(1/δ))

Sample Complexity Result

Therefore, any consistent learner, given at least

m ≥ (1/ε)(ln|H| + ln(1/δ))

examples will produce a result that is PAC.

We just need to determine the size of a hypothesis space to instantiate this result for learning specific classes of concepts.

This gives a sufficient number of examples for PAC learning, but not a necessary number. Several approximations, like the one used to bound the probability of a disjunction, make this a gross overestimate in practice.

Sample Complexity of Conjunction Learning

Consider conjunctions over n boolean features. There are 3^n of these, since each feature can appear positively, appear negatively, or not appear in a given conjunction. Therefore |H| = 3^n, and a sufficient number of examples to learn a PAC concept is:

m ≥ (1/ε)(ln 3^n + ln(1/δ)) = (1/ε)(n ln 3 + ln(1/δ))

Concrete examples:

δ = ε = 0.05, n = 10 gives 280 examples

δ = 0.01, ε = 0.05, n = 10 gives 312 examples

δ = ε = 0.01, n = 10 gives 1,560 examples

δ = ε = 0.01, n = 50 gives 5,954 examples

The result holds for any consistent learner.

Sample Complexity of Learning Arbitrary Boolean Functions

Consider any boolean function over n boolean features, such as the hypothesis space of DNF or decision trees. There are 2^(2^n) of these, so a sufficient number of examples to learn a PAC concept is:

m ≥ (1/ε)(ln 2^(2^n) + ln(1/δ)) = (1/ε)(2^n ln 2 + ln(1/δ))

Concrete examples:

δ = ε = 0.05, n = 10 gives 14,256 examples

δ = ε = 0.05, n = 20 gives 14,536,410 examples

δ = ε = 0.05, n = 50 gives 1.561 × 10^16 examples
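A small helper (not from the slides) reproduces the figures on this slide and the previous one from the generic bound; it takes ln|H| directly so the doubly exponential boolean case does not overflow.

```python
import math

def pac_sample_bound(ln_H, eps, delta):
    """Sufficient examples for any consistent learner:
    m >= (1/eps) * (ln|H| + ln(1/delta))."""
    return math.ceil((ln_H + math.log(1 / delta)) / eps)

# Conjunctions over n boolean features: |H| = 3^n, so ln|H| = n ln 3.
print(pac_sample_bound(10 * math.log(3), eps=0.05, delta=0.05))  # 280
print(pac_sample_bound(10 * math.log(3), eps=0.05, delta=0.01))  # 312
# Arbitrary boolean functions: |H| = 2^(2^n), so ln|H| = 2^n ln 2.
print(pac_sample_bound(2**10 * math.log(2), eps=0.05, delta=0.05))  # 14256
```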


COLT Conclusions

The PAC framework provides a theoretical basis for analyzing the effectiveness of learning algorithms.

The sample complexity for any consistent learner using some hypothesis space H can be determined from a measure of its expressiveness, |H| or VC(H), quantifying bias and relating it to generalization.

If the sample complexity is tractable, then the computational complexity of finding a consistent hypothesis in H governs its PAC learnability.

Constant factors are more important in sample complexity than in computational complexity, since our ability to gather data is generally not growing exponentially.

Experimental results suggest that theoretical sample complexity bounds overestimate the number of training instances needed in practice because they are worst-case upper bounds.

COLT Conclusions (cont)

Additional results have been produced for analyzing:

Learning with queries

Learning with noisy data

Average-case sample complexity given assumptions about the data distribution

Learning finite automata

Learning neural networks

Analyzing practical algorithms that use a preference bias is difficult.

Some effective practical algorithms were motivated by theoretical results:

Winnow

Boosting

Support Vector Machines (SVMs)


Beyond a Single Learner

Ensembles of learners often work better than individual learning algorithms.

Several possible ensemble approaches:

Ensembles created by using different learning methods and voting

Bagging

Boosting

Bagging

Train each member of the ensemble on a different random selection of the training examples.

Seems to work fairly well, but offers no real guarantees.

Boosting

The most used ensemble method.

Based on the concept of a weighted training set.

Works especially well with weak learners.

Start with all example weights at 1.

Learn a hypothesis from the weighted examples.

Increase the weights of all misclassified examples and decrease the weights of all correctly classified examples.

Learn a new hypothesis.

Repeat (a minimal sketch follows below).
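A minimal sketch of the weighted-training-set loop just described. The doubling/halving reweighting is a simplification of this general idea (AdaBoost's exact multiplicative updates and weighted vote differ), and `learn_weighted` stands in for any weak learner that accepts per-example weights.

```python
def boost(examples, labels, learn_weighted, rounds=10):
    """Ensemble built by repeatedly learning on a reweighted training set.
    `learn_weighted(examples, labels, weights)` must return a classifier,
    i.e., a function mapping an example to a label."""
    weights = [1.0] * len(examples)            # start with all weights at 1
    ensemble = []
    for _ in range(rounds):
        h = learn_weighted(examples, labels, weights)
        ensemble.append(h)
        for i, (x, y) in enumerate(zip(examples, labels)):
            # up-weight mistakes, down-weight correct classifications
            weights[i] *= 2.0 if h(x) != y else 0.5

    def classify(x):                           # simple (unweighted) majority vote
        votes = [h(x) for h in ensemble]
        return max(set(votes), key=votes.count)
    return classify
```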

Why Neural Networks?

Analogy to biological systems, the best examples we have of robust learning systems.

Models of biological systems allow us to understand how they learn and adapt.

Massive parallelism allows for computational efficiency.

Graceful degradation due to distributed representations that spread knowledge over large numbers of computational units.

Intelligent behavior as an emergent property of large numbers of simple units, rather than resulting from explicit symbolically encoded rules.

Neural Speed Constraints

Neuron "switching time" is on the order of milliseconds, compared to nanoseconds for current transistors.

That is a factor of a million difference in speed.

However, biological systems can perform significant cognitive tasks (vision, language understanding) in seconds or tenths of a second.

What That Means

Therefore, there is only time for about a hundred serial steps to perform such tasks.

Even with limited abilities, current AI systems require orders of magnitude more serial steps.

The human brain has approximately 10^11 neurons, each connected on average to 10^4 others; it must therefore exploit massive parallelism.


Real Neurons

Cells forming the basis of neural tissue:

Cell body

Dendrites

Axon

Synaptic terminals

The electrical potential across the cell membrane exhibits spikes called action potentials.

Originating in the cell body, this spike travels down the axon and causes chemical neurotransmitters to be released at the synaptic terminals.

The chemical diffuses across the synapse into the dendrites of neighboring cells.

Real Neurons (cont)

Synapses can be excitatory or inhibitory.

The size of the synaptic terminal influences the strength of the connection.

Cells "add up" the incoming chemical messages from all neighboring cells; if the net positive influence exceeds a threshold, they "fire" and emit an action potential.


Model Neuron (Linear Threshold Unit)

A neuron is modelled by a unit j connected by weights, w_ji, to other units i.

The net input to a unit is defined as:

net_j = Σ_i w_ji · o_i

The output of a unit is a threshold function on the net input:

o_j = 1 if net_j > T_j, and 0 otherwise

Neural Computation

McCulloch and Pitts (1943) showed how linear threshold units can be used to compute logical functions.

Can build basic logic gates (sketched in code below):

AND: let all w_ji be T_j/n + ε, where n = number of inputs and ε is small

OR: let all w_ji be T_j + ε

NOT: let one input be a constant 1 with weight T_j + ε, and give the input to be inverted weight −T_j
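A minimal runnable sketch of a linear threshold unit wired into the gates above; the names and the choices T = 1, ε = 0.1 are illustrative.

```python
def ltu(weights, threshold):
    """Linear threshold unit: output 1 iff sum_i(w_i * o_i) > threshold."""
    return lambda *inputs: int(sum(w * o for w, o in zip(weights, inputs)) > threshold)

T, eps = 1.0, 0.1
AND = ltu([T / 2 + eps, T / 2 + eps], T)   # both inputs needed to exceed T
OR  = ltu([T + eps, T + eps], T)           # any single input exceeds T
NOT = ltu([T + eps, -T], T)                # first input is the constant 1

print(AND(1, 1), AND(1, 0))   # 1 0
print(OR(0, 1), OR(0, 0))     # 1 0
print(NOT(1, 1), NOT(1, 0))   # 0 1
```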


Neural Computation (cont)

Given these basic gates, one can build arbitrary logic circuits, finite-state machines, and computers.

Given negated inputs, two layers of linear threshold units can specify any boolean function using a two-layer AND-OR network.

Learning

Hebb (1949) suggested that if two units are both active (firing), then the weight between them should increase:

w_ji = w_ji + η · o_j · o_i

where η is a constant called the learning rate.

Supported by physiological evidence.



Alternate Learning Rule

Rosenblatt (1959) suggested that if a target output value is provided for a single neuron with fixed inputs, one can incrementally change the weights to learn to produce these outputs using the perceptron learning rule.

Assumes binary-valued inputs/outputs.

Assumes a single linear threshold unit.

Assumes input features are detected by fixed networks.

Perceptron Learning Rule

If the target output for output unit j is t_j:

w_ji = w_ji + η(t_j − o_j) · o_i

Equivalent to the intuitive rules:

If the output is correct, don't change the weights.

If the output is low (o_j = 0, t_j = 1), increment the weights on all inputs that are 1.

If the output is high (o_j = 1, t_j = 0), decrement the weights on all inputs that are 1.

Must also adjust the threshold:

T_j = T_j − η(t_j − o_j)

or, equivalently, assume there is a weight w_j0 = −T_j for an extra input unit 0 that has constant output o_0 = 1 and that the threshold is always 0.


Perceptron Learning Algorithm

Repeatedly iterate through the examples, adjusting the weights according to the perceptron learning rule, until all outputs are correct (a runnable sketch follows below):

Initialize the weights to all zero (or randomly).

Until the outputs for all training examples are correct:

For each training example e:

Compute the current output o_j.

Compare it to the target t_j and update the weights according to the perceptron learning rule.
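A minimal runnable sketch of the algorithm above, folding the threshold into a bias weight (w_0 = −T, constant input 1) as on the previous slide; the data, learning rate, and epoch cap are illustrative.

```python
def perceptron_train(examples, labels, eta=0.1, max_epochs=100):
    """Train a single linear threshold unit with the perceptron rule.
    `examples` are tuples of binary features; `labels` are 0/1.
    w[0] is the bias weight, so the effective threshold is 0."""
    n = len(examples[0])
    w = [0.0] * (n + 1)
    for _ in range(max_epochs):                # each outer pass is one epoch
        errors = 0
        for x, t in zip(examples, labels):
            o = int(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) > 0)
            if o != t:
                errors += 1
                w[0] += eta * (t - o)          # bias update (constant input 1)
                for i, xi in enumerate(x):
                    w[i + 1] += eta * (t - o) * xi
        if errors == 0:                        # all outputs correct: converged
            break
    return w

# AND is linearly separable, so this converges:
w = perceptron_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```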


Algorithm Notes

Each execution of the outer loop is called an epoch.

If the output is considered concept membership and the inputs are binary features, the algorithm is easily applied to concept learning problems.

For multiple-category problems, learn a separate perceptron for each category and assign each instance to the class whose perceptron most exceeds its threshold.

When will this algorithm terminate (converge)?


Representational Limitations

Perceptrons can only represent linear threshold functions and can therefore only learn data that is linearly separable (positive and negative examples separable by a hyperplane in n-dimensional space).

Cannot represent exclusive-or (XOR).


Perceptron Learnability

A system obviously cannot learn what it cannot represent.

Minsky and Papert (1969) demonstrated that many functions, like parity (the n-input generalization of XOR), cannot be represented.

For visual pattern recognition, they assumed that input features are local and extract a feature within a fixed radius, in which case no set of input features supports learning:

Symmetry

Connectivity

These limitations discouraged subsequent research on neural networks.


Perceptron Convergence and Cycling Theorems

Perceptron Convergence Theorem: if there is a set of weights consistent with the training data (i.e., the data is linearly separable), the perceptron learning algorithm will converge (Minsky & Papert, 1969).

Perceptron Cycling Theorem: if the training data is not linearly separable, the perceptron learning algorithm will eventually repeat the same set of weights and threshold at the end of some epoch and therefore enter an infinite loop.


Perceptron Learning as Hill Climbing

The search space for perceptron learning is the space of possible values for the weights (and threshold).

The evaluation metric is the error these weights produce when used to classify the training examples.

The perceptron learning algorithm performs a form of hill-climbing (gradient descent), at each point altering the weights slightly in a direction that helps minimize the error.

The perceptron convergence theorem guarantees that, for the linearly separable case, there is only one local minimum and the space is well behaved.


Perceptron Performance

Can represent and learn conjunctive concepts and M-of-N concepts (true if at least M of a set of N selected binary features are true).

Although simple and restrictive, this high-bias algorithm performs quite well on many realistic problems.

However, the representational restriction is limiting in many applications.


Multi-Layer Neural Networks

Multi-layer networks can represent arbitrary functions, but building an effective learning method for such networks was thought to be difficult.

Generally, networks are composed of an input layer, a hidden layer, and an output layer, and activation feeds forward from input to output.

Patterns of activation are presented at the inputs, and the resulting activation of the outputs is computed.

The values of the weights determine the function computed.

A network with one hidden layer containing a sufficient number of units can represent any boolean function.

Basic Problem

The general approach to the learning algorithm is to apply gradient descent.

However, for the general case, we need to be able to differentiate the function computed by a unit, and the standard threshold function is not differentiable at the threshold.

Differentiable Threshold Unit

Need some sort of nonlinear output function to allow computation of arbitrary functions by multi-layer networks (a multi-layer network of linear units can still only represent a linear function).

Solution: use a nonlinear, differentiable output function such as the sigmoid or logistic function:

o_j = 1 / (1 + e^(−(net_j − T_j)))

Can also use other functions, such as tanh or a Gaussian.


Error Measure

Since there are multiple continuous outputs, we can define an overall error measure:

E(W) = (1/2) Σ_{d∈D} Σ_{k∈K} (t_kd − o_kd)^2

where D is the set of training examples, K is the set of output units, t_kd is the target output for the kth unit given input d, and o_kd is the network output for the kth unit given input d.

Gradient Descent

The derivative of the output of a sigmoid unit with respect to its net input is:

∂o_j / ∂net_j = o_j(1 − o_j)

This can be used to derive a learning rule that performs gradient descent in weight space in an attempt to minimize the error function:

Δw_ji = −η (∂E / ∂w_ji)
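Expanding this gradient by the chain rule yields the output-unit case of the backpropagation rule on the next slide; the expansion is standard but is spelled out here, not on the slides:

```latex
\frac{\partial E}{\partial w_{ji}}
  = \frac{\partial E}{\partial o_j}
    \frac{\partial o_j}{\partial net_j}
    \frac{\partial net_j}{\partial w_{ji}}
  = -(t_j - o_j)\, o_j (1 - o_j)\, o_i
\quad\Longrightarrow\quad
\Delta w_{ji} = \eta \underbrace{o_j (1 - o_j)(t_j - o_j)}_{\delta_j}\, o_i
```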


Backpropagation Learning Rule

Each weight w_ji is changed by:

Δw_ji = η · δ_j · o_i

δ_j = o_j(1 − o_j)(t_j − o_j)        if j is an output unit
δ_j = o_j(1 − o_j) Σ_k δ_k w_kj      otherwise

where η is a constant called the learning rate, t_j is the correct output for unit j, and δ_j is an error measure for unit j.

First determine the error for the output units, then backpropagate this error layer by layer through the network, changing the weights appropriately at each layer.


Backpropagation Learning Algorithm

Create a three-layer network with N hidden units, fully connect input units to hidden units and hidden units to output units, and initialize with small random weights.

Until all examples produce the correct output within ε, or the mean-squared error ceases to decrease (or other termination criteria are met):

Begin epoch

For each example in the training set:

Compute the network output for this example.

Compute the error between this output and the correct output.

Backpropagate this error and adjust the weights to decrease it.

End epoch

Since continuous outputs only approach 0 or 1 in the limit, we must allow for some ε-approximation to learn binary functions. (A minimal sketch follows below.)
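A minimal numpy sketch of the algorithm above for one hidden layer, trained on XOR; the layer sizes, learning rate, epoch count, and seed are illustrative choices, and thresholds are folded into bias weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 3))  # input -> 3 hidden units
b1 = np.zeros(3)                         # hidden biases (negated thresholds)
W2 = rng.normal(scale=0.5, size=(3, 1))  # hidden -> 1 output unit
b2 = np.zeros(1)
eta = 0.5

for epoch in range(10000):                # each pass over the data is an epoch
    H = sigmoid(X @ W1 + b1)              # forward: hidden activations
    O = sigmoid(H @ W2 + b2)              # forward: output activations
    delta_out = O * (1 - O) * (T - O)             # delta for output units
    delta_hid = H * (1 - H) * (delta_out @ W2.T)  # backpropagated delta
    W2 += eta * H.T @ delta_out           # gradient-descent updates
    b2 += eta * delta_out.sum(axis=0)
    W1 += eta * X.T @ delta_hid
    b1 += eta * delta_hid.sum(axis=0)

print(np.round(O, 2))   # typically approaches [0, 1, 1, 0]
```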


Comments on Training

There is no guarantee of convergence; the network may oscillate or reach a local minimum.

However, in practice, many large networks can be adequately trained on large amounts of data for realistic problems.

Many epochs (thousands) may be needed for adequate training; large data sets may require hours or days of CPU time.

Termination criteria can be:

A fixed number of epochs

A threshold on training set error


Representational Power

Multi-layer sigmoidal networks are very expressive.

Boolean functions: any Boolean function can be represented by a two-layer network by simulating a two-layer AND-OR network, but the number of required hidden units can grow exponentially in the number of inputs.

Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network. Sigmoid functions provide a set of basis functions from which arbitrary functions can be composed, just as any function can be represented by a sum of sine waves in Fourier analysis.

Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network.


Sample Learned XOR Network

(Figure: a trained two-layer network computing XOR; the learned edge weights shown in the original diagram are omitted here.)

Hidden unit A represents ¬(X ∧ Y).

Hidden unit B represents ¬(X ∨ Y).

Output O represents A ∧ ¬B ≡ ¬(X ∧ Y) ∧ (X ∨ Y) ≡ X ⊕ Y.

Hidden Unit Representations

Trained hidden units can be seen as newly constructed features that re-represent the examples so that they become linearly separable.

On many real problems, hidden units can end up representing interesting recognizable features such as vowel detectors, edge detectors, etc.

However, particularly with many hidden units, the representations become more "distributed" and hard to interpret.


Input/Output Coding

Appropriate coding of the inputs and outputs can make the learning problem easier and improve generalization.

It is best to encode each binary feature as a separate input unit and, for multi-valued features, to include one binary unit per value, rather than trying to encode the input information in fewer units using binary coding or continuous values.

I/O Coding (cont)

Continuous inputs can be handled by a single input unit by scaling them between 0 and 1.

For disjoint categorization problems, it is best to have one output unit per category rather than encoding n categories into log n bits. Continuous output values then represent certainty in the various categories; assign test cases to the category with the highest output (a sketch follows below).

Continuous outputs (regression) can also be handled by scaling between 0 and 1.
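A minimal sketch of the one-unit-per-value input coding and one-unit-per-category output decoding recommended above; the feature values, categories, and output numbers are hypothetical.

```python
def one_hot(value, values):
    """One binary input unit per possible feature value."""
    return [1.0 if value == v else 0.0 for v in values]

# Input: the multi-valued feature "color" becomes three input units.
print(one_hot("red", ["red", "green", "blue"]))   # [1.0, 0.0, 0.0]

# Output: one unit per category; pick the highest (most certain) activation.
categories = ["circle", "square", "triangle"]
outputs = [0.12, 0.81, 0.07]                      # hypothetical network outputs
print(categories[outputs.index(max(outputs))])    # square
```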

Neural Net Conclusions

Learned concepts can be represented by networks of linear threshold units and trained using gradient descent.

The analogy to the brain and numerous successful applications have generated significant interest.

Generally much slower to train than other learning methods, but explores a rich hypothesis space that seems to work well in many domains.

Has the potential to model biological and cognitive phenomena and to increase our understanding of real neural systems.

Backprop itself, however, is not very biologically plausible.