Search as a problem-solving technique.
Consider a goal-based agent capable of formulating a search problem by:
– providing a description of the current problem state,
– providing a description of its own actions that can transform one problem state into another one,
– providing a description of the goal state where a desired goal holds.
The solution of such a problem consists of finding a path from the current state to the goal state. The problem space can be huge, which is why the agent must know how to efficiently search it and evaluate solutions.
To define how “good” a solution is, a path cost function can be assigned to a path. Different solutions of the same problem can be compared by means of their path cost functions.
We shall discuss two types of searches:
– Uninformed search (no problem-specific information is available to direct the search). We shall use the Missionaries and Cannibals (M&C) problem to illustrate uninformed search.
– Informed search (there is problem-specific information helping the agent through the search process). We shall use the 5-puzzle problem (a downsized version of the 8-puzzle problem) to illustrate informed search.
Uninformed search example: the Missionaries and Cannibals problem.
The search problem is defined as follows:
Description of the current state: a sequence of six numbers, representing the number of missionaries, cannibals and boats on each bank of the river. Assuming 3 missionaries, 3 cannibals and one boat, the initial state is
(setf start '(3 3 1 0 0 0))
Description of possible actions (or operators): take either one missionary, one cannibal, two missionaries, two cannibals, or one of each across the river in the boat, i.e.
(setf list-of-actions '((1 0 1) (0 1 1) (2 0 1) (0 2 1) (1 1 1)))
Description of the goal state, i.e.
(setf finish '(0 0 0 3 3 1))
Note that some world states are illegal (the number of cannibals must never exceed the number of missionaries on any side of the river where missionaries are present). Therefore, we must impose certain constraints on the search to avoid illegal states. We must also guarantee that the search will not fall into a loop (some actions may “undo” the result of a previous action).
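The legality test and the action set above can be sketched in Python (used here instead of the notes' Lisp for concreteness; the names `legal` and `successors` are our own):

```python
# Sketch: M&C state legality and successor generation.
# State = (m1, c1, b1, m2, c2, b2), as in the notes.

ACTIONS = [(1, 0, 1), (0, 1, 1), (2, 0, 1), (0, 2, 1), (1, 1, 1)]

def legal(state):
    m1, c1, b1, m2, c2, b2 = state
    if min(state) < 0:
        return False
    # Cannibals may not outnumber missionaries on a bank with missionaries.
    return (m1 == 0 or m1 >= c1) and (m2 == 0 or m2 >= c2)

def successors(state):
    m1, c1, b1, m2, c2, b2 = state
    result = []
    for dm, dc, db in ACTIONS:
        if b1 == 1:   # boat on bank 1: ferry people to bank 2
            new = (m1 - dm, c1 - dc, 0, m2 + dm, c2 + dc, 1)
        else:         # boat on bank 2: ferry people back to bank 1
            new = (m1 + dm, c1 + dc, 1, m2 - dm, c2 - dc, 0)
        if legal(new):
            result.append(new)
    return result
```

From the initial state (3 3 1 0 0 0), only three of the five actions lead to legal states.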
The problem space for the M&C problem
The problem space is a complete description of the domain. It can be huge, which is why it is only procedurally defined. [Diagram: the full M&C problem space, a graph of the legal states from the initial state (3,3,1,0,0,0) to the goal state (0,0,0,3,3,1), connected by boat-crossing actions.]
The search (or solution) space is the part of the problem space which is actually examined. [Diagram: a partial search tree rooted at (3,3,1,0,0,0); actions such as [0,1,1], [0,2,1] and [1,1,1] lead to successors (3,2,0,0,1,1), (3,1,0,0,2,1) and (2,2,0,1,1,1), with some branches ending in dead ends.]
Depth-first search: always expand the path to one of the nodes at the deepest level of the search tree.
Each path is a list of states on that path, where each state is a list of six elements (m1 c1 b1 m2 c2 b2). Initially, the only path contains only the start state, i.e. ((3 3 1 0 0 0)).
(defun depth-first (start finish &optional (queue (list (list start))))
  (cond ((endp queue) nil)
        ((equal finish (first (first queue)))
         (reverse (first queue)))
        (t (depth-first start finish
                        (append (extend (first queue)) (rest queue))))))
(defun extend (path)
  ;; get-extensions and filter-extensions (not shown) generate the
  ;; successor states and filter out illegal or already-visited ones.
  (setf extensions (get-extensions path))
  (mapcar #'(lambda (new-node) (cons new-node path))
          (filter-extensions extensions path)))
Breadth-first search: always expand all nodes at a given level, before expanding any node at the next level.
(defun breadth-first (start finish &optional (queue (list (list start))))
  (cond ((endp queue) nil)
        ((equal finish (first (first queue)))
         (reverse (first queue)))
        (t (breadth-first start finish
                          (append (rest queue) (extend (first queue)))))))
(The extend function is the same as in depth-first search.)
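The only difference between the two searches is which end of the queue receives the new extensions. A generic Python sketch (names are ours, not from the notes):

```python
# Sketch: breadth-first search over paths, mirroring the Lisp queue
# discipline: new extensions go to the BACK of the queue (depth-first
# would push them to the front).
from collections import deque

def breadth_first(start, finish, successors):
    queue = deque([[start]])          # a queue of paths, as in the notes
    while queue:
        path = queue.popleft()
        if path[-1] == finish:
            return path
        for state in successors(path[-1]):
            if state not in path:     # the filter step: avoid loops
                queue.append(path + [state])
    return None
```

On the small graph used later in these notes, this returns the shortest path from s to f.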
Depth-first vs breadth-first search
Depth-first search
1. Space complexity: O(bd), where b is the branching factor, and d is the depth of the search.
2. Time complexity: O(b^d).
3. Not guaranteed to find the shortest path (not optimal).
4. Not guaranteed to find a solution (not complete).
5. Polynomial space complexity makes it applicable to non-toy problems.
Breadth-first search
1. Space complexity: O(b^d).
2. Time complexity: O(b^d).
3. Guaranteed to find the shortest path (optimal).
4. Guaranteed to find a solution (complete).
5. Exponential space complexity makes it impractical even for toy problems.
Other uninformed search strategies.
Depth-limited search is the same as depth-first search, but a limit is imposed on how deep into a given path the search can go. In the M&C example, we avoided unlimited depth by checking for cycles. If the depth limit is appropriately chosen, depth-limited search is complete, but not optimal. Its time and space complexity are the same as for depth-first search, i.e. O(b^d) and O(bd), respectively.
Iterative deepening is a combination of breadth-first and depth-first searches, where the best depth limit is determined by trying all possible depth limits. Its space complexity is O(bd), which makes it practical for large spaces where loops are possible and depth-first search therefore cannot succeed. It is optimal, i.e. guaranteed to find the shortest path.
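Iterative deepening can be sketched in Python (generic helper names are ours):

```python
# Sketch: iterative deepening = repeated depth-limited search with
# limits 0, 1, 2, ...; space stays O(bd) and the first path found is
# a shortest one.

def depth_limited(node, goal, neighbors, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for n in neighbors(node):
        if n not in path:   # avoid cycles along the current path
            found = depth_limited(n, goal, neighbors, limit - 1, path + [n])
            if found:
                return found
    return None

def iterative_deepening(start, goal, neighbors, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, neighbors, limit, [start])
        if result:
            return result
    return None
```

The cost of re-expanding shallow levels is modest, since in a tree with branching factor b most nodes sit at the deepest level anyway.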
Bi-directional search is initiated simultaneously from the initial state and the goal state, in the hope that the two paths will eventually meet. It is complete and optimal, but its time and space efficiencies are still exponential, i.e. O(b^(d/2)).
Informed search strategies: best-first “greedy” search
Best-first search always expands the node that is believed to be the closest to the goal state. This is defined by means of the selected evaluation function.
Example: consider the following graph whose nodes are represented by means
of their property lists:
(setf (get 's 'neighbors) '(a d)
(get 'a 'neighbors) '(s b d)
(get 'b 'neighbors) '(a c e)
(get 'c 'neighbors) '(b)
(get 'd 'neighbors) '(s a e)
(get 'e 'neighbors) '(b d f)
(get 'f 'neighbors) '(e))
(setf (get 's 'coordinates) '(0 3)
(get 'a 'coordinates) '(4 6)
(get 'b 'coordinates) '(7 6)
(get 'c 'coordinates) '(11 6)
(get 'd 'coordinates) '(3 0)
(get 'e 'coordinates) '(6 0)
(get 'f 'coordinates) '(11 3))
To see the description of a node, we can say:
* (describe 'a)
........
Property: COORDINATES, Value: (4 6)
Property: NEIGHBORS, Value: (S B D)
To find how close a given node is to the goal, we can use the formula computing the straight-line distance between the two nodes:
(defun distance (node-1 node-2)
  (let ((coordinates-1 (get node-1 'coordinates))
        (coordinates-2 (get node-2 'coordinates)))
    (sqrt (+ (expt (- (first coordinates-1)
                      (first coordinates-2))
                   2)
             (expt (- (second coordinates-1)
                      (second coordinates-2))
                   2)))))
Given two partial paths, which one's final node is closer to the goal can be determined by means of the following closerp predicate:
(defun closerp (path-1 path-2 finish)
  (< (distance (first path-1) finish)
     (distance (first path-2) finish)))
The best-first search now means “expand the path believed to be the closest to the goal”, i.e.
(defun best-first (start finish &optional (queue (list (list start))))
  (cond ((endp queue) nil)
        ((equal finish (first (first queue))) (reverse (first queue)))
        (t (best-first start finish
                       (sort (append (extend (first queue)) (rest queue))
                             #'(lambda (p1 p2) (closerp p1 p2 finish)))))))
(defun extend (path)
  (mapcar #'(lambda (new-node) (cons new-node path))
          (remove-if #'(lambda (neighbor) (member neighbor path))
                     (get (first path) 'neighbors))))
A* search: a combination of the best-first greedy search and uniform-cost search
Uniform-cost search takes into account the path cost, and always expands the lowest-cost node. Assume that this path cost is g(n).
Best-first search expands the node which is believed to be the closest to the goal. Assume that the estimated cost to reach the goal from this node is h(n).
A* search always expands the node with the minimum f(n), where
f(n) = g(n) + h(n).
We assume here that f(n) never decreases, i.e. f(n) is a monotonic function. Under this condition, A* search is both optimal and complete.
A* is hard to implement because any time a shorter path between the start node and any node is found, A* must update the cost of paths going through that node.
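A Python sketch of A* (the notes give no A* code; all names here are ours): a priority queue keyed on f(n) = g(n) + h(n) handles the cost-update issue by simply re-inserting a node whenever a cheaper path to it is found. We illustrate it on the graph from the best-first example, with straight-line distance as both the edge cost and the heuristic.

```python
# Sketch of A*: expand the frontier entry with minimum f = g + h.
import heapq
import math

def a_star(start, goal, neighbors, cost, h):
    # frontier entries: (f, g, node, path)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for n in neighbors(node):
            g2 = g + cost(node, n)
            if g2 < best_g.get(n, float('inf')):
                best_g[n] = g2   # shorter path to n found: re-queue it
                heapq.heappush(frontier, (g2 + h(n), g2, n, path + [n]))
    return None, float('inf')

# The graph and coordinates from the best-first example above.
graph = {'s': ['a', 'd'], 'a': ['s', 'b', 'd'], 'b': ['a', 'c', 'e'],
         'c': ['b'], 'd': ['s', 'a', 'e'], 'e': ['b', 'd', 'f'], 'f': ['e']}
xy = {'s': (0, 3), 'a': (4, 6), 'b': (7, 6), 'c': (11, 6),
      'd': (3, 0), 'e': (6, 0), 'f': (11, 3)}

def dist(p, q):
    return math.dist(xy[p], xy[q])

path, g = a_star('s', 'f', lambda n: graph[n], dist, lambda n: dist(n, 'f'))
```

Since the straight-line distance never overestimates the true cost (and is monotonic here), the first time the goal is popped the path is optimal.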
The 5-puzzle problem (a downsized version of the 8-puzzle problem)
Here is an example of the 5-puzzle problem:
Initial state:    Goal state:
4 3 2             1 2 3
1 5 0             4 5 0
Consider the following representation:
– Initial state description: (4 3 2 1 5 0)
– Possible moves: move the empty “0” tile up, down, left or right depending on its current position.
– Goal state description: (1 2 3 4 5 0)
The problem space contains 6! = 720 different states (for the 8-puzzle, it is 9! = 362,880 different states). However, assuming a branching factor of 2 and a typical solution length of about 15, exhaustive search would generate about 2^15 = 32,768 states (for the 8-puzzle, these numbers are: branching factor 3, typical solution about 20 steps, or 3^20 = 3.5 * 10^9 states).
Solving the 5-puzzle problem
We shall compare the following searches for solving the 5-puzzle problem (some of this comparison will be done by you as part of homework 2):
1. Breadth-first search (as it is guaranteed to find the shortest path given enough time and space).
2. Best-first search with 2 admissible heuristic functions:
– Number of tiles out of place (or the equivalent one – number of tiles in place).
– Manhattan distance. It computes the distance of each tile from its final place, i.e. the distance between the tile's current and final position in the horizontal direction plus the distance in the vertical direction.
3. Depth-limited search (similar to depth-first, but the maximum path length is limited to prevent infinite paths).
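The two heuristics can be sketched in Python on the notes' flat-list representation of the 2x3 board (function names are ours):

```python
# Sketch: the two admissible 5-puzzle heuristics.
# A state is a flat tuple read row by row; goal = (1, 2, 3, 4, 5, 0).

GOAL = (1, 2, 3, 4, 5, 0)

def tiles_out_of_place(state):
    # Count misplaced tiles; the blank (0) is conventionally not counted.
    return sum(1 for s, g in zip(state, GOAL) if s != g and s != 0)

def manhattan(state):
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)   # the tile's goal position
        # row distance (index // 3) plus column distance (index % 3)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

For the initial state (4 3 2 1 5 0) both heuristics happen to evaluate to 4.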
Notes:
1. The search space for this type of puzzle is known not to be fully interconnected, i.e. it is not possible to get from any state to any other state. Initial states must be carefully selected so that the final state is reachable from the initial state.
2. Best-first search using an admissible heuristic is known to be equivalent to A* search, with all the advantages and disadvantages that follow from this (it may still take exponential time and may involve backtracking), but it is both optimal and complete.
Iterative improvement methods: hill-climbing search
If the current state contains all the information needed to solve the problem, then we try the best modification possible to transform the current state into the goal state.
Example: map search.
(defun hill-climb (start finish &optional (queue (list (list start))))
  (cond ((endp queue) nil)
        ((equal finish (first (first queue))) (reverse (first queue)))
        (t (hill-climb start finish
                       (append (sort (extend (first queue))
                                     #'(lambda (p1 p2) (closerp p1 p2 finish)))
                               (rest queue))))))
The best applications for hill-climbing search are those where the initial state contains all the information needed for finding a solution.
Example: the n-queens problem, where initially all queens are on the board, and they are moved around until no queen attacks any other.
Notice that the initial state is not fixed. We may start with any configuration of n queens, but there is no guarantee that a solution exists for that particular configuration. If a dead end is encountered, we “forget” everything done so far, and re-start from a different initial configuration. That is, the search tree generated so far is erased, and a new search tree is started.
Best-first search vs hill-climbing search
Best-first search
1. Space complexity: O(b^d), because the whole search tree is stored in memory.
2. Time complexity: O(b^d). A good heuristic function can substantially improve this worst case.
3. Greedy search: not complete, not optimal. A* search: complete and optimal if the estimated cost of the cheapest solution through n, f(n), is a monotonic function.
Hill-climbing search
1. Space complexity: O(1), because only a single state is maintained in memory.
2. Time complexity: O(b^d).
3. Not complete, because of the local maxima phenomenon (the goal state is not reached, but no state is better than the current state). Possible improvement: simulated annealing, which allows the algorithm to backtrack from a local maximum in an attempt to find a better continuation.
4. Not optimal.
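A generic simulated-annealing sketch (not from the notes; the names and the geometric cooling schedule are our own choices): a worse neighbor is accepted with probability e^(Δ/T), which shrinks as the temperature T cools, so the search can escape local maxima early on and behaves like pure hill-climbing later.

```python
# Sketch: simulated annealing for maximizing a value function.
import math
import random

def simulated_annealing(state, value, random_neighbor,
                        t0=1.0, cooling=0.995, steps=10000):
    current, temp = state, t0
    best = current
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = value(candidate) - value(current)
        # Always accept improvements; accept worse moves with
        # probability e^(delta / temp).
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        if value(current) > value(best):
            best = current
        temp *= cooling
    return best
```

On a simple unimodal function the algorithm settles on the global maximum once the temperature is low.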
Constraint satisfaction problems
A constraint satisfaction problem is a triple (V, D, C) where:
1. V = {v1, v2, …, vn} is a finite set of variables;
2. D = {d1, d2, …, dm} is a finite set of values for vi ∈ V (i = 1, …, n);
3. C = {c1, c2, …, cj} is a finite set of constraints on the values that can be assigned to different variables at the same time.
The solution of the constraint satisfaction problem consists of defining substitutions for variables from corresponding sets of possible values so as to satisfy all the constraints in C.
Traditional approach: “generate and test” methods or chronological backtracking. But these methods only work on small problems, because they have exponential complexity.
The N-Queens example: the constraint satisfaction approach
The most important question that must be addressed with respect to this problem is how to find consistent column placements for each queen. The solution in the book is based on the idea of "choice sets". A choice set is a set of alternative placements. Consider, for example, the following configuration for N = 4 (one choice set per column):
choice set 1 = {(0,0), (1,0), (2,0), (3,0)}
choice set 2 = {(0,1), (1,1), (2,1), (3,1)}
choice set 3 = {(0,2), (1,2), (2,2), (3,2)}
choice set 4 = {(0,3), (1,3), (2,3), (3,3)}
Notice that in each choice set, choices are mutually exclusive and exhaustive. [Diagram: a 4x4 board showing queen placements drawn from the four choice sets.]
Each solution (legal placement of queens) is a consistent combination of choices, one from each set. To find a solution, we must:
1. Identify choice sets.
2. Use search through the set of choice sets to find a consistent combination of choices (one or all). A possible search strategy, utilizing chronological backtracking, is the following one (partial graph shown):
[Diagram: a search tree over choice sets 1-4; each choice from choice set 1 is extended with the choices (0,1), (1,1), (2,1), (3,1) from choice set 2, and so on, with inconsistent combinations of choices marked X.]
A generic procedure for searching through choice sets utilizing chronological backtracking
The following is a generic procedure that searches through choice sets. When an inconsistent choice is detected, it backtracks to the most recent choice, looking for an alternative continuation. This strategy is called chronological backtracking.
(defun Chrono (choice-sets)
  (if (null choice-sets) (record-solution)
      (dolist (choice (first choice-sets))
        (while-assuming choice
          (if (consistent?)
              (Chrono (rest choice-sets)))))))
Notice that when an inconsistent choice is encountered, the algorithm backtracks to the previous choice it made. This algorithm is not efficient because: (1) it is exponential, and (2) it re-invents contradictions. We shall discuss another approach, called dependency-directed backtracking, which handles this type of search problem in a more efficient way.
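A concrete Python rendering of Chrono for N-queens (the notes' while-assuming, consistent? and record-solution helpers are replaced by our own inline equivalents; choice sets are the row placements for each column):

```python
# Sketch: chronological backtracking over N-queens choice sets.

def consistent(placed):
    # Check the most recent placement against all earlier ones:
    # no shared row, and no shared diagonal.
    (r1, c1), rest = placed[-1], placed[:-1]
    return all(r1 != r2 and abs(r1 - r2) != abs(c1 - c2)
               for r2, c2 in rest)

def chrono(choice_sets, placed=(), solutions=None):
    if solutions is None:
        solutions = []
    if not choice_sets:
        solutions.append(placed)           # record-solution
        return solutions
    for choice in choice_sets[0]:
        assuming = placed + (choice,)      # while-assuming choice
        if consistent(assuming):
            chrono(choice_sets[1:], assuming, solutions)
    return solutions

n = 4
choice_sets = [[(r, c) for r in range(n)] for c in range(n)]
solutions = chrono(choice_sets)
```

For N = 4 this finds the two known solutions; an inconsistent partial placement prunes the whole subtree below it, exactly as in the partial graph above.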
Types of search
In CS, there are at least three overlapping meanings of “search”:
1.
Search for stored data
. This assumes an explicitly described collection of
information (for example, a DB), and the goal is to search for a specified
item. An example of such search is the binary search.
2.
Search for a path to a specified goal
. This suggests a search space
which is not explicitly defined, except for the initial state, the goal state and
the set of operators to move from one state to another. The goal is to find a
path from the initial state to the goal state by examining only a small portion
of the search space. Examples of this type of search are depth-first search, A* search, etc.
3.
Search for solutions
. This is a more general type of a search compared to
the search for a path to a goal. The idea is to efficiently find a solution to a
problem among a large number of candidate solutions comprising the
search space. It is assumed that at least some (but not all) candidate
solutions are known in advance. The problem is how to select a subset of a
presumably large set of candidate solutions to evaluate. Examples of this
type of search are hill-climbing and simulated annealing. Another example
is the Genetic Algorithm (GA) search, which is discussed next.
Genetic Algorithms: another way of searching
for solutions.
The Genetic Algorithm (GA) is an example of the
evolutionary approach
to AI.
The underlying idea is to evolve a population of candidate solutions to a given
problem using operators inspired by natural genetic variation and selection.
Note that evolution is
not
a purposive or directed process; in biology, it seems
to boil down to different individuals competing for resources in the environment.
Some are better than others, and they are more likely to survive and propagate
their genetic material.
In very simplistic terms, we can think of evolution as:
A method of searching through a huge number of possibilities for solutions. In biology, this huge number of possibilities is the set of possible genetic sequences, and the desired outcome is highly fit organisms able to survive and reproduce.
A massively parallel search, where rather than working on one species at a time, evolution tests and changes millions of species in parallel.
Genetic algorithms: basic terminology
Chromosomes: strings of DNA that serve as a “blueprint” for the organism. Relative to GAs, the term chromosome means a candidate solution to a problem and is encoded as a string of bits.
Genes: a chromosome can be divided into functional blocks of DNA, genes, which encode traits, such as eye color. Different settings for a trait (blue, green, brown, etc.) are called alleles. Each gene is located at a particular position, called a locus, on the chromosome. In a GA context, genes are single bits or short blocks of adjacent bits. An allele in a bit string is either 0 or 1 (for larger alphabets, more alleles are possible at each locus).
Genome: if an organism contains multiple chromosomes in each cell, the complete collection of chromosomes is called the organism's genome.
Genotype: a set of genes contained in a genome.
Crossover (or recombination): occurs when two chromosomes bump into one another, exchanging chunks of genetic information and resulting in an offspring.
Mutation: offspring is subject to mutation, in which elementary bits of DNA are changed from parent to offspring. In GAs, crossover and mutation are the two most widely used operators.
Fitness: the probability that the organism will live to reproduce.
Genetic Algorithm search: more definitions
Search space: in a GA context, this refers to a (huge) collection of candidate solutions to a problem with some notion of distance between them. Searching this space means choosing which candidate solutions to test in order to identify the real (best or acceptable) solution. In most cases, the choice of the next candidate solution to be tested depends on the results of the previous tests; this is because some correlation between the quality of neighboring candidate solutions is assumed. It is also assumed that good “parent” candidate solutions from different regions in the search space can be combined via crossover to produce even better offspring candidate solutions.
Fitness landscape: let each genotype be a string of j bits, and the distance between two genotypes be the number of locations at which the corresponding bits differ. Also suppose that each genotype can be assigned a real-valued fitness. A fitness landscape can then be represented as a (j + 1)-dimensional plot in which each genotype is a point in j dimensions and its fitness is plotted along the (j + 1)st axis. Such landscapes can have hills, peaks and valleys. Evolution can be interpreted as a process of moving populations along landscapes in particular ways, and “adaptation” can be seen as movement towards local peaks. In a GA context, crossover and mutation can be seen as ways of moving a population around on the landscape defined by the fitness function.
GA operators
The simplest genetic algorithms involve the following three operators:
Selection: this operator selects chromosomes in the population according to their fitness for reproduction. Some GAs use a simple function of the fitness measure to select individuals to undergo genetic operations. This is called fitness-proportionate selection. Other implementations use a model in which certain randomly selected individuals in a subgroup compete and the fittest is selected. This is called tournament selection.
Crossover: this operator randomly chooses a locus and exchanges the subsequences before and after that locus between two chromosomes to create two offspring. For example, consider chromosomes 11000001 and 00011111. If they cross over after their fourth locus, the two offspring will be 11001111 and 00010001.
Mutation: this operator randomly flips some of the bits in a chromosome. For example, if mutation occurs at the second bit in chromosome 11000001, the result is 10000001.
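The crossover and mutation operators can be sketched in Python on bit strings (function names are ours); the crossover case below reproduces the 11000001 / 00011111 example above:

```python
# Sketch: single-point crossover and per-bit mutation on bit strings.
import random

def crossover(p1, p2, locus):
    # Exchange the subsequences after the given locus.
    return p1[:locus] + p2[locus:], p2[:locus] + p1[locus:]

def mutate(chrom, pm):
    # Flip each bit independently with probability pm.
    return ''.join(b if random.random() >= pm else '10'[int(b)]
                   for b in chrom)
```

With a mutation rate of 0 a chromosome passes through unchanged; with a rate of 1 every bit is flipped.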
A simple genetic algorithm
The outline of a simple genetic algorithm is the following:
1. Start with a randomly generated population of n j-bit chromosomes.
2. Evaluate the fitness of each chromosome.
3. Repeat the following steps until n offspring have been created:
a. Select a pair of parent chromosomes from the current population based on their fitness.
b. With probability pc, called the crossover rate, cross over the pair at a randomly chosen point to form two offspring. If no crossover occurs, the two offspring are exact copies of their respective parents.
c. Mutate the two offspring at each locus with probability pm, called the mutation rate, and place the resulting chromosomes in the new population. If n is odd, one member of the new population is discarded at random.
4. Replace the current population with the new population.
5. Go to step 2.
Each iteration of this process is called a generation. It is typical for a GA to produce between 50 and 500 generations in one run of the algorithm. Since randomness plays a large role in this process, the results of two runs are different, but each run typically produces one or more highly fit chromosomes at the end.
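The five steps above can be sketched as one Python loop (ours, not from the notes), using the number of ones as the fitness function and roulette-wheel (fitness-proportionate) selection; the parameter names pc and pm follow the notes:

```python
# Sketch: a simple GA with roulette-wheel selection on bit strings.
import random

def fitness(chrom):
    return chrom.count('1')

def select(population):
    # Roulette wheel: slice size proportional to fitness (the small
    # epsilon avoids a zero total when every chromosome has fitness 0).
    weights = [fitness(c) + 0.01 for c in population]
    return random.choices(population, weights=weights)[0]

def next_generation(population, pc=0.7, pm=0.001):
    j, new = len(population[0]), []
    while len(new) < len(population):
        p1, p2 = select(population), select(population)
        if random.random() < pc:                  # step 3b: crossover
            locus = random.randrange(1, j)
            p1, p2 = p1[:locus] + p2[locus:], p2[:locus] + p1[locus:]
        for child in (p1, p2):                    # step 3c: mutation
            new.append(''.join(b if random.random() >= pm else '10'[int(b)]
                               for b in child))
    return new[:len(population)]                  # drop the extra if n is odd

def ga(n=8, j=8, generations=60):
    population = [''.join(random.choice('01') for _ in range(j))
                  for _ in range(n)]
    for _ in range(generations):                  # steps 4-5: replace, repeat
        population = next_generation(population)
    return max(population, key=fitness)
```

With this “onemax” fitness function, selection pressure drives the population towards the all-ones string over the generations.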
Example
Assume the following:
length of each chromosome = 8,
fitness function f(x) = the number of ones in the bit string,
population size n = 4,
crossover rate pc = 0.7,
mutation rate pm = 0.001
The initial, randomly generated, population is the following:
Chromosome label   Chromosome string   Fitness
A                  00000110            2
B                  11101110            6
C                  00100000            1
D                  00110100            3
Example (cont.): step 3a
We will use fitness-proportionate selection, where the expected number of times an individual is selected for reproduction is equal to its fitness divided by the average of the fitnesses in the population, which is (2 + 6 + 1 + 3) / 4 = 3.
For chromosome A, this number is 2 / 3 = 0.667
For chromosome B, this number is 6 / 3 = 2
For chromosome C, this number is 1 / 3 = 0.333
For chromosome D, this number is 3 / 3 = 1
(0.667 + 2 + 0.333 + 1 = 4)
To implement this selection method, we can use “roulette-wheel sampling”, which gives each individual a slice of a circular roulette wheel proportional to the individual's fitness. Assume that the roulette wheel is spun, and the ball comes to rest on some slice; the individual corresponding to that slice is selected for reproduction. Because n = 4, the roulette wheel will be spun four times. Let the first two spins choose B and D to be parents, and the second two spins choose B and C to be parents.
[Diagram: a roulette wheel with slices for A, B, C and D proportional to their fitnesses.]
Example (cont.): steps 3b and 3c
Step 3b: Apply the crossover operator on the selected parents:
Given that B and D are selected as parents, assume they cross over after the first locus with probability pc to form two offspring, say E = 10110100 and F = 01101110. Assume that B and C do not cross over, thus forming two offspring which are exact copies of B and C.
Step 3c: Apply the mutation operator on the offspring:
Each offspring is subject to mutation at each locus with probability pm. Let E be mutated at the sixth locus to form E' = 10110000, and offspring B be mutated at the first locus to form B' = 01101110.
The new population now becomes:
Chromosome label Chromosome string Fitness
E’ 10110000 3
F 01101110 5
C 00100000 1
B’ 01101110 5
Note that the best string, B, with fitness 6 was lost, but the average fitness of the population increased to (3 + 5 + 1 + 5) / 4 = 3.5. Iterating this process will eventually result in a string with all ones.