
Chapter 3: HEURISTIC SEARCH TECHNIQUES

A framework for describing search methods.


General-purpose search techniques for solving the AI problems
of Chapter 2.
All varieties of heuristic search are
independent of any particular task or problem domain.
When applied to a particular problem, their efficiency depends on
how well they exploit domain-specific knowledge.
Definition: A heuristic is a search technique used in solving
problems. It may or may not converge, but when it does, it tends to
do so quickly. Further, the solution it finds may not be optimal.
The two basic search strategies, DFS and BFS, are not
heuristics. The heuristic techniques covered here are:
Generate and test
Problem reduction
Hill climbing
Constraints satisfaction
Best first search
Means end analysis
1. GENERATE AND TEST: the simplest technique.
It is essentially DFS with backtracking:
an exhaustive, systematic search of the problem space.
ALGORITHM:

1. Generate a possible solution.


2. Test to see whether this is actually a solution by comparing
the endpoint of the chosen path to the set of acceptable
goal states.
3. If solution found, quit, else go to step 1.
Generate and test can also operate by generating solutions
randomly: the British Museum algorithm.
For simple problems, this is reasonable.
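The three steps above can be sketched as follows; the toy goal test (two digits with a given product and sum) is made up for illustration and is not part of the notes:

```python
from itertools import product

def generate_and_test(candidates, is_goal):
    """Systematically generate candidates and test each one (steps 1-3)."""
    for candidate in candidates:      # step 1: generate a possible solution
        if is_goal(candidate):        # step 2: test it against the goal
            return candidate          # step 3: solution found, quit
    return None                       # problem space exhausted

# Toy goal (hypothetical): find digits x, y with x*y == 12 and x+y == 7.
solution = generate_and_test(
    product(range(10), repeat=2),
    lambda p: p[0] * p[1] == 12 and p[0] + p[1] == 7,
)
print(solution)  # (3, 4)
```

Because generation is systematic rather than random, the same call always finds the same (first) solution.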
E.g. consider four six-sided cubes, each face painted in one of four
colors.

Problem: arrange the four cubes in a row such that on each of the
four long sides of the row, one face of each color shows.
--One person can solve this by exhaustive search in a few minutes.
It can be solved even more quickly using a heuristic generate-and-
test procedure:
the color that appears most often on a cube should not face outward;
place such a face so that it abuts the next block.
Generate and test is not good for harder problems.
2. HILL CLIMBING:
Procedure: Simple hill climbing:
1. Evaluate the initial state. If it is also a goal state, quit.
Else continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new
operators left to apply to the current state.
a) Select an operator that has not yet been applied to the
current state and apply it to produce a new state.
b) Evaluate the new state.
i) If it is a goal state, return it and quit.
ii) If it is not a goal state but is better than the current
state, make it the current state ("better" is vague, but this is
the main difference from generate and test);
else, continue in the loop.
The main difference between this and generate and test is the use
of an evaluation function as a way to inject task-specific knowledge
into the control process: the heuristic function.
E.g. for the four-color cubes problem:
the heuristic function is the sum of the number of different colors
showing on each of the four sides of the row.
A solution to the problem will have the value 16.
The set of rules has only one member: pick up a block and rotate it
by 90 degrees in any direction.
Take a starting configuration at random, then apply the rule to get
a new configuration. If the resulting state is better, keep it;
else return to the earlier one.

Continue until a solution is reached.
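The procedure above can be sketched as follows; the one-dimensional landscape f and its two step operators are hypothetical stand-ins for a problem's rules, not the cubes problem itself:

```python
def simple_hill_climbing(initial, operators, evaluate, is_goal):
    """Simple hill climbing: move to the FIRST successor that is
    better than the current state (step 2.b.ii of the procedure)."""
    current = initial
    if is_goal(current):                       # step 1
        return current
    while True:                                # step 2
        improved = False
        for op in operators:                   # step 2.a
            new = op(current)
            if is_goal(new):                   # step 2.b.i
                return new
            if evaluate(new) > evaluate(current):  # step 2.b.ii
                current = new
                improved = True
                break
        if not improved:    # no operator improves: stuck (e.g. local maximum)
            return current

# Hypothetical landscape: maximize f(x) = -(x - 7)**2, peak at x = 7.
f = lambda x: -(x - 7) ** 2
result = simple_hill_climbing(
    0,
    [lambda x: x + 1, lambda x: x - 1],   # the only "rules": step right/left
    f,
    lambda x: f(x) == 0,                  # goal: reached the peak
)
print(result)  # 7
```

On this single-peak landscape the climb always succeeds; the problems below (local maxima, plateaus, ridges) arise on less friendly landscapes.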


PROBLEMS WITH HILL CLIMBING:
a) Local maximum --- solution: backtrack to an earlier node and go
in a different direction.
b) Plateau --- solution: make a big jump in some direction.
c) Ridge --- solution: apply two or more rules before doing the
test, i.e., move in several directions at once.
Disadvantage: hill climbing is local, so it may not give an optimal
solution (much as neural networks can get stuck in local optima).
Advantage: it is not combinatorially explosive.
(E.g. hill-climbing problem: blocks-world tutorial.)
3. BEST FIRST SEARCH: a combination of DFS and BFS.
OR GRAPHS: follow a single path at a time,
but switch whenever a competing path looks more promising than the
current one; promise is judged by a heuristic estimating the cost
of getting to a solution from a given node.
[Figure: a best-first search tree rooted at A. Each node carries a
heuristic value, e.g. A's three children are valued (3), (5), and
(1); at every step the open node with the lowest value is expanded
next, so the search switches between branches as the estimates
change.]
Best-first search can be used for graph traversal as well.

It is a simplification of the A* algorithm.

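A minimal sketch of the strategy, assuming a priority queue keyed on the heuristic value; the graph and its h-values are invented for illustration:

```python
import heapq

def best_first_search(start, successors, h, is_goal):
    """Best-first search: always expand the OPEN node with the lowest
    heuristic estimate h(n), switching paths whenever another node
    looks more promising than the current one."""
    open_list = [(h(start), start)]          # priority queue of (h, node)
    closed = set()
    while open_list:
        _, node = heapq.heappop(open_list)   # most promising open node
        if is_goal(node):
            return node
        if node in closed:
            continue
        closed.add(node)
        for child in successors(node):
            if child not in closed:
                heapq.heappush(open_list, (h(child), child))
    return None

# Hypothetical OR graph with heuristic values in the figure's style.
graph = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': [],
         'D': ['G'], 'E': [], 'F': [], 'G': []}
h = {'A': 6, 'B': 3, 'C': 5, 'D': 1, 'E': 4, 'F': 6, 'G': 0}.get
goal = best_first_search('A', graph.__getitem__, h, lambda n: h(n) == 0)
print(goal)  # G
```

Note that only h(n) is used here; A* additionally adds the cost already spent reaching n, which is why best-first search is called a simplification of it.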

4. PROBLEM REDUCTION: for decomposable problems.


--Generates arcs called AND arcs.
AND-OR GRAPHS:
Goal: acquire a TV set
  OR:            Goal: steal TV
  OR (AND arc):  Goal: earn some money  AND  Goal: buy TV set
5. CONSTRAINT SATISFACTION:
--the goal is to discover some problem state in which certain given
constraints (time, cost, materials) are satisfied,
e.g. a design problem.
Constraint satisfaction uses two kinds of search steps to find a
solution:
1. operating on the list of constraints;
2. operating on the original problem space: propagation of
constraints,
i.e., changes to the list of constraints, using rule patterns
and heuristics.
Consider a cryptarithmetic problem: the solution proceeds in cycles,
and each cycle is a two-step process.
a) Constraints are discovered and propagated as far as possible
throughout the system: apply constraint-inference rules to generate
new constraints.
b) If there is still no solution, search begins. A guess is made and
added as a new constraint to be propagated in turn: apply letter-
assignment rules to perform all assignments required by the current
constraints.
PROBLEM:
    S E N D
+   M O R E
-----------
  M O N E Y

Initial state: a) No two letters have the same value.


b) The constraints of arithmetic hold.
Goal state: every letter has been assigned a digit consistent with
the constraints.
Heuristics used here:
a) If one letter has only two possible values and another has,
say, six, guessing is better for the first one. Or
b) if a letter participates in many constraints, prefer it for
guessing.
SOLVING A CRYPTARITHMETIC PROBLEM (search tree, flattened; the
trace below follows the E=2 branch):

Initial state:  SEND + MORE = MONEY

Propagated constraints:
  M = 1
  S = 8 or 9
  O = 0 or 1 -> O = 0
  N = E or E+1 -> N = E+1
  C2 = 1
  N + R > 8
  E <> 9

Guess E (branches E=2, E=3, E=4, ...). Taking E = 2:
  N = 3
  R = 8 or 9
  2 + D = Y or 2 + D = 10 + Y

Branch C1 = 0:
  2 + D = Y
  N + R = 10 + E
  R = 9
  S = 8, C3 = 0

Branch C1 = 1:
  2 + D = 10 + Y
  D = 8 + Y
  D = 8 or 9
  D = 8 -> Y = 0;  D = 9 -> Y = 1  (conflict: X)

Observations:
1. Constraint-generator rules should not infer spurious constraints.
2. Guess at a variable whose value is highly constrained.
3. Which node to expand next? Here the search follows DFS.
Control must pass between the constraint generator and the problem
solver.
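The puzzle's unique answer can be checked with a brute-force sketch. This is plain generate-and-test, not the constraint-propagation method described above; it does, however, reuse the first propagated constraint, M = 1, to prune the search:

```python
from itertools import permutations

def solve_send_more_money():
    """Brute-force check of SEND + MORE = MONEY. Constraint propagation
    (see the trace above) forces M = 1; the remaining seven letters are
    tried over the other nine digits, generate-and-test style."""
    for digits in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        a = dict(zip('SENDORY', digits), M=1)
        if a['S'] == 0:                       # no leading zero on SEND
            continue
        num = lambda w: int(''.join(str(a[c]) for c in w))
        if num('SEND') + num('MORE') == num('MONEY'):
            return a
    return None

a = solve_send_more_money()
print(sorted(a.items()))
# S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2: 9567 + 1085 = 10652
```

Even pruned, this tries up to 181,440 assignments; the constraint-propagation method of the notes reaches the same answer with far fewer guesses.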
6. MEANS-ENDS ANALYSIS:
All the above search strategies reason either forward or backward;
often, however, a mixture of the two is needed:
--solve the major parts of the problem first, then go back and
solve the small problems that glue the pieces together.
--MEA centers on detecting differences between the current state
and the goal state.
--When a difference is detected, reduce it by choosing an
appropriate operator.
--But the operator may not be applicable to the current state.
--So set up a subproblem of getting to a state in which it can
be applied.
This backward chaining, in which operators are selected and
subgoals are set up to establish the preconditions of operators, is
called operator subgoaling.
The first AI program to exploit MEA was the General Problem Solver
(GPS), 1963-69.
Some differences are easier to reduce than others.
[Figure: two three-disc Tower of Hanoi configurations, (a) and (b),
on pegs A, B, C. For GPS, the differences in (a) and (b) count as
equal, but (b) is better, because disc 3 is harder to move.]
In GPS, operators have three components:
1. Preconditions --- the conditions necessary for the operator to
be applicable.

2. Transformation function --- the actual move.


3. Differences reduced --- the differences the operator reduces.
E.g. move disc i from peg p to peg q:
Preconditions --- no disc smaller than i is on peg p or peg q.
Transformation function --- the actual move.
Differences reduced --- disc i is on peg p, not q.
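The three components can be sketched for the disc-moving operator. The state representation (a tuple of frozensets, one per peg A, B, C) is an assumption for illustration, not GPS's own:

```python
# A minimal sketch of a GPS-style operator "move disc i from peg p to
# peg q". Pegs A, B, C are indices 0, 1, 2; each holds a set of discs.

def precondition(state, i, p, q):
    """Applicability: disc i is on p, and no disc smaller than i
    sits on peg p or peg q."""
    return (i in state[p]
            and all(d > i for d in state[p] if d != i)
            and all(d > i for d in state[q]))

def transform(state, i, p, q):
    """Transformation function: the actual move of disc i from p to q."""
    pegs = [set(s) for s in state]
    pegs[p].remove(i)
    pegs[q].add(i)
    return tuple(frozenset(s) for s in pegs)

def difference_reduced(i, p, q):
    """The difference this operator reduces: disc i on p, not q."""
    return f"{i} on {'ABC'[p]} not {'ABC'[q]}"

start = (frozenset({1, 2, 3}), frozenset(), frozenset())
assert precondition(start, 1, 0, 2)       # move 1AC is applicable
assert not precondition(start, 3, 0, 2)   # move 3AC is not: 1 and 2 block it
print(difference_reduced(3, 0, 2))        # 3 on A not C
```

The failed precondition check for move 3AC is exactly what triggers operator subgoaling in the trace that follows.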
Now suppose that from the initial state GPS tries to reduce the
difference "3 is on A, not C". The move is 3AC: move disc 3 from A
to C.
Its precondition, that discs 1 and 2 be on B, is not met, so
meeting it becomes a subgoal of the current state.
The differences are "1 on A not B" and "2 on A not B"; the move 1AB
has no preconditions, giving next state ((23)(1)( )).
3-disc Tower of Hanoi:
((123)( )( ))
move 1AC -> ((23)( )(1))
move 2AB -> ((3)(2)(1))
move 1CB -> ((3)(12)( ))
move 3AC -> (( )(12)(3))
move 1BA -> ((1)(2)(3))
move 2BC -> ((1)( )(23))
move 1AC -> (( )( )(123))
7 moves = 2^n - 1, where n is the number of discs.
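The 7-move solution and the 2^n - 1 count can be checked with the standard recursive decomposition (a verification, not GPS's method):

```python
def hanoi(n, src, aux, dst):
    """Recursive Tower of Hanoi: move n discs from src to dst via aux."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)      # clear the n-1 smaller discs
            + [f'move {n}{src}{dst}']        # move the biggest disc
            + hanoi(n - 1, aux, src, dst))   # re-stack the smaller discs

moves = hanoi(3, 'A', 'B', 'C')
print(moves)
# ['move 1AC', 'move 2AB', 'move 1CB', 'move 3AC',
#  'move 1BA', 'move 2BC', 'move 1AC']
print(len(moves))  # 7 = 2**3 - 1
```

The move sequence reproduces the list above exactly, and the recurrence T(n) = 2·T(n-1) + 1 with T(0) = 0 gives T(n) = 2^n - 1.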

Generating the operator-difference tree exhaustively:

From the initial state ((123)( )( )), the differences are "1 on A
not C", "2 on A not C", and "3 on A not C", with candidate
operators Move 1AC, Move 2AC, and Move 3AC. (*It is better to try
the bigger disc first.) The unmet preconditions of Move 2AC and
Move 3AC generate the subgoal differences "1 on A not B" and "2 on
A not B", with operators Move 1AB and Move 2AB.

Move   Difference reduced   Operator   Next state
1st    1 on A not C         Move 1AC   ((23)( )(1))
2nd    2 on A not B         Move 2AB   ((3)(2)(1))
3rd    1 on C not B         Move 1CB   ((3)(12)( ))
4th    3 on A not C         Move 3AC   (( )(12)(3))
5th    1 on B not A         Move 1BA   ((1)(2)(3))
6th    2 on B not C         Move 2BC   ((1)( )(23))
7th    1 on A not C         Move 1AC   (( )( )(123))  *goal

The Tower of Hanoi is not a good illustration of GPS. GPS is not
very practical;

it is more of an idea, useful because it introduced matching.
