The following points highlight the three main types of search methods used in artificial intelligence. The types are: 1. Incremental Heuristic Search 2. Constraint Satisfaction 3. Means-End-Analysis.

Type # 1. Incremental Heuristic Search:

Incremental search has been studied at least since the late 1960s. Incremental search algorithms reuse information from previous searches to speed up the current search and solve search problems potentially much faster than solving them repeatedly from scratch.

Similarly, heuristic search has been studied at least since the late 1960s. Heuristic search algorithms, often based on A*, use heuristic knowledge in the form of approximations of the goal distances to focus the search and solve search problems potentially much faster than uninformed search algorithms.

Incremental heuristic search algorithms combine both incremental and heuristic search to speed up searches of sequences of similar search problems, which is important in domains which are only incompletely known or change dynamically.


The resulting search problems, sometimes called dynamic path planning problems, are graph search problems where paths have to be found repeatedly because the topology of the graph, its edge costs, the start vertex or the goal vertices change over time.

So far, three main classes of incremental heuristic search algorithms have been developed:

1. The first class restarts A* at the point where its current search deviates from the previous one (for example, Fringe Saving A*).

2. The second class updates the h-values from the previous search during the current search to make them more informed (for example, Generalized Adaptive A*).


3. The third class updates the g-values from the previous search during the current search to correct them when necessary, which can be interpreted as transforming the A* search tree from the previous search into the A* search tree for the current search (for example, Lifelong Planning A*, D* and D* Lite).

All three classes of incremental heuristic search algorithms are different from other re-planning algorithms, such as planning by analogy, in that their plan quality does not deteriorate with the number of re-planning episodes.
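As a concrete illustration of the second class, the following Python sketch runs an ordinary A* search and then makes the h-values of the expanded states more informed, in the spirit of Adaptive A*: after a search that reaches the goal with cost g(goal), every expanded state s can be given the estimate h(s) = g(goal) - g(s), which is at least as informed as before and remains admissible. The helper names and the toy graph are this sketch's own assumptions, not taken from any particular published implementation.

```python
import heapq

def a_star(graph, start, goal, h):
    """Plain A* over a dict graph {node: [(neighbour, cost), ...]}.
    Returns (g-values of the expanded nodes, cost to goal) or (g-values, None)."""
    g = {start: 0}
    closed = set()
    frontier = [(h.get(start, 0), start)]
    while frontier:
        _, s = heapq.heappop(frontier)
        if s in closed:
            continue
        closed.add(s)
        if s == goal:
            return {n: g[n] for n in closed}, g[s]
        for t, cost in graph.get(s, []):
            new_g = g[s] + cost
            if new_g < g.get(t, float("inf")):
                g[t] = new_g
                heapq.heappush(frontier, (new_g + h.get(t, 0), t))
    return {n: g[n] for n in closed}, None

def update_heuristic(h, expanded_g, goal_cost):
    """Adaptive-A*-style update: h(s) = g*(goal) - g(s) for every expanded state s.
    The new values are at least as informed as the old ones and remain admissible."""
    for s, gs in expanded_g.items():
        h[s] = max(h.get(s, 0), goal_cost - gs)

# Usage: search the same graph twice; the second search starts with better h-values.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
h = {}                                   # start with the zero heuristic
expanded, cost = a_star(graph, "A", "D", h)
if cost is not None:
    update_heuristic(h, expanded, cost)  # reuse information from the first search
print(h)                                 # {'A': 3, 'B': 2, 'C': 1, 'D': 0}
```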

Applications:

Incremental heuristic search has been used extensively in robotics, where a large number of path-planning systems are based on either D* (typically the earlier systems) or D* Lite (more recent systems), two different incremental heuristic search algorithms.

Type # 2. Constraint Satisfaction:


This is another type of search method, one which requires more domain knowledge than the other search methods; those are also called weak methods in the sense that they require only a weak understanding of the domains to which they are applied. However, 'weak' should not be taken to mean low-powered.

Example: Magic Square:

Consider the magic square shown in Fig. 4.19, written as a 3 × 3 matrix (Fig. 4.20), whose entries m11…m33 must satisfy the following constraints: the entries are the distinct digits 1 to 9, and every row, column and diagonal adds up to 15, giving eight relations:

m11 + m12 + m13 = 15, m21 + m22 + m23 = 15, m31 + m32 + m33 = 15 (rows)

m11 + m21 + m31 = 15, m12 + m22 + m32 = 15, m13 + m23 + m33 = 15 (columns)

m11 + m22 + m33 = 15, m13 + m22 + m31 = 15 (diagonals)

Constraint conditions of the 3 × 3 matrix.

These constraints are arithmetic relations. Another example, from logic, is: find x, y, z such that (¬x ∨ y) ∧ (x ∨ ¬y ∨ ¬z) is true. This is a logic formula to be satisfied under logical constraints.
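A tiny generate-and-test sketch in Python makes the logical example concrete (using the formula as reconstructed above): a solution is simply any assignment of truth values that makes the formula true.

```python
from itertools import product

# Enumerate all truth assignments and keep those satisfying the reconstructed constraint.
solutions = [
    (x, y, z)
    for x, y, z in product([False, True], repeat=3)
    if (not x or y) and (x or not y or not z)
]
print(solutions)   # every (x, y, z) making the formula true
```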

Constraints can be used to:


1. Check the correctness of a partial solution and hence prune the fruitless branches of the search tree (reducing the breadth of the search tree).

2. Calculate some parameters from others; for example, if we know m11 and m12, then m13 can be calculated from one of the above relations, that is:

m13 = 15 – m11 – m12

3. Choose which parameters to fix next.

The first two uses can be realised by a general software method called constraint propagation. It is the third use which is ultimately the most powerful.
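The first two uses can be sketched in a few lines of Python. The sketch below assumes the sum-to-15 relations listed above; the cells are held in a flat list m[0..8] (an indexing convention of this sketch, not of the text). Whenever a row, column or diagonal has exactly one unknown entry, that entry is computed (use 2), and a completed line that does not add up to 15, or a derived value outside 1 to 9 or already used, causes the partial solution to be rejected (use 1).

```python
# The eight "lines" of the 3 x 3 square, as index triples into a flat list m[0..8].
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),       # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),       # columns
         (0, 4, 8), (2, 4, 6)]                  # diagonals

def propagate(m):
    """m is a list of 9 entries, None where unknown.
    Returns the propagated list, or None if an inconsistency is detected."""
    changed = True
    while changed:
        changed = False
        for line in LINES:
            vals = [m[i] for i in line]
            unknown = [i for i in line if m[i] is None]
            if not unknown:
                if sum(vals) != 15:              # use 1: prune an inconsistent partial solution
                    return None
            elif len(unknown) == 1:              # use 2: compute a parameter from the others
                v = 15 - sum(x for x in vals if x is not None)
                if v < 1 or v > 9 or v in m:     # out of range or already used elsewhere
                    return None
                m[unknown[0]] = v
                changed = True
    return m

# Example: knowing m11, m12 and m22 is enough to force the rest of the square.
partial = [2, 7, None,
           None, 5, None,
           None, None, None]
print(propagate(partial))
# [2, 7, 6, 9, 5, 1, 4, 3, 8]
```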

The constraints can be of two types:

1. Those given as part of the problem domain and

2. Those which are guessed and may be changed later, using backtracking.

Propagation of Constraints:

The constraints travel downward. Some heuristics are again needed; for this purpose we exploit the relationships between the constraints: one parameter may be involved in many relations, and many parameters may be involved in one relation, as is clear from the above eight relations between the different parameters of the magic square (Fig. 4.19).

A general heuristic in such a case is to fix first the parameter which will help complete the most constraints, for example the parameter which is involved in the most relations; here that means selecting m22, which is involved in four relations (its row, its column and both diagonals).

The downward propagation of constraints can also be driven by the inferences contained in (the right-hand sides of) the rules.

If a rule's condition is satisfied, its conclusion gets added to the set of constraints. Propagation terminates when:

(i) a contradiction is detected among the constraints, or

(ii) no further constraints can be inferred. If, however, no progress takes place even after making a guess, then no solution exists.

Algorithm- Constraint Satisfaction:

1. Propagate available constraints. To do this, first set OPEN to the set of all objects which must have values assigned to them in a complete solution. Then do the following until an inconsistency is detected or until OPEN is empty:

i. Select an object OB from OPEN. Strengthen as much as possible the set of constraints which apply to OB.

ii. If this set is different from the set which was assigned the last time OB was examined, or if this is the first time OB has been examined, then add to OPEN all objects which share any constraints with OB.

2. If the union of the constraints discovered above defines a solution, then quit, report solution.

3. If the union of constraints discovered above defines a contradiction, then return failure.

4. If neither of the above actions occurs, then it is necessary to make a guess at something in order to proceed.

To do this, loop until a solution is found or all possible solutions have been eliminated:

i. Select an object whose value is not yet determined and select a way of strengthening the constraint on that object.

ii. Recursively invoke constraint satisfaction with the current set of constraints augmented by the strengthening constraint just selected.

This algorithm is general. To apply it in a particular domain requires two kinds of rules:

1. Rules which define the way constraints may be propagated, and

2. Rules which suggest guesses when guesses are necessary.
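The control structure of this relaxation algorithm can be sketched in Python as follows. The names used here (constraint_satisfaction, propagate_rules, guess_rules, and the representation of the constraints on an object as the set of values still possible for it) are this sketch's own assumptions rather than the text's: propagate_rules plays the role of rule kind 1 and guess_rules of rule kind 2.

```python
def constraint_satisfaction(objects, domains, neighbours, propagate_rules, guess_rules):
    """Sketch of the relaxation algorithm: propagate constraints, then guess and recurse."""
    domains = {v: set(d) for v, d in domains.items()}    # work on a copy
    open_set = set(objects)                              # step 1: OPEN = all objects
    while open_set:
        ob = open_set.pop()
        new_dom = propagate_rules(ob, domains)           # strengthen the constraints on OB
        if not new_dom:                                  # inconsistency detected
            return None
        if new_dom != domains[ob]:                       # constraints on OB changed, so
            domains[ob] = new_dom
            open_set |= neighbours.get(ob, set())        # re-examine objects sharing constraints
    if all(len(d) == 1 for d in domains.values()):       # step 2: constraints define a solution
        return {v: next(iter(d)) for v, d in domains.items()}
    # step 4: guess a value for a still-undetermined object and recurse
    # (step 3 is the failure case: every guess below leads to a contradiction)
    var = min((v for v in objects if len(domains[v]) > 1), key=lambda v: len(domains[v]))
    for guess in guess_rules(var, domains):
        stronger = dict(domains, **{var: guess})
        result = constraint_satisfaction(objects, stronger, neighbours,
                                         propagate_rules, guess_rules)
        if result is not None:
            return result
    return None

# Toy usage: two objects a and b constrained by a + b = 4 and a < b.
objects = ["a", "b"]
domains = {"a": {1, 2, 3}, "b": {1, 2, 3}}
neighbours = {"a": {"b"}, "b": {"a"}}

def propagate_rules(var, doms):
    """Keep only the values of var consistent with some value of the other object."""
    other = "b" if var == "a" else "a"
    keep = set()
    for x in doms[var]:
        for y in doms[other]:
            a, b = (x, y) if var == "a" else (y, x)
            if a + b == 4 and a < b:
                keep.add(x)
                break
    return keep

def guess_rules(var, doms):
    """Guess by fixing var to each of its remaining values in turn."""
    for value in sorted(doms[var]):
        yield {value}

print(constraint_satisfaction(objects, domains, neighbours, propagate_rules, guess_rules))
# {'a': 1, 'b': 3}
```

In the toy usage the propagation alone settles both objects; a real domain would supply its own, richer propagation and guessing rules.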

To illustrate this algorithm (also called the relaxation algorithm) we consider a simple cryptarithmetic problem (Fig. 4.21); Waltz's (1975) algorithm for labelling line drawings, and natural-language understanding, are also examples of constraint satisfaction in a modified form in which no guesses are made.

Consider the puzzle of Fig. 4.21 (the letters and the trace below correspond to the classic cryptarithmetic problem SEND + MORE = MONEY):

Statement of Constraints:

1. Values are to be assigned to the letters from 0 to 9.

2. No two letters should have the same value.

3. If the same letter occurs more than once, it must be assigned the same digit each time.

4. The sum of the digits must be as shown in the problem.

Goal states:

All letters have been assigned a digit in such a way that all above constraints are satisfied.
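Before tracing the hand solution, note that the goal test itself can be written down directly. The short Python sketch below assumes, as the letters in the trace suggest, that the puzzle of Fig. 4.21 is the classic SEND + MORE = MONEY, and adds the usual convention that S and M are not zero; it blindly generates assignments satisfying constraints 1 to 3 and tests constraint 4. The constraint-propagation cycles that follow exist precisely to avoid this kind of blind enumeration.

```python
from itertools import permutations

LETTERS = "SENDMORY"                  # the eight distinct letters of the puzzle

def value(word, digit_of):
    """Numeric value of a word under an assignment of digits to letters."""
    n = 0
    for ch in word:
        n = 10 * n + digit_of[ch]
    return n

# Generate-and-test: every injective assignment (constraints 1-3) is tried,
# and only those for which the addition works out (constraint 4) are kept.
for digits in permutations(range(10), len(LETTERS)):
    digit_of = dict(zip(LETTERS, digits))
    if digit_of["S"] == 0 or digit_of["M"] == 0:      # no leading zeros (usual convention)
        continue
    if value("SEND", digit_of) + value("MORE", digit_of) == value("MONEY", digit_of):
        print(digit_of)               # the unique solution: 9567 + 1085 = 10652
```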

Solution:

It proceeds in cycles; at each cycle two significant things are done:

(a) Constraints are propagated, using rules which correspond to the properties of arithmetic.

(b) A value is guessed for some letter whose value is not yet determined; the choice of letter affects the effort involved in the search.

The results of the first few cycles of processing this example are shown in Fig. 4.22. C1, C2, C3 and C4 are the carries out of the columns, counted from right to left.

I. Cycle:

(i) M = 1 because two single digit numbers plus a carry cannot total more than 19.

(ii) S = 8 or 9, because S + M + C3 must be > 9 to generate a carry; with M = 1 this gives S + 1 + C3 > 9, i.e. S + C3 > 8. Since C3 can be 0 or 1, S > 8 (if C3 = 0) or S > 7 (if C3 = 1); in either case S is 8 or 9.

(iii) O = 0: the column S + M + C3 must produce a carry (that carry is M, which is 1), so S + M + C3 is 10 or 11 and its units digit O can only be 0 or 1. As M is already 1, and no two letters may have the same value, O is zero (0).

(iv) N = E or E + 1 depending on the value of C2. But N cannot have the same value as E, so N = E + 1 and C2 = 1.

(v) In order for C2 to be 1, the sum of N + R + C1 must be greater than 9 so N+R must be greater than 8.

(vi) N + R + C1 cannot be greater than 18, even with a carry in, so it can never reach E + 10 = 19; hence E cannot be 9.

(To recapitulate the third column: E + O + C2 = N; with O = 0 and C2 = 0 this would give E = N, which is not allowed, confirming C2 = 1 and N = E + 1.)

At this point, assuming that no additional constraints can be generated, a GUESS is made.

The value of E is guessed first, since E occurs the maximum number of times (three) and so interacts most with the other letters.

II. Cycle Begins:

Let E = 2 (the next value after 0 and 1, which are already allocated).

(i) So N = 3, because N = E + 1.

As N + R + C1 = E or E + 10, i.e. R + 3 + C1 = 2 or 12, with C1 = 0 or 1:

R + 3 + 0 = 2 gives R = -1, not possible

R + 3 + 1 = 2 gives R = -2, not possible

R + 3 + 0 = 12 gives R = 9, possible

R + 3 + 1 = 12 gives R = 8, possible.

(ii) From the rightmost column, D + E = Y + 10·C1; that is, 2 + D = Y when C1 = 0, and 2 + D = 10 + Y when C1 = 1.

Again assuming that no more constraints can be generated, another GUESS is required, and this time the carry C1 is chosen. Let

C1 = 1

III. Cycle Begins:

(i) With C1 = 1, the rightmost column gives

D + E = Y + 10

2 + D = Y + 10, i.e. D = 8 + Y

and the second column gives

3 + R + 1 = 12

R = 8

(ii) S = 9, because S was either 8 or 9 and 8 has now been allotted to R.

Now follow this guess through. Since D = 8 + Y, D must be 8 or 9; but 8 and 9 have already been allotted to R and S, so C1 cannot be 1.

Now consider C1 = 0:

then 3 + R + 0 = 12, so R = 9 and hence S = 8, and the rightmost column gives 2 + D = Y, with D and Y drawn from the remaining digits. Propagating further, E + O + C2 = 2 + 0 + 1 = 3, so C3 = 0; then S + M + C3 = S + 1 must equal 10 (to give O = 0 with a carry of 1), forcing S = 9, which clashes with R = 9. So C1 cannot be 0 either.

Spelling out the C1 = 1 branch: when D = 8, 2 + 8 = 10 + Y gives Y = 0, which clashes with O = 0; and when D = 9, Y = 1, which clashes with M = 1.

Both values of D therefore lead to conflict, and the search must back up and withdraw an earlier guess.

This contradiction could also have been reached through another kind of argument: earlier S = 8 or 9, from Cycle II R = 8 or 9, and with C1 = 1, D = 8 or 9 as well.

Three letters cannot share two values, so C1 ≠ 1.

Comments on Constraints Propagation:

The rules for constraint propagation should not infer spurious constraints; nor do they need to infer every legal constraint.

Example:

We could have adopted an alternative line of reasoning which establishes C1 = 0 directly.

We could have argued as follows:

Let C1 = 1;

then E + D = Y + 10·C1

or E + D = Y + 10

2 + D = Y + 10

so D would be either 8 or 9.

But R is 8 or 9 and S is 8 or 9, so the two values 8 and 9 would have to be shared among the three letters R, S and D.

Two values cannot be shared among three letters, so C1 cannot be 1. If this had been realised earlier, fruitless search could have been avoided; but the constraint propagation rules actually used were not that sophisticated, so some extra search steps were taken. Whether the plain search route takes more or less time than the constraint-based method depends on how long it takes to perform the reasoning required for constraint propagation.

Type # 3. Means-End-Analysis:

This is a special kind of knowledge-rich search, based on operators which transform the state of the world. Given a description of the desired state of the domain (the goal, or end), it works backwards, selecting and using operators which will achieve it (the means). This is not done in a single step; the method works incrementally: in order to apply an operator which achieves the goal state, certain conditions must hold in the previous state, and so the algorithm is applied recursively.

Goals of Means-End-Analysis:

The three primary goals of means-ends analysis are shown in Fig. 4.20.

1. Transform:

Transform object A into object B. This is an AND graph which subdivides the problem into transforming A into an intermediate object and then transforming that intermediate object into the goal object B. The recursive procedure stops when a primitive goal is reached, that is, when there is no difference between A and B (Fig. 4.20).

2. Reduce:

Reduce the difference between object A and object B by modifying object A. The goal of this operation is to transform object A into an object A′ nearer the goal B, by means of a relevant operator Q (Fig. 4.24).

3. Apply:

Apply the operator Q to object A. This is again an AND graph: first the difference between object A and the preconditions required for the operator Q is reduced, giving an intermediate object A′; operator Q is then applied to A′, transforming it into A″, which is closer to the goal B (Fig. 4.22).

States are described in some structured way and moves are performed by the operators. Each operator has a precondition, which constrains the states in which it can be used, and a post-condition, which says which conditions will be true once the operator has been used. In a state described by predicates, the post-condition must say both what becomes true and what ceases to be true in the new state.

As an example, consider a blocks world similar to that used as the domain of the natural-language program SHRDLU. This world (domain) consists of blocks of different shapes which can be piled on top of one another or placed on a table top. An imaginary robot lives in the blocks world and can pick up and move blocks to try to reach any desired state. The initial state can be displayed graphically (Fig. 4.23) or described using predicates (Fig. 4.27):

on_table(blue triangle)

on_top(red triangle, green box)

on_top(blue box, red box)

on_table(green box)

on_table(red box)

Predicate representation of the initial state.

Let the goal be to have a pile of blocks consisting of the blue triangle on top of the red box, with the red box resting on the table top.

Expressed in predicate calculus, the goal is:

on_top(blue triangle, red box) ∧ on_table(red box)

The problem can be tackled with depth-first search, but the number of steps involved is unnecessarily large. Means-ends analysis handles it much more simply. For this purpose four operators are defined, as shown in Table 4.3.

Each operator makes certain facts true which were not true before; the list of these in its post-condition is called the add list. Similarly, some facts which held beforehand become false (untrue); the list of such facts is called the delete list.

To simplify matters there are two operators for picking things up: one to pick a block up from the table, pick_up(A), and the other to pick a block off another block, pick_off(A, B). Similarly, there are two operators for putting things down: put_down(A), which places a block on the table, and put_on(A, B), which places one block on another.

Meaning of Operators:

For example, the first operator, pick_up (A) can be read as:

In order to pick up block A, it must be on the table, it must have nothing on top of it, and there must be nothing in the robot's hand. The other operators can be interpreted similarly.
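Table 4.3 itself is not reproduced here, but one plausible way of writing the four operators down is sketched below in Python, using precondition, add and delete lists over the predicates of the example. The fact names are this sketch's own: clear(X) stands for "nothing is on top of X" (the text's ¬on_top(C, X)) and hand_empty for "nothing is in the robot's hand"; the exact lists in Table 4.3 may differ in detail.

```python
# A plausible rendering of the four operators (precondition / add list / delete list).
# Facts are tuples such as ("on_table", "A") or ("on_top", "A", "B").

def pick_up(a):                      # pick block a up from the table
    return {"pre":    {("on_table", a), ("clear", a), ("hand_empty",)},
            "add":    {("in_hand", a)},
            "delete": {("on_table", a), ("hand_empty",)}}

def pick_off(a, b):                  # pick block a up off block b
    return {"pre":    {("on_top", a, b), ("clear", a), ("hand_empty",)},
            "add":    {("in_hand", a), ("clear", b)},
            "delete": {("on_top", a, b), ("hand_empty",)}}

def put_down(a):                     # put block a down on the table
    return {"pre":    {("in_hand", a)},
            "add":    {("on_table", a), ("hand_empty",)},
            "delete": {("in_hand", a)}}

def put_on(a, b):                    # put block a on top of block b
    return {"pre":    {("in_hand", a), ("clear", b)},
            "add":    {("on_top", a, b), ("hand_empty",)},
            "delete": {("in_hand", a), ("clear", b)}}
```

Under this rendering, applying an operator means removing its delete list from the state and adding its add list, provided the preconditions hold.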

Solution:

The condition on_table(red box) is already true in the current (initial) state (it is present in the goal state), so the difference between the current state and the goal state is:

on_top(blue triangle, red box)

Following means-ends analysis, this difference is matched against the post-condition column of Table 4.3, looking for an operator which reduces it. On scanning the table, the useful operator is found to be

put_on(A, B), that is, put_on(blue triangle, red box).

Now match the preconditions of this operator,

in_hand(A) ∧ ¬on_top(C, B)

with the current state,

on_table(blue triangle) ∧ on_top(red triangle, green box) ∧ on_top(blue box, red box) ∧ on_table(green box) ∧ on_table(red box).

The preconditions now become the goal state; the difference is computed again and a new operator is sought.

In the next stage, the operator put_on(A, B) having turned out to be the useful operator, its preconditions become the new goal; that is, the new goal is

in_hand(A) ∧ ¬on_top(C, B)

Compare the two sub-goals:

The two preconditions on_table(A) and on_top(A, B) are contradictory, hence the two sub-goals in_hand(A) and ¬on_top(C, B) interfere. Hence the solution cannot be determined by pursuing them independently.

This is not always the case: splitting up a problem into sub-problems (divide and conquer) is a powerful solution technique. Even where interference is found, it is often more efficient to produce two interfering sub-plans and then modify them than to work on the whole problem at once.

Algorithm for Means-Ends Analysis:

1. Until the goal is reached or no more procedures are available, repeat steps 2 to 4:

2. Describe the current state, the goal state, and the difference between the two.

3. Use the difference between the current state and goal state, possibly with the description of the current state or goal state, to select a promising procedure.

4. Use the promising procedure and update the current state.

5. If the goal is reached, announce success; otherwise, announce failure.
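The loop above can be sketched in Python for the blocks-world example. Everything below is an illustrative sketch under the operator rendering suggested earlier (clear(X) for "nothing on X", hand_empty for an empty hand), not the book's own procedure: achieve() computes the difference between the goal and the current state, selects a promising operator (here, one whose add list removes a difference fact, preferring the fewest unsatisfied preconditions), recursively makes that operator's preconditions true, applies it, and repeats.

```python
BLOCKS = ["blue_triangle", "red_triangle", "blue_box", "green_box", "red_box"]

def op(name, pre, add, delete):
    return {"name": name, "pre": set(pre), "add": set(add), "delete": set(delete)}

def all_operators():
    ops = []
    for a in BLOCKS:
        ops.append(op("pick_up(%s)" % a,
                      [("on_table", a), ("clear", a), ("hand_empty",)],
                      [("in_hand", a)],
                      [("on_table", a), ("hand_empty",)]))
        ops.append(op("put_down(%s)" % a,
                      [("in_hand", a)],
                      [("on_table", a), ("hand_empty",)],
                      [("in_hand", a)]))
        for b in BLOCKS:
            if a == b:
                continue
            ops.append(op("pick_off(%s, %s)" % (a, b),
                          [("on_top", a, b), ("clear", a), ("hand_empty",)],
                          [("in_hand", a), ("clear", b)],
                          [("on_top", a, b), ("hand_empty",)]))
            ops.append(op("put_on(%s, %s)" % (a, b),
                          [("in_hand", a), ("clear", b)],
                          [("on_top", a, b), ("hand_empty",)],
                          [("in_hand", a), ("clear", b)]))
    return ops

def achieve(goal, state, depth=0):
    """Means-ends analysis: reduce the difference between state and goal.
    Returns (plan, new_state), or (None, None) if no plan is found."""
    if depth > 5:                                   # crude cut-off for the sketch
        return None, None
    plan = []
    for fact in sorted(goal - state):               # the difference: goal facts not yet true
        if fact in state:                           # may have become true meanwhile
            continue
        # promising operators: add list removes this difference; fewest unmet preconditions first
        candidates = sorted((o for o in all_operators() if fact in o["add"]),
                            key=lambda o: (len(o["pre"] - state), o["name"]))
        for o in candidates:
            subplan, substate = achieve(o["pre"], state, depth + 1)  # preconditions become sub-goals
            if subplan is None or not o["pre"] <= substate:
                continue
            state = (substate - o["delete"]) | o["add"]              # apply the operator
            plan += subplan + [o["name"]]
            break
        else:
            return None, None                       # no operator could remove this difference
    return plan, state

initial = {("on_table", "blue_triangle"), ("on_table", "green_box"), ("on_table", "red_box"),
           ("on_top", "red_triangle", "green_box"), ("on_top", "blue_box", "red_box"),
           ("clear", "blue_triangle"), ("clear", "red_triangle"), ("clear", "blue_box"),
           ("hand_empty",)}
goal = {("on_top", "blue_triangle", "red_box"), ("on_table", "red_box")}

plan, final = achieve(goal, frozenset(initial))
print(plan)
# ['pick_off(blue_box, red_box)', 'put_down(blue_box)',
#  'pick_up(blue_triangle)', 'put_on(blue_triangle, red_box)']
```

With the initial and goal states of the example, the sketch prints the four-step plan shown in the comment; on harder instances a naive difference-reduction of this kind can still run into the interfering sub-goals discussed above.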