In this article we will discuss: 1. Search Methods for the Problem of Combinatorial Explosion 2. Uninformed Search Methods 3. Branch and Bound 4. Beam Search Methods 5. Informed Search or Heuristic Search Methods 6. Simulated Annealing Search 7. Comparison.

Search Methods for the Problem of Combinatorial Explosion:

Heuristics help in finding an optimum solution, or at least take us close to it. They are of two types: special-purpose and general-purpose. General-purpose heuristics are described independently of any particular task or problem domain, but can be applied to particular problems as well. Consistent application of heuristics further improves search efficiency.

However, they cannot completely overcome the problem of combinatorial explosion inherent in search techniques. These search methods are: Depth-first, Breadth-first, Generate and Test, Branch and Bound, Beam Search, Hill Climbing, Best-first Search, Problem Reduction, Constraint Satisfaction and Means-end Analysis.

The first two methods have already been dealt with in order to explain the control strategy in the water jug problem and in production systems. They are taken up again in greater detail in order to understand the other methods in the right perspective.

Uninformed Search Methods for the Problem of Combinatorial Explosion:


The state space is commonly identified with a directed graph in which each node is a state and each arc represents the application of an operator transforming a state to a successor state. A solution is a path from a start state to a goal state. Goal states may be defined either explicitly or as the set of states satisfying a given predicate.

The search for a solution is conducted by making explicit just enough of the state- space graph to contain a solution path. If the order in which potential solution paths are considered is arbitrary, using no domain-specific information to judge where the solution is likely to lie, the search is called blind search or uninformed search.

Because of the large proportion of the state space it may explore, blind search is impracticable for non-trivial problems. However, it provides a useful foundation for the understanding of heuristic search techniques.

The two blind-search methods described below differ from one another mainly in the order in which nodes are examined. It is assumed that there is a procedure for finding all the successors of a given node, that is, all the states that can be reached from the current state by a single operator application.


The algorithms also make two other assumptions:

1. The state-space graph is a tree. The implication is that there is only one start state (the root) and that the path from the start node to any other node is unique.

2. Whenever a node is expanded, creating a node for each of its successors, the successor nodes contain pointers back to the parent node. When a goal node is finally generated, the path from the root to the goal can be found easily.

In other words, a blind or uninformed search algorithm is one which uses no information other than the initial state, the search operators and a test for a solution. A blind search proceeds in a systematic way by exploring nodes in a predetermined order or by selecting nodes at random. We would confine ourselves only to systematic search procedures in the discussion of blind search.


We will now present two typical algorithms for generating the state space for search.

These are:

1. Breadth first search.

2. Depth first search.


1. Breadth First Search:

The breadth first search visits the nodes of the tree along its breadth, starting from the level at depth 0 down to the maximum depth. For instance, consider the tree given in Fig. 4.1. Here, the nodes in the tree are traversed following their labels in ascending order.

The algorithm for traversal in a tree by breadth first manner can be presented with a queue as follows:


Algorithm Breadth-First-Search Begin:

(i) Place the starting node in a queue;

(ii) Repeat:

Delete queue to get the front element;

If the front element of the queue = goal,

Return success and stop;

Else do

Begin:

insert the children of the front element,

if they exist, in any order at the rear end of the queue;

End:

Until the queue is empty;

End:

The breadth first search algorithm, presented above, rests on a simple principle.

If the current node is not the goal, add the offspring of the current node in any order to the rear end of the queue and redefine the front element of the queue as the current node. The algorithm terminates when the goal is found.
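The queue-based procedure above can be sketched in Python. The `successors` helper and the dictionary tree representation are illustrative assumptions, not part of the original algorithm:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Visit nodes level by level using a FIFO queue."""
    queue = deque([start])                # (i) place the starting node in a queue
    while queue:                          # until the queue is empty
        front = queue.popleft()           # delete queue to get the front element
        if front == goal:
            return True                   # return success and stop
        queue.extend(successors(front))   # insert children at the rear end
    return False
```

For a tree such as that of Fig. 4.1, `successors` would be a lookup into an adjacency structure, e.g. `lambda n: tree.get(n, [])`.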

Time Complexity:

For the sake of analysis, we consider a tree with equal branching factor b from each node and largest depth d. Since the goal is not located within depth (d − 1), the number of nodes visited in failing searches is given by

1 + b + b^2 + b^3 + … + b^(d−1) = (b^d − 1)/(b − 1)

Further, the first state within the fringe nodes could be the goal. On the other hand, the goal could be the last visited node in the tree. Thus, on an average, the number of fringe nodes visited is given by

(1 + b^d)/2

Consequently, the total number of nodes visited in an average case becomes

(b^d − 1)/(b − 1) + (1 + b^d)/2

≈ b^d(b + 1)/2(b − 1), b >> 1.

Since the time complexity is proportional to the number of nodes visited, therefore, the above expression gives a measure of time complexity.

Space Complexity:

The maximum number of nodes will be placed in the queue when the leftmost node at depth d is inspected for the goal. The queue length in this case becomes b^d. The space complexity of the algorithm, which depends on the queue length, in the worst case is thus of the order of b^d.

2. Depth First Search:

The depth first search generates nodes and compares them with the goal along the largest depth of the tree and moves up to the parent of the last visited node, only when no further node can be generated below the last visited node. After moving up to the parent, the algorithm attempts to generate a new offspring of the parent node.

The above principle is employed recursively at each node of a tree in a depth first search. One simple way to realize the recursion in the depth first search algorithm is to employ a stack. A stack-based realization of the depth first search algorithm is presented below.

Algorithm Depth First Search:

Begin:

1. Push the starting node at the stack,

pointed to by the stack-top;

2. While stack is not empty do

Begin:

Pop stack to get stack-top element;

If stack-top element = goal, return success and stop

Else push the children of the stack-top

element in any order into the stack;

End while;

End.

In the above algorithm, a starting node is placed in the stack, the top of which is pointed to by the stack-top. For examining the node, it is popped out from the stack. If it is the goal, the algorithm terminates, else its children are pushed into the stack in any order.

The process is continued until the stack is empty. The ascending order of nodes, in figure 4.3 represents its traversal on the tree by depth first manner. The contents of the stack at the first few iterations are shown below in figure 4.4. The arrowhead in the figure denotes the position of the stack-top.
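The stack-based procedure can be sketched in Python; as before, the `successors` helper and the tree representation are assumptions for illustration:

```python
def depth_first_search(start, goal, successors):
    """Stack-based depth first search."""
    stack = [start]                        # push the starting node
    while stack:                           # while stack is not empty
        node = stack.pop()                 # pop stack to get stack-top element
        if node == goal:
            return True                    # return success and stop
        stack.extend(successors(node))     # push the children into the stack
    return False
```

Because the last child pushed is the first popped, the search plunges down one branch before backtracking, mirroring the traversal of figure 4.3.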

Space Complexity:

Maximum memory in depth first search is required when we reach the largest depth for the first time. Assuming that each node has a branching factor b, when a node at depth d is examined, the nodes saved in memory are all the unexpanded nodes up to depth d plus the node being examined.

Since at each level there are (b − 1) unexpanded nodes, the total memory required = d(b − 1) + 1. Thus, the space complexity of depth first search is linear in the depth d, unlike breadth first search, where it is exponential in d. This, in fact, is the most useful aspect of depth first search.

Time Complexity:

If we find the goal at the leftmost position at depth d, then the number of nodes examined = (d+1). On the other hand, if we find the goal at the extreme right at depth d, then the number of nodes examined include all the nodes in the tree, which is

1 + b + b^2 + b^3 + … + b^d = (b^(d+1) − 1)/(b − 1)

So, the total number of nodes examined in an average case

= (d + 1)/2 + (b^(d+1) − 1)/2(b − 1)

≈ b(b^d + d)/2(b − 1), b >> 1

This is the average case time complexity of the depth first search algorithm. Since for large depth d the depth first search requires quite a large runtime, an alternative way to solve the problem is by controlling the depth of the search tree. Such an algorithm, where the depth cut-off is increased at each iteration, is called Iterative Deepening Depth First Search, or simply iterative deepening search.

Iterative Deepening Search:

When the initial depth cut-off is one, it generates only the root node and examines it. If the root node is not the goal, then depth cut-off is set to two and the tree up to depth 2 is generated using typical depth first search. Similarly, when the depth cut-off is set to m, the tree is constructed up to depth m by depth first search.

One may thus object that in an iterative deepening search one has to regenerate all the nodes, excluding the fringe nodes, at each new depth cut-off. Since the number of nodes generated by depth first search up to depth h is

(b^(h+1) − 1)/(b − 1)

the total number of nodes expanded in failing searches (with depth cut-offs h = 1, 2, …, d − 1) by an iterative deepening search will be

Σ_(h=1)^(d−1) (b^(h+1) − 1)/(b − 1) ≈ b(b^d − d)/(b − 1)^2, b >> 1

The last pass in the algorithm results in a successful node at depth d, the average time complexity of which by typical depth first search is given by

b(bd +d)/2(b-1)

Thus, the total average time complexity is given by

b(b^d − d)/(b − 1)^2 + b(b^d + d)/2(b − 1)

≈ (b + 1)b^(d+1)/2(b − 1)^2, b >> 1

Consequently, the ratio of average time complexity of the iterative deepening search to depth first search is given by

{(b + 1)b^(d+1)/2(b − 1)^2} : {b^(d+1)/2(b − 1)}

= (b + 1) : (b – 1)

The iterative deepening search thus does not take much extra time when compared to the typical depth first search. The unnecessary expansion of the entire tree by depth first search can thus be avoided by iterative deepening.

A formal algorithm of iterative deepening is presented below:

Algorithm Iterative—Deepening:

Begin:

1. Set current depth cut-off = 1;

2. Put the initial node into a stack pointed to by stack-top;

3. While the stack is not empty and the depth is within the given depth cut-off do

Begin:

Pop stack to get the stack-top element;

if stack-top element = goal, return it and stop

else push the children of the stack-top in any order into the stack;

End While;

4. Increment the depth cut-off by 1 and repeat through step 2;

End:
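The algorithm above can be sketched in Python as a depth-limited search repeated with growing cut-offs. The helper names and the `max_depth` safeguard are assumptions for illustration:

```python
def depth_limited_search(start, goal, successors, cutoff):
    """Depth first search restricted to the given depth cut-off."""
    stack = [(start, 0)]                       # each entry carries its depth
    while stack:
        node, depth = stack.pop()
        if node == goal:
            return True
        if depth < cutoff:                     # expand only within the cut-off
            for child in successors(node):
                stack.append((child, depth + 1))
    return False

def iterative_deepening(start, goal, successors, max_depth=20):
    """Repeat depth-limited search with cut-offs 1, 2, 3, ..."""
    for cutoff in range(1, max_depth + 1):
        if depth_limited_search(start, goal, successors, cutoff):
            return True
    return False
```

Each pass regenerates the shallower levels, but as shown above this costs only a constant factor of about (b + 1)/(b − 1) over a single depth first search.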

It can be shown that Depth First Iterative Deepening is asymptotically optimal, among brute force tree searches, in terms of time, space and length of solution. In fact, it is linear in its space complexity like depth first search and is asymptotically optimal to breadth first search in terms of the number of nodes expanded.

In general iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.

Iterative deepening search is analogous to breadth-first search in that it explores a complete layer of new nodes at each iteration before going to the next layer. It would seem worthwhile to develop an iterative analog to uniform-cost search, inheriting uniform-cost search's guarantee of finding a cheapest solution while avoiding its memory requirements.

The idea is to use increasing path cost limits instead of increasing depth limits. The resulting algorithm called iterative lengthening search, incurs substantial overhead as compared to uniform-cost search.

The breadth first, depth first and the iterative deepening search can be equally used for Generate and Test type algorithms. However, while the breadth first search requires an exponential amount of memory, the depth first search calls for memory proportional to the largest depth of the tree. The iterative deepening, on the other hand, has the advantage of searching in a depth first manner in an environment of controlled depth of the tree.

Generate and Test:

This method works in two modules: the Generator, which creates possible solutions, and the Tester, which evaluates each proposed solution, either accepting or rejecting it. Action may stop when one acceptable solution is found, or may continue until all possible solutions are found.

Algorithm of Generate and Test:

1. Generate a candidate solution.

2. Test the candidate solution.

3. If an acceptable solution is found, announce it and stop.

4. Otherwise, if more candidate solutions can be generated, return to step 1; else announce failure.
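The two modules can be sketched in Python, with the generator supplied as any iterable of candidates and the tester as a predicate; both are assumptions about the problem representation:

```python
def generate_and_test(candidates, acceptable):
    """Generator supplies candidates; the Tester accepts or rejects each."""
    for candidate in candidates:          # generate a candidate solution
        if acceptable(candidate):         # test the candidate solution
            return candidate              # acceptable one found: announce it
    return None                           # no more candidates: announce failure
```

For example, `generate_and_test(range(100), lambda x: x * x > 50)` tests integers in order until the first one whose square exceeds 50.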

Branch and Bound Search Methods for the Problem of Combinatorial Explosion:

Branch and bound is one of the search techniques which can reduce search complexity. This method generates paths one at a time, keeping track of the best path found so far. This value is used as a bound on future candidates.

As paths are created one at a time, the algorithm examines each partially completed path. If the algorithm determines that the best possible extension of a path (the branch) will have greater cost than the bound, it eliminates that partial path and all of its possible extensions. This search technique reduces the number of paths examined from the order of N! to about 1.26^N in the case of the travelling salesman problem.

Depth-First Branch and Bound:

This is an important variation of depth-first search which guarantees an optimal solution. All the variations of depth-first search (Unbounded Depth-first, Bounded Depth-first, Cost Bounded Depth-first search) share the undesirable property that a solution obtained (if at all) is not guaranteed to be optimal.

This is because all these algorithms discontinue search once a solution is found. The branch and bound variants of these algorithms do not, in general, stop searching immediately after finding the first solution.

The algorithms of these depth-first variants in the light of the application of branch and bound are given below:

Unbounded Depth-First Branch and Bound Search [UDFBB(n)]:

Algorithm:

Begin:

if f(n) > cost (solution) then return;

if n is a goal node then

solution := superior (n, solution);

for each successor ni of n do

UDFBB(ni);

return;

end;

Here, superior (m, n) reports the better (in terms of cost) of the two goal nodes m and n.

The most glaring lacuna of UDFBB is that it may blindly go deeper into some unpromising paths and thus generate too many nodes in the worst case.
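A recursive sketch of UDFBB in Python follows. The helpers `goal_test`, `successors` and `step_cost` are hypothetical stand-ins for the problem representation, and f(n) is taken to be simply the path cost accumulated so far, a simplifying assumption:

```python
import math

def dfbb(n, path_cost, goal_test, successors, step_cost, best=None):
    """Unbounded depth-first branch and bound (a sketch of UDFBB).

    `best` holds [bound, best goal found so far]; the bound is the
    cost of the cheapest solution seen, initially infinite.
    """
    if best is None:
        best = [math.inf, None]
    if path_cost >= best[0]:                 # f(n) > cost(solution): prune
        return best
    if goal_test(n):
        best[0], best[1] = path_cost, n      # solution := superior(n, solution)
        return best
    for ni in successors(n):                 # for each successor ni of n
        dfbb(ni, path_cost + step_cost(n, ni),
             goal_test, successors, step_cost, best)
    return best
```

Note that the search does not stop at the first solution; it keeps going, and the shrinking bound prunes ever more of the remaining tree.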

Bounded Depth-First Branch and Bound Search [BDFBB (n, d)]:

Procedure:

Begin:

If (d < 0) or f(n) > cost (solution) then return;

If n is a goal node then

Solution: = superior (n, solution);

For each successor ni of n do, if ni is not an ancestor of n, then BDFBB (ni, d − 1); return;

end;

Uniform-Cost Search:

The breadth-first algorithm can be generalised slightly to solve the problem of finding the cheapest path from the start state to a goal state. A non-negative cost is associated with every arc joining two nodes; the cost of a solution path is then the sum of the arc costs along the path.

The generalised algorithm is called a uniform-cost search. If all arcs have equal cost, the algorithm reduces to breadth-first search. In the uniform-cost algorithm given below, the cost of the arc from node i to node j is denoted by c(i, j). The cost of a path from the start node to any node i is denoted by g(i).

Algorithm of Uniform-Cost Search:

1. Put the start node s on OPEN. If the start node is a goal node, a solution has been found. Otherwise, set g(s) = 0.

2. If OPEN is empty, no solution exists.

3. Select from OPEN node i such that g (i) is minimum. (Resolve ties in favour of a goal node). Move node i from OPEN to CLOSED.

4. If node i is a goal node, a solution has been found.

5. Expand node i. If it has no successors, go to (2).

6. For each successor node j of node i, compute g(j) = g(i) + c(i, j) and place all the successor nodes j in OPEN.

7. Go to (2).
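The steps above can be sketched compactly in Python with a priority queue serving as OPEN. The `successors` helper yielding (child, arc-cost) pairs is an assumption; as in step 3, the goal test is applied when a node is selected from OPEN, which is what guarantees the cheapest path:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """Uniform-cost search with OPEN as a priority queue ordered by g."""
    open_list = [(0, start)]                 # g(s) = 0 for the start node s
    closed = set()
    while open_list:                         # if OPEN is empty, no solution
        g, node = heapq.heappop(open_list)   # select node i with minimum g(i)
        if node == goal:
            return g                         # cheapest path cost to the goal
        if node in closed:
            continue
        closed.add(node)                     # move node from OPEN to CLOSED
        for child, cost in successors(node): # expand node i
            heapq.heappush(open_list, (g + cost, child))
    return None
```

With equal arc costs of 1 this behaves exactly like breadth-first search, as the text notes.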

Beam Search Methods for the Problem of Combinatorial Explosion:

It is a variant of breadth-first search in which search progresses level by level. Unlike breadth-first, however, beam search moves downward only through the best W nodes at each level by applying heuristics; the other nodes are ignored.

Consequently, the number of nodes explored remains manageable, even if there is a great deal of branching and the search is deep. Whenever beam search is used, there are only W nodes under consideration at any depth rather than the exponentially explosive number of nodes as is done in the breadth-first search.

The width of the beam is fixed and, whatever the depth of the tree, the number of alternatives to be scanned is the product of the width of the beam and the depth. The method is explained in Fig. 4.5. This technique keeps the size of the search bounded, often as a function of the number of steps taken in the search.

Figure below illustrates how beam search would handle the map-traversal problem:

The numbers on the nodes are straight-line distances to the goal node G. Expansion spreads through the search tree level by level, but only the best W nodes are expanded; here W = 2. The remaining paths, shown terminated by underbars, are rejected.
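The level-by-level pruning can be sketched in Python. The tree and the heuristic table below are hypothetical stand-ins for the map and straight-line distances of the figure:

```python
def beam_search(start, goal, successors, heuristic, width):
    """Breadth-first expansion keeping only the best `width` nodes per level."""
    level = [start]
    while level:
        next_level = []
        for node in level:
            if node == goal:
                return True                  # goal found at this level
            next_level.extend(successors(node))
        next_level.sort(key=heuristic)       # rank by estimated goal distance
        level = next_level[:width]           # keep the W best; reject the rest
    return False
```

Because pruned branches are discarded permanently, beam search is not guaranteed to find a solution even when one exists; that is the price paid for the bounded memory.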

Local Beam Search:

Keeping just one node in memory might seem to be an extreme reaction to the problem of memory limitations. The local beam search algorithm keeps track of k states rather than just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts; otherwise, it selects the k best successors from the complete list and repeats.

At first sight, a local beam search with k states might seem to be nothing more than running k random restarts in parallel instead of in sequence. In fact, the two algorithms are quite different. In random-restart search, each search process runs independently; in a local beam search, the selection of the k best successors passes useful information among the parallel searches.

Informed Search or Heuristic Search for the Problem of Combinatorial Explosion:

The blind search consists of depth-first and breadth-first. These two techniques search for ‘any’ or ‘a’ solution by brute force. They are generate-and-test type algorithms.

The number of nodes expanded before reaching a solution may become prohibitively large for nontrivial problems. Because the order of expanding the nodes is purely arbitrary and does not use any properties of the problem being solved, too many nodes may be expanded. In such cases we may run out of time or space, or both, in all but the simplest problems (causing combinatorial explosion).

Information about the particular problem domain can often be invoked to reduce the search. The techniques for doing so usually require additional information about the properties of the specific domain which is built into the state and operator definitions. Informed search is the one in which we see how information about the state space can prevent the search from wandering about in the dark.

Information of this sort is called heuristic information, and the search methods using it are called heuristic or informed search. A heuristic function can be defined whose numeric value indicates how good or bad a node is.

The value of the node, ev(n) or ev(n, p), points out whether it forms the best solution state or path, respectively. In a state problem the evaluation function will usually depend on the node itself, ev(n), but in a path problem it will depend on the path to the node, ev(n, p). Heuristic search methods help in deciding:

(a) Which node to expand next, instead of depth/breadth first-type of expansion.

(b) Which successor(s) of the node should be generated, instead of blindly generating all possible successors of that node, thus pruning the search tree.

Two techniques which are available and search for the best solution of a problem based upon an evaluation function are Hill-Climbing and Best-First Search.

Simulated Annealing Search:

A hill climbing algorithm that never makes "downhill" moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. In contrast, a purely random walk, that is, moving to a successor chosen uniformly at random from the set of successors, is complete but extremely inefficient.

Therefore, it seems reasonable to try to combine hill climbing with a random walk in some way which yields both efficiency and completeness. Simulated annealing is such an algorithm.

In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to coalesce into a low-energy crystalline state. To understand simulated annealing, switch the point of view from hill climbing to gradient descent (i.e., minimizing cost): the strategy is to start by shaking the system hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).

The innermost loop of the simulated annealing algorithm is quite similar to hill climbing; instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted; otherwise, the algorithm accepts the move with some probability less than one. The probability decreases exponentially with the "badness" of the move, the amount ΔE by which the evaluation is worsened.

The probability also decreases as the temperature T goes down. “Bad” moves are more likely to be allowed at the start when temperature is high and they become more unlikely as T decreases. We can prove that if the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching one.
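The loop described above can be sketched in Python. The `value`, `neighbour` and `schedule` functions are assumptions about the problem representation; a worsening move (ΔE < 0) is accepted with probability e^(ΔE/T):

```python
import math
import random

def simulated_annealing(state, value, neighbour, schedule):
    """Hill climbing with occasional random downhill moves.

    value(s)     -> score of a state (higher is better)
    neighbour(s) -> a randomly chosen successor of s
    schedule(t)  -> temperature T at step t; T <= 0 stops the search
    """
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return state                       # "frozen": return current state
        nxt = neighbour(state)                 # pick a random move
        delta_e = value(nxt) - value(state)    # gain (positive) or badness
        if delta_e > 0:
            state = nxt                        # improving moves always accepted
        elif random.random() < math.exp(delta_e / T):
            state = nxt                        # bad move accepted with p = e^(dE/T)
        t += 1
```

With a geometric cooling schedule such as `lambda t: 0 if t >= 500 else 2.0 * 0.99 ** t`, bad moves are common early on and essentially never accepted near the end of the run.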

Simulated annealing was first used extensively to solve VLSI layout problems in the early 1980s. It has been applied widely to factory scheduling and other large-scale optimization tasks.

Comparison of Search Methods:

Each search procedure discussed here, has an advantage as discussed below:

1. Non-deterministic (blind) search is good when we are not sure whether depth-first or breadth-first would be better.

2. Depth-first is good when unproductive partial paths are not too long, or if we always prefer to work forward from the most recently generated state.

3. Breadth-first is good when the branching factor is not too large, or if we prefer to work from the states generated earlier.

4. Beam search is good when natural measure of goal distance is available and a good path is likely to be among the partial paths which appear to be good at all levels.

All these methods form part of a family of search procedures, as shown in Fig. 4.13. Thus, we see that all of the weak methods (why are they so called?) can be subsumed by an architecture that reasons with explicit search-control knowledge.

Different methods may be employed for different problems, specific domain knowledge can override the more general strategies.

In large knowledge bases which contain thousands of rules, the intractability of search is an overriding concern. When there are many possible paths of reasoning, it is critical that fruitless ones are not pursued. Knowledge about which paths are most likely to lead quickly to a goal state is often called search control knowledge.

The search methods described so far require little specific knowledge of the domain.