Ideas for searching in games have led to new algorithms in heuristic search. One such idea is iterative deepening, originally used in a program called Chess 4.5. Rather than searching the game tree up to a fixed depth, the program first generates the tree to a single ply and applies the static evaluation function to the result of each of the possible moves.

The minimax search is then repeated to a depth of two plies, then three plies, and so on. The name "iterative deepening" derives from the fact that on each iteration the tree is searched one level deeper. Fig. 5.18 illustrates the method, which is also called progressive deepening.

At first sight the procedure seems wasteful. Then why pursue it? There are justifiable reasons for this iteration, especially in game-playing programs subject to time constraints (for example, a chess game may be required to be completed in two hours). Since it is impossible to know in advance how long a fixed-depth search will take, because of variations in pruning efficiency and the need for selective search, a program may run out of time.


With iterative deepening, the current search can be aborted at any time, and the best move found by the previous iteration provides invaluable move-ordering constraints. If one move was judged superior to its siblings in a previous iteration, it can be searched first in the next iteration. With effective ordering, the α-β procedure can prune many more branches and the total search time can be decreased drastically. This allows more time for deeper iterations.
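The driver loop described above can be sketched as follows. This is our own minimal illustration, not Chess 4.5's actual code: the toy game tree, the placeholder static evaluation, and the function names are all invented for the example.

```python
# Sketch of iterative deepening for game search with move reordering.
# The game tree, evaluation and names here are illustrative assumptions.

import math

# Toy game tree: internal nodes map move -> subtree, leaves are static scores.
TREE = {
    "a": {"a1": 3, "a2": {"x": 5, "y": 2}},
    "b": {"b1": 6, "b2": {"x": 1, "y": 9}},
}

def evaluate(node):
    # Static evaluation: leaves carry their score; an unexpanded interior
    # node gets a crude placeholder value (average of its children).
    if isinstance(node, dict):
        return sum(evaluate(c) for c in node.values()) / len(node)
    return node

def alphabeta(node, depth, alpha, beta, maximizing):
    if depth == 0 or not isinstance(node, dict):
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in node.values():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: prune remaining siblings
                break
        return value
    else:
        value = math.inf
        for child in node.values():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:          # alpha cut-off
                break
        return value

def iterative_deepening(tree, max_depth):
    moves = list(tree)
    best = None
    for depth in range(1, max_depth + 1):
        # Score each root move to the current depth; sorting by the previous
        # iteration's scores searches the most promising move first, which
        # is what improves alpha-beta pruning on the next, deeper pass.
        scores = {m: alphabeta(tree[m], depth - 1, -math.inf, math.inf, False)
                  for m in moves}
        moves.sort(key=scores.get, reverse=True)
        best = moves[0]
    return best
```

The loop can also be stopped after any completed iteration, returning the best move found so far, which is how a time limit is honoured in practice.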

Chess 4.5’s success with iterative deepening suggested applying the idea to single-agent search, to solve problems like the 8-puzzle. The resulting algorithm, which combines the salient features of depth-first and breadth-first search, is called Depth-First Iterative Deepening (DFID).

Advantages of the Iterative Deepening Idea in Game Searching:


1. It finds the shortest path to the goal state.

2. The maximum amount of memory used by DFID is proportional to the number of nodes in the solution path.

The algorithm also has some disadvantages.

All the iterations except the final one are essentially wasted. This is not a serious problem, because most of the effort during a given iteration occurs at the leaf-node level. Assuming a complete tree, there are about as many leaf nodes at level n as there are total nodes in levels 1 to n. Thus, the work expended during the nth iteration is roughly equal to the work expended during all previous iterations combined.


This means that DFID is only slower than depth-first search by a constant factor. With plain depth-first (DF) search there is no way to know in advance how deep the solution lies in the search space; DFID avoids the problem of choosing a cut-off depth without sacrificing efficiency. In fact, DFID is an optimal algorithm, in terms of time and space, for blind search.
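The DFID procedure can be sketched as below. The graph, the depth bound of 20, and the function names are our own illustrative assumptions; only the scheme of repeated depth-limited searches is from the text.

```python
# Minimal sketch of Depth-First Iterative Deepening (DFID) on an
# invented state graph (the graph and names are assumptions).

GRAPH = {
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}

def depth_limited(path, limit, goal):
    # Ordinary depth-first search, but cut off below `limit` edges.
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in GRAPH.get(node, []):
        if succ not in path:              # avoid cycles on the current path
            found = depth_limited(path + [succ], limit - 1, goal)
            if found:
                return found
    return None

def dfid(start, goal, max_depth=20):
    # Run depth-first search with cut-off 0, 1, 2, ... until the goal is
    # found; the first solution returned is a shortest path (in edges),
    # and memory use stays proportional to the current path length.
    for limit in range(max_depth + 1):
        found = depth_limited([start], limit, goal)
        if found:
            return found
    return None
```

Because each pass stores only the current path, the memory bound of advantage 2 above holds, while the increasing cut-off guarantees the shortest solution of advantage 1.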

This can be proved as follows:

The number of nodes requiring static evaluation at the bottom level of a tree with depth d and effective branching factor b is b^d. With a bit of algebra, it is easy to show that the number of nodes in the rest of the tree is

1 + b + b^2 + … + b^(d−1) = (b^d − 1)/(b − 1)

Thus, the ratio of the number of nodes above the bottom level to the number of nodes at the bottom level is

(b^d − 1)/((b − 1) b^d) ≈ 1/(b − 1)

For b = 16, the number of static evaluations needed to do minimaxing at every level above the bottom is only about one-fifteenth of the static evaluations needed at the bottom level.
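The ratio can be checked numerically. The sketch below is our own illustration; the function name is invented.

```python
# Numeric check of the node-count ratio: for branching factor b and depth d,
# the bottom level holds b**d nodes and all levels above it hold
# (b**d - 1) / (b - 1) nodes together (geometric series).

def interior_to_bottom_ratio(b, d):
    bottom = b ** d
    above = (b ** d - 1) // (b - 1)   # 1 + b + b**2 + ... + b**(d-1)
    return above / bottom

# For b = 16 the ratio is very close to 1/(b - 1) = 1/15, as claimed.
ratio = interior_to_bottom_ratio(16, 5)
```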

Iterative Deepening A*:

Iterative deepening can also be used to improve the performance of heuristic (informed) search like the A* algorithm. The difficulty with A*, its heavy memory requirement for maintaining the lists of search nodes, can be obviated to a great extent with iterative deepening.


Algorithm Iterative Deepening A* (IDA*):

1. Set THRESHOLD to the heuristic evaluation of the start state.

2. Conduct a depth-first search, pruning any branch whose total cost function (g + h′) exceeds THRESHOLD. If a solution path is found during the search, return it.

3. Otherwise, increment THRESHOLD by the minimum amount by which it was exceeded during step 2, and go to step 2.

Like A*, IDA* is guaranteed to find an optimal solution provided h′ is an admissible heuristic. The IDA* search technique is very efficient with respect to space, because it is a depth-first search. IDA* was the first heuristic algorithm to find optimal solution paths for the 15-puzzle within reasonable time and space constraints.
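The three steps of the algorithm can be sketched as below. The small weighted graph and the heuristic table H are invented for illustration; H plays the role of the admissible h′.

```python
# Minimal IDA* sketch on an invented weighted graph; GRAPH, H and the
# function names are our own assumptions, the control flow follows the
# three steps of the algorithm above.

import math

GRAPH = {                      # node -> list of (successor, edge cost)
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 2, "G": 0}   # admissible estimates of cost to G

def search(path, g, threshold):
    # Depth-first search bounded by f = g + h' (step 2); when a branch is
    # pruned, report its f-value so the caller can raise the threshold.
    node = path[-1]
    f = g + H[node]
    if f > threshold:
        return f, None
    if node == "G":
        return f, path
    minimum = math.inf                 # smallest f that exceeded the bound
    for succ, cost in GRAPH[node]:
        if succ not in path:           # avoid cycles on the current path
            t, found = search(path + [succ], g + cost, threshold)
            if found:
                return t, found
            minimum = min(minimum, t)
    return minimum, None

def ida_star(start):
    threshold = H[start]               # step 1: THRESHOLD = h'(start)
    while True:
        t, found = search([start], 0, threshold)
        if found:
            return found, t            # solution path and its cost
        if t == math.inf:
            return None, t             # no solution exists
        threshold = t                  # step 3: raise THRESHOLD minimally
```

Because only the current path is stored, space use is linear in the solution depth, which is the property noted above.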

Moreover, game playing has been a fertile ground for experiments in machine learning. Machine learning and neural networks have been applied to the game of Go, which has also contributed to the field of pattern matching.

Contributions of Thomas C. Schelling and Robert Aumann:

Thomas C. Schelling and Robert Aumann, Nobel prize winners in Economics, laid stress on showing how game theory can be applied to business situations like wage negotiations, trade disputes, price fixing by competitors and even organized crime.

Game theory encourages an intelligent anticipation of what a partner or adversary would do, or is likely to do, in a given set of circumstances, so that one's own action can be modified to maximize gains and minimize losses. Further, game theory can be used to discipline a child or to deal with employees, employers or even neighbours.