In this article we will discuss: 1. Introduction to the Full Resolution Rule (Robinson’s Inference Rule) 2. Clausal Form of the Knowledge Base According to Robinson’s Inference Rule 3. Natural Deduction 4. Types of Resolution.

Introduction to the Full Resolution Rule (Robinson’s Inference Rule):

Now that we know about unification, we can properly describe the full version of resolution:
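In its standard textbook form the rule takes two clauses p1 ∨ … ∨ pm and q1 ∨ … ∨ qn and produces:

\[
\frac{p_1 \lor \cdots \lor p_m \qquad q_1 \lor \cdots \lor q_n}
{\mathrm{Subst}\big(\theta,\ p_1 \lor \cdots \lor p_{j-1} \lor p_{j+1} \lor \cdots \lor p_m \lor q_1 \lor \cdots \lor q_{k-1} \lor q_{k+1} \lor \cdots \lor q_n\big)}
\]

where θ = Unify(pj, ¬qk) is the most general unifier of pj and ¬qk.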

This resolves literals pj and qk. Note that we have to add ¬ to qk to make it unify with pj, so it is in fact pj which is the negative literal here. The rule is more general than first-order binary resolution in that it allows an arbitrary number of literals in each clause. Moreover, θ is the most general unifier, rather than an arbitrary unifying substitution.


To use the rule in practice, we first take a pair of sentences expressed as clauses. Then we find two literals, pj and qk, for which we can find a substitution θ to unify pj and ¬qk. Then we take the disjunction of all the literals (in both the sentences) except pj and qk. Finally, we apply the substitution θ to the new disjunction to determine what we have just inferred using resolution.

Example:

Let P = Loves (X, father-of (X)).

Q1 = Likes (X, mother-of (X)),


Q2 = Likes (john, Y),

R = Hates (X, Y)

After unifying Q1 and Q2 we have

Q1 = Q2 = Likes (john, mother-of (john))


where the substitution S is given by

S = {john/X, mother-of (X)/Y}

= {john/X, mother-of (john)/Y}.

The resolvent (P v R) [s] is, thus, computed as follows.


(P v R)[S]

= Loves (john, father-of (john)) v Hates (john, mother-of (john)).
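This unification step can also be sketched in a few lines of Python. The fragment below is only an illustrative sketch with a home-made term representation (terms are tuples, variables are upper-case strings, and the occur check is omitted); it is not taken from any particular library. It computes the unifier of Q1 and Q2 and applies it to P and R:

```python
# Minimal unification sketch (illustrative only; no occur check).
# Variables: upper-case strings ('X', 'Y'); constants: lower-case strings;
# compound terms / atoms: tuples, e.g. ('Likes', 'X', ('mother-of', 'X')).

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def substitute(t, s):
    """Apply the substitution s (a dict var -> term) to the term t."""
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(arg, s) for arg in t)
    return t

def unify(t1, t2, s=None):
    """Return a most general unifier of t1 and t2 extending s, or None."""
    if s is None:
        s = {}
    t1, t2 = substitute(t1, s), substitute(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

P  = ('Loves', 'X', ('father-of', 'X'))
Q1 = ('Likes', 'X', ('mother-of', 'X'))
Q2 = ('Likes', 'john', 'Y')
R  = ('Hates', 'X', 'Y')

S = unify(Q1, Q2)
print(S)                 # {'X': 'john', 'Y': ('mother-of', 'john')}
print(substitute(P, S))  # ('Loves', 'john', ('father-of', 'john'))
print(substitute(R, S))  # ('Hates', 'john', ('mother-of', 'john'))
```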

More Examples of Resolution by Robinson’s Inference Rule:

Example 1:


The example we present here is one to which all AI students should be exposed at some point in their studies. It is the famous “monkey and bananas problem”, another one of those complex real-life problems solvable with AI techniques. We imagine a room containing a monkey, a chair, and some bananas which have been hung from the center of the ceiling, out of reach of the monkey.

If the monkey is clever enough, he can reach the bananas by placing the chair directly below them and climbing on top of the chair. The problem is to use FOPL to represent this monkey-banana world and, using resolution, prove the monkey can reach the bananas.

In creating a knowledge base, it is essential first to identify all relevant objects which will play some role in the anticipated inferences. Where possible, irrelevant objects should be omitted, but never at the risk of incompleteness.

For example, in the current problem, the monkey, bananas, and chair are essential. Also needed is some reference object such as the floor or ceiling to establish the height relationship between monkey and bananas. Other objects such as windows, walls or doors are not relevant.

The next step is to establish important properties of objects, relations between them, and any assertions likely to be needed. These include such facts as the chair is tall enough to raise the monkey within reach of the bananas, the monkey is dexterous, the chair can be moved under the bananas, and so on. Again, all important properties, relations, and assertions should be included and irrelevant ones omitted. Otherwise, unnecessary inference steps may be taken.

The important factors for our problem are described below, and all items needed for the actual knowledge base are listed as axioms. These are the essential facts and rules. Although not explicitly indicated, all variables are universally quantified.

Relevant Factors for the Problem:

Constants:

(floor, chair, bananas, monkey)

Variables:

{x, y, z}

Predicates:

(in_room, tall, dexterous, can_move, can_climb, get_on, close, can_reach, under)

Axioms:

Using the above axioms, a knowledge base can be written down directly in the required clausal form. All that is needed to make the necessary substitutions are the equivalences

and De Morgan’s laws
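In particular, the equivalence used to eliminate implication, together with De Morgan’s laws, can be written as:

\[
P \rightarrow Q \equiv \lnot P \lor Q, \qquad
\lnot (P \land Q) \equiv \lnot P \lor \lnot Q, \qquad
\lnot (P \lor Q) \equiv \lnot P \land \lnot Q.
\]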

It may be noted that clause 13 is not one of the original axioms. It has been added for the proof, as required in refutation resolution proofs.

Clausal Form of Knowledge Base According to Robinson’s Inference Rule:

1. in_room (monkey)

2. in_room (bananas)

3. in_room (chair)

4. tall (chair)

5. dexterous (monkey)

6. can_move (monkey, chair, bananas)

7. can_climb (monkey, chair)

8. ¬close (bananas, floor)

9. ¬can_climb (x, y) ∨ get_on (x, y)

10. ¬dexterous (x) ∨ ¬close (x, y) ∨ can_reach (x, y)

11. ¬get_on (x, y) ∨ ¬under (y, bananas) ∨ ¬tall (y) ∨ close (x, bananas)

12. ¬in_room (x) ∨ ¬in_room (y) ∨ ¬in_room (z) ∨ ¬can_move (x, y, z) ∨ close (z, floor) ∨ under (y, z)

13. ¬can_reach (monkey, bananas)

Resolution Proof:

A proof that the monkey can reach the bananas is summarized below. As can be seen, this is a refutation proof where the statement to be proved (can_reach (monkey, bananas)) has been negated and added to the knowledge base (number 13).

The proof then follows when a contradiction is found (see number 23, below):
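One possible sequence of resolution steps leading to the contradiction is sketched below (the particular parent clauses and substitutions chosen in the original summary may differ in order):

14. get_on (monkey, chair), from 7 and 9 with {monkey/x, chair/y}

15. ¬in_room (monkey) ∨ ¬in_room (chair) ∨ ¬in_room (bananas) ∨ close (bananas, floor) ∨ under (chair, bananas), from 6 and 12 with {monkey/x, chair/y, bananas/z}

16. close (bananas, floor) ∨ under (chair, bananas), from 15 and the unit clauses 1, 3 and 2

17. under (chair, bananas), from 8 and 16

18. ¬get_on (x, chair) ∨ ¬tall (chair) ∨ close (x, bananas), from 11 and 17 with {chair/y}

19. ¬tall (chair) ∨ close (monkey, bananas), from 14 and 18 with {monkey/x}

20. close (monkey, bananas), from 4 and 19

21. ¬dexterous (monkey) ∨ can_reach (monkey, bananas), from 10 and 20 with {monkey/x, bananas/y}

22. can_reach (monkey, bananas), from 5 and 21

23. □ (the empty clause, a contradiction), from 13 and 22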

In performing the above proof, no particular strategy was followed. Clearly, however, good choices were made in selecting parent clauses for resolution. Otherwise, many unnecessary steps might have been taken before completing the proof. Different forms of resolution were used in steps 14 through 23.

Example 2:

The law of Indian Defence states that it is a crime for an Indian to sell the defence software (S/W) Netra, made, say, in India, to hostile nations. The hypothetical country None, an enemy of India, has the S/W Netra, which is suspected to have been sold to it by Colonel George, who is an Indian.

We have to prove that Col. George is a criminal.

Solution:

First we shall represent these facts as first-order definite clauses (a definite clause contains a single positive literal) and then convert the knowledge base into clause form so that Generalized Modus Ponens and resolution can be applied.

1. It is a crime for an Indian to sell Indian software, Netra, to hostile nations:

Indian (x) ∧ Software (y) ∧ Sells (x, y, z) ∧ Hostile (z) → Criminal (x)

2. The hostile country None has the software Netra.

∃x [Owns (None, x) ∧ Netra (x)]

which is transformed into two definite clauses by Existential Elimination, introducing a new constant, say I:

3a. Owns (None, I)

3b. Netra (I)

Netra is (suspected to be) sold to None by Col. George:

Netra (x) ∧ Owns (None, x) → Sells (George, x, None)

Additional knowledge is that Netra is Indian software:

Netra (x) → Software (x)

and also that an enemy of India is taken as ‘hostile’ to India:

Enemy (x, India) → Hostile (x)

4. George is an Indian

Indian (George)

5. The country None, is an enemy of India.

Enemy (None, India)

This KB contains no function symbols, so the inference procedure is simple.

In CNF they are:

(a) ¬Indian (x) ∨ ¬Software (y) ∨ ¬Sells (x, y, z) ∨ ¬Hostile (z) ∨ Criminal (x).

(b) ¬Netra (x) ∨ ¬Owns (None, x) ∨ Sells (George, x, None).

(c) ¬Enemy (x, India) ∨ Hostile (x).

(d) ¬Netra (x) ∨ Software (x).

(e) Owns (None, I).

(f) Netra (I)

(g) Indian (George)

(h) Enemy (None, India)

and the negated goal is ¬Criminal (George).

Resolution works on the principle that KB ⊨ P is proved by showing that KB ∧ ¬P is unsatisfiable, that is, its resolution derivation produces an empty clause. The resolution proof tree is given in Fig. 6.5.

Definite clauses are a suitable normal form for use with Generalized Modus Ponens.

We may notice the “single spine” beginning with the goal clause, resolving against clauses from the knowledge base until the empty clause is generated. This is characteristic of resolution on Horn clause knowledge bases.
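Step by step, one possible form of this spine is (the tree in Fig. 6.5 may arrange the same resolutions slightly differently):

1. Resolving the negated goal ¬Criminal (George) with (a), using {George/x}, gives ¬Indian (George) ∨ ¬Software (y) ∨ ¬Sells (George, y, z) ∨ ¬Hostile (z).

2. Resolving with (g) Indian (George) gives ¬Software (y) ∨ ¬Sells (George, y, z) ∨ ¬Hostile (z).

3. Resolving with (d) gives ¬Netra (y) ∨ ¬Sells (George, y, z) ∨ ¬Hostile (z).

4. Resolving with (b), using {x/y, None/z} and factoring the repeated ¬Netra literal, gives ¬Netra (x) ∨ ¬Owns (None, x) ∨ ¬Hostile (None).

5. Resolving with (f) Netra (I), using {I/x}, gives ¬Owns (None, I) ∨ ¬Hostile (None).

6. Resolving with (e) Owns (None, I) gives ¬Hostile (None).

7. Resolving with (c), using {None/x}, gives ¬Enemy (None, India).

8. Resolving with (h) Enemy (None, India) gives the empty clause, completing the refutation.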

Example 3:

This example makes use of Skolemisation and involves clauses which are not definite clauses. This results in a somewhat more complex proof structure.

The problem in English is as follows:

To prove Vibhuti Killed Lucy from the following K.B:

1. Everyone who loves all animals is loved by someone.

2. Anyone who kills an animal is loved by no one.

3. Gaurav loves all animals.

4. Either Gaurav or Vibhuti killed the cat who is named Lucy.

5. Did Vibhuti kill the cat?

Goal G: to prove that Vibhuti killed Lucy.

Conversion of the given K.B. and the negated goal into first-order logic:

(a) ∀x [∀y Animal (y) → Loves (x, y)] → [∃y Loves (y, x)].

(b) ∀x [∃y Animal (y) ∧ Kills (x, y)] → [∀z ¬Loves (z, x)].

(c) ∀x Animal (x) → Loves (Gaurav, x).

(d) Kills (Gaurav, Lucy) ∨ Kills (Vibhuti, Lucy)

(e) Cat (Lucy)

(f) ∀x Cat (x) → Animal (x).

(g) ¬Kills (Vibhuti, Lucy) (negated goal)

and conversion of each sentence to CNF gives:

(a1) Animal (F(x)) ∨ Loves (G(x), x)

(a2) ¬Loves (x, F(x)) ∨ Loves (G(x), x)

(b) ¬Animal (y) ∨ ¬Kills (x, y) ∨ ¬Loves (z, x)

(c) ¬Animal (x) ∨ Loves (Gaurav, x)

(d) Kills (Gaurav, Lucy) ∨ Kills (Vibhuti, Lucy)

(e) Cat (Lucy)

(f) ¬Cat (x) ∨ Animal (x)

(g) ¬ Kills(Vibhuti, Lucy)

Note that, after conversion, the first sentence gives two clauses, (a1) and (a2).

The inference can be paraphrased in English as:

Suppose Vibhuti did not kill Lucy. We know that either Gaurav or Vibhuti did; thus Gaurav must have. Now Lucy is a cat and cats are animals, so Lucy is an animal. Since anyone who kills an animal is loved by no one, no one loves Gaurav. But in the knowledge base it is given that Gaurav loves all animals, so someone loves him, and we have a contradiction. Therefore Vibhuti killed the cat, Lucy.

Now let us draw the resolution tree given in Fig. 6.6.
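In outline, one possible refutation corresponding to this tree is (the ordering of steps in Fig. 6.6 may differ):

1. Resolving (g) ¬Kills (Vibhuti, Lucy) with (d) gives Kills (Gaurav, Lucy).

2. Resolving (e) Cat (Lucy) with (f) gives Animal (Lucy).

3. Resolving Kills (Gaurav, Lucy) with (b), using {Gaurav/x, Lucy/y}, gives ¬Animal (Lucy) ∨ ¬Loves (z, Gaurav); resolving this with Animal (Lucy) gives ¬Loves (z, Gaurav).

4. Resolving (a1) with (c), unifying Animal (F(x)) with the negative Animal literal of (c), gives Loves (G(x), x) ∨ Loves (Gaurav, F(x)).

5. Resolving this with (a2), with variables standardised apart and x bound to Gaurav, gives Loves (G(Gaurav), Gaurav) after factoring the two identical literals.

6. Finally, resolving Loves (G(Gaurav), Gaurav) with ¬Loves (z, Gaurav), using {G(Gaurav)/z}, gives the empty clause.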

The proof answers the question “Did Vibhuti kill the cat?”

But often we ask more general questions, such as “who killed the cat?”

Resolution can do this with a little more work. The goal is ∃w Kills (w, Lucy), which when negated becomes ¬Kills (w, Lucy) in CNF. Repeating the proof in the above figure with the new negated goal, we obtain a similar proof tree; the substitution {Vibhuti/w} then appears in one of the steps.

Unfortunately, resolution can produce non-constructive proofs for existential goals. For example, ¬Kills (w, Lucy) resolves with Kills (Gaurav, Lucy) ∨ Kills (Vibhuti, Lucy) to give Kills (Gaurav, Lucy), which resolves again with ¬Kills (w, Lucy) to yield the empty clause. Notice that w has two different bindings in this proof, Vibhuti and Gaurav. All that the resolution procedure tells us is that someone, either Vibhuti or Gaurav, killed Lucy.

This type of double binding is not very common, and can be avoided by:

(i) Either restricting the allowed resolution steps so that the query variable w is bound only once in a given proof; then we need to be able to backtrack over the possible bindings.

(ii) Or by adding a special answer literal to the negated goal.

This then becomes ¬Kills (w, Lucy) ∨ Answer (w).

The resolution process then generates an answer only when a clause is produced containing just a single answer literal, Answer (Vibhuti), as in Fig. 6.6. The non-constructive proof would instead produce the clause

Answer(Vibhuti) v Answer(Gaurav)

which is, in fact, not an answer.

Thus, we have seen how the process of deduction can derive new facts (answers) from the given facts (knowledge base). Further, resolution theorem proving can be used to answer yes-no questions such as “Did Vibhuti kill the cat?”

Resolution can also be used to answer fill-in-the-blank type questions, such as “Was the cat killed?” or “Who killed the cat?” Even “When was the cat killed?” can be answered by adding new facts to the knowledge base. But yes-no and fill-in-the-blank questions are not the only kinds of questions which can ever be asked.

It must be mentioned here that the technique of deduction does not contain the answer to all conceivable questions. In general, it is necessary to decide on the kinds of questions which will be asked and to choose a knowledge representation method appropriate for those questions. For example, we might also ask “How was the cat killed?” or “Why was the cat killed?” So the method of resolution is not yet complete and needs further research.

Natural Deduction by Robinson’s Inference Rule:

We have discussed resolution as a potential method of theorem proving but it is not without its limitations.

The first difficulty is that, for resolution to be applicable, sentences in first-order logic need to be converted into clause form. In this conversion the heuristic information contained in the original sentence may be lost, even though the clause form remains logically equivalent. For example, consider the sentence:

All judges who are not crooked are well educated, expressed in predicate logic:

∀x Judge (x) ∧ ¬Crooked (x) → Educated (x)

and after conversion into CNF.

¬Judge (x) ∨ Crooked (x) ∨ Educated (x)

This form appears to say that one can deduce that someone is not a judge by showing that he is not crooked and not educated, and this is not a good way of defining a judge.

Yet another difficulty with resolution as a theorem prover is that people do not think in terms of resolution. In the method of resolution there is no scope for the theorem prover either to give advice or to be advised. There should be some interaction between the human user and the theorem prover, however difficult that may be.

That is, the computer should behave like a human theorem prover, in the sense that machine theorem proving should be similar to human theorem proving.

In other words, natural deduction should look more natural: it should not require conversion of the sentences into clausal form, and the inference rules should appear natural to humans. That is, first-order sentences need not first be reduced to CNF, and the inference rules should appear as natural to the computer as they do to a human being.

For example, from P and (∼P ∨ ∼Q ∨ R) such a rule yields (∼Q ∨ R).

Types of Resolution According to Robinson’s Inference Rule:

Several types of resolution are possible depending on the number and types of parents.

We define a few of these types below:

Binary Resolution:

Binary resolution gets its name from the fact that each parent clause is a disjunction of exactly two literals. Two clauses having complementary literals are combined as disjuncts to produce a single clause after deleting the complementary literals. For example, the binary resolvent of

¬P(x, a) ∨ Q(x) and ¬Q(b) ∨ R(x)

is just

¬P(b, a) ∨ R(b).

The substitution {b/x} was made in the two parent clauses to produce the complementary literals Q(b) and ¬Q(b), which were then deleted from the disjunction of the two parent clauses.

Binary resolution resolves exactly one pair of complementary literals: from P ∨ Q and ∼Q ∨ R we get P ∨ R, which may be written as (P ∨ Q), (∼Q ∨ R) ⊢ (P ∨ R).
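For propositional clauses the operation is easy to sketch in code. The fragment below is only an illustration in our own representation (clauses as sets of literal strings, with '~' marking negation); it omits first-order unification entirely:

```python
# Minimal sketch of propositional binary resolution (illustrative only).
# A clause is a frozenset of literals; a literal is a string, '~' meaning negation.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def binary_resolvents(c1, c2):
    """Return every clause obtainable by resolving c1 and c2 on one complementary pair."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

c1 = frozenset({'P', 'Q'})        # P v Q
c2 = frozenset({'~Q', 'R'})       # ~Q v R
print(binary_resolvents(c1, c2))  # [frozenset({'P', 'R'})], i.e. P v R
```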

Unit Resolution:

Unit resolution is resolution where one of the sentences is a single literal (called a unit clause).

For resolving a unit clause P with any other sentence (clause) such as:

∼ P v ∼ Q v R

yields ∼ Q v R

which is shorter than the other clause ∼P ∨ ∼Q ∨ R. Unit resolution, also called unit preference when used as a strategy, directs the program toward generating unit clauses, which play an important role in automated reasoning. This is an ordering strategy, and when combined with other strategies it can act as a useful heuristic in problem solving.

Subsumption:

This method works on the principle of subsumption: when specific clauses can be inferred from a more general one, the specific clauses are thrown away. This reduces the search space while the set of clauses remains unsatisfiable whenever the original set was.

For example, if P(x) is already in the knowledge base then there is no sense in adding P(a), and even less in adding P(a) ∨ Q(b).

Thus, given the clause C = P(x) ∨ Q(x, y) ∨ P(f(z)), the factor C′ = Cβ = P(f(z)) ∨ Q(f(z), y) ∨ P(f(z)) is obtained with β = {f(z)/x}, which collapses to P(f(z)) ∨ Q(f(z), y).

However, the reduction of the search space may be outweighed by the expense of testing for subsumption.

Unit Resulting Resolution (UR):

A number of clauses are resolved simultaneously to produce a unit clause. All except one of the clauses are unit clauses, and that one clause has exactly one more literal than the total number of unit clauses. For example, resolving the set

{¬MARRIED (x, y) ∨ ¬MOTHER (x, z) ∨ FATHER (y, z),

MARRIED (sue, joe),

¬FATHER (joe, bill)}

where the substitution θ = {sue/x, joe/y, bill/z} is used, results in the unit clause ¬MOTHER (sue, bill).

Linear Resolution:

Suppose two clauses Cl1 and Cl2 are resolved to generate Cl3; then Cl1 and Cl2 are called the parents of Cl3. Now, for i = 1 to n, if Cli is a parent of Cli+1, then the resolution process by which Cln+1 is generated is called linear resolution. When one of the parents in linear resolution always comes from the given set of CNF clauses, we call it linear input resolution.

Linear Input Resolution:

If one of the parents in linear resolution is always from the original set of CNF clauses, we call it linear input resolution. For example, given the set of clauses S = {P ∨ Q, ¬P ∨ Q, P ∨ ¬Q, ¬P ∨ ¬Q}, let C0 = (P ∨ Q). Choosing B0 = ¬P ∨ Q from the set S and resolving this with C0, we obtain the resolvent Q = C1. B1 must now be chosen from S, and the resolvent of C1 and B1 becomes C2, and so on.

Unification and resolution give us one approach to the problem of mechanical inference or automated reasoning, but without some further refinements, resolution can be intolerably inefficient. Randomly resolving clauses in a large set can result in inefficient or even impossible proofs. Typically, the curse of combinatorial explosion occurs. So methods which constrain the search in some way must be used.

When attempting a proof by resolution, one ideally would like a minimally unsatisfiable set of clauses which includes the conjectured clause. A minimally unsatisfiable set is one which becomes satisfiable when any member of the set is omitted.

The reason for this choice is that irrelevant clauses, which are not needed in the proof but which participate in it, give rise to unnecessary resolutions. They contribute nothing toward the proof. Indeed, they can sidetrack the search direction, resulting in a dead end and a loss of resources. Of course, the set must be unsatisfiable, otherwise a proof is impossible.

A minimally unsatisfiable set is ideal in the sense that all its clauses are essential and no others are needed. Thus, if we wish to prove B, we would like to do so with a set of clauses S = {A1, A2, ..., Ak} which becomes minimally unsatisfiable with the addition of ¬B.

Choosing the order in which clauses are resolved is known as a search strategy. While there are many such strategies now available, we define only one of the more important ones, the set-of-support strategy. This strategy separates a set which is unsatisfiable into subsets, one of which is satisfiable.

Set-of-Support Strategy:

Let S be an unsatisfiable set of clauses and T be a subset of S. Then T is a set-of-support for S if S – T is satisfiable. A set-of-support resolution is a resolution of two clauses not both from S – T. This essentially means that, given an unsatisfiable set {A1, ..., Ak}, resolution should not be performed directly among the Ai, as noted above.

It can now easily be shown that if the negation of the theorem (goal) is resolved with the CNF form of the expressions A1 through A3, the resulting expression will be a null clause for a valid theorem. To illustrate this we form pairs of clauses, one of which contains a positive predicate while the other contains the same predicate in negated form.

Thus, by Robinson’s rule, both the negated and the positive predicate drop out, and the values of the variables used for unification are substituted in the resulting expression. The principle of resolution is illustrated in Fig. 6.6 to prove the goal Loves (john, mary).

Double Resolution: Avoid it:

Sometimes more than one literal is present with opposite signs in two CNF clauses.

For instance consider the following two clauses.

p ∨ ¬q ∨ r

and ¬p ∨ q ∨ s

Resolving the above clauses twice (that is, on both pairs of complementary literals at once), we may derive r ∨ s, which is incorrect.

To understand the implication of this, let us represent these rules in the following format:

q → p ∨ r

and p → q ∨ s.

Replacing p in the first implication by the right-hand side of the second, we have

q → q ∨ s ∨ r,

which implies that if q is true then either q or r or s is true; but this does not mean that (r ∨ s) is true.
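A quick brute-force check over all truth assignments makes the point concrete; the sketch below (illustrative only) looks for an assignment that satisfies both original clauses while falsifying r ∨ s:

```python
# Check that r v s is NOT a consequence of (p v ~q v r) and (~p v q v s):
# we search for an assignment satisfying both clauses but falsifying r v s.
from itertools import product

for p, q, r, s in product([True, False], repeat=4):
    clause1 = p or (not q) or r        # p v ~q v r
    clause2 = (not p) or q or s        # ~p v q v s
    if clause1 and clause2 and not (r or s):
        print("counterexample:", dict(p=p, q=q, r=r, s=s))
        break
# prints: counterexample: {'p': True, 'q': True, 'r': False, 's': False}
```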

A simpler but interesting example which illustrates the scope for mistakes in double resolution is given below.

Let us consider the following clauses:

¬p ∨ q

and ¬q ∨ p

Resolving these twice yields a null clause, which is always false. But the above system, comprising {p → q, q → p}, implies only p → p and q → q by the chain rule, which in no way supports the falsehood of the resulting clause; such a resolution should be avoided.

Semi-Decidability of Predicate Calculus:

A logic is called decidable if there exists a method by which we can correctly say whether a given formula is valid or invalid. Readers may remember that validity of a formula means that the formula is satisfied under all possible interpretations.

A sound and complete proof method is able to prove the validity of a formula. But if the formula is invalid, the proof procedure (by the resolution principle or otherwise) may never terminate. This is called semi-decidability. First-order logic is semi-decidable, as it is unable to establish the invalidity of a formula in general.