Logic is the scientific study of the process of reasoning. This form of representation uses expressions in formal logic to represent the required knowledge. Inference rules and proof procedures can then be applied to this knowledge to solve specific problems.

We can derive a new piece of knowledge by proving that it is a consequence of knowledge already known. The significant features of the domain can be represented as logical assertions, and general attributes can be expressed using variables in logical statements. It has the advantage of being computable, though in a restricted form.

The idea is summed up in Robert Kowalski’s (1976) equation:

Algorithm = Logic + Control

First Order Logic:


This method of knowledge representation is based on propositional logic, which is declarative and possesses a context-independent, unambiguous semantics, and it builds a more expressive logic on that foundation, borrowing representational ideas from natural language while avoiding its drawbacks. The language of first-order logic is built around objects and relations.

It has been so important to mathematics, philosophy and artificial intelligence precisely because these fields (and indeed, much of everyday human existence) can be usefully thought of as dealing with objects and the relations among them. First-order logic can also express facts about some or all of the objects in the universe. This enables us to represent general laws or rules.

The primary difference between propositional and first order logic lies in the following points:

Propositional logic assumes that there are facts which either hold or do not hold in the world.


Each fact can be in one of two states:

i. True or

ii. False.

First-order logic assumes more: the world consists of objects and certain relations among them which do or do not hold. Special-purpose logics assume still more; temporal logic, for example, assumes that facts hold at particular times and that those times (points or intervals) are ordered. Higher-order logic views the relations and functions referred to by first-order logic as objects in themselves.


This allows us to make assertions about all relations; for example, we could wish to define what it means for a relation to be transitive. Unlike most special-purpose logics, higher-order logic is strictly more expressive than first-order logic, in the sense that some sentences of higher-order logic cannot be expressed by any finite number of first-order sentences.

From the epistemological point of view, a sentence of logic (propositional or first-order) represents a fact which can be true, false or unknown. These logics thus have three possible states of knowledge regarding any sentence.

Systems using probability theory, on the other hand, can have any degree of belief ranging from 0 (total disbelief) to 1 (total belief). (Compare degree of belief in probability theory and degree of truth in fuzzy logic).

Syntax:


The basic syntactic elements of first-order logic are the symbols, which stand for objects, relations and functions.

The symbols therefore come in three kinds:

i. Constant symbols which stand for objects;

ii. Predicate symbols which stand for relations and


iii. Function symbols which stand for functions.

As a convention, these symbols will begin with an uppercase letter.

For example:

Constant symbols: Talwar, Chopra, a, b, c

Predicate symbols: Brother, Married, King, Crown, P, Q, R, …

Function symbols: Left-leg, Right-thigh, f, g, h

As with proposition symbols, the choice of symbols is entirely up to the user. Each predicate and function symbol comes with an arity which fixes the number of arguments.
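As an illustration, the three kinds of symbols and their fixed arities might be sketched in Python; all names here are illustrative, not part of any standard library:

```python
# A minimal sketch of constant, predicate and function symbols with arities.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constant:
    name: str          # e.g. "Talwar", "Chopra" stand for objects

@dataclass(frozen=True)
class Function:
    name: str          # e.g. "Left-leg" maps an object to another object
    arity: int         # the arity fixes the number of arguments

@dataclass(frozen=True)
class Predicate:
    name: str          # e.g. "Brother", "Married" stand for relations
    arity: int

Brother = Predicate("Brother", 2)    # Brother is a binary relation
LeftLeg = Function("Left-leg", 1)    # Left-leg takes exactly one argument
print(Brother.name, Brother.arity)   # prints: Brother 2
```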

Semantics:

Term:

A term is a logical expression which refers to an object. Constant symbols are therefore terms, but it is not always convenient to have a distinct symbol to name every object. For example, in English we can use the expression King Rama’s Crown rather than giving a name to his crown.

This is what function symbols are used for: instead of introducing a new constant symbol, we write Crown (Rama). Quite often a complex term is formed by a function symbol followed by a parenthesized list of terms as arguments to the function symbol. We should keep in mind that a complex term is just a complicated kind of name; it is not a subroutine call which "returns a value".

There is no Crown subroutine which takes a person as input and returns a crown. Thus, a term refers to the object which is the value of the function in the given domain. We are working, say, in the domain of the legendary epic Ramayana; this is the domain, or our world of action, used to explain the syntax of first-order logic.

Atomic Sentences:

After having both terms for referring to objects and predicate symbols for referring to relations, we can put them together to make atomic sentences (well-formed formulas) which state facts. An atomic sentence is formed from a predicate symbol followed by a parenthesized list of arguments, each of which can be a constant, a variable or even a function. For example, Brother (Lakshman, Rama) is a well-formed atomic statement.

This states that Lakshman is the brother of Rama.

Atomic sentences can have complex terms (functions) as arguments, for example:

Married (Father (Rama), Mother (Lakshman)), stating that Rama’s father is married to Lakshman’s mother, in a given domain.

An atomic sentence is true in a given domain, if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.
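This truth condition can be sketched over a tiny finite interpretation; the domain and relation below are assumed purely for illustration:

```python
# An atomic sentence Predicate(args...) is true in an interpretation if the
# tuple of objects named by the arguments is in the relation assigned to the
# predicate symbol.
objects = {"Rama", "Lakshman"}
relations = {"Brother": {("Lakshman", "Rama"), ("Rama", "Lakshman")}}

def holds(predicate, args):
    """True iff the argument tuple is in the relation named by the predicate."""
    return tuple(args) in relations[predicate]

print(holds("Brother", ["Lakshman", "Rama"]))  # True
```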

Complex Sentences:

The same logical connectives as used in propositional logic can be used to construct more complex sentences. The following are sentences formed using connectives (true in an intended situation):

¬ Brother (Crown (Rama), Lakshman)

Brother (Rama, Lakshman) ∧ Brother (Lakshman, Rama)

King (Rama) ∨ King (Ravana)

¬ King (Rama) ⇒ King (Ravana)

The English interpretation of these sentences is left as an exercise for the reader. You can wait for the solution till you become more familiar with translation (interpretation).

Quantifiers:

Once we have accepted that logic allows objects, it is natural to want to express properties of entire collections of objects, instead of enumerating every object by name. Quantifiers allow us to do so. First-order logic contains two standard quantifiers.

The universal quantifier ᗄ (an upside-down A) stands for all and is pronounced "For all". The sentence

ᗄx King (x) ⇒ Person (x)

says that for all values of x, if x is a king then x is a person. The symbol x is called a variable. By convention variables are lowercase letters. A variable is a term all by itself and as such can also serve as the argument of a function, as in Crown (x); a term with no variables is called a ground term. Intuitively, the sentence ᗄx P, where P is a logical expression, says that P is true for every object x.

For example, x can have five interpretations in the context under discussion:

x → King Rama

x → King Ravana

x → Rama’s Crown

x → Ravana’s Crown

x → Crown

The universally quantified sentence ᗄx King (x) ⇒ Person (x) is true under the original interpretation (context) if the sentence

King (x) ⇒ Person (x) is true in each of the above five interpretations.

That is, the universally quantified sentence is equivalent to asserting the following five sentences:

King Rama is a king ⇒ King Rama is a person

King Ravana is a king ⇒ King Ravana is a person

Rama’s Crown is a king ⇒ Rama’s Crown is a person

Ravana’s Crown is a king ⇒ Ravana’s Crown is a person

Crown is a king ⇒ Crown is a person

All five interpretations are allowed; in the last three the antecedent is false (a crown is not a king), so those implications hold vacuously and assert nothing about crowns being persons.

A good illustration is the relationship of grandfather, which can be written as ᗄx, y, z Parent (x, y) ∧ Parent (y, z) ∧ Male (x) ⇒ Grandfather (x, z). This says that for all values of x, y and z, if x is a parent of y, y is a parent of z and x is male, then x is a grandfather of z.
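A universally quantified rule like this can be checked over a small finite domain by brute force; the names and facts in the following sketch are illustrative assumptions:

```python
# Checking Parent(x,y) ∧ Parent(y,z) ∧ Male(x) ⇒ Grandfather(x,z) over a
# finite domain: the implication holds iff no instantiation makes the
# antecedent true and the consequent false.
domain = {"Dasharatha", "Rama", "Lava"}
parent = {("Dasharatha", "Rama"), ("Rama", "Lava")}
male = {"Dasharatha", "Rama", "Lava"}
grandfather = {("Dasharatha", "Lava")}

def rule_holds():
    for x in domain:
        for y in domain:
            for z in domain:
                if (x, y) in parent and (y, z) in parent and x in male:
                    if (x, z) not in grandfather:
                        return False    # a counterexample instantiation
    return True

print(rule_holds())  # True for the facts above
```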

Existential Quantifier ⴺ:

This is pronounced "there exists an object x such that…" or "for some x…".

This quantifier makes a statement about some object in the universe without naming it. The sentence

ⴺx Crown (x) ∧ OnHead (x, Rama)

says that King Rama has a crown on his head.

Intuitively, the sentence ⴺx P says that P is true for at least one object x; more precisely, ⴺx P is true in a given domain (interpretation) if P is true for at least one assignment of x to a domain element.

There is a variant of the existential quantifier, usually written ⴺ!, which means "there exists exactly one".

Nested Quantifiers:

We often want to express more complex sentences using multiple quantifiers. The commutative law of addition for numbers requires two quantifiers, as in

ᗄx ᗄy (x + y = y + x)

This states that for every x and for every y, the sum of x and y equals the sum of y and x. The simplest case is where the quantifiers are of the same type. For example, "Brothers are siblings" can be written as

ᗄx ᗄy Brother (x, y) ⇒ Sibling (x, y)

Consecutive quantifiers of the same type can be written as one quantifier with several variables (x, y). For example, the symmetric relationship of siblinghood can be written as:

ᗄx, y Sibling (x, y) ⇔ Sibling (y, x)

We may also encounter mixtures of quantifiers, for example

ᗄx ⴺy Loves (x, y)

meaning in English that everybody loves somebody. On the other hand, when we want to say that there is someone who is loved by everyone, we write

ⴺy ᗄx Loves (x, y)

Thus, the order of quantification is very important. Similarly, parentheses should be used carefully. ᗄx (ⴺy Loves (x, y)) says that everyone has the property of loving somebody, while ⴺy (ᗄx Loves (x, y)) says that someone has the property of being loved by everybody.
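Over a finite domain the two quantifier orders can be evaluated directly; the sketch below uses an assumed Loves relation chosen so that the two readings come apart:

```python
# ᗄx ⴺy Loves(x,y) vs ⴺy ᗄx Loves(x,y) over a three-element domain.
people = {"a", "b", "c"}
loves = {("a", "b"), ("b", "c"), ("c", "a")}   # a loves b, b loves c, c loves a

# Everybody loves somebody: for every x there is some y with Loves(x, y).
everybody_loves_somebody = all(any((x, y) in loves for y in people)
                               for x in people)

# Someone is loved by everyone: there is some y loved by every x.
someone_loved_by_everyone = any(all((x, y) in loves for x in people)
                                for y in people)

print(everybody_loves_somebody, someone_loved_by_everyone)  # True False
```

Swapping the quantifier order changes the truth value, confirming that the order of quantification matters.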

Some confusion can arise when two quantifiers are used with the same variable.

Consider the sentence:

ᗄx (Crown (x) ∨ ⴺx Brother (Rama, x))

Here, the variable x in Brother (Rama, x) is existentially quantified (bound by ⴺ), whereas earlier in the sentence it was bound to ᗄ through Crown (x). A single variable cannot be bound by two quantifiers at once; the rule is that a variable belongs to the innermost quantifier which mentions it, and is then not subject to any outer quantifier.

So, in order to avoid confusion, different variables should be used with ᗄ and ⴺ, as in the example:

ᗄx (Crown (x) ∨ ⴺz Brother (Rama, z))

Connection between ᗄ and ⴺ:

These two quantifiers are actually intimately connected to each other, through negation.

Saying that everyone dislikes garlic is the same as saying that there does not exist someone who likes garlic:

ᗄx ¬Likes (x, Garlic) ≡ ¬ⴺx Likes (x, Garlic)

and "Everyone likes ice-cream" means that there is no one who does not like ice-cream:

ᗄx Likes (x, IceCream) ≡ ¬ⴺx ¬Likes (x, IceCream)

Because ᗄ is really a conjunction over the universe of objects and ⴺ is a disjunction, it should not be surprising that they obey De Morgan’s rules.

The De Morgan rules for quantified and unquantified formulas are:

ᗄx ¬P ≡ ¬ⴺx P        ¬(P ∨ Q) ≡ ¬P ∧ ¬Q

¬ᗄx P ≡ ⴺx ¬P        ¬(P ∧ Q) ≡ ¬P ∨ ¬Q

ᗄx P ≡ ¬ⴺx ¬P        P ∧ Q ≡ ¬(¬P ∨ ¬Q)

ⴺx P ≡ ¬ᗄx ¬P        P ∨ Q ≡ ¬(¬P ∧ ¬Q)

Thus, we do not really need both ᗄ and ⴺ, just as we do not really need both ∧ and ∨.
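Over a finite domain this duality can be verified directly, since all and any play the roles of ᗄ and ⴺ; the properties below are illustrative:

```python
# Over a finite domain ᗄ behaves as a big conjunction and ⴺ as a big
# disjunction, so the quantifier dualities are just De Morgan's rules.
domain = [0, 1, 2, 3]
P = lambda x: x >= 0           # illustrative properties
Q = lambda x: x % 2 == 0

# ᗄx P(x) ≡ ¬ⴺx ¬P(x)
forall_P = all(P(x) for x in domain)
dual_forall_P = not any(not P(x) for x in domain)

# ⴺx Q(x) ≡ ¬ᗄx ¬Q(x)
exists_Q = any(Q(x) for x in domain)
dual_exists_Q = not all(not Q(x) for x in domain)

print(forall_P == dual_forall_P, exists_Q == dual_exists_Q)  # True True
```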

Well Formed Formulas:

If P and Q are Wffs, then

¬P, P ∧ Q, P ∨ Q, P ⇒ Q, P ⇔ Q, ᗄx P(x), ⴺx P(x), ⴺx Married (x, Sita) etc. are all Wffs.

The above rules state that all Wffs are formed from atomic formulas with the proper application of quantifiers and logical connectors.

Some valid Wffs are given below:

MAN (Rama)

PILOT (Father-of (Sham))

Some invalid Wffs, with the reasons for their invalidity, are given below:

Some characteristics of Wffs (compare with those of propositional logic) are:

1. A Wff is said to be valid if it is true under every interpretation; for example, P ∨ ∼P is valid.

2. A Wff which is false under every interpretation is inconsistent or unsatisfiable.

For example, P ∧ ∼P is inconsistent.

3. A Wff which is false for some interpretation is invalid.

4. A Wff which is true for some interpretation is satisfiable.

To illustrate some of these concepts consider two Wffs forming the Knowledge Base (KB).

Intelligent (Romy) and ᗄx Intelligent (x) ⇒ Succeed (x), assumed true in a situation.

Then Intelligent (Romy) ⇒ Succeed (Romy) is certainly true, since the Wff was assumed to be true for all values of x, including x = Romy.

This Wff, Intelligent (Romy) ⇒ Succeed (Romy), is also equivalent to ∼Intelligent (Romy) ∨ Succeed (Romy) (state the reason).

Since Intelligent (Romy) is true from our knowledge base, in the equivalent formula ∼Intelligent (Romy) is false. Therefore, Succeed (Romy) must be true.

Thus, we conclude that Succeed (Romy) is a logical consequence of the above two Wffs in the knowledge base. This again illustrates the process of inference.
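The inference just described can be sketched as a tiny forward-chaining loop; the encoding of rules as predicate pairs is an assumption made for brevity:

```python
# From Intelligent(Romy) and ᗄx Intelligent(x) ⇒ Succeed(x), derive
# Succeed(Romy) by instantiating the rule and applying modus ponens.
facts = {("Intelligent", "Romy")}
rules = [("Intelligent", "Succeed")]   # (P, Q) encodes ᗄx P(x) ⇒ Q(x)

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            for pred, obj in list(derived):
                if pred == p and (q, obj) not in derived:
                    derived.add((q, obj))   # modus ponens on this instance
                    changed = True
    return derived

print(("Succeed", "Romy") in forward_chain(facts, rules))  # True
```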

Verify the following statements about Wffs (i.e., true or not):

1. If P is an n-ary predicate and t1, t2, …, tn are terms, then P (t1, t2, …, tn) is an atomic formula.

2. If P and Q are Wffs, then P ∧ Q, P ∨ Q, P ⇒ Q and P ⇔ Q are Wffs. How about ᗄx P and ⴺx P?

Inference Rules for Quantifiers — Problem Solving:

We should now be in a position to understand the FOL inference system. We begin with simple inference rules which can be applied to sentences with quantifiers to obtain sentences without quantifiers. First-order inference can then be done by converting the knowledge base to propositional logic and using the propositional inference techniques already discussed; in this sense propositional-calculus inference is a special case of FOPL inference.

Let us begin with the universal quantifier. Suppose our knowledge base contains the general rule that all greedy kings are evil:

ᗄx King (x) ∧ Greedy (x) ⇒ Evil (x)

Then any of the following sentences can be inferred by substituting different values for x (the domain of discussion is now the Mahabharata).

The rule of Universal Instantiation (UI) says that we can infer any sentence obtained by substituting a ground term (a term without variables) for the universally quantified variable. To write out the inference rule formally we use the notion of substitution. Let SUBST (θ, P) denote the result of applying the substitution θ to the sentence P.

Then the rule becomes:

ᗄv P
----------------
SUBST ({v/g}, P)

for any variable v and ground term g.

For example, suppose our knowledge base contains the standard axiom stating that all greedy kings are evil:

ᗄx King (x) ∧ Greedy (x) ⇒ Evil (x)

together with King (Daryodana), King (Kansa) and the ground term Father-of (Daryodana).

By making the substitutions {x/Daryodana}, {x/Kansa} and {x/Father-of (Daryodana)}, the following three sentences are obtained:

King (Daryodana) ∧ Greedy (Daryodana) ⇒ Evil (Daryodana)

King (Kansa) ∧ Greedy (Kansa) ⇒ Evil (Kansa)

King (Father-of (Daryodana)) ∧ Greedy (Father-of (Daryodana)) ⇒ Evil (Father-of (Daryodana))
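The SUBST operation behind these instantiations can be sketched with terms represented as nested tuples; this representation is an assumption made for illustration, not a standard one:

```python
# SUBST({v/g}, P): replace the variable v with the ground term g
# throughout a sentence represented as a nested tuple.
def subst(term, v, g):
    """Replace variable v by ground term g in a term/sentence tree."""
    if term == v:
        return g
    if isinstance(term, tuple):           # e.g. ("King", "x")
        return tuple(subst(t, v, g) for t in term)
    return term

rule = ("=>", ("and", ("King", "x"), ("Greedy", "x")), ("Evil", "x"))
print(subst(rule, "x", "Daryodana"))
# → ('=>', ('and', ('King', 'Daryodana'), ('Greedy', 'Daryodana')), ('Evil', 'Daryodana'))
```

Substituting a complex ground term works the same way, e.g. subst(rule, "x", ("Father-of", "Daryodana")).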

The corresponding Existential Instantiation (EI) rule for the existential quantifier is slightly more complicated.

For any sentence P, variable v and constant symbol k which does not appear elsewhere in the knowledge base:

ⴺv P
----------------
SUBST ({v/k}, P)

Basically, the existential sentence says there is some object satisfying a condition, and the instantiation process just gives a name (a constant) to that object. Naturally, that name must not already belong to another object; in logic the newly allotted name is called a Skolem constant. Existential instantiation is a special case of a more general process called Skolemisation, discussed under Resolution.

EI plays a different role from UI: whereas UI can be applied many times to produce many different consequences, EI can be applied once, after which the existentially quantified sentence can be discarded. For example, once we have added the sentence Married (Man, Woman) to the knowledge base, we no longer need the sentence ⴺx Married (Man, x).

We have seen in propositional logic that in order to apply resolution we should have clauses whose literals are connected by ∨. A Wff such as ∼Intelligent (Romy) ∨ Succeed (Romy) is said to be in normal clause form. So it is pertinent to take up this process, the conversion of the sentences in the knowledge base into clause form, in predicate calculus as well.

Steps to transform a sentence into clause form:

Before we describe the steps, it is imperative to know how the existential quantifiers are eliminated through a substitution process.

This requires that all such variables be replaced using so-called Skolem functions: arbitrary functions which can always assume the correct value required of an existentially quantified variable.

For simplicity, we assume that all quantifiers have been properly moved to the left-hand side of the expression, and that each quantifies a different variable. Skolemisation is the replacement of existentially quantified variables with Skolem functions (or constants) and the deletion of the respective quantifiers.

It is then accomplished as under:

1. If the first (left-most) quantifier in an expression is an existential quantifier, we replace all occurrences of the variable it quantifies with an arbitrary constant not appearing elsewhere in the expression. The same procedure is followed for every other existential quantifier not preceded by a universal quantifier, in each case using a different constant symbol in the substitution. This process is called Skolemisation.

2. For each existential quantifier which is preceded by one or more universal quantifiers, replace all occurrences of the existentially quantified variable by a function symbol not appearing elsewhere in the expression. The arguments assigned to the function should be the variables bound by each universal quantifier which precedes the existential quantifier.

This existential quantifier is then deleted. The same procedure is repeated for each remaining existential quantifier, using a different function symbol and choosing as function arguments all universally quantified variables which precede the existentially quantified variable being replaced.

For example, consider:

ⴺu ᗄv ᗄx ⴺy P (u, v, x, y)

Its Skolem form is determined as:

ᗄv ᗄx P (a, v, x, g (v, x))

Here the variable u, bound by the first existential quantifier, has been replaced in the second expression by the arbitrary constant a; this constant did not appear elsewhere in the first expression. Similarly, the variable y has been replaced by the function symbol g, with the variables v and x as arguments, since both of these variables are universally quantified to the left of the existential quantifier for y.
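The two Skolemisation cases can be sketched as follows; the quantifier-prefix representation and the Skolem symbol names are assumptions made for illustration:

```python
# Skolemisation of a quantifier prefix: an existential variable preceded by
# universals becomes a Skolem function of those universal variables; with no
# preceding universals it becomes a Skolem constant.
def skolemise(quantifiers):
    """quantifiers: list of ('forall'|'exists', var); returns a substitution."""
    subs, universals, counter = {}, [], 0
    for kind, var in quantifiers:
        if kind == "forall":
            universals.append(var)
        else:
            counter += 1
            if universals:
                # Skolem function over all preceding universal variables
                subs[var] = ("g%d" % counter, tuple(universals))
            else:
                subs[var] = "a%d" % counter    # Skolem constant
    return subs

# ⴺu ᗄv ᗄx ⴺy: u becomes a constant, y a function of v and x
print(skolemise([("exists", "u"), ("forall", "v"),
                 ("forall", "x"), ("exists", "y")]))
# → {'u': 'a1', 'y': ('g2', ('v', 'x'))}
```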

Now let us take up the complete procedure to convert a first-order logic sentence into clausal form, making use of the rules discussed in Table 6.2.

Table 6.2: List of tautologies in propositional logic:

1. Eliminate all implication and equivalence connectives.

2. Move all negations inside so that each immediately precedes an atom. This can be done by using P in place of ∼(∼P) and making use of De Morgan’s laws and the equivalences regarding quantifiers enumerated in Table 6.2.

3. Rename variables, if necessary, so that variables bound by one quantifier are not the same as variables bound by a different quantifier.

For example, in

ᗄx P (x) ∨ ⴺx Q (x)

the second ‘dummy’ variable x, bound by the existential quantifier, has to be renamed to a variable, say y, different from the one bound by the universal quantifier, giving ᗄx P (x) ∨ ⴺy Q (y).

4. Skolemise by replacing all existentially quantified variables with Skolem functions and deleting the corresponding existential quantifiers.

5. Move all universal quantifiers to the left of the expression and put the expression on the right into CNF.

6. Eliminate all universal quantifiers and conjunctions, since they are retained implicitly. The resulting expressions are clauses, and the set of such expressions is said to be in clausal form.

Soundness and Completeness of Predicate Logic:

To prove the completeness of the resolution theorem of predicate logic, the following definitions and theorems are presented.

Definition:

The Herbrand universe Hs for a given set of clauses S is defined as the set of all possible ground terms, constructed by replacing the variables in the arguments of functions by constants or other ground terms, so that they remain ground (free from variables) after substitution. It is to be noted that Hs is, in general, an infinite set.

For example, suppose that there exists a single clause in S, given by

Q (X, f (X, a)) ∨ P (X, a) ∨ R (X, b)

where {a, b} is the set of constants, {X} is the set of variables, {f} is the set of functions and {P, Q, R} is the set of predicates. Here Hs = {a, b, f (a, a), f (b, a), f (a, f (a, a)), f (a, f (a, b)), …} is an infinite set.

Let S be a set of clauses and P be a set of ground terms. Then P (S), the saturation of S with respect to P, is defined as the set of all ground clauses obtained by applying all possible consistent substitutions of the ground terms in P for the variables in S.

For example, let P = {a, b, f (a, b)} and S = {Q (X, f (X, a)) ∨ P (X, a) ∨ R (X, b)}.

Then P(S) is computed as follows:
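Over a finite set of ground terms the saturation can be computed by brute force; the nested-tuple clause representation below is an assumption made for illustration:

```python
# Saturation P(S): apply every consistent substitution of ground terms in P
# for the variables of each clause in S.
from itertools import product

ground_terms = ["a", "b", ("f", "a", "b")]   # the set P of ground terms
clause_vars = ["X"]                          # variables of the clause

def instantiate(clause, sub):
    """Replace variables by ground terms in a clause given as nested tuples."""
    if clause in sub:
        return sub[clause]
    if isinstance(clause, tuple):
        return tuple(instantiate(c, sub) for c in clause)
    return clause

S = [("or", ("Q", "X", ("f", "X", "a")), ("P", "X", "a"), ("R", "X", "b"))]

saturation = [
    instantiate(clause, dict(zip(clause_vars, combo)))
    for clause in S
    for combo in product(ground_terms, repeat=len(clause_vars))
]
print(len(saturation))  # 3 ground clauses, one per ground term substituted for X
```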

Definition:

The saturation of a set of clauses S with respect to the Herbrand universe Hs is called the Herbrand base Hs (S).

It may further be noted that Hs (S), too, is an infinite set.

The following two theorems will be useful to prove the completeness of the resolution theorem:

The proofs of these theorems are beyond the scope of the book and are thus excluded.

Herbrand’s Theorem:

If a set of clauses S is unsatisfiable, then there must exist a finite subset of the Herbrand base Hs (S) that is also unsatisfiable [5].

Lifting Lemma:

Let C1 and C2 be two clauses with no shared variables, and let C1′ and C2′ be ground instances of C1 and C2 respectively.

If C′ is a clause resulting from the resolution of C1′ and C2′, then there exists a clause C satisfying the following criteria:

(i) C results from the resolution of C1 and C2, and

(ii) C′ is a ground instance of C.

For example, one may take ground instances C1′ and C2′ of C1 and C2, resolve them to obtain C′, and then exhibit the clause C, obtained by resolving C1 and C2, of which C′ is a ground instance.

Let us now prove the completeness theorem of predicate logic.

Theorem:

The resolution theorem of predicate logic is complete.

Proof:

We are given a set of clauses S and a formula α such that S ⊨ α. We have to prove that S ├ α, i.e., that there exists a logical proof of α from S. We prove this by contradiction: suppose S ⊬ α, i.e., α is not provable from S. Then S1 = S ∪ {¬α}, expressed in clause form, is unsatisfiable. So, by Herbrand’s theorem, there must exist a finite subset of the Herbrand base Hs (S1) that is also unsatisfiable.

Now, by ground resolution, the resolution closure of Hs (S1) contains the clause ‘false’. By the lifting lemma, the false clause must then also appear in the resolution closure of S1. But a resolution closure of S1 containing the false clause is precisely a resolution proof of α from S, contradicting the assumption that S ⊬ α; hence S ├ α follows.
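The ground-resolution step used in this proof can be illustrated with a toy refutation; the clauses below are illustrative ground instances, not drawn from the text:

```python
# Ground resolution at the propositional level: resolve clauses on
# complementary literals until the empty clause ('false') appears.
def resolve(c1, c2):
    """Return resolvents of two ground clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

kb = [frozenset({"~King(k)", "Evil(k)"}),   # illustrative ground clauses
      frozenset({"King(k)"}),
      frozenset({"~Evil(k)"})]              # negation of the goal Evil(k)

clauses = set(kb)
derived_empty = False
changed = True
while changed and not derived_empty:
    changed = False
    for c1 in list(clauses):
        for c2 in list(clauses):
            for r in resolve(c1, c2):
                fr = frozenset(r)
                if fr not in clauses:
                    clauses.add(fr)
                    changed = True
                if not fr:
                    derived_empty = True
print(derived_empty)  # True: the empty clause (a contradiction) is derived
```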

Now, we narrate the proof of soundness of the resolution theorem in predicate logic.

Theorem:

The resolution principle of predicate logic is sound.

Proof:

To prove soundness, we look at the proof procedure for a particular problem which proves a formula α from a given set of clauses S, i.e., S ├ α. Let it be a linear resolution. The argument is given for linear resolution but can be extended similarly to other types of resolution.

Proving that the resolution theorem is sound involves the following steps:

1. After the proof procedure terminates, substitute the constants in the proof tree back by variables.

2. Instantiate these clauses with all possible constants. We thus get the Herbrand base corresponding to the clauses which participated in the proof procedure.

3. The resolution theorem of propositional logic is now applied to that subset of the Herbrand base, since the resolution theorem in propositional logic is sound.

Since the Herbrand base generated from the clauses consists of propositional (ground) clauses, the soundness of resolution in propositional logic carries over, and the theorem follows.