# Introduction to Artificial Intelligence - Knowledge Representation and Inferencing in First-Order-Logic

Introduction to Artificial Intelligence 2005 – Axel Polleres
MSc 2005 Semester 2

Knowledge Representation and Inferencing in First-Order-Logic: Chapters 8+9
## Overview
• Why FOL?
• Syntax and semantics of FOL
• Using FOL
• Wumpus world in FOL
• Knowledge engineering in FOL
• Automated inference in FOL
## Pros and cons of propositional logic
☺ Propositional logic is declarative
☺ Propositional logic allows partial/disjunctive/negated information
– (unlike most data structures and databases)
☺ Propositional logic is compositional:
– meaning of B1,1 ∧ P1,2 is derived from meaning of B1,1 and of P1,2
☺ Meaning in propositional logic is context-independent
– (unlike natural language, where meaning depends on context)
☹ Propositional logic has very limited expressive power
– (unlike natural language)
– E.g., cannot say "pits cause breezes in adjacent squares"
• except by writing one sentence for each square
## First-order logic
• Whereas propositional logic assumes the world
contains facts,
• first-order logic (like natural language) assumes
the world contains
– Objects: people, houses, numbers, colors, baseball
games, wars, …
– Relations: red, round, prime, brother of, bigger than,
part of, comes between, …
– Functions: father of, best friend, one more than, plus, …
## Syntax of FOL: Basic elements
• Constants KingJohn, 2, NUS,...
• Predicate symbols Brother, >,...
• Function symbols Sqrt, LeftLegOf,...
• Variables x, y, a, b,...
• Connectives ¬, ⇒, ∧, ∨, ⇔
• Equality =
• Quantifiers ∀, ∃
## Atomic sentences
Atomic sentence = predicate(term1, ..., termn)
or term1 = term2

term = function(term1, ..., termn)
or constant or variable

• E.g., Brother(KingJohn, RichardTheLionheart)
>(Length(LeftLegOf(Richard)), Length(LeftLegOf(KingJohn)))

Remark: equality is a special predicate with a fixed semantics!
## Complex sentences
• Complex sentences are made from atomic sentences using connectives:
¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2
E.g. Sibling(KingJohn,Richard) ⇒ Sibling(Richard,KingJohn)
>(1,2) ∨ ≤(1,2)
>(1,2) ∧ ¬>(1,2)
## Truth in first-order logic
• Sentences are true with respect to a model and an interpretation
• Model contains objects (domain elements) and relations among them
• Interpretation specifies referents for
constant symbols → objects
predicate symbols → relations
function symbols → functions
• An atomic sentence predicate(term1, ..., termn) is true
iff the objects referred to by term1, ..., termn
are in the relation referred to by predicate
## Models for FOL: Example
## Truth in the example
## Models in FOL
…probably not a good idea to try to enumerate models.
…you should have heard (in some previous lectures) that
FOL nonentailment is even undecidable, i.e. it cannot be computed!
## Universal quantification
• ∀<variables> <sentence>
"Everyone at UIBK is smart":
∀x At(x,UIBK) ⇒ Smart(x)
• ∀x P is true in a model m iff P is true with x being each possible
object in the model
• Roughly speaking, equivalent to the conjunction of instantiations of P:
At(KingJohn, UIBK) ⇒ Smart(KingJohn)
∧ At(Richard, UIBK) ⇒ Smart(Richard)
∧ At(UIBK, UIBK) ⇒ Smart(UIBK)
∧ ...
## A common mistake to avoid
• Typically, ⇒ is the main connective with ∀
• Common mistake: using ∧ as the main connective with ∀:
∀x At(x,UIBK) ∧ Smart(x)
means "Everyone is at UIBK and everyone is smart"
• Correct: ∀x At(x,UIBK) ⇒ Smart(x)
• As you can see, many axioms can be written as rules!
• We already discussed special rules in the last lecture:
Horn Rules: (∀) B1, …, Bn ⇒ H
where (∀) means that all variables are universally quantified…
## Existential quantification
• ∃<variables> <sentence>
"Someone at UIBK is smart":
∃x At(x,UIBK) ∧ Smart(x)
• ∃x P is true in a model m iff P is true with x being some possible
object in the model
• Roughly speaking, equivalent to the disjunction of instantiations of P:
At(KingJohn,UIBK) ∧ Smart(KingJohn)
∨ At(Richard,UIBK) ∧ Smart(Richard)
∨ At(UIBK,UIBK) ∧ Smart(UIBK)
∨ ...
## Another common mistake to avoid
• Typically, ∧ is the main connective with ∃
• Common mistake: using ⇒ as the main connective with ∃:
∃x At(x,UIBK) ⇒ Smart(x)
is true if there is anyone who is not at UIBK!
Usually used in Queries:
"Is there someone in UIBK who is smart?"
Correct: ∃x At(x,UIBK) ∧ Smart(x)
## Properties of quantifiers
• ∀x ∀y is the same as ∀y ∀x
• ∃x ∃y is the same as ∃y ∃x
• ∃x ∀y is not the same as ∀y ∃x
• ∃x ∀y Loves(x,y)
– “There is a person who loves everyone in the world”
• ∀y ∃x Loves(x,y)
– “Everyone in the world is loved by at least one person”
• Quantifier duality: each can be expressed using the other
• ∀x Likes(x,IceCream) ≡ ¬∃x ¬Likes(x,IceCream)
• ∃x Likes(x,Broccoli) ≡ ¬∀x ¬Likes(x,Broccoli)
## Equality
• term1 = term2 is true under a given interpretation if and
only if term1 and term2 refer to the same object
• E.g., definition of Sibling in terms of Parent:
∀x,y Sibling(x,y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f) ∧ Parent(m,x) ∧
Parent(f,x) ∧ Parent(m,y) ∧ Parent(f,y)]
## Using FOL
The family domain:
• Brothers are siblings
∀x,y Brother(x,y) ⇒ Sibling(x,y)
• One's mother is one's female parent
∀m,c Mother(c) = m ⇔ (Female(m) ∧ Parent(m,c))
• "Sibling" is symmetric
∀x,y Sibling(x,y) ⇔ Sibling(y,x)
Attention! Mother is a function symbol here, whereas Female and Parent are predicate symbols!
## Interacting with FOL KBs
• Suppose a wumpus-world agent is using an FOL KB and perceives a smell and a
breeze (but no glitter) at t=5:
Tell(KB, Percept([Smell,Breeze,None],5))
Ask(KB, ∃a BestAction(a,5))
• I.e., does the KB entail some best action at t=5?
• Answer: Yes, {a/Shoot} ← substitution (binding list)
• Given a sentence S and a substitution σ,
Sσ denotes the result of plugging σ into S; e.g.,
S = Smarter(x,y)
σ = {x/Hillary, y/Bill}
Sσ = Smarter(Hillary,Bill)
• Ask(KB,S) returns some/all σ such that KB ⊨ Sσ
## Knowledge base for the wumpus world
• Perception
– ∀t,s,b Percept([s,b,Glitter],t) ⇒ Glitter(t)
• Reflex
– ∀t Glitter(t) ⇒ BestAction(Grab,t)
## Deducing hidden properties
Definition of adjacency:
∀x,y,a,b Adjacent([x,y],[a,b]) ⇔ [a,b] ∈ {[x+1,y], [x-1,y], [x,y+1], [x,y-1]}
Properties of squares:
• ∀s,t At(Agent,s,t) ∧ Breeze(t) ⇒ Breezy(s)
Squares are breezy near a pit:
– Diagnostic rule --- infer cause from effect:
∀s Breezy(s) ⇒ ∃r Adjacent(r,s) ∧ Pit(r)    (cannot be translated to Horn!)
– Causal rule --- infer effect from cause:
∀r Pit(r) ⇒ [∀s Adjacent(r,s) ⇒ Breezy(s)]
## Knowledge engineering in FOL
1. Identify the task
2. Assemble the relevant knowledge
3. Decide on a vocabulary of predicates, functions, and constants
4. Encode general knowledge about the domain
5. Encode a description of the specific problem instance
6. Pose queries to the inference procedure and get answers
7. Debug the knowledge base
## The electronic circuits domain
1. Identify the task
– Does the circuit actually add properly? (circuit verification)
2. Assemble the relevant knowledge
– Composed of wires and gates; types of gates (AND, OR, XOR, NOT)
– Irrelevant: size, shape, color, cost of gates
3. Decide on a vocabulary/encoding
– Alternatives:
Type(X1) = XOR
Type(X1, XOR)
XOR(X1)
…
4. Encode general knowledge of the domain
∀t1,t2 Connected(t1,t2) ⇒ Signal(t1) = Signal(t2)
∀t Signal(t) = 1 ∨ Signal(t) = 0
1 ≠ 0
∀t1,t2 Connected(t1,t2) ⇒ Connected(t2,t1)
∀g Type(g) = OR ⇒ (Signal(Out(1,g)) = 1 ⇔ ∃n Signal(In(n,g)) = 1)
∀g Type(g) = AND ⇒ (Signal(Out(1,g)) = 0 ⇔ ∃n Signal(In(n,g)) = 0)
∀g Type(g) = XOR ⇒ (Signal(Out(1,g)) = 1 ⇔ Signal(In(1,g)) ≠ Signal(In(2,g)))
∀g Type(g) = NOT ⇒ Signal(Out(1,g)) ≠ Signal(In(1,g))
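The gate axioms above can be sanity-checked against a direct simulation. The following Python sketch is not part of the slides: the function name `full_adder` is my own, and the wiring it hard-codes is a transcription of the circuit C1 described by the Type/Connected facts of the problem instance. It brute-forces the "does the circuit actually add properly?" question from step 1:

```python
# Hypothetical sketch: simulate the one-bit full adder C1 (gates X1, X2
# are XOR, A1, A2 are AND, O1 is OR) and verify it adds correctly.

def full_adder(i1, i2, i3):
    """Return (sum, carry) = (Out(1,C1), Out(2,C1)) for inputs In(1..3,C1)."""
    x1 = i1 ^ i2          # X1: xor of the first two inputs
    x2 = x1 ^ i3          # X2: sum bit
    a1 = i1 & i2          # A1
    a2 = x1 & i3          # A2
    o1 = a1 | a2          # O1: carry bit
    return x2, o1

# Circuit verification: the outputs must encode the binary sum i1+i2+i3.
for i1 in (0, 1):
    for i2 in (0, 1):
        for i3 in (0, 1):
            s, c = full_adder(i1, i2, i3)
            assert s + 2 * c == i1 + i2 + i3
print("circuit adds properly")
```

This is of course only model checking one concrete circuit, not FOL inference; the point is that the axioms pin down exactly this input/output behaviour.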
5. Encode the specific problem instance
Type(X1) = XOR    Type(X2) = XOR
Type(A1) = AND    Type(A2) = AND
Type(O1) = OR
Connected(Out(1,X1), In(1,X2))    Connected(In(1,C1), In(1,X1))
Connected(Out(1,X1), In(2,A2))    Connected(In(1,C1), In(1,A1))
Connected(Out(1,A2), In(1,O1))    Connected(In(2,C1), In(2,X1))
Connected(Out(1,A1), In(2,O1))    Connected(In(2,C1), In(2,A1))
Connected(Out(1,X2), Out(1,C1))   Connected(In(3,C1), In(2,X2))
Connected(Out(1,O1), Out(2,C1))   Connected(In(3,C1), In(1,A2))
6. Pose queries to the inference procedure
What are the possible sets of values of all the terminals?
∃i1,i2,i3,o1,o2 Signal(In(1,C1)) = i1 ∧ Signal(In(2,C1)) = i2 ∧
Signal(In(3,C1)) = i3 ∧ Signal(Out(1,C1)) = o1 ∧ Signal(Out(2,C1)) = o2
7. Debug the knowledge base
Maybe you have omitted assertions like 1 ≠ 0, etc.?
## Summary
• First-order logic:
– objects, relations and functions are semantic primitives
– syntax: constants, function symbols, predicate
symbols, equality, quantifiers
• Increased expressive power: sufficient to define
wumpus world, with time stamps, etc.
## Automated inference in FOL
• Reducing first-order inference to propositional
inference?
• Unification
• Generalized Modus Ponens
• Forward chaining
• Backward chaining
• Resolution
## Universal instantiation (UI)
• Every instantiation of a universally quantified sentence is entailed by it:
∀v α
Subst({v/g}, α)
for any variable v and ground term g
• E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields:
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John))
…
Note: Instantiation is infinite as soon as function symbols (like Father) appear!
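The substitution operation Subst({v/g}, α) used by UI can be sketched in a few lines of Python. This is a hypothetical illustration (the tuple encoding and the name `subst` are my own): terms are nested tuples, e.g. Father(John) is `('Father', 'John')`, and variables are lowercase strings:

```python
# Hypothetical sketch of Subst({v/g}, α): walk the term structure and
# replace every occurrence of a variable by its ground term.

def subst(theta, term):
    """Apply a substitution (dict: variable -> term) to a term or atom."""
    if isinstance(term, tuple):              # compound term or atom
        return tuple(subst(theta, t) for t in term)
    return theta.get(term, term)             # variable or constant

# One instantiation of  ∀x King(x) ∧ Greedy(x) ⇒ Evil(x):
print(subst({'x': ('Father', 'John')}, ('King', 'x')))
# → ('King', ('Father', 'John'))
```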
## Existential instantiation (EI)
• For any sentence α, variable v, and constant symbol k that does not appear
elsewhere in the knowledge base:
∃v α
Subst({v/k}, α)
• E.g., ∃x Crown(x) ∧ OnHead(x,John) yields:
Crown(C1) ∧ OnHead(C1,John)
provided C1 is a new constant symbol, called a Skolem constant.
i.e., ∃v α has a model iff Subst({v/k}, α) has a model, but:
- The models are not the same!
Example: The sentences ∃x Crown(x) and Crown(C1) have different models!
- Think about it: Why does it need to be a new constant?
## Reduction to propositional inference
Suppose the KB contains just the following:
∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
King(John)
Greedy(John)
Brother(Richard,John)
• Instantiating the universal sentence in all possible ways, we have:
King(John) ∧ Greedy(John) ⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
King(John)
Greedy(John)
Brother(Richard,John)
• The new KB is propositionalized: proposition symbols are
King(John), Greedy(John), Evil(John), King(Richard), etc.
## Reduction contd.
• Every FOL KB can be propositionalized so as to preserve entailment
• (A ground sentence is entailed by new KB iff entailed by original KB)
• Idea: propositionalize KB and query, apply resolution, return result
• Problem: with function symbols, there are infinitely many ground
terms,
– e.g., Father(Father(Father(John)))
## Reduction contd.
Theorem: Herbrand (1930). If a sentence α is entailed by an FOL KB, it is
entailed by a finite subset of the propositionalized KB
Idea: For n = 0 to ∞ do
create a propositional KB by instantiating with depth-n terms
see if α is entailed by this KB
Problem: works if α is entailed, loops if α is not entailed
Theorem: Turing (1936), Church (1936) Entailment for FOL is semidecidable
(algorithms exist that say yes to every entailed sentence, but no algorithm
exists that also says no to every nonentailed sentence.)
## Problems with propositionalization
• Propositionalization seems to generate lots of irrelevant sentences.
• E.g., from:
∀x King(x) ∧ Greedy(x) ⇒Evil(x)
King(John)
∀y Greedy(y)
Brother(Richard,John)
• it seems obvious that Evil(John), but propositionalization produces lots of
facts such as Greedy(Richard) that are irrelevant
• With p k-ary predicates and n constants, there are p·n^k instantiations.
## Unification
• We can get the inference immediately if we can find a substitution θ such that
King(x) and Greedy(x) match King(John) and Greedy(y)
θ = {x/John, y/John} works
• Unify(α,β) = θ if αθ = βθ

| p             | q                  | θ                        |
|---------------|--------------------|--------------------------|
| Knows(John,x) | Knows(John,Jane)   | {x/Jane}                 |
| Knows(John,x) | Knows(y,OJ)        | {x/OJ, y/John}           |
| Knows(John,x) | Knows(y,Mother(y)) | {y/John, x/Mother(John)} |
| Knows(John,x) | Knows(x,OJ)        | fail!                    |

• Standardizing apart eliminates overlap of variables, e.g., Knows(z17,OJ)…
## Unification contd.
• To unify Knows(John,x) and Knows(y,z),
θ = {y/John, x/z } or θ = {y/John, x/John, z/John}
• The first unifier is more general than the second.
• There is a single most general unifier (MGU) that is
unique up to renaming of variables.
MGU = { y/John, x/z }
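A minimal sketch of Unify(α,β) in Python (my own encoding, not the slides' algorithm figure): atoms and compound terms are tuples, variables are lowercase strings, and the occurs-check is omitted for brevity. It reproduces the table above, including the failure when variables are not standardized apart:

```python
# Hypothetical unification sketch; returns an MGU as a dict, or None.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    return {**theta, v: t}       # (occurs-check omitted for brevity)

def unify(x, y, theta):
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):           # unify argument by argument
            theta = unify(xi, yi, theta)
        return theta
    return None                            # constant clash: fail

print(unify(('Knows', 'John', 'x'), ('Knows', 'y', 'z'), {}))
# → {'y': 'John', 'x': 'z'}   (the MGU from the example above)
```

Note that bindings may need to be chased: unifying Knows(John,x) with Knows(y,Mother(y)) yields {y: John, x: Mother(y)}, which resolves to x = Mother(John) via y's binding.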
## The unification algorithm
## Generalized Modus Ponens (GMP)
p1′, p2′, …, pn′,  (p1 ∧ p2 ∧ … ∧ pn ⇒ q)
――――――――――――――――――――――――
qθ
where pi′θ = piθ for all i

p1′ is King(John)    p1 is King(x)
p2′ is Greedy(y)     p2 is Greedy(x)    q is Evil(x)
θ is {x/John, y/John}
qθ is Evil(John)
• GMP used with KB of definite clauses (exactly one positive literal)
• All variables assumed universally quantified
## Example knowledge base
• The law says that it is a crime for an American to sell weapons to
hostile nations. The country Nono, an enemy of America, has some
missiles, and all of its missiles were sold to it by Colonel West, who is
American.
• Prove that Col. West is a criminal
## Example knowledge base contd.
... it is a crime for an American to sell weapons to hostile nations:
(∀) American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
Nono … has some missiles, i.e., ∃x Owns(Nono,x) ∧ Missile(x):
Owns(Nono,M1) ∧ Missile(M1)
… all of its missiles were sold to it by Colonel West:
Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
Missiles are weapons:
Missile(x) ⇒ Weapon(x)
An enemy of America counts as "hostile":
Enemy(x,America) ⇒ Hostile(x)
West, who is American …
American(West)
The country Nono, an enemy of America …
Enemy(Nono,America)
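The crime KB above is small enough to run through a naive forward chainer. The following Python sketch is hypothetical (it is not the book's FOL-FC-ASK; the encoding and helper names are my own): atoms are tuples, variables are lowercase strings, facts are ground, and rules are applied until no new fact can be derived:

```python
# Hypothetical sketch: naive forward chaining to a fixpoint.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, term):
    if isinstance(term, tuple):
        return tuple(subst(theta, t) for t in term)
    return theta.get(term, term)

def unify(x, y, theta):              # y is always ground here
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

facts = {('American', 'West'), ('Missile', 'M1'),
         ('Owns', 'Nono', 'M1'), ('Enemy', 'Nono', 'America')}

rules = [  # (premises, conclusion); variables implicitly universal
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'),
      ('Hostile', 'z')], ('Criminal', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')],
     ('Sells', 'West', 'x', 'Nono')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
]

def match_all(premises, theta):
    """Yield every substitution that grounds all premises in `facts`."""
    if not premises:
        yield theta
        return
    for fact in facts:
        t2 = unify(subst(theta, premises[0]), fact, theta)
        if t2 is not None:
            yield from match_all(premises[1:], t2)

changed = True
while changed:                                    # iterate to a fixpoint
    changed = False
    for premises, conclusion in rules:
        for theta in list(match_all(premises, {})):   # list(): finish
            new = subst(theta, conclusion)            # matching before
            if new not in facts:                      # mutating `facts`
                facts.add(new)
                changed = True

print(('Criminal', 'West') in facts)   # → True
```

Three derived facts appear on the first pass (Sells, Weapon, Hostile) and Criminal(West) on the second, mirroring the forward-chaining proof in the slides.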
## Forward chaining algorithm

## Forward chaining proof

## Properties of forward chaining
• Sound and complete for first-order definite Horn clauses, which
means that it computes all entailed facts correctly.
• Datalog = first-order definite clauses + no functions
• May not terminate in general if α is not entailed
• This is unavoidable: entailment with definite clauses is semidecidable
## Efficiency of forward chaining
Incremental forward chaining: no need to match a rule on iteration k if a
premise wasn't added on iteration k-1
⇒ match each rule whose premise contains a newly added positive literal
Matching itself can be expensive:
Database indexing allows O(1) retrieval of known facts
– e.g., query Missile(x) retrieves Missile(M1)
Forward chaining is widely used in deductive databases
## Backward chaining algorithm
SUBST(COMPOSE(θ1, θ2), p) = SUBST(θ2, SUBST(θ1, p))
## Backward chaining example

## Properties of backward chaining
• Depth-first recursive proof search: space is linear in size
of proof
• Incomplete due to infinite loops
– ⇒ fix by applying breadth-first search!
• Inefficient due to repeated subgoals (both success and
failure)
– ⇒fix using caching of previous results (extra space)
• Widely used for logic programming
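The depth-first, left-to-right strategy can be sketched compactly in Python. This is a hypothetical illustration (names and encoding are my own, not the slides' algorithm): clauses are (head, body) pairs, facts are clauses with an empty body, and rule variables are standardized apart by renaming on every use:

```python
# Hypothetical sketch of Prolog-style backward chaining over the crime KB.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def resolve(theta, term):
    """Chase variable bindings so a term becomes as ground as possible."""
    while is_var(term) and term in theta:
        term = theta[term]
    if isinstance(term, tuple):
        return tuple(resolve(theta, t) for t in term)
    return term

def unify(x, y, theta):
    if theta is None:
        return None
    x, y = resolve(theta, x), resolve(theta, y)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

kb = [  # (head, body); an empty body makes the clause a fact
    (('Criminal', 'x'), [('American', 'x'), ('Weapon', 'y'),
                         ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')]),
    (('Sells', 'West', 'x', 'Nono'), [('Missile', 'x'),
                                      ('Owns', 'Nono', 'x')]),
    (('Weapon', 'x'), [('Missile', 'x')]),
    (('Hostile', 'x'), [('Enemy', 'x', 'America')]),
    (('American', 'West'), []),
    (('Missile', 'M1'), []),
    (('Owns', 'Nono', 'M1'), []),
    (('Enemy', 'Nono', 'America'), []),
]

counter = [0]

def rename(term):
    """Standardize apart: give the clause's variables fresh names."""
    if isinstance(term, tuple):
        return tuple(rename(t) for t in term)
    return term + '#' + str(counter[0]) if is_var(term) else term

def prove(goals, theta):
    """Depth-first, left-to-right; yields one θ per successful proof."""
    if not goals:
        yield theta
        return
    for head, body in kb:
        counter[0] += 1
        h, b = rename(head), [rename(p) for p in body]
        t2 = unify(goals[0], h, theta)
        if t2 is not None:
            yield from prove(b + goals[1:], t2)

theta = next(prove([('Criminal', 'who')], {}), None)
print(resolve(theta, 'who'))   # → West
```

Because this KB has no recursive rules, the depth-first search terminates; the incompleteness discussed above would show up if, e.g., a rule's body could re-spawn its own head as a subgoal.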
## Logic programming: Prolog
• Algorithm = Logic + Control
• PROLOG. Basis: backward chaining with Horn clauses
Widely used in Europe, Japan (basis of the 5th Generation project)
Program = set of clauses = head :- literal1, …, literaln.
criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).
• Depth-first, left-to-right backward chaining
• Built-in predicates for arithmetic etc., e.g., X is Y*Z+3
• Built-in predicates that have side effects (e.g., input and output
predicates, assert/retract predicates)
• Closed-world assumption ("negation as failure")
– e.g., given alive(X) :- not dead(X).
– alive(joe) succeeds if dead(joe) fails
## Resolution: brief repetition/summary
• Full first-order version:
l1 ∨ ··· ∨ lk,    m1 ∨ ··· ∨ mn
――――――――――――――――――――――――
(l1 ∨ ··· ∨ li-1 ∨ li+1 ∨ ··· ∨ lk ∨ m1 ∨ ··· ∨ mj-1 ∨ mj+1 ∨ ··· ∨ mn)θ
where Unify(li, ¬mj) = θ.
• The two clauses are assumed to be standardized apart so that they share no
variables.
• For example,
¬Rich(x) ∨ Unhappy(x),    Rich(Ken)
――――――――――――――――
Unhappy(Ken)
with θ = {x/Ken}
• Apply resolution steps to CNF(KB ∧ ¬α); complete for FOL
Remark: Quite similar to GMP!
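The single resolution step above can be sketched in Python (a hypothetical illustration, not a full prover; the clause encoding is my own): a clause is a list of literals, a literal is a pair (positive?, atom), and resolving picks a complementary, unifiable pair and applies θ to the remaining literals:

```python
# Hypothetical sketch of one binary resolution step.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, term):
    if isinstance(term, tuple):
        return tuple(subst(theta, t) for t in term)
    return theta.get(term, term)

def unify(x, y, theta):
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

def resolvents(c1, c2):
    """Yield every resolvent of clauses c1, c2 (lists of (positive, atom))."""
    for i, (p1, a1) in enumerate(c1):
        for j, (p2, a2) in enumerate(c2):
            if p1 != p2:                      # complementary pair l_i, ¬m_j
                theta = unify(a1, a2, {})
                if theta is not None:
                    rest = c1[:i] + c1[i+1:] + c2[:j] + c2[j+1:]
                    yield [(p, subst(theta, a)) for p, a in rest]

c1 = [(False, ('Rich', 'x')), (True, ('Unhappy', 'x'))]  # ¬Rich(x) ∨ Unhappy(x)
c2 = [(True, ('Rich', 'Ken'))]                           # Rich(Ken)
print(list(resolvents(c1, c2)))
# → [[(True, ('Unhappy', 'Ken'))]]   i.e. Unhappy(Ken), with θ = {x/Ken}
```

A full refutation prover would repeat this on CNF(KB ∧ ¬α), standardizing clauses apart, until the empty clause is derived.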
## Conversion to CNF
• Everyone who loves all animals is loved by someone:
∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
• 1. Eliminate biconditionals and implications
∀x [¬∀y (¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
• 2. Move ¬ inwards: ¬∀x p ≡ ∃x ¬p,  ¬∃x p ≡ ∀x ¬p
∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
∀x [∃y ¬¬Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
## Conversion to CNF contd.
3. Standardize variables: each quantifier should use a different one
∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
4. Skolemize: a more general form of existential instantiation.
Each existential variable is replaced by a Skolem function of the enclosing
universally quantified variables:
∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
5. Drop universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
6. Distribute ∨ over ∧:
[Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]
## Resolution proof

– In each resolution step, unification is applied!
– In the proof figure: green = goal, red = KB rules
## Summary
• FOL, Knowledge Engineering in FOL:
• There are proof algorithms, but they are only semi-decidable,
i.e. they might not terminate for non-entailed queries
• Full propositionalization is not possible due to function symbols!
• One common approach for automated proofs in FOL:
• Convert to CNF ⇒ apply resolution/unification!
## Third Tutorial Assignment!
Knowledge Engineering in FOL (exercises: AIMA 8.6, 8.15, 8.16)

8.6 (3 points) Represent the following sentences in first-order logic, using a
consistent vocabulary (which you must define):
• a) Some students took French in spring 2001.
• b) Every student who takes French passes it.
• c) Only one student took Greek in spring 2001.
• d) The best score in Greek is always higher than the best score in French.
• e) Every person who buys a policy is smart.
• f) No person buys an expensive policy.
• g) There is no agent who sells policies only to people who are not insured.
• h) There is a barber who shaves all men in town who do not shave themselves.
## Third Tutorial Assignment! (continuation)
8.6 (continuation)
• i) A person born in the UK, each of whose parents is a UK citizen or a
UK resident, is a UK citizen by birth.
• j) A person outside the UK, one of whose parents is a UK citizen by
birth, is a UK citizen by descent.
• k) Politicians can fool some of the people all of the time, and they can
fool all of the people some of the time, but they can't fool all of the
people all of the time.

8.15 (3 points) Explain what is wrong with the following proposed
definition of adjacent squares in the wumpus world:
∀x,y Adjacent([x,y], [x+1,y]) ∧ Adjacent([x,y], [x,y+1])

8.16 (3 points) Write out the axioms required for reasoning about the
wumpus's location, using a constant symbol Wumpus and a binary
predicate In(Wumpus, Location). Remember that there is only one
wumpus.
## Third Tutorial Assignment! (continuation)
GMP/backward-chaining (exercises: AIMA 9.9, 9.10 a-c)

9.9 (3 points) Write down the logical representation for the following
sentences, suitable for use with Generalized Modus Ponens:
• a) Horses, cows and pigs are mammals.
• b) An offspring of a horse is a horse.
• c) Bluebeard is a horse.
• d) Bluebeard is Charlie's parent.
• e) Offspring and parent are inverse relations.
• f) Every mammal has a parent.

9.10 (4 points) In this question we will use sentences you wrote in the
previous exercise to answer a question using a backward-chaining
algorithm.
• a) Draw the proof tree generated by an exhaustive backward-chaining
algorithm for the query ∃h Horse(h), where clauses are matched in the
order given.
## Third Tutorial Assignment! (continuation)
(9.10 – continuation)