
Semantic Web Ontology and Natural Language

from the Logical Point of View

Marie Duží

Department of Computer Science

VSB – Technical University of Ostrava

17. listopadu 15

708 33 Ostrava – Poruba

E-mail: marie.duzi@vsb.cz

Abstract

The development of Semantic Web ontology languages in the last decade can be

characterised as “from mark-up languages to metadata”, i.e., a bottom-up approach. The

current ontology languages are first briefly described and characterised from the logical

point of view. Generally, these languages are based on the first-order predicate logic (FOL)

enriched with ad hoc higher-order constructs wherever needed. FOL is a mathematical logic,

and its language is stenography for mathematics with nice mathematical properties.

However, its expressive power is not rich enough to render the semantics of natural

languages, in which web users need to communicate. We argue that in the Semantic Web a

rich expressive language with transparent semantics is needed, in order to build up metadata

at the conceptual level of the Semantic Web architecture, to formally analyse natural

language and to conceptually analyse the content of the Web. A powerful logical tool of

transparent intensional logic (TIL) is described, which provides a logical-semantic

framework for a fine-grained knowledge representation and conceptual analysis of natural

language. TIL is based on a rich ontology of entities organised in an infinite ramified

hierarchy of types. The conceptual and terminological role of TIL in a multi-agent world is

described, and we show that such a system can serve as a unifying logical framework both for

natural language representation and for an adequate knowledge representation. Concluding,

we define the notion of inferable knowledge and show that the proposed logic accommodates

philosophical desiderata that should be met in a multi-agent world of the Semantic Web.

1. Introduction

The Web was proposed as a tool for representing relationships between named objects,

drawing together knowledge from scattered systems into a common framework [2]. The main

aim of the Semantic Web initiative is to develop the current Web towards the original

proposal. W3C’s Semantic Web Activity develops standards and technologies that are designed to help machines understand more information on the Web [12, 24]. The word

“semantic” in the context of the Semantic Web is said to mean “machine-processible” [4]. The main idea is to have the data on the Web defined and linked in such a way that it can be

used for more effective information retrieval, knowledge discovery, automation, integration

and reuse of information across various applications, organisations and communities.

To meet these goals, the World Wide Web Consortium (W3C) has defined a layer

model for the Semantic Web (Figure 1), and knowledge representation and ontology

languages are being developed. The Semantic Web is vitally dependent on a formal meaning

assigned to the constructs of its languages. For Semantic Web languages to work well

together their formal meanings must employ a common view (or thesis) of representation


[13], otherwise it will not be possible to reconcile documents written in different languages. A

common underpinning is especially important for the Semantic Web as it is envisioned to

contain several languages, as in Tim Berners-Lee's “layer cake” diagram (Figure 1) first

presented at XML 2000 [3]. The diagram depicts a Semantic Web Architecture in which

languages of increasing power are layered one on top of the other. Unfortunately, the

relationships between adjacent layers are not specified, either with respect to the syntax or

semantics. Naturally but unfortunately, the model is being gradually realised in a bottom up

way; languages in particular layers come into being in a rapid but ad hoc way, without a deep

logical insight. Thus the languages often lack an exact semantics and ontological definitions

of particular entities; in the syntactic constructs, particular abstract levels are mixed together.

Figure 1 Semantic Web architecture

In philosophy, the notion of ontology has been understood as covering the ‘science of

being’, raising questions like “what is, what could be, or cannot be there, in the world, what

we can talk about”. In informatics, the notion of ‘ontology’ is nowadays understood in many

distinct ways: formalisation, conceptual analysis, hierarchical classification,

conceptualisation, etc. In general, ‘ontology’ can be conceived of as a conceptual analysis of a

given universe of discourse, i.e., of what (which entities) we talk about, and how, by means of

which concepts we capture these entities. However, we are going to warn against confusing the two levels: the level of using entities (concepts and/or functions) and the level of mentioning them. Such confusion is a source of never-ending discrepancies and

misunderstandings. At the same time, the analyses have to take into account the fact that we

operate in a multi-agent world.

Logical forms rendering the basic stock of explicit knowledge of a particular agent

serve as the base, from which logical consequences can be derived by an inference machine

so as to obtain the inferable stock of knowledge [8]. The more fine-grained the analysis is,

the more accurate inferences the machine can perform. In the ideal case, the inference

machine can in principle derive just the logical consequences of the base: it does not over-

infer (derive something that does not follow), nor under-infer (not being able to derive

something that does follow).

We show that the current state of the art is far from the ideal case. Web ontology

languages are based on the 1st-order predicate logic (FOL). Though FOL has become the stenography of mathematics, it is not expressive enough when used in the natural-language area.

[Figure 1, bottom to top: Internet level: Unicode, URI; Structure level: XML, XLink, XML Schema; Metadata level: RDF, RDFS, SKIF; Ontology level: OWL, WordNet, RosettaNet; Logical and inference level: rule-based systems, SWRL, RuleML; Trust level: digital signature, annotation.]

The obvious disadvantage of the FOL approach is that treating higher-order properties

and relations like individuals conceals the ontological structure of the universe, and

knowledge representation is not comprehensive. Moreover, when representing knowledge,

the paradox of omniscience is inevitable: this is a critical defect with respect to a multi-agent

system, which may lead to inconsistencies and chaotic behaviour of the system. For

applications where even the full power of FOL is not adequate, it would be natural to extend

the framework to higher-order logic (HOL). A general objection against using HOL is

its computational intractability. However, HOL formulas are relatively well understood, and

reasoning systems for HOLs do already exist, e.g., HOL [10] and Isabelle [21]. Though the

Web languages have been enriched by a few constructs exceeding the power of FOL, these

additional constructs are usually not well defined and understood. Moreover, particular

languages are neither syntactically nor semantically compatible. The W3C efforts at

standardization resulted in accepting the Resource Description Framework (RDF) language as

the Web recommendation. However, this situation is far from satisfactory. Quoting from Horrocks and Patel-Schneider [13]: “The thesis of representation underlying RDF and RDFS is

particularly troublesome in this regard, as it has several unusual aspects, both semantic and

syntactic. A more-standard thesis of representation would result in the ability to reuse existing

results and tools in the Semantic Web.”

In this paper we provide a brief overview of the relevant portions of an expressive

logical system of Transparent Intensional Logic (TIL) from the point of view of the ‘multi-

agent web world’. We concentrate on two features of TIL that make it possible to consistently distinguish between using and mentioning entities, namely the explicit

intensionalisation and temporalisation, and the rich ontology of entities organised into a two-

dimensional infinite hierarchy of types. TIL provides a logico-semantic framework for a fine-

grained logical analysis of natural language and an adequate representation of knowledge

possessed by autonomous agents who are (more or less) intelligent, but not omniscient.

A common objection against such a rich system, namely that it is too complicated and

computationally intractable, is in our opinion rather irrelevant: formal knowledge

specification in TIL is semantically transparent and comprehensible, with all the semantically

salient features explicitly present. In the Semantic Web we need such a highly expressive language so as to first know what is there, and only afterwards to try deriving the consequences. The fact that for higher-order logics such as TIL there is no semantically complete system is not important. Only when knowing “what is there” are we able to derive some consequences. Moreover, the TIL framework might at least serve as both a semantic and terminological standard for the development of new languages, as an ideal we should aim at.

The paper is organised as follows. First, in Chapter 2 we recapitulate the current state

of Web ontology languages from the logical point of view. We show here that the Web

ontology languages are mostly based on the first-order predicate logic approach, which is far

from being a satisfactory state. In Chapter 3, an expressive system of the transparent

intensional logic (TIL) is introduced. After an informal introduction we provide particular

precise definitions. The method of logical analysis of language expressions is described

together with characterising important conceptual (analytical) relations, and the TIL method

of knowledge representation is introduced. In Chapter 4, the modern procedural theory of

concepts is described (for details see [18], [19], [5]), and important analytical relations and

properties of concepts are defined. Finally, in Chapter 5 the notion of inferable knowledge is

defined, and computational semantics is described in more detail. The concluding Chapter 6

recapitulates the conceptual and terminological role of TIL in building multi-agent systems in

the ‘Internet-Web Age’.


2. Web Ontology Languages

The lowest levels of the Semantic Web are based on mark-up languages. By means of

schemas and ontology classifications, types of resources and types of inter-relationships

between the resources can be specified. At the bottom level, the Unicode and URI layers

make sure that international character sets are used and provide means for identifying the

resources in the Semantic Web. At the very core level of the Web, there is the XML layer

with linking, style and transformation, namespace and schema definitions, which forms the

basis for the Semantic Web definitions. On the metadata level we can find RDF and RDF

Schema for describing resources with URI addresses and for defining vocabularies that can be

referred to by URI addresses, respectively. The ontology level supports the evolution of a

shared semantic specification and conceptualization of different application domains. The

ontology level is based on OWL, recommended by W3C, which is in turn based on the 1st-order Description Logic [1] framework. Based on Common Logic, the SKIF language accommodates

some higher-order constructs. At the logical level, simple inferences based on ontologies can

be drawn. As far as we know, the only ontology language supporting inferences at this level is the Semantic Web Rule Language (SWRL), combining OWL and RuleML [14]. At the highest

level, the focus is on trust, i.e., how to guarantee the reliability of the data obtained from the

Web. Currently, a common research project, CoLogNet II (Network of Excellence in Computational Logic II), of fourteen European universities, led by the Free University of Bozen-Bolzano, Italy, is being prepared. The goal of this project can be characterised as the

development of a powerful Web-inference machine based on a highly expressive logical

semantics.

2.1 Web Ontology Languages from the Logical Point of View

According to Horrocks and Patel-Schneider [13], building ontologies consists in a hierarchical

description of important concepts in a domain, along with descriptions of properties of the

instances of each concept and relations between them. Current ontological languages

correspond roughly in their expressive power to the first-order predicate logic (FOL), with

some higher-order ad hoc extensions. None of them makes it possible to express modalities

(what is necessary and what is contingent), distinguish between analytical and empirical

concepts, and handle higher-order concepts; perhaps only languages based on the Description

logic framework partly meet these goals. Concepts of n-ary relations are unreasonably

modelled as properties. True, each n-ary relation can be expressed by n unary relations

(properties): for instance, the fact that G.W. Bush ordered the attack on Iraq can be modelled by two facts, namely that G.W. Bush has the property of ordering the attack on Iraq, and Iraq has

the property of being attacked by G.W. Bush’s order, but such a representation is not

comprehensive, and the equivalence of the two statements is concealed.

The basis of a particular way of providing meaning for metadata is embodied in the

model theory for RDF. RDF has unusual aspects that make its use as the foundation of

representation in the Semantic Web difficult at best. In particular, RDF has a very limited

collection of syntactic constructs, and these are treated in a very uniform manner in the

semantics of RDF. The data model of RDF includes three basic elements. Resources are anything with a URI address. Properties specify attributes and/or (binary) relations between resources, and are used to describe them. Statements of the form ‘subject, predicate,

object’ associate a resource and a specific value of its property. The RDF thesis requires that

no other syntactic constructs than the RDF triples are to be used and that the uniform semantic

treatment of syntactic constructs cannot be changed, only augmented [13]. In RDFS we can

specify classes and properties of individuals, constraints on properties, and the relation of

subsumption (subclass, subproperty). It is not possible, for instance, to specify properties of


properties, e.g., that the relation (property) is functional or transitive. Neither is it possible to

define classes by means of properties of individuals that belong to the class.
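The triple model just described is easy to sketch. The following Python fragment, with hypothetical ex: names, shows both the data model and the limitation noted above: everything must be squeezed into binary statements, so an n-ary fact has to be reified into several triples about an auxiliary node.

```python
# Minimal sketch of the RDF data model: a graph is just a set of
# (subject, predicate, object) triples, with URIs as plain strings.
# All names here are hypothetical illustrations.

graph = {
    ("ex:Dickens", "ex:wrote", "ex:OliverTwist"),
    ("ex:OliverTwist", "rdf:type", "ex:Novel"),
    ("ex:Novel", "rdfs:subClassOf", "ex:Book"),
}

def objects(graph, subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects(graph, "ex:Dickens", "ex:wrote"))   # {'ex:OliverTwist'}

# Only binary predicates fit the model: a ternary fact such as
# "Dickens wrote Oliver Twist in 1838" must be reified into several
# binary statements about an auxiliary node.
graph |= {
    ("ex:w1", "ex:author", "ex:Dickens"),
    ("ex:w1", "ex:work", "ex:OliverTwist"),
    ("ex:w1", "ex:year", "1838"),
}
```

Note that nothing in this model lets us state a property of a property (e.g., transitivity), which is exactly the limitation of RDFS discussed above.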

Recognition of the limitations of RDFS led to the development of new Web ontology

languages such as OIL, DAML-ONT and DAML+OIL [23, 25]. These are used as the basis of

a new W3C Web ontology language called OWL. As a second language for the Semantic Web, OWL has been developed as an extension of RDFS. OWL (like DAML+OIL) uses the same syntax as RDF (and RDFS) to represent ontologies, so the two languages are syntactically

compatible. However, the semantic layering of the two languages is more problematic. The

difficulty stems from the fact that OWL (like DAML+OIL) is largely based on the

Description Logic [1], the semantics of which would normally be given by a classical first-

order model theory in which individuals are interpreted as elements of some domain (a set),

classes are interpreted as subsets of the domain and properties are interpreted as binary

relations on the domain. The semantics of RDFS, on the other hand, is given by a non-standard model theory, where individuals, classes and properties are all elements in the

domain. Properties are further interpreted as having extensions which are binary relations on

the domain, and class extensions are only implicitly defined by the extension of the rdf:type

property. Moreover, RDFS supports reflection on its own syntax: interpretation of classes and

properties can be extended by statements in the language. Thus language layering is much

more complex, because different layers subscribe to these two different approaches.

The third group of ontology languages lies somewhere between the FOL framework

and RDFS. This group of relatively new languages includes SKIF and Common Logic [11]. The SKIF syntax is compatible with the functional language LISP, but in principle it is FOL syntax. These languages have, like RDFS, a non-standard model theory, with predicates being

interpreted as individuals, i.e., elements of a domain. Classes are however treated as subsets

of the domain, and their redefinition in the language syntax is not allowed.

Thus from the logical point of view, the ontological languages can be divided into

three groups: the FOL approach, the SKIF approach, and the RDF approach.

a) The FOL approach (DAML+OIL, OWL) is closely connected to the rather expressive

Description Logic (DL): Languages of this group talk about individuals that are elements of a

domain. The individuals are members of subclasses of the domain, and can be related to other

individuals (or data values) by means of properties (n-ary relations are called properties in

Web ontologies, for they are decomposed into n properties). The universe of discourse is

divided into two disjoint sorts: the object domain of individuals and the data value domain of

numbers. Thus the interpretation function assigns elements of the object domain to individual

constants, elements of data value domain to value constants, and subclasses of the data

domain to data types. Further, object and data predicates (or properties) are distinguished, the

former being interpreted as a subset of the Cartesian product of the object domain, the latter as a subset of the Cartesian product of the value domain. DL is relatively rich, though being an

FOL language. It makes it possible to distinguish intensional knowledge (knowledge on the

analytically necessary relations between concepts) and extensional knowledge (of contingent

facts).

The knowledge base of DL is divided into the so-called T-box (terminological box, containing the taxonomy) and A-box (assertional box, containing contingent attributes of objects). The T-box contains verbal definitions, i.e., a new concept can be defined by composing known concepts. For instance, a woman can be defined as WOMAN = PERSON & SEX-FEMALE, and a mother as MOTHER = WOMAN & ∃child(HASCHILD child). Thus the fact that, e.g., a mother is a woman is analytically (necessarily) true. In the T-box there are also specifications of

necessary properties of concepts and relations between concepts: the property satisfiability

(corresponding to a nonempty concept), the relation of subsumption (intensionally contained

concepts), equivalence and disjointness (incompatibility). Thus, e.g., that a bachelor is not married is an analytically (necessarily) true proposition. On the other hand, the fact that, e.g., Mr. Jones is a bachelor is a contingent, unnecessary fact. Such contingent properties (attributes) of

objects are recorded in A-boxes.
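A minimal executable sketch of a T-box along the lines above, assuming purely conjunctive definitions (the existential HASCHILD constraint is flattened into a primitive conjunct), so that subsumption between defined concepts reduces to set inclusion. This is a toy illustration, not a full DL reasoner:

```python
# A toy T-box: each defined concept is a conjunction of primitive
# concepts.  Subsumption between such purely conjunctive definitions
# reduces to set inclusion on the conjuncts.

TBOX = {
    "WOMAN":  {"PERSON", "SEX-FEMALE"},
    "MOTHER": {"PERSON", "SEX-FEMALE", "HAS-CHILD"},
}

def expand(concept):
    """Unfold a concept name into its set of primitive conjuncts."""
    return TBOX.get(concept, {concept})

def subsumes(general, specific):
    """True iff every conjunct of `general` occurs in `specific`."""
    return expand(general) <= expand(specific)

print(subsumes("WOMAN", "MOTHER"))   # True: every mother is a woman
print(subsumes("MOTHER", "WOMAN"))   # False
```

The first check is exactly the analytical (T-box) fact that a mother is a woman; contingent A-box facts, such as Mr. Jones being a bachelor, would be stored separately as assertions about individuals.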

b) The SKIF approach: SKIF languages are syntactically compatible with LISP, i.e., the FOL

syntax is extended with the possibility to mention properties and use variables ranging over

properties. For instance, we can specify that John and Peter have a common property:

∃p . p(John) & p(Peter). The property they have in common can be, e.g., that they both love

their wives. We can also specify that the property P is true of John, and that P has the property Q: P(John) & Q(P). If P is being honest and Q is being eligible, the sentence can be read as saying that John is honest, which is eligible. The interpretation structure is a triple

<D, ext, V>, where D is the universe, V is the function that maps predicates, variables and

constants to the elements of D, and ext is the function that maps D into sets of n-tuples of

elements of D. SKIF does not reduce the arity of predicates.
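Over a finite stock of properties, the second-order quantification used in the SKIF examples above can be evaluated directly. The following sketch, with hypothetical property extensions, mirrors both ∃p . p(John) & p(Peter) and P(John) & Q(P):

```python
# Hypothetical extensions for a finite stock of first-order properties.
properties = {
    "honest":         {"John", "Peter"},
    "loves_his_wife": {"John"},
}

def exists_common_property(a, b):
    """∃p . p(a) & p(b), with p ranging over the finite stock above."""
    return any(a in ext and b in ext for ext in properties.values())

def holds(prop, individual):
    """p(x): membership of the individual in the property's extension."""
    return individual in properties[prop]

print(exists_common_property("John", "Peter"))   # True (both are honest)

# P(John) & Q(P): properties of properties get their own extensions.
properties_of_properties = {"eligible": {"honest"}}
print(holds("honest", "John")
      and "honest" in properties_of_properties["eligible"])   # True
```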

c) The RDF approach: These languages originally did not have a model-theoretic semantics,

which led to many discrepancies. The RDF syntax consists of the so-called triples – subject,

predicate and object, where only binary predicates are allowed. This causes serious problems

concerning compatibility with more expressive languages. RDF(S) has become a Web

ontological recommendation defined by W3C, and its usage is widespread. The question is whether this was a good decision. A classical FOL approach would be better, and its standard extension to HOL would be even more suitable for ontologies. Formalisation in HOL is much more natural and comprehensive: the universe of discourse is not a flat set of ‘individuals’, but

properties and relations can be naturally talked about as well, which is much more apt for

representation of ontologies.

Ontologies will play a pivotal role in the Semantic Web by providing a source of

shared and precisely defined concepts of entities that can be used in metadata. The degree of

formality employed in capturing these concepts can be quite variable, ranging from natural

language to logical formalisms, but increased formality and regularity clearly facilitates

machine understanding. We argue that conceptual formalisation should be meaning driven,

based on natural language. The formal language of FOL is not a natural language; it is a language of uninterpreted formulas that enables us to talk about individuals, expressing their properties and the relations between individuals. We cannot talk about properties of individuals, properties, functions and relations, or properties of concepts (generally, of higher-order objects), unless these are treated as simple members of the universe. Thus an inference machine based on FOL can under-infer or, paradoxically, over-infer.

Here is an example: the office of the President of the USA is certainly not an individual. It can be occupied by individuals, but the holder of the office and the office itself are two completely distinct things. The office (Church’s individual concept) necessarily has some requisites (like being occupied by at most one individual), which none of its occupants has. In FOL,

however, we have to treat the office as an individual. Paradoxes like the following arise:

• John Kerry wanted to become the President of the USA.

• The President of the USA knows that John Kerry wanted to become the President of the USA.

• George W. Bush is the President of the USA.

–––––––––––––––––––––––––––––––––––––

Hence what?

Systems based on FOL do not handle such a realistic situation in an adequate way. Analysing ‘is’ in the third premise (as one should) as the identity of individuals, we obtain the obviously non-valid (senseless) conclusion:

• George W. Bush knows that John Kerry wanted to become George W. Bush.


True, this shortcoming is usually overcome in the FOL approach by introducing a special binary predicate ‘holds’. The third premise would then be translated into:

• (the individual) George W. Bush holds (another individual) the President of the USA.

But then the inference machine under-infers, because it does not make it possible to infer a

valid consequence of the above premises, namely that

• George W. Bush knows that John Kerry wanted to become the President of the USA.

Another shortcoming of the FOL approach is the impossibility of handling (contingently) ‘non-denoting terms’ like the President of the Czech Republic in January 2003, or the King of

USA. In other words, translating the sentence

The President of CR does not exist

into an FOL language, we obtain a paradox of existence (being an individual, it would exist, but actually it does not): the sentence is rendered as ¬∃x (x = Pres(CR)), but from Pres(CR) = Pres(CR) by existential generalisation we immediately derive that ∃x (x = Pres(CR)).
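Both puzzles dissolve once the office is treated as an intension, i.e., a (partial) function from states of affairs to individuals, rather than as an individual. The following Python sketch uses toy dictionary ‘worlds’ and hypothetical data; it is an illustration of the idea, not TIL itself:

```python
# Sketch: an individual office modelled as a partial function from
# states of the world to individuals, instead of as an individual.
# The 'worlds' and all names below are hypothetical illustrations.

def president_of_cr(world):
    """The office: its holder in a given world, or None if vacant."""
    return world.get("president_cr")   # may be undefined

world_2004     = {"president_cr": "Klaus"}
world_jan_2003 = {}                    # the office is vacant

# The holder and the office are distinct: the office is the function
# itself, the holder is its (possibly missing) value in a world.
print(president_of_cr(world_2004))      # 'Klaus'
print(president_of_cr(world_jan_2003))  # None
```

On this reading, ‘the President of the CR does not exist’ merely says that the office has no value in the given state of the world, so no paradox of existence arises; and since the holder is never identified with the office, substituting the holder into attitude contexts is blocked.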

A theory formalising reasoning of intelligent agents has to be able to talk about the

objects of agents’ attitudes, to quantify over them, and to express iterated attitudes and/or self-referential statements, like agent a knows that agent b knows that he believes that P, which in FOL leads to inconsistencies.

Such theories should also make it possible to express the distinction between

analytical and empirical concepts (what is necessary and what is just contingent), to express

empty concepts, to talk about concepts and, last but not least, to express n-ary relations-in-

intension between any entities (not only individuals) of our ontology. While this is beyond the

expressive power of FOL, many richer logical systems with non-standard operators are

proposed: modal, epistemic, intensional, temporal, non-monotonic, paraconsistent, etc. These

logics can be characterised as theories with ‘syntactically driven axiomatisation’. They provide ad hoc axioms and rules that define a set of models, each logic partly solving a particular problem. An ontology language should, however, be universal, highly expressive, with

transparent semantics and meaning driven axiomatisation. We have such a system at hand:

the system of transparent intensional logic (TIL).

3. Transparent Intensional Logic (TIL)

3.1 Hierarchy of Types

TIL is a logic that does not use any non-standard operators; in this sense it is classical.

However, its expressive power is very high: formalisation of meaning is comprehensive, with

transparent semantics, close to natural language. Notation is an adjusted objectual version of

Church’s typed λ-calculus, where all the semantically salient features are explicitly present.

The entities we can talk about are in TIL organised into a two-dimensional hierarchy of types.

This enables us to logically handle structured meanings as higher-order, hyper-intensional

abstract objects, thus avoiding inconsistency problems stemming from the need to mention

these objects within the theory itself. Hyper-intensionally individuated structured meanings

are procedures, structured from the algorithmic point of view, known as TIL constructions.

Due to typing, any object of any order can be safely, not only used, but also mentioned within

the theory.

On the ground level of the type-hierarchy, there are set-theoretical entities

unstructured from the algorithmic point of view belonging to a type of order 1. Given a so-

called epistemic base of atomic types (ο-truth values, ι-individuals, τ-time points (or real numbers), ω-possible worlds), mereological complexity is increased by an induction rule of forming partial functions: where α, β1,…,βn are types of order 1, the set of partial mappings from β1 ×…× βn to α, denoted (α β1…βn), is a type of order 1 as well.

TIL is an open-ended system. The above epistemic base {ο, ι, τ, ω} was chosen because it is apt for natural-language analysis, but in the case of mathematics a (partially)

distinct base would be appropriate; for instance, the base consisting of natural numbers, of

type ν, and truth-values. Derived types would then be defined over {ν, ο}.

A collection of constructions that construct entities of order 1, denoted by *1, serves as a base for the induction rule: any collection of partial functions (α β1…βn) involving *1 in their domain or range is a type of order 2. Constructions belonging to the type *2 that identify entities of order 1 or 2, and partial functions involving such constructions, belong to a type of order 3. And so on, ad infinitum.

Example: Binary mathematical functions like addition (+) and division (:) are mappings of type (τττ). The set of prime numbers is a mapping of type (ον): the characteristic function that associates each natural number with a truth-value: True, in case the number is a prime, False otherwise.
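The functional types just illustrated can be encoded as nested tuples. A sketch, using our own tuple encoding (result, arg1, …, argn) rather than official TIL notation:

```python
# Sketch of order-1 type formation over the bases mentioned above.
# The tuple encoding is our own convention, not TIL notation.

def functional(result, *args):
    """The functional type (result arg1 ... argn)."""
    return (result, *args)

O, I, T, W, N = "ο", "ι", "τ", "ω", "ν"   # truth-values, individuals,
                                          # times/reals, worlds, naturals

addition_type = functional(T, T, T)   # (τττ): binary function on reals
prime_type    = functional(O, N)      # (ον): a characteristic function

# The set of primes as a characteristic function of type (ον):
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(7), is_prime(8))   # True False
```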

3.2 Constructions

Constructions are structured from the algorithmic point of view; they are procedures

consisting of instructions specifying the way of arriving at lower-level (less-structured)

entities. Since constructions are abstract, extra-linguistic entities, they are reachable only via a

verbal definition. The ‘language of constructions’ is a modified version of the typed λ-

calculus, where Montague-like λ-terms denote, not the functions constructed, but the

constructions themselves. The modification is extensive. Church’s λ-terms form part of his

simple type theory, whereas our λ-terms belong to a ramified type theory [22].

Constructions qua procedures operate on input objects (of any type, even higher-order

constructions) and yield as output objects of any type. One should not conflate using constructions as constituents of composed constructions with mentioning constructions that enter as input objects into composed constructions; we thus have to strictly distinguish between using and mentioning constructions. The latter is, in principle, achieved by using atomic

constructions. A construction is atomic if it is a procedure that does not contain as a

constituent any other construction but itself. There are two atomic constructions that supply

objects (of any type) on which complex constructions operate: variables and trivialisations.

Variables are constructions that construct an object dependently on valuation: they v-

construct. Variables can range over any type. If c is a variable ranging over constructions of

order 1 (type *1), then c belongs to *2, the type of order 3, and constructs a construction of order 1 belonging to *1: the type of order 2. When X is an object of any type, the trivialisation of X, denoted 0X, constructs X without the mediation of any other construction. 0X is the atomic concept of X: it is the primitive, non-perspectival mode of presentation of X.

There are two compound constructions, which consist of other constituents:

composition and closure. Composition is the procedure of applying a function f to an

argument A, i.e., the instruction to apply f to A to obtain the value (if any) of f at A. Closure is

the procedure of constructing a function by abstracting over variables, i.e., the instruction to

do so. Finally, higher-order constructions can be used twice over as constituents of composed

constructions. This is achieved by a fifth construction called double execution (2C). For instance, if a variable c belongs to *2, the type of order 3, and constructs a construction of order 1 belonging to *1, the type of order 2, then 2c constructs an entity belonging to a type of order 1.


3.3 Definitions

Definition 1 (Construction)

i) Variables x, y, z, … construct objects of the respective types dependently on valuations v;

they v-construct.

ii) Trivialisation: Where X is an object whatsoever (an extension, an intension or a construction), 0X constructs X.

iii) Closure: If x1, x2, …, xn are pairwise distinct variables that v-construct entities of types α1, α2, …, αn, respectively, and Y is a construction that v-constructs an entity of type β, then [λx1…xn Y] is a construction called closure, which v-constructs a partial function of type (β α1…αn) mapping α1 ×…× αn to β.

iv) Composition: If X v-constructs a function f of a type (β α1…αn), and Y1,…,Yn v-construct entities A1, …, An of types α1,…,αn, respectively, then the composition [X Y1 … Yn] v-constructs the value (an entity, if any, of type β) of the (partial) function f on the argument 〈A1, …, An〉. Otherwise the composition [X Y1 … Yn] does not v-construct anything: it is v-improper.

v) Double execution: If X is a construction of order n, n ≥ 2, that v-constructs a construction X’ (of order n–1), then 2X v-constructs the entity v-constructed by X’. Otherwise the double execution 2X is v-improper.

vi) Nothing is a construction, unless it so follows from i) through v).
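Definition 1 is algorithmic enough to be rendered executable. The following Python sketch (our own toy encoding, using None for v-improperness and ignoring the type discipline) mirrors clauses i) through iv): variables, trivialisation, composition and closure.

```python
# Toy executable rendering of Definition 1: constructions as objects
# with a method v-constructing an entity under a valuation v (a dict).
# None models v-improperness.  A sketch, not an official TIL engine.

class Variable:
    def __init__(self, name): self.name = name
    def construct(self, v): return v[self.name]

class Trivialisation:                       # 0X: supplies X directly
    def __init__(self, obj): self.obj = obj
    def construct(self, v): return self.obj

class Composition:                          # [X Y1 ... Yn]
    def __init__(self, f, *args): self.f, self.args = f, args
    def construct(self, v):
        fn = self.f.construct(v)
        vals = [a.construct(v) for a in self.args]
        if fn is None or None in vals:
            return None                     # v-improper
        return fn(*vals)

class Closure:                              # [λx1 ... xn Y]
    def __init__(self, params, body): self.params, self.body = params, body
    def construct(self, v):
        def fn(*vals):
            return self.body.construct({**v, **dict(zip(self.params, vals))})
        return fn

# [λx [0+ x 01] 05] constructs the number 6:
x = Variable("x")
succ = Closure(["x"], Composition(Trivialisation(lambda a, b: a + b),
                                  x, Trivialisation(1)))
print(Composition(succ, Trivialisation(5)).construct({}))          # 6

# [0: x 00] is v-improper: division by zero yields no value.
div = Trivialisation(lambda a, b: a / b if b else None)
print(Composition(div, x, Trivialisation(0)).construct({"x": 1}))  # None
```

Note how partiality propagates: a composition whose function or argument fails to v-construct anything is itself v-improper, exactly as clause iv) requires.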

Definition 2 (Ramified hierarchy)

Let B be a base, i.e. a collection of pair-wise disjoint, non-empty sets.

T₁ (types of order 1)

i) Every member of B is an elementary type of order 1 over B.

ii) Let α, β₁, ..., βₘ (m > 0) be types of order 1 over B. Then the collection (α β₁ ... βₘ) of all m-ary (total and partial) mappings from β₁ × ... × βₘ into α is a functional type of order 1 over B.

iii) Nothing is a type of order 1 over B unless it so follows from i) and ii).

Cₙ (constructions of order n)

i) Let x be a variable ranging over a type of order n. Then x is a construction of order n over B.

ii) Let X be a member of a type of order n. Then ⁰X, ²X are constructions of order n over B.

iii) Let X, X₁, ..., Xₘ (m > 0) be constructions of order n over B. Then [X X₁ ... Xₘ] is a construction of order n over B.

iv) Let x₁, ..., xₘ, X (m > 0) be constructions of order n over B. Then [λx₁ ... xₘ X] is a construction of order n over B.

Tₙ₊₁ (types of order n + 1)

Let *ₙ be the collection of all constructions of order n over B.

i) *ₙ and every type of order n are types of order n + 1.

ii) If m > 0, and α, β₁, ..., βₘ are types of order n + 1 over B, then (α β₁ ... βₘ) (see T₁ ii)) is a type of order n + 1 over B.

iii) Nothing is a type of order n + 1 over B unless it so follows from i), ii).

Examples (a) The function +, defined on natural numbers (of type ν), is not a construction. It is a mapping of type (ν νν), i.e., a set of triples, the first two members of which are natural numbers, while the third member is their sum. The simplest construction of this mapping is ⁰+. (b) The composition [⁰+ x ⁰1] v-constructs the successor of any number x. (c) The closure λx [⁰+ x ⁰1] constructs the successor function. (d) The composition of this closure with ⁰5, i.e., [λx [⁰+ x ⁰1] ⁰5], constructs the number 6. (e) The composition [⁰: x ⁰0] does not v-construct anything for any valuation of x; it is v-improper. (f) The closure λx [⁰: x ⁰0] is not improper, as it constructs something, even though it is only a degenerate function, viz. one undefined at all its arguments.
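The partiality illustrated in examples (b) through (f) can be sketched operationally: a composition may be v-improper, i.e., fail to deliver any value, while the closure containing it remains a perfectly good (if degenerate) function. The following Python sketch is only illustrative; the names and the exception-based modelling of improperness are not part of TIL.

```python
# Compositions as applications that may be v-improper, i.e., fail to
# deliver any value.  Improperness is modelled by an exception.
# (A minimal illustrative sketch; names are not part of TIL.)

class Improper(Exception):
    """Raised when a composition is v-improper."""

def compose(f, *args):
    """[X Y1 ... Yn]: the value (if any) of f at the arguments."""
    try:
        return f(*args)
    except ZeroDivisionError:
        raise Improper from None

successor = lambda x: compose(lambda a, b: a + b, x, 1)       # λx [⁰+ x ⁰1]
print(successor(5))   # the composition with ⁰5 constructs 6

divide_by_zero = lambda x: compose(lambda a, b: a / b, x, 0)  # λx [⁰: x ⁰0]
# The closure itself is a good (degenerate) function, cf. example (f);
# only its applications, cf. example (e), are improper:
try:
    divide_by_zero(7)
except Improper:
    print('improper')
```

Note that `divide_by_zero` itself is constructed without failure; improperness arises only when the closure is composed with an argument.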

The constructions ⁰+, [⁰+ x ⁰1], λx [⁰+ x ⁰1], [λx [⁰+ x ⁰1] ⁰5], [⁰: x ⁰0], λx [⁰: x ⁰0], all mentioned above, are members of *₁. When IMP is a set of v-improper constructions of order 1, i.e., when IMP is an object of type (ο*₁), the composition [⁰IMP ⁰[⁰: x ⁰0]] is a member of *₂, and it constructs the truth-value True. The constituent ⁰[⁰: x ⁰0] of this composition (a member of *₂) is an atomic proper construction that constructs [⁰: x ⁰0], a member of *₁. It is atomic, because the construction [⁰: x ⁰0] is not used here as a constituent but only mentioned as an input object. For further details, see [6], [22].

If ARITH-UN is a set of arithmetic unary functions, then the composition [⁰ARITH-UN ²c] v-constructs True if c v-constructs [λx [⁰+ x ⁰1]]. The double execution ²c v-constructs what is v-constructed by [λx [⁰+ x ⁰1]], i.e., the arithmetic successor function.

Notational conventions An object A of the type α is called an α-object, denoted A/α. That a construction C constructs an α-object will be denoted C → α. We use infix notation without trivialisation for the truth-value connectives ∧ (conjunction), ∨ (disjunction), ⊃ (implication), for the identity sign = and for the binary number relations ≥, <, >, ≤.

Thus, for instance, we can write:

c → *₁, c / *₂, ²c → (ττ), ²c / *₃,
ARITH-UN / (ο(ττ)), ⁰ARITH-UN / *₁, [⁰ARITH-UN ²c] → ο, [⁰ARITH-UN ²c] / *₃,
x → τ, x / *₁, ⁰+ / *₁, ⁰+ → (τττ), ⁰1 / *₁, ⁰1 → τ,
[λx [⁰+ x ⁰1]] / *₁, [λx [⁰+ x ⁰1]] → (ττ).

Definition 3 ((α-)intension, (α-)extension)

(α-)intensions are members of a type (αω), i.e., functions from possible worlds to the arbitrary type α. (α-)extensions are members of the type α, where α is not equal to (βω) for any β, i.e., extensions are not functions from possible worlds.

Remark Intensions are frequently functions of the type ((ατ)ω), i.e., functions from possible worlds to chronologies of the type α (in symbols: ατω), where a chronology is a function of type (ατ).

Examples of intensions

• Being happy is a property of individuals / (οι)τω.
• The President of the Czech Republic is an individual office (‘individual concept’) / ιτω.
• That Charles is happy is a proposition / οτω.
• Knowing is an attitude of an individual to a construction, i.e., a relation that is a higher-order intension / (ο ι *ₙ)τω.

3.4 Logical Analysis

We adhere to the constraint on natural-language analysis dictated by the principle of subject

matter: an admissible analysis of an expression E is a construction C such that C uses, as its

constituents, constructions of just those objects that E mentions, i.e., the objects denoted by

sub-expressions of E (for details, see [16, 17]). Any such analysis is an adequate analysis of

E, the best one relative to a conceptual system determined by a set of atomic constructions

[19]. The principle is central to our general three-step method of logical analysis of language:

(i) Type-theoretical analysis Assign types to the objects mentioned, i.e., only those that are

denoted by sub-expressions of E, and do not omit any semantically self-contained sub-

expression of E, i.e., use all of them.


(ii) Synthesis Compose constructions of these objects so as to construct the object D denoted

by E.

(iii) Type checking Use the assigned types for control so as to check whether the various

types are compatible and, furthermore, produce the right type of object in the manner

prescribed by the analysis.

A construction of an intension is usually of the form λwλt X, w → ω, t → τ. If C is a construction of an intension Int, the composition [[C w] t], the intensional descent of Int to its extension (if any) at w, t, will be abbreviated C_wt.
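The intensional descent just introduced can be modelled as two successive applications, first to a world, then to a time. The following sketch uses invented data ('w1', the year 2000) purely for illustration.

```python
# An intension modelled as a curried function world -> time -> value;
# the descent [[C w] t] is two successive applications.  Data invented.

happy = lambda w: lambda t: w == 'w1' and t >= 2000   # a toy proposition

def descent(intension, w, t):
    """C_wt, i.e. [[C w] t]: the extension of C at the state of affairs w, t."""
    return intension(w)(t)

print(descent(happy, 'w1', 2005))   # True: the proposition holds at w1 after 2000
print(descent(happy, 'w2', 2005))   # False
```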

Example of analysis We are going to analyse the sentence

“The President of USA is G.W. Bush”.

(i’) President-of / (ιι)τω (an empirical function that dependently on the states of affairs assigns an individual to an individual), USA / ι (for the sake of simplicity), the President of USA / ιτω (an individual office), G.W.Bush / ι, = / (οιι) (the identity of individuals). The whole sentence denotes a proposition / οτω.

(ii’) λwλt [⁰President-of_wt ⁰USA] → ιτω (the individual office, the role PUSA)
[λwλt [⁰President-of_wt ⁰USA]]_wt → ι (the occupant of the office PUSA at w, t)
[⁰= [λwλt [⁰President-of_wt ⁰USA]]_wt ⁰G.W.Bush] → ο
λwλt [⁰= [λwλt [⁰President-of_wt ⁰USA]]_wt ⁰G.W.Bush] → οτω.

(iii’) λwλt [⁰= [λwλt [⁰President-of_wt ⁰USA]]_wt ⁰G.W.Bush]
          (ιι) ι
          (οιι) ι ι
          ο
Abstracting over t: (οτ)
Abstracting over w: ((οτ)ω), i.e., οτω.

When being the President of USA, G.W. Bush is identical with the individual that holds the office PUSA. If, however, John Kerry wanted to become the President of USA, he certainly did not want to become G.W. Bush. He simply wanted to hold the office PUSA, i.e., he is related not to the individual, but to the individual office. Now the paradoxical argument mentioned in Chapter 2 is easily resolved:

“John Kerry wanted to become the President of USA”
λwλt [⁰Want_wt ⁰J.Kerry [λwλt [⁰Become_wt ⁰J.Kerry λwλt [⁰President-of_wt ⁰USA]]]],

where Want / (ο ι οτω)τω, Become / (ο ι ιτω)τω.

The construction λwλt [⁰President-of_wt ⁰USA], i.e., a concept of the President of USA, is in the de dicto supposition [6], ‘talking about’ the office itself, not its occupant in w, t.

Knowing is a relation-in-intension of an agent a to the meaning of the embedded clause, i.e., to the construction of the respective proposition [8]. Thus when the President of USA knows that John Kerry wanted to become the President of USA, he is related to the construction of the proposition, and the former concept of the President of USA is used de re, the latter de dicto.

The whole argument is analysed as follows:

λwλt [⁰= [λwλt [⁰President-of_wt ⁰USA]]_wt ⁰G.W.Bush]
λwλt [⁰Know_wt [λwλt [⁰President-of_wt ⁰USA]]_wt ⁰[λwλt [⁰Want_wt ⁰J.Kerry [λwλt [⁰Become_wt ⁰J.Kerry λwλt [⁰President-of_wt ⁰USA]]]]]]

Now the former concept of the President of USA is ‘free’ for substitution: we can substitute ⁰G.W.Bush for [λwλt [⁰President-of_wt ⁰USA]]_wt, thus deducing that G.W. Bush knows that John Kerry wanted to become the President of USA, but not that he wanted to become G.W. Bush:

λwλt [⁰Know_wt ⁰G.W.Bush ⁰[λwλt [⁰Want_wt ⁰J.Kerry [λwλt [⁰Become_wt ⁰J.Kerry λwλt [⁰President-of_wt ⁰USA]]]]]]

The undesirable substitution of ⁰G.W.Bush for the latter occurrence of the construction λwλt [⁰President-of_wt ⁰USA] is blocked.

4. Theory of Concepts

The category of concepts has been almost neglected in modern logic (perhaps only Bolzano, Frege and Church studied the notion of a concept). A new impulse to examining concepts came from computer science. The Finnish logician Rauli Kauppi [15] axiomatised the classical conception of a concept as an entity determined by its extent and content. This theory is based on the (primitive) relation of intensional containment. The Ganter–Wille theory [9] defines a formal concept as a couple (extent, content) and makes use of the classical law of inversion between extent and content, which holds for the conjunctive composition of attributes. Due to this law a partial ordering can be defined on the set of formal concepts, which establishes a concept lattice. Actually, ontologies viewed as classifications are based on this framework. All these classical theories make use of the FOL apparatus, classifying relations between concepts that are not themselves ontologically defined.

Our conception defines a concept as a closed construction [5, 18, 19], an algorithmically structured procedure. An analogous approach can also be found in [20]. We consider not only general concepts of properties, but also concepts of a proposition, of an office, of a number, etc. Simply, any closed construction (even an improper one) is a concept. A comparison of the procedural theory of concepts with the classical set-theoretical one can be found in [7].

When building ontologies, we aim at a conceptual analysis of the entities talked about in the given domain. In TIL, the analysis consists in formalising the meaning of an expression, i.e., in finding the construction of the denoted entity. If a sentence has a complete meaning, the construction is a complete instruction of how to evaluate the truth-conditions in any state of affairs w, t. The meaning is a closed construction of the proposition of the form λwλt C, where C does not contain any free variables except w, t, and constructs a truth-value. However, not all sentences of natural language denote propositions. Sometimes we are not able to evaluate the truth-conditions without knowing the context (linguistic, or the situation of utterance). In such a case the respective construction is open: it contains free variables. For instance, the sentence “He is happy” does not denote a proposition. Its analysis λwλt [⁰Happy_wt x] contains a free variable x. Only after x is evaluated by the context supplying the respective individual do we obtain a proposition. Thus the sentence does not express a concept of a proposition. On the other hand, sentences with a complete meaning are complete instructions for arriving at the proposition. They express concepts of a proposition.

Constructions (meanings) are assigned to expressions by linguistic convention. Closed constructions are concepts assigned to expressions with a complete meaning. Natural language is not, however, perfect. In the vernacular we often confuse concepts with expressions. We say that the concept of a computer has been invented, or that the concept of a whale changed, and so on. As abstract entities, concepts cannot change or be invented; they can only be discovered. We should rather say that new expressions have been invented to express the respective concept, or that the expression changed its meaning. Moreover, there are homonyms (expressions with more than one concept assigned) and synonyms (several expressions expressing the same concept). Anyway, understanding an expression, we know the respective concept, we know what to do, which does not, however, mean that we know the result of the procedure.

We have to make the notion of a concept still more precise before defining the concept. Constructions are hyper-intensionally individuated procedures, which is a fine-grained explication of meaning, but from the conceptual point of view it is rather too fine-grained. Some constructions are almost identical, not distinguishable in a natural language, though not strictly identical. We define a relation of quasi-identity on the collection of all constructions, and say that quasi-identical constructions are those that are either α-equivalent or η-equivalent. For instance, the constructions λx [x > ⁰0], λy [y > ⁰0], λz [z > ⁰0], etc., are α-equivalent. They define the class of positive numbers in a conceptually indistinguishable way. Similarly, η-equivalent constructions are conceptually indistinguishable, like ⁰+ and λxy [⁰+ x y], where the latter is an η-expansion of the former. Each equivalence class of constructions can be ordered, and we say that the first one is the construction in the canonical normalised form. A concept is then defined as a canonical closed construction; the other quasi-identical constructions point at the concept.

Definition 4 (open / closed construction)

Let C be a construction. A variable x is ⁰-bound in C if it is a subconstruction of a construction C’ that is mentioned by trivialisation. A variable y is λ-bound in C if it is a subconstruction of a closure of the form λy… and y is not ⁰-bound. A variable is free in C if it is neither ⁰-bound nor λ-bound. A construction without free variables is closed.

Examples. The construction [⁰+ x ⁰1] is open: variable x is free here. The construction λx [⁰+ x ⁰1] is closed: variable x is λ-bound here. The construction ⁰[λx [⁰+ x ⁰1]] is closed: variable x is ⁰-bound here.

Definition 5 (concept)

A concept is a closed construction in the canonical form.
A concept C₁ is contained in a concept C₂ if C₁ is used as a constituent of C₂.
The content of a concept C is the set of concepts contained in C.
The extent of a concept C is the entity constructed by C.
The extent of an empirical concept C in a state of the world w, t is the value in w, t of the intension constructed by C (i.e., C_wt).
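The notions of containment and content from Definition 5 can be sketched by encoding constructions as nested tuples and collecting constituents. The encoding below ('0' for trivialisation, 'lam' and 'comp' as tags) is a toy illustration, not the official TIL syntax; note that parts inside a trivialisation are mentioned, not used, so the walk does not descend into them.

```python
# Toy encoding of constructions as nested tuples: ('0', X) is the
# trivialisation of X; other tuples list a tag and their constituents.
# (An illustrative sketch, not the official TIL syntax.)

def constituents(c):
    """All subconstructions used as constituents of c, including c itself
    (cf. Definition 5: the content of a concept).  We do not descend
    into a trivialised part, which is mentioned rather than used."""
    result = {c}
    if isinstance(c, tuple) and c[0] != '0':
        for part in c[1:]:
            result |= constituents(part)
    return result

# λwλt [⁰President-of_wt ⁰USA] as a nested structure:
pusa = ('lam', 'w', 't', ('comp', ('0', 'President-of'), ('0', 'USA')))
content = constituents(pusa)
print(('0', 'USA') in content)            # True: ⁰USA is contained in the concept
print(('0', 'President-of') in content)   # True
```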

Examples.

• The concept of the greatest prime is strictly empty; it does not have any extent:
[⁰Sing λx [[⁰Prime x] ∧ ∀y [[⁰Prime y] ⊃ [x ≥ y]]]].
(The function Sing, ‘the only x such that’, returns the only member of a singleton; on other sets (the empty set or sets of more than one member) it does not return anything.)
• The content of this concept is:
{⁰Sing, ⁰Prime, ⁰∀, ⁰∧, ⁰⊃, ⁰≥, λx [[⁰Prime x] ∧ ∀y [[⁰Prime y] ⊃ [x ≥ y]]], and the concept itself}.
• The empirical concept of the President of the USA identifies the individual office:
λwλt [⁰President-of_wt ⁰USA] → ιτω.
• Its content is the set {λwλt [⁰President-of_wt ⁰USA], ⁰President-of, ⁰USA}.
• Its extent is the office.
• Its current extent is G.W. Bush. The concept used to be empirically empty before the year 1789.


To make the Web search more effective, important conceptual properties and relations should be followed. Among them, perhaps the most important is the relation called subsumption in Description logic (intensional containment in [15], known also as the subconcept–superconcept relation). Conceptual relations are analytical. In other words, understanding sentences like “A cat is a feline”, “Whales are mammals”, “No bachelor is married”, we do not have to investigate the state of the world (or search the Web) to evaluate them as being true in any state of the world. This relation can be defined extensionally, i.e., in terms of the extents of concepts. For instance, the property of being a cat has as its requisite the property of being a feline: necessarily, in each state of affairs the population of cats is a subset of the population of felines. Or, necessarily, the concept of the President of USA subsumes the concept of the highest representative of USA. We have seen that, with the exception of DL, languages based on FOL cannot specify this important relation. Subsumption is defined in TIL as follows: Let C₁, C₂ be empirical concepts. Then C₁ subsumes C₂, denoted C₁ ≥ C₂, iff in all states of affairs w, t the extent of C₁ is contained in the extent of C₂. Formally:

Definition 6 (subsumption)

[⁰Subsume ⁰C₁ ⁰C₂] = ∀w∀t ∀x [[C₁_wt x] ⊃ [C₂_wt x]],

where Subsume / (ο *ₙ *ₙ), C₁ → (οα)τω, C₂ → (οα)τω, x → α.

Thus, for instance, since the concept of bachelor subsumes the concept of being never married ex definitione, once we obtain the information that Mr. X is a bachelor, we will no longer search for the information whether X is, or ever has been, married.
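The extensional reading of Definition 6 (in every state of affairs, the extent of C₁ is a subset of the extent of C₂) can be sketched directly. The worlds and populations below are made-up test data; intensions are modelled as functions from a state of affairs to a set of individuals.

```python
# Intensions as functions from a state of affairs to an extension (a set).
# Subsumption (Definition 6): C1 subsumes C2 iff in every state the extent
# of C1 is a subset of the extent of C2.  Sample data are invented.

states = ['w1t1', 'w2t1']
cat    = {'w1t1': {'Tom'},        'w2t1': {'Tom', 'Felix'}}.get
feline = {'w1t1': {'Tom', 'Leo'}, 'w2t1': {'Tom', 'Felix', 'Leo'}}.get

def subsumes(c1, c2, states):
    """Check ∀w∀t ∀x [[C1_wt x] ⊃ [C2_wt x]] over the given states."""
    return all(c1(s) <= c2(s) for s in states)

print(subsumes(cat, feline, states))   # every cat is a feline here: True
print(subsumes(feline, cat, states))   # but not vice versa: False
```

A real implementation would of course quantify over all states of affairs, which is exactly what makes the relation analytical rather than empirically checkable; the sketch only tests a finite sample.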

To follow such necessary relations between concepts, each important term of a domain should be provided with an ontological definition of the entity denoted by the term. An ontological definition is a complex concept composing, as constituents, primitive concepts of a conceptual system [19], the given ontology. For instance, the concept woman can be defined using the primitive concepts person and female. The concept mother can be further defined using the former, the existential quantifier and the concept child:

⁰woman = λwλt λx [[[⁰sexof_wt x] = ⁰female] ∧ [⁰person_wt x]],
⁰mother = λwλt λx [[⁰woman_wt x] ∧ ∃y [[⁰childof_wt x] y]],

where x → ι, y → ι, woman / (οι)τω, female / (οι)τω, sexof / ((οι)τω ι)τω, person / (οι)τω, childof / ((οι)ι)τω.

From the above definitions it follows that, necessarily, each woman is a female person, each mother is a woman having children, etc.

Other analytical relations between concepts that can be defined using Web ontology languages based on Description logic are equivalence and incompatibility (disjointness in DL):

Definition 7 (equivalence, incompatibility) Concepts C₁, C₂ are equivalent if they have exactly the same extent (construct the same entity). Concepts C₁, C₂ are incompatible if in no state of affairs w, t is the extent of C₁ a part of the extent of C₂, and vice versa.

Note that concepts C₁, C₂ are equivalent iff C₁ ≥ C₂ and C₂ ≥ C₁; their contents, however, need not be identical.

Examples:

• The concepts bachelor and married man are incompatible.
• The concepts of the proposition that it is not necessary that if the President of USA is a Republican then he attacks Iraq, and of the proposition that it is possible that the President of USA is a Republican and he does not attack Iraq, are equivalent.

Some procedures can even fail, not producing any output; they are empty concepts. There are several degrees of emptiness: from strict emptiness, when the respective concept does not identify anything (like the greatest prime), to emptiness, when the respective procedure identifies an empty set. Empirical concepts always identify a non-trivial intension of a type ατω. They cannot be (strictly) empty; they can rather be empirically empty, when the identified intension does not have any value, or its value is an empty set, in the current state of the world w, t. A concept empirically empty in the actual state of affairs is, e.g., the King of France. In order not to search for extensions of empty concepts, classes of empty concepts should be defined:

• A strictly empty concept does not have an extension (the construction fails).
• An empty concept has an empty extension.
• An empirical concept C is strictly empty in w, t if it does not have an extent in w, t (the intensional descent C_wt of the identified intension is improper).
• An empirical concept C is empty in w, t if its extent in w, t is an empty set (the intensional descent C_wt constructs an empty set).

5. TIL Knowledge Representation in a Multi-Agent World

A rational agent in a multi-agent world is able to reason about the world (what holds true and what does not), about its own cognitive state, and about those of other agents. A theory formalising the reasoning of autonomous intelligent agents thus has to be able to ‘talk about’ and quantify over the objects of propositional attitudes, i.e., the structured meanings (constructions) of the embedded clauses, to iterate attitudes of distinct agents, and to express self-referential statements, like agent a knows that b knows that he believes that P. Last but not least, the theory has to respect the different inferential abilities of particular agents.

The agents have to communicate in a (pseudo-)natural language, in order to understand each other, and to provide relevant information to whomever, whenever and wherever needed.

Obviously, no classical set-theoretical theory is able to meet these goals.

There are three kinds of knowing:

• implicit (of an agent who is a logical/mathematical genius, which leads to an ‘explosion’

of knowledge and the paradox of omniscience)

• explicit (which deprives an agent of any inferential capabilities)

• inferable (of a realistic agent with some inferential capabilities, who, however, is not

logically omniscient).

Model-theoretic intensional logics (with Kripke or Montague semantics) conceive possible-world propositions (intensions) as the objects of knowing. These approaches are apt for modelling implicit knowledge, but cannot handle explicit or inferable knowledge in an adequate way. Since equivalent formulas are indistinguishable, the problem of logical omniscience cannot be avoided: either an agent is bound to know all the logical consequences of his/her/its known assumptions, or at least their equivalents, which is the tightest restriction of the problem of omniscience obtainable by the set-theoretical approach. Thus, for instance, if an agent a knows that the number of inhabitants in Prague is equal to 1048576 people, he should also assent to the statement that the number of inhabitants in Prague equals 16⁵ (the hexadecimal number 100000). Well, you may say that the agent knows it only “implicitly”. But when ordered to behave according to the latter statement, for instance to organise an emergency service for hexadecimal 100000 people, he must be aware of the fact in order to be able to act. Otherwise the system becomes chaotic and inconsistent.

Syntactic approaches, adopting formulas as the objects of knowing, are the other extreme. Though they are fine-grained enough to be suitable for modelling explicit knowledge, they are prone to inconsistencies when disquoting formulas, stemming from the need to model self-referential statements and the necessity to mention formulas within the theory. Moreover, an agent a is deprived of any inferential abilities; it is just assigned a set of formulas, its explicit knowledge. Thus the agent a becomes an “agent idiot”, just a passive object. There are some technically sophisticated ways of partly overcoming the above problems. But there is a major philosophical objection to a syntactic approach: when knowing a statement S, the agent is not related to a piece of formal syntax, a formula, but to the meaning of S.

The TIL approach to knowledge representation is technically as fine-grained as the syntactic approach, with two major distinctions:

• When knowing that S, an agent is related to the meaning of S, i.e., to a TIL construction, the hyper-intensionally individuated mode of presentation of the respective proposition.
• We do not restrict the set of formulas the agent is said to know; instead, we compute the inferable knowledge relative to the inference rule(s) the agent is able to use.

Thus knowing is an object of type (ο ι *ₙ)τω, a relation-in-intension of an individual to the respective construction. The analysis of the above statement comes out as follows:

λwλt [⁰Know_wt ⁰a ⁰[λwλt [⁰Card λx [⁰Inh_wt x ⁰Prague] = ⁰1048576₁₀]]].

The agent a is related to the construction

[λwλt [⁰Card λx [⁰Inh_wt x ⁰Prague] = ⁰1048576₁₀]],

which is mentioned here. Although the construction

[λwλt [⁰Card λx [⁰Inh_wt x ⁰Prague] = ⁰100000₁₆]]

identifies the same truth-conditions, it is a distinct procedure, and the agent a does not have to be able to evaluate it in a given state of affairs w, t, provided a does not master the rules of transition from the decimal to the hexadecimal number system.
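The point can be illustrated directly: a decimal reading and a hexadecimal reading of the same population figure are two different procedures that happen to produce the same number. The sketch below is illustrative only; the procedural difference is precisely what a hyper-intensional semantics keeps and a possible-world semantics collapses.

```python
# Two distinct procedures producing the same number: a decimal reading
# and a hexadecimal reading of the population figure.  (Illustrative only.)

decimal_reading = lambda: 1048576             # the numeral 1048576, base 10
hex_reading     = lambda: int('100000', 16)   # the numeral '100000', base 16

# The products coincide, so the truth-conditions coincide ...
print(decimal_reading() == hex_reading() == 16 ** 5)   # True
# ... yet the procedures are distinct objects; an agent may master the
# first without mastering the base-16 conversion used by the second.
print(decimal_reading is hex_reading)                  # False
```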

In what follows we just outline the TIL theory of computing agents’ inferable

knowledge relativized to particular inferential abilities, the sets of inference rules the agents

master.

Having an agent a equipped with a finite set of constructions (concepts of propositions) Kexp(a)_wt (a’s current knowledge) and some intelligence (the set of inference rules a masters), we compute the epistemic closure of Kexp(a)_wt. An exterior agent b attempting to draw valid inferences about the interior agent a’s inferential knowledge needs a closure principle to validate his inferences. To specify such a principle, we introduce the function Inf(R) / ((ο*ₙ)(ο*ₙ)) associating an input set C of constructions with the set of constructions derivable from C using a set of rules R.

Let c (c → *ₙ) now stand for an individual inferable piece of knowledge, d (d → (ο*ₙ)) for a stock of knowledge, and let R / (ο(*ₙ(ο*ₙ))) be a set of derivation rules, r → (*ₙ(ο*ₙ)) a particular element of R. The following constructional schema specifies the function Inf(R):

λd λc [[d c] ∨ ∃r [[⁰R r] ∧ (d |–ᵣ c)]],

where (d |–ᵣ c) denotes derivation in accordance with r, i.e., the composition [[r d] = c]. The schema can be read as follows: from any set d of constructions (λd) a construction c is inferable (λc) if c belongs to d ([d c]), or c is derivable from d using a rule r.

For instance, let R contain the rule of disjunctive syllogism, the substitution rule, the β-reduction rule and the rule ²⁰C |– C. Then Inf(R) is defined as follows:

⁰Inf(R) = λd λc [[d c] ∨ [∃c’ [d c’] ∧ [d ^(c,c’)[λwλt [¬(²c’)_wt ∨ (²c)_wt]]]]].

There are technical complications. First, the stock of knowledge constructed by d is usually a set of empirical concepts, i.e., constructions of intensions, propositions of type οτω. Second, since we are talking about the very objects of a’s epistemic attitudes, we need to mention the constructions by trivialising them (which corresponds to calling a subprocedure with formal parameters c, c’). To release the variables c, c’ bound by trivialisation, we have to use the special substitution function Sub, which realises a substitution of actual values for the formal parameters: if applied to constructions C₁, C₂, C₃, Sub / (*ₙ*ₙ*ₙ*ₙ) returns the construction C such that C is the result of substituting C₁ for C₂ in C₃. The double execution of the variables c, c’ ranging over propositional constructions, ²c, ²c’, returns the respective propositions, the intensional descent of which constructs a truth-value. Finally, β-reductions and the rule transforming ²⁰C into C (²⁰C |– C) are to be performed. The upper index ^(c,c’) is a notational abbreviation of these devices.

Thus,

^(c,c’)[λwλt [¬(²c’)_wt ∨ (²c)_wt]]

is to be unpacked as

[⁰Sub [⁰Tr c] ⁰c [⁰Sub [⁰Tr c’] ⁰c’ ⁰[λwλt [¬(²c’)_wt ∨ (²c)_wt]]]].

Example Let a’s knowledge base contain the two facts (i) that Charles is bald and (ii) that Charles is not bald or he is a king. Then, if a currently masters the rules R, he is able to deduce that Charles is a king:

d →ᵥ {…, [λwλt [⁰Bald_wt ⁰Charles]], …, [λwλt [¬[⁰Bald_wt ⁰Charles] ∨ [⁰King_wt ⁰Charles]]], …}
c’ →ᵥ [λwλt [⁰Bald_wt ⁰Charles]]
c →ᵥ [λwλt [⁰King_wt ⁰Charles]]

[⁰Sub [⁰Tr c] ⁰c [⁰Sub [⁰Tr c’] ⁰c’ ⁰[λwλt [¬(²c’)_wt ∨ (²c)_wt]]]] →ᵥ
[λwλt [¬²⁰[λwλt [⁰Bald_wt ⁰Charles]]_wt ∨ ²⁰[λwλt [⁰King_wt ⁰Charles]]_wt]] =
(²⁰C |– C)
[λwλt [¬[λwλt [⁰Bald_wt ⁰Charles]]_wt ∨ [λwλt [⁰King_wt ⁰Charles]]_wt]] =
(β-reduction)
[λwλt [¬[⁰Bald_wt ⁰Charles] ∨ [⁰King_wt ⁰Charles]]].

Above we introduced the notion of a rule as a function of type (*ₙ(ο*ₙ)). We assume that the rules R assigned to a are valid rules of inference and that the function Inf(R) meets the following conditions for any agent a.

• Inf(R) is subclassical: if ϕ is derived from a stock of knowledge Γ, then ϕ is entailed by Γ; i.e., if Cn is the function assigning to Γ the set of its logical consequences, then [⁰Inf(R) Γ] ⊆ [Cn Γ].
• Inf(R) is reflexive: Γ ⊆ [⁰Inf(R) Γ]. (“a does not forget what a already knows.”)
• If Inf(R) is subclassical and reflexive, then it is monotonic: if Γ ⊆ Γ’ then [⁰Inf(R) Γ] ⊆ [⁰Inf(R) Γ’].
• Inf(R) is not idempotent: [⁰Inf(R) [⁰Inf(R) A]] is not in general a subset of [⁰Inf(R) A].

At this point we are able to recursively define the inferable knowledge of an agent a mastering the rules R in a state w, t, using the fixed-point technique. The knowledge of an agent a in the state w, t, whether explicit, inferable or implicit, is a set of propositional constructions (concepts of propositions). The drawing of valid inferences about a’s inferable knowledge is, for any w, t, executed step-wise. (For the sake of simplicity we now omit trivialisations when no confusion can arise.) At step 0 we take a’s explicit knowledge as the base of the induction: K₀(a)_wt = Kexp(a)_wt. Step 1 consists in applying the function Inf(R) to this knowledge, thus obtaining a new set of derived constructions: K₁(a)_wt = [Inf(R) Kexp(a)_wt]. The new set is a superset of the initial knowledge. But it is not necessarily equal to a’s inferable knowledge yet: there may be more inferences to be drawn. Step 2 consists in applying Inf(R) to the result of step 1 to obtain a new set: K₂(a)_wt = [Inf(R) K₁(a)_wt]. By iteration, an increasing sequence of sets of constructions K₁(a)_wt ⊆ K₂(a)_wt ⊆ K₃(a)_wt ⊆ … is obtained, such that each set Kₙ₊₁(a)_wt depends only on the preceding set Kₙ(a)_wt. But at which step will the iteration stop? There are two possibilities. Either there is a step m such that no more constructions can be inferred: Kₘ₊₁(a)_wt = Kₘ(a)_wt, and Kₘ(a)_wt is the supremum of Inf(R). Or else there is no such finite m, the sequence increasing ad infinitum for want of a maximum element. Still, even in the latter case there exists a least upper bound of the sequence:

K∞(a)_wt = ⋃ₖ₌₁^∞ Kₖ(a)_wt.

This potentially infinite set is well-defined: it is the result of a potentially infinite number of finite computational steps, and K∞(a)_wt = [Inf(R) K∞(a)_wt] holds. If the initial set of explicit knowledge is a finite set of constructions, K∞(a)_wt is countable.

In any case, the function Inf(R) is increasing and has a supremum. According to Tarski’s fixed-point theorem, there is a least fixed point of Inf(R) containing Kexp(a)_wt, and since no more inferences can be drawn, this fixed-point set is the whole inferable knowledge of a in w, t.

Definition 8 (inferable knowledge)

• K^0(a)_wt = K^exp(a)_wt
• K^(n+1)(a)_wt = [Inf(R) K^n(a)_wt]
• nothing other …

The whole set of constructions validly inferable by a ⎯ a's inferable knowledge ⎯ is the fixed point of Inf(R):

K^inf(a)_wt = [Inf(R) K^inf(a)_wt]

and it is the least fixed point of Inf(R) containing a's explicit knowledge:

K^inf(a)_wt = µ λx [Inf(R) [x ∪ K^exp(a)_wt]].
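Since every rule in R is valid, Inf(R) is a monotone function on sets of constructions, so for a finite explicit base the least fixed point of Definition 8 can be computed by naive iteration: apply Inf(R) until K^(m+1)(a)_wt = K^m(a)_wt. The following sketch illustrates this; the string "formulas" and the single conjunction-elimination rule are illustrative assumptions standing in for TIL constructions and a genuine rule set R, not part of the theory itself.

```python
from typing import Callable, Set

Formula = str  # stands in for a TIL propositional construction
Rule = Callable[[Set[Formula]], Set[Formula]]  # a valid inference rule

def inferable_knowledge(explicit: Set[Formula], rules: list[Rule]) -> Set[Formula]:
    """Least fixed point of Inf(R) containing the explicit knowledge:
    iterate K^(n+1) = [Inf(R) K^n] until K^(m+1) = K^m."""
    k = set(explicit)                      # K^0 = K^exp
    while True:
        new = set(k)
        for rule in rules:                 # Inf(R): apply every rule in R to K^n
            new |= rule(k)
        if new == k:                       # fixed point reached: K^(m+1) = K^m
            return k
        k = new

# Hypothetical rule: conjunction elimination over string formulas "p&q".
def conj_elim(known: Set[Formula]) -> Set[Formula]:
    out: Set[Formula] = set()
    for f in known:
        if "&" in f:
            left, right = f.split("&", 1)
            out.update({left, right})
    return out

closure = inferable_knowledge({"p&q", "r"}, [conj_elim])
print(sorted(closure))  # ['p', 'p&q', 'q', 'r']
```

Because Inf(R) is increasing, the iteration can only add constructions, and with a finite base it terminates at the supremum described above; with an infinite base the same loop enumerates K^∞(a)_wt step by step without terminating.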

6. Conclusion

By way of conclusion, given an agent a furnished with a stock of recursively enumerable explicit knowledge and a flawless command of only some rules R of inference, there is an upper limit to the new knowledge it would be logically possible for the agent to derive from its old knowledge: it is the closure, a's inferable knowledge. If another agent b masters the same set of rules R but its initial stock of knowledge is distinct, it may arrive at another closure, even if all the concepts of its initial knowledge are equivalent to a's concepts. This is as it should be: a and b would not understand each other unless the set R contains a rule enabling them to "realise" the equivalence.

Without the assumption that every rule the agent uses is valid, we would have to

consider non-monotonic reasoning as well. However, as long as we are modelling knowledge,

which we regard as factive and incapable of giving rise to inconsistent information bases, all

the reasoning must be monotonic. On the other hand, if we wish to model belief we must

allow the agent to use invalid rules of non-monotonic reasoning potentially giving rise to

inconsistencies.

To pursue the research on knowledge management in a multi-agent world, there is still a lot to be done. First, we intend to use the method of conceptual lattices to partially order classes of agents furnished with equivalent concepts and mastering the same set of rules. The method of conceptual lattices should then facilitate the incorporation of dynamic aspects of the system in a plausible way.


We have thus dissolved the problem of logical omniscience. However, an agent is resource-bounded in many other respects, in particular by limited computation time and storage capacity. Thus the following subjects are a matter of further research:

• Involving complexity problems (time and space limitations)
• Dynamic aspects of the system (the assignment of the rules R to an agent is world/time dependent, as is its initial conceptual knowledge)
• Doxastic logic of beliefs (managing hypotheses)
• Belief revision and updating the base b (i.e., transition from a state w, t to w′, t′)
• Non-monotonic reasoning
• Full axiomatisation of a 'multi-agent' logic
• Modelling uncertainty and vagueness

References

[1] Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D. and Patel-Schneider, P.F.: The Description Logic Handbook. Theory, Implementation and Applications. Cambridge: Cambridge University Press, 2003.
[2] Berners-Lee, T.: Information Management Proposal, (referred 10.1.2005) <URL: http://www.w3.org/History/1989/proposal.html>.
[3] Berners-Lee, T. 2000: Semantic Web on XML. XML 2000, Washington DC, (referred 10.1.2005) <URL: http://www.w3.org/2000/Talks/1206-xml2k-tbl/>.
[4] Berners-Lee, T., Hendler, J. and Lassila, O. 2001: The Semantic Web. Scientific American, Vol. 284, No. 5, pp. 28-37.
[5] Duží, M.: Concepts, Language and Ontologies (from the logical point of view). In: Information Modelling and Knowledge Bases XV, Eiji Kawaguchi, Hannu Kangassalo (Eds). Amsterdam: IOS Press, 2004, pp. 193-209.
[6] Duží, M.: Intensional Logic and the Irreducible Contrast between de dicto and de re. ProFil, Vol. 5, No. 1, 2004, pp. 1-34, <URL: http://profil.muni.cz/01_2004/duzi_de_dicto_de_re.pdf>.
[7] Duží, M., Ďuráková, D. and Menšík, M.: Concepts are Structured Meanings. In: Information Modelling and Knowledge Bases XVI, Y. Kiyoki, B. Wangler, H. Jaakkola, H. Kangassalo (Eds). Amsterdam: IOS Press, 2004, pp. 258-276.
[8] Duží, M., Jespersen, B. and Müller, J.: Epistemic Closure and Inferable Knowledge. The Logica Yearbook 2004, L. Běhounek, M. Bílková (Eds). Filosofia Prague, 2005, pp. 125-140.
[9] Ganter, B. and Wille, R.: Formal Concept Analysis. Berlin: Springer-Verlag, 1999.
[10] Gordon, M. J. C. and Melham, T. F. (Eds): Introduction to HOL: A Theorem Proving Environment for Higher Order Logic. Cambridge: Cambridge University Press, 1993.
[11] Hayes, P. and Menzel, Ch. 2001: A Semantics for the Knowledge Interchange Format, (referred 10.1.2005) <URL: http://reliant.teknowledge.com/IJCAI01/HayesMenzel-SKIF-IJCAI2001.pdf>.
[12] Hendler, J., Berners-Lee, T. and Miller, E.: Integrating Applications on the Semantic Web. Journal of the Institute of Electrical Engineers of Japan, Vol. 122, No. 10, 2000, pp. 676-680.
[13] Horrocks, I. and Patel-Schneider, P.F.: Three Theses of Representation in the Semantic Web. WWW2003, May 20-24, Budapest, Hungary, 2003, (referred 10.1.2005) <URL: http://www2003.org/cdrom/papers/refereed/p050/p50-horrocks.html>.
[14] Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B. and Dean, M.: SWRL: A Semantic Web Rule Language Combining OWL and RuleML. W3C Member Submission, May 2004, (referred 10.1.2005) <URL: http://www.w3.org/Submission/SWRL/>.
[15] Kauppi, R.: Einführung in die Theorie der Begriffssysteme. Acta Universitatis Tamperensis A/15, Tampere, 1967.
[16] Materna, P., Duží, M.: Parmenides Principle. Philosophia, Philosophical Quarterly of Israel, Vol. 32 (1-4), May 2005, Bar-Ilan University, Israel, pp. 155-180.
[17] Duží, M., Materna, P.: Logical Form. In: Essays on the Foundations of Mathematics and Logic, G. Sica (Ed.). Monza: Polimetrica International Scientific Publisher, 2005, pp. 115-153.
[18] Materna, P.: Concepts and Objects. Acta Philosophica Fennica, Vol. 63, Helsinki, 1998.
[19] Materna, P.: Conceptual Systems. Berlin: Logos Verlag, 2004.
[20] Moschovakis, Y.: Sense and Denotation as Algorithm and Value. In: J. Oikkonen and J. Vaananen (Eds), Lecture Notes in Logic, #2 (1994). Berlin: Springer, pp. 210-249.
[21] Paulson, L. C.: Isabelle: A Generic Theorem Prover. Number 828 in LNCS. Berlin: Springer, 1994.
[22] Tichý, P.: The Foundations of Frege's Logic. De Gruyter, New York, 1988.
[23] W3C 2001: The World Wide Web Consortium. DAML + OIL Reference Description, (referred 10.1.2005) <URL: http://www.w3.org/TR/daml+oil-reference>.
[24] W3C 2001: The World Wide Web Consortium. Semantic Web Activity, (referred 10.1.2005) <URL: http://www.w3.org/2001/sw/>.
[25] Welcome to the OIL: Description of OIL, (referred 10.1.2005) <URL: http://www.ontoknowledge.org/oil/>.

––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

This research has been supported by the program "Information Society" of the Czech Academy of Sciences,

project No. 1ET101940420 "Logic and Artificial Intelligence for multi-agent systems"
