Theory of natural and man-made disasters


"
Nothing

is more
practical

than a good theory
.
"

(Ludwig Boltzmann)


1 The aim of the research and utilization possibilities

1.1 Timeliness of the topic and its military scientific aspects


The National Security Strategy of the Republic of Hungary [1] states that during the Euro-Atlantic integration process, after the change of the political system, Hungary became a member of such integration organizations, in which the stability of member states is a common interest, and is based on democracy and the rule of law, and on the implementation of human rights and basic liberties. In order to defend them, the nations are ready and able to help each other. However, new threats and challenges have appeared, to which an effective response can only be given through governmental countermeasures coordinating our national efforts, conscious development and flexible use of our capabilities, and wide international cooperation.

As far as my specialized field is concerned (i.e. disaster management, civil protection and the protection against fire), my scientific research study is so abstract that its conclusions and scientific results are valid and applicable in the wide range of the security sciences, like military science and the science of law enforcement. As a consequence, although my research activities originate from the issues of disaster management, the results have a more general value.


The role of disaster management is to maintain safe conditions for life and work. It performs these tasks in the uniform scope of tasks of prevention, response and recovery, integrating them into the country’s security system. Its place is amongst the law enforcement organizations, in close cooperation with the population, public administration, businesses, charitable organizations and all other actors of society.


In Hungary, one of the timeliest national tasks nowadays is the protection against natural and man-made disasters. Public opinion and the political and professional leadership pay special attention to it; it determines the country’s development and fundamentally influences the lives of citizens.

By today, it has become evident that safety and security are not simply a technical problem, but a complex social issue. It is not simply a local issue affecting only a few professions, but a global one. One cannot count on a short-term solution; we have to face long-lasting and protracted challenges.

Safety and security, and within them the protection against natural and man-made disasters, are not only an important and basic human and natural value; at the same time, they also serve international interests.

Having examined Hungary’s social and economic growth, it can be ascertained that unsolved security and disaster management issues can become an obstruction to the country’s development. They may endanger the implementation of basic strategic objectives and damage the nation’s international standing.

In a country that is stable from a security and disaster management aspect, and in its environment, people are neither afraid nor uncertain, the probability of errors is at a socially acceptable level, and therefore people are self-assured.





[1] Government Decision 2073/2004. (IV.15.)



Nowadays, disaster management has to tackle two major challenges. An increased burden is witnessed during the so-called traditional tasks: fires, technical rescue, and the protection against other emergencies or incidents. However, besides the above phenomena, Hungary’s disaster management will have to face quite serious challenges in the not too distant future. These are the disaster management issues of global climate change, critical infrastructure protection, sustainable development and sustainable security, and the fight against terrorism.


How can these conventional and new types of protection tasks be solved?

The up-to-date, sustainable safety and security service seems efficacious, integrating with and enhancing the development of society. Theoretically, the weakest point of a sustainable safety and security service is, in practice, the absence of scientific grounding; it can be found in a disorganized state. The upgrade of these weak points is quite important because of the expectations and requirements originating in Hungary’s membership in NATO and the EU; the compulsion of rapid and continuous adaptation and response forced by globalization; and the absence of the social bases of safety culture and safety awareness. To relieve the tension between central efforts and territorial inequalities, the methods and systems used so far are not suitable; they have evidently run out of internal and external reserves.
The reserve for an up-to-date, sustainable safety service can be scientific research and, beyond its results, the adaptation of methods used by the entrepreneurial and NGO spheres. These are, amongst others, the professionally adapted application of change management, crisis communication, quality control and safety innovation methods.

With the help of the achievements of science and the use of the above methods, new safety/security strategic objectives can also be determined. These include:



- reduction of social, environmental and individual risks, increase of tolerability;
- increase of social satisfaction amongst the population, growth of citizen-friendliness;
- qualitative, sustainable development and safety/security;
- quality-oriented safety service;
- integrated national defense and law enforcement capacity, the use of up-to-date control and planning models;
- improvement of partnership relations with formal and informal communities;
- shift towards a problem-solving service;
- use of best practice;
- intelligent, innovative safety/security.


For the exact scientific examination of natural and man-made disaster phenomena and the protection against them, it is first of all necessary to fix the most important elements of the approach model; i.e. my starting point is the following assumption. The elaboration of the used and [developed] introduced notions will take place in the thematic chapters of the dissertation.




The state of risk systems in the application field of logical risk theory can be described with the so-called fault tree; their behavior can be analyzed with the so-called fault-tree analysis [Henley]. The fault-tree method is now almost half a century old, therefore I regard it in this context as known. In the narrower, mathematical sense of the theory, the use of a fault tree is the same as using a Boolean function, which leads an undesired phenomenon affecting the risk system in question (more precisely, a statement or declaration on its occurrence) back, through logical operations, to certain simpler, so-called prime events that are in our competence. It is quite frequent, and in conflict situations it is definitely typical, that the same event can simultaneously be judged in different ways. Thus, the collision of an aircraft with a skyscraper can be desirable for a terrorist, while for others it is not.
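In this reading, the main event is simply a Boolean function of the prime events. A minimal sketch of such a fault tree in Python; the gate structure and the event names (valve_fails, sensor_fails, operator_error) are illustrative assumptions, not taken from the dissertation:

```python
from itertools import product

def main_event(valve_fails: bool, sensor_fails: bool, operator_error: bool) -> bool:
    """Over-pressure rupture occurs if control fails (sensor OR operator)
    AND the relief valve also fails."""
    failure_to_control = sensor_fails or operator_error   # OR gate
    return failure_to_control and valve_fails             # AND gate

# Enumerating all prime-event combinations reproduces the truth table
# of the Boolean state function.
table = {bits: main_event(*bits) for bits in product([False, True], repeat=3)}
```

Because the function is monotone (built from AND/OR only), any occurrence pattern that activates the main event keeps it active when further prime events occur.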





The fault-tree method, both in its traditional and its more modern form, implicitly assumes that the events of the risk system in question have a fixed logical structure. In other words, it assumes that a risk system, during its contact with its environment, preserves its identity. This is a necessary and inevitable condition of the applicability of the theory.



Logical risk theory takes the explicatum [2] of the risk system in question as granted, therefore it is theoretically unable to describe or analyze its changes in an adequate way. Therefore, a new theory is needed to study risk systems in strong interaction in this sense. In my study, I named it (logical or explicative) conflict theory.



In a conflict situation, participants are affected by undesired effects which (slightly or more significantly) simultaneously hinder their normal behavior. The participants of a conflict may not only be persons, but theoretically all systems for which a state notion can be interpreted, along with a normal or abnormal change of state (behavior, modus operandi). Thus, the application field of a general conflict theory can, in principle, include a geographical unit suffering global warming, just as an undisciplined soldier or his responsible superior, and so on, up to the limit [3] of a paradigm.



Between non-probability risk theory and conflict theory, a third phenomenon can be observed. In this case the entirety of events occurring in individual but numerous different risk systems is examined, namely in an environment in which the individual events, i.e. the risk systems described by the fault trees representing them, are in weak interaction. In such cases, the risk systems preserve their self-identities; they are auto-identical. However, their massive co-effect exceeds the theoretical performance of the traditional fault-tree method. I examine this area with a cellular automaton model.
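To make the weak-interaction picture concrete, here is a toy one-dimensional cellular automaton in which every cell is one auto-identical risk system whose state is 0 (main event passive) or 1 (active). The neighborhood update rule is purely illustrative, not the rule used in the dissertation:

```python
def step(cells: list[int]) -> list[int]:
    """One iteration: a cell stays active, or becomes active when both of its
    neighbors are active (wrap-around neighborhood)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        out.append(1 if cells[i] == 1 or (left == 1 and right == 1) else 0)
    return out

state = [0, 1, 0, 1, 0, 0]
for _ in range(3):
    state = step(state)   # iterate the weakly coupled systems
```

Each cell keeps its own fixed logical structure; only its state depends, by a defined rule, on its neighbors' states.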


Based on the above, the thematic structure of the dissertation covers three research problem scopes:

- logical (non-probabilistic) risk theory of individual isolated events;
- a cellular automaton model of auto-identical tessellation safety risk systems;
- logical conflict theory.

1.2 The paradigm convention

The term "paradigm", in the general philosophy-of-science sense, by a slight generalization of Kuhn’s paradigm term, is interpreted in the following way:


Paradigm, in the philosophy-of-science (and not linguistic) sense of the word, means an approach model of a discipline or a branch of a discipline of science, whose components and criteria are:


1. Phenomena, examined by the relevant branch of science, i.e. on which it makes valid ascertainments.

2. Methods, by which the relevant science examines the phenomena.

3. Theory, i.e. the logical system of ascertainments regarded as valid by the relevant science, whose elements are a language, a truth criterion, axioms, definitions and theses.

4. Model, i.e. a system of things, on the elements of which the statements of the relevant science regarded as valid automatically prove to be true, according to the model’s definition.

5. Relevance term, based on which it can be determined which phenomena the relevant science regards as suitable for examination.




[2] The notion of explicatum of risk systems has a central significance in the theory.

[3] On an even higher level of abstraction, we can, and we will, see that we have to speak of conflicts of two events, even of two theories, approaches or paradigms.



6. Competence term, based on which it can be determined in which issues the relevant science regards itself as competent to make a statement and take a position.

7. A value criterion, based on which the relevant science determines for itself what it regards as valuable, what values it accepts.


I conducted my research considering the above paradigm components.

1.3 My scientific objectives

My scientific objective is to prove that the theory of disasters can be construed in a general paradigmatic approach, which is capable of simultaneously applying the mutual part of the paradigms of different specialized disciplines of sometimes opposite approaches (in a philosophy-of-science sense). This mutual part, according to my assumption, is the direct logical approach.


Before the demonstration of the results of my dissertation, due to the nature of the subject, I had to deal with the issues of coordinating abstractions and specifics in principle.

The theory of disasters, in the sense of the paradigm elaborated in my dissertation, cannot be discussed within the framework of any branch of science as a discipline, because to interpret and manage disasters the mutual content part of the paradigms of the disciplines should be used. However, such mutual parts do not exist: mechanics examines mechanical systems, thermodynamics examines thermodynamic systems, with diametrically opposite approaches. Mechanics, as far as its content is concerned, is not a special case of thermodynamics. The situation is similar with the other sciences as well. Because of this, as far as their content is concerned, the sets of terms and conclusions of the disciplines studying certain partial phenomena of natural and man-made disasters cannot be unified; methodically, however, the situation is basically different. It is undisputable that all branches of science meet the requirements of being scientific. From this originates the fact that what the branches of science in question have in common is not their terminological apparatuses, but their methods. In their methods, there is an indispensable significant feature: all of them must be logical. They may not contradict the laws of logic, even if they sometimes contradict simple common sense. They all are logical and obey the laws of logic.


The specific goals of my research are the following topics:

- event typology,
- interpretation of event indicators,
- critical points,
- risk-theory examination of corporate decisions,
- logical level,
- strategic indicator,
- modeling of risk systems in weak interaction,
- cycle-centric examination of risk systems in weak interaction,
- sustainable safety/security,
- tolerability, immunity,
- conflict-theoretical axiomatics,
- the disaster-theoretical further development of the Klein-Kis disfunction theory,
- conflict typology.

2 Applied methods

2.1 Conceptual, specific methods



- explication;
- iteration;
- network theory;
- typology;
- taxonomy.


At the focus of my study there is an observation: with the method of the direct logical approach, heterogeneous phenomena, moreover phenomena that are the subject of disciplines sometimes opposing each other, can be conceptually comprehended in a uniform way. I strove to elaborate this methodologically. Where it seemed possible, I tried to substitute the special, particular (i.e. probabilistic) assumptions used in different disciplines with more general (i.e. logical), abstract assumptions, and to formulate them in the uniform symbol system of formal logic. Thus it became possible that, as a necessary or sufficient condition of a certain disaster incident, disjunctions or conjunctions of incidents like the praxiological “failure to control” and the “occurrence of over-pressure”, belonging to the research area of mechanics, came into a provable logical relation.


A common methodological element in my dissertation is that I study risk systems. The term risk system is a basic term. Intuitively, a risk system is that in (to) which a disaster occurs (happens), which is the “subject” of a disaster, which suffers or bears or tolerates a disaster (its damages), or perhaps resists it, prevents it, responds to it, rehabilitates it, etc. Of course, this word usage is still imprecise; it is clarified in the present dissertation.


My dissertation consists of three thematic chapters, through which the interaction of risk systems runs as a guiding principle. Each chapter also uses specific methods, besides general explication.


The chapter on risk theory deals with risk systems which, during their interaction with their environment, preserve their self-identity, i.e. their state can be managed as an independent variable, and the rules determining the change of their state (in other words, the logical structure of the systems) are unchanged. We can express this by saying that the chapter dealing with risk theory elaborates isolated risk systems.

Their typical specific method is the determination of the necessary and sufficient conditions of statements relating to the events in question, as opposed to using descriptive definitions. Besides, an inevitable methodological feature is that it does not determine a logical value for all possible system states (i.e. whether the characteristic main event of the risk system has occurred or not), but it can serve to prove it and to formally deduce it, too.

Besides this logical state evaluation method, and also to evaluate the technical and economic state, I established a special method to manage it, which I named the Franklin method.

The chapter using a cellular automaton model deals with risk systems in weak interaction. They are characterized by the fact that the states of the risk systems depend on each other’s states according to defined rules, while the systems preserve their self-identity and the rules determining the change of their states remain unchanged. Its characteristic specific method is iteration, which is widely used in cellular automaton models. In my study, I have elaborated a method to define and evaluate the cycle of processes. This method has been specified in a cycle calculation algorithm.
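The cycle calculation algorithm itself is specified in the dissertation; as a generic sketch of how the cycle of a deterministic, finite-state iteration can be detected, one can record the history of states and stop at the first repetition:

```python
def find_cycle(step, start):
    """Return (preperiod, cycle_length) of the orbit of `start` under `step`.
    Assumes `step` is deterministic and states are hashable."""
    seen = {}                 # state -> first time it was visited
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# Example: a cyclic shift of a 3-tuple repeats every 3 steps.
shift = lambda s: s[1:] + s[:1]
```

For a finite state space the loop always terminates, since some state must eventually recur.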

The subject of the chapter dealing with conflict theory is the examination of risk systems in strong interaction, in conflict. These are the risk systems which, during their interaction, may lose their self-identity; they can become other systems (wrecks, ruins, inundated areas, devastated systems, etc.). In a general situation, the change of their states and behavior can originate from the logical structure of conflict situations, which they tolerate to some extent in a certain technical sense of the word. The application of the method resulted in the creation of a conflict typology.



In this context, typology could have referred not only to conflicts, but generally also to the taxonomy of indicators of risk systems, which is on a much higher level of exactness. In the three different cases, I used three kinds of paradigms, but the explicative approach in the methods used is uniform.


3 New scientific results

3.1 Isolated risk system

3.1.1 Ordinal

I have elaborated the term event explicatum of isolated risk systems and introduced its ordinal
.

According to the definition of “isolated risk system” introduced by me, such a risk system, during its interaction with its environment, preserves its logical structure, and its state can be regarded as an independent variable.

Auto-identical systems are technically characterized by their explicatum. The explicatum of a risk system is a special Boolean algebraic equation system with n members, which only contains conjunction and disjunction, and in which there are m < n variables, as a function of which the remaining n − m variables can be produced. These are the prime explicants.

The variable on the left-hand side of the first member of the equation system is the main event of the risk system. The main event produced as a function of the prime explicants, as a Boolean function with m variables, is the system’s state function
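A sketch of one possible encoding of such an explicatum: each left-hand variable is defined by a single conjunction or disjunction of other events, the first left-hand side is the main event, and the variables with no defining equation (p1, p2, p3) are the prime explicants. The concrete equation system is invented for illustration:

```python
# Equation system: E0 = E1 OR p3, E1 = p1 AND p2.
# E0 is the main event; p1, p2, p3 are the prime explicants.
explicatum = {
    "E0": ("or",  ["E1", "p3"]),
    "E1": ("and", ["p1", "p2"]),
}

def evaluate(event, primes):
    """Compute an event's value from an assignment of the prime explicants."""
    if event in primes:                          # prime explicant: value given
        return primes[event]
    op, args = explicatum[event]
    vals = [evaluate(a, primes) for a in args]
    return all(vals) if op == "and" else any(vals)

# The state function is the main event as a Boolean function of the m primes:
state = evaluate("E0", {"p1": True, "p2": True, "p3": False})
```

Here n = 2 equations and m = 3 prime explicants; every non-prime event is produced as a function of the primes.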

Based on the strict mathematical-logical definition of the explicatum of isolated risk systems (in short: system explicatum), I have introduced the term event explicatum. To characterize an event explicatum (in short: explicatum), I have introduced the term ordinal of an explicatum, in short ordinal. To define an ordinal, I have elaborated an algorithm that can be implemented in IT, using which the hierarchic logical relationship between any two explicants can be unambiguously defined based on their ordinals, i.e. whether one of them implicates the other, and through which explication route one can be reached from the other. This method has significance in the safety engineering assessment of risk systems involving classified materials.

3.1.2 Franklin parameters

I have interpreted the Franklin (cost-time) parameters of the prime events of a risk system.

The change of a prime event of a risk system is called an action. All actions, conceptually, have a well-defined cost and time requirement. In risk management, I divided actions into three basic classes:

- prevention,
- response,
- and recovery.

Since all the above are linked to a time and a cost factor, the Franklin parameters have two subclasses each, namely:

- prevention time,
- response time,
- recovery time,
- prevention cost,
- response cost,
- and recovery cost.

The common name for them is Franklin parameters.

The interpretation of the Franklin parameters, related to prime events, can be extended to any explicatum. To define them, I have elaborated algorithms that can be implemented in IT, with the use of which the technical and economic calculations of managing risk systems have gained a conceptual foundation
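A sketch of how Franklin parameters could be represented on prime events and extended to a compound event; the aggregation rule shown (sum of costs, maximum of times for keeping an OR event passive, assuming parallel actions) is an illustrative assumption, not the dissertation's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Franklin:
    """Cost-time parameters of one prime event's actions."""
    prevention_time: float
    response_time: float
    recovery_time: float
    prevention_cost: float
    response_cost: float
    recovery_cost: float

def or_gate_prevention(children: list[Franklin]) -> tuple[float, float]:
    """An OR event stays passive only while every child is passive, so all
    children must be prevented: take the longest time (actions assumed to
    run in parallel) and sum the costs."""
    return (max(c.prevention_time for c in children),
            sum(c.prevention_cost for c in children))
```

Analogous rules for AND gates and for the response/recovery pairs would complete the extension to a whole explicatum.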



3.1.3 Critical points

I have defined the critical (weak and strong) points of a risk system.

The members of the disjunctive normal form, and the factors of the conjunctive normal form, of the state function of an explicated risk system, as a Boolean function with m positive variables, usually serve as the necessary and sufficient conditions of activating and passivating the main event. Their common name is critical points. To define them, I have elaborated algorithms that can be implemented in IT, with the use of which the management of safety risk systems, combined with the calculation algorithm of the Franklin parameters, serves for technical and economic calculations
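A brute-force sketch of extracting the activating critical points of a small monotone state function, i.e. the minimal sets of prime events whose joint occurrence forces the main event (the DNF members); the state function used is illustrative, and the search is only practical for small m:

```python
from itertools import combinations

def minimal_activating_sets(state_fn, variables):
    """Smallest variable sets whose activation alone activates the main event."""
    found = []
    for r in range(1, len(variables) + 1):
        for combo in combinations(variables, r):
            if any(set(f) <= set(combo) for f in found):
                continue                  # superset of a smaller set: not minimal
            assign = {v: v in combo for v in variables}
            if state_fn(assign):
                found.append(combo)
    return found

# Example state function: (p1 AND p2) OR p3
fn = lambda a: (a["p1"] and a["p2"]) or a["p3"]
```

The dual computation over the conjunctive normal form would yield the passivating critical points in the same way.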

3.1.4 Quorum function

I have elaborated the calculation of the consensus limit of decisions necessary for managing a risk system.

The state of risk systems, in practice, cannot be known in every case. Depending on the expected outcome of the main event, the actions to be taken are determined through corporate voting. In such cases, the authorities often stipulate qualified voting, in whose statute the consensus limit is predetermined, i.e. the minimal majority of votes necessary for a decision.

The determination of the consensus limit is essentially peremptory, or is based on a previous corporate decision, but is independent from the logical structure of the event that is the subject of the decision.

I have recognized that the consensus limit of decisions necessary for the management of explicated risk systems can be unambiguously calculated from the system’s explicatum. This can be technically derived from the system’s so-called Quorum function
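The construction of the Quorum function itself is left to the dissertation's thematic chapters. Purely as a generic illustration of what a consensus limit does, a qualified vote behaves like a k-out-of-n threshold (quorum) function, itself a monotone Boolean function of the votes; all names below are illustrative:

```python
def quorum(votes: list[bool], limit: int) -> bool:
    """True iff at least `limit` voters support the action (consensus limit met)."""
    return sum(votes) >= limit

# With a consensus limit of 3, four "yes" votes out of five carry the decision:
decision = quorum([True, True, True, True, False], limit=3)
```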

3.1.5 Flórián model

I have elaborated a strategic game theory model for disaster management in the logical risk management interpretation.

The traditional methodology of risk management, i.e. fault-tree analysis, has a functional aspect. This means that it statically demonstrates the main events of the risk system under examination (depending on the prime events), and does not study the processes which can lead to the formation of the main event.

Traditional fault-tree analysis expressis verbis does not use the term “state”, therefore it does not even have the possibility to describe a process. Therefore, it can only provide an answer to the basic question (what is the necessary and sufficient condition for the main event to occur) if all the possible critical points are determined and statically stored. However, in a general case this is insupportable from both a conceptual and a practical aspect. The conceptual obstacle is the non-probability character of the risk of an individual event; the practical obstacle is caused by combinatorial explosion.


The
probl
em is solved by the
FLÓRI
Á
N

model
in a way that the
fun
c
tional approach
is
replaced b
y
a proced
ural one
.
Accordingly,
in
s
t
ead of storing the critical points of
risk

system
s the necessary managing actions are to be determined depending on
t
he state of the
all
-
time process
es
,
including the Franklin dimensions of
costs incurred and the time
n
ecessary.


3.1.6 Protection level

I have proved that by protecting any level of a risk system, all higher levels become protected.

During explication, in the stages of sequential explications, we always (if it is not a prime event) provide the necessary and sufficient condition of the events found one stage earlier. This suggests the intuitive concept that during explication we are getting “deeper and deeper” from the main event, that we are reaching a “lower and lower level”. From this originates that the main event, i.e. the event from which the explication starts, is sometimes called the top event. This suggests that the prime events are on the same “level”, occasionally on the “deepest level”. However, this is, of course, not the case in a general situation, since the “deepest level” in a general case simply does not exist: generally, the routes from the main event to two prime explicants have different lengths, i.e. it takes a different number of steps to get there.


The term “level” has a deep intuitive content different from the latter, which the introducers of the logical level ignore. The term “level” can be expressively interpreted through the term “dyke”.

A dyke protects from floods. A dyke prevents the attack of a flood, thus providing protection against its effects that cause flood damage. Dyke protection events exist or can be created. Dyke protection events will also have, in general cases, complex consequences, which can save homes, houses and lives. The consequences of consequences form a chain and can be arranged into levels. However, not from the “top to bottom”, as suggested by a primitive intuition, but, during the use of protection, from the “bottom to top”.

I have elaborated the theory of logical level protection, whose bases are as follows.



Definition:

- We interpret the entirety of the prime events of an explicated safety risk system as its zeroth logical level (in short: its level 0).
- We interpret the entirety of those complex events of an explicated safety risk system as its first logical level (in short: its level 1) whose members have a prime explicant.
- On the nth (n > 1) logical level of an explicated safety risk system (in short: on its level n) we interpret the entirety of those complex events of the system whose members have a level n−1 explicant, but do not have an explicant lower than level n−1.

If the highest level in a system is m, we say that the system is of level m.

If n > 0 and all the level-n elements of a system are passive, we say that level n is protected, and the system is protected on level n, in short n-protected
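A sketch of assigning logical levels bottom-up under this definition. The encoding is assumed: each complex event maps to the list of its explicants, prime events do not appear as keys, and the explication graph is acyclic; the event names are illustrative:

```python
def logical_levels(explicants: dict[str, list[str]]) -> dict[str, int]:
    """Level 0 = prime events; a complex event sits one level above its
    lowest-level explicant (it has a level n-1 explicant, none lower)."""
    primes = {e for args in explicants.values() for e in args} - explicants.keys()
    level = {p: 0 for p in primes}
    while len(level) < len(primes) + len(explicants):
        for event, args in explicants.items():
            if event not in level and all(a in level for a in args):
                level[event] = 1 + min(level[a] for a in args)
    return level

# E0 = E1 OR p3, E1 = p1 AND p2 (same toy system as before):
levels = logical_levels({"E0": ["E1", "p3"], "E1": ["p1", "p2"]})
```

Note that E0 lands on level 1 despite having the level-1 explicant E1, because it also has the level-0 explicant p3; this is exactly the situation the comment below calls a transient event.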


Comment:

It can happen that a level-n event has an explicant of a level higher than n. It can also happen that an event has an explicant of the same level. In such cases, we speak of transient events [Bukovics-1].

It can also happen that the level of the main event is not the highest, in which case the name “top event” is deceptive [4].

3.1.7 Strategic typology

I have defined strategic indicators and elaborated a typology for them.

I have shown that in the scope of risk systems:

- system determinants can be defined, based on which an exclusive type system of risk systems can be interpreted;
- the types characterize the behavior of a system determined by its relationship with its environment, i.e. the change of state or the possible state-change processes;
- furthermore, strategic indicators can be defined, which, within each type, provide new and ever-extensible information on risk systems.


I have introduced the term “strategic indicators” through the term “strategic models”. I have used some of the terminology of the theory of strategic games for creating the strategic models.




[4] Referring to this, one of the most interesting examples is the logical risk analysis of a plant genetically modified to be herbicide-tolerant. It was elaborated in the research institute of the Australian CSIRO (Commonwealth Scientific & Industrial Research Organization) under the guidance of [Hayes].



In the games serving as a basis for the strategic models, the number of players is always two. The common thing in their roles is that they change the states of certain prime events. Systems that can be found in reality and are in actual interaction with their environment (i.e. an interaction accompanied by a change of state of the risk system) are, however, better characterized if the prime events have not two, but three kinds of states attributed to them. The third state is “undetermined” or “free”, besides being “active” or “passive”.


Therefore, risk systems in this paradigm are not described by a Boolean function, but by a ternary monotone logical function. It is as follows: FT(p1, p2, …, pn) (FT refers to “Fault Tree”).

Here, n is an optional fixed integer and pi (i = 1, 2, …, n) is also a ternary variable with values 0, u and 1. Their interpretation is as follows:

pi = 0, whenever the non-occurrence of prime event pi is valid;

pi = 1, whenever the occurrence of prime event pi is valid;

pi = u, whenever the state of prime event pi is “undetermined” (after the English words “uncertain”, “undetermined” or “unknown”).

The fact that the state of a prime event is undetermined can be interpreted in the following way: if a prime event occurs, in other words becomes active, it means that the change of its state has been caused by an effect. However, a prime event, according to the model suggestion, is in our competence, i.e. it can be prevented, it can be responded to, the change of its state can be caused. According to the model suggestion, all state changes have a time demand. Therefore, it is not indifferent since when, and for how long, a prime event has been passive or active. Accordingly, an active event cannot become passive immediately, and a passive event cannot become active immediately. In other words, in the model neither 0 → 1 nor 1 → 0 transitions have a meaning. Therefore, a third, intermediate state has to be introduced, marked u, with which activation can be described by the transition 0 → u → 1, and passivation by 1 → u → 0.

In other words, I postulate that the change of state between the active and passive states can only take place through an intermediate, third state. This can also be interpreted as meaning that it is the intermediate state that can be activated and passivated.
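One common way to realize such a ternary monotone function is the Kleene-style extension of AND/OR over the ordering 0 < u < 1. The sketch below, including the transition helper, is an assumed reading of the construction rather than code quoted from the dissertation:

```python
ORDER = {0: 0, "u": 1, 1: 2}          # the ordering 0 < u < 1

def t_and(a, b):
    """Ternary AND: the lower (less activated) operand wins."""
    return min(a, b, key=ORDER.get)

def t_or(a, b):
    """Ternary OR: the higher (more activated) operand wins."""
    return max(a, b, key=ORDER.get)

def step_toward(state, target):
    """Legal state changes pass through u only: 0 -> u -> 1 and 1 -> u -> 0."""
    if state == target:
        return state
    return target if state == "u" else "u"
```

Composing t_and/t_or along the fault-tree gates yields a monotone FT(p1, …, pn) over the values 0, u, 1, and step_toward enforces that no prime event jumps directly between active and passive.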

I have grouped risk systems into types, and characterized them with two kinds of data groups. I have named them comprehensively strategic indicators.

The first group consists of strategic type determinants (in short: type determinants). They characterize and identify the individual types themselves. The name of the second group is strategic type indicators (in short: type indicators). These are new characteristic features of the individuals (risk systems) belonging to one of the types. During the practice of risk analysis, as the experience of the risk analyzer increases, they may be extended by further data (type indicators). Besides, I have introduced general system and strategic characteristics.


Four data characterize the strategic types. These data characterize the behavior of risk systems according to general aspects.

The type determinants mean four strategies. These are as follows:

• Struggling
• Shannon's
• Maintenance
• Ad hoc strategy.

The two latter ones have subcases.

The subcases of the Maintenance Strategy are directed towards the optimization of the Franklin parameters.

The subcases of the Ad Hoc Strategy relate to the reliability levels of prime events featuring reliability.

All games are for two players. One of the players is the "Attacker" and the other one is the "Defender".

The players do not play according to the same rules. The Attacker represents the environmental effects on the given risk system. The Defender models the intention (the form of behavior of a risk system) aimed at the prevention or the liquidation of the Main Event of a risk system.

Of the games, the Struggling and Shannon's games are "anti-nature games". This means that the Attacker moves randomly, without any intelligent plan. The moves of the game are the activation, passivation or renovation of a free prime. A free prime is one which has not yet been activated by the Attacker and has not yet been passivated by the Defender. At the start of the game, all prime events are free.


Strategic type indicators serve for the characterization of risk systems within certain types.

Two major features of strategic type indicators are the risk system and the strategy implemented on it. The first one is determined by system features, the second one by strategic features. Here belong the following: system name, strategy name, number of prime events and the entire event number.
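The indicator record listed above could be represented, for instance, as a small data structure. The field names are my illustrative assumptions; the dissertation only names the four features:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StrategicTypeIndicator:
    """Strategic type indicator of a risk system within a given type.

    Two groups of features: system features (name, event counts) and
    strategic features (the strategy implemented on the system).
    """
    system_name: str      # system feature: name of the risk system
    strategy_name: str    # strategic feature: Struggling, Shannon's, ...
    n_prime_events: int   # system feature: number of prime events
    n_events: int         # system feature: entire event number

# Hypothetical example record
ind = StrategicTypeIndicator("forest-fire model", "Struggling", 12, 31)
print(ind.strategy_name)   # Struggling
```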

3.1.8 Indicator taxonomy

On the one hand, I have developed the typology of risk systems, on a strategic basis, into a taxonomy; on the other hand, I have interpreted an ontological taxonomy based on a general indicator term.

The methods I used made it possible not only to develop typologies but also to create a so-called taxonomy. In common language, the words typology and taxonomy are quite often used as synonyms, but the difference between them is that a taxonomy is a higher-order typology, in which the typological units, besides classification hierarchies, have certain ontological attributes. The road to these led through so-called Galois relations and through several areas. On the one hand, I could interpret an indicator taxonomy; on the other, a strategic taxonomy in close connection with it, introduced in a kind of game theory model of risk systems.



3.2 Risk systems in weak interaction

3.2.1 Cellular automata

I have elaborated a cellular space (SORS), by which risk systems in weak interaction can be modeled.

In both natural and man-made disasters, collectives of risk systems quite often participate. Such phenomena are well known, ranging from forest fires to floods, from epidemics to climatic extremities. Their common feature is that there is a characteristic interaction between the participating risk systems, during which their states depend on each other according to a determined normality. We say of these systems that they are in weak interaction with each other. The attribute "weak" is intended to express that the interactions are not strong enough to change the logical structure and self-identity of the components. In my study, I have used a cellular automaton model to study risk systems in weak interaction.

The acronym SORS refers to "Self-Organizing Raiding System".

The basic idea is:

(1) Every operation's (including strategic games played against nature) weakest point is usually the unorganized state.

(2) To avoid the unorganized state and to restore the organized state, self-organizing systems are the most suitable means.

(3) Artificial self-organizing systems, self-reproducing automata, are known as cellular automata, i.e. networks of automata.

(4) Cellular automata have also been used in practice at the present level of development of information technology.

In my dissertation, I started with a cellular space in which an explicated risk system belongs to each of the cells. Different risk systems may belong to different cells. This is determined by the allocation, implementable by IT equipment, elaborated by me, and by the allocation algorithm.



Allocation

Allocation intuitively means the kind of explicated risk system (fault-tree) that can be assigned to a site element (cell). To interpret it exactly, I had to elaborate an adequate terminology system.

I have defined the terms of SORS of central importance: site and scene; cellular space; transitional function; cell state; threat degree; prime event; logical risk management.

3.2.2 Attack and defense

With SORS, I have modeled the process of attack and defense.

In the cellular space of a SORS model, there are two types of cells: public cells and guard cells. Public cells follow the general transient rule (which is determined by the relevant transient function).

Attack is interpreted as a change of state occurring after an external effect.

Guard cells move according to the algorithm of guard cell movement, and serve for modeling defense, representing the reaction to an attack.

The movement of the guard cell (state transient function) figuratively means that the guard cell, at every time t, "looks around" the neighboring cells, looking for "cells to be defended". Guard cell G's cell to be defended (defense cell) DC(G), if there is one, is a public cell in a maximal real state. If there is none, G randomly selects a public cell from the neighboring cells.

Later, at time t + 1, the guard cell occupies the place of the cell to be defended, and takes up the state of the cell to be defended with a "virtual threat", i.e. in a virtual type of state. Otherwise (if there is no cell to be defended amongst the neighboring ones), guard cell G does not move, i.e. does not change its state.

In order to implement the movement of the guard cell in practice, I have elaborated an algorithmic procedure, with special regard to the effects occurring through the cellular space's border (footnote 5).

A public cell's state can change in two ways: spontaneously or on an external effect. The spontaneous change of state occurs according to the law of state transition. The next spontaneous state of a public cell can be simply defined by a state transition function. The next state of a public cell generated by an external effect can be defined with the help of a State Assessment Algorithm, belonging to the cell, elaborated by me.
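The guard cell rule described above can be sketched roughly as follows. This is a hedged illustration only: the grid, the state encoding and the 4-neighborhood are my assumptions, since the dissertation gives the algorithm in prose (it also mentions the variant where the guard stays put when no neighbor is threatened; here a random move is sketched):

```python
import random

# Toy sketch of guard cell movement in a SORS-like cellular space.
# Public cells carry an integer "real state" (0 = no real threat);
# a guard cell moves onto the neighboring public cell with the
# maximal real state, or onto a random neighbor if none is threatened.

def neighbors(pos, size):
    """4-neighborhood inside a size x size grid (naive border handling)."""
    x, y = pos
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < size and 0 <= b < size]

def guard_step(guard_pos, real_state, size, rng=random):
    """Return the guard cell's position at time t + 1."""
    nbrs = neighbors(guard_pos, size)
    threatened = [p for p in nbrs if real_state.get(p, 0) > 0]
    if threatened:
        # the "cell to be defended": maximal real state among neighbors
        target = max(threatened, key=lambda p: real_state[p])
        real_state[target] = 0           # its threat becomes virtual
    else:
        target = rng.choice(nbrs)        # no cell to defend: random move
    return target

state = {(1, 2): 3, (2, 1): 1}           # two threatened public cells
pos = guard_step((1, 1), state, size=4)
print(pos)        # (1, 2): the neighbor with the maximal real state
```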

3.2.3 Strategies

Two types of defense strategies, offensive and defensive, were modeled in silico in SORS. I have studied their cost and time demand, and their efficiency in several specific cases, through computer-assisted experiments, and I have documented the results in tables and in graphical format.




Footnote 5: The SORS cellular space, in a cellular automaton sense, does not have a border, because it is confined. Here, border refers to the interpreted, i.e. established, cellular space.

The theoretical grounds of the tolerability and immunity of risk systems can be traced back to these procedures.

3.2.4 Cycles and sustainability

I have elaborated a cycle term in accordance with cycle-centric risk management, and, in silico, I have elaborated a suitable experimental procedure technique.

Risk analysis does not know a formally defined, exact term of a cycle. I see the reason for this deficit in the fact that risk analysis does not know the expressis verbis term of state either, although it would be greatly needed, since the strategic game theory approach has a huge role in preventing and responding to a hazard. As far as the theory of strategic games is concerned, the introduction of the term "state" is indispensable (footnote 6).

If (in a given deterministic system with finite states) the same state occurs twice in a series of states sequential in time, we speak of a cycle, and graphically we say that the system "returned to a previous (earlier) state", or "it fell (got, etc.) into a cycle". In the majority of systems discussed in environmental (cellular automaton) models, the number of states of the systems in question is finite. Such finite (deterministic) systems necessarily get into a cycle eventually. An important theoretical aim of risk management is the elaboration of finite models (models with a finite number of states). The task is to elaborate methods by which a system's cycles (and not its states, which would mean an impermissible restriction) can be efficiently characterized. Efficient characterization here means that it can be deduced from the principles of the theory (its fundamental assumptions, axioms) which cycle, out of any two, has to be regarded as "better" (more favorable, more preferred, more desirable, more sustainable, etc.) or as "worse" (more to be avoided, to be changed, to be repaired, to be evaded, etc.) than the other. In other words, one can expect from cycle-centric risk management that it be able to fully order the cycles of a studied system. More precisely, that it be able to define a full ordering relation amongst the cycles of a studied (finite) system. This is when the possibility arises for the correctness of risk management operations (arrangements) to be judged on uniform and clear principles and, furthermore, for well-founded cycle management decision procedures to be reached consistently.
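The observation that a finite deterministic system inevitably falls into a cycle can be illustrated with a generic cycle finder. The transition map used here is an arbitrary toy example, not one from the dissertation:

```python
def find_cycle(step, start):
    """Iterate a deterministic map from `start` until a state repeats.

    Returns (tail_length, cycle_length): how many steps precede the
    cycle and how long the cycle is. Terminates for any finite state set,
    because some state must eventually occur twice.
    """
    seen = {}            # state -> time of its first occurrence
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# Toy finite system on states 0..9: x -> (x*x + 1) mod 10
tail, period = find_cycle(lambda x: (x * x + 1) % 10, 3)
print(tail, period)   # 1 6: after one step the system enters a 6-cycle
```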

3.2.5 Tolerability and immunity

I have elaborated, in silico, an experimental procedure to interpret tolerability and immunity in the SORS model.

I have experimentally studied the tolerability (more precisely and technically: the immunity) of risk systems in weak interaction with each other, in the framework of a SORS model.

An (in silico) experiment, performed with a system modeled (or rather normatively described) using SORS, is generally divided into three main stages:

• Stagnation. This is a time interval in which every cell is in a virtual state, there is no missing state, the structure (state configuration) of the cellular space is more or less unordered, and the guard cells are moving randomly. During stagnation, as shown by experience, the states of the public cells of the cellular space form a cycle of length nStates = 16.

• Attack. The threat of public cells randomly selected from an optional group in a virtual state changes to real, and their state, depending on the risk explicatum of the cell, changes independently of the state transitional necessities, which are given by a calculation algorithm of the state occurring on an external effect.

• Defense. If a public cell in a real state has a guard cell neighbor, the threat of the cell becomes virtual, its state stays unchanged, and the guard cell occupies the cell.



Footnote 6: As a synonym of state, sometimes (in the theory of strategic games) the word strategy is used. See e.g. [Szidarovszky], especially page 158.

Experience shows that defense is always successful after some defensive steps. See figures 9-11 in the dissertation.


These three stages form, by definition, an experimental run (X-run, "eXperiment-run"). By definition, an experiment is a series of sequential runs (X-runs), which ends with the last run (L-run, "Last-run"). The number of runs within the experiment is marked nRuns.

The last run of the experiment, or its stochastic limit, is the X-run for which the difference between its relative frequency and that of the run preceding it is relatively small. I have set this difference at 1 percent in my experiments.

Conventionally, an experiment ends when stochastic convergence is reached.
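The stopping rule described above (runs repeated until the relative frequency changes by less than 1 percent) can be sketched as a simple loop. `run_once` is a hypothetical stand-in for one X-run returning success or failure; the success probability is invented for illustration:

```python
import random

def run_once(rng):
    """Hypothetical X-run: returns True if the defense succeeded."""
    return rng.random() < 0.7    # illustrative success probability

def experiment(threshold=0.01, min_runs=2, seed=42):
    """Repeat X-runs until the relative success frequency stabilizes
    to within `threshold` (the dissertation uses 1 percent)."""
    rng = random.Random(seed)
    successes, n, prev_freq = 0, 0, None
    while True:
        successes += run_once(rng)
        n += 1
        freq = successes / n
        if n >= min_runs and abs(freq - prev_freq) < threshold:
            return n, freq        # stochastic convergence reached
        prev_freq = freq

n_runs, freq = experiment()
print(n_runs, round(freq, 3))
```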

The term immunity (footnote 7) (in a SORS-type system) is based on the intuitive term of vulnerability.

If a system is damaged, its rehabilitation capability, or its capacity to cope with an attack, is lost or weakened. Immunity, in a certain sense, is the opposite of vulnerability. The "easier" a system is rehabilitated from its damaged state, the better or higher its immunity is.

The exact characterization of the rehabilitation process greatly depends on how one defines the "easy", or rather the "difficult", side of rehabilitation. The most successful approach (footnote 8) seems to be to define its difficulty as the number of steps necessary (and sufficient) altogether to reach a global system state (i.e. cellular space state) in which no cells are in an endangered position, i.e. to reach a stagnating system state after an attack. From an intuitive aspect, immunity is somewhat similar to tolerance [Bukovics-2]. The main difference between tolerance and immunity is that immunity is tolerance defined as a function of the safety level.


When creating the term "immunity", I have taken the following into consideration.

An attack itself is undoubtedly stochastic under all circumstances. This is the primary reason why systems exposed to an attack should be examined from the point of view of immunity with in silico experiments. Consequently, the results of an experiment necessarily refer to the experiment. However, experiments are always random-like. Thus, in order to obtain theoretical results of generality and of general validity, we have excluded all references relating to the experiment. This gives the term of theoretical immunity. However, here two equally justifiable aspects emerge. I have named one of them the run-view, the other one the step-view (r-view, s-view).

After fixing the quantitative (measured in percent) term of vulnerability belonging to the experiment carried out at the given safety level, I have defined empirical vulnerability and empirical immunity. I have defined the relationship between them with the formula

Immunity(SL, X) = 100 percent - Vulnerability(SL, X).
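As a minimal numeric sketch of this relationship (the vulnerability figures are invented for illustration; only the complement formula comes from the text):

```python
def immunity(vulnerability_pct):
    """Immunity(SL, X) = 100 percent - Vulnerability(SL, X), in percent."""
    if not 0 <= vulnerability_pct <= 100:
        raise ValueError("vulnerability must be a percentage")
    return 100.0 - vulnerability_pct

# Hypothetical empirical vulnerabilities measured at three safety levels
for sl, vul in [(1, 35.0), (2, 20.0), (3, 8.5)]:
    print(sl, immunity(vul))   # 65.0, 80.0, 91.5 respectively
```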

One can see that there are two more or less natural ways available for the explication of empirical vulnerability. These are the already mentioned r-view and s-view. Accordingly, one has to speak of r-Immunity and s-Immunity according to the appropriate view.

In the r-view, one of the basic terms is nRuns(SL, X), the number of runs. This is the number of runs, during experiment X, performed at the given Safety Level (SL), necessary (and sufficient) to reach stochastic convergence.

In the s-view, one of the basic terms is nDS(SL, X), the number of defense steps. This is the number of defense steps, during experiment X, performed at the given Safety Level (SL), necessary (and sufficient) to reach the stagnation of the cellular space, i.e. a cellular space configuration



Footnote 7: In the present case, I understand immunity in its intuitive form, disregarding its legal, medical or other explicative meaning.

Footnote 8: The ascertainment that a solution is successful does not in any case mean that it is reliable, even in the intuitive term of vulnerability. This is in close relation to the problem of explication. See: [Carnap]. On success versus reliability in the case of a concept, or in connection with the issue of consistency, see: [Kreisel].

without a real (vulnerability) cellular state. See the details in the figures on pages 25-29 of the dissertation.

3.3 Risk systems in strong interaction: conflicts

I have elaborated the bases of conflict theory.

In my dissertation, I rely, on the one hand, on research on the theoretical approach of conflicts caused by nature or human beings, which has explored the relationship between extremities and the evolvement of human conflicts, and, on the other hand, on research relating to the description of the general nature of conflicts [M.Kis].
The former research has shown extremely clearly how deep and close a logical relationship exists between extreme situations, the increase of safety risks and the spread of human and social conflicts.

This new paradigm, in my opinion, is mainly hallmarked by the works of [Barnett] and colleagues. It follows from my approach that, in the dispute of adaptation to extremities versus mitigation, I argue for the former, laying special emphasis on the management of conflict problems originating as a consequence of change.

In my work, I have paradigmatically elaborated the grounds of this conflict theory.

3.3.1 The axiomatic characterization of conflicts

I have given the axiomatic interpretation of conflict theory.

In the everyday sense [Boulding], [Fáy-Nováky], [Gordon], we speak of a conflict between two events if the two events cannot occur at the same time. The clearest example of this is, par excellence, the decision situation, which used to be addressed with the attribute "full of conflicts".

We speak of a conflict, implicitly, between two agents ("event carriers", "event holders") if events relating to the two agents cannot occur simultaneously. Here, I have to dispel a misunderstanding immediately. The everyday concept does not mean that two agents are in conflict if two events that cannot occur together do occur to both of them at the same time. This does not make sense. Everyday usage is absolutely unsuitable for making the expression "it cannot occur" strict and accurate. According to everyday usage, "it cannot occur" because a law (written or unwritten) forbids it. Of course, "it cannot occur" that two solids are simultaneously at the same place, although no law of physics forbids it. Mechanics describes the collision of solids without any difficulty, without dealing with the definition of the term (as such) of collision. Two events cannot occur simultaneously if their supposed joint occurrence would lead to a logical contradiction.

A criterion of logic as a theory is that it should be free of contradictions, i.e. it should not be possible to prove a statement based on a law of logic together with its negation. From this, however, it does not ensue that logic does not have to deal with contradiction as such.
It can be ascertained that, at present, there is no uniform conflict theory; only different trends exist. The reason for this is that it is not possible to examine conflict situations methodically with a theory free of interest. Not even if we speak of relations of interest. Politics prefers to try to solve conflict situations without grounding its rhetoric, arguments and actions on an understanding of them, with emphasis on being free of interest.
It is a characteristic and significant circumstance that the pure mathematical theory of conflicts was already elaborated by [Sheffer] and [Nicod] almost a century ago [Veroff]. Their achievements were included in the second edition of Principia Mathematica in 1927, an epoch-making work grounding mathematical logic. The mathematical-logical conflict theory became a deductive system. The name of this theory was appropriated by sociology; according to its original definition, it was incompatibility theory.

The mathematical-logical conflict theory, i.e. the theory of incompatibility, is still developing. Most recently, [Veroff] achieved in silico results in the field of axiomatics. This makes it possible for conflict theory to receive, at the same time, an empirical-intuitive grounding and sufficient deductive disciplinarity.


It can be expected from a uniform conflict theory, according to knowledge-based sociological exigencies, that it should, in a certain sense, include all previous conflict theories. In other words, that it should fulfill the correspondence principle, highly appreciated in theoretical physics as a criterion. The emergence of this principle cannot be seen in social conflict theories, even in traces.

In my work, the problem of correspondence emerges in the following way. Conflict theory wishes to be the generalization of logical risk theory. Logical risk theory is essentially the use of Boolean algebra to analyze the main events of risk systems. Therefore, to see conflict theory really become the generalization of logical risk theory, it is sufficient to prove that all true statements of Boolean algebra are true in conflict theory. Technically, I have implemented this by defining the basic terms of Boolean algebra [Jaglom] based on the basic terms of conflict theory (more precisely, conflict algebra), and from its axioms I have deduced the axioms of Boolean algebra [Huntington].
deduced the axioms of the Boolean algebra [Huntington].


The conflict of events x and y is marked x | y.

[Veroff] found two conflict theory axioms superseding Bernstein's (almost a century later), from which the Bernstein axioms can be deduced and which, at the same time, can be quite figuratively interpreted.

The first axiom of Veroff merely states that conflict is symmetric (the operation of conflict formation is commutative), i.e. always:

x | y = y | x                (V1)

This is so evident that its negation in everyday word usage is scarcely possible.


According to the second axiom of Veroff:

x = (x | y) | (x | (y | z))                (V2)


The advantage of this axiom, as opposed to the axioms of Bernstein, is that it can be directly interpreted, i.e. its conflict theory meaning can be given without any further auxiliary term or convention. However, for this one has to pay the price that from these two axioms the original [Sheffer] axiom system of conflict theory can only be deduced in an awkward way. Veroff needed 86 steps to deduce Sheffer's axioms, a task Bernstein had performed with jaunty elegance. Veroff did not deal with the deduction of Bernstein's axioms; at present, it is not known whether he would have reached his goal earlier if he had dealt with it.

For my part, I proved that Bernstein's first axiom can be deduced from Veroff's first and second axioms. Furthermore, I gave the conflict theory interpretation of Veroff's axioms.
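Under the NAND interpretation of the conflict operation, both Veroff axioms can be checked by brute force over all truth assignments. This is only a validation in the two-element model, not the deductive proof discussed above:

```python
from itertools import product

def c(x, y):
    """Conflict x | y interpreted as the Sheffer stroke (NAND)."""
    return not (x and y)

for x, y, z in product([False, True], repeat=3):
    # (V1) symmetry: x | y = y | x
    assert c(x, y) == c(y, x)
    # (V2): x = (x | y) | (x | (y | z))
    assert x == c(c(x, y), c(x, c(y, z)))
print("V1 and V2 hold in the two-element model")
```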

3.3.2 Conflict space

I have elaborated the features of the conflict space, the conflict attributes and the conflict types.

I have axiomatically postulated that all conflict situations can be characterized by three factors (so-called parameters). These are "Agent", "Site" and "Disturbance".

Certain formalized, logically manageable statements and predicates serve for characterizing conflicts. These are the conflict attributes.

Using [M. Kis]'s work, I have postulated that every situation can be judged based on altogether eight kinds of attributes. Four of these eight attributes are opposites of one of the other four. In fact, my basic assumption (footnote 9) is that every situation can be characterized by giving four basic attributes simultaneously, each one a member of an attribute pair. More specifically, the four basic attributes (characterizing the situation) and their opposites are as follows:



• "Activity", marked A; opposite: "Reactivity" (R)
• "Interiors", marked B; opposite: "Exteriors" (K)
• "Groupedness", marked C; opposite: "Uniqueness" (E)
• "Directness", marked D; opposite: "Indirectness" (I)

Footnote 9: This basic assumption was marvelously justified by [M.Kis]'s research based on [Klein]'s works.


This can also be expressed by saying that the type of every (conflict) situation is unambiguously defined by the above four attribute pairs, i.e.

• Activity is therefore A or R,
• Interiors is therefore B or K,
• Groupedness is therefore C or E, and
• Directness is therefore D or I.


This axiom means the implicit definition of the type term. In other words, following [M. Kis], I call the compatible attribute fours that can be selected from the eight attributes (i.e. those consisting of attributes not logically contradicting each other) the (conflict) types.

The elementary consequence of this axiom is that there are 16 kinds of conflict types.

In order to operationalize the typology, I have introduced conflict types as points of the conflict space, with certain changes in their marking. The management of conflict theory problems has been reduced, through the Boolean net, to the management of Boolean algebraic problems of the conflict space.
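The count of 16 follows directly from choosing one member of each of the four attribute pairs; this can be enumerated mechanically (the letter codes are those of the text):

```python
from itertools import product

# The four attribute pairs: (attribute, opposite)
PAIRS = [("A", "R"),   # Activity / Reactivity
         ("B", "K"),   # Interiors / Exteriors
         ("C", "E"),   # Groupedness / Uniqueness
         ("D", "I")]   # Directness / Indirectness

# A compatible attribute four picks exactly one member of each pair
conflict_types = ["".join(choice) for choice in product(*PAIRS)]
print(len(conflict_types))      # 16
print(conflict_types[0])        # ABCD
```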

3.3.3 Tolerance domain, evacuation dilemma

I have interpreted the evacuation dilemma in the terminology of the tolerance domain.

I have introduced the term tolerance domain for all the types of situations found conflicting (offending, hurting, disturbing, frustrating, perturbing, etc.) by any agent, and I have characterized it by Ledley's number. Mathematically, this means that I have represented the term agent by the non-empty partial sets of the conflict space. With this choice, I have provided the structural characterization of the agent, by which it has become possible to create a kind of structural typological axiom of agents. I have noticed that, from the topological (i.e. deducible to set theory containment) relations between tolerance domains, some can be interpreted with astonishing simplicity. So, for instance, in the form of a multiply coherent tolerance domain, an agent and behavior type classically well known from the practice of disaster management has manifested itself. This occurs in the following typical decision situation, i.e. conflict situation, which could be named the "evacuation dilemma". A firefighter at site A has to evacuate a person in room B. The air is running out in room B; breathing is possible there for a certain time, but not on the connecting route. Covering the route between A and B takes two minutes, and the physical features of the firefighter make it possible for him to work for a maximum of two minutes without breathing. The firefighter gets into the following decision situation. After a minute, he either returns or continues his trip. If he turns back, he fails because he has not fulfilled his mission. If he continues his trip, he risks not having enough air at site B, in which case both of them die. The obvious necessary condition of solving this evacuation dilemma is that the firefighter undertakes the risk, i.e. in this sense, he should be a person with a risk-assuming character. Recognizing this is of central significance in disaster management. I remark here that ethology also knows this problem and speaks of it as the "squirrel effect". In everyday usage, it is known through the idiom "burning one's bridges"; in cave research another name is used, the "siphon floater".

Based on the representation of tolerance domains in the conflict space, it became possible to reduce the problem of the evacuation dilemma to the strict topological features of tolerance domains.

3.3.4 Tolerance function

I have introduced the tolerance function to describe the behavior of agents in conflict situations.

Mathematically, the tolerance function is a so-called Bernstein polynomial, which, from a certain point of view, can be regarded as the generalization of Shannon's quorum function relating to autoidentical risk systems. From the aspect of interpretation, however, there is a significant difference between the two functions. While the quorum function quantifies the suitability of the explicatum of the given risk system for corporate decision, the tolerance function quantifies the tolerability of an agent depending on the perturbations belonging to the tolerance domain. From this aspect, the tolerance function shows a relationship with the classic constitutional Yerkes-Dodson character functions. Mathematically, it is interesting to compare it with [Feigenbaum]'s concept. This approach was followed by Kretschmer's constitutional science at the beginning of the last century, after Yerkes' and Dodson's works, yielding its place nowadays to Selye's stress theory-based characterology. Its importance nowadays shows, in a peculiar way, in the management of disasters caused by terrorist attacks.

In the case of a hijacking, the pilot must follow the instructions of the attacker. A sufficiently intelligent attacker can, using some kind of coding, hinder the pilot in giving valuable information on the real emergency, even at the level of today's IT and measuring equipment.

However, to empirically define the tolerance function, today's level of IT and measuring equipment seems suitable, in principle. The first results of the research carried out in international cooperation with relevant great (western) governmental and EU financial support are quite reassuring. The basic idea is speech recognition combined with stress analysis. From the spectral deconvolution, one can draw a conclusion on the pilot's stress status and, besides, knowing the pilot's constitutional character, a good deal of information can be obtained on the phenomena perceived by him in his environment, mostly unconsciously, the hindrance of the transfer of which simply falls outside the competence of the attacker (footnote 10).
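Since the tolerance function is stated to be a Bernstein polynomial, it can be evaluated with a short routine. The coefficient vector below is an invented illustration, not a tolerance function from the dissertation; with 0/1 coefficients the same form yields Shannon-type quorum functions, of which the tolerance function is said to be a generalization:

```python
from math import comb

def bernstein(coeffs, p):
    """Evaluate the Bernstein polynomial
    sum_k c_k * C(n, k) * p**k * (1 - p)**(n - k)  at p in [0, 1]."""
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * p**k * (1 - p)**(n - k)
               for k, c in enumerate(coeffs))

# Hypothetical "2-out-of-3 quorum" coefficients: c_k = 1 iff k >= 2
quorum = [0, 0, 1, 1]
print(bernstein(quorum, 0.5))   # 0.5: at p = 1/2 the quorum is even odds
```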

3.3.5 Conflict typology

I have elaborated the typology of tolerance functions, and I have introduced the terms irrational agent and tolerance loss.

In deducing the two generic characteristics of conflict situations between the entire (2^16 = 65536-member) system of tolerance domains and the entire (700-member) system of tolerance functions, I have discovered the phenomenon of tolerance loss. Accordingly, the tolerance function of the common part of any two tolerance domains minors (i.e. is bounded above by) the infimum of the individual tolerance functions.

I have elaborated a typology of tolerance functions, which I have named the KYDS typology. The name comes from the works of Kretschmer, Yerkes-Dodson and Shannon.

I have proved that each of the 21 partitions resulting from the coordinated partition leading to the KYDS typology forms a symmetry class.

I have defined the basic structures of the conflict spaces formed by conflict types, as the conflict space's nets and partial nets.

I have introduced and defined the mathematical term of the irrational agent.

I have demonstrated that the conjunction of the tolerance functions of two agents is, in the general case, a function that is not the tolerance function of a tolerance domain. At the same time, it is obvious that the conjunction of the tolerance functions of two agents describes the compositional behavior of the two agents in [Berry]'s sense.

I have postulated that every pair of agents A and B has an agent C (footnote 11), whose behavior is described by the function

Q_C(p) = Q(T(A)) ∧ Q(T(B)).

I have elaborated a procedure to calculate the tolerance function of irrational agents.
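A minimal numeric illustration of tolerance loss follows. All functions here are invented toy examples; only the inequality pattern, the composite tolerance staying at or below the infimum of the individual tolerances, reflects the text:

```python
# Toy tolerance functions of two agents, as functions of a perturbation
# probability p in [0, 1] (invented for illustration; both map into [0, 1]).
def q_a(p): return 3 * p**2 - 2 * p**3          # a smooth S-shaped curve
def q_b(p): return p                             # a linear tolerance

def q_conj(p):
    """Pointwise conjunction (product) of the two tolerance functions:
    a sketch of the composite behavior of the pair of agents."""
    return q_a(p) * q_b(p)

# Tolerance loss: the composite never exceeds the infimum of the parts
for i in range(11):
    p = i / 10
    assert q_conj(p) <= min(q_a(p), q_b(p)) + 1e-12
print("composite tolerance <= infimum of individual tolerances")
```

For functions with values in [0, 1], the product is always at or below the pointwise minimum, which is why this simple conjunction exhibits the loss pattern.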




Footnote 10: In the framework of the [EUROCONTROL] project, funded by the EU, the question has been intensively scrutinized for several years.

Footnote 11: For a deep analysis of the term agent, see [du Toit]'s excellent study.

I have given the functional interpretation of the term of the irrational agent, as that which compositionally represents the mutual behavior of agents A and B.

I have proved that the dominance of the two agent components alternately prevails in the behavior of an irrational agent.

I have demonstrated that, in the general case, the behavior of an irrational agent is characterized by tolerance loss. I have elaborated a procedure to reduce tolerance loss.