A methodology for integrative multimodeling: connecting dynamic and geometry models


Minho Park*1 and Paul Fishwick*2

Department of Computer and Information Science Engineering, University of Florida
Gainesville, Florida, USA

*1 E-mail: mhpark@cise.ufl.edu
*2 E-mail: fishwick@cise.ufl.edu



ABSTRACT

Modeling techniques tend to be found in isolated communities: geometry models in CAD and Computer Graphics, and dynamic models in Computer Simulation. When models are included within the same digital environment, the ways of connecting them together seamlessly and visually are not well known, even though elements from each model have many commonalities. We attempt to address this deficiency by studying specific ways in which models can be interconnected within the same 3D space through effective ontology construction and human interaction techniques.

Keywords: Integrative Multimodeling, Modeling, Simulation, HCI (Human-Computer Interaction), Ontology, 3D Visualization



1. INTRODUCTION

A model[1] is something that we use in lieu of the real thing in order to understand something about that thing. Something which is modeled can be described by several different model types, such as a geometry model, a dynamic model, or an information model, depending on the user's viewpoint. For example, the Computer Graphics community focuses on geometry models, while the Computer Simulation community is interested in dynamic models.


Our research began with the following three general questions:

1) Is there any way we can include different model types within the same environment?
2) If so, how can we connect them together seamlessly, visually, and effectively?
3) How can we overcome semantic heterogeneity between the models?


We started to investigate ways in which we can integrate different models within the same scene environment. For example, consider this scenario: a region with several key military vehicles and targets: planes (both fighter as well as command and control center), surface-to-air missile (SAM) sites, and drones. A variety of models define the geometry, information, and dynamics of these objects. Ideally, we can explore and execute these models within the 3D scene. How can this be accomplished? The ability to create customized 3D models, effective ontology construction, and Human-Computer Interaction (HCI) techniques should help us to blend and stitch together different model types: for example, touching the SAM site and exploring one of its functions that holds a prominent place in one of its models.


We present a new modeling and simulation process called integrative multimodeling.[2] The purpose of integrative multimodeling is to provide a Human-Computer Interaction environment that allows components of different model types to be linked to one another, most notably dynamic models used in simulation to geometry models for the phenomena being modeled. To support integrative multimodeling, OWL (Web Ontology Language)[3] is employed to bridge semantic gaps between the different models and to facilitate mapping processes between the components of the different models. HCI techniques are needed to provide effective interaction methods inherent within the process and application of modeling. This involves designing, manipulating, and executing the model, while leveraging model presentation and interface metaphors.





In this paper, we provide a methodology for integrative multimodeling using an example to be discussed in section 4. Related work is addressed in section 2. The technologies used for supporting integrative multimodeling are discussed in section 3, and section 5 concludes the paper by discussing future research.


2. RELATED WORK

In the area of simulation modeling, we can find several model classifications in the literature. One of the model classifications is Dynamic, which is usually compared with Static.[1,4,5] A dynamic model represents a system as the model variables in the system evolve over time. The dynamic model types are the Functional Block Model (FBM), Finite State Model (FSM), Equation Model, System Dynamics Model, and Rule-Based Model.[1] On the other hand, a variety of geometric model types are classified in the Graphics and CAD community. For example, Boundary Representations, Constructive Solid Geometry (CSG), and wireframe models are used to represent geometric models.[6] In the area of information modeling, there are generally three types of information models: hierarchical, network, and relational models.[7,8] Examples of the hierarchical model include file trees and organization charts. Examples of the network model include program flow charts and hypertext structures. The relational model is commonly used for database system products.


Many techniques are being used for interaction and navigation in the Virtual Environment (VE). For instance, Cubaud and Topol[9] present a VRML-based user interface for a virtual library. They employ 2D windows-based interfaces in a 3D world to allow a user to see each book in the library. They also allow the user to move, minimize, maximize, or close windows by dragging and dropping them or by pushing a button, operations which are usually provided in a traditional windows system environment. Campbell and his colleagues[10] develop a virtual Geographical Information System (GIS) using the GeoVRML and Java 3D software development packages. The distinct point is that they present a menu bar and toolbars for ease of use, since most users immediately understand how to use the menu bar and toolbars. Lin and Loftin[11] provide a functional Virtual Reality application for geoscience visualization. They employ Virtual Button and bounding box concepts to interact with geoscience data. If interaction is needed, all the control buttons on the frame can be made visible; otherwise, they are set to be invisible so that the frame simply acts as the reference outside the bounding box. Hendricks, Marsden, and Blake[12] present a VR authoring system. The system provides three main modules, graphics, scripting, and events modules, to support interactions. The important point is that they support a scripting-based interaction method for non-computer-expert users.



If we consider all the interaction methods previously described, the possible ways of interacting within a desktop VR environment are the "Virtual Button", "windows", and "scripting-based interaction" approaches. In the "Virtual Button" and "windows" cases, we can implement the concepts using "touch sensor" and "IndexedFaceSet". If we employ an additional technology, such as the Hypertext Preprocessor (PHP),[13] in a desktop VR environment, "scripting-based interaction" could be a possible method.
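
As an illustration of the "Virtual Button" approach, the sketch below pairs a touch sensor with an IndexedFaceSet in an XML-encoded (X3D-style) scene fragment; the DEF names, geometry coordinates, and colors are our own illustrative choices rather than content from the systems cited above.

    <!-- A clickable "Virtual Button": the TouchSensor reacts to clicks on its sibling geometry -->
    <Transform translation="0 -1.5 0">
      <Shape>
        <Appearance>
          <Material diffuseColor="0.2 0.6 0.2"/>
        </Appearance>
        <!-- A flat quad acting as the button face -->
        <IndexedFaceSet coordIndex="0 1 2 3 -1">
          <Coordinate point="-0.5 -0.2 0, 0.5 -0.2 0, 0.5 0.2 0, -0.5 0.2 0"/>
        </IndexedFaceSet>
      </Shape>
      <TouchSensor DEF="ModelSwitchButton" description="switch model view"/>
    </Transform>
    <!-- The sensor's touchTime event can then be routed to whichever node should react to the click -->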



The Semantic Web,[14] which is the next step in the evolution of the World Wide Web, enables computers to understand the information on the Internet and to communicate with one another. In a variety of communities, Semantic Web technologies, such as the ontology languages RDF,[15] RDF-S,[16] and OWL, are being used to share or exchange information as well as to deal with semantic gaps between different domains. In the area of ontology, we can find many ontology-related applications[17,18] and frameworks[19,20] used by the information systems community. In the information systems community, including the database systems community, ontology is used to achieve semantic interoperability in heterogeneous information systems by overcoming structural heterogeneity and semantic heterogeneity between the information systems. In the simulation and modeling community, Miller, Sheth, and Fishwick propose the Discrete-event Modeling Ontology (DeMO),[21] which is an OWL-based ontology. To represent core concepts in the discrete-event modeling domain, they define four main abstract classes: DeModel, ModelConcepts, ModelComponents, and ModelMechanisms. Liang and Paredis[22] define an OWL-based port ontology to capture both syntactic and semantic information, allowing modelers to reason about the system configuration and corresponding simulation models.




3. INFRASTRUCTURE

3.1. VRML

VRML (Virtual Reality Modeling Language) represents the standard 3D language for the web.[23] To be precise, VRML is not a modeling language but an effective 3D file interchange format, a 3D analog to HTML. 3D objects and worlds are described in VRML files using a hierarchical scene graph, which is composed of entities called nodes. Nodes can contain other nodes, and this scene graph structure makes it easy to create complex, hierarchical systems from subparts. VRML also defines an event or message-passing mechanism by which nodes in the scene graph can communicate with each other, and Script nodes can be inserted between event generators and event receivers.


3.2. XML

XML (eXtensible Markup Language)[24] is a new markup language developed by the World Wide Web Consortium (W3C), mainly to overcome limitations in HTML such as the lack of separation of content and formatting. XML provides a way to define your own structure for documents and a way to keep the structure and presentation information separate. Unlike traditional methods of presenting data, which relied on extensive bodies of code, the presentation techniques for styling XML are data driven. XML styling is accomplished through another document dedicated to the task, called a style sheet.
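
As a small illustration of this separation, the fragment below is a plain XML document whose presentation is delegated to a separate style sheet via the standard xml-stylesheet processing instruction; the element names and the style sheet file name are hypothetical examples, not part of our system.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Presentation lives in a separate (hypothetical) XSL style sheet -->
    <?xml-stylesheet type="text/xsl" href="scene-report.xsl"?>
    <scene name="BattleField">
      <aircraft id="F-15A" role="fighter"/>
      <aircraft id="J-STAR" role="surveillance"/>
      <aircraft id="UAV" role="surveillance"/>
    </scene>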


3.3. X3D

The next-generation X3D (eXtensible 3D) Graphics specification was designed and implemented by the X3D Task Group of the Web3D Consortium.[25] They expressed the geometry and behavior capabilities of VRML 97 using XML (eXtensible Markup Language). In short, we can think of X3D as an XML version of VRML with several added functionalities.
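
To make the scene graph and event-routing ideas of Sections 3.1 and 3.3 concrete, the XML-encoded sketch below nests a Shape inside a Transform and lets a TimeSensor drive an OrientationInterpolator through ROUTE statements; the DEF names and field values are illustrative assumptions rather than content from our combat scene.

    <X3D profile="Interactive" version="3.2">
      <Scene>
        <!-- Scene graph: a Transform node containing a Shape node -->
        <Transform DEF="AircraftXform">
          <Shape>
            <Appearance><Material diffuseColor="0.7 0.7 0.8"/></Appearance>
            <Box size="2 0.5 1"/>
          </Shape>
        </Transform>
        <!-- Event sources: a clock and an interpolator -->
        <TimeSensor DEF="Clock" cycleInterval="4" loop="true"/>
        <OrientationInterpolator DEF="Spin"
            key="0 0.5 1"
            keyValue="0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318"/>
        <!-- Event (message) passing between nodes via ROUTEs -->
        <ROUTE fromNode="Clock" fromField="fraction_changed" toNode="Spin" toField="set_fraction"/>
        <ROUTE fromNode="Spin" fromField="value_changed" toNode="AircraftXform" toField="set_rotation"/>
      </Scene>
    </X3D>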


3.4. MXL

MXL (Multimodel eXchange Language)[26] was developed by our research group to capture dynamic model content. Dynamic model components, model behaviors, and simulation information are included in MXL. Currently, MXL supports the Finite State Machine (FSM), Functional Block Model (FBM), and Equation Model.


3.5. OWL (Web Ontology Language)

An ontology describes the meaning of the terms used in a particular domain and their interrelationships. Ontologies consist of three general elements: classes, properties, and the relationships between classes and properties. OWL can be used to represent ontologies so that the information contained in documents can be processed by applications. OWL has more power to express meaning and semantics than RDF, which is a general-purpose language for representing information on the Web, and RDF-S, which describes how to use RDF to describe RDF vocabularies, since OWL is built on RDF and extends the vocabulary of RDF-S.
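
The following minimal sketch shows these three kinds of elements in OWL's RDF/XML syntax: two classes, an object property, and the domain/range relationships that tie them together. Aircraft and has_Geometry appear later in the scene ontology (Figure 2); the GeometryModel class is our own illustrative assumption, and the usual rdf, rdfs, and owl namespace declarations are assumed.

    <owl:Class rdf:ID="Aircraft"/>
    <!-- Illustrative class, not part of the scene ontology -->
    <owl:Class rdf:ID="GeometryModel"/>
    <owl:ObjectProperty rdf:ID="has_Geometry">
      <rdfs:domain rdf:resource="#Aircraft"/>
      <rdfs:range rdf:resource="#GeometryModel"/>
    </owl:ObjectProperty>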



4. INTEGRATIVE MULTIMODELING

We are now in a position to discuss a methodology for integrative multimodeling using the example of the combat scene shown in Figure 4. In this particular example, we limit the boundary of integrative multimodeling: we focus on a geometry model and a dynamic model for the example in a VRML world, and do not consider general properties, such as the color and weight of objects. Recall that the purpose of integrative multimodeling is to provide a Human-Computer Interaction environment that allows components of different model types to be linked to one another.


To perform integrative multimodeling in 3D space, we need the following components:

- Ontology describing the combat scene explicitly
- Geometry model for the scene
- Dynamic model for the scene
- Dynamic model for Human-Computer Interactions


First, we have to create an ontology for the example. Figures 1 and 2 show the graphical representation and the OWL representation of the ontology for the combat scene, respectively. Five classes represent each sub-domain: the Aircraft class, Terrain class, Agent (Human) class, HUD (Heads-Up Display) class, and HCI class. The reasons why we include the Agent class and the HCI class as sub-domains are as follows: for the Agent class, we regard the Agent (Human) as the only object communicating with the Virtual Reality environment; for the HCI class, we need to represent the interaction means that allow components of different model types to be linked with each other. We will discuss each class and justify our assumptions and approach in the following subsections. The Terrain class will be skipped, since it does not have any role within the boundary of our scenario. However, if we consider the information model, it might be meaningful, since the terrain will have some geographic information, such as latitude.







Figure 1. Scene Ontology


4.1. Aircraft class

As shown in Figure 4, the scene involves two F-15s, one J-STAR, and an automated vehicle gathering battlefield information. In Figure 1, two classes, the Fighter class and the Surveillance and Control class, are further defined as subclasses of the Aircraft class. The Aircraft class has two properties, "has_geometry" and "has_dynamic", since all aircraft have a geometry component (X3D) and a dynamic component (X3D or MXL). Accordingly, the two subclasses inherit these two properties from the Aircraft class. The two F-15s are created as instances of the Fighter class. A J-STAR and a UAV are created as instances of the Surveillance and Control class. Each instance has data for a geometry model and a dynamic model. J-STAR, for instance, has "j_star" as its geometry data, which is used as an ID in the X3D file, and "Block_2" as its dynamic data, which is used as an ID in the MXL file or the X3D file. From these properties we can infer that J-STAR, which is a part of a model, can be interconnected semantically between the geometry model and the dynamic model within the same 3D space.
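
In the style of the Figure 2 excerpt, the J-STAR instance described above could be written in OWL roughly as follows. This is a sketch based on the property values quoted in the text, not a verbatim excerpt from our ontology file; in particular, SurveillanceAndControl is our shorthand element name for the Surveillance and Control class.

    <SurveillanceAndControl rdf:ID="J-STAR">
      <!-- ID of the geometry component in the X3D file -->
      <has_Geometry rdf:resource="#j_star"/>
      <!-- ID of the dynamic component in the MXL (or X3D) file -->
      <has_Dynamic rdf:resource="#Block_2"/>
    </SurveillanceAndControl>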


4.2. Agent class

As shown in Figure 1, a User is created as an instance of the Agent class. This class also has "has_geometry" and "has_dynamic", since we consider the user as one of the objects involved in the 3D space and as the only object which can interact with the virtual environment through the keyboard and the mouse. Even though we use a desktop VR environment, we regard the VR environment involving the external user environment as a kind of immersive VR environment. Figure 3 shows that the MXL component of the user instance is included as a part of the Interaction Model (IM), which will be explained later.





<?xml version="1.0" encoding="UTF-8" ?>
<rdf:RDF xmlns:rss="http://purl.org/rss/1.0/"
.....
  xmlns:dc="http://purl.org/dc/elements/1.1/">
.....
<owl:Ontology rdf:about="" />
<owl:Class rdf:ID="BattleField" />
<owl:Class rdf:ID="Terrain" />
<owl:Class rdf:ID="Aircraft">
.....
    <owl:onProperty>
      <owl:ObjectProperty rdf:about="#has_Geometry"/>
    </owl:onProperty>
    <owl:minCardinality rdf:datatype="http://www.w3.org/2001/XMLSchema#int"
      >1</owl:minCardinality>
  </owl:Restriction>
  </rdfs:subClassOf>
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty>
        <owl:ObjectProperty rdf:about="#has_Dynamic"/>
      </owl:onProperty>
      <owl:minCardinality rdf:datatype="http://www.w3.org/2001/XMLSchema#int"
        >1</owl:minCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class>
.....
<Fighter rdf:ID="F-15A">
  <has_Geometry rdf:resource="#F15A"/>
  <has_Dynamic rdf:resource="#Block_3"/>
</Fighter>

Figure 2. A segment of the OWL representation of the Scene Ontology


4.3. HUD class

The HUD class consists of the Button class, Display class, and Slide class, since we can find four buttons, two display areas, and one slide bar in Figure 1. Each touch sensor is created as an instance of the Button class, and each display part and the one slide bar are created as instances of the Display class and the Slide class, respectively. They also have the same properties as the Aircraft class and the Agent class. The touch sensor instance is involved in the Interaction Model (IM) as a sort of bridge between a user and the VR environment, as shown in Figure 3.



4.4. HCI class

Figure 3 shows the dynamic model representation of the instance of the Interaction Model (IM) class, which also represents the Human-Computer Interaction model. The model contains the dynamic model component of the user instance, which comes from the Agent class; the dynamic component of the touch sensor instance, which comes from the Button class; and all dynamic and geometry components of the instances that come from the Fighter class and the Surveillance and Control class. In addition, the third and fourth blocks represent the internal processes of the VRML world that are needed to change the geometry model into the dynamic model and vice versa through a morphing process. All internal processes will be explained in the next subsection.





Figure 3. Dynamic Model representation of the Interaction Model (IM)


4.5. Process

If we construe the Interaction Model (IM) verbally, the following interpretation is possible:

1) A user exists
2) The user wants to see and know the scene by touching touch sensors
3) The scene provides two kinds of model types, geometry and dynamic models
4) The two models are interchangeable
5) The user can choose a model type
6) The conversion process is achieved through morphing
7) The 3D environment displays the proper model type according to the user's requests



We create an integrative multimodeling environment manually, based on the above interpretation and the scene ontology. In the VRML world, touch sensors are generated to allow a user to change the model type. Then we connect geometry model components with dynamic model components using ROUTEs, based on the pairs of property data of the instances, that is, ("has_geometry", "has_dynamic"), which are specified in the ontology. We change the transparencies of objects to obtain the desired effect, morphing.
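
A minimal XML-encoded sketch of this wiring is shown below, assuming a single ontology pair (the J-STAR geometry node and its dynamic-model block) and a touch-sensor-driven fade. The DEF names, the timing values, and the idea of cross-fading through two ScalarInterpolators are our own illustrative assumptions rather than an excerpt from the actual scene file.

    <!-- Touch sensor generated so the user can change the model type -->
    <TouchSensor DEF="SwitchSensor"/>
    <TimeSensor DEF="MorphClock" cycleInterval="2"/>
    <!-- Fade the geometry model out while the dynamic model fades in -->
    <ScalarInterpolator DEF="GeomFade" key="0 1" keyValue="0 1"/>
    <ScalarInterpolator DEF="DynFade" key="0 1" keyValue="1 0"/>
    <!-- ROUTEs pair the components named by has_geometry / has_dynamic in the ontology -->
    <ROUTE fromNode="SwitchSensor" fromField="touchTime" toNode="MorphClock" toField="startTime"/>
    <ROUTE fromNode="MorphClock" fromField="fraction_changed" toNode="GeomFade" toField="set_fraction"/>
    <ROUTE fromNode="MorphClock" fromField="fraction_changed" toNode="DynFade" toField="set_fraction"/>
    <!-- Material nodes of the geometry and dynamic representations (names assumed) -->
    <ROUTE fromNode="GeomFade" fromField="value_changed" toNode="j_star_Material" toField="set_transparency"/>
    <ROUTE fromNode="DynFade" fromField="value_changed" toNode="Block_2_Material" toField="set_transparency"/>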

Consider Figures 4 through 7, which represent a conversion process from a geometry model to a dynamic model applied to two F-15s, one J-STAR, and one UAV.






Figure 4. Initial scene (Geometry Model)

Figure 5. Scene prior to interaction

Figure 6. Geometry/Dynamic Model morphing

Figure 7. Dynamic Model





5. CONCLUSIONS AND FUTURE RESEARCH

We have presented a strategy for integrative multimodeling using the example of the combat scene, and we have justified our approach. An ontology for the example and a Human-Computer Interaction model were created to facilitate integrative multimodeling. We have learned that effective ontology construction and Human-Computer Interaction (HCI) techniques are essential factors in supporting integrative multimodeling. HCI techniques play an especially important role in integrative multimodeling in connecting different model types together visually and seamlessly within the same digital environment.



One of our discoveries begins with examining the Scene Ontology in Figure 1. This ontology relates all of the major scene components, but the topology of the ontology is a tree, not a graph as it is within other semantic networks. One of the reasons for this is that our dynamic model (i.e., the human interaction model in Figure 3) is taken out of the ontology and "wrapped up" into a model that is connected directly to "World" in Figure 1. This practice suggests that other models might be formed in similar ways, thus simplifying the structure of ontologies.


The ontology and the Human-Computer Interaction model used for the example are intended to be a starting point for integrative multimodeling. We will add advanced interface technologies, such as position and orientation tracking for physical objects, to our current approach, as well as consider Human-Computer Interaction requirements, such as usability, emotion, immersion, and customization,[2] for the next stage of integrative multimodeling.


Remaining tasks for integrative multimodeling include:

- Performing in-depth research on HCI technologies
- Concretizing and elaborating an approach for integrative multimodeling
- Extending the range of model types to the information model
- Connecting integrative multimodeling with the rube framework[27]
- Developing an automatic process for connecting different model types in a VRML world, based on an ontology


ACKNOWLEDGEMENTS

We would like to thank the National Science Foundation under grant EIA-0119532 and the Air Force Research Laboratory under grant F30602-01-1-0592. We also thank Jinho Lee and Hyunju Shim for their technical assistance and fruitful discussions.


REFERENCES

1. P. Fishwick, Simulation Model Design and Execution, Englewood Cliffs, NJ: Prentice-Hall, 1995.
2. P. Fishwick, "Toward an Integrative Multimodeling Interface: A Human-Computer Interface Approach to Interrelating Model Structures," accepted for SCS Trans. on Simulation, Special Issue in Grand Challenges in Computer Simulation, 2004.
3. Web-Ontology (WebOnt) Working Group, http://www.w3.org/2001/sw/WebOnt/.
4. A. Law and W. Kelton, Simulation Modeling & Analysis, McGraw-Hill, 1991.
5. S. Hoover and R. Perry, Simulation: A Problem-Solving Approach, Addison-Wesley Publishing Company, 1989.
6. E. Angel, Interactive Computer Graphics: A Top-Down Approach with OpenGL, Pearson Education, 2003.
7. P. O'Neil, Database: Principles, Programming, Performance, San Francisco, CA: Morgan Kaufmann Publishers, 1994.
8. A. Dix, J. Finlay, G. Abowd and R. Beale, Human-Computer Interaction, Prentice Hall Europe, 1998.
9. P. Cubaud and A. Topol, "A VRML-based user interface for an online digitalized antiquarian collection," Proceedings of the Sixth International Conference on 3D Web Technology, 2001, pp. 51-59.
10. B. Campbell, P. Collins, H. Hadaway, N. Hedley and M. Stoermer, "Web3D in ocean science learning environments: virtual big beef creek," Proceedings of the Seventh International Conference on 3D Web Technology, 2002, pp. 85-91.
11. C. Lin and R. Loftin, "Application of virtual reality in the interpretation of geoscience data," Proceedings of the ACM Symposium on Virtual Reality Software and Technology, 1998, pp. 187-194.
12. Z. Hendricks, G. Marsden and E. Blake, "A meta-authoring tool for specifying interactions in virtual reality environments," Proceedings of the 2nd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, 2003, pp. 171-180.
13. PHP, http://www.php.net/.
14. T. Berners-Lee, J. Hendler and O. Lassila, "The Semantic Web," Scientific American, 2001, http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21.
15. Resource Description Framework (RDF), http://www.w3.org/RDF/.
16. RDF Vocabulary Description Language 1.0: RDF Schema, http://www.w3.org/TR/rdf-schema/.
17. P. Bouquet, A. Dona, L. Serafini and S. Zanobini, "ConTeXtualized Local Ontologies Specification via CTXML," Proceedings of the AAAI Workshop on Meaning Negotiation, Edmonton, Alberta, Canada, 2002, pp. 64-71.
18. V. Kashyap and A. Sheth, "Semantic and schematic similarities between database objects: a context-based approach," The VLDB Journal: The International Journal on Very Large Data Bases, 1996, pp. 276-304.
19. A. Maedche, B. Motik, N. Silva and R. Volz, "MAFRA - A MApping FRAmework for Distributed Ontologies," Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web, 2002, pp. 235-250.
20. H. Wache, T. Voegele, U. Visser, H. Stuckenschmidt, G. Schuster, H. Neumann and S. Huebner, "Ontology-Based Integration of Information - A Survey of Existing Approaches," Proceedings of the IJCAI-01 Workshop on Ontologies and Information Sharing, 2001, pp. 108-118.
21. J. Miller, G. Baramidze, P. Fishwick and A. Sheth, "Investigating Ontologies for Simulation Modeling," Proceedings of the 37th Annual Simulation Symposium (ANSS'04), 2004.
22. V. Liang and C. Paredis, "A Port Ontology for Automated Model Composition," Proceedings of the Winter Simulation Conference (WSC), 2003, pp. 613-622.
23. R. Carey and G. Bell, The Annotated VRML 2.0 Reference Manual, Addison-Wesley, 1997.
24. "World Wide Web Consortium," http://www.w3c.org.
25. "X3D Graphics Working Group," http://www.web3d.org/x3d.html.
26. T. Kim, J. Lee and P. Fishwick, "A two-stage modeling and simulation process for web-based modeling and simulation," ACM Transactions on Modeling and Computer Simulation (TOMACS), 2002, pp. 230-248.
27. T. Kim and P. Fishwick, "A 3D XML-based customized framework for dynamic models," Proceedings of the Seventh International Conference on 3D Web Technology, 2002, pp. 103-109.