Component Approach to Distributed Multiscale Simulations


Simultech 2011, 29-31 July 2011, Noordwijkerhout, The Netherlands


Katarzyna Rycerz (1,2), Marian Bubak (1,3)

(1) AGH University of Science and Technology, Institute of Computer Science, Mickiewicza 30, 30-059 Kraków, Poland
(2) ACC Cyfronet AGH, ul. Nawojki 11, 30-950 Kraków, Poland
(3) University of Amsterdam, Institute for Informatics, Amsterdam, The Netherlands



Outline

- Requirements of multiscale simulations
- Motivation for a component model for such simulations
- HLA-based component model: idea, design challenges and solutions
- Experiment with the Multiscale Multiphysics Scientific Environment (MUSE)
- Execution in GridSpace VL (demo)
- Summary



Multiscale Simulations

Consist of modules of different scale

Examples:
- virtual physiological human initiative
- reacting gas flows
- capillary growth
- colloidal dynamics
- stellar systems
- and many more ...



(Figure: example application domains - virtual physiological human, fusion, hydrology, nano material science, computational biology; pictured case: the recurrence of stenosis, a narrowing of a blood vessel leading to restricted blood flow)


Multiscale Simulations - Requirements

- Actual connection of two or more models together
  - obeying laws of physics (e.g. conservation laws)
  - advanced time management: ability to connect modules with different time scales and internal time management
  - support for connecting models of different space scales
- Composability and reusability of existing models of different scale
  - finding the existing models needed and connecting them either together or to new models
  - ease of plugging models into and unplugging them from a running system
  - standardized model connections + many users sharing their models = more chances for general solutions



Motivation

- To wrap simulations into recombinant components that can be selected and assembled in various combinations to satisfy the requirements of multiscale simulations
  - mechanisms specific to distributed multiscale simulation
  - adaptation of one of the existing solutions for distributed simulations
  - our choice: High Level Architecture (HLA)
- Support for long-running simulations - setup and steering of components should be possible also during runtime
  - possibility to wrap legacy simulation kernels into components
- Need for an infrastructure that facilitates cross-domain exchange of components among scientists
  - need for support for the component model
  - using Grid solutions (e-infrastructures) for crossing administrative domains



Related Work

- Model Coupling Toolkit
  - message-passing (MPI) style of communication between simulation models
  - domain data decomposition of the simulated problem
  - support for advanced data transformations between different models
  - J. Larson, R. Jacob, E. Ong: "The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models." Int. J. High Perf. Comp. App., 19(3), 277-292, 2005.
- Multiscale Multiphysics Scientific Environment (MUSE), now AMUSE, the Astrophysical Multi-Scale Environment
  - a scripting approach (Python) is used to couple models together
  - models include: stellar evolution, hydrodynamics, stellar dynamics and radiative transfer
  - S. Portegies Zwart, S. McMillan, et al.: "A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems." New Astronomy, 14(4), 369-378, 2009.
- The Multiscale Coupling Library and Environment (MUSCLE)
  - a software framework to build simulations according to the complex automata theory
  - concept of kernels that communicate by unidirectional pipelines, each dedicated to passing a specific kind of data from/to a kernel (asynchronous communication)
  - J. Hegewald, M. Krafczyk, et al.: "An agent-based coupling platform for complex automata." ICCS 2008, volume 5102 of Lecture Notes in Computer Science, pages 227-233. Springer, 2008.



Why High Level Architecture (HLA)?

- Introduces the concept of simulation systems (federations) built from distributed elements (federates)
- Supports joining models of different time scale - ability to connect simulations with different internal time management in one system
- Supports data management (publish/subscribe mechanism)
- Separates the actual simulation from communication between federates
- Partial support for interoperability and reusability (Simulation Object Model (SOM), Federation Object Model (FOM), Base Object Model (BOM))
- Well-known IEEE and OMT standard
- Reference implementation: HLA Runtime Infrastructure (HLA RTI)
- Open source implementations available, e.g. CERTI, ohla



HLA Component Model

- The model differs from common component models (e.g. CCA): no direct connections, no remote procedure call (RPC)
- Components run concurrently and communicate using HLA mechanisms
- Components use HLA facilities (e.g. time and data management); a minimal sketch of such a component interface follows the figure below
- Differs from the original HLA mechanism:
  - interactions can be dynamically changed at runtime by a user
  - a change of state is triggered from outside of any federate


(Figure: in the CCA model components A and B connect directly via uses/provides ports; in the HLA model components A, B and C interact only through a federation, using join/resign, set time policy, publish/unpublish, subscribe/unsubscribe and other HLA time and data management mechanisms)
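To make the externally steerable contract of such a component concrete, here is a minimal sketch of the operations it might expose; the interface and method names are illustrative assumptions, not the actual CompoHLA API.

```java
// Illustrative sketch only: a hypothetical interface of an externally
// steerable HLA component. Names are assumptions, not the real CompoHLA API.
public interface HLAComponent {

    // lifecycle of the wrapped simulation kernel
    void start();
    void stop();

    // federation management, triggered from outside the federate
    void joinFederation(String federationName);
    void resignFederation();

    // time management policy
    void setTimeRegulating(double lookahead);
    void setTimeConstrained();
    void unsetTimeRegulating();
    void unsetTimeConstrained();

    // data management: publish/subscribe on object classes (e.g. Star)
    void publish(String objectClassName);
    void unpublish(String objectClassName);
    void subscribe(String objectClassName);
    void unsubscribe(String objectClassName);
}
```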

HLA Components Design Challenges

- Transfer of control between many layers:
  - requests from the Grid layer outside the component
  - simulation code layer
  - HLA RTI layer
- The component should be able to efficiently process concurrently:
  - the actual simulation, which communicates with other simulation components via the RTI layer
  - external requests changing the state of the simulation in the HLA RTI layer

(Figure: layers of an HLA component - Simulation Code, CompoHLA library, HLA RTI - reached through the Grid platform (H2O) by external requests: start/stop, join/resign, set time policy, publish/subscribe)


HLA RTI Concurrent Access Control

- Uses the concurrent access exception handling available in HLA
- Transparent to the developer
- Synchronous mode - requests are processed as they come
  - the simulation is running in a separate thread (see the sketch below)
- Dependent on the implementation of concurrency control in the HLA RTI used
- Concurrency difficult to handle effectively
  - e.g. starvation of requests, which causes overhead in simulation execution

(Figure: as in the previous slide, with concurrent access control located in the HLA RTI layer)
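A minimal sketch of this synchronous variant, assuming the component runs the wrapped simulation in its own thread and applies external requests as they arrive, retrying when the RTI rejects a concurrent call; class and exception names below are placeholders, not the real CompoHLA or RTI API.

```java
// Sketch of the synchronous mode (placeholder names, not the real code):
// the simulation runs in its own thread; external requests are applied
// immediately and retried if the RTI reports concurrent access.
public class SynchronousHLAComponent {

    // placeholder for the concurrent-access exception thrown by the RTI
    static class ConcurrentAccessException extends RuntimeException {}

    private final Thread simulationThread = new Thread(this::runSimulationLoop);

    public void start() {
        simulationThread.start();          // simulation advances on its own
    }

    // called by the Grid layer (H2O) for every external request
    public void handleRequest(Runnable rtiCall) {
        while (true) {
            try {
                rtiCall.run();             // e.g. subscribe, set time policy
                return;
            } catch (ConcurrentAccessException e) {
                Thread.yield();            // back off and retry; may starve
            }
        }
    }

    private void runSimulationLoop() {
        // ... simulation steps calling the RTI (time advance, updates) ...
    }
}
```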


Advanced Solution - Use Active Object Pattern

- Requires calling a single routine in the simulation loop
- Asynchronous mode - separates invocation from execution
- Requests are processed when the scheduler is called from the simulation loop (see the sketch below)
- Independent of the behavior of the HLA implementation
- Concurrency easy to handle
- JNI used for communication between Simulation Code, Scheduler and CompoHLA library

(Figure: as in the previous slide, extended with a Scheduler and a request Queue between the external requests and the HLA RTI layer)

Interactions between Components

- Modules taken from the Multiscale Multiphysics Scientific Environment (MUSE)
- Multiscale simulation of dense stellar systems
- Two modules of different time scale:
  - stellar evolution (macro scale)
  - stellar dynamics - N-body simulation (meso scale)
- Data management
  - masses of changed stars are sent from evolution (macro scale) to dynamics (meso scale)
  - no data is needed from dynamics to evolution
  - the data flow affects the whole dynamics simulation
- Dynamics takes more steps than evolution to reach the same point of simulation time
- Time management - the regulating federate (evolution) regulates the progress in time of the constrained federate (dynamics)
- The maximal point in time which the constrained federate can reach (LBTS) at a certain moment is calculated dynamically according to the position of the regulating federate on the time axis (a worked example follows the figure below)







(Figure: time axes of the regulating federate (evolution) and the constrained federate (dynamics), with lookahead measured from the regulating federate's current logical time; LBTS is the time before which other federates will not send messages, so the constrained federate may only advance its logical time within this interval, while the regulating federate may not publish messages within its lookahead interval beyond its current logical time)
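As a worked example of this rule (illustrative numbers only; a real HLA RTI computes LBTS internally from all regulating federates), the constrained dynamics federate may advance no further than the evolution federate's logical time plus its lookahead:

```java
// Illustrative calculation of the conservative time-advance limit (LBTS);
// the numbers are made up, and a real HLA RTI performs this internally.
public class LbtsExample {

    public static void main(String[] args) {
        double evolutionTime = 10.0;   // regulating federate's logical time
        double lookahead = 1.0;        // evolution sends no messages earlier than this offset
        double lbts = evolutionTime + lookahead;

        double requestedTime = 12.0;   // dynamics asks to advance to this time

        // the constrained federate is granted at most LBTS
        double grantedTime = Math.min(requestedTime, lbts);
        System.out.printf("dynamics may advance to %.1f (LBTS = %.1f)%n",
                grantedTime, lbts);
    }
}
```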

Experiment Results

- Concurrent execution (conservative approach) of dynamics and evolution as HLA components: total time 18.3 s
  - pure calculations of the more computationally intensive (dynamics) component: 17.6 s
- Component architecture overhead:
  - request processing (through the grid and component layer): 4-14 ms, depending on request type
  - request realisation (scheduler): 0.6 s
- HLA-based distribution overhead:
  - synchronization with the evolution component: 7 ms
- H2O v2.1 as the Grid platform and HLA CERTI v3.2.4 (open source)
- Experiment run on DAS-3 grid nodes in:
  - Delft (MUSE sequential version and dynamics component)
  - Amsterdam UvA (evolution component)
  - Leiden (component client)
  - Amsterdam VU (RTIexec control process)
- Detailed results in the paper






HLA Components in GridSpace VL - Demo


Demo experiment - allocation of resources

Ruby script (snippet 1): runs a PBS job that allocates nodes and starts the H2O kernels

(Figure: the user, via GridSpace, runs snippet 1, which submits a PBS job to start H2O kernels on nodes A and B)


Demo experiment - simulation setup

JRuby script (snippet 2):
- asks selected components to join the simulation system
- asks selected components to publish or subscribe to data objects (stars)
- asks components to set their time policy
- determines where output/error streams should go
(an illustrative call sequence follows the figure below)

(Figure: GridSpace creates the Dynamics and Evolution HLAComponents on the H2O kernels of nodes A and B; snippet 2 makes them join the federation, subscribe (dynamics) and publish (evolution) the star data objects, become constrained (dynamics) and regulating (evolution), and sets output streaming)
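For illustration, the setup steps performed by snippet 2 could be expressed against the hypothetical HLAComponent interface sketched earlier roughly as follows; the actual demo uses JRuby and the real GridSpace/CompoHLA API, so all names below are assumptions.

```java
// Hypothetical client-side setup sequence mirroring snippet 2; HLAComponent
// is the illustrative interface sketched earlier, not the real API.
public class SetupSequenceSketch {

    public static void setUp(HLAComponent dynamics, HLAComponent evolution) {
        // both components join the same simulation system (federation)
        dynamics.joinFederation("StellarSystem");
        evolution.joinFederation("StellarSystem");

        // data management: evolution publishes star objects, dynamics receives them
        evolution.publish("Star");
        dynamics.subscribe("Star");

        // time management: evolution regulates, dynamics is constrained
        evolution.setTimeRegulating(1.0);  // example lookahead value
        dynamics.setTimeConstrained();
    }
}
```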


Demo experiment - execution

JRuby scripts (snippets 3 and 4):
- ask the components to start
- alter the time policy at runtime (unset regulating / unset constrained)
- stop the components

(Figure: GridSpace sends start, unset regulation/constrained and stop requests to the Dynamics and Evolution HLAComponents, which exchange Star data objects over HLA communication; output/error streams go to the Dynamics and Evolution views)


Demo experiment - cleaning up

Ruby script (snippet 5): deletes the PBS job, which stops the H2O kernels and releases the nodes

(Figure: the user, via GridSpace, runs snippet 5, which deletes the PBS job and stops the H2O kernels on nodes A and B)




Recorded demo: HLA Components in GridSpace VL


Summary

- The presented HLA component model enables the user to dynamically compose/decompose distributed simulations from multiscale elements residing on the Grid
- The architecture of the HLA component supports steering of its interactions with other components during simulation runtime
- The presented approach differs from that of the original HLA, where all decisions about actual interactions are made by the federates themselves
- The functionality of the prototype is shown on the example of a multiscale simulation of a dense stellar system from the MUSE environment
- Experiment results show that the grid and component layers do not introduce much overhead
- HLA components can be run and managed within the GridSpace Virtual Laboratory


For more information see:

http://dice.cyfronet.pl


https://gs2.cyfronet.pl


http://www.mapper-project.eu