TEFIS - D4.1.2: Resource management implementation

Deliverable Title: D4.1.2 Resource management implementation
Deliverable Lead: INR
Related Work package: WP4
Author(s): Elvio Borrelli
Dissemination level: Public
Due submission date: 30/04/2012
Actual submission: 08/06/2012
Project Number: 258142
Instrument: IP
Start date of Project: 01/06/2010
Duration: 30 months
Project coordinator: THALES


Abstract

This document is the update of D4.1.1. It describes the TEFIS resource manager, which is able to provision all the resources required by an experiment. The overall management, deployment, and dynamic provisioning of resources are addressed in this document.

The document outlines how the appropriate resources will be reserved among the plugged testbeds, using the resource management services exposed by the testbeds themselves. The interaction with the TEFIS Experiment and Workflow Scheduler is also considered in this document.











Project funded by the European Commission under the 7th European Framework Programme for RTD - ICT theme of the Cooperation Programme.

License

This work is licensed under the Creative Commons Attribution-Share Alike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Project co-funded by the European Commission within the Seventh Framework Programme (2008-2013)

Copyright by the TEFIS Consortium






Versioning and Contribution History

Version | Date | Modification reason | Modified by
0.1 | 15/05/2012 | Initial document | Elvio Borrelli (INR)
0.2 | 08/06/2012 | Added the executive summary, introduction and conclusion sections. Integration of the BOTNIA, ETICS, FUSP and TCF contributions | Elvio Borrelli (INR), Annika Sallstrom (LTU), Gabriele Giammatteo (ENG), Jose Junior (FUSP), Farid Benbadis (TCF)


























TABLE OF CONTENT

Terminology
Executive Summary
1 Introduction
2 Novelties and design decisions
3 Resource management implementation
3.1 TEAGLE
3.1.1 TEAGLE adoption in TEFIS
3.2 ProActive Resourcing tool
3.2.1 ProActive Resourcing tool adoption in TEFIS
3.3 TEFIS resource provisioning use case
3.3.1 Testbed resource provisioning
3.3.2 Experiment execution
4 Testbeds' resource management
4.1 ETICS
4.2 PACAGrid
4.3 PlanetLab
4.4 Botnia Living Lab
4.5 Kyatera
4.6 Testbed resource management implementation through TEAGLE
5 Resource management installation
6 Future improvements
7 Conclusion
References



TABLE OF FIGURES

Figure 1 High level view of Teagle and the underlying control framework
Figure 2 Mapping Teagle and PTM components on the TEFIS functional architecture
Figure 3 ProActive Resourcing tool architecture
Figure 4 ProActive Resourcing tool in TEFIS
Figure 5 Testbed resource provisioning scenario
Figure 6 Provisioning of internal computing resources for experiment execution
Figure 7 PlanetLab architecture
Figure 8 PACAGrid testbed registered at the TEFIS portal
Figure 9 PACAGrid resource specification provided by the testbed PACAGrid




Terminology

(In this Terminology we use the definitions that seemed most "project wide". We should reference a document that contains the project-wide terminology, but such a document was not available at the moment this deliverable was written.)

Configuration: information required to define the resources and environment needed to execute a test or test-run.

Execution Instance: an Execution Instance (EI) is a single execution at a Testbed, for example an Application's execution; under certain circumstances, it may represent a long-term rental of some storage or allocation of resources. Execution Instances are related to Tasks in two ways: there may be multiple Tasks per Execution Instance (e.g. a long-lived EI), and there may be multiple EIs per Task (e.g. hiding the complexity of multiple executions inside a single Task). It is the decision of the Experimenter or the Testbed Provider to determine the exact mapping between a Task and an EI. Also, because a Test Run is an instance of a Workflow, a single Test Run may contain multiple Execution Instances.

Experiment: in general terms, any investigation involving a set of controlled steps which aim to generate results which support or disprove a given hypothesis. In TEFIS the experiment is a set-up of the TEFIS infrastructure including a TEFIS configuration, the low-level workflow to process, the credentials to use and any other parameter necessary to reproduce it. The purpose of an experiment is to verify or validate a series of assumptions made about a software-intensive system deployed in a TEFIS configuration.

Experimenter: the main user of TEFIS, and one of the three major stakeholders. (S)he is a legal entity (a person or an organisation) who wants to run an Experiment using TEFIS. The Experimenter is the one who owns the Experiment and entrusts TEFIS to run it.

Resource instance: the instance of a given resource specification. It is the set of values defined for the attributes of a given resource specification.

GPU: Graphics Processing Unit. In this document this term is used to reference a specific group of the computational resources available at PACAGrid: the resources running on machines equipped with GPUs.

Resource specification: it represents the type of resource provided by a testbed. It is stored in the TEFIS Resource Repository (TEAGLE) and it is a set of metadata that represents the attributes the user has to define to create the instances of that resource specification.

RPM: a package management system usually used by Unix/Linux operating systems. The name RPM variously refers to the ".rpm" file format, files in this format, software packaged in such files, and the package manager itself.

Task: a single activity or process from the Experimenter's viewpoint; it is a logical process that makes sense (and has value) to the Experimenter. A Task is distinct from Execution Instances because there may not necessarily be a 1-1 mapping between Tasks and EIs. As described in the Execution Instance entry, there may be more complex Execution Instances hidden inside one Task by TEFIS that the user need not be concerned about, or multiple Tasks may use the same EI.

TEFIS: Testbed for Future Internet Services.



TEFPOL: augmenTed rEality collaborative workspace using Future internet videoconferencing Platform fOr remote education and Learning. It is one of the four new use cases from the TEFIS Open Call. It is implemented by Poznan Supercomputing and Networking Center, Poland.

Test: the instance of an experiment with defined values for all parameters of the experiment.

Testplan: it represents the plan of the experiment as the composition of the user tasks and resource instances (of the resource specifications selected during the definition of the experiment). It also defines which resources a user task needs and the scheduling of the execution of the user tasks.

Testrun: a single execution of a Workflow. There may be many runs per experiment; for example, different Workflows may be evaluated or a parameter space explored (that is, separate Test Runs using the same Workflow but slightly different parameters or circumstances). Test Runs in the same test may be executed at the same or multiple Testbeds. Because a Workflow has multiple Tasks, a Test Run may itself have multiple steps (known as Execution Instances, q.v.), each of which may be run at an individual Testbed. Thus an Experiment can contain many Test Runs, and a Test Run can contain many Execution Instances.

Testbed: the hosting environment for an Execution Instance, providing access to large amounts of test data accessible to authorised users. Data are stored using the TIDS.

Testbed provider: the organisation providing the Testbed. The Testbed Provider is the third and final major stakeholder in TEFIS.

TIDS: Testbed Infrastructure Data Service, a TEFIS-specific component used to store the test data. This component can be installed at the Testbed site and at the Experimenter site. The TIDS is the remote storage equivalent to the RPRS.

Workflow: a sequence of Tasks in a determined order. The purpose of a Workflow is to enable Task sequences to be recorded and rerun (possibly with a different configuration) at a later date. The Test Run is an instance of a Workflow.




Executive Summary

This document describes how resources are managed by the TEFIS platform. Both kinds of resources are covered: the TEFIS internal resources used to execute the workflows representing the experiments, and the testbed resources leveraged to actually execute the steps of the experiment the TEFIS user has designed and configured.

Since this document is an update of the deliverable D4.1.1, we report the content of that deliverable, adding the modifications needed to take into account the changes in the implementation of the TEFIS platform made during the second year of the project. We also include additional content that reflects the work related to resource management done when implementing the second TEFIS prototype.

The document is organized as follows.

The introduction, in Section 1, outlines the changes since the previous version of this document. There, more details about the content that changed or was added with respect to the previous version are given.

In Section 2 we report the design decisions behind the TEFIS Resource Manager: the adoption of TEAGLE and of the ProActive Resourcing tool. In particular, we describe how the analysis of the first prototype of the TEFIS platform has shown that the adoption of the ProActive Resourcing tool was a good choice, since thanks to it the TEFIS platform became highly reconfigurable and scalable.

TEAGLE and ProActive Resourcing tool adoption is recalled in Section 3, in particular in Sections 3.1 and 3.2, while Section 4 analyses the characteristics of the testbeds registered at the TEFIS platform. That section focuses mainly on the resource management tools provided by the testbeds and tries to draw conclusions about how the testbed resources can be managed by the TEFIS platform, leveraging the provided tools, via the testbed connectors. The current implementation status of the testbed connectors does not allow us to completely address the issues of testbed resource management; the analysis can only be used to define future development guidelines.

Section 5 presents the installation of the TEFIS Resource Manager prototype and explains why some components, the TEAGLE Gateway and the TEAGLE Policy Engine, are not installed yet, even though their usefulness is established in this document.

Finally, Section 6 describes what the future improvements of the TEFIS Resource Manager can be, while in Section 7 we draw the conclusions.



1 Introduction

This document draws on the content structure of deliverable [8], since it contains the updates of that deliverable plus additional content. Furthermore, it reflects the changes made to the first version of the TEFIS resource management in order to produce the second version, which takes into account all the new functionalities and components integrated in the TEFIS platform that is to be ready by the end of the second year of the TEFIS project.

The updates and additions this document includes compared to the previous version are listed below:

- Section 4, Testbeds' resource management, is a completely new section. It contains the analysis of the testbeds registered at the TEFIS platform (i.e., BOTNIA, ETICS, Kyatera, PlanetLab, and PACAGrid) in terms of resource management opportunities. In particular, it focuses on the possibility to reserve the resources provided by the testbeds from the TEFIS platform. The section does not end up with a general solution, but it states that the heterogeneity of the testbeds represents a huge obstacle to resource management.

The entire structure of the previous document has been reorganized; the content, instead, remains the same. It is noteworthy, however, that Section 6 has been added in order to define the future development guidelines for TEFIS resource management.

Finally, during the review of this deliverable, errors have been corrected and any incongruity or inconsistency with the previous version of this deliverable has been fixed (the information contained in this document has priority in case some incongruity or inconsistency with deliverable [8] must still be fixed).



2 Novelties and design decisions

TEFIS gives the user the opportunity to specify and configure the experiment he wants to run. The experiment is made up of different process steps (e.g., the build of an application and then the execution of a performance test for the same application). Each process step requires some resources to be executed. Those resources are provided by the testbeds. During the specification and configuration of the experiment, the user has to define the resource types to instantiate or to choose existing resource instances to use. To allow users to instantiate or choose resources, the TEFIS platform must store the information about the resource types provided by the different testbeds and about the existing resource instances.

Testbeds, which provide resources, are heterogeneous and are usually managed by different administrative domains. A product that deals with heterogeneous resources residing under different administrative domains already exists: TEAGLE (http://www.fire-teagle.org/). TEAGLE is a central coordination instance that holds together all the TEAGLE partner test labs needed to provide the user with the maximum range of testing and prototyping possibilities. All the TEAGLE partner labs form a federation of testbeds. Via Teagle the user can:

- browse resources provided by Panlab Partner labs;
- configure and deploy the user's own Private Virtual Test Lab;
- register new resources to be provided by the federation.

(A Private Virtual Test Lab is a testbed composed of a number of distributed software and hardware components situated in existing testbeds across Europe; it provides a testing and prototyping environment that supports very specific requirements.)

Clearly, the functionalities offered by TEAGLE are the same as those required by TEFIS to perform the management of the testbeds' resources. This is the reason behind the adoption of TEAGLE when implementing the TEFIS resource management.

The different components of TEAGLE are involved in the definition, configuration and execution of the user experiment in TEFIS. Before the experiment is defined (see [7] for more details), the user must choose the resource type. The choice (as explained in deliverable [7]) is done following a sequence of steps (domain selection, keyword selection and, finally, resource type selection from a list of resource types), during which the involved TEFIS components interact with the TEAGLE repository to retrieve the needed information. Information about the domains, the list of keywords associated to each domain and the list of resource specifications (and so the list of resource types) are retrieved from the TEAGLE repository. Once the list of resource types is available, the experimenter can choose the types of the resources involved in his experiment to complete the experiment definition.

After the experiment is defined, the experiment configuration begins. In this phase the list of resource instances, whose types belong to the list of the resource types the user selected during the definition of the experiment, is retrieved from the TEAGLE repository. The user can then select resource instances from the list or create new resource instances. While in the first case the interaction with the TEAGLE repository takes place only to load the resource instances, in the second case an additional interaction with the TEAGLE repository is needed to store the newly created resource instances. To store a resource instance, TEAGLE stores the information and the configlet associated with that instance (a configlet is a single piece of configuration and, usually, is not stand-alone). The configlet contains the actual data values that are associated with the configuration parameters of the resource specification that is instantiated.
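To make the relation between a resource specification and a configlet concrete, the short Python sketch below shows one possible way of representing them. It is purely illustrative: the attribute names are borrowed from the "eticsproject" resource described in Section 4.1, the values are invented, and the dictionaries do not reflect the actual TEAGLE data format.

    # Illustrative only: a resource specification lists the attributes an
    # experimenter must fill in; the configlet carries the actual values for
    # one resource instance. The concrete structures are assumptions.
    resource_specification = {
        "type": "eticsproject",                      # resource type exposed by a testbed
        "attributes": {"name": "string", "checkout": "string", "compile": "string"},
    }

    configlet = {                                    # values for one resource instance
        "name": "my-software-project",
        "checkout": "svn co http://example.org/repo/trunk",
        "compile": "make all",
    }

    def validate(spec, values):
        """Check that a configlet only sets attributes declared by the specification."""
        unknown = set(values) - set(spec["attributes"])
        if unknown:
            raise ValueError(f"unknown attributes: {unknown}")
        return True

    validate(resource_specification, configlet)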

Concerning the execution of the experiment, when the user chooses to run a testrun associated to a specific testplan, the TEAGLE repository is used to check the availability of the resource instances to be used, in order to allow or deny the testrun execution. The TEAGLE repository is also used to store the configlets associated to the testrun tasks and resources and, finally, to perform the booking of the resources needed during the execution of that testrun.

Even if the TEAGLE adoption in TEFIS provides users with the features needed to define and configure the resources used in their experiments, TEAGLE is not enough to fully match the requirements of the TEFIS resource manager component. In fact, it is noteworthy that in TEFIS there are two sets of resources to manage. One is the set of resources provided by the testbeds plugged into the TEFIS platform; the other is the set of resources that the Experiment and Workflow Scheduler uses to execute the low-level workflows (see [3] for more details) that represent the experiments the TEFIS users run. To manage the set of resources provided by testbeds, TEAGLE is used. To manage the resources needed by the Experiment and Workflow Scheduler to enact the user experiment on the testbeds' resources, another component is needed, since those resources are used only by the Experiment and Workflow Scheduler of the TEFIS platform and are not visible to or accessed by the TEFIS users (i.e., the TEFIS user cannot create, register and/or browse those resources).

Since in the deliverable "D4.2.1 Experiment and Workflow Scheduler prototype" we stated that we chose the ProActive Scheduling tool to implement the features of the Experiment and Workflow Scheduler component of the TEFIS platform, we find it suitable and convenient to use the ProActive Resourcing tool as the resource manager of the TEFIS internal computational resources. In this way we ensure that the two TEFIS internal components, the TEFIS scheduler and the TEFIS resource manager, are highly compatible (i.e., they belong to the same suite, the ProActive Parallel Suite).

The resources managed by the ProActive Resourcing tool are computational nodes on which activities of the low-level workflow (i.e., ProActive tasks) can be executed. That component provides only the management of the computational nodes (e.g., creation, booking, freeing, monitoring, etc.) but does not provide the scheduling of the activities to execute on those nodes. It is the ProActive Scheduling tool that interacts with the ProActive Resourcing tool to book computational nodes and, after the booking is complete, to schedule activities (the tasks of the low-level workflow) on those nodes.

The ProActive Resourcing tool allows the TEFIS platform to be highly reconfigurable and scalable. In fact, since the ProActive resources can run on every machine running a JVM, additional resources can be added to the platform in order to change the characteristics of the physical machines on which the platform is running, or to increase the number of those machines in order to obtain a better scaling factor.



3 Resource management implementation

The TEFIS Core Services are in charge of all the main interactions with testbeds, through the TEFIS connectors which implement the TEFIS Connector API defined in [2].

The Resource Manager is the Core Service in charge of the management of the heterogeneous resources that can be provisioned for an experiment by the testbeds plugged into the TEFIS platform (cf. deliverable [3] for a global view of the functional architecture of the TEFIS platform). The Resource Manager leverages mainly the TCI-RM API in order to list the available resources, request the instantiation of new resources, release them, or access their configuration.
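As a rough illustration of these operations, the sketch below outlines the shape such a resource-management interface could take. It is not the actual TCI-RM API (which is defined in [2]); the method names simply mirror the connector operations listed later in Section 4.1, and the signatures are assumptions made for the example.

    import abc

    class TestbedConnectorRM(abc.ABC):
        """Illustrative sketch of the resource-management operations a TEFIS
        testbed connector exposes; not the normative TCI-RM definition."""

        @abc.abstractmethod
        def list_resource(self) -> list:
            """Return the resources currently known to the testbed."""

        @abc.abstractmethod
        def add_resource(self, configuration: dict) -> str:
            """Request the instantiation of a new resource; return its identifier."""

        @abc.abstractmethod
        def get_resource(self, resource_id: str) -> dict:
            """Return the configuration of an existing resource."""

        @abc.abstractmethod
        def update_resource(self, resource_id: str, configuration: dict) -> None:
            """Change the configuration of an existing resource."""

        @abc.abstractmethod
        def delete_resource(self, resource_id: str) -> None:
            """Release the resource so the testbed can reclaim it."""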

The TEFIS Resource Manager is actually implemented by two separate software components: the TEAGLE Resource Manager and the ProActive Resourcing tool. Sections 3.1 and 3.2 describe these two components, their role and how they are integrated into the TEFIS platform.

3.1 TEAGLE

Teagle is a federation control unit that describes and controls arbitrary and heterogeneous resources across administrative domains. An overview of the Teagle framework is presented in Figure 1, where an assessment of the framework with respect to the TEFIS architecture and its main requirements has been presented. Here we give a quick reminder of the main concepts presented there, with the aim of presenting a mapping between the Teagle components and the TEFIS functional architecture components defined in [3].

Before focusing on the Teagle architectural components, the resource description model used as a basis for the overall approach adopted in Teagle is quickly recalled here, for convenience. A resource can be anything that can be controlled with software. A class of resources is described by a Resource Type, which describes common attributes; these are named, typed and can further specify a default value. A specific resource is represented as an instance of a Resource Type and has individual values for the attributes the type specifies. Each Resource Instance is uniquely identified by an ID across a single domain and, thanks to the domain's identifier, it can be uniquely identified in a multi-domain environment as well. The model allows the definition of two kinds of relationships between resources: the 'Containment' relationship, denoting a resource that "lives" inside another resource, and the 'References' relationship, denoting a resource instance that "uses" another one to function.
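A minimal data-model sketch of this description follows. It is an illustration of the concepts just listed (typed attributes, domain-qualified identifiers, containment and reference relationships), not the actual Teagle/Panlab schema; all class and field names are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class ResourceType:
        """A class of resources: named, typed attributes with optional defaults."""
        name: str
        attributes: dict            # attribute name -> (type, default value or None)

    @dataclass
    class ResourceInstance:
        """A specific resource: values for the attributes its type declares."""
        domain: str                 # administrative domain the instance lives in
        local_id: str               # unique within the domain
        type: ResourceType
        values: dict = field(default_factory=dict)
        contains: list = field(default_factory=list)    # 'Containment' relationship
        references: list = field(default_factory=list)  # 'References' relationship

        @property
        def global_id(self) -> str:
            # domain identifier + local ID make the instance unique across domains
            return f"{self.domain}.{self.local_id}"

    vm_type = ResourceType("virtual_machine", {"cpu": ("int", 1), "os": ("string", "linux")})
    vm = ResourceInstance("domainA", "vm-42", vm_type, {"cpu": 2, "os": "linux"})
    print(vm.global_id)   # -> domainA.vm-42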

Keeping in mind the resource model just mentioned, the Teagle components depicted in Figure 1 are quickly recalled below.

In the picture the underlying Panlab control framework for testbeds is reported as well. The main component of the control framework is the Panlab Testbed Manager (PTM). The picture highlights how, for each administrative Domain, such as an organization providing a collection of resources, a specific Domain Manager provides the resource abstraction and exposes the interface to gain access, via the Resource Adapters, to the resources available for external use.




Figure 1 High level view of Teagle and the underlying control framework

The Teagle Repository (Repo in the picture) stores information about available resource types, the configuration of existing resource instances, data about individual users and their roles and, finally, the Virtual Customer Testbeds (VCTs), which represent the sets of configured resources needed by the final users. The repository is accessed by the other components via its HTTP-based RESTful interface.

The Orchestration Engine component is in charge of instantiating a VCT on top of one or more testbeds that are part of the federation. This is achieved by translating the original VCT layout model into an executable orchestration script that includes the necessary sequence of service invocations to achieve the actual provisioning of the requested resources on each testbed. The service calls issued by the Orchestration Engine are redirected to the Teagle Gateway for intermediation purposes.

The Teagle Web Portal is a website acting as the primary contact point for customers and partners to use Teagle tools and interact with Panlab resources. A testbed operator that wants to join the Teagle infrastructure by making his resources available to Teagle customers will register his organization and the resources it provides using the functions that the portal offers. An experimenter can retrieve from there the creation environment tool (the VCT Tool) to define his own Virtual Customer Testbed (VCT).

[Figure 1 depicts, on the TEAGLE side, the Creation Environment (VCT Tool), the Repository (REST interface), the Request Processor (RP), the Orchestration Engine (OE), the Policy Engine (PE) and the TEAGLE Gateway (TGW); booking a VCT triggers policy evaluation, retrieval of the VCT info and individual provisioning requests, which the TGW dispatches to the respective PTMs. On the testbed side, each administrative domain (Domain A, B, C) runs a Domain Manager (PTM) exposing a T1 CRUD interface plus resource-specific operations through Resource Adapters (e.g. an OpenNebula Resource Adapter using OCCI, or an IMS Resource Adapter using resource-specific protocols such as ssh + scripts) towards the actual resources. The Web Portal is the TEAGLE user's entry point.]


The Teagle VCT Tool is a Web Start application where customers can browse resources, design and configure resources in a VCT, and request resource booking.

The Request Processor is a backend component responsible for validating a user VCT's configuration and transforming it into an XML format suitable as input for the Orchestration Engine.

The validation process is performed by a Policy Engine, which enforces a set of policies stored in the repository. Different kinds of policies can be configured to restrict, for example, the access to resources based on the user role or the user's organization, or to restrict combinations of resources where a resource connection does not make sense.

The Teagle Gateway instantiates and maintains service endpoints for all the registered PTMs. These service endpoints are utilised by the Orchestration Engine during VCT script execution.

3.1.1 TEAGLE adoption in TEFIS

As assessed in [9], the Panlab resource model adopted in Teagle is appropriate to represent highly heterogeneous resources and generic enough to describe the TEFIS testbeds' resources. The TEFIS resource model has therefore been based on the Teagle resource model, and the TEFIS Connector specifications have been defined taking this resource model into account.

In the following picture a mapping between the different Teagle components, together with the underlying control framework, and the TEFIS architectural components is presented. The presented mapping aims at identifying a set of coarse-grained functions common to the components involved in the mapping.

Figure 2 Mapping Teagle and PTM components on the TEFIS functional architecture



Looking at the bottom of the picture we find a direct mapping between the TEFIS Connectors and the PTM control framework. Both of them represent an abstraction layer for the testbed resources (in green in the picture), even if the abstraction layer provided by the Panlab framework is confined to resource management, and support for experiment execution or for data transfer, for example, is not in the scope of the Panlab components.

On top of the abstraction layer we find, on the left, the Teagle components and, on the right, the TEFIS middleware functional blocks. Just as the Teagle Repository stores the resource descriptions, the configuration of existing resource instances, and so on, in TEFIS the Resources Directory aims to store information about the tools, facilities, and any resources provided by the heterogeneous testbeds. As long as the resource description model is the same, there are many similarities between the two components from a functional point of view. That is why the picture highlights a direct mapping between the Teagle Repository and the TEFIS Resource Directory.

The other Teagle components, except the VCT Tool and the Web Portal, as mentioned in the Teagle overview, together provide the following main functions:

- resource provisioning;
- resource control;
- resource configuration support.

In TEFIS these features should be provided by the TEFIS Resource Manager component, in order to support the users in the definition and execution of their experiments on the testbed resources. This does not mean that the TEFIS Resource Manager should not provide any other additional feature (such as, for example, support for internal computing resource provisioning or stronger support for resource reservation), but the identified mapping suggests that these Teagle components may represent a good base for the implementation of the TEFIS Resource Manager solution.

To complete the discussion, the last two Teagle components reported in the picture are the VCT Tool and the Web Portal. The former is the graphical tool used by end users in the virtual customer testbed definition and booking. It is mainly mapped on a presentation layer entity, but part of its logic could also be mapped on back-end components acting as clients of the Teagle repository and the Teagle request processor, for example. An exact mapping of this tool is not in the focus of the Core Services definition, and the same applies to the Web Portal as well.



3.2

P
RO
A
CTIVE
R
ESOURCING TOOL

The
ProActive Resourc
ing tool
,
[10]
,

is a client/server application
that aggregates, monitors, controls
computing reso
urces and made them available to external clients. A computing resource is represented
by a ProActive node and it
can be seen as an entry point to the
physical resource where the node runs
on. The server
selects

and allocates

nodes accordin
g to user reque
sts. It restricts the

access to these
nodes to other users

during their allocation
, monitors nodes states and provides up
-
to
-
date information
TEFIS

-

D4.1.2


Resource management implementation

Page
16

of
35


about resources utilization. Once
the suitable
no
des are selected and given to the
user
s
,
they access
them directl
y and at the completion of the computation, the resources are released thanks to a
notification to

the
server
.

A client can be an end user or an entity requesting computing nodes. The main client is the ProActive Scheduling tool, which requests computing nodes on which to execute jobs. Clients with administrative rights may trigger automatic node deployment, and manually add nodes to and remove nodes from the server.

The picture below, Figure 3, reports a synoptic view of the ProActive Resourcing tool architecture.

Figure 3 ProActive Resourcing tool architecture

As mentioned above, the ProActive Resourcing tool is a client/server application. Clients are remote and may administrate or monitor the server state by means of a Command Line or a Graphical User Interface (GUI). They may also request nodes for computations using the Java API. The communication between clients, server and computing nodes is performed by ProActive. Administrative actions, like node deployment, are delegated by the resourcing tool core (the RM Server, in the picture) to node sources and then to infrastructure managers.

A Node Source represents a set of computing nodes and is characterized by an Infrastructure Manager and a Policy. The Infrastructure Manager is responsible for node deployment to a particular infrastructure with a given protocol or mechanism. The Policy defines rules and limitations of node source utilization, in terms of which users are allowed to use it, in terms of deployment time, and so on.

User requests for nodes are processed by Selection Managers, which are able to determine whether a node is suitable or not. Once a user has obtained nodes, he contacts them directly without involving the resource manager.

A Monitoring module tracks resources and their states and makes this information available to external clients.
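To summarise how these pieces fit together, the following conceptual sketch models a node source that combines an infrastructure (a pool of nodes) with a usage policy, and a selection step that filters suitable nodes before they are handed to a client. It is written in Python purely as an illustration; it is not the ProActive Java API, and all names and structures are assumptions.

    # Conceptual sketch only (the real tool is a Java client/server application):
    # a node source pairs a node pool with a usage policy, and a selection step
    # filters suitable nodes before they are handed out to a client.
    class NodeSource:
        def __init__(self, name, nodes, allowed_users):
            self.name = name
            self.nodes = nodes                    # free computing nodes (dicts)
            self.allowed_users = allowed_users    # the policy: who may use this source
            self.busy = {}                        # node id -> user currently holding it

        def acquire(self, user, predicate, count=1):
            """Give 'user' up to 'count' free nodes satisfying 'predicate'."""
            if user not in self.allowed_users:
                raise PermissionError(f"{user} may not use node source {self.name}")
            selected = [n for n in self.nodes if predicate(n)][:count]
            for node in selected:
                self.nodes.remove(node)
                self.busy[id(node)] = user        # other users cannot get this node
            return selected

        def release(self, nodes):
            """Return nodes to the free pool once the computation has finished."""
            for node in nodes:
                self.busy.pop(id(node), None)
                self.nodes.append(node)

    source = NodeSource("local", [{"os": "linux", "cores": 4}], {"scheduler"})
    nodes = source.acquire("scheduler", lambda n: n["os"] == "linux")
    source.release(nodes)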




3.2.1 ProActive Resourcing tool adoption in TEFIS

As explained in [1], the ProActive Scheduling tool is used as a base for the implementation of the TEFIS Experiment and Workflow Scheduler. It leverages the ProActive Resourcing tool to acquire the computing resources on which tasks are executed.

In TEFIS the workflows enacted by the Scheduler are the experiments defined by the TEFIS users, which involve the heterogeneous resources provisioned by the testbeds. Those experiments are internally represented as ProActive jobs, where the tasks composing each job represent the interactions with the testbeds via their exposed Connectors.

While the testbeds provision the resources required by the experimenters to perform their experiments, the ProActive Resourcing tool provisions the computing resources, internal to the TEFIS platform, needed by the TEFIS Scheduler in order to orchestrate the user experiments.

For each testbed a specific Node Source is configured in the ProActive Resourcing tool. When the Scheduler needs to interact with a given testbed to execute a process step of the user experiment, it can acquire a computing node from the right Node Source, which is configured with the appropriate code and configuration parameter values needed to properly interact with the testbed's connector. In this way any specific configuration needed for the interaction with a given testbed is hidden in the ProActive Resourcing configuration, and the ProActive jobs corresponding to the user experiments can be kept cleaner and simpler.

The picture below, Figure 4, shows what has just been explained.

Figure 4 ProActive Resourcing tool in TEFIS





3.3 TEFIS RESOURCE PROVISIONING USE CASE

The TEFIS Resource Manager solution is based on Teagle, as said above, for what concerns the management of the resources provided by external testbeds and involved in the user experiments. The TEFIS Resource Manager solution also includes the ProActive Resourcing tool, for what concerns the provisioning of computing resources internal to the TEFIS platform itself, as required by the TEFIS Scheduler that needs to interact with external testbed connectors. The two most significant usage scenarios of interaction between the involved entities are presented below.

3.3.1 Testbed resource provisioning

The main flow for resource provisioning starts from the resource configuration design phase performed by the experimenter on the Portal. Once this design phase is completed, the experimenter has identified a set of resources from one or several testbeds, each one with its own configuration. This set of information represents the 'resources configuration' required for the experiment execution.

Figure 5 Testbed resource provisioning scenario

This 'resources configuration' is stored by the Experiment Manager in the Resources Directory (based on the Teagle repository) in the form of a VCT file. In order to start the experiment execution, these resources have to be provisioned. In step 2, book resources(), the Experiment Manager asks the Resource Manager (based on Teagle) to provision the required resources according to the VCT previously defined. The Resource Manager then retrieves the VCT file and, according to this file, for each resource it contacts the Connector of the appropriate testbed in order to request the creation of the resource with the required configuration. Once the provisioning process is finished, the VCT file is updated with additional information (like the Resource ID and the provisioning state) characterizing each resource instance provisioned by the testbed.
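A compact way to picture this booking flow is the sketch below: it walks the resources listed in a VCT, asks the connector of the owning testbed to create each one, and writes the returned identifier and provisioning state back into the VCT. The dictionary layout of the VCT and the connector object with an add_resource() method are assumptions made for the example, not the actual file format or API.

    def book_resources(vct, connectors):
        """Illustrative version of step 2 ('book resources').
        'vct' is assumed to be {'resources': [{'testbed': ..., 'configuration': ...}, ...]}
        and 'connectors' maps a testbed name to an object exposing add_resource()."""
        for resource in vct["resources"]:
            connector = connectors[resource["testbed"]]
            try:
                # ask the testbed to create the resource with the required configuration
                resource["resource_id"] = connector.add_resource(resource["configuration"])
                resource["provisioning_state"] = "provisioned"
            except Exception as error:
                # keep track of failed provisioning so the experiment can be stopped
                resource["provisioning_state"] = f"failed: {error}"
        return vct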


3.3.2 Experiment execution

The experiment execution starts when the flow associated to the experiment test run is submitted to the TEFIS Workflow Scheduler by the Experiment Manager component (cf. [5], section "Experiment Manager - Other Building blocks", for more details about this interaction). The Experiment Manager receives the identifier of the submitted job (the jobID), in order to be able to monitor its execution.

Figure 6 Provisioning of internal computing resources for experiment execution

The TEFIS Workflow Scheduler schedules the execution of each process step according to the defined flow structure. In order to execute a process step, the Scheduler asks the ProActive Resourcing tool (PA Resourcing tool, in the diagram) for a computing node on which to perform the task execution for the testbed involved in the process step (Testbed X, TBx). The PA Resourcing tool gets a node from Node Source x and returns a ProActive node (PANode-x) to the Scheduler. The Scheduler delegates the PANode-x, which is pre-configured with the right properties (such as the testbed endpoint and specific testbed configuration properties like the most appropriate polling time for the execution status check), to interact with the connector exposed by testbed x, Connector-TBx. The execution logic is contained in the task template for testbed x (cf. [1]) available in the PANode-x environment. An execution id, exec id, is returned in order to let the PANode-x monitor the process execution progress.
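The sketch below restates this sequence as code: acquire a node from the testbed-specific node source, start the step through the testbed connector, poll the execution status by its exec id, and release the internal node when the step is over. The connector methods start_execution()/get_status() and the node-source acquire()/release() calls are hypothetical names introduced only for this illustration.

    import time

    def run_process_step(node_source, connector, step, poll_seconds=5):
        """Illustrative sketch of one process-step execution in TEFIS;
        all method names on 'node_source' and 'connector' are assumptions."""
        node = node_source.acquire()                      # the PANode-x equivalent
        try:
            exec_id = connector.start_execution(step)     # exec id returned by the testbed
            while connector.get_status(exec_id) == "RUNNING":
                time.sleep(poll_seconds)                  # testbed-specific polling time
            return connector.get_status(exec_id)
        finally:
            node_source.release(node)                     # free the internal computing node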



4 Testbeds' resource management

The following sections describe how, taking into account the requirements defined in the scenario of an experiment, the appropriate resources can be reserved among the testbeds plugged into the TEFIS platform. In the following, we analyze in which way each testbed exposes its resources and, testbed by testbed, we outline the tools and the related features, exposed as services by the testbeds, used to manage the resources the testbeds provide. Finally, we describe the common interface exposed by the testbeds to the TEFIS experiment and workflow scheduler (see deliverable [1]) in order to manage the resources.

4.1 ETICS

The ETICS testbed does not provide its users with "classical" resources like physical/virtual machines, networks or sensors that users can use to run their experiments. The ETICS testbed offers its users the capability to configure a software project and use its internal infrastructure to compile, test and produce distribution packages from that source code. The main concept in the ETICS testbed is, therefore, the software project under test and, for that reason, the most natural way of proceeding has been to expose in TEFIS resources of type "eticsproject".

The eticsproject resource represents what in ETICS is called a project: an entity where all data about a software project is stored (e.g. name, description, source code repository URL, checkout, build, test and packaging commands).

The eticsproject resource has the following configuration parameters that can be configured by the experimenter in the Experiment Manager (an illustrative configuration example is sketched at the end of this section):

- name: the name of the project in ETICS;
- description: the human-readable description of the software project;
- checkout: commands to execute to download the source code;
- configure: commands that will be executed just after the checkout commands;
- compile: commands to execute to build the code. These commands are executed after the configure step;
- test: commands to execute tests against either source code or binaries (e.g. junit, functional tests, sloccount);
- environment: a list of environment variables in the form "name=value" that will be automatically set by ETICS before the build/test execution starts;
- externals: a list of eticsexternals resources (see below) referenced by the project. They will be made automatically available by ETICS during the execution. For instance, a project written in Java will usually include the jdk eticsexternals resource in this list.
A second type of resource in ETICS is the eticsexternals. A resource of type eticsexternals represents a third-party software library stored in the ETICS software repository. This type of resource is read-only and cannot be created by TEFIS users; such resources can only be referenced from eticsproject resources.

ETICS resources are exposed and can be managed through the ETICS Connector Interface. The behaviour of the resource management functions is as follows:

- add_resource(): creates a new project in ETICS using the parameters specified in the resource configuration;
- update_resource(): updates the parameters of a project according to the configuration passed as parameter;
- get_resource(): returns the configuration of a given project;
- delete_resource(): deletes the specified project from ETICS;
- list_resource(): returns the list of ETICS projects.

Not being physical resources but only virtual resources created and managed in the ETICS database, ETICS resources do not need any reservation mechanism and all functions, barring internal errors, will always manage to create and modify resources on the ETICS testbed.
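To give a feel for what an eticsproject configuration could look like when handed to the connector, the snippet below fills in the parameters listed above with invented values. The etics_connector object and the commented calls stand for an implementation of the connector operations and are assumptions of this sketch.

    # Hypothetical values for the eticsproject parameters listed above.
    eticsproject_config = {
        "name": "tefis-demo-project",
        "description": "Demo project built and tested through TEFIS",
        "checkout": "svn co http://example.org/svn/demo/trunk demo",
        "configure": "./autogen.sh",
        "compile": "./configure && make",
        "test": "make check && sloccount .",
        "environment": ["JAVA_HOME=/usr/lib/jvm/default", "BUILD_TYPE=release"],
        "externals": ["jdk"],          # eticsexternals resources referenced by the project
    }

    # project_id = etics_connector.add_resource(eticsproject_config)   # create the project
    # print(etics_connector.get_resource(project_id))                  # read back its configuration
    # etics_connector.delete_resource(project_id)                      # remove it from ETICS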


4.2 PACAGRID

The PACAGrid testbed represents a grid in which the whole set of physical machines is accessed and utilized using the ProActive middleware. The resources provided by the PACAGrid testbed, in TEFIS, are called "computational resources". The word "computational", here, is used to describe the PACAGrid resource as the physical machine, belonging to the grid, that can execute the applications and, in general, the computations defined by the PACAGrid user.

The computational resource in the ProActive middleware and, hence, in PACAGrid is abstracted by the concept of ProActive node. This is a software entity. It can be seen as the entry point to the physical resource the node runs on. The computational resources are managed by the ProActive Resourcing tool running on PACAGrid, exclusively on behalf of the ProActive Scheduling tool also running on PACAGrid.

The PACAGrid user can only access the ProActive Scheduling tool. This means the PACAGrid user does not have direct access to the computational resources, but he can use them indirectly by submitting the applications to execute, i.e. the ProActive jobs, to the ProActive Scheduling tool running on PACAGrid. In the context of TEFIS, the submission of the ProActive jobs takes place when the experimenter runs an experiment that contains a step requiring access to the PACAGrid testbed.

The number of computational resources in PACAGrid is static and fixed. It is defined by the PACAGrid administrator in the configuration of PACAGrid itself. The users, hence, are not allowed to create and add new resources dynamically. They can only select the set of computational resources their ProActive jobs should run on.

The selection of the computational resources is performed by the ProActive selection script: a small application (i.e. a script) executed on each computational resource to verify whether it satisfies the selection criteria (e.g., using a ProActive script the user can select the physical machines, on which the ProActive nodes are running, characterized by the desired operating system, Linux or Windows, or by other properties). The ProActive selection script must be attached to the ProActive job whose computational resources it must select.
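As a rough analogue, the snippet below expresses the kind of predicate a selection script evaluates on each candidate node: it answers "is this machine suitable?" based on local properties. It is written in Python purely to illustrate the idea; real ProActive selection scripts are small scripts shipped with the job and run on every candidate node, and the criteria here (Linux, at least 2 CPUs) are invented for the example.

    import os
    import platform

    def selection_script():
        """Conceptual equivalent of a selection script: return True only if the
        node this code runs on matches the desired properties (criteria invented
        for the example)."""
        is_linux = platform.system() == "Linux"
        enough_cpus = (os.cpu_count() or 1) >= 2
        return is_linux and enough_cpus

    if __name__ == "__main__":
        # A node is kept for the job only when the script evaluates to True.
        print("node selected" if selection_script() else "node rejected")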

The process to select the computational resources is triggered just after the ProActive job is submitted to the ProActive Scheduling tool running on PACAGrid. Since the moment at which the execution of the ProActive job begins is the same as the one at which the job is submitted to the ProActive Scheduling tool, it is clear that the selection of the computational resources cannot be performed before the execution of the ProActive job starts; in fact, the selection process is itself part of the execution of the ProActive job. Having this in mind, it is moreover clear that even if the users can select the computational resources their ProActive jobs should run on, this does not mean that, after the selection process on PACAGrid has finished, the selected computational resources are reserved for the execution of the ProActive job: the ProActive Scheduling tool starts the execution of the ProActive job as soon as it is submitted and there are enough resources available, whereas, if a reservation mechanism existed, the resources would have been reserved before the execution of the ProActive job started.

The lack of a reservation mechanism in PACAGrid leads to the conclusion that the only execution policy provided by the ProActive Scheduling tool is the "best effort" one. This implies that the user has no guarantee that his ProActive job will be executed: the execution is triggered only if a sufficient amount of computational resources is available. But since all the resources the PACAGrid testbed provides are shared among all the PACAGrid users, the probability of executing the ProActive job defined by a given user decreases as the number of ProActive jobs submitted by the other users and the amount of computational resources consumed by those ProActive jobs increase.

All that has been written above means that during the implementation of the resource management functionalities of the PACAGrid connector we had to manage the difference in semantics between the TCI-RM API (defined in deliverable [2]) and the PACAGrid resource management tools. In fact, the TCI-RM API allows the client of the testbed connector to identify and get a specific set of resources that will be used to run an experiment later; basically the resources are assigned to the experiment until the moment they are released. That being said, the TCI-RM API does not seem to include the notion of reservation: a resource cannot be reserved for a defined interval of time in the future. Resource provisioning is triggered when the 'add_resource' operation is called, and the resource should be available to the caller until the invocation of the 'delete_resource' operation, which forces the release of the resource. The difference with the current approach in PACAGrid is that in PACAGrid the user does not really know if the resources will be available when he submits the ProActive job, because that job will be run on a best-effort basis. The implementation of the TCI-RM API in the PACAGrid connector therefore required the implementation of a new functionality in the PACAGrid testbed, to allow the user to acquire a resource during the experiment set-up phase so that the user can be sure the resource will be available during the experiment execution.

This new functionality is provided by a ProActive infrastructure manager: the TefisInfrastructureManager. It is the entity that manages PACAGrid computational resources for TEFIS experiments. When it is started, it manages no computational resources. It is after the "add_resource" operation is invoked that computational resources are created/acquired and registered at the TefisInfrastructureManager. The created/acquired resources are tagged with the information that identifies the experiment that will use them. During the execution of the experiment, a selection script is executed to select the PACAGrid computational resources created/acquired for that specific experiment. That means the experiment will have exclusive access to the PACAGrid resources it needs during its execution.

Then, when the step of the experiment that is running on the PACAGrid testbed finishes and the "delete_resource" operation of the PACAGrid testbed connector is invoked, the used computational resources are unregistered from the TefisInfrastructureManager. That means they are destroyed and not available for future experiments.
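The lifecycle just described (acquire on add_resource, tag with the owning experiment, match by tag at execution time, destroy on delete_resource) can be modelled with the small sketch below. It is a conceptual Python model only; the real TefisInfrastructureManager is a ProActive infrastructure manager, and every name in the sketch is an assumption.

    class TefisInfrastructureManagerSketch:
        """Conceptual model of the behaviour described above, not the real component."""

        def __init__(self):
            self.resources = {}                   # resource id -> experiment id (the tag)
            self._next_id = 0

        def add_resource(self, experiment_id):
            """Acquire a computational resource and reserve it for one experiment."""
            self._next_id += 1
            resource_id = f"node-{self._next_id}"
            self.resources[resource_id] = experiment_id
            return resource_id

        def select(self, experiment_id):
            """What the selection script does at run time: keep only the nodes
            tagged for the experiment being executed."""
            return [rid for rid, tag in self.resources.items() if tag == experiment_id]

        def delete_resource(self, resource_id):
            """Unregister (destroy) the resource once the experiment step is over."""
            self.resources.pop(resource_id, None)

    manager = TefisInfrastructureManagerSketch()
    node = manager.add_resource("experiment-7")
    assert manager.select("experiment-7") == [node]
    manager.delete_resource(node)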


4.3 PLANETLAB

PlanetLab is a worldwide network dedicated to the development of network services. The PlanetLab initiative started in 2003 and, since then, more than 1,000 researchers at top academic institutions and industrial research labs have used PlanetLab to develop new technologies for distributed storage, network mapping, peer-to-peer systems, distributed hash tables, and query processing. Currently, the PlanetLab network is composed of 1123 nodes distributed over 544 sites.

The current PlanetLab user community consists primarily of researchers in networking and distributed systems, although PlanetLab may host services with user communities who are unaware of its existence. The testbed is best suited to services that need multiple, possibly geographically dispersed, "points of presence".

PlanetLab is designed to run on dedicated hosts. It provides purpose-built software from the ground up, including an operating system (currently a modified Linux) with extensions for virtualization. PlanetLab uses virtualization containers to manage resource allocation and to achieve isolation between a potentially large number of long-lived, independent services.

PlanetLab provides its users with a virtual container at each host to act as a "point of presence" for a service. From a service programmer's perspective, PlanetLab provides a distributed virtual machine with a relatively low-level system abstraction, in the form of (a distributed set of) virtual containers and a familiar Unix-style API. It is envisaged that high-value services, such as storage or naming, will be built by the user community, and that successful ones will eventually be incorporated into the common core.

To better understand its functioning, PlanetLab's architecture is depicted in Figure 7.




Figure 7 PlanetLab architecture


When an organization joins PlanetLab, it hosts at least two physical servers at one or more locations. Each location is called a site, and the servers are called nodes. Users (experimenters) create accounts on one or more PlanetLab nodes to perform their experiments. The nodes host one virtual machine per account, and users, provided with superuser privileges, can run any number of processes within their own slivers. The virtual machines are called slivers, and the set of virtual machines assigned to one account is called a slice. PlanetLab is a shared testbed, so multiple slivers are running on the same node at any given time.

With this information, we can see that a user can access resources available on many machines distributed around the world. However, this user will share resources on each node with other users. Furthermore, it is currently not possible to book resources, which implies best-effort usage.

Thus, the PlanetLab connector allows the following operations concerning resource management:

- ability to load a slice for the experimenter;
- ability to add nodes to the slice;
- ability to remove nodes from the slice;
- ability to display the nodes available in the slice;
- ability to deploy code to nodes of the slice;
- ability to execute code on the nodes of the slice.



All slice and node management in PlanetLab is handled by a centralized service, called PlanetLab Central (PLC), which exposes the API through which the PlanetLab Central database is accessed and maintained. The API is used by the PlanetLab connector, by nodes, by automated scripts, and by users to access, manage, and update information about users, nodes, sites, slices, and other entities maintained by the database. The API is accessed via XML-RPC over HTTPS and supports the standard introspection calls system.listMethods, system.methodSignature, and system.methodHelp, and the standard batching call system.multicall.
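As a minimal sketch of how a client (the connector included) can talk to PLC, the Python fragment below uses the standard xmlrpc.client module and only the introspection calls named above; the endpoint URL is an assumption, and non-introspection calls typically expect an authentication structure as their first argument.

```python
# Minimal sketch, assuming a reachable PLC XML-RPC endpoint at the URL below;
# only the introspection calls named in the text are used.
import xmlrpc.client

PLC_API_URL = "https://www.planet-lab.org/PLCAPI/"  # assumed endpoint

plc = xmlrpc.client.ServerProxy(PLC_API_URL, allow_none=True)

# List every method the API exposes (standard introspection call).
methods = plc.system.listMethods()
print(f"{len(methods)} methods available")

# Print the documentation of the standard batching call.
print(plc.system.methodHelp("system.multicall"))
```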


4.4 BOTNIA LIVING LAB

The resources provided by Botnia for experimenters are the following:

- User involvement expertise;
- Methodology for user involvement;
- The Botnia Living Lab user database;
- Business model expertise.

Access to resources and result data is handled manually, depending on the resources to be accessed. Via the Connector user interface the resource provisioning will be handled by the Botnia manager.

In practice this means that all Botnia resources are handled by a physical person, i.e. the Botnia manager. The Botnia user can therefore only access the Botnia resources via the Botnia manager. The users are not allowed to create or add new resources; they can only select the available resources they need and want to reserve. There is no automatic reservation: the reservation takes place via e-mail through the connector. The resources are instantiable, but each instance can be different depending on the requirements of the users, and every resource can be used several times.

4.5 KYATERA

The KyaTera testbed serves two main purposes: providing a dedicated optical fibre path for research and offering a measurement tool for network quality. Based on this, the testbed provides two different types of resources: the kyateranetwork resource, which is the physical network itself, and the qualityresource, which is a set of tools to measure the quality of a network for the experiment.

The kyateranetwork resource uses a path of the KyaTera network provided by FAPESP in a previous experiment in Brazil, and this path is still available in the University of São Paulo facilities. The path connects two different laboratories, one situated in the Engineering School and the other in the Biologics School. It is a few kilometres long and is made of optical fibres, switches and Linux PCs at the nodes.

At this moment, the kyateranetwork resource can be freely used; in other words, there is no limit on parallel experiments, unless an experiment needs a free path (to use the full capacity of the network), in which case the experimenter can ask for one.


To book this path, the experiment just needs to create a new kyateranetwork instance inside the KyaTera network. At the TEFIS connector level, when a new resource is requested, the connector calls the KyaTera facility and creates a new instance. This instance needs to have a name (which must be unique), the nodes that will be used, and optionally the path to an application to be deployed on one node.

When this instance is created, a new experiment repository is created in the KyaTera facility; it will contain the deployed application and all possible test results (these tests will be created alongside the qualityresource).

As this resource is a path in the network, it can be alive from the moment of its creation (except in the case of a test with a free network path), and the resource path dies when the delete resource operation is called in the connector.

When the connector calls the KyaTera web service to delete a resource, the application stops, the active repository is deleted (the results can persist in the database), the application is deleted and the instance is removed from the facility memory.

The qualityresource resource is a set of tools to measure the quality of the network (in this case the minimal network features needed to provide a level of quality and reliability in a network).

Unlike the networkresource, the qualityresource is composed of a set of software that runs on an Intel Core2 Quad Q8400 machine with 4 GB of RAM running Ubuntu 11 and, at this moment, has unlimited booking. The only restriction on this resource is that it needs a networkresource, previously booked, in order to be able to run. This makes sense, as the test could not run if there were no network and no application running to measure.

When a qualityresource is booked, the connector calls the KyaTera facility via the "add_resource" method and a new quality instance is created, with the desired experimenter parameters, inside KyaTera. This instance will create the outputs and the results in the same repository as the networkresource; when the "delete_resource" method is called, the results are persisted in the KyaTera database, the repository is cleared (it will only be deleted by the call to the "delete_resource" method on the networkresource connector adapter) and, finally, the instance is removed from the KyaTera facility.
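The booking sequence just described can be summarised in the following sketch. Only the "add_resource" and "delete_resource" operation names come from the text above; the base URL, the JSON transport and every field name are assumptions made for illustration.

```python
# Hypothetical sketch of the KyaTera booking sequence; base URL, payload fields
# and JSON transport are assumptions, only the operation names are from the text.
import requests

KYATERA_URL = "http://kyatera.example.org/facility"  # assumed endpoint


def book_experiment():
    # 1. Book the network path (kyateranetwork): unique name, nodes,
    #    and optionally an application to deploy on one node.
    requests.post(f"{KYATERA_URL}/add_resource", json={
        "type": "kyateranetwork",
        "name": "tefis-exp-42",                 # must be unique
        "nodes": ["engineering-lab", "biologics-lab"],
        "application": "/apps/probe.tar.gz",    # optional
    })
    # 2. Book the measurement tools (qualityresource); it requires the
    #    previously booked networkresource in order to run.
    requests.post(f"{KYATERA_URL}/add_resource", json={
        "type": "qualityresource",
        "name": "tefis-exp-42-quality",
        "network": "tefis-exp-42",
    })


def tear_down():
    # Deleting the qualityresource persists its results and clears the repository;
    # deleting the networkresource then removes the repository and frees the path.
    requests.post(f"{KYATERA_URL}/delete_resource", json={"name": "tefis-exp-42-quality"})
    requests.post(f"{KYATERA_URL}/delete_resource", json={"name": "tefis-exp-42"})
```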

Focusing on the resource reservation mechanism provided by the KyaTera testbed, it must be said that, the way it is implemented today, the user does not even need a KyaTera account to book the KyaTera resources. This will, however, change during the TEFIS project.

In the future, to use the KyaTera resources, the user will only need a KyaTera account, and when the user asks for a resource the system will provide it, if it is available. Hence, the main requirement in order to access the KyaTera resources is that the user owns an account at the KyaTera testbed. The user can obtain such an account via the KyaTera web site that is to be created. Since the site does not exist yet, if needed KyaTera can provide an additional RESTful interface in order to let the TEFIS platform create a proxy user account, on behalf of the TEFIS experimenter, for the KyaTera testbed.

Adding some details, there is no need for reservation in KyaTera: the resource is only created through the TEFIS portal, when the booking process is triggered by the Experiment Manager. Furthermore, there is no need to reserve a resource in KyaTera, since the only project using those KyaTera nodes is TEFIS. The resources are booked as soon as the experimenter makes the request on TEFIS.

A question could arise:

TEFIS is used by many users and each user can define an experiment; then, if each experiment needs KyaTera, after the booking phase is completed, will each experiment get its own KyaTera resource instances (i.e. resource instances used by that specific experiment and by no other experiment)? The answer to this question is yes: it is one resource per experiment. This is because of the way the resources are created in KyaTera: only the owner can get the results and delete the resource. The KyaTera testbed does not have a support team to manage the resources, so all management needs to be done at the software level and, to avoid all kinds of issues, each experiment has its own resources.


4.6 TESTBED RESOURCE MANAGEMENT IMPLEMENTATION THROUGH TEAGLE

In Section 3 we stated that the resource management implementation in the second version of the TEFIS platform is based on two separate components: TEAGLE and the ProActive Resourcing tool. While the latter is used to manage the computational resources on which the Experiment and Workflow Scheduler runs the TEFIS users' experiments, the former is used to manage the resources provided by the different testbeds registered at the TEFIS platform. The two workflows are presented respectively in Figure 6 and Figure 5. Those figures show how the resource provisioning actually works in the second version of the TEFIS platform.

Here we want to present how the resource management provided by each testbed is leveraged by TEAGLE in order to provide the management of the resources of a given testbed throughout the TEFIS platform.

Although the testbed connector exposes the following operations (as reported in [12]):

- TCI-RM;
- TCI-DM;
- TCI-EXEC;
- TCI-IDM;

only the resource management part, i.e. TCI-RM, is considered here.

This part is implemented taking into account the T1 interface of the TEAGLE PTM. The operations defined by the T1 interface are the following:

- add_resource();
- get_resource();
- update_resource();
- delete_resource();
- list_resource().
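These five operations can be read as the resource-management contract that each connector PTM has to fulfil. The sketch below is a hypothetical rendering of that contract; parameter and return types are assumptions, and the actual TEAGLE signatures may differ.

```python
# Hypothetical rendering of the T1 resource-management operations; the five
# method names come from the list above, parameter and return types are assumptions.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class T1ResourceManagement(ABC):
    """Contract a testbed connector PTM is expected to fulfil."""

    @abstractmethod
    def add_resource(self, resource_type: str, config: Dict[str, Any]) -> str:
        """Create a new resource instance and return its identifier."""

    @abstractmethod
    def get_resource(self, resource_id: str) -> Dict[str, Any]:
        """Return the current description of an existing resource instance."""

    @abstractmethod
    def update_resource(self, resource_id: str, config: Dict[str, Any]) -> None:
        """Reconfigure an existing resource instance."""

    @abstractmethod
    def delete_resource(self, resource_id: str) -> None:
        """Release the resource instance and clean up its state."""

    @abstractmethod
    def list_resource(self) -> List[str]:
        """Return the identifiers of all instances managed by this PTM."""
```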

Each testbed connector has its own implementation of the TEAGLE PTM interface. The testbed connector PTM is started at the same time as the other testbed connector software modules. The testbed connector PTM represents the connection point between the TEAGLE instance running at the TEFIS platform
and the resource management of the specific testbed. In fact, as stated in deliverable [9]: each individual request is sent to the Teagle Gateway; this component then dispatches the resource provisioning requests to the domain managers of the respective domains and, finally, the domain managers (PTMs) conduct the provisioning of resources according to the incoming requests. From the perspective of the TEAGLE platform, this operation is opaque and could be performed in a number of ways. However, the implementation of a PTM available at present utilizes so-called resource adapters as an abstraction layer for actual resources.

The testbed provider, hence, must implement the ResourceAdapter when registering his own testbed(s) at the TEFIS platform. The ResourceAdapter is responsible for the management of a given type of resources provided by the testbed. The testbed connector developer should, then, implement as many ResourceAdapters as there are types of resources provided by the testbed.

Each resource type is identified by a name (i.e. it has the same value as the field "Name" of the resource specification shown in Figure 9) that is used by TEAGLE to select the ResourceAdapter to which the resource provisioning requests should be sent. The ResourceAdapter instances are started after the testbed connector itself. In that way they can register at the PTM started during the initialization of the testbed connector itself. This means that, when the testbed connector is started, first its own PTM is started as well and, then, its ResourceAdapter instances are registered at that PTM.
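The start-up order just described can be summarised with the hypothetical sketch below; class and method names are illustrative assumptions and do not reproduce the actual TEAGLE or connector code.

```python
# Illustrative sketch of the connector start-up sequence; all names are assumptions.
class ResourceAdapter:
    def __init__(self, resource_type: str):
        # The adapter manages exactly one resource type, identified by the
        # "Name" field of the corresponding resource specification.
        self.resource_type = resource_type


class PTM:
    def __init__(self):
        self.adapters = {}

    def register(self, adapter: ResourceAdapter) -> None:
        # Provisioning requests for this resource type will later be routed here.
        self.adapters[adapter.resource_type] = adapter


def start_testbed_connector() -> PTM:
    # 1. The connector's own PTM is started first...
    ptm = PTM()
    # 2. ...then one ResourceAdapter per provided resource type is started
    #    and registered at that PTM (e.g. "computational_resource" for PACAGrid).
    for resource_type in ("computational_resource",):
        ptm.register(ResourceAdapter(resource_type))
    return ptm
```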

In brief, as can be observed in Figure 2, the testbed connector and/or testbed provider must initialize/implement all the functional components that are to be connected to the TEAGLE Gateway: the PTM, the ResourceAdapter and the actual Resource provided by the testbed.


Figure 8: PACAGrid testbed registered at the TEFIS portal

Then, the testbed provider, in registering its testbed at the TEFIS portal, must specify the URL of the relative testbed connector. For example, in Figure 8 the PACAGrid connector is exposed as a RESTful
the PACAGrid connector is exposed as a RESTful
TEFIS

-

D4.1.2


Resource management implementation

Page
29

of
35


service at the URL "http://tefis5-clone.inria.fr:8000/rest". The testbed connector URL is stored in the TEAGLE repository of the TEAGLE instance running at the TEFIS platform.

The TEAGLE repository also contains the resource specifications (or types) provided by each testbed and the corresponding resource instances. It is up to the testbed provider to register the resource specifications for a given testbed at the TEFIS portal (i.e. they will be registered, in turn, in the TEAGLE repository running at the TEFIS platform). Figure 9 shows the PACAGrid computational resource specification stored in the TEFIS platform.


Figure 9: PACAGrid resource specification provided by the testbed PACAGrid

After the testbed provider has registered its own testbed(s) and resource specification(s), the TEFIS platform stores the link between the testbed and the resource specification(s) of the resource types it manages. What is not defined yet is how the TEAGLE instance running at the TEFIS platform can select the correct testbed connector in order, for example, to create an instance of a given resource specification at a specific testbed.

This selection process is based on the value of the field "Name" of the resource specification (i.e. "computational_resource" in the example shown in Figure 9).

During the testbed resource provisioning (see Section 3.3.1) all the resource management operations, triggered by the TEFIS Experiment Manager, need to be routed to the PTM that is part of the testbed connector providing the specific resource specification/type. All those resource management operations
pass through the single instance of the TEAGLE Request Processor running at the TEFIS platform. The value of the field "Name" of the resource specification is used by the TEAGLE Request Processor to identify which testbed connector, and therefore which PTM, is responsible for the management of the specific resource specification/type and, hence, which testbed connector needs to be accessed in order to actually serve the provisioning of the given resources.
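A hypothetical sketch of this name-based dispatching is shown below. The mapping structure, function name and request path are assumptions made for illustration; only the resource type name and the PACAGrid connector URL appear in the text and figures above.

```python
# Illustrative sketch of name-based dispatching; the mapping, function name and
# request path are assumptions, not the actual TEAGLE Request Processor code.
import requests

# The repository links each resource specification "Name" to the connector (PTM)
# URL registered for the testbed that provides it.
CONNECTOR_BY_RESOURCE_NAME = {
    "computational_resource": "http://tefis5-clone.inria.fr:8000/rest",  # PACAGrid (Figure 8)
    # "kyateranetwork": "...", "qualityresource": "...", and so on.
}


def provision(resource_name: str, config: dict) -> dict:
    # Look up the connector responsible for this resource type and forward
    # the provisioning request to its PTM.
    connector_url = CONNECTOR_BY_RESOURCE_NAME[resource_name]
    response = requests.post(f"{connector_url}/add_resource",
                             json={"type": resource_name, "config": config})
    response.raise_for_status()
    return response.json()
```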

This process is enacted for a single testbed and is repeated for all the testbeds involved in an experiment.

The provisioning of the whole set of resource instances required by a given experiment is triggered by the Experiment Manager, as Figure 5 shows, and is performed before the experiment starts running. This means that, in the second version of the TEFIS platform, the unavailability of some resource instances at one of the testbeds involved in the experiment prevents the execution of the whole experiment, and not only of the task that depends on the unavailable resource instance.
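In simplified pseudo-code terms (an illustration only, not the actual Experiment Manager logic), this pre-execution provisioning is an all-or-nothing step:

```python
# Simplified illustration of the all-or-nothing provisioning described above;
# names are assumptions, not the actual Experiment Manager code.
def run_experiment(experiment, provision):
    """provision(requirement) returns a resource instance, or None if unavailable."""
    instances = []
    # Every resource instance is requested before any task of the workflow runs...
    for requirement in experiment.required_resources:
        instance = provision(requirement)
        if instance is None:
            # ...so a single unavailable instance at one testbed blocks the whole
            # experiment, not only the task that depends on it.
            raise RuntimeError(f"{requirement} unavailable: experiment not started")
        instances.append(instance)
    experiment.execute(instances)
```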

In fact, resource access in TEFIS is modelled as a best-effort quality of service. This approach ensures that access to the resources is shared equally and fairly among all users of the system, but it can result in long delays when competition between users forces the experiments to wait for resources to become available. Those delays are encountered many times if the experiments are complex (i.e. they involve multiple testbeds and they have execution loops, for example); for simple experiments, on the other hand, these delays are encountered only once.

Furthermore, an advanced resource provisioning model that avoids the delays resulting from the best-effort model and provides the allocation of the resources to a given user for a given period of time is not applicable in TEFIS, due to the existing resource provisioning policies of the testbeds currently registered at the TEFIS platform (see Sections 4.1, 4.2, 4.3, 4.4 and 4.5). For example, PACAGrid shares its resources among its direct users (i.e. the users that access PACAGrid without using TEFIS) and the TEFIS ones using a best-effort model.

We can, then, suppose that TEFIS lets its users define the number of PACAGrid computational resources to reserve and the range of time during which to have exclusive access to those resources. Since the PACAGrid resource provisioning policy is best-effort, it could happen that the PACAGrid direct users consume all the PACAGrid resources. Then, even if the TEFIS user had defined the range of time during which to have exclusive access to a subset of PACAGrid resources, that exclusive access could not be granted. Given this example we can state that:


The current implementation status of the testbed connectors does not allow us to completely address the issues of testbed resource management. The analysis [outlined in Sections 4.1, 4.2, 4.3, 4.4 and 4.5] can only be used to define future development guidelines. This means that the TEFIS platform and the testbed providers should come to an agreement about what the TEFIS resource provisioning policy should be, given the existing policies of each testbed registered at the TEFIS platform. It also means that the TEFIS user could expect to be able to reserve the resources for his experiment at the testbeds involved in it, but the TEFIS platform could not guarantee that, since the resource provisioning policy of the TEFIS platform strictly depends on the ones defined at the different testbeds.


5 Resource management installation

As stated above, the TEFIS Resource Manager prototype includes: the Teagle Request Processor, the Teagle Orchestration Engine, the Teagle Gateway, the Teagle Policy Engine and the ProActive Resourcing tool.

In the settings of the current installation of the prototype, the Teagle Gateway is not needed, so it is not included in the currently running platform. The same happens for the Teagle Policy Engine, since no specific policy definition has been planned in the scenarios considered for the first version of the TEFIS prototype.

The TEFIS Resource Manager prototype is currently installed in the TEFIS infrastructure operated by INRIA (cf. [11] for more details on the TEFIS infrastructure). All the involved components are available at the following URLs:

- Teagle Request Processor: http://tefis1.inria.fr/reqproc
- Teagle Orchestration Engine (its web interface): http://tefis1.inria.fr/teagle-site
- ProActive Resourcing tool: rmi://tefis2.inria.fr:1099/


Anyway, since all the components included in the TEFIS Resource Manager prototype are not supposed to be directly used by end users but are accessed by other TEFIS components, it is not easy to have a quick view of them.

An easier way to test the installed Teagle components is to use the VCT Tool to browse the repository and to define and request the booking of a new virtual customer testbed. The VCT Tool, to be used for testing purposes, is available at http://tefis1.inria.fr/web-start-devel/vct.jnlp (the access credentials to use are the ones used to access the TEFIS portal).



In order to check the installation of the ProActive Resourcing tool, one of the ProActive Resourcing clients can be used. The version 3.0.x clients, available for download at http://www.activeeon.com/community-downloads, are compatible with the version installed on the TEFIS infrastructure.



Dependencies: the Teagle Request Processor and the Orchestration Engine leverage the TEFIS Resource Directory, which is based on the Teagle Repository, available at:

- http://tefis1.inria.fr/repository (the Web interface)
- http://tefis1.inria.fr/repository/rest (the REST interface)

The Resource Directory deliverable is part of Work Package 3, so refer to the WP3 deliverables for more information on this component.
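As a quick smoke test of this dependency, the repository's REST interface can be queried with a plain HTTP GET. Only the base URL above comes from the document; the idea of probing the REST root and the use of the TEFIS portal credentials for basic authentication are assumptions.

```python
# Minimal smoke test of the Teagle Repository REST interface; probing the REST
# root and using basic authentication are assumptions, only the URL is given above.
import requests

REPO_REST_URL = "http://tefis1.inria.fr/repository/rest"


def repository_is_reachable(user: str, password: str) -> bool:
    # Any successful authenticated GET against the REST root is enough to
    # confirm that the repository is up and reachable.
    response = requests.get(REPO_REST_URL, auth=(user, password), timeout=10)
    return response.ok
```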



6 Future improvements

The proposed solution is to be validated during the second year review of the TEFIS platform and by running the eHealth use case. The demonstration at the Future Internet Week conference has already shown the effectiveness of the solution, and its potential has emerged.

Some work still needs to be done. In particular, the approach, common to all the testbeds registered at TEFIS, for managing the testbed resources and specifically the resource reservation should be defined. The constraints and limitations of the testbeds and of the tools they provide should be taken into account.

The most particular testbed is BOTNIA: there, each interaction is not automatic and is performed by an individual.

Meanwhile, even if the interaction between the TEFIS platform and the other testbeds is automatic, some issues still remain unsolved. For example, from Section 4 it is clear that, concerning the PACAGrid testbed, since we cannot "reserve" the resources, what we can do is let the user define the time and/or date when the execution should start. This does not imply that we can work around the best-effort policy applied by PACAGrid when running the ProActive jobs defined by the users. The execution policy is still best effort; the user can only choose when the execution should start, provided the needed amount of computational resources is available.


7 Conclusion

In this document we have described the implementation of the TEFIS Resource Manager component. The Resource Manager is the same as the one used in the first prototype of the TEFIS platform. The internal tests and the demonstration performed at the Future Internet Week conference have validated the implementation choices: the adoption of both TEAGLE and the ProActive Resourcing tool contained in the ProActive Parallel Suite is needed to provide the resource management features of the TEFIS platform, and only their cooperation allows us to fulfill the TEFIS platform requirements in terms of resource management.

The two utilization scenarios, which show how TEAGLE and the ProActive Resourcing tool are used during the testbed resource provisioning and the experiment execution respectively, remain the same in the second TEFIS prototype.

The most important issue to address is how to leverage the tools provided by the testbeds in order to manage their resources. In particular, the heterogeneity of the testbeds does not allow us to define a general solution to the problem. The initial analysis of the features and characteristics of each testbed (i.e., BOTNIA, ETICS, PACAGrid, PlanetLab and KyaTera) has been included in this document. It will be used to find the common approach to address the problem of the reservation of the resources provided by the testbeds. The analysis of the resource management in the IMS testbed still remains to be done.

The last part of the document is dedicated to the description of the prototype carried out, how it is installed on the INRIA machines, and how to get access to the installed TEAGLE components and to the ProActive Resourcing tool.


References

[1]. D4.2.2 Scheduler implementation
[2]. D5.3.1 Connector implementation and documentation for services facilities
[3]. D2.1.2 Global architecture and overall design
[4]. D3.3 User tools implementation
[5]. D3.2 Building blocks integrated into TEFIS portal
[6]. D4.2.1 Experiment and Workflow Scheduler prototype
[7]. D3.1.2 TEFIS Portal implementation
[8]. D4.1.1 Resource Manager prototype
[9]. D3.1.1 Teagle assessment and TEFIS portal specifications
[10]. http://proactive.inria.fr/release-doc/Resourcing/single_html/ProActiveResourceManagerManual.html
[11]. D2.3.2 Integration plan
[12]. D5.1.1 Generic and Specific Connectors specifications