
LCG ARDA project: status and plans

Massimo Lamanna / CERN
OSG Applications Meeting (SLAC), June 1st, 2005

Enabling Grids for E-sciencE (EGEE), INFSO-RI-508833
http://arda.cern.ch

Overview

- ARDA in a nutshell
- ARDA prototypes (the 4 experiments)
- ARDA feedback on middleware
- Middleware components on the development test bed
- Outlook and conclusions



The ARDA project

- ARDA is an LCG project
  - Its main activity is to enable LHC analysis on the grid
- ARDA contributes to EGEE
  - It uses the entire CERN NA4-HEP resource (NA4 = Applications)
- Interface with the new EGEE middleware (gLite)
  - By construction, ARDA uses the new middleware
  - Use the grid software as it matures
  - Verify the components in an analysis environment
  - Contribute to the experiments' frameworks (discussion, direct contributions, benchmarking, ...)
  - Users are needed here: physicists who need distributed computing to perform their analyses
  - Provide early and continuous feedback



ARDA prototype overview

Experiment | Main focus                                 | Basic prototype component/framework | Middleware
LHCb       | GUI to Grid                                | GANGA/DaVinci                       | gLite
ALICE      | Interactive analysis                       | PROOF/AliROOT                       | gLite
ATLAS      | High-level services                        | DIAL/Athena                         | gLite
CMS        | Explore/exploit native gLite functionality | ORCA                                | gLite


Ganga4

Embedded slides from Andrew Maier, 4th ARDA Workshop, March 2005:

- Internal architecture: Ganga4 is decomposed into 4 functional components
  (Application Manager, Job Manager, Remote Registry, Client)
  - These components also describe the components in a distributed model
  - Strategy: design each component so that it could be a separate service,
    but allow two or more components to be combined into a single service
- Ganga 3, the current release, is mainly a GUI application
  - New in Ganga 3 is the availability of a command-line interface
- Ganga 4, introduction: the current status is the result of an intensive
  8-week discussion
  - A lot of the information shown is work in progress
  - Certain aspects are not yet fully defined and agreed upon; changes are likely to happen
  - Up-to-date information can be found on the Ganga web page: http://cern.ch/ganga

- A major version
- Important contribution from the ARDA team
- Interesting concepts
- Note that GANGA is a joint ATLAS-LHCb project
- Contacts with CMS (exchange of ideas, code snippets, ...)


GANGA Workshop, 13-15 June

- Agenda: http://agenda.cern.ch/fullAgenda.php?ida=a052763
- At Imperial College London (organised by U. Egede)


ALICE prototype

- ROOT and PROOF
- ALICE provides:
  - the UI
  - the analysis application (AliROOT)
- The gLite grid middleware provides all the rest
- ARDA/ALICE is evolving the ALICE analysis system
- Diagram: UI shell -> application -> middleware, end to end


PROOF deployment (diagram): the user session connects to the PROOF master
server, which drives PROOF slaves at sites A, B and C.

- Demo based on a hybrid system using the 2004 prototype
- Demonstrated at Supercomputing 04 and in Den Haag



ARDA shell + C/C++ API

Architecture (diagram): the client application uses a C-API (POSIX) behind a
security wrapper (GSI, SSL, UUEnc); requests travel over gSOAP/TEXT to the
server-side security wrapper and on to the server service.

- A C++ access library for gLite has been developed by ARDA
  - High performance
  - Protocol quite proprietary...
- Essential for the ALICE prototype
- Generic enough for general use
- Using this API, grid commands have been added seamlessly to the standard
  shell (a minimal sketch of the idea follows)
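
The sketch below illustrates the "grid commands in your shell" idea in Python;
the real ARDA library is a C++ API, and the command set and behaviour here are
invented for illustration.

```python
# Toy illustration of adding grid commands to an interactive shell.
# The commands' behaviour is stubbed; the real ARDA shell wraps a C++ API.
import cmd

class ArdaShell(cmd.Cmd):
    """Shell mixing local work with (stubbed) grid commands."""
    prompt = "arda> "

    def do_ls(self, path):
        # Would query the grid file catalogue through the access library.
        print(f"[grid] listing catalogue entry: {path or '/'}")

    def do_submit(self, jdl_file):
        # Would hand a job description to the workload management system.
        print(f"[grid] submitting job described in {jdl_file}")

    def do_quit(self, _line):
        return True  # returning True ends the command loop

if __name__ == "__main__":
    ArdaShell().cmdloop()
```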




Current status

- Developed the gLite C++ API and API service
  - Provides a generic interface to any grid service
- The C++ API is integrated into ROOT
  - In the ROOT CVS
  - Job submission and job-status queries for batch analysis can be done from inside ROOT
- A bash interface for gLite commands with catalogue expansion has been developed
  - More powerful than the original shell
  - In use in ALICE
  - Considered a "generic" middleware contribution (essential for ALICE, interesting in general)
- First version of the interactive analysis prototype is ready
- The batch analysis model has been improved:
  - Submission and status queries are integrated into ROOT
  - Job splitting is based on XML query files (sketched below)
  - The application (AliROOT) reads files using xrootd without prestaging
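
The job-splitting step lends itself to a small illustration. Below is a hedged
Python sketch: the XML layout (<file lfn="..."/>) and the chunking policy are
assumptions for the example, not the actual ALICE/gLite query format.

```python
# Split an analysis task into sub-jobs from an XML query file.
# The XML schema and chunk size here are illustrative assumptions.
import xml.etree.ElementTree as ET

XML_QUERY = """\
<query>
  <file lfn="/alice/sim/run1/galice_001.root"/>
  <file lfn="/alice/sim/run1/galice_002.root"/>
  <file lfn="/alice/sim/run1/galice_003.root"/>
</query>
"""

def split_into_subjobs(xml_text, files_per_job=2):
    """Group the logical file names into per-sub-job input lists."""
    lfns = [f.get("lfn") for f in ET.fromstring(xml_text).findall("file")]
    return [lfns[i:i + files_per_job] for i in range(0, len(lfns), files_per_job)]

for n, inputs in enumerate(split_into_subjobs(XML_QUERY)):
    print(f"sub-job {n}: {inputs}")
```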


ATLAS/ARDA

- Main component: contribute to the DIAL evolution
  - gLite analysis server
- "Embedded in the experiment":
  - AMI tests and interaction
  - Production and CTB tools
  - Job submission (Athena jobs)
  - Integration of the gLite data management within Don Quijote
  - Active participation in several ATLAS reviews
- Plan to demonstrate GANGA + production service (coming soon)
- Benefit from the other experiments' prototypes
  - First look at interactivity/resiliency issues, e.g. the use of DIANE
- GANGA (principal component of the LHCb prototype, key component of the
  overall ATLAS strategy)

Tao-Sheng Chen, ASCC


Data management: Don Quijote

- Don Quijote locates and moves data across grid boundaries
- Diagram: a DQ client talks to one DQ server per grid flavour (gLite, LCG,
  Grid3, NorduGrid); each server fronts its own replica catalogue (RLS) and
  storage elements (SE)
- ARDA has connected gLite


Combined Test Beam

- Example: ATLAS TRT data analysis done by PNPI St. Petersburg
  (plot: number of straw hits per layer)
- Real data processed on gLite:
  - Standard Athena for testbeam
  - Data from CASTOR
  - Processed on a gLite worker node


DIANE


DIANE on gLite running Athena



ARDA/CMS

The pattern of the ARDA/CMS activity:
- Prototype (ASAP)
- Contributions to CMS-specific components
  - RefDB/PubDB
- Usage of components used by CMS
  - Notably MonAlisa
- Contributions to CMS-specific developments
  - PhySh



ARDA/CMS prototype

- RefDB redesign and PubDB
  - Taking part in the RefDB redesign
  - Developing the schema for PubDB and supervising the development of the
    first PubDB version
- Analysis prototype connected to MonAlisa
  - Tracking the progress of an analysis task is troublesome when the task is
    split into several (hundreds of) sub-jobs
  - The analysis prototype gives each sub-job a built-in 'identity' and the
    capability to report its progress to the MonAlisa system (see the sketch
    below)
  - A MonAlisa service receives and combines the progress reports of the
    single sub-jobs and publishes the overall progress of the whole task
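
To make the reporting idea concrete, here is a toy Python sketch of a sub-job
sending progress reports to a collector. MonAlisa's real transport and encoding
(ApMon) are not reproduced; the datagram format, address and field names are
invented for the example.

```python
# Toy progress reporter for one sub-job; the collector would aggregate the
# reports of all sub-jobs belonging to the same master task.
import json
import socket
import time

COLLECTOR = ("127.0.0.1", 8884)  # stand-in for the monitoring service address

def report_progress(task_id, subjob_id, events_done):
    """Send one progress datagram for this sub-job (fire and forget)."""
    msg = json.dumps({"task": task_id, "subjob": subjob_id,
                      "events": events_done, "ts": time.time()})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), COLLECTOR)

report_progress(task_id="qcd-pt80-120", subjob_id=17, events_done=2500)
```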


ARDA/CMS: PhySh

- PhySh = Physicist Shell
- ASAP is Python-based and uses XML-RPC calls for the client-server
  interaction, like Clarens and PhySh (a minimal XML-RPC sketch follows)
- In addition, to enable future integration, the analysis prototype has a CVS
  repository structured similarly to the PhySh project
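
A minimal sketch of that client-server style, using only Python's
standard-library XML-RPC modules; the service name, port and method are
invented for the example.

```python
# Minimal XML-RPC round trip in the style used by ASAP/PhySh/Clarens.
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

def job_status(job_id):
    """Server-side method: report a job's status (stubbed here)."""
    return {"job": job_id, "status": "Running"}

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(job_status)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote method is called as if it were local.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.job_status("asap-0042"))
```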



ARDA/CMS: ASAP

- CMS prototype (ASAP = Arda Support for cms Analysis Processing)
  - A first version of the CMS analysis prototype, capable of creating,
    submitting and monitoring CMS analysis jobs on the gLite middleware, had
    been developed by the end of 2004
  - Demonstrated at the CMS week in December 2004
  - The prototype was evolved to support both RB versions deployed at the CERN
    testbed (prototype task queue and gLite 1.0 WMS)
  - Currently, submission to both RBs is available and completely transparent
    to the users (same configuration file, same functionality)
  - Plan to implement a gLite job-submission handler for CRAB
- Users?
  - Starting from February 2005, CMS users began working on the testbed,
    submitting jobs through ASAP
  - Positive feedback; suggestions from the users are implemented asap
  - Plan to involve more users as soon as the preproduction farm is available
  - Plan to try in the prototype the new functionality provided by the WMS
    (DAGs, interactive jobs for testing purposes)


ASAP: starting point for users

- The user is familiar with the experiment application needed to perform the
  analysis (the ORCA application for CMS)
- The user knows how to create an executable able to run the analysis task
  (read selected data samples, compute derived quantities, take decisions,
  fill histograms, select events, etc.). The executable is based on the
  experiment framework
- The user has debugged the executable on small data samples, on a local
  computer or computing service (e.g. lxplus at CERN)
- How to go to larger samples, which can be located at any regional centre
  CMS-wide?
- The users should not be forced:
  - to change anything in the compiled code
  - to change anything in the ORCA configuration file
  - to know where the data samples are located




ASAP work and information flow: first scenario

Diagram: the ASAP UI reads the CMS catalogues (RefDB/PubDB), generates the
jobs (JDL) and submits them to gLite; it queries the job status, saves the
output, and keeps a job-monitoring directory. The job running on the worker
node reports to the monitoring system (MonAlisa) and publishes the output-file
location.

The user defines in the configuration file (an illustrative example follows):
- application and application version
- executable
- ORCA data cards
- data sample
- working directory
- CASTOR directory to save the output
- number of events to be processed
- number of events per job
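
An illustrative configuration covering the fields listed above; the option
names and values are assumptions for the example, not the real ASAP file
format.

```python
# Parse an ASAP-style task configuration and derive the number of sub-jobs.
import configparser

EXAMPLE_CFG = """\
[task]
application        = ORCA
version            = 8_7_1
executable         = myAnalysis
orca_data_cards    = myAnalysis.orcarc
data_sample        = qcd_pt80_120
working_directory  = /afs/cern.ch/user/j/jdoe/asap/qcd
castor_output_dir  = /castor/cern.ch/user/j/jdoe/qcd
events_total       = 200000
events_per_job     = 1000
"""

cfg = configparser.ConfigParser()
cfg.read_string(EXAMPLE_CFG)
task = cfg["task"]
n_jobs = -(-task.getint("events_total") // task.getint("events_per_job"))  # ceiling division
print(f"{task['application']} {task['version']}: {n_jobs} sub-jobs")
```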


ASAP work and information flow: second scenario

Diagram: as in the first scenario, but an ASAP job-monitoring service sits
between the UI and gLite. The user delegates credentials to it using MyProxy;
the service then takes over job submission, status checking, resubmission in
case of failure, fetching the results and storing them to CASTOR, and it
publishes the job status on the web. The configuration file contents (the
application and version, executable, ORCA data cards, data sample, working
directory, CASTOR output directory, and the total and per-job numbers of
events) are the same as in the first scenario.



CMS: using MonAlisa for user-job monitoring

- A single (master) job is submitted to gLite
- The JDL contains the job-splitting instructions
- The master job is split by gLite into sub-jobs
- Dynamic monitoring of the total number of events processed by all the
  sub-jobs belonging to the same master job
- Demo at Supercomputing 04


Job monitoring

- The ASAP Monitor


Merging the results


First CMS users on gLite

- A demo of the first working version of the prototype was given to the CMS
  community in December 2004
- ASAP is the first ARDA prototype to have migrated to gLite version 1.0
- The first CMS physicists started to work on the gLite testbed using ASAP at
  the beginning of February 2005
- Currently we support 5 users from different physics groups (we cannot allow
  more before moving to the preproduction farm):
  - 3 users: Higgs group
  - 1 user: SUSY group
  - 1 user: Standard Model
- Positive feedback from the users; we got many suggestions for improving the
  interface and functionality. Fruitful collaboration.
- ASAP has a support mailing list and a web page where we have started a user
  guide: http://arda-cms.cern.ch/asap/doc




H → 2τ → 2 jets analysis: background data available
(all signal events processed with ARDA)

Bkg. sample                                  | Events    | Processed with | σ x Br [mb]   | Kine presel.
qcd, pT = 50-80 GeV/c                        | 100K      | ARDA           | 2.08 x 10^-2  | 2.44 x 10^-4
qcd, pT = 80-120 GeV/c                       | 200K      | CRAB           | 2.94 x 10^-3  | 5.77 x 10^-3
qcd, pT = 120-170 GeV/c                      | 200K      | ARDA           | 5.03 x 10^-4  | 4.19 x 10^-2
qcd, pT > 170 GeV/c                          | 1M        |                | 1.33 x 10^-4  | 2.12 x 10^-1
tt, W → τν                                   | (garbled) | CRAB           | 5.76 x 10^-9  | 4.88 x 10^-2
Wt, W → τν                                   | (garbled) | ARDA           | 7.10 x 10^-10 | 1.38 x 10^-2
W+jet, W → τν                                | 400K      | CRAB           | 5.74 x 10^-7  | 2.16 x 10^-2
Z/γ* → ττ, 130 < m(ττ) < 300 GeV/c^2         | (garbled) | ARDA           | 1.24 x 10^-8  | 9.53 x 10^-2
Z/γ* → ττ, m(ττ) > 300 GeV/c^2               | 60K       | gross          | 6.22 x 10^-10 | 3.23 x 10^-1

(Three of the event counts are unreadable in the source.)

A. Nikitenko (CMS)


Higgs boson mass (M_ττ) reconstruction

σ(M_H) ~ σ(E_T^miss) / sin(φ_j1j2)

- The Higgs boson mass was reconstructed after basic off-line cuts:
  reco E_T(τ jet) > 60 GeV, E_T^miss > 40 GeV. The M_ττ evaluation is shown
  for the consecutive cuts p_τ > 0 GeV/c, p_ν > 0 GeV/c, Δφ_j1j2 < 175°.
- M_ττ and σ(M_ττ) are in very good agreement with the old results of
  CMS Note 2001/040, Table 3: M_ττ = 455 GeV/c^2, σ(M_ττ) = 77 GeV/c^2
  (ORCA4, Spring 2000 production).

A. Nikitenko (CMS)


ARDA ASAP

- The first users were able to process their data on gLite
  - The work of these pilot users can be regarded as a first round of
    validation of the gLite middleware and of the analysis prototypes
- The number of users should increase as soon as the preproduction system
  becomes available
  - Interest in having CPUs at the centres where the data sit (LHC Tier-1s)
- To enable user analysis on the Grid we will:
  - continue to work in close collaboration with the physics community and
    the gLite developers
  - ensure a good level of communication between them
  - provide constant feedback to the gLite development team
- Key factors for progress:
  - an increasing number of users
  - larger distributed systems
  - more middleware components




ARDA feedback (gLite middleware)

- 2004:
  - Prototype available (CERN + Madison, Wisconsin)
  - A lot of activity (4 experiment prototypes)
  - Main limitation: size
    - Experiment data available!
    - Just a handful of worker nodes
- 2005:
  - Coherent move to prepare a gLite package to be deployed on the
    pre-production service
  - ARDA contribution:
    - Mentoring and tutorials
    - Actual tests! A lot of testing during 05Q1
  - The PreProduction Service is about to start!




WMS monitor

- "Hello World!" jobs: one per minute since last February (a sketch of such a
  heartbeat follows)
- Logging & Bookkeeping info is published on the web to help the developers
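
A hedged sketch of such a heartbeat; the submit command below is a
placeholder, not the actual gLite CLI invocation.

```python
# Submit one trivial job per minute and log the outcome.
import subprocess
import time

SUBMIT_CMD = ["glite-submit-placeholder", "hello.jdl"]  # hypothetical command

def heartbeat(log_path="wms_heartbeat.log"):
    while True:
        try:
            rc = subprocess.run(SUBMIT_CMD, capture_output=True).returncode
        except FileNotFoundError:
            rc = "submit-command-not-installed"  # placeholder CLI absent
        with open(log_path, "a") as log:
            log.write(f"{time.ctime()} rc={rc}\n")
        time.sleep(60)  # one "Hello World!" job per minute

if __name__ == "__main__":
    heartbeat()
```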



Data management

- A central component, together with the WMS
- Early tests started in 2004
- Two main components:
  - gLiteIO (protocol + server to access the data)
  - FiReMan (file catalogue)
- The two components are not isolated: for example, gLiteIO uses the ACLs as
  recorded in FiReMan, and FiReMan exposes the physical location of files so
  that the WMS can optimise job submission...
- Both LFC and FiReMan offer large improvements over RLS
  - LFC is the most recent LCG2 catalogue
- Still some issues remain:
  - Scalability of FiReMan
  - Bulk entry missing in LFC
  - More work needed to understand performance and bottlenecks
  - Need to test some real use cases
- In general, the validation of DM tools takes time!




FiReMan performance: queries

Plot: query rate for an LFN; entries returned per second (0-1200) versus the
number of threads (5-50) for Fireman single queries and bulk queries of 1, 10,
100, 500, 1000 and 5000 entries. (A toy version of such a measurement follows.)
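
The measurement behind plots like this one can be sketched as follows; the
query() stub stands in for a real FiReMan or LFC client call, and the timing
is invented.

```python
# N client threads issue catalogue queries for a fixed interval;
# the aggregate rate is reported, as in the throughput-vs-threads plots.
import threading
import time

def query():
    """Stand-in for one catalogue lookup (replace with a real client call)."""
    time.sleep(0.001)  # pretend the server answers in about 1 ms

def measure(n_threads, duration=5.0):
    counts = [0] * n_threads
    stop = time.time() + duration

    def worker(i):
        while time.time() < stop:
            query()
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / duration  # queries per second

for n in (1, 5, 10, 20):
    print(f"{n:3d} threads: {measure(n):8.1f} queries/sec")
```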


FiReMan performance: queries

Plot: comparison with LFC; entries returned per second (0-1200) versus the
number of threads (1-100) for Fireman single-entry queries, Fireman bulk-100
queries, and LFC.


More data coming…


C. Munro (ARDA & Brunel Univ.) at ACAT 05


Summary of gLite usage and testing

- Info also available at http://lcg.web.cern.ch/lcg/PEB/arda/LCG_ARDA_Glite.htm
- gLite version 1
  - WMS
    - Continuous monitor available on the web (active since the 17th of February)
    - Concurrency tests
    - Usage with ATLAS and CMS jobs (using the Storage Index)
    - Good improvements observed
  - DMS (FiReMan + gLiteIO)
    - Early usage and feedback (since Nov 04) on functionality, performance and usability
    - Considerable improvement in performance/stability observed since then
    - Some of the tests were given to the development team for tuning; most
      were given to JRA1 for use in the testing suite
    - Performance/stability measurements: heavy-duty testing needed for real validation
- Contribution to the common testing effort to finalise gLite 1 (with SA1,
  JRA1 and NA4-testing)
  - Migration of certification tests into the certification test suite (LCG -> gLite)
  - Comparison between LFC (LCG) and FiReMan
  - Mini tutorial to facilitate the usage of gLite within the NA4 testing


Metadata services on the Grid

- gLite provided a prototype for the EGEE Biomed community (in 2004)
  - The ARDA (HEP) requirements were not all satisfied by that early version
- ARDA preparatory work:
  - Stress testing of the existing experiment metadata catalogues
  - The existing implementations turned out to share similar problems
- ARDA technology investigation:
  - The usage of extended file attributes in modern file systems (NTFS, NFS,
    EXT2/3, SCL3, ReiserFS, JFS, XFS) was analysed: a sound POSIX standard
    exists! (see the sketch below)
- Prototype activity in ARDA
- Discussions in LCG, EGEE and the UK GridPP Metadata group
- Synthesis: a new interface, to be maintained by EGEE, benefiting from the
  ARDA activity (tests and benchmarking of different databases, and direct
  collaboration with LHCb/GridPP)
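
The extended-attribute idea in practice, as a small Linux-only Python sketch;
the attribute keys are examples, and the filesystem must support user xattrs.

```python
# Attach key/value metadata directly to a file and read it back,
# the way a filesystem-backed metadata catalogue would.
import os

path = "arda_xattr_demo.dat"
with open(path, "wb") as f:
    f.write(b"event data placeholder")

os.setxattr(path, "user.experiment", b"CMS")  # Linux-only calls
os.setxattr(path, "user.events", b"200000")

for attr in os.listxattr(path):
    print(attr, "=", os.getxattr(path, attr).decode())
```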


ARDA implementation

- Prototype
  - Validate our ideas and expose a concrete example to interested parties
- Multiple back ends
  - Currently: Oracle, PostgreSQL, SQLite (a toy sketch follows)
- Dual front ends
  - TCP streaming: chosen for performance
  - SOAP: a formal requirement of EGEE; allows comparing SOAP with TCP streaming
- Also implemented as a standalone Python library
  - Data stored on the file system

Diagram: clients reach the metadata server through the SOAP or the TCP
streaming front end (or use the Python API directly, with data on the
filesystem); the server stores the metadata in PostgreSQL, Oracle or SQLite.
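
A toy version of the server core with one of the named back ends (stdlib
sqlite3); the schema and method names are invented, and the real prototype
also speaks Oracle and PostgreSQL.

```python
# Metadata store with a pluggable SQL backend, here SQLite in memory.
import sqlite3

class MetadataStore:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS entries "
            "(lfn TEXT, attr TEXT, value TEXT, PRIMARY KEY (lfn, attr))")

    def set_attr(self, lfn, attr, value):
        self.db.execute("INSERT OR REPLACE INTO entries VALUES (?, ?, ?)",
                        (lfn, attr, value))
        self.db.commit()

    def query(self, attr, value):
        """Yield the LFNs whose attribute matches the given value."""
        cur = self.db.execute(
            "SELECT lfn FROM entries WHERE attr = ? AND value = ?", (attr, value))
        for (lfn,) in cur:
            yield lfn

store = MetadataStore()
store.set_attr("/lhcb/prod/job42.dst", "status", "Done")
print(list(store.query("status", "Done")))
```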

Dual front end

- Text-based streaming protocol:
  - Data are streamed to the client in a single connection
  - Implementations: server in C++ (multiprocess); clients in C++, Java,
    Python, Perl and Ruby
- SOAP front end:
  - Most operations are SOAP calls
  - Queries are based on iterators:
    - A session is created; the initial chunk of data is returned together
      with a session token
    - For each subsequent request the client calls nextQuery() with the
      session token
    - The session is closed on end of data, when the client calls endQuery(),
      or on client timeout
  - Implementations: server with gSOAP (C++); clients tested against the WSDL
    with gSOAP (C++), ZSI (Python) and AXIS (Java)

Diagram: with the streaming protocol the server creates a DB cursor for the
query and streams all result chunks back over one connection; with SOAP plus
iterators the client repeatedly calls nextQuery() and receives one chunk per
call. (A sketch of the iterator-style session follows.)
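
The iterator-style session can be mimicked in a few lines; every name below is
illustrative (the real front end exposes these semantics through a WSDL).

```python
# Chunked query sessions: query() returns a token plus the first chunk,
# next_query() returns further chunks, end_query() closes the session.
import itertools
import uuid

class QueryServer:
    def __init__(self, rows, chunk=3):
        self.rows, self.chunk, self.sessions = rows, chunk, {}

    def query(self, predicate):
        it = iter([r for r in self.rows if predicate(r)])
        token = str(uuid.uuid4())  # session token handed to the client
        self.sessions[token] = it
        return token, list(itertools.islice(it, self.chunk))  # initial chunk

    def next_query(self, token):
        data = list(itertools.islice(self.sessions[token], self.chunk))
        if not data:
            self.end_query(token)  # end of data closes the session
        return data

    def end_query(self, token):
        self.sessions.pop(token, None)

server = QueryServer(rows=list(range(10)))
token, chunk = server.query(lambda r: r % 2 == 0)
while chunk:
    print("chunk:", chunk)
    chunk = server.next_query(token)
```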

More data coming...

- N. Santos (ARDA & Coimbra Univ.) at ACAT 05
- Protocol performance tests (no work done on the backend; switched 100 Mbit/s LAN)
- Language comparison:
  - TCP streaming shows similar performance in all languages
  - SOAP performance varies strongly with the toolkit
- Protocol comparison:
  - Keepalive improves performance significantly
  - In Java and Python, SOAP is several times slower than TCP streaming
- Scalability of the protocols (switched 100 Mbit/s LAN):
  - TCP streaming is 3x faster than gSOAP (with keepalive)
  - Poor performance without keepalive
  - Around 1,000 ops/sec (both gSOAP and TCP streaming)

Plots: execution time [s] for 1000 pings with C++ (gSOAP), Java (Axis) and
Python (ZSI) clients, for TCP streaming and SOAP with and without keepalive;
and average throughput [calls/sec] versus number of clients (1-100) for TCP
streaming and gSOAP with and without keepalive (without keepalive the client
ran out of sockets).

Current uses of the ARDA metadata prototype

- Evaluated by the LHCb bookkeeping
  - Migrated the bookkeeping metadata to the ARDA prototype (20M entries, 15 GB)
  - The feedback was valuable for improving the interface and fixing bugs
  - The interface was found to be complete
  - The ARDA prototype is showing good scalability
- Ganga (LHCb, ATLAS)
  - User analysis job management system
  - Stores the job status on the ARDA prototype
  - Highly dynamic metadata
- Discussed within the community
  - EGEE
  - UK GridPP Metadata group



ARDA workshops and related activities

- ARDA workshop (January 2004 at CERN; open)
- ARDA workshop (June 21-23, 2004 at CERN; by invitation)
  - "The first 30 days of EGEE middleware"
- NA4 meeting (15 July 2004 in Catania; EGEE open event)
- ARDA workshop (October 20-22, 2004 at CERN; open)
  - "LCG ARDA Prototypes"
  - Joint session with OSG
- NA4 meeting, 24 November 2004 (EGEE conference in Den Haag)
- ARDA workshop (March 7-8, 2005 at CERN; open)
- ARDA workshop (October 2005; together with the LCG Service Challenges)
- Wednesday afternoon meetings, started in 2005:
  - Presentations from experts and discussion (not necessarily by ARDA people)
- Material available from http://arda.cern.ch





Conclusions (1/3)

- ARDA has been set up to enable distributed HEP analysis on gLite
- Contacts have been established:
  - with the experiments
  - with the middleware developers
- Experiment activities are progressing rapidly:
  - Prototypes for ALICE, ATLAS, CMS and LHCb
  - Complementary aspects are studied
  - Good interaction with the experiments' environments
  - Always seeking users!!! People are more interested in physics than in
    middleware... we support them!
- 2005 will be the key year (gLite version 1 is becoming available on the
  pre-production service)



Conclusions (2/3)

- ARDA provides special feedback to the development team:
  - First use of components (e.g. the gLite prototype activity)
  - Trying to run real-life HEP applications
  - Dedicated studies offer complementary information
- Experiment-related ARDA activities produce elements of general use
  - A very important "by-product"
  - Examples:
    - Shell access (originally developed in ALICE/ARDA)
    - Metadata catalogue (proposed and under test in LHCb/ARDA)
    - (Pseudo-)interactivity experience (something in/from all experiments)


Conclusions (3/3)

- ARDA is a privileged observatory from which to follow, contribute to and
  influence the evolution of HEP analysis
- Analysis prototypes are a good idea!
  - Technically, they complement the data challenges' experience
  - Key point: these systems are exposed to users
- The approach of 4 parallel lines is not too inefficient:
  - Contributions in the experiments from day zero
  - Difficult environment: commonality cannot be imposed...
- We could do better at keeping a good connection with OSG. How?


Outlook

- Commonality is a very tempting concept, indeed...
  - Sometimes a bit fuzzy, maybe...
- Maybe it is becoming possible (and valuable)...
  - A lot of experience in the whole community!
  - Baseline services ideas
  - LHC schedule: physics is coming!
- Maybe it is emerging... (the examples are not exhaustive)
  - Interactivity is a genuine requirement: e.g. PROOF and DIANE
  - Portals and toolkits for the users to build applications on top of the
    computing infrastructure: e.g. GANGA
  - Metadata/workflow systems open to the users: needed!
    - This area has yet to be "diagonalised"
  - Monitoring and discovery services open to users: e.g. MonAlisa in ASAP
    - Strong preference for an "a posteriori" approach
- All experiments still need their own systems...
  - Since it is really needed, we* should do it
  - No doubt that technically we* can
- We* = the HEP community in collaboration with the middleware experts


People

- Massimo Lamanna
- Frank Harris (EGEE NA4)

- Birger Koblitz, Andrey Demichev, Viktor Pose, Victor Galaktionov
- ALICE: Derek Feichtinger, Andreas Peters
- ATLAS: Hurng-Chun Lee, Dietrich Liko, Frederik Orellana, Tao-Sheng Chen
- CMS: Julia Andreeva, Juha Herrala, Alex Berejnoi
- LHCb: Andrew Maier, Kuba Moscicki, Wei-Long Ueng

- 2 PhD students:
  - Craig Munro (Brunel Univ.): distributed analysis within CMS, working mainly with Julia
  - Nuno Santos (Coimbra Univ.): metadata and resilient computing, working mainly with Birger
- Catalin Cirstoiu and Slawomir Biegluk (short-term LCG visitors)

Good collaboration with the EGEE/LCG Russian institutes and with ASCC Taipei