Trigger and DAQ of the LHC experiments upgrade


Upgrade of Trigger and Data Acquisition Systems for the LHC Experiments

Nicoletta Garelli (CERN)

XXIII International Symposium on Nuclear Electronics and Computing, 12-19 September 2011, Varna, Bulgaria

Acknowledgment & Disclaimer

- I would like to thank David Francis, Benedetto Gorini, Reiner Hauser, Frans Meijers, Andrea Negri, Niko Neufeld, Stefano Mersi, Stefan Stancu and all other colleagues for answering my questions and sharing ideas.
- My apologies for any mistakes, misinterpretations and misunderstandings.
- This presentation is far from being a complete review of all the trigger and data acquisition activities foreseen by the LHC experiments from 2013 to 2022. I will focus on the upgrade plans of ATLAS, CMS and LHCb only.

Outline

- Large Hadron Collider (LHC): today, design, beyond design
- LHC experiments: design; trigger & data acquisition systems; upgrade challenges
- Upgrade plans: ATLAS, CMS, LHCb

LHC: a Discovery Machine

Goal: explore the TeV energy scale to find the Higgs boson & New Physics beyond the Standard Model.

How: the Large Hadron Collider (LHC) at CERN, with the possibility of a steady increase of luminosity, giving a large discovery range.

[Figure: the CERN accelerator chain (PS, SPS, LHC) with the four experiments ALICE, ATLAS, CMS and LHCb]

LHC Project in brief


LEP tunnel: 27 km Ø,

~100 m
underground


pp

collisions, center of mass E =
14 TeV



4 interaction points


4 big detectors


Particles travel in bunches at ~
c


Bunches

of O(10
11
) particles each



Bunch Crossing frequency: 40 MHz


Superconducting magnets cooled to 1.9 K
with 140 tons of liquid He. (Magnetic
field strength ~ 8.4 T)


Energy of one beam = 362 MJ (300x
Tevatron
)

LHC: Today, Design, Beyond Design

| Parameter                              | Current Status           | Design (1)           | Beyond Design (2)      |
|----------------------------------------|--------------------------|----------------------|------------------------|
| beam energy (TeV)                      | 3.5 (1/2 design)         | 7 (7x Tevatron)      | -                      |
| bunch spacing (ns)                     | 50 (1/2 design)          | 25                   | -                      |
| colliding bunches n_b                  | 1331 (~1/2 design)       | 2808                 | -                      |
| peak luminosity (cm^-2 s^-1)           | 3.1x10^33 (~30% design)  | 10^34 (30x Tevatron) | 5x10^34 (levelled)     |
| bunch intensity, protons/bunch (10^11) | 1.25 (> design)          | 1.15                 | 1.7 - 3.4 (with 50 ns) |
| β* (m)                                 | 1 (~1/2 design)          | 0.55                 | 0.15                   |

β* = beam envelope at the Interaction Point (IP), determined by the magnet arrangement & powering. Smaller β* = higher luminosity.

(1) Interventions needed to reach design conditions.
(2) The LHC can go further: higher luminosity.
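As a rough cross-check of the table, the peak luminosity follows from the standard formula L = N^2 n_b f_rev γ F / (4π ε_n β*). A minimal sketch, assuming nominal values for the quantities not listed in the table (revolution frequency 11245 Hz, normalised emittance 3.75 μm, geometric reduction factor F ≈ 0.84):

```python
import math

def peak_luminosity(n_b, N, beta_star_m,
                    f_rev=11245.0,   # revolution frequency [Hz] (assumed nominal value)
                    eps_n=3.75e-6,   # normalised emittance [m rad] (assumed nominal value)
                    gamma=7461.0,    # relativistic gamma for 7 TeV protons (assumed)
                    F=0.84):         # geometric reduction from the crossing angle (assumed)
    """Approximate peak luminosity in cm^-2 s^-1 for round beams."""
    L_m2 = (N**2 * n_b * f_rev * gamma * F) / (4.0 * math.pi * eps_n * beta_star_m)
    return L_m2 * 1e-4  # convert m^-2 s^-1 -> cm^-2 s^-1

# Design parameters from the table: n_b = 2808, N = 1.15e11, beta* = 0.55 m
print(f"{peak_luminosity(2808, 1.15e11, 0.55):.2e}")  # ~1e34, as in the table
```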

LHC Schedule Model

[Yearly schedule chart: hardware commissioning (HWC), physics periods interleaved with Technical Stops (TS) and Machine Development (MD), one ion run, and a year-end TS]

Yearly schedule:
- operating at unexplored conditions
- long way to reach design performance
- need for commissioning & testing periods
- one 2-month Technical Stop (TS); best period for power saving: Dec-Jan
- after every ~2 months of physics, a shorter TS followed by a Machine Development (MD) period is necessary
- 1 month of heavy-ion run (different physics program)
- every 3 years, a shutdown of at least 1 year is needed for major component upgrades

… and the experiments?
- profit from the LHC TS & shutdown periods for improvements & replacements
- the LHC drives the schedule: the experiments' schedule has to be flexible

LHC: Towards Design Conditions

Don't forget that life is not always easy:
- Single Event Effects due to radiation
- Unidentified Falling Objects (UFO): fast beam losses

What the LHC can do as it is today:
- with 50 ns spacing: n_b = 1380, bunch intensity = 1.7x10^11, β* = 1.0 m → L = 5x10^33 cm^-2 s^-1 at 3.5 TeV
- with 25 ns spacing: n_b = 2808, bunch intensity = 1.2x10^11, β* = 1.0 m → L = 4x10^33 cm^-2 s^-1 at 3.5 TeV

Not possible to reach design performance today:
1) Beam energy: the joints between superconducting magnets limit it to 3.5 TeV/beam
2) Beam intensity: collimation limits the luminosity to ~5x10^33 cm^-2 s^-1 with E = 3.5 TeV/beam

LHC Draft Schedule: Consolidation

2013 CONSOLIDATION → E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1
- fully repair the joints between superconducting magnets
- install magnet clamps

Background:
- an electrical fault in the bus between superconducting magnets caused the 19.9.2008 accident → E limited to 3.5 TeV
- after the joint repairs, 7 TeV will be reached after dipole training: O(100) quenches/sector
- O(month) of hardware commissioning

LHC Upgrade Draft Schedule: Phases 1 & 2

2013 CONSOLIDATION → E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1
- fully repair the joints between superconducting magnets; install magnet clamps

2017 PHASE 1 → E = 7 TeV, L = 2x10^34 cm^-2 s^-1
- collimation upgrade (a new collimation system is necessary for protection from the high losses at higher luminosity)
- injector upgrade (Linac4)

2021 PHASE 2 (the Super-LHC) → E = 7 TeV, L = 5x10^34 cm^-2 s^-1
- new, bigger quadrupoles → smaller β*
- new RF crab cavities
- 3000 fb^-1 by the end of 2030: x10^3 wrt today

LHC Experiments Design

LHC environment (design):
- σ_pp (inelastic) ~ 70 mb → event rate = 7x10^8 Hz
- Bunch Crossing (BC) every 25 ns (40 MHz) → ~22 interactions every "active" BC
- 1 interesting collision is rare & always hidden within ~22 minimum-bias collisions = pile-up

Stringent requirements:
- fast electronics response, to resolve individual bunch crossings
- high granularity (= many electronics channels), to avoid a pile-up event(1) going into the same detector element as the interesting event(1)
- radiation resistance

(1) Event = snapshot of the values of all front-end electronics elements containing particle signals from a single BC
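The event rate and the ~22 pile-up interactions quoted above are simple arithmetic. A minimal sketch using the numbers from the slide (the 2808/3564 filled-bunch fraction is an assumed nominal filling scheme, used only to get the per-"active"-crossing figure):

```python
sigma_inel = 70e-27        # inelastic pp cross-section [cm^2] (70 mb, from the slide)
L_design   = 1e34          # design luminosity [cm^-2 s^-1]
f_bc       = 40e6          # bunch-crossing frequency [Hz]
filled     = 2808 / 3564   # fraction of filled bunch slots (assumed nominal filling scheme)

rate   = sigma_inel * L_design      # ~7e8 interactions per second
pileup = rate / (f_bc * filled)     # ~22 interactions per "active" crossing
print(f"rate = {rate:.1e} Hz, pile-up = {pileup:.0f}")
```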

LHC Upgrade: Effects on Experiments

Challenge for the experiments: the LHC luminosity will be x10 higher than today after the second long shutdown (Phase 1).
[Timeline: long shutdowns foreseen in 2013-2014, 2017-2018, 2021-2022]

Higher peak luminosity → higher pile-up:
- more complex trigger selection
- higher detector granularity
- radiation-hard electronics

Higher accumulated luminosity → radiation damage: need to replace components:
- sensors: the Inner Tracker in particular (~200 MCHF/experiment)
- electronics? not guaranteed after 10 years of use

Interesting Physics at LHC

- σ_tot ~ 100 mb: total (elastic, diffractive, inelastic) cross-section of proton-proton collisions
- σ_H (m_H = 500 GeV) ~ 1 pb: cross-section of SM Higgs boson production
- σ_H(500 GeV) / σ_tot ~ 1 pb / 100 mb = 10^-11

Find a needle (e.g. Higgs → 4μ) …
- DESIGN: … in a haystack of ~22 minimum-bias interactions
- BEYOND DESIGN: a bigger haystack, ~100 minimum-bias interactions

(Fluegge, G. 1994, Future Research in High Energy Physics, Tech. rep.)

Trigger & Data Acquisition (DAQ) Systems

At LHC nominal conditions:
- O(10) TB/s of data produced, mostly useless (minimum-bias events): impossible to store it all
- Trigger & DAQ: select & store interesting data for analysis at O(100) MB/s

- TRIGGER: select the interesting events (the Higgs boson in the haystack)
- DAQ: convey the data to local mass storage
- Network: the backbone; large Ethernet networks with O(10^3) Gbit & 10-Gbit ports, O(10^2) switches
- Until now: high efficiency (>90%)

Data flow: detectors @ 40 MHz → O(10) TB/s → Trigger & DAQ → O(100) MB/s → local storage → CERN data storage
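These figures imply a huge online rejection. A minimal sketch of the reduction factors, using only the order-of-magnitude numbers from the slide:

```python
input_rate  = 40e6    # bunch-crossing rate [Hz]
output_rate = 200.0   # typical rate written to storage [Hz], O(100) Hz
data_in     = 10e12   # produced data [B/s], O(10) TB/s
data_out    = 100e6   # stored data [B/s], O(100) MB/s

print(f"rate rejection ~ 1 in {input_rate / output_rate:.0e}")  # ~1 in 2e5
print(f"data reduction ~ 1 in {data_in / data_out:.0e}")        # ~1 in 1e5
```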

Comparing LHC Experiments Today

| Experiment | Read-out channels | Trigger levels | Read-out links (type, speed, #) | Level 0-1-2 rate (Hz) | Event size (B) | HLT out (MB/s) |
|------------|-------------------|----------------|---------------------------------|-----------------------|----------------|----------------|
| ATLAS      | ~90x10^6          | 3              | S-link, 160 Mb/s, ~1600         | L1 ~10^5, L2 ~3x10^3  | 1.5x10^6       | 300            |
| CMS        | ~90x10^6          | 2              | S-link64, 400 Mb/s, ~500        | L1 ~10^5              | 10^6           | 600            |
| LHCb       | ~1x10^6           | 2              | G-link, 200 Mb/s, ~400          | L0 ~10^6              | 5.5x10^4       | 70             |

- ATLAS: partial & on-demand read-out @ L2
- CMS & LHCb: read out everything @ L1/L0
- Similar read-out links
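The table numbers translate into approximate event-building bandwidths (accept rate x event size). A minimal sketch, noting that ATLAS only builds the full event after the L2 accept (~3 kHz), while CMS and LHCb build at the L1/L0 rate:

```python
experiments = {
    # name: (event-building rate [Hz], event size [B])
    "ATLAS": (3e3, 1.5e6),   # built at the L2 accept rate
    "CMS":   (1e5, 1.0e6),   # built at the L1 accept rate
    "LHCb":  (1e6, 5.5e4),   # built at the L0 accept rate
}
for name, (rate, size) in experiments.items():
    print(f"{name}: ~{rate * size / 1e9:.1f} GB/s event-building bandwidth")
# ATLAS ~4.5 GB/s (matching the figure quoted below), CMS ~100 GB/s, LHCb ~55 GB/s
```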

ATLAS Trigger & DAQ (today)

ATLAS event: ~1.5 MB every 25 ns (40 MHz).

- Level 1 (custom hardware, latency < 2.5 μs): trigger on calorimeter & muon detector data, defines Regions of Interest (RoI); L1 accept at 75 (100) kHz → detector read-out (ROD → Read-Out System) at ~112 GB/s
- Level 2 (~40 ms): requests only the RoI data (~2% of the event) from the Read-Out System over the Data Collection network; L2 accept at ~3 kHz
- Event Builder (SubFarm Input): builds the full event at ~4.5 GB/s
- Event Filter (~4 s): selection on the full event over the Event Filter network; EF accept at ~200 Hz → SubFarm Output → CERN data storage at ~300 MB/s
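The rates and latencies above also fix the rough size of the processing farms: by Little's law, the number of events in flight (roughly the number of busy cores) is rate x mean processing time. A minimal sketch with the figures from this slide:

```python
def cores_busy(input_rate_hz, mean_time_s):
    """Little's law: average number of events being processed concurrently."""
    return input_rate_hz * mean_time_s

print(f"L2:           ~{cores_busy(75e3, 40e-3):.0f} cores busy")  # 75 kHz * 40 ms = 3000
print(f"Event Filter: ~{cores_busy(3e3, 4.0):.0f} cores busy")     # 3 kHz  * 4 s   = 12000
```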

CMS Trigger & DAQ (today)

- Level-1 trigger hardware: custom electronics reading the front-end pipelines, O(μs) latency; rate from 40 MHz to 100 kHz
- Event building from the read-out buffers, in two stages through switching networks:
  - 1st stage based on Myrinet technology: FED-builder
  - 2nd stage based on TCP/IP over Gigabit Ethernet: RU-builder
  - 8 independent, identical DAQ slices; 100 GB/s throughput
- HLT: PC farm, event-driven, O(s) per event; rate from 100 kHz to O(100) Hz → mass storage

Experiments Challenges Beyond Design

- Beyond design: a new working point has to be established
- Higher pile-up: increased pattern-recognition problems
- Impossible to change the calorimeter detectors (budget, time, manpower)
- Necessary to change the inner tracker: the current one will be damaged by radiation, and more granularity is needed

Level-1 at higher pile-up:
- must still select all the interesting physics
- a simple increase of the p_T thresholds is not possible: a lot of physics would be lost
- more sophisticated decision criteria are needed → move software algorithms into electronics
- muon chambers: better resolution required for the trigger
- add inner-tracker information to Level-1
- longer Level-1 decision time → longer latency
- more complex reconstruction in the HLT → more computing power required

DAQ Challenges

Problem: which read-out? At which bandwidth? With which electronics?

- Higher detector granularity → higher number of read-out channels → increased event size
- Longer latency for Level-1 decisions → possible changes in all sub-detector read-out systems
- Larger amount of data to be treated by the network & DAQ → higher data rate:
  - network upgrade to accommodate the higher bandwidth needs
  - need for increased local data storage
- Possibly a higher HLT output rate, if increased global data storage (Grid) allows it

As of Today: Difficult Planning

- Hard to plan while maintaining running experiments and with an uncertain schedule
- Upgrade plans driven by:
  - Trigger: guarantee a good & flexible selection
  - DAQ: guarantee high data-taking efficiency
- New technologies might be needed:
  - Trigger: new L1 trigger & more powerful HLT
  - DAQ: read-out links, electronics & network
- To be considered:
  - replacing some components may damage others
  - a new architecture must be compatible with the existing components in case of a partial upgrade

ATLAS: A Toroidal LHC ApparatuS

ATLAS Draft Schedule: Consolidation

2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1):
- TDAQ farms & networks consolidation
- sub-detector read-out upgrades to enable a Level-1 output of 100 kHz
- the current innermost pixel layer will have significant radiation damage and largely reduced detector efficiency → replacement needed by 2015: the Insertable B-Layer (IBL), built around a new beam pipe & slipped inside the current detector

Evolution of the TDAQ Farm

Today: architecture with many farms & network domains:
- CPU & network resource balancing across 3 different farms (L2, EB, EF) requires expertise
- 2 trigger-steering instances (L2, EF)
- 2 separate networks (DC & EF)
- huge configuration

Proposal: merge L2, EB and EF into a single homogeneous system:
- each node can perform all the HLT selection steps: L2 processing & data collection based on RoIs, event building, and event-filter processing on the full event
- automatic system balance
- a single HLT instance

TDAQ Network Proposal

Current network architecture:
- system working well
- EF core router: a single point of failure
- new technologies now available
- 2013: replacement of the core routers is mandatory (lifetime exceeded)

Proposal: merge the DC & EF networks:
- OK with new chassis; some cost reduction
- perfect for the TDAQ farm evolution (mixing functionalities)
- would reduce the scaling potential with the current TDAQ farm configuration

[Diagrams: current architecture with separate DC and EF networks (ROS, SV, XPU, EF, SFI, SFO nodes) vs. the merged network with uniform processing units (PU)]

ATLAS Upgrade Draft Schedule: Phase 1

2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1):
- TDAQ farm & network consolidation
- L1 @ 100 kHz
- IBL

2017 PHASE 1 (E = 7 TeV, L = 2x10^34 cm^-2 s^-1): Level-1 upgrade to cope with the pile-up after Phase 1:
- new muon detector: Small Wheel (SW)
- increased calorimeter granularity provided to the trigger
- Level-1 topological trigger
- Fast Track Processor (FTK)

New Muon Small Wheel (SW)

- The performance of the muon precision chambers (CSC & MDT) deteriorates → need to replace them with a better detector
- Exploit the new SW to also provide trigger information:
  - today: 3 trigger stations in the barrel (RPC) & end-caps (TGC)
  - new SW = 4th trigger station → reduce fakes, improve the p_T resolution
  - Level-1 track segments with 1 mrad resolution
- Micromegas detectors: a new technology which could be used

L1 Topological Trigger

- Proposal: additional electronics to provide a Level-1 trigger based on topological criteria, to keep it efficient at high luminosities: Δφ, Δη, angular distance, back-to-back, not back-to-back, mass
  - di-electron: low lepton p_T in Z, ZZ/ZW, WW, H→WW/ZZ/ττ and multi-lepton SUSY modes
  - jet topology, muon isolation, …
- New topological trigger processor with input from the calorimeter & muon detectors, connected to a new Central Trigger Processor
- Consequence: longer latency; common tools must be developed for reconstructing the topology in both the muon & calorimeter detectors
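The topological quantities listed above (Δφ, Δη, angular distance, invariant mass) are simple functions of the candidate kinematics. A minimal, purely illustrative sketch of how such criteria could be evaluated for a pair of trigger objects (not the actual firmware algorithms; the back-to-back threshold is an assumed example value):

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [-pi, pi]."""
    return (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi

def topology(pt1, eta1, phi1, pt2, eta2, phi2):
    deta = eta1 - eta2
    dphi = delta_phi(phi1, phi2)
    dr = math.hypot(deta, dphi)                       # angular distance
    # invariant mass in the massless approximation
    mass = math.sqrt(2 * pt1 * pt2 * (math.cosh(deta) - math.cos(dphi)))
    back_to_back = abs(dphi) > 2.7                    # example threshold (assumed)
    return deta, dphi, dr, mass, back_to_back

# Example: two electron candidates (pt [GeV], eta, phi)
print(topology(25.0, 0.4, 0.1, 22.0, -0.3, 3.0))
```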


Fast Track Processor (FTK)

- Introduce a highly parallel processor:
  - for the full Si-tracker
  - provides tracking for all L1-accepted events within O(25 μs)
- Reconstructs tracks > 1 GeV with 90% efficiency compared to offline:
  - track isolation for lepton selection
  - fast identification of b & τ jets
  - primary-vertex identification
- Track reconstruction has 2 time-consuming stages:
  - pattern recognition → associative memory
  - track fitting → FPGA
- Runs after L1, before L2; the HLT selection software interfaces to the FTK output (tracks are available earlier)

[Figure: the hit pattern from reconstruction is compared with the pre-stored patterns; good matches between pre-stored & recorded patterns are kept, the remaining patterns are discarded]
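The associative-memory step amounts to comparing the coarse hit pattern of each event against a large bank of pre-stored patterns ("roads") in parallel. A minimal, purely illustrative Python sketch of the idea (the real FTK does this in custom hardware; the patterns and hits below are invented):

```python
# Each pattern is a tuple of coarse hit positions, one per silicon layer.
pattern_bank = {
    (3, 5, 7, 9),   # pre-stored pattern ("road") from training/simulation
    (2, 4, 6, 8),
    (1, 1, 2, 3),
}

def match(event_hits, bank):
    """Return the pre-stored patterns compatible with the recorded hits."""
    roads = []
    for pattern in bank:
        # A road fires if every layer has a recorded hit in the expected coarse bin.
        if all(p in layer_hits for p, layer_hits in zip(pattern, event_hits)):
            roads.append(pattern)
    return roads  # only these roads would be passed on to the FPGA track fit

# Recorded hits per layer (coarse bins), e.g. from one L1-accepted event:
event = [{3, 4}, {5}, {6, 7}, {9}]
print(match(event, pattern_bank))   # -> [(3, 5, 7, 9)]
```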

ATLAS Upgrade Draft Schedule: Phase 2

2013 PHASE 0 (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1)
2017 PHASE 1 (E = 7 TeV, L = 2x10^34 cm^-2 s^-1): FTK, L1 topological trigger
2021 PHASE 2 (E = 7 TeV, L = 5x10^34 cm^-2 s^-1): reduce the heterogeneity in the TDAQ farms & networks, plus:

1. Full digital read-out of the calorimeter (data & trigger):
   - faster data transmission
   - trigger access to the full calorimeter resolution (finer clusters and better electron identification)
   - proposed solution: fast, radiation-tolerant 10 Gb/s links
2. Precision muon chambers used in the trigger logic (dismantle as little as possible)
3. L1 track trigger

Improve the L1 Muon Trigger (Phase 2)

Current muon trigger:
- the trigger logic assumes that tracks come from the interaction point (IP)
- p_T resolution limited by IP smearing (Phase 2: 50 mm → ~150 mm)
- the MDT resolution is 100 times better than that of the trigger chambers (RPC)

Proposal: use the precision chambers (MDT) in the trigger logic:
- reduce rates in the barrel
- no need for a vertex assumption
- improve the selectivity for high-p_T muons

Current limitation: the MDT read-out is serial & asynchronous.
Phase 2: improve the MDT electronics performance (solve the latency problem). Fast MDT read-out options:
- seeded/tagged method: use information from the trigger chambers to define an RoI & only consider the small number of MDT tubes which fall into the RoI; longer latency
- unseeded/untagged method: stand-alone track finding in the MDT chambers; larger bandwidth required to transfer the MDT hit pattern



Track Trigger (Phase 2)

- Possible to introduce an L1 track trigger:
  - keep the L1 rate @ 100 kHz
  - combine with the calorimeter to improve the electron selection
  - correlate muons with tracks in the ID & reduce fake tracks
  - possible L1 b-tagging
- L1 track trigger, self-seeded:
  - use high-p_T tracks as seeds
  - needs fast communication to form coincidences between layers
  - latency of ~3 μs
- L1 track trigger, RoI-seeded:
  - needs a new L0 trigger to select RoIs at L1
  - long L1 latency, ~10 μs

New Inner Detector:
- silicon sensors only: better resolution, reduced occupancy
- more pixel layers for b-tagging

[Figure: multi-jet event at 7 TeV]

CMS: The Compact Muon Solenoid

CMS Consolidation Phase

2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1)

Trigger & DAQ consolidation:
- x3 increase of the HLT farm processing power
- replace the hardware for the online DB

Muons: the CMS design foresees space for a 4th layer of forward muon chambers (CSC & RPC):
- better trigger robustness in 1.2 < |η| < 1.8
- preserve the low-p_T threshold

CMS Upgrade Draft Schedule: Phase 1

2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1): Trigger & DAQ consolidation; 4th layer of muon detectors

2017 PHASE 1 (E = 7 TeV, L = 2x10^34 cm^-2 s^-1):
- new pixel detector
- upgrade of the hadron calorimeter (HCAL): silicon photomultipliers, finer segmentation of the read-out in depth
- new trigger system
- Event Builder & HLT farm upgrade

Phase-1 requirements & plans, as for ATLAS:
- radiation damage → replace the innermost silicon tracker
- maintain Level-1 < 100 kHz, low latency, good selection → tracking information @ L1 + more granularity in the calorimeters
- DAQ evolution to cope with the new design

CMS New Pixel Detector (Phase 1)

- New pixel detector (4 barrel layers, 3 end-cap disks)
- Need for replacement:
  - radiation damage (the innermost layer might be replaced earlier)
  - the read-out chips are only just adequate for L = 10^34 cm^-2 s^-1, with 4% dynamic data loss due to read-out latency & buffers → to be improved
- Goal:
  - better tracking performance
  - improved b-tagging capabilities
  - reduced material, using a new CO2 cooling system instead of C6F14

CMS New Trigger System (Phase 1)

- Introduce a regional calorimeter trigger:
  - use the full granularity for internal processing
  - more sophisticated clustering & isolation algorithms to handle higher rates and more complex events
  - move from custom ASICs to powerful modern FPGAs with huge processing & I/O capability, to implement more sophisticated algorithms
- New infrastructure based on μTCA for increased bandwidth, maintainability and flexibility (ATCA = Advanced Telecommunications Computing Architecture: dramatic increase in computing power & I/O)
- Muon trigger upgrade to handle the additional channels & faster FPGAs

CMS Upgrade Draft Schedule: Phase 2

2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10^34 cm^-2 s^-1): Trigger & DAQ consolidation; 4th layer of muon detectors
2017 PHASE 1 (E = 7 TeV, L = 2x10^34 cm^-2 s^-1): new pixel detector; HCAL upgrade (silicon photomultipliers); new trigger system; Event Builder & HLT farm upgrade

2021 PHASE 2 (E = 7 TeV, L = 5x10^34 cm^-2 s^-1):
- install the new tracking system, with a track trigger
- major consolidation of the electronics systems
- calorimeter end-caps
- DAQ system upgrade

CMS New Tracker (Phase 2)

- R&D projects for new sensors, a new front-end, high-speed links (a customized version of the GBT), and the tracker geometry arrangement
- >200M pixels, >100M strips
- Level-1 @ high luminosity → need for L1 tracking

Delivering information for Level-1:
- impossible to use all channels for individual triggers
- Idea: exploit the strong 3.8 T magnetic field and design modules able to reject signals from low-p_T particles
- Different discrimination proposals to reject hits from low-p_T tracks make data transmission at 40 MHz feasible:
  1. within a single sensor, based on the cluster width
  2. correlating signals from stacked sensor pairs

[Figure: pass/fail illustrations of the two concepts, with sensor separations of ~100 μm and ~1 mm]
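Concept 2 works because in the 3.8 T field the offset between the hits in two closely stacked sensors grows as the track p_T falls, so a narrow acceptance window selects high-p_T tracks. A minimal sketch of the expected offset in a thin-lens, small-angle approximation (the radius and sensor spacing are illustrative values, not the actual CMS geometry):

```python
def stub_offset_mm(pt_gev, r_m=0.25, spacing_m=1e-3, b_tesla=3.8):
    """Approximate transverse offset between hits in two stacked sensors.

    Small-angle approximation: the local bend angle at radius r is
    ~ 0.3 * B * r / (2 * pT), so the offset over the sensor spacing is
    spacing * 0.3 * B * r / (2 * pT)  (pT in GeV, B in T, lengths in m).
    """
    return spacing_m * 0.3 * b_tesla * r_m / (2.0 * pt_gev) * 1e3  # in mm

for pt in (1.0, 2.0, 5.0, 10.0):
    print(f"pT = {pt:4.1f} GeV -> offset ~ {stub_offset_mm(pt) * 1e3:.0f} um")
# Low-pT tracks give a wide offset and can be rejected with a ~100 um window.
```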

LHCb: The Large Hadron Collider beauty experiment

[Event display: a B0s meson → μ+ μ- candidate]

LHCb Trigger & DAQ Today

Single-arm forward spectrometer (~300 mrad acceptance) for precision measurements of CP violation & rare B-meson decays.

Trigger flow: 40 MHz → L0 hardware trigger (L0-e/γ, L0-hadron, L0-μ) < 1 MHz → HLT1 (high-p_T tracks with IP ≠ 0) 30 kHz → HLT2 (global reconstruction; inclusive & exclusive selections) 3 kHz.

- Designed to run with an average number of collisions per BX of ~0.5 & n_b ~2600 → L ~ 2x10^32 cm^-2 s^-1; currently running with L = 3.3x10^32 cm^-2 s^-1
- Reads out 10 times more often than ATLAS/CMS to reconstruct secondary decay vertices → very high rate of small events (design event size ~35 kB, ~55 kB today)
- L0 trigger: high efficiency on dimuon events, but removes half of the hadronic signals
- All trigger candidates are stored in the raw data & compared with the offline candidates:
  - HLT1 (software): tight CPU constraint (12 ms); reconstructs particles in the VELO, determines the positions of the vertices
  - HLT2 (software): global track reconstruction, searches for secondary vertices
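The 12 ms HLT1 budget quoted above can be turned into a rough farm size with the same rate x time argument used earlier for ATLAS. A minimal sketch, assuming the nominal 1 MHz L0 output as the HLT1 input rate:

```python
l0_output_rate = 1e6    # events/s entering HLT1 (nominal L0 output, "< 1 MHz")
hlt1_budget    = 12e-3  # CPU time budget per event [s]

busy_cores = l0_output_rate * hlt1_budget
print(f"~{busy_cores:.0f} cores busy on average just for HLT1")  # ~12000
```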

LHCb Upgrade (Phase 1)

Interesting physics with ~50 fb^-1 (design: 5 fb^-1):
- precision measurements (charm CPV, …)
- searches (~1 GeV Majorana neutrinos, …)

Today's limitations:
- 2011: L ~ O(150%) of design with only O(35%) of the bunches
- after 2017: higher rate → higher E_T thresholds → even fewer hadronic signals

UPGRADE NEEDED: increase the read-out to 40 MHz & eliminate the trigger limitations:
- new trigger flow: all sub-detectors read out at 40 MHz → LLT (custom electronics on the p_T of hadrons, μ, e/γ; output 1-40 MHz) → HLT on a CPU farm (tracking, vertexing, inclusive/exclusive selections) → 20 kHz to storage
- the LLT will not simply reduce the rate as L0 does, but will enrich the selected sample
- new VELO detector; no major changes for the muon & calorimeter systems
- upgrade of electronics & DAQ:
  - data links from the detector: components from the GBT project
  - read-out network sized for ~24 Tb/s
  - common back-end read-out board: TELL40, with parallel optical I/Os (12 x >4.8 Gb/s), GBT compatible
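A quick consistency check of the quoted network size, using only the figures above (link count and board count are order-of-magnitude estimates for illustration, not the actual system design):

```python
network_tbps    = 24.0   # read-out network capacity [Tb/s], from the slide
link_gbps       = 4.8    # per-link speed of the TELL40 optical I/Os [Gb/s]
links_per_board = 12     # parallel optical I/Os per TELL40

links  = network_tbps * 1e3 / link_gbps
boards = links / links_per_board
print(f"~{links:.0f} links, i.e. O({boards:.0f}) TELL40 boards")  # ~5000 links, O(400) boards

# Cross-check against the 40 MHz read-out: with today's ~55 kB events the raw
# throughput would be ~17.6 Tb/s, within the ~24 Tb/s budget.
print(f"40 MHz x 55 kB ~ {40e6 * 55e3 * 8 / 1e12:.1f} Tb/s")
```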



Need for Bandwidth


Phase2


New front
-
end


GigaBit

Transceiver (GBT) chipset


point
-
to
-
point high speed bi
-
directional link to send data from/to counting
room at ~5Gb/s


simultaneous transmission of data for DAQ, Slow Control, Timing Trigger &
Control (TTC) systems


robust error correction scheme to correct errors caused by SEUs


Advanced Telecommunications Computing Architecture (ATCA)


point
-
to
-
point connections between crate modules


higher bandwidth in output


Which electronics in 20 y? Will VME be still ok? Do we need ATCA
functionality?






9/13/2011

N.
Garelli

(CERN). NEC'2011

42

Front
-
End

~200 Mb/s


Board

Board

V
M
E

PC

~40 Mb/s

S
-
link

~200 Mb/s

Ethernet

1
Gb
/s

Read
-
Out System

Read
-
out from cavern to counting room

GBT
~5
Gb
/s

A
T
C
A

~40
Gb
/s

Ethernet ~40
Gb
/s

Conclusion

- The Trigger & DAQ systems have worked extremely well until now
- After the long LHC shutdown of 2017, the machine goes beyond design: increased luminosity, increased pile-up
- The experiments need to upgrade to work beyond design:
  - new inner tracker: radiation damage & more pile-up
  - Level-1 trigger: more complex hardware selection & longer latency to deal with
  - new read-out links: higher bandwidth
  - scale the DAQ and the network
- Difficult to define the upgrade strategy as of today: unstable schedule, while maintaining the current experiments
- One thing is sure: the LHC experiments upgrade will be exciting