Distributed Algorithms and VLSI

Keynote SSS'08
Ulrich Schmid
Vienna University of Technology
s@ecs.tuwien.ac.at
Keynote SSS'08, U. Schmid
Content
• Short introduction to Very Large Scale Integration (VLSI): A photo gallery …
  – Great perspectives
  – But …
• VLSI Circuits ↔ Distributed Algorithms
  – DAs and VLSI: Do's and Don'ts
• Do's – an Example: DARTS Fault-tolerant Clocks
  – Starting point: A simple distributed algorithm
  – How to implement it in VLSI?
  – Proofs
  – [Under the rug: Metastability …]
Short introduction to VLSI:
A photo gallery …
VLSI Circuits
Major Ingredients
Transistors (nMOS):
[figure: nMOS cross-section – polysilicon gate over an SiO2 insulator, n-doped source and drain in a p substrate, channel of length L and width W; schematic symbol with gate, source, drain]
Interconnect (wires): form & connect gates (inverter shown)
Miniaturization: Moore's Law
Intel 4004 (1971):
• 2,250 transistors
• 12 mm², 10 µm process
• 0.74 MHz, 1 W
Intel P4 (2001):
• 42,000,000 transistors
• 217 mm², 0.180 µm = 180 nm process
• 2 GHz, 50 W
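The two data points above already pin down the Moore's-law growth rate; a quick back-of-the-envelope check in plain Python (transistor counts taken from the slide):

```python
import math

# Transistor counts and years from the slide (Intel 4004 vs. Pentium 4).
t0, n0 = 1971, 2_250
t1, n1 = 2001, 42_000_000

growth = n1 / n0                       # ~18,667x over 30 years
doublings = math.log2(growth)          # ~14.2 doublings
years_per_doubling = (t1 - t0) / doublings

print(f"{growth:.0f}x growth, doubling every {years_per_doubling:.1f} years")
# -> 18667x growth, doubling every 2.1 years
```

A doubling roughly every two years, matching the usual statement of Moore's law.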
Multicore Processors
IBM POWER4 (dual-core)
IBM Cell (8-core)
Tilera TILE64
Today: < 45 nm
Systems-on-Chip (SoC)
• Assemble whole SoC from suitable components
• Market for "IP cores", from different vendors
• Sync/async interfaces
Nvidia Tegra
Great perspectives for VLSI circuits.
But …
Manufacturing Limitations
VLSI Lab, Politecnico di Torino
Optical Proximity Correction, Intel Corp.
Defects (Electromigration)
P. Gutman, IBM T. J. Watson Research Center
M. Ohring, Reliability and Failure of Electronic Materials and Devices, 1998
ASM Corp. Shanghai
[micrographs: whiskers, hillock, void]
Defects (Gate Oxide Breakdown)
K.-L. Pey, C.-H. Tung, Physical characterization of breakdown in metal-oxide-semiconductor transistors:
breakdown-induced thermochemical reactions in (a) poly-Si gate and (b) p-Si substrate of n-channel MOSFETs.
ESD-induced gate oxide breakdown (Semitracks, Inc.; www.siliconfareast.com)
Power Dissipation Problems
A. Choudhary, UMass
Small transistor dissipating 5 mW in an SOI wafer; University of Bolton
⇒ Reduce supply voltage!
Radiation-induced Soft Errors
SLAC National Accelerator Laboratory, Stanford
SET → SEU
[plot (Powell, 1959): soft error rate vs. altitude, 0 km to 10 km, rate spanning 10^-3 to 1]
Soft error rates dominate in VLSI!
Slow Signal Propagation
• Transistors switch faster
BUT
• Wires get thinner
• Less transistor driving strength
⇒ RC signal propagation along wires dominates circuit speed
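Why wires dominate: the delay of a distributed RC line grows quadratically in its length. A minimal sketch using the Elmore approximation, with assumed (purely illustrative) per-mm resistance and capacitance values:

```python
# Distributed RC delay, Elmore approximation: 0.5 * R_total * C_total.
# The per-mm values below are illustrative assumptions, not process data.
r_per_mm = 100.0      # wire resistance, ohms per mm (assumed)
c_per_mm = 0.2e-12    # wire capacitance, farads per mm (assumed)

def wire_delay(length_mm: float) -> float:
    """Elmore delay of a distributed RC line: quadratic in wire length."""
    R = r_per_mm * length_mm
    C = c_per_mm * length_mm
    return 0.5 * R * C

for L in (1, 2, 4):
    print(f"{L} mm: {wire_delay(L) * 1e12:.1f} ps")
# -> 1 mm: 10.0 ps / 2 mm: 40.0 ps / 4 mm: 160.0 ps
```

Doubling the wire length quadruples the delay; shrinking the wire cross-section also raises r_per_mm, so scaling makes this worse even at fixed length.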
Clock Distribution Problem
Circuit & physical design of the POWER4 microprocessor, IBM J. Res. Dev.; Cell processor
Synchronous design paradigm:
[figure: a common clock signal CLK, arriving with propagation delay t_PD,CLK, clocks flip-flops FF1, FF2, …, FFk, …, FFm connected by combinational logic (gates); data delays t_dly,DATA,1m, t_dly,DATA,2m, …, t_dly,DATA,km]
→ Synchronous abstraction increasingly difficult to maintain!
Hence, deep submicron VLSI circuits …
… are in fact FT Distributed Systems
• Spatial distribution
• Message-passing communication
• Massive concurrency
• Asynchrony
• Failures
• Security issues (IP cores!)
Worthwhile undertaking: Explore the applicability of DA results & approaches to VLSI circuits …
Applying DA Research in VLSI?
• 2008 Dagstuhl Seminar "Distributed Algorithms in VLSI Chips" (B. Charron-Bost, J. Ebergen, S. Dolev, U. Schmid, http://www.dagstuhl.de/08371)
• [Great place for such undertakings …]
DA and VLSI – Don'ts
• Apply standard DAs in the VLSI context – too heavyweight in terms of computation & communication
• Apply standard replication-based FT (for coping with "classic" VLSI faults) – too heavyweight in terms of power & area penalties
BUT …
DA and VLSI – Do's (I)
• Apply "lightweight" DAs for decentralized handling of [nowadays centralized] functions, e.g. in large multicores
  – Memory access scheduling (Moscibroda & Mutlu, PODC'08)
  – Self-stabilizing algorithms for handling transient failures (S. Dolev & Haviv, IEEE ToC, 2006)
  – Fault-tolerant clock generation in SoCs (Függer, Schmid, Fuchs, Kempf, EDCC'06)
• Apply replication-based FT to cope with malicious failures in VLSI
  – IP core security threats in SoCs
  – Inconsistently propagated errors in high-dependability applications
Tilera TILE64
DA and VLSI – Do's (II)
• Apply VLSI results & approaches in DA research
  – Error-correcting codes and asynchronous consensus (Friedman, Mostefaoui, Rajsbaum & Raynal, IEEE ToC, 2007)
  – Corruption-resilient codes (S. Dolev & Tzachar, DISC'08)
• Extend DA approaches to contribute to a (still lacking!) "Theory of Dependable VLSI Circuits"
  – Early example: the arbiter problem (Lamport, ~1980)
  – Handle massive concurrency (continuously computing gates!)
  – Handle computation and communication resource restrictions
  – Handle "non-closed" specifications
  – Define suitable failure models
Do’s –an Example:
DARTS Fault-tolerant Clocks
Keynote SSS'08U. Schmid
24
DARTS – Distributed Algorithms for Robust Tick Synchronization
Joint work with A. Steininger, M. Függer, G. Fuchs
[and many others]
http://ti.tuwien.ac.at/ecs/research/projects/darts
Clocking in SoCs (I)
Classic synchronous paradigm
• Concept: Common notion of time for the entire chip
• Method: Single quartz oscillator; global, phase-accurate clock tree
Disadvantages
– Cumbersome clock tree design
– High power consumption
– Clock is a single point of failure!
[figure: SoC with DSP, WLAN, Video, GPRS, GPS blocks]
Clocking in SoCs (II)
Alternative: DARTS clocks
• Concept: Multiple synchronized tick generators
• Method: Distributed FT tick generation algorithms (TG algs), interacting via a dedicated clock network (TG net)
Advantages
– No quartz oscillator(s)
– No critical clock tree
– Clock is no single point of failure!
– Reasonable synchrony
[figure: SoC with DSP, WLAN, Video, GPRS, GPS blocks]
Reasonable Synchrony?
Phase synchronization / clock synchronization / tick synchronization:
– max precision
– min/max frequency
Starting point: A Distributed Algorithm
Failure-free case (f = 0): simple barrier synchronization
On booting do:
  send tick(0) to all; C := 0;   /* C is last tick number sent */
Continuously do:
  if received tick(C) from all n processes:
    send tick(C+1) to all; C := C+1;

(Modified) Srikanth & Toueg algorithm:
On booting do:
  send tick(0) to all; C := 0;   /* C is last tick number sent */
Continuously do:
  if received tick(C) from n−f different processes:
    send tick(C+1) to all; C := C+1;

Failure case f > 0?
A Distributed Algorithm (I)
On booting do:
  send tick(0) to all; C := 0;   /* C is last tick number sent */
Continuously do:
  if received tick(X) from f+1 different processes and X > C:
    send tick(C+1), …, tick(X) to all [once]; C := X;
  if received tick(C) from n−f different processes:
    send tick(C+1) to all [once]; C := C+1;
A Distributed Algorithm (III)
For n ≥ 3f+1 and up to f Byzantine failures, with end-to-end delays ∈ [d, d+ε]:
• Suppose process p sends tick(C+1) at time t
• Then process q also sends tick(C+1) by time t+d+2ε
⇒ Clock ticks occur approximately synchronously

On booting:
  send tick(0) to all; C := 0;
If got tick(X) from f+1 procs and X > C:
  send tick(C+1), …, tick(X) to all [once]; C := X;
If got tick(C) from n−f processes:
  send tick(C+1) to all [once]; C := C+1;

[timing diagram: p sends at t, having heard from n−f ≥ 2f+1 processes, of which f+1 are correct; any q' sends by t+ε; any q sends by t+d+2ε; all delays bounded by d+ε]
How to implement this DA in VLSI?
Mind: We don't have any clock available for a synchronous implementation …
Asynchronous Basic Circuits
AND, OR, …; Muller C-Gate:
[figure: C-gate with inputs a, b and output y; propagation delay t_prop; internal feedback loop with delay t_loop]
Truth table: a b = 00 → y = 0; a b = 11 → y = 1; a b = 01 or 10 → y = y_old
– Continuously computes y = y(a,b) [with delay t_prop]
– AND gate for signal transitions (→ barrier synchronization)
– Note: Inevitably involves a feedback loop [t_loop]
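Ignoring the delays t_prop and t_loop, the C-gate's truth table boils down to a two-line function; a Python sketch:

```python
def c_element(a: int, b: int, y_old: int) -> int:
    """Muller C-gate: output follows the inputs when they agree, else holds."""
    if a == b:          # a b = 00 -> 0,  a b = 11 -> 1
        return a
    return y_old        # mixed inputs: keep the previous output

# Used as an AND gate for transitions: y rises only after BOTH inputs have
# risen, and falls only after both have fallen (barrier synchronization).
y = 0
trace = []
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    y = c_element(a, b, y)
    trace.append(y)
print(trace)  # [0, 1, 1, 0]
```

Equivalently, y = a·b + (a+b)·y_old, which is how the majority feedback loop realizes it in hardware.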
Asynchronous Communication
• Convey alternating up/down signal transitions only
• FIFO "zero-bit message" channels [with delay]
[figure: sender and receiver connected by a k-bit channel]
k-bit data transmission costly: additional circuitry + signal wires
• performance penalty (serial data transmission)
• additional wires (parallel data transmission)
Major Challenges
If received tick(X) from f+1 processes and X > C:
  send tick(C+1), …, tick(X) to all [once]
  C := X
If received tick(C) from n−f processes:
  send tick(C+1) to all [once]
  C := C+1

Annotated challenges:
– k-bit message, k unbounded → to be replaced by zero-bit messages (tick number k kept at receiver)
– Atomicity of actions ["once"] → to be ensured by architecture + path delay constraints
– Threshold comparison → build suitable threshold circuits
k-bit → Zero-bit Messages
[figure: Counter Module – remote pipe and local pipe feed a Diff-Gate; pipe compare signal generation derives GEQ_e, GR_e, GEQ_o, GR_o; inputs R_remote,in and LocalClk]
• The TG net feeds every clock signal to every TG alg (bus of width n)
• At every TG alg, n−1 Counter Modules [one per remote TG alg] maintain tick numbers
• Anonymous ticks ⇒ rules only distinguish
  – r_rem > r_loc (f+1, GR rule)
  – r_rem ≥ r_loc (n−f, GEQ rule)
[figure: asynchronous up/down counter; TG net connecting TG alg 1 … TG alg 6]
Move tick number maintenance from sender to receiver
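As a software caricature of one Counter Module (the hardware uses two elastic pipelines plus a Diff-Gate; here an assumed signed difference counter stands in for both, and the class and method names are mine):

```python
# Toy model of one DARTS Counter Module: the receiver counts anonymous
# clock transitions instead of k-bit tick numbers, and exposes only the
# comparison status signals used by the GR and GEQ rules.
class CounterModule:
    def __init__(self):
        self.diff = 0            # remote ticks seen minus local ticks seen

    def remote_tick(self):       # a transition arrives on the remote clock line
        self.diff += 1

    def local_tick(self):        # the local TG alg produced a tick itself
        self.diff -= 1

    @property
    def GR(self):                # "greater" status: r_rem > r_loc (f+1 rule)
        return self.diff > 0

    @property
    def GEQ(self):               # "greater or equal": r_rem >= r_loc (n-f rule)
        return self.diff >= 0

cm = CounterModule()
cm.remote_tick()                   # remote clock is one tick ahead
status_ahead = (cm.GR, cm.GEQ)
cm.local_tick()                    # local clock catches up
status_equal = (cm.GR, cm.GEQ)
print(status_ahead, status_equal)  # (True, True) (False, True)
```

A TG alg then ticks when f+1 of its Counter Modules assert GR, or n−f assert GEQ, mirroring the two rules of the algorithm.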
Asynchronous Up/Down Counter
[figure: Counter Module as before – remote pipe, local pipe, Diff-Gate, pipe compare signal generation]
Ingredients:
– Two elastic pipelines (= FIFO buffers for signal transitions) count remote and local clock ticks
– Common transitions removed by the Diff-Gate
– GR and GEQ status signals derived from the last stages
Metastability-free by construction [well, almost …]
Atomicity of Actions
• The gates making up the f+1 and the n−f rule compute continuously and concurrently, hence
  – may both produce tick(k), for the same k
  – this must be circumvented by all means ["once"]
• How to ensure this atomicity?
  – Use separate circuitry for generating up-transitions (odd k) and down-transitions (even k) → tick(k−1) and tick(k) never mixed up
  – Ensure that the ratio of maximum and minimum delay along certain paths is bounded (cp. Θ-Model [WLS05], ABC Model [RS08]) → tick(k−2) and tick(k) never mixed up
Threshold Modules
[figure: arrays of threshold gates]
• GR and GEQ status signals of the n−1 Counter Modules are fed into f+1 and n−f threshold gates
• Back-transition from status signals to transition signalling for generating tick(k)
Proofs
Proofs & Implementations (SW)
[diagram: a proof links the model (alg + sys) to the specification; an abstraction step links the SW implementation to the model]
Specification: tick-synced FT clocks (n TG algs, f Byz.); max precision, min/max frequency
Model: the tick generation algorithm as a distributed state machine with Byzantine failures
Implementation: executable machine code, real system (TTP implementation)
Proof goals:
• Prove that the model meets the specification
• Minimize the "proof gap" between model and implementation
Proofs & Implementations (HW)
[diagram: as in the SW case, but HW capabilities constrain the model, and the implementation is split into SW and HW parts under partitioning & constraints]
Hierarchical Proof
• Specification of low-level building blocks
• Up/down ticks correctly simulate tick(k) [interlocking proof]
• Synchronization properties (P), (U), (S)
• Bounded precision & frequency
• Bounded space (pipeline)
[proof stack: tick-up/down → interlocking proof → tick(k), tick(k+1), … → (P), (U), (S) → precision & frequency, bounded space]
Interlocking Proof – "[once]"
On booting:
  send tick(0) to all; C := 0;
If got tick(X) from f+1 procs and X > C:
  send tick(C+1), …, tick(X) to all [once]; C := X;
If got tick(C) from n−f processes:
  send tick(C+1) to all [once]; C := C+1;
[figure: up/down transitions for ticks k−2, k, k+1; crossed-out arrows mark re-generation of tick(C) and tick(C+1), which "[once]" must rule out]
Higher-Level Properties
Prove elementary synchronization properties:
• (P) Progress. If all correct nodes send tick(k) by time t, then every correct node sends at least tick(k+1) by t + T+.
• (U) Unforgeability. If no correct node sends tick(k) by time t, then no correct node sends tick(k+1) by t + T-first.
• (S) Simultaneity. If some correct node sends tick(k) by time t, then every correct process sends at least tick(k) by t + T-first.
and, on top of those,
• Precision & frequency
• Bounded pipeline size
Complete Suite of Proofs
[EDCC’06]
Complete Implementation
[figure: node p with 3f+1 pipelines, each consisting of a remote/external pipe, a local pipe, a Diff-Gate, and pipe compare signal generators producing GEQ_e, GR_e, GEQ_o, GR_o; threshold logic with f+1 and 2f+1 gates generates clk_out; handshake signals req_ext/req_int, ack_ext/ack_int]
• Implementation of the model only needs to
  – implement the low-level building blocks as specified
  – ensure the additional delay ratio bounds for the interlocking proof (place & route constraints)
[DFT'06]
DARTS – Lessons Learned
• Fault-tolerant distributed algorithms are indeed applicable in the VLSI context, but need "down-sizing"
• Distributed computing models with bounded delay ratio (Θ-Model, ABC model) are well-suited for the VLSI context (technology migration, re-use of models, etc.)
• A transition-logic approach alone is not sufficient for fault tolerance ⇒ need a model that integrates event and state representation
• Time-free models suffer from a large "proof gap" ⇒ need a model incorporating (continuous) time
• Failures raise new metastability concerns ⇒ metastability needs further investigation
Under the rug: Metastability …
[Stolen from the Dagstuhl presentation of A. Steininger …]
Metastability
[figure: two cross-coupled inverters Inv1 and Inv2 with u_i,2 = u_o,1 and u_i,1 = u_o,2; their transfer curves intersect in three operating points: stable (HI), stable (LO), and metastable]
Bistable element (memory cell) with positive feedback
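The three operating points can be reproduced numerically. Below, an assumed tanh-shaped inverter characteristic (a toy model, not device physics) is iterated around the feedback loop:

```python
import math

# Two cross-coupled inverters, modeled with an assumed smooth (tanh)
# transfer curve; gain and thresholds are illustrative, not a device model.
def inverter(v: float, gain: float = 4.0) -> float:
    return 0.5 * (1.0 - math.tanh(gain * (v - 0.5)))

def settle(v: float, steps: int = 100) -> float:
    """Iterate the positive feedback loop: v -> inv(inv(v))."""
    for _ in range(steps):
        v = inverter(inverter(v))
    return v

print(settle(0.2))   # settles near the stable LO operating point
print(settle(0.8))   # settles near the stable HI operating point
print(settle(0.5))   # v = 0.5 is the metastable point: iteration never leaves it
```

The exact midpoint is an unstable fixed point: mathematically the loop stays there forever, which mirrors why a real bistable's metastable resolution time is unbounded, and why any perturbation eventually drives it to HI or LO.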
Revisit Muller C-Element
[figure: C-element behavior under a pure-delay model at gate and interconnect vs. a limited output slope: normal operation, oscillation, creeping]
Error Containment
[figure: three TG nodes p, q, r, each with a threshold module (ThM) and tick generation (TG), exchanging count signals count_pq, count_pr, count_qp, count_qr, count_rp, count_rq]
According to our proofs the wall holds – but we ignored metastability!
The Counter Module
[figure: TG nodes as before, with the Counter Module (remote pipe, local pipe, Diff-Gate, pipe compare signal generation) highlighted]
• Purely combinational logic: won't hurt, BUT won't help
• Muller C-Element: a metastable input may pass through!
The Threshold Module
[figure: TG nodes as before, with the Threshold Module highlighted]
• Purely combinational logic ⇒ will not create a metastability problem
• BUT: will propagate metastability while being near the threshold
• NO masking, NO protection
Metastability Containment?
[figure: TG nodes p, q, r as before]
The End …
© 2007, WDR
Some References
• [Bau05] R. Baumann. Radiation-induced soft errors in advanced semiconductor technologies. IEEE Transactions on Device and Materials Reliability, 5(3):305--316, Sept. 2005.
• [BJ83] J. C. Barros and B. W. Johnson. Equivalence of the arbiter, the synchronizer, the latch, and the inertial delay. IEEE Trans. Comput., 32(7):603--614, 1983.
• [BZMLCLD02] R. Bhamidipati, A. Zaidi, S. Makineni, K. Low, R. Chen, K.-Y. Liu, and J. Dalgrehn. Challenges and methodologies for implementing high-performance network processors. Intel Technology Journal, 6(3):83--92, Aug. 2002.
• [BY07] A. Bink and R. York. ARM996HS, the first licensable, clockless 32-bit processor core. IEEE Micro, 25(2):58--68, February 2007.
• [Bor05] S. Borkar. Designing reliable systems from unreliable components: the challenges of transistor variability and degradation. IEEE Micro, 25(6):10--16, Nov. 2005.
• [Cha84] D. M. Chapiro. Globally-Asynchronous Locally-Synchronous Systems. PhD thesis, Stanford University, Oct. 1984.
• [Con03] C. Constantinescu. Trends and challenges in VLSI circuit reliability. IEEE Micro, 23(4):14--19, July 2003.
• [DH06a] S. Dolev and Y. Haviv. Self-stabilizing microprocessors, analyzing and overcoming soft-errors. IEEE Transactions on Computers, 55(4):385--399, Apr. 2006.
• [Dol00] S. Dolev. Self-Stabilization. MIT Press, 2000.
• [DR98] C. Dyer and D. Rodgers. Effects on spacecraft & aircraft electronics. In Proceedings ESA Workshop on Space Weather, ESA WPP-155, pages 17--27, Noordwijk, The Netherlands, Nov. 1998. ESA.
• [DT08] S. Dolev and N. Tzachar. Brief announcement: Corruption resilient fountain codes. In DISC, pages 502--503, 2008.
• [FFSK06:DFT] M. Ferringer, G. Fuchs, A. Steininger, and G. Kempf. VLSI implementation of a fault-tolerant distributed clock generation. IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT 2006), pages 563--571, Oct. 2006.
• [FMRR07] R. Friedman, A. Mostefaoui, S. Rajsbaum, and M. Raynal. Asynchronous agreement and its relation with error-correcting codes. IEEE Trans. Comput., 56(7):865--875, 2007.
• [Fri01] E. G. Friedman. Clock distribution networks in synchronous digital integrated circuits. Proceedings of the IEEE, 89(5):665--692, May 2001.
• [FSFK06] M. Fuegger, U. Schmid, G. Fuchs, and G. Kempf. Fault-tolerant distributed clock generation in VLSI Systems-on-Chip. In Proceedings of the Sixth European Dependable Computing Conference (EDCC-6), pages 87--96. IEEE Computer Society Press, Oct. 2006.
• [ITRS05] International technology roadmap for semiconductors, 2005.
• [KHP04] T. Karnik, P. Hazucha, and J. Patel. Characterization of soft errors caused by single event upsets in CMOS processes. IEEE Transactions on Dependable and Secure Computing, 1(2):128--143, April-June 2004.
• [KK98] I. Koren and Z. Koren. Defect tolerance in VLSI circuits: Techniques and yield analysis. Proceedings of the IEEE, 86(9):1819--1838, Sep. 1998.
• [Lam84] L. Lamport. Buridan's principle. Technical report, SRI Technical Report, 1984.
• [Lam03] L. Lamport. Arbitration-free synchronization. Distributed Computing, 16(2/3):219--237, September 2003.
• [LP76] L. Lamport and R. Palais. On the glitch phenomenon. Technical report, SRI Technical Report, 1976.
• [LS03] G. Le Lann and U. Schmid. How to implement a timer-free perfect failure detector in partially synchronous systems. Technical Report 183/1-127, Department of Automation, Technische Universität Wien, January 2003.
• [Mar81] L. Marino. General theory of metastable operation. IEEE Transactions on Computers, C-30(2):107--115, February 1981.
• [MA01] M. S. Maza and M. L. Aranda. Analysis of clock distribution networks in the presence of crosstalk and ground bounce. In Proceedings International IEEE Conference on Electronics, Circuits, and Systems (ICECS), pages 773--776, 2001.
• [Nic05] M. Nicolaidis. Design for soft error mitigation. IEEE Transactions on Device and Materials Reliability, 5(3):405--418, Sept. 2005.
• [Nor96] E. Normand. Single-event effects in avionics. IEEE Transactions on Nuclear Science, 43(2):461--474, Apr. 1996.
• [PB93] M. Peercy and P. Banerjee. Fault tolerant VLSI systems. Proceedings of the IEEE, 81(5):745--758, May 1993.
• [Res01] P. J. Restle and others. A clock distribution network for microprocessors. IEEE Journal of Solid-State Circuits, 36(5):792--799, May 2001.
• [RDS90] L. M. Reyneri, D. Del Corso, and B. Sacco. Oscillatory metastability in homogeneous and inhomogeneous flip-flops. IEEE Journal of Solid-State Circuits, SC-25(1):254--264, February 1990.
• [RS08] P. Robinson and U. Schmid. The Asynchronous Bounded-Cycle Model. Proceedings SSS'08, 2008.
• [SE02] I. E. Sutherland and J. Ebergen. Computers without clocks. Scientific American, 287(2):62--69, Aug. 2002.
• [Sut89] I. E. Sutherland. Micropipelines. Communications of the ACM (Turing Award lecture), 32(6):720--738, June 1989. ISSN 0001-0782.
• [WLS05] J. Widder, G. Le Lann, and U. Schmid. Failure detection with booting in partially synchronous systems. In Proceedings of the 5th European Dependable Computing Conference (EDCC-5), volume 3463 of LNCS, pages 20--37, Budapest, Hungary, Apr. 2005. Springer Verlag.
• [WS05] J. Widder and U. Schmid. Achieving synchrony without clocks. Research Report 49/2005, Technische Universität Wien, Institut für Technische Informatik, 2005. (Submitted.)