Dec 12, 2013

Andrea Goldsmith


Fundamental communication limits in non-asymptotic regimes

Thanks to collaborators Chen, Eldar, Grover, Mirghaderi, Weissman

Information Theory and Asymptopia

- Capacity with asymptotically small error achieved by asymptotically long codes.
- Defining capacity in terms of asymptotically small error and infinite delay is brilliant!
- Has also been limiting
  - Cause of the unconsummated union between networks and information theory
- Optimal compression based on properties of asymptotically long sequences
  - Leads to optimality of separation
- Other forms of asymptopia
  - Infinite SNR, energy, sampling, precision, feedback, ...



Why back off? Theory not informing practice.

Theory vs. practice:
- Infinite blocklength codes vs. uncoded to LDPC
- Infinite SNR vs. -7 dB in LTE
- Infinite energy vs. finite battery life
- Infinite feedback vs. 1-bit ARQ
- Infinite sampling rates vs. 50-500 Msps
- Infinite (free) processing vs. 200 MFLOPs-1B FLOPs
- Infinite-precision ADCs vs. 8-16 bits

What else lives in asymptopia?

Backing off from: infinite blocklength

- Recent developments on finite blocklength
  - Channel codes (capacity C for n -> infinity)
  - Source codes (entropy H or rate distortion R(D)) [Ingber, Kochman'11; Kostina, Verdu'11]
  - Separation not optimal [Wang et al.'11; Kostina, Verdu'12]

Grand Challenges Workshop: CTW Maui

- From the perspective of the cellular industry, the Shannon bounds evaluated by Slepian are within 0.5 dB for a packet size of 30 bits or more for the real AWGN channel at 0.5 bits/sym, for BLER = 1e-4. In this perhaps narrow context there is not much uncertainty in performance evaluations.
- For cellular and general wireless channels, finite-blocklength bounds for practical fading models are needed, and there is very little work along those lines.
- Even for the AWGN channel, the computational effort of evaluating the Shannon bounds is formidable.
- This indicates a need for accurate approximations, such as those recently developed based on the idea of channel dispersion.
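Dispersion-based approximations are cheap to evaluate. Below is a sketch (not from the slides) of the normal approximation R ~ C - sqrt(V/n)*Qinv(eps) + log2(n)/(2n) for the real AWGN channel, using the standard capacity and dispersion expressions; the SNR and blocklength values are illustrative only.

```python
from math import e, log2, sqrt
from statistics import NormalDist

def awgn_normal_approx(snr, n, eps):
    """Normal approximation to the maximal rate (bits/channel use) of the
    real AWGN channel at blocklength n and block error probability eps."""
    C = 0.5 * log2(1 + snr)                                      # capacity
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * log2(e) ** 2  # dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)                        # Q^{-1}(eps)
    return C - sqrt(V / n) * q_inv + log2(n) / (2 * n)

# The backoff from capacity (0.5 bits/use at 0 dB) shrinks like 1/sqrt(n):
for n in (100, 1000, 10000):
    print(n, round(awgn_normal_approx(snr=1.0, n=n, eps=1e-4), 4))
```

The approximation makes the finite-blocklength penalty visible at a glance, in contrast to the exact Shannon bounds, which are costly to compute.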


Diversity vs. Multiplexing Tradeoff

- Use antennas for multiplexing (error prone) or diversity (low P_e)
- Diversity/multiplexing tradeoff (Zheng/Tse): with N_t transmit and N_r receive antennas, the optimal diversity gain at integer multiplexing gain r is

  d*(r) = (N_t - r)(N_r - r)

  where r = lim_{SNR->inf} R(SNR)/log(SNR) and d = -lim_{SNR->inf} log(P_e(SNR))/log(SNR).

What is infinite?
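The Zheng/Tse tradeoff curve is easy to compute: it is the piecewise-linear interpolation of d*(r) = (N_t - r)(N_r - r) at integer multiplexing gains. A minimal illustrative sketch (not the slides' code):

```python
def dmt(nt, nr, r):
    """Optimal diversity gain d*(r) for an (nt, nr) MIMO channel:
    piecewise-linear interpolation of (nt - k)(nr - k) over the integer
    multiplexing gains k = 0, ..., min(nt, nr) (Zheng/Tse)."""
    if not 0 <= r <= min(nt, nr):
        raise ValueError("multiplexing gain out of range")
    k = int(r)                        # lower integer point
    d0 = (nt - k) * (nr - k)
    if r == k:
        return d0
    d1 = (nt - k - 1) * (nr - k - 1)
    return d0 + (r - k) * (d1 - d0)   # linear interpolation between integers

# 2x2 MIMO: full diversity 4 at r = 0, no diversity at full multiplexing r = 2.
assert dmt(2, 2, 0) == 4 and dmt(2, 2, 1) == 1 and dmt(2, 2, 2) == 0
```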

Backing off from: infinite SNR

- High-SNR myth: use some spatial dimensions for multiplexing and others for diversity
- Reality: use all spatial dimensions for one or the other*
  - Diversity is wasteful of spatial dimensions with HARQ
  - Adapt modulation/coding to channel SNR

*"Transmit Diversity vs. Spatial Multiplexing in Modern MIMO Systems", Lozano/Jindal


Diversity-Multiplexing-ARQ Tradeoff

- Suppose we allow ARQ with incremental redundancy
- ARQ is a form of diversity [Caire/El Gamal 2005]

[Plot: diversity gain d vs. multiplexing gain r for ARQ window sizes L = 1, 2, 3, 4.]
Joint Source/Channel Coding

- Use antennas for multiplexing:
  high-rate quantizer -> ST code -> high-rate decoder (error prone)
- Use antennas for diversity:
  low-rate quantizer -> ST code -> high-diversity decoder (low P_e)
- How should antennas be used? Depends on the end-to-end metric.

Joint Source-Channel coding w/MIMO

[Block diagram: source encoder (rate R, input u_k) -> index assignment p(i) -> channel encoder (s bits) -> MIMO channel -> channel decoder (s bits) -> inverse index assignment p(j) -> source decoder (output v_j).]

- Increased rate here decreases source distortion
- But permits less diversity here
- Resulting in more errors
- And maybe higher total distortion
- A joint design is needed

Antenna Assignment vs. SNR

Relaying in wireless networks

[Diagram: Source -> Relay -> Destination]

- Intermediate nodes (relays) in a route help to forward the packet to its final destination.
- Decode-and-forward (store-and-forward) most common:
  - Packet decoded, then re-encoded for transmission
  - Removes noise at the expense of complexity
- Amplify-and-forward: relay just amplifies the received packet
  - Also amplifies noise: works poorly for long routes and low SNR.
- Compress-and-forward: relay compresses the received packet
  - Used when the source-relay link is good but the relay-destination link is weak

Capacity of the relay channel unknown: only have bounds.
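The noise-amplification point can be made concrete with a toy numerical sketch. This assumes an idealized two-hop line network (no direct link, full-duplex relay, unit-noise AWGN hops with SNRs snr1 and snr2) and uses the standard textbook rate expressions for decode-forward and amplify-forward; it is illustrative only, not a result from the talk.

```python
from math import log2

def c(snr):
    """AWGN capacity in bits per real symbol."""
    return 0.5 * log2(1 + snr)

def decode_forward(snr1, snr2):
    # Relay fully decodes, so the rate is limited by the weaker hop.
    return min(c(snr1), c(snr2))

def amplify_forward(snr1, snr2):
    # Relay scales its noisy observation; the noise of hop 1 is amplified,
    # giving the cascaded effective SNR of the two hops.
    return c(snr1 * snr2 / (1 + snr1 + snr2))

# AF never beats DF on this network, and the relative gap is worst at low SNR:
for snr in (0.1, 1.0, 100.0):
    print(snr, round(decode_forward(snr, snr), 3), round(amplify_forward(snr, snr), 3))
```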

Cooperation in Wireless Networks

- Relaying is a simple form of cooperation
- Many more complex ways to cooperate:
  - Virtual MIMO, generalized relaying, interference forwarding, and one-shot/iterative conferencing
- Many theoretical and practical issues:
  - Overhead, forming groups, dynamics, full-duplex, synch, ...

Generalized Relaying and Interference Forwarding

[Diagram: TX1 sends X_1, TX2 sends X_2; the relay observes Y_3 = X_1 + X_2 + Z_3 and transmits X_3 = f(Y_3); RX1 observes Y_4 = X_1 + X_2 + X_3 + Z_4 and RX2 observes Y_5 = X_1 + X_2 + X_3 + Z_5.]

- Can forward message and/or interference
- Relay can forward all or part of the messages
  - Much room for innovation
- Relay can forward interference
  - To help subtract it out

Analog network coding

[Diagram: source S reaches destination D through relays with powers P_s, P_1, P_2, P_3, P_4.]

- Beneficial to forward both interference and message
- In fact, it can achieve capacity: for large powers P_s, P_1, P_2, ..., analog network coding (AF) approaches capacity (Maric/Goldsmith'12). Asymptopia?

Interference Alignment

- Addresses the number of interference-free signaling dimensions in an interference channel
- Based on our orthogonal analysis earlier, it would appear that resources need to be divided evenly, so only 2BT/N dimensions are available
- Jafar and Cadambe showed that by aligning interference, 2BT/2 dimensions are available
  - Everyone gets half the cake!
  - Except at finite SNRs

Backing off from: infinite SNR

- High-SNR myth: decode-and-forward is equivalent to amplify-forward, which is optimal at high SNR
  - The noise-amplification drawback of AF diminishes at high SNR
  - Amplify-forward achieves full degrees of freedom in MIMO systems (Borade/Zheng/Gallager'07)
  - At high SNR, amplify-forward is within a constant gap from the capacity upper bound as the received powers increase (Maric/Goldsmith'07)
- Reality: optimal relaying unknown at most SNRs
  - Amplify-forward is highly suboptimal outside the high-SNR-per-node regime, which is not always the high-power or high-channel-gain regime
  - Amplify-forward has an unbounded gap from capacity in the high-channel-gain regime (Avestimehr/Diggavi/Tse'11)
  - Relay strategy should depend on the worst link
  - Decode-forward used in practice

Capacity and Feedback

- Capacity under feedback largely unknown
  - Channels with memory
  - Finite-rate and/or noisy feedback
  - Multiuser channels
  - Multihop networks
- ARQ is ubiquitous in practice
  - Works well on finite-rate noisy feedback channels
  - Reduces end-to-end delay
- Why hasn't theory met practice when it comes to feedback?

PtP Memoryless Channels: Perfect Feedback

- Shannon: feedback does not increase capacity of DMCs
- Schalkwijk-Kailath scheme for AWGN channels
  - Low-complexity linear recursive scheme
  - Achieves capacity
  - Double-exponential decay in error probability

[Diagram: W -> encoder -> AWGN channel (+) -> decoder -> W-hat, with the channel output fed back to the encoder.]
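To see why a linear recursive scheme is so powerful, here is a toy Monte Carlo sketch of the idea (a simplified variant constructed for illustration, not the exact Schalkwijk-Kailath scheme): the transmitter repeatedly sends the receiver's scaled estimation error of a message point, and the MMSE error variance contracts by a factor (1 + P) per channel use.

```python
import random

def sk_like_feedback(theta, n, P, v0=1.0, rng=random):
    """Linear feedback transmission of a message point theta over a unit-noise
    AWGN channel with power P, using perfect output feedback.
    Returns the receiver's estimate after n channel uses."""
    est, v = 0.0, v0                             # receiver estimate, error variance
    for _ in range(n):
        x = ((P / v) ** 0.5) * (theta - est)     # send scaled estimation error
        y = x + rng.gauss(0.0, 1.0)              # AWGN channel
        est += (P * v) ** 0.5 / (P + 1) * y      # MMSE update at the receiver
        v /= 1 + P                               # error variance contracts each use
    return est

# Error variance after n uses is about v0 / (1 + P)**n.
random.seed(0)
errs = [(sk_like_feedback(0.3, 20, P=1.0) - 0.3) ** 2 for _ in range(500)]
print(sum(errs) / len(errs))
```

With a geometrically shrinking estimation error, quantizing the message onto a grid of points yields the very fast decay in block error probability that the slide cites.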

Backing off from: Perfect Feedback

[Diagram: message m in {1, ..., e^{nR}} -> channel encoder -> X_i -> AWGN channel (noise ~ N(0,1)) -> Y_i -> decoder -> m-hat; a feedback module returns U_i to the encoder.]

- [Shannon 59]: no feedback: Pr(m-hat != m) decays at best exponentially, e^{-O(n)}
- [Pinsker, Gallager et al.]: perfect feedback (infinite rate, no noise): Pr(m-hat != m) can decay as an iterated exponential, exp(-exp(... exp(O(n)) ...))
- [Kim et al. 07/10]: feedback corrupted by AWGN: the decay is again at most exponential, e^{-O(n)}
- [Polyanskiy et al. 10]: noiseless feedback reduces the minimum energy per bit when nR is fixed and n -> infinity
- Objective: choose the encoder and the feedback module to maximize the decay rate of the error probability

Gaussian Channel with Rate-Limited Feedback

[Diagram: channel encoder -> X_i -> AWGN channel (noise ~ N(0,1)) -> Y_i -> decoder -> m-hat; a feedback module returns U_i to the encoder over a rate-limited, noiseless link.]

- Power constraint: E[sum_{i=1}^n |X_i|^2] <= nP
- Error probability: P_e(n, R, R_FB, P)
- Feedback is rate-limited; no noise

A super-exponential error probability is achievable if and only if R_FB > R:

- R_FB < R: the error exponent is finite but higher than the no-feedback error exponent,
  P_e(n, R, R_FB, P) ~ e^{-n(E_NoFB(R) + R_FB + o(1))}
- R_FB > R: double-exponential error probability,
  P_e(n, R, R_FB, P) ~ e^{-e^{O(n)}}
- R_FB > L*R: L-fold exponential error probability,
  P_e(n, R, R_FB, P) ~ exp(-exp(... exp(O(n)) ...)) (L nested exponentials)
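For intuition about how far apart these decay regimes are, a throwaway numeric sketch (illustrative only) comparing the exponent scale -ln(P_e) at a small blocklength:

```python
from math import exp

def lfold(n, L):
    """Iterate x -> exp(x) L times starting from n: the exponent scale of
    an (L+1)-fold exponentially small error probability e^{-lfold(n, L)}."""
    x = float(n)
    for _ in range(L):
        x = exp(x)
    return x

n = 3
print("exponential decay,        -ln P_e ~", n)
print("double-exponential decay, -ln P_e ~", lfold(n, 1))
print("triple-exponential decay, -ln P_e ~", lfold(n, 2))
```

Even at n = 3, each extra level of exponentiation changes the reliability scale by many orders of magnitude, which is why the feedback rate threshold matters.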
Feedback under Energy/Delay Constraint

[Diagram: an m-bit encoder sends S = b_1 ... b_m over the forward channel; the receiver's m-bit decoder produces the estimate S-hat_t, which is sent back over the feedback channel as S-tilde_t.]

- Forward energy: round t uses energy E_t, with Pr(S-hat_t != S) <= ε(E_t)
- Feedback energy: the receiver sends back S-hat_t with energy E_t^FB, with Pr(S-tilde_t != S-hat_t) <= ε(E_t^FB)
- If S-tilde_t = S, the transmitter sends a termination alarm; otherwise, it resends with energy E_{t+1}
- If the termination alarm is received, the receiver reports S-hat_t as the decoded message

Constraints:
- Decoding delay: T
- Total energy: sum_{t=1}^T (E_t + E_t^FB) <= E_tot

Objective: choose {E_t, E_t^FB}_{t=1}^T to minimize the overall probability of error P_e(E_tot, T).
Feedback Gain under Energy/Delay Constraint

- Depends on the error probability model ε(·)
- Exponential error model: ε(x) = β e^{-αx}
  - Applicable when Tx energy dominates
  - Feedback gain is high if the total energy is large enough
  - No feedback gain for energy budgets below a threshold
- Super-exponential error model: ε(x) = β e^{-αx^2}
  - Applicable when Tx and coding energy are comparable
  - No feedback gain for energy budgets E_tot above a threshold
Backing off from: perfect feedback

- Memoryless point-to-point channels:
  - Capacity unchanged with perfect feedback
  - Simple linear scheme dramatically improves the error decay (Schalkwijk-Kailath: double exponential)
  - Feedback reduces energy consumption
- Capacity of feedback channels largely unknown
  - Unknown for general channels with memory and perfect feedback
  - Unknown under finite-rate and/or noisy feedback
  - Unknown in general for multiuser channels
  - Unknown in general for multihop networks
- ARQ is ubiquitous in practice
  - Assumes channel errors
  - Works well on finite-rate noisy feedback channels
  - Reduces end-to-end delay


- No feedback
- Feedback
  - Output feedback
  - Channel information (CSI)
  - Acknowledgements
  - Something else?
  - Noisy/compressed

How to use feedback in wireless networks?

Interesting applications to neuroscience




Backing off from: infinite sampling

[Diagram: x(t) -> channel h(t) -> + noise -> sampling mechanism (rate f_s) -> y[n]; the channel plus sampler form a new channel.]

- For a given sampling mechanism (i.e., a "new" channel):
  - What is the optimal input signal?
  - What is the tradeoff between capacity and sampling rate?
  - What known sampling methods lead to highest capacity?
- What is the optimal sampling mechanism?
  - Among all possible (known and unknown) sampling schemes

Capacity under Sampling w/ Prefilter

[Diagram: x(t) -> channel h(t) -> + noise -> prefilter s(t) -> sampler at t = nT_s -> y[n].]

- Theorem: the channel capacity is determined by waterfilling over the "folded" SNR filtered by S(f)
- The prefilter suppresses aliasing
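Waterfilling itself can be sketched in a few lines. This is a generic discrete version over assumed unit-noise subchannel gains, not the slides' continuous folded-SNR formulation:

```python
def waterfill(gains, power, iters=60):
    """Allocate `power` across parallel subchannels with gains `gains` to
    maximize sum(log2(1 + g_i * P_i)): P_i = max(0, mu - 1/g_i), with the
    water level mu chosen so the allocations sum to the budget."""
    lo, hi = 0.0, power + max(1.0 / g for g in gains)
    for _ in range(iters):                       # bisect on the water level mu
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Stronger subchannels get more power; sufficiently weak ones get none.
alloc = waterfill([4.0, 1.0, 0.25], power=1.0)
print(alloc)
```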

Capacity not monotonic in f_s

- Consider a "sparse" channel
- Capacity is not monotonic in f_s!
- Single-branch sampling fails to exploit channel structure

Filter Bank Sampling

[Diagram: x(t) -> channel h(t) -> + noise -> bank of filters s_1(t), ..., s_i(t), ..., s_m(t), each sampled at t = n(mT_s), producing y_1[n], ..., y_i[n], ..., y_m[n].]

- Theorem: capacity of the sampled channel using a bank of m filters with aggregate rate f_s
- Similar to MIMO; no combining!

Equivalent MIMO Channel Model

[For each f, the aliased copies X(f - k f_s), channel responses H(f - k f_s), noise N(f - k f_s), and filter responses S_i(f - k f_s) form an equivalent MIMO channel with outputs Y_1(f), ..., Y_i(f), ..., Y_m(f).]

Theorem 3: the channel capacity of the sampled channel using a bank of m filters with aggregate rate f_s is obtained, for each f, by waterfilling over the singular values of the equivalent MIMO channel.

- MIMO decoupling and pre-whitening
- Selects the m branches with the m highest SNRs

Joint Optimization of Input and Filter Bank

- Example (bank of 2 branches): the two branches capture the aliased copies with the highest and second-highest SNRs; low-SNR copies are suppressed.
- Capacity monotonic in f_s
- Can we do better?

Sampling with Modulator+Filter (1 or more)

[Diagram: x(t) -> channel h(t) -> + noise -> modulator p(t) -> filter q(t) -> sampler -> y[n]; a bank version uses branches s_1(t), ..., s_m(t) sampled at t = n(mT_s).]

- Theorem: the capacity of a single modulator+filter branch equals that of filter-bank sampling
- Theorem: a bank of modulator+filter branches is optimal among all time-preserving nonuniform sampling techniques of rate f_s

Backing off from: Infinite processing power

- Is Shannon capacity still a good metric for system design?
- Our approach: model power consumption via a network graph, with power consumed in nodes and wires
  - Extends early work of El Gamal et al.'84 and Thompson'80

Fundamental area-time-performance tradeoffs

- For encoding/decoding "good" codes, stay away from capacity!
- Close to capacity we have:
  - Large chip area (area occupied by wires)
  - More time (encoding/decoding clock cycles)
  - More power: total power diverges to infinity!
- Regular LDPCs closer to bound than capacity-approaching LDPCs!
- Need novel code designs with short wires, good performance

Conclusions

- Information theory asymptopia has provided much insight and decades of sublime delight to researchers
- Backing off from infinity is required for some problems to gain insight and fundamental bounds
- New mathematical tools and new ways of applying conventional tools are needed for these problems
- Many interesting applications in finance, biology, neuroscience, ...