AI and Robotics


AGC

DSP

1

Problem: equalise, with an FIR filter, the distorting effect of a communication channel that may be changing with time.

If the channel were fixed, then a possible solution could be based on the Wiener filter approach.

In that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response.

When the filter is operating in an unknown environment, these required quantities need to be estimated from the accumulated data.


The problem is particularly acute when not only the environment is changing but also the data involved are non-stationary.

In such cases we need to follow, over time, the behaviour of the signals and of the correlation parameters as the environment changes.

This would essentially produce a temporally adaptive filter.


A possible framework is:

[Diagram: the input {x[n]} feeds a filter with weights w, producing the estimate d̂[n]; the error e[n] = d[n] − d̂[n] drives an algorithm that updates the filter weights.]


Applications are many

Digital Communications

Channel Equalisation

System identification

Smart antenna systems

Blind system equalisation

And many, many others


Applications


Echo Cancellers in Local Loops

[Diagram: two transceivers (Tx1/Rx1 and Tx2/Rx2) connected through hybrids over the local loop, with an echo canceller at each end subtracting its estimate of the echo from the received signal.]


[Diagram: noise cancellation — the primary signal (signal + noise) has subtracted from it the output of an FIR filter driven by the reference signal (noise); the difference is the error used for adaptation.]


System Identification

[Diagram: the signal drives both an unknown system and an FIR filter; the difference between their outputs is the error used to adapt the filter.]


System Equalisation

[Diagram: the signal passes through the unknown system and then through the FIR filter; the filter output is compared with a delayed version of the original signal, and the difference drives the adaptation.]


[Diagram: predictive configuration — a delayed version of the signal feeds the FIR filter, whose output is subtracted from the current signal.]


Linear Combiner

[Diagram: adaptive linear combiner operating in the presence of interference.]


Basic principles:

1) Form an objective function (performance criterion)

2) Find the gradient of the objective function with respect to the FIR filter weights

3) There are several different approaches that can be used at this point

4) Form a differential/difference equation


Let the desired signal be $d[n]$, the input signal $x[n]$, and the output $y[n]$.

Now form the vectors

$$\mathbf{x}[n] = \big[\,x[n]\;\; x[n-1]\;\; \ldots\;\; x[n-m+1]\,\big]^T$$

$$\mathbf{h} = \big[\,h[0]\;\; h[1]\;\; \ldots\;\; h[m-1]\,\big]^T$$

So that

$$y[n] = \mathbf{h}^T \mathbf{x}[n]$$
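The tap vector and inner-product output above can be sketched directly in Python; NumPy and the particular tap values are assumptions for illustration only.

```python
import numpy as np

# Illustrative 3-tap weight vector h and a short input record x
h = np.array([0.5, 0.3, 0.2])           # h[0], h[1], h[2]
x = np.array([1.0, 2.0, 3.0, 4.0])      # x[0] .. x[3]

def regressor(x, n, m):
    """Tap-input vector x[n] = [x[n], x[n-1], ..., x[n-m+1]]^T (zeros before n = 0)."""
    return np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(m)])

def filter_output(h, x, n):
    """Filter output y[n] = h^T x[n]."""
    return float(h @ regressor(x, n, len(h)))
```

For example, `filter_output(h, x, 2)` computes y[2] = 0.5·x[2] + 0.3·x[1] + 0.2·x[0].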


Then form the objective function

$$J(\mathbf{h}) = E\{(d[n]-y[n])^2\} = \sigma_d^2 - 2\mathbf{p}^T\mathbf{h} + \mathbf{h}^T\mathbf{R}\mathbf{h}$$

where

$$\mathbf{R} = E\{\mathbf{x}[n]\mathbf{x}^T[n]\}, \qquad \mathbf{p} = E\{\mathbf{x}[n]\,d[n]\}$$


We wish to minimise this function at the instant $n$.

Using Steepest Descent we write

$$\mathbf{h}[n+1] = \mathbf{h}[n] - \tfrac{1}{2}\,\mu\,\nabla J(\mathbf{h}[n])$$

But

$$\nabla J(\mathbf{h}) = -2\mathbf{p} + 2\mathbf{R}\mathbf{h}$$

So that the “weights update equation” is

$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,(\mathbf{p} - \mathbf{R}\mathbf{h}[n])$$

Since the objective function is quadratic, this expression will converge in $m$ steps.

The equation is not practical.

If we knew $\mathbf{R}$ and $\mathbf{p}$ a priori we could find the required solution (Wiener) as

$$\mathbf{h}_{opt} = \mathbf{R}^{-1}\mathbf{p}$$
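When $\mathbf{R}$ and $\mathbf{p}$ are known, the Wiener solution is a single linear solve. A minimal sketch, with $\mathbf{R}$ and $\mathbf{p}$ chosen arbitrarily for illustration:

```python
import numpy as np

# Hypothetical known statistics (values are for illustration only)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])   # correlation matrix E{x x^T}
p = np.array([0.5, 0.25])    # cross-correlation vector E{x d}

# Wiener solution h_opt = R^{-1} p; solve() is preferred over forming R^{-1}
h_opt = np.linalg.solve(R, p)
```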

However, these matrices are not known.

Approximate expressions are obtained by ignoring the expectations in the earlier complete forms:

$$\hat{\mathbf{R}}[n] = \mathbf{x}[n]\mathbf{x}^T[n]$$

$$\hat{\mathbf{p}}[n] = \mathbf{x}[n]\,d[n]$$

This is very crude. However, because the update equation accumulates such quantities, progressively we expect the crude form to improve.


The LMS Algorithm

Thus we have

$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,\mathbf{x}[n]\big(d[n] - \mathbf{x}^T[n]\mathbf{h}[n]\big)$$

Where the error is

$$e[n] = d[n] - \mathbf{x}^T[n]\mathbf{h}[n] = d[n] - y[n]$$

And hence we can write

$$\mathbf{h}[n+1] = \mathbf{h}[n] + \mu\,\mathbf{x}[n]\,e[n]$$

This is sometimes called stochastic gradient descent.
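The three equations above translate directly into a loop. A minimal sketch, identifying an assumed 3-tap system from noiseless data; NumPy, the step size, and `h_true` are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

h_true = np.array([0.7, -0.3, 0.2])   # unknown system (assumed for this demo)
m = len(h_true)
N = 5000
x = rng.standard_normal(N)            # white input
mu = 0.01                             # step size

h = np.zeros(m)                       # initial weights set to zero
for n in range(m - 1, N):
    xn = x[n - m + 1:n + 1][::-1]     # x[n] = [x[n], x[n-1], x[n-2]]^T
    d = h_true @ xn                   # desired response d[n]
    e = d - h @ xn                    # e[n] = d[n] - h^T[n] x[n]
    h = h + mu * xn * e               # h[n+1] = h[n] + mu x[n] e[n]
```

After the loop, `h` should be close to `h_true`; with noisy data it would instead fluctuate around it with a variance set by `mu`.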


Convergence

The parameter $\mu$ is the step size, and it should be selected carefully.

If too small, it takes too long to converge; if too large, it can lead to instability.

Write the autocorrelation matrix in the eigen-factorisation form

$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$$


Convergence

Where $\mathbf{Q}$ is orthogonal and $\boldsymbol{\Lambda}$ is diagonal, containing the eigenvalues.

The error in the weights with respect to their optimal values, $\mathbf{e}_h[n] = \mathbf{h}[n] - \mathbf{h}_{opt}$, is given by (using the Wiener solution for $\mathbf{p}$)

$$\mathbf{h}[n+1] - \mathbf{h}_{opt} = \mathbf{h}[n] - \mathbf{h}_{opt} + \mu\,(\mathbf{R}\mathbf{h}_{opt} - \mathbf{R}\mathbf{h}[n])$$

We obtain

$$\mathbf{e}_h[n+1] = \mathbf{e}_h[n] - \mu\,\mathbf{R}\,\mathbf{e}_h[n]$$


Convergence

Or equivalently

$$\mathbf{e}_h[n+1] = (\mathbf{I} - \mu\,\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T)\,\mathbf{e}_h[n]$$

I.e.

$$\mathbf{e}_h[n+1] = \mathbf{Q}(\mathbf{I} - \mu\boldsymbol{\Lambda})\mathbf{Q}^T\,\mathbf{e}_h[n]$$

Thus we have

$$\mathbf{Q}^T\mathbf{e}_h[n+1] = (\mathbf{I} - \mu\boldsymbol{\Lambda})\,\mathbf{Q}^T\mathbf{e}_h[n]$$

Form a new variable

$$\mathbf{v}[n] = \mathbf{Q}^T\mathbf{e}_h[n]$$


Convergence

So that

$$\mathbf{v}[n+1] = (\mathbf{I} - \mu\boldsymbol{\Lambda})\,\mathbf{v}[n]$$

Thus each element of this new variable depends on its previous value via a scaling constant $(1 - \mu\lambda_i)$.

The equation will therefore have an exponential form in the time domain, and the largest coefficient on the right-hand side will dominate.


Convergence

We require that

$$|1 - \mu\lambda_{\max}| < 1$$

Or

$$0 < \mu < \frac{2}{\lambda_{\max}}$$

In practice we take a much smaller value than this.
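The bound is easy to compute once $\mathbf{R}$ is known. A small sketch, where $\mathbf{R}$ is an arbitrary illustrative matrix:

```python
import numpy as np

# Illustrative autocorrelation matrix
R = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = np.linalg.eigvalsh(R)       # eigenvalues of a symmetric matrix, sorted ascending
lam_max = lam[-1]
mu_bound = 2.0 / lam_max          # stability requires 0 < mu < 2 / lam_max
```

Here the eigenvalues are 1 and 3, so any `mu` below 2/3 keeps the recursion stable; in line with the slide, one would pick a value well below the bound.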


Estimates

Then it can be seen that, as $n \to \infty$, the weight update equation yields

$$E\{\mathbf{h}[n+1]\} = E\{\mathbf{h}[n]\}$$

And on taking expectations of both sides of it we have

$$E\{\mathbf{h}[n+1]\} = E\{\mathbf{h}[n]\} + \mu\,E\{\mathbf{x}[n]\big(d[n] - \mathbf{x}^T[n]\mathbf{h}[n]\big)\}$$

Or

$$\mathbf{0} = E\{\mathbf{x}[n]\,d[n] - \mathbf{x}[n]\mathbf{x}^T[n]\mathbf{h}[n]\}$$


Limiting forms

This indicates that the solution
ultimately tends to the Wiener form

I.e. the estimate is unbiased


The excess mean square error in the objective function is due to gradient noise.

Assuming uncorrelatedness, set

$$J_{\min} = \sigma_d^2 - \mathbf{p}^T\mathbf{h}_{opt}$$

where $\sigma_d^2$ is the variance of the desired response; the second term is zero when input and desired response are uncorrelated.

The excess error relative to the minimum is

$$J_{XS} = \frac{J_{LMS} - J_{\min}}{J_{\min}}$$


It can be shown that the misadjustment is given by

$$J_{XS}/J_{\min} = \sum_{i=1}^{m} \frac{\mu\lambda_i}{1 - \mu\lambda_i}$$


Normalised LMS

To make the step size respond to the signal's needs, normalise the update by the input power:

$$\mathbf{h}[n+1] = \mathbf{h}[n] + \frac{\tilde{\mu}}{1 + \|\mathbf{x}[n]\|^2}\,\mathbf{x}[n]\,e[n]$$

In this case $0 < \tilde{\mu} < 1$ for the step size.
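A sketch of the normalised update; compared with plain LMS, only the step-size line changes. The signal, taps, and step size are illustrative assumptions, and the regularising 1 in the denominator follows the expression above.

```python
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([0.5, 0.25])     # assumed system to identify
m, N = 2, 4000
x = 3.0 * rng.standard_normal(N)   # strong input; NLMS scales the effective step down
mu_t = 0.5                         # normalised step size, 0 < mu_t < 1

h = np.zeros(m)
for n in range(m - 1, N):
    xn = x[n - m + 1:n + 1][::-1]
    e = h_true @ xn - h @ xn
    h = h + (mu_t / (1.0 + xn @ xn)) * xn * e   # step normalised by 1 + ||x[n]||^2
```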


Transform-based LMS

[Diagram: the adaptive filter framework of before, with a Transform applied at the filter input {x[n]} and an Inverse Transform stage; the error e[n] = d[n] − d̂[n] drives the algorithm that updates the filter weights w.]


We have the Least Squares solution

$$\mathbf{h}[n] = \mathbf{R}^{-1}[n]\,\mathbf{p}[n]$$

with

$$\mathbf{R}[n] = \sum_{i=1}^{n} \mathbf{x}[i]\mathbf{x}^T[i], \qquad \mathbf{p}[n] = \sum_{i=1}^{n} \mathbf{x}[i]\,d[i]$$

However, this is computationally very intensive to implement.

Alternative forms make use of recursive estimates of the matrices involved.


Recursive Least Squares

Firstly we note that

$$\mathbf{p}[n] = \mathbf{p}[n-1] + \mathbf{x}[n]\,d[n]$$

$$\mathbf{R}[n] = \mathbf{R}[n-1] + \mathbf{x}[n]\mathbf{x}^T[n]$$

We now use the Matrix Inversion Lemma (or the Sherman–Morrison formula).


Recursive Least Squares (RLS)

Let

$$\mathbf{k}[n] = \frac{\mathbf{R}^{-1}[n-1]\,\mathbf{x}[n]}{1 + \mathbf{x}^T[n]\,\mathbf{R}^{-1}[n-1]\,\mathbf{x}[n]}, \qquad \mathbf{P}[n] = \mathbf{R}^{-1}[n]$$

Then

$$\mathbf{P}[n] = \mathbf{P}[n-1] - \mathbf{k}[n]\,\mathbf{x}^T[n]\,\mathbf{P}[n-1]$$

The quantity $\mathbf{k}[n]$ is known as the Kalman gain.

Recursive Least Squares

Now use

$$\mathbf{k}[n] = \mathbf{P}[n]\,\mathbf{x}[n]$$

in the computation of the filter weights.

From the earlier expression for updates we have

$$\mathbf{h}[n] = \mathbf{P}[n]\,\mathbf{p}[n] = \mathbf{P}[n]\big(\mathbf{p}[n-1] + \mathbf{x}[n]\,d[n]\big)$$

Using the recursion for $\mathbf{P}[n]$,

$$\mathbf{P}[n]\,\mathbf{p}[n-1] = \mathbf{P}[n-1]\,\mathbf{p}[n-1] - \mathbf{k}[n]\,\mathbf{x}^T[n]\,\mathbf{P}[n-1]\,\mathbf{p}[n-1]$$

And hence

$$\mathbf{h}[n] = \mathbf{h}[n-1] + \mathbf{k}[n]\big(d[n] - \mathbf{x}^T[n]\,\mathbf{h}[n-1]\big)$$
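Putting the gain and update equations together gives the standard RLS loop. A minimal sketch; the δ·I initialisation of P is a common convention rather than something from the slides, and the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

h_true = np.array([1.0, -0.5])   # assumed system to identify
m, N = 2, 200
x = rng.standard_normal(N)

h = np.zeros(m)
P = 100.0 * np.eye(m)            # P[0] = delta * I, with delta large
for n in range(m - 1, N):
    xn = x[n - m + 1:n + 1][::-1]
    d = h_true @ xn
    k = P @ xn / (1.0 + xn @ P @ xn)   # Kalman gain k[n]
    h = h + k * (d - xn @ h)           # h[n] = h[n-1] + k[n](d[n] - x^T[n] h[n-1])
    P = P - np.outer(k, xn) @ P        # P[n] = P[n-1] - k[n] x^T[n] P[n-1]
```

Note how RLS locks onto the system after only a handful of samples, in contrast to the slow geometric convergence of LMS.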

Kalman Filters

The Kalman filter is a sequential estimator, normally derived from

The Bayes approach

The Innovations approach

Essentially both lead to the same equations as RLS, but the underlying assumptions are different.


Kalman Filters

The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.

Standard formulation:

$$\mathbf{x}[n+1] = \mathbf{A}\mathbf{x}[n] + \mathbf{w}[n]$$

$$\mathbf{y}[n] = \mathbf{C}[n]\mathbf{x}[n] + \boldsymbol{\nu}[n]$$


Kalman Filters

Kalman filters may be seen as RLS with the following correspondence:

State space                                         RLS
State-update matrix: A[n]                           I
State-noise variance: Q[n] = E{w[n]wᵀ[n]}           0
Observation matrix: C[n]                            xᵀ[n]
Observations: y[n]                                  d[n]
State estimate: x[n]                                h[n]

Cholesky Factorisation

In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive definite matrix.

Express

$$\mathbf{R} = \mathbf{L}\mathbf{L}^T$$

where $\mathbf{L}$ is lower triangular.

There are many techniques for determining the factorisation.
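NumPy exposes the factorisation directly; a sketch with an arbitrary positive-definite matrix chosen for illustration:

```python
import numpy as np

# Positive-definite matrix (illustrative values)
R = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(R)    # lower-triangular L with R = L L^T
```

Solving R h = p then reduces to two triangular solves, and only L (half the matrix) needs to be stored.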