Robust Automotive Positioning: Integration of GPS and ... - TUKE


5.2 APPENDICES TO THE CHAPTER

5.2.1 Robust Automotive Positioning

Jon Kronander

LINKÖPING, LITH-ISY-EX-3578-2004


Chapter 2

Sensor Fusion

The basic principle behind a robust positioning system is the fusion of information from several different sensors that are in some way related to the position of the vehicle. To be able to perform this fusion, a model relating the vehicle to the sensor outputs is necessary.


2.1 Linear State Space Models

When approximating reality with a model, it is important to choose a level of approximation that makes the model computationally manageable while still providing a meaningful description of the properties of interest. A commonly used model structure is the linear state space description:

x_{k+1} = F_k x_k + G_k w_k                                  (2.1a)
y_k = H_k x_k + e_k                                          (2.1b)


where x_k is the state vector, y_k the measurable system output, and w_k and e_k are white noise processes with the following statistical properties:


E(w_k) = 0                                                   (2.2a)
E(e_k) = 0                                                   (2.2b)
E(w_k w_k^T) = Q_k                                           (2.2c)
E(e_k e_k^T) = R_k                                           (2.2d)
E(w_k w_{k+i}^T) = 0,  i ≠ 0                                 (2.2e)
E(e_k e_{k+i}^T) = 0,  i ≠ 0                                 (2.2f)
E(e_k w_i^T) = 0,  for all k, i                              (2.2g)



The initial state vector x_0 is usually considered to be a random vector:

E(x_0) = x̄_0                                                 (2.3a)
E((x_0 − x̄_0)(x_0 − x̄_0)^T) = P_0                            (2.3b)


In many applications, it is of great interest to know the value of the state vector of a system model. A problem is that the state variables are usually not directly measurable. Instead, the measurements consist of noisy linear (or nonlinear) combinations of the state variables. Determining the states of a linear system, given access to its known inputs and measurable outputs, is referred to as the observer design problem [13].
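The model (2.1) with noise properties (2.2) can be simulated directly. The following sketch uses a hypothetical constant-velocity model; all matrices and numeric values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Hypothetical 2-state (position/velocity) instance of model (2.1):
# x_{k+1} = F x_k + G w_k,   y_k = H x_k + e_k
T = 1.0                                   # sample interval
F = np.array([[1.0, T], [0.0, 1.0]])      # constant-velocity dynamics
G = np.array([[T**2 / 2], [T]])           # acceleration noise enters here
H = np.array([[1.0, 0.0]])                # only position is measured
Q = np.array([[0.1]])                     # process noise covariance (2.2c)
R = np.array([[1.0]])                     # measurement noise covariance (2.2d)

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])                  # initial state x_0
xs, ys = [], []
for k in range(50):
    w = rng.multivariate_normal(np.zeros(1), Q)   # white process noise (2.2a,c)
    e = rng.multivariate_normal(np.zeros(1), R)   # white measurement noise (2.2b,d)
    ys.append(H @ x + e)                  # noisy output y_k (2.1b)
    xs.append(x)
    x = F @ x + G @ w                     # state propagation (2.1a)
```

The recorded outputs ys are exactly the kind of noisy state combinations that the observer problem below starts from.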


2.2 Basics of Observer Design

As stated in the previous section, the task of the observer is to provide an estimate x̂_k of the true state x_k. In principle, such an estimate could be calculated in the following way:


x̂_0 = x̄_0                                                    (2.4)
x̂_{k+1} = F_k x̂_k                                            (2.5)


It is obvious that this observer is suboptimal; it relies only on the initial state estimate as input, and it does not use the information contained in the measurements y_k. To utilize the measurements, a feedback term can be added to the observer equation:


x̂_{k+1} = F_k x̂_k + K e_k                                    (2.6)
e_k = y_k − H_k x̂_k                                          (2.7)



To see the benefit of this, consider the special case of a time invariant state space model with no process or measurement noise:

x_{k+1} = F x_k                                              (2.8)
y_k = H x_k                                                  (2.9)


The observer error, x̃_k = x_k − x̂_k, will now obey the equation:

x̃_{k+1} = F x̃_k − K(y_k − H x̂_k) = (F − KH) x̃_k             (2.10)

This implies that

x̃_k = (F − KH)^k x̃_0                                         (2.11)


Thus, if we can find K so that F − KH has any desired eigenvalues, then the error x̃_k can be made to go to zero at any desired rate as k increases. If the system is observable, K can be chosen to give F − KH arbitrary eigenvalues [11].
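The eigenvalue argument can be checked numerically. In this sketch both the model matrices and the gain K are hypothetical examples; K is placed by hand so that F − KH has eigenvalues 0.4 and 0.6:

```python
import numpy as np

# Noise-free time-invariant model (2.8)-(2.9); values are illustrative only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])

# Hand-placed gain: F - K H = [[0, 1], [-0.24, 1]] has eigenvalues 0.4, 0.6
K = np.array([[1.0], [0.24]])

x = np.array([[2.0], [1.0]])     # true state
xhat = np.zeros((2, 1))          # observer state, deliberately wrong guess
for k in range(30):
    y = H @ x                                 # measurement (2.9)
    xhat = F @ xhat + K @ (y - H @ xhat)      # observer (2.6)-(2.7)
    x = F @ x                                 # true dynamics (2.8)

# By (2.11) the error is (F - KH)^k x~_0, so it shrinks like 0.6^k
err = np.linalg.norm(x - xhat)
```

After 30 steps the error is negligible even though the true position keeps growing, which is exactly the point of the feedback term.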

To return to the more general model (2.1a), the corresponding observer error can be written

x̃_{k+1} = (F_k − K_k H_k) x̃_k + G_k w_k − K_k e_k            (2.12)


Now the choice of K is not as obvious as in the case without process or measurement noise.


2.3 The Kalman Filter

The problem of choosing K in (2.12) is a nontrivial compromise between rate of convergence and sensitivity to measurement noise. The choice of K that minimizes the observer error in a least squares sense is given by a set of recursive equations named the Kalman filter (after its inventor Rudolf Kalman). The Kalman filter equations are:

x̂_{0|−1} = x̄_0                                               (2.13a)
P_{0|−1} = P_0                                               (2.13b)

S_k = H_k P_{k|k−1} H_k^T + R_k                              (2.14a)
K_k = P_{k|k−1} H_k^T S_k^{−1}                               (2.14b)
x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − H_k x̂_{k|k−1})              (2.14c)
P_{k|k} = P_{k|k−1} − K_k H_k P_{k|k−1}                      (2.14d)

x̂_{k+1|k} = F_k x̂_{k|k}                                      (2.15a)
P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T                (2.15b)


where P_{k|k} is the covariance of the state estimate error. Equations (2.14) and (2.15) are commonly referred to as the measurement update and the time update, respectively.

Although the matrices Q_k and R_k are meant to represent the true covariances of the process and measurement noise, respectively, they are often viewed as tuning parameters of the filter. For instance, the filter can be tuned to follow rapid changes in a certain state variable more directly by increasing the corresponding diagonal value of the Q matrix. In the same way, the effect of noisy measurements can be reduced by increasing the diagonal values of the R matrix.

In practical applications, the values of Q and R are usually set to some ad hoc value that is believed to reflect the properties of the underlying signal, and then tuned during simulations to achieve the desired compromise between the ability of the filter to track state changes and to suppress measurement noise.
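The recursions (2.13)-(2.15) fit in a short loop. The following sketch runs them on a hypothetical constant-velocity model; all matrices and values are illustrative assumptions, not from the thesis:

```python
import numpy as np

# Kalman filter (2.13)-(2.15) on an illustrative constant-velocity model
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
G = np.array([[T**2 / 2], [T]])
H = np.array([[1.0, 0.0]])
Q = np.array([[0.1]])    # tuning knob: raise to track rapid state changes
R = np.array([[1.0]])    # tuning knob: raise to suppress measurement noise

rng = np.random.default_rng(1)
x = np.array([[0.0], [1.0]])         # true state
xhat = np.zeros((2, 1))              # x^_{0|-1} = x0_bar  (2.13a)
P = 10.0 * np.eye(2)                 # P_{0|-1} = P_0      (2.13b)

for k in range(100):
    y = H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))
    # Measurement update (2.14)
    S = H @ P @ H.T + R                      # innovation covariance (2.14a)
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain (2.14b)
    xhat = xhat + K @ (y - H @ xhat)         # state correction (2.14c)
    P = P - K @ H @ P                        # covariance correction (2.14d)
    # Time update (2.15)
    xhat = F @ xhat                          # (2.15a)
    P = F @ P @ F.T + G @ Q @ G.T            # (2.15b)
    x = F @ x + G @ rng.normal(0.0, np.sqrt(Q[0, 0]), (1, 1))
```

Changing the diagonal entries of Q and R in this loop is precisely the tuning procedure described above.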


2.4 The Extended Kalman Filter

Linear Kalman filter theory has proved to be useful also when working with nonlinear models. The nonlinear state space model is linearized around a nominal trajectory, or alternatively around the latest state estimates, allowing the linear Kalman equations to be applied, cf. Section 5.1. Consider the nonlinear continuous system with discrete time observations:


ẋ_t = f(x_t) + w_t                                           (2.16a)
y_k = h(x_k) + e_k                                           (2.16b)
E(w_t) = 0                                                   (2.16c)
E(w_t w_t^T) = Q_t                                           (2.16d)
E(e_k) = 0                                                   (2.16e)
E(e_k e_k^T) = R_k                                           (2.16f)


2.4.1 Time Update

To perform the time update step of the filter, the nonlinear continuous model can either be linearized and then discretized, or the other way around. The discretized linearization has the following time update equations, see [10]:








x̂_{k+1|k} = x̂_{k|k} + (∫_0^T e^{f′(x̂_{k|k})τ} dτ) f(x̂_{k|k})            (2.17a)
P_{k+1|k} = e^{f′(x̂_{k|k})T} P_{k|k} (e^{f′(x̂_{k|k})T})^T + Q̄_k          (2.17b)


where f′ is the Jacobian of f, and Q̄_k is the covariance of the discretized version of w_t in (2.16a). There are several alternative ways of calculating Q̄_k from Q_t, depending on the assumptions made about w_t. For simplicity, it is here assumed that w_t is white noise such that its total influence during one sample interval is Q̄_k = T Q_t. Other more elaborate assumptions on w_t have also been evaluated, but they did not result in any noticeable benefit.
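Under the assumption Q̄_k = T Q_t, the time update (2.17) can be sketched as below. The pendulum dynamics f and all numeric values are hypothetical examples; the integral of the matrix exponential is evaluated with the standard augmented-matrix identity:

```python
import numpy as np
from scipy.linalg import expm

# EKF time update (2.17) for an illustrative pendulum model x' = f(x) + w_t
def f(x):
    return np.array([x[1], -np.sin(x[0])])

def f_jac(x):                       # Jacobian f'(x)
    return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def time_update(xhat, P, Qt, T):
    A = f_jac(xhat)
    Phi = expm(A * T)               # e^{f'(xhat) T}
    # Integral of e^{A tau} over [0, T]: top-right block of expm of the
    # augmented matrix [[A, I], [0, 0]] * T
    n = len(xhat)
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A
    M[:n, n:] = np.eye(n)
    int_phi = expm(M * T)[:n, n:]
    xhat_next = xhat + int_phi @ f(xhat)         # (2.17a)
    P_next = Phi @ P @ Phi.T + T * Qt            # (2.17b), Qbar_k = T Q_t
    return xhat_next, P_next

xhat, P = np.array([0.1, 0.0]), np.eye(2)
xhat, P = time_update(xhat, P, Qt=0.01 * np.eye(2), T=0.1)
```

The augmented-matrix trick avoids assuming that f′(x̂) is invertible, which a closed-form A⁻¹(e^{AT} − I) evaluation would require.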


2.4.2 Measurement Update

A linear measurement equation can be created by a Taylor expansion of (2.16b) around the predicted state:


y_k ≈ H_k x_k + e_k                                          (2.18a)
H_k = dh(x)/dx |_{x = x̂_{k|k−1}}                             (2.18b)


With this approximation, it is straightforward to apply the standard Kalman filter measurement update equations (2.14a) to (2.14d), with one exception: in (2.14c), the calculation of the residual e_k = y_k − H_k x̂_{k|k−1} should be performed using the nonlinear measurement equation:




e_k = y_k − h(x̂_{k|k−1})                                     (2.19)
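The resulting EKF measurement update can be sketched as follows, using the linearized H_k (2.18b) for the gain and covariance but the nonlinear h in the residual (2.19). The range-only measurement model is a hypothetical example, not from the thesis:

```python
import numpy as np

# EKF measurement update with an illustrative range-only measurement
def h(x):
    return np.array([np.sqrt(x[0]**2 + x[1]**2)])   # scalar range

def h_jac(x):                                        # dh/dx at x (2.18b)
    r = np.sqrt(x[0]**2 + x[1]**2)
    return np.array([[x[0] / r, x[1] / r]])

def measurement_update(xhat, P, y, R):
    Hk = h_jac(xhat)                                 # evaluated at x^_{k|k-1}
    S = Hk @ P @ Hk.T + R                            # (2.14a)
    K = P @ Hk.T @ np.linalg.inv(S)                  # (2.14b)
    e = y - h(xhat)                                  # nonlinear residual (2.19)
    return xhat + K @ e, P - K @ Hk @ P              # (2.14c), (2.14d)

xhat, P = np.array([3.0, 4.0]), np.eye(2)
xhat, P = measurement_update(xhat, P, y=np.array([5.2]), R=0.1 * np.eye(1))
```

Keeping the nonlinear h in the residual is what distinguishes this update from a plain linear Kalman step on (2.18a).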