Internet based smart home robot

LOCALIZATION AND MAPPING

7.1 Introduction to Localization and Mapping
Where Am I?
Frequently, robots find themselves asking this question.
Knowing your location, and being able to navigate to other locations, is extremely important for autonomous robots. The act of finding one's location against a map is known as localization. So how can a robot localize itself? The most common form of intelligent localization and navigation is to use a map, combined with sensor readings and some form of closed-loop motion feedback. But a robot needs to do much more than just localize itself.
Often, we'd like our robots to be able to build their own maps, since map building by hand is tedious, boring, and error-prone. The field of robotics that studies localization and mapping is typically called SLAM (simultaneous localization and mapping).

Mapping and localization are almost always a precursor to navigation. We can now break down the several types of problems that exist in SLAM and lesser problems:
1) Known map, known localization: this is the easiest type of problem. If you know the map and your current location, you can easily navigate to a new location.

2) Known map, unknown localization: you know what the world looks like, but you don't know where you are. Once you find out where you are, you are back to case #1 above.

3) Unknown map, unknown localization: the hardest of all problems. This is known as simultaneous localization and mapping (SLAM).
We have now briefly introduced the main concepts of mapping and localization; the following sections cover the probabilistic tools used to solve them.
7.2 Probability Theory and Bayes' Rule
Probability Theory
Probability theory is the branch of mathematics concerned with probability and the analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. If an individual coin toss or the roll of dice is considered to be a random event, then if repeated many times the sequence of random events will exhibit certain patterns, which can be studied and predicted.

As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also
apply to descriptions of complex systems given only partial
knowledge of their state.
Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The more mathematically advanced measure-theoretic treatment of probability covers the discrete, the continuous, any mix of these two, and more.
 Discrete probability theory deals with events that occur in countable sample spaces (examples: throwing dice, experiments with decks of cards).

We have a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω, an intrinsic "probability" value f(x) is attached, which satisfies the following properties:

1. f(x) ∈ [0, 1] for all x ∈ Ω;
2. Σ_{x ∈ Ω} f(x) = 1.
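As a quick sketch of these two properties, assuming a fair six-sided die as the sample space (the die and its probabilities are illustrative, not from this project):

```python
# A discrete probability function f over a countable sample space Omega.
omega = [1, 2, 3, 4, 5, 6]            # sample space: faces of a fair die
f = {x: 1 / 6 for x in omega}         # intrinsic probability of each element

# Property 1: every value lies in [0, 1]
assert all(0.0 <= p <= 1.0 for p in f.values())
# Property 2: the values sum to one
assert abs(sum(f.values()) - 1.0) < 1e-12

# The probability of an event (a subset of Omega) is the sum over its elements.
def prob(event):
    return sum(f[x] for x in event)

print(prob({2, 4, 6}))  # probability of rolling an even number
```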
 Continuous probability theory deals with events that occur in a continuous sample space.

If the outcome space of a random variable X is the set of real numbers (ℝ) or a subset thereof, then a function called the cumulative distribution function (or cdf) F exists, defined by F(x) = P(X ≤ x). That is, F(x) returns the probability that X will be less than or equal to x.

The cdf necessarily satisfies the following properties:

1. F is a monotonically non-decreasing, right-continuous function;
2. lim_{x→−∞} F(x) = 0;
3. lim_{x→∞} F(x) = 1.
Bayes' Rule
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) is a result that is of importance in the mathematical manipulation of conditional probabilities. Mathematically, Bayes' theorem gives the relationship between the probabilities of A and B, P(A) and P(B), and the conditional probabilities of A given B and B given A, P(A|B) and P(B|A). In its most common form, it is:

P(A|B) = P(B|A) · P(A) / P(B)
The meaning of this statement depends on the interpretation of
probability ascribed to the terms:
Bayesian interpretation
In the Bayesian (or epistemological) interpretation, probability
measures a
degree of
belief
. Bayes' theorem then links the degree
of belief in a proposition before and after accounting for evidence.
For example, suppose somebody proposes that a biased coin is twice as likely to land heads as tails. The degree of belief in this might initially be 50%. The coin is then flipped a number of times to collect evidence. Belief may rise to 70% if the evidence supports the proposition.
For proposition A and evidence B:

P(A), the prior, is the initial degree of belief in A.

P(A|B), the posterior, is the degree of belief having accounted for B.

The quotient P(B|A)/P(B) represents the support B provides for A.
Frequentist interpretation

In the frequentist interpretation, probability measures a proportion of outcomes. For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A, and P(B) that with property B. P(B|A) is the proportion of outcomes with property B out of outcomes with property A, and P(A|B) the proportion of those with A out of those with B.
The role of Bayes' theorem is best visualized with tree diagrams, as shown to the right. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem serves as the link between these different partitionings.
EVENT FORMS

Simple Form

In many applications, for instance in Bayesian inference, the event B is fixed in the discussion, and we wish to consider the impact of its having been observed on our belief in various possible events A. In such a situation the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes' theorem then shows that the posterior probabilities are proportional to the numerator:

P(A|B) ∝ P(A) · P(B|A)   (proportionality over A for given B)   EQ (8.1)
In words:
posterior is proportional to prior times likelihood
If events A1, A2, …, are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, and we know their probabilities up to proportionality, then we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. Denoting the constant of proportionality by c we have

P(A|B) = c · P(A) · P(B|A)   EQ (8.2)

and

P(¬A|B) = c · P(¬A) · P(B|¬A)   EQ (8.3)

Adding these two formulas we deduce that

1 = c · (P(A) · P(B|A) + P(¬A) · P(B|¬A))   EQ (8.4)
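A minimal numeric sketch of EQ (8.2)-(8.4), using the earlier biased-coin proposition; the flip counts and the binomial likelihood model are illustrative assumptions, not data from this project:

```python
from math import comb

# Hypothesis A: the coin lands heads with probability 2/3 (twice as likely
# heads as tails); not-A: the coin is fair. Prior degree of belief 50%.
p_A = 0.5
p_notA = 1 - p_A

# Evidence B: observing 8 heads in 10 flips (binomial likelihood, assumed).
def likelihood(p_heads, heads=8, flips=10):
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

l_A = likelihood(2 / 3)       # P(B | A)
l_notA = likelihood(1 / 2)    # P(B | not-A)

# EQ (8.4): the proportionality constant makes the posteriors sum to one.
c = 1 / (p_A * l_A + p_notA * l_notA)
post_A = c * p_A * l_A            # EQ (8.2)
post_notA = c * p_notA * l_notA   # EQ (8.3)

assert abs(post_A + post_notA - 1.0) < 1e-12
print(round(post_A, 3))
```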
Extended form

Often, for some partition {A_j} of the event space, the event space is given or conceptualized in terms of P(A_j) and P(B|A_j). It is then useful to compute P(B) using the law of total probability:

P(B) = Σ_j P(B|A_j) · P(A_j)   EQ (8.5)

P(A_i|B) = P(B|A_i) · P(A_i) / Σ_j P(B|A_j) · P(A_j)   EQ (8.6)

In the special case where A is a binary variable:

P(A|B) = P(B|A) · P(A) / (P(B|A) · P(A) + P(B|¬A) · P(¬A))   EQ (8.7)
Bayes' Rule

Bayes' rule is Bayes' theorem in odds form. (Odds form means the odds in favor of an event is defined by the ratio of the probability that the event will happen to the probability that it will not happen.)

O(A1 : A2 | B) = O(A1 : A2) · Λ(A1 : A2 ; B)   EQ (8.8)

where

Λ(A1 : A2 ; B) = P(B|A1) / P(B|A2)   EQ (8.9)

is called the Bayes factor or likelihood ratio, and the odds between two events is simply the ratio of the probabilities of the two events. Thus,

O(A1 : A2) = P(A1) / P(A2)   EQ (8.10)

O(A1 : A2 | B) = P(A1|B) / P(A2|B)   EQ (8.11)
So the rule says that the posterior odds are the prior odds times
the Bayes factor, or in other words, posterior is proportional to prior
times likelihood.
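The odds-form update can be sketched numerically; the two likelihood values below are illustrative assumptions, not measured data:

```python
# Posterior odds = prior odds * Bayes factor (EQ 8.8-8.11).
p_B_given_A1 = 0.195   # assumed likelihood of the evidence under hypothesis A1
p_B_given_A2 = 0.044   # assumed likelihood of the evidence under hypothesis A2

prior_odds = 0.5 / 0.5                          # EQ (8.10): P(A1) / P(A2)
bayes_factor = p_B_given_A1 / p_B_given_A2      # EQ (8.9)
posterior_odds = prior_odds * bayes_factor      # EQ (8.8)

# Convert odds back to a probability: p = odds / (1 + odds).
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))
```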
Derivation for events

Bayes' theorem may be derived from the definition of conditional probability:

P(A|B) = P(A ∩ B) / P(B), if P(B) ≠ 0   EQ (8.12)

P(B|A) = P(A ∩ B) / P(A), if P(A) ≠ 0   EQ (8.13)

⇒ P(A ∩ B) = P(A|B) · P(B) = P(B|A) · P(A)   EQ (8.14)

⇒ P(A|B) = P(B|A) · P(A) / P(B), if P(B) ≠ 0   EQ (8.15)
For random variables

For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

f(x|y) = f(x, y) / f(y)   EQ (8.16)

f(y|x) = f(x, y) / f(x)   EQ (8.17)

⇒ f(x|y) = f(y|x) · f(x) / f(y)   EQ (8.18)
7.3 SLAM (Simultaneous Localization and Mapping)

Abstract

SLAM is a technique used by robots and autonomous vehicles to build up a map within an unknown environment (without a priori knowledge), or to update a map within a known environment (with a priori knowledge from a given map), while at the same time keeping track of their current location.

Mapping is the problem of integrating the information gathered by a set of sensors into a consistent model and depicting that information as a given representation. It can be described by the first characteristic question, "What does the world look like?".

In contrast to this, localization is the problem of estimating the place of the robot relative to a map; in other words, the robot has to answer the second characteristic question, "Where am I?".

SLAM is therefore defined as the problem of building a model leading to a new map, or repetitively improving an existing map, while at the same time localizing the robot within that map. In practice, the answers to the two characteristic questions cannot be delivered independently of each other.
Mapping

SLAM in the mobile robotics community generally refers to the process of creating geometrically consistent maps of the environment. Topological maps are a method of environment representation which captures the connectivity of the environment rather than creating a geometrically accurate map. As a result, algorithms that create topological maps are not referred to as SLAM.
Sensing
SLAM systems typically use several different types of sensors to acquire data with statistically independent errors.
Locating

The results from sensing feed the algorithms for locating.

The SLAM Process

The SLAM process consists of a number of steps. The goal of the process is to use the environment to update the position of the robot. The odometry of the robot provides an approximate position, as measured by the movement of the wheels of the robot, to serve as the initial guess of where the robot might be in the PF, but we cannot rely on the odometry directly. We can use laser scans of the environment to correct the position of the robot. This is accomplished by extracting features from the environment and re-observing them when the robot moves around. A PF (Particle Filter) is the heart of the SLAM process. It is responsible for updating where the robot thinks it is based on these features. These features are commonly called landmarks. (Landmarks are features which can easily be re-observed and distinguished from the environment. These are used by the robot to find out where it is, i.e. to localize itself.)
Figure 7.1: The SLAM process.
When the odometry changes because the robot moves, the uncertainty pertaining to the robot's new position is updated in the EKF using the odometry update. Landmarks are then extracted from the environment from the robot's new position. The robot then attempts to associate these landmarks with observations of landmarks it has previously seen. Re-observed landmarks are then used to update the robot's position in the EKF. Landmarks which have not previously been seen are added to the EKF as new observations so they can be re-observed later.
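The update cycle described above can be sketched as a skeleton; every helper here (odometry_update, extract_landmarks, associate) is a hypothetical stub, not this project's actual code, and the filter correction step is only indicated by a comment:

```python
# Skeleton of one SLAM update cycle: predict from odometry, extract and
# associate landmarks, correct the pose, and register new landmarks.
def odometry_update(state, control):
    x, y, theta = state
    dx, dy, dtheta = control
    return (x + dx, y + dy, theta + dtheta)

def extract_landmarks(scan):
    return scan                      # stub: pretend each scan item is a landmark

def associate(landmarks, known):
    seen = [lm for lm in landmarks if lm in known]
    new = [lm for lm in landmarks if lm not in known]
    return seen, new

def slam_step(state, control, scan, known_landmarks):
    state = odometry_update(state, control)        # 1. predict from odometry
    landmarks = extract_landmarks(scan)
    seen, new = associate(landmarks, known_landmarks)
    # 2. re-observed landmarks ("seen") would correct `state` here (filter update)
    known_landmarks.extend(new)                    # 3. add new landmarks
    return state, known_landmarks

state, lm = slam_step((0.0, 0.0, 0.0), (1.0, 0.0, 0.1), ["door", "corner"], [])
print(state, lm)
```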
Figure 7.2: The robot is represented by the triangle. The stars represent landmarks. The robot initially measures the location of the landmarks using its sensors (sensor measurements illustrated with lightning).
Figure 7.3: The robot moves, so it now thinks it is here. The distance moved is given by the robot's odometry.
Figure 7.4: The robot once again measures the location of the landmarks using its sensors but finds they don't match where the robot thinks they should be (given the robot location). Thus the robot is not where it thinks it is.
Figure 7.5: As the robot trusts its sensors more than its odometry, it now uses the information gained about where the landmarks actually are to determine where it is (the location the robot originally thought it was at is illustrated by the dashed triangle).
Figure 7.6: In actual fact the robot is here. The sensors are not perfect, so the robot will not know precisely where it is. However, this estimate is better than relying on odometry alone. The dotted triangle represents where it thinks it is; the dashed triangle is where the odometry told it it was; and the last triangle is where it actually is.
PF (Particle Filter)

This filter is used to localize the robot on the last updated map; we discuss how it works in a later section of this book.
Overview of the process

As soon as the landmark extraction and the data association are in place, the SLAM process can be considered as three steps:

1. Update the current state estimate using the odometry data.
2. Update the estimated state from re-observing landmarks.
3. Add new landmarks to the current state.
The first step is very easy. It is just an addition of the controls of the robot to the old state estimate. E.g. the robot is at point (x, y) with rotation theta, the controls are (dx, dy), and the change in rotation is dtheta. The result of the first step is the new state of the robot (x + dx, y + dy) with rotation theta + dtheta.
In the second step the re-observed landmarks are considered. Using the estimate of the current position, it is possible to estimate where the landmark should be. There is usually some difference; this is called the innovation. So the innovation is basically the difference between the estimated robot position and the actual robot position, based on what the robot is able to see.
In the second step the uncertainty of each observed landmark is also updated to reflect recent changes. An example could be if the uncertainty of the current landmark position is very small. Re-observing a landmark from this position with low uncertainty will increase the landmark certainty, i.e. decrease the variance of the landmark with respect to the current position of the robot.
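The innovation for a single re-observed landmark can be sketched as a predicted-minus-observed difference in range and bearing; the pose, landmark position, and sensor reading below are illustrative assumptions:

```python
import math

def predict_measurement(pose, landmark):
    """Expected (range, bearing) of a landmark as seen from an estimated pose."""
    x, y, theta = pose
    lx, ly = landmark
    r = math.hypot(lx - x, ly - y)
    bearing = math.atan2(ly - y, lx - x) - theta
    return r, bearing

pose_estimate = (1.0, 1.0, 0.0)       # assumed estimated pose (x, y, theta)
landmark = (4.0, 5.0)                 # assumed known landmark position
predicted = predict_measurement(pose_estimate, landmark)
observed = (5.2, 0.95)                # assumed noisy sensor reading

# The innovation: what the sensor saw minus what the estimate predicted.
innovation = (observed[0] - predicted[0], observed[1] - predicted[1])
print(tuple(round(v, 3) for v in innovation))
```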
In the third step new landmarks are added to the state, the robot's map of the world. This is done using information about the current position and adding information about the relation between the new landmark and the old landmarks.
7.4 Particle Filters

Particle filters have many names: (sequential) Monte Carlo filters, bootstrap filters, condensation, interacting particle approximations, survival of the fittest, etc.
In statistics, a particle filter, also known as a sequential Monte Carlo (SMC) method, aims to estimate the sequence of hidden parameters x_k for k = 0, 1, 2, 3, …, based only on the observed data y_k for k = 0, 1, 2, 3, …. All Bayesian estimates of x_k follow from the posterior distribution p(x_k | y_0, y_1, …, y_k).
Our problem is tracking the state of a system as it evolves over time (the system here is the robot and its state is its location). We have sequentially arriving (noisy or ambiguous) observations from our Kinect device and ultrasonic sensor, and we need the best possible estimate of the hidden variables.
Bayesian Filtering / Tracking Problem

Unknown state vector x_{0:t} = (x_0, …, x_t)
Observation vector z_{1:t}
Find the PDF (probability distribution function) p(x_{0:t} | z_{1:t}) … the posterior distribution, or p(x_t | z_{1:t}) … the filtering distribution.

Prior information given:
p(x_0) … prior on the state distribution
p(z_t | x_t) … sensor model
p(x_t | x_{t−1}) … Markovian state-space model
Sequential Update

Recursive filtering:
Predict the next state pdf from the current estimate.
Update the prediction using sequentially arriving new measurements.
The optimal Bayesian solution recursively calculates the exact posterior density.
Particle Filters Algorithm

Here is a floor plan of an environment where a robot is located, and the robot has to perform global localization. Global localization is when an object has no idea where it is in space and has to find out where it is based only on sensory measurements.
Figure 7.7: Map of particles.
The robot, which is located in the upper right-hand corner of the environment, has range sensors that are represented by the blue stripes. The sensors are used to range the distance of nearby obstacles; they help the robot determine a good posterior distribution as to where it is.

In this environment the red dots are particles. They are a discrete guess as to where the robot might be. These particles have the structure (X, Y, heading) [heading is the angle the robot makes with respect to the X axis]: three values to comprise a single guess. However, a single guess is not a filter; rather, a filter is the set of several thousands of guesses that together generate an approximate representation for the posterior of the robot.
The essence of particle filters is to have the particles guess where the robot might be moving, but also have them survive, a kind of "survival of the fittest," so that particles that are more consistent with the measurements are more likely to survive. As a result, places of high probability will collect more particles, and therefore be more representative of the robot's posterior belief. The particles together make up the approximate belief of the robot as it localizes itself.
The robot senses the distance to the nearby landmarks (for example, we have four landmarks here in our environment). The distances from the landmarks to the robot make up the measurement vector of the robot.
Figure 7.8: Robot distance.
We have a measurement vector that consists of the four values of the four distances from L1 to L4. If a particle hypothesizes that its coordinates are somewhere other than where the robot actually is (the red robot indicates the particle hypothesis), we have the situation shown in figure 7.9:
Figure 7.9: New distance.
The particle also hypothesizes a different heading direction.
We can then
take the same
measurement vector from our
robot and apply it to the particle.
Figure 7.10: New distance versus old distance.
The green indicates the measurement vector we would have
predicted if the red particle actually were a good match for the
robot's actual
location.
Figure 7.11: Subtract old from new position.
The closer the assumed particle is to the correct position, the more likely will be the set of measurements given that position. The idea of particle filters is that the mismatch between the actual measurement and the predicted measurement leads to an importance weight that indicates the importance of that specific particle. The larger the weight, the more important the particle.
Figure 7.12: Weight calculations.
When we have a bunch of particles, each has its own weight; some are very plausible, while others look very implausible, as indicated by the size of the particle.
Figure 7.13: Weight.
Next we allow the particles to survive at random, but the probability of survival will be proportional to the weights. That is, a particle with a larger weight will survive at a higher rate than a particle with a small weight. This means that after resampling, which is randomly drawing particles from the old ones with replacement in proportion to the importance weight, the particles with a higher importance weight will live on, while the smaller ones will die out. The "with replacement" aspect of this selection method is important because it allows us to choose the high-probability particles multiple times. This causes the particles to cluster around regions with high posterior probability.
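The weighting and resampling steps can be sketched as follows; the Gaussian mismatch model and all numbers are illustrative assumptions, not this project's sensor model:

```python
import math
import random

random.seed(0)
actual_measurement = 5.0
# Each particle predicts a measurement from its own hypothesized pose.
predicted = [5.1, 4.9, 7.5, 2.0, 5.0]

# Gaussian mismatch model: larger mismatch -> smaller importance weight.
sigma = 0.5
weights = [math.exp(-0.5 * ((z - actual_measurement) / sigma) ** 2)
           for z in predicted]
total = sum(weights)
weights = [w / total for w in weights]

# Resample with replacement in proportion to the weights: plausible
# particles are drawn many times, implausible ones die out.
indices = random.choices(range(len(predicted)), weights=weights, k=1000)
survivors = {i: indices.count(i) for i in range(len(predicted))}
print(survivors)
```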
7.5 Mapping (Occupancy Grid Mapping)

Occupancy Grid Mapping refers to a family of computer algorithms in probabilistic robotics for mobile robots which address the problem of generating maps from noisy and uncertain sensor measurement data, with the assumption that the robot pose is known.

The basic idea of the occupancy grid is to represent a map of the environment as an evenly spaced field of binary random variables, each representing the presence of an obstacle at that location in the environment. Occupancy grid algorithms compute approximate posterior estimates for these random variables.
We can never be entirely certain whether a cell in our 2D occupancy grid map is occupied or empty, thus we will typically give each cell a probability of being full or empty. We have to initialize the values of all cells to some unknown value; a common choice is 0.5, denoting that we believe there is a 50% chance the cell is occupied. If you are navigating in mostly open space, you may lower this value, or vice versa. For now, we will assume we know our position absolutely. Our position is typically called a "pose", and in our 2D map it would entail X and Y coordinates as well as a rotation angle.
The goal of an occupancy mapping algorithm is to estimate the posterior probability over maps given the data: p(m | z_{1:t}, x_{1:t}), where m is the map, z_{1:t} is the set of measurements from time 1 to t, and x_{1:t} is the set of robot poses from time 1 to t. The controls and odometry data play no part in the occupancy grid mapping algorithm since the path is assumed known.
Occupancy grid algorithms represent the map as a fine-grained grid over the continuous space of locations in the environment. The most common type of occupancy grid maps are 2D maps that describe a slice of the 3D world.
Occupancy Grid Mapping Algorithm

Figure 7.14: Mapping algorithm.

First, update the grid cells which fall in the range of z_t, updating the occupancy value with the function inverse_sensor_model, where inverse_sensor_model is a function that assigns an occupancy value to all cells within the sensor cone whose range is close to the measured range. Otherwise, if a cell is not in the range of z_t, then the value in the cell is kept as it is, not updated. L0 is the prior of occupancy.
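The update rule above can be sketched in log-odds form; the specific probabilities (0.7 for a hit cell, 0.3 for free space) and the 1-D strip of cells are illustrative assumptions, not values from this project:

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

def prob(l):
    return 1 - 1 / (1 + math.exp(l))

L0 = log_odds(0.5)           # prior of occupancy (log-odds of 0.5 is 0.0)
grid = [L0] * 10             # a 1-D strip of cells along one sensor beam

def inverse_sensor_model(cell, measured_range):
    # Cells at the measured range are likely occupied; closer cells are free.
    if cell == measured_range:
        return log_odds(0.7)     # hit cell
    elif cell < measured_range:
        return log_odds(0.3)     # free space along the beam
    return L0                    # beyond the beam: no information

z_t = 6                          # measured range, in cell units
for cell in range(len(grid)):
    if cell <= z_t:              # cell falls in the range of z_t
        grid[cell] += inverse_sensor_model(cell, z_t) - L0
    # else: not in the range of z_t, the value is kept as is

print([round(prob(l), 2) for l in grid])
```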
Figure 7.15: Plan for an office environment.

Figure 7.16: Occupancy grid map for an office environment.
7.6 Rao-Blackwellised Particle Filtering (RBPF)
Introduction

Learning maps is a fundamental task of mobile robots, and a lot of researchers have focused on this problem. In the literature, the mobile robot mapping problem is often referred to as the simultaneous localization and mapping (SLAM) problem. In general, SLAM is a complex problem because, for learning a map, the robot requires a good pose estimate, while at the same time a consistent map is needed to localize the robot. This dependency between the pose and the map estimate makes the SLAM problem hard and requires searching for a solution in a high-dimensional space. We introduced Rao–Blackwellized particle filters (RBPFs) as an effective means to solve the SLAM problem. The main problem of Rao–Blackwellized particle filters lies in their complexity, measured in terms of the number of particles required to learn an accurate map. Either reducing this quantity or improving the algorithm so that it is able to handle larger sample sets is one of the major challenges for this family of algorithms.
Learning Maps with Rao–Blackwellized Particle Filters

The key idea of the Rao–Blackwellized particle filter for SLAM is to estimate the joint posterior p(x_{1:t}, m | z_{1:t}, u_{1:t−1}) about the trajectory x_{1:t} = x_1, …, x_t of the robot and the map m of the environment, given the observations z_{1:t} = z_1, …, z_t and odometry measurements u_{1:t−1} = u_1, …, u_{t−1}. It does so by using the following factorization:

p(x_{1:t}, m | z_{1:t}, u_{1:t−1}) = p(x_{1:t} | z_{1:t}, u_{1:t−1}) · p(m | x_{1:t}, z_{1:t})   EQ (8.19)
In this equation, the posterior p(x_{1:t} | z_{1:t}, u_{1:t−1}) is similar to the localization problem, since only the trajectory of the vehicle needs to be estimated. This estimation is performed using a particle filter which incrementally processes the observations and the odometry readings as they become available. The second term p(m | x_{1:t}, z_{1:t}) can be computed efficiently since the poses x_{1:t} of the robot are known when estimating the map m. As a result of the factorization, a Rao–Blackwellized particle filter for SLAM maintains an individual map for each sample and updates this map based on the trajectory estimate of the sample upon "mapping with known poses".

A mapping system that applies an RBPF requires a proposal distribution to draw the next generation of samples. The general framework leaves open which proposal should be used and how it should be computed. A proposal distribution that is typically used in the context of Monte Carlo localization is the motion model p(x_t | x_{t−1}, u_{t−1}). This proposal, however, is suboptimal since it does not consider the observations of the robot to predict its motion. The optimal proposal distribution is

p(x_t | m_{t−1}, x_{t−1}, z_t, u_{t−1}) = p(z_t | m_{t−1}, x_t) · p(x_t | x_{t−1}, u_{t−1}) / p(z_t | m_{t−1}, x_{t−1}, u_{t−1})   EQ (8.20)
Whenever a laser range finder is used, one can observe that the observation likelihood p(z_t | m_{t−1}, x_t) is much more peaked than the motion model p(x_t | x_{t−1}, u_{t−1}). The observation likelihood dominates the product in EQ (8.20) in the meaningful area of the distribution. Therefore, we approximate p(x_t | x_{t−1}, u_{t−1}) by a constant k within this meaningful area L^(i). Under this approximation, the proposal turns into

p(x_t | m_{t−1}, x_{t−1}, z_t, u_{t−1}) ≈ p(z_t | m_{t−1}, x_t) / ∫_{x' ∈ L^(i)} p(z_t | m_{t−1}, x') dx'   EQ (8.21)
EQ (8.21) can be computed by evaluating p(z_t | m_{t−1}, x_t) on a grid which is bounded by the maximum odometry error. Alternatively, one can use a set of sampled points {x_j} and then evaluate the observation likelihood point-wise. In order to efficiently sample the next generation of particles, one can approximate this distribution by a Gaussian. For each particle i, the parameters μ^(i)_t and Σ^(i)_t of the Gaussian are computed as

μ^(i)_t = (1/η) · Σ_j x_j · p(z_t | m_{t−1}, x_j)
Σ^(i)_t = (1/η) · Σ_j (x_j − μ^(i)_t)(x_j − μ^(i)_t)^T · p(z_t | m_{t−1}, x_j)   EQ (8.22)

which is computationally expensive but leads to an informed proposal distribution. This allows us to draw particles in a more accurate manner, which seriously reduces the number of required samples.
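The idea behind this Gaussian approximation can be sketched in one dimension; the observation-likelihood function and the sampling grid below are illustrative stand-ins for a real scan matcher, not this project's code:

```python
import math

def obs_likelihood(x):
    # Assumed peaked observation likelihood, centered at the "true" pose 2.0.
    return math.exp(-0.5 * ((x - 2.0) / 0.1) ** 2)

# Sample candidate poses x_j around the (noisy) odometry prediction 1.7.
xs = [1.7 + 0.05 * j for j in range(13)]      # grid from 1.7 to 2.3
ws = [obs_likelihood(x) for x in xs]
eta = sum(ws)                                 # normalizer

# Likelihood-weighted mean and variance of the informed proposal.
mu = sum(x * w for x, w in zip(xs, ws)) / eta
var = sum((x - mu) ** 2 * w for x, w in zip(xs, ws)) / eta
print(round(mu, 2), round(math.sqrt(var), 2))
```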
Speeding up the computation of the proposal

The problem of the method presented above is the computational complexity of the informed proposal distribution. The computations need to be carried out for each sample individually. As a result, such a mapping system runs online only for small particle sets. Furthermore, each particle maintains a full grid map, which requires storing large grid structures in memory. To overcome this limitation, we present a way to utilize intermediate results in order to efficiently determine the proposal for the individual samples.

The proposal distribution is needed to model the relative movement of the vehicle under uncertainty. In most situations, this uncertainty is similar for all samples within one movement. It therefore makes sense to use the same uncertainty to propagate the particles. We derive a way to sample multiple particles from the same proposal. As a result, the time-consuming computation of the proposal distribution can be carried out for a few particles that are representatives for groups of similar samples. Furthermore, we observed that local maps which are represented in a particle-centered coordinate frame look similar for many samples. We therefore present a compact map model in which multiple particles can share their local maps. Instead of storing a full grid map, each sample maintains only a set of reference frames for the different local maps. This substantially reduces the memory requirements of the mapping algorithm.
Different situations during mapping

Before we derive our new proposal distribution, we start with a brief analysis of the behavior of an RBPF. One can distinguish three different types of situations during mapping:

• The robot is moving through unknown areas,
• is moving through known areas, or
• is closing a loop. Here, closing a loop means that the robot first moves through unknown areas and then reenters known terrain. It can be seen as moving along a so-far non-traversed shortcut from the current pose of the robot to an already known area.

For each of these three situations, we will present a proposal distribution that needs to be computed only for a small set of representatives rather than for all particles. We make the following three assumptions:
Assumption 1. The current situation is known, which means that the robot can determine whether it is moving through unknown terrain, within a known area, or is closing a loop.

Assumption 2. The corresponding local maps of two samples are similar if considered in a particle-centered reference frame. In the following, we refer to this property as local similarity of the maps.

Assumption 3. An accurate algorithm for pose tracking is used and the observations are affected by limited sensor noise.
Computing the proposal for unknown terrain

For proximity sensors like laser range finders, the observations of the robot cover only a local area around the robot. As a result, we only need to consider the surroundings of the robot when computing the proposal distribution. Let m^(i)_{t−1} refer to the local map of particle i around its previous pose x^(i)_{t−1}. In the surroundings of the robot, we can approximate

…EQ (8.23)
…EQ (8.24)

As a result, we can determine the proposal for a particle j by computing the proposal in the reference frame of particle i and then translating it to the reference frame of particle j:

…EQ (8.25)
Computing the proposal for already visited areas

Whenever the robot moves through known areas, each particle stays localized in its own map according to Assumption 3. To update the pose of each particle while the robot moves, we choose the pose x_t that maximizes the likelihood of the observation around the pose predicted by odometry:

….EQ (8.26)
Under Assumptions 2 and 3, we can estimate the poses of all samples according to EQ (8.26) (while moving through known areas). In this way, the complex computation of an informed proposal needs to be done only once.
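This pose-tracking step can be sketched as a search over candidate poses around the odometry prediction; the likelihood function below is an illustrative stand-in for matching the observation against the particle's map:

```python
import math

def obs_likelihood(x):
    # Assumed observation likelihood, peaked at the "true" pose 4.2.
    return math.exp(-abs(x - 4.2))

# Candidate poses around the pose predicted by odometry.
odometry_prediction = 4.0
candidates = [odometry_prediction + 0.1 * k for k in range(-5, 6)]

# Choose the pose that maximizes the observation likelihood.
best = max(candidates, key=obs_likelihood)
print(round(best, 1))
```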
Computing the proposal when closing a loop

In contrast to the two situations described earlier, the computation of the proposal is more complex in the case of a loop closure. This is due to the fact that Assumption 2 (local similarity) is typically violated, even for subsets of particles. To illustrate this, assume that the particle cloud is widely spread when the loop is closed. Typically, the individual samples reenter the previously mapped terrain at different locations. This results in different hypotheses about the topology of the environment and definitively violates Assumption 2.
Dealing with such a situation requires additional effort in the estimation process. Whenever a particle i closes a loop, we consider that the map m^(i)_{t−1} of its surroundings consists of two components. Let m^(i)_loop refer to the map of the area the robot seeks to reenter. Then, m^(i)_local is the map constructed from the most recent measurements without the part of the map that overlaps with m^(i)_loop. Since those two maps are disjoint, and under the assumption that the individual grid cells are independent, we can use a factorized form for our likelihood function
…EQ (8.27)
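Because m^(i)_local and m^(i)_loop are disjoint and the grid cells are assumed independent, the likelihood factorizes: the log-likelihoods of the two maps simply add. A minimal sketch with a toy per-cell beam model (the occupancy values and hit flags are illustrative, not the project's sensor model):

```python
import math

def cell_log_likelihood(occupancy, hit):
    """Log-likelihood of a single beam endpoint given one grid cell's
    occupancy probability (toy beam model)."""
    p = occupancy if hit else 1.0 - occupancy
    return math.log(max(p, 1e-9))  # floor avoids log(0)

def map_log_likelihood(cells, hits):
    """Independent cells: the map likelihood is a product over cells,
    i.e. a sum of per-cell log-likelihoods."""
    return sum(cell_log_likelihood(c, h) for c, h in zip(cells, hits))

def factorized_log_likelihood(local_cells, local_hits, loop_cells, loop_hits):
    """Disjoint maps: the full likelihood is the product of the local-map
    and loop-map likelihoods, a plain sum in log space."""
    return (map_log_likelihood(local_cells, local_hits)
            + map_log_likelihood(loop_cells, loop_hits))
```

Evaluating the two disjoint maps separately and adding the results gives the same value as scoring all cells together, which is the content of the factorization.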
For efficiency reasons, we use only the local map m^(i)_local to compute the proposal and do not consider m^(i)_loop. This procedure is valid but requires adapting the weight computation. According to the importance sampling principle, this leads to
…EQ (8.28)
where η1 and η2 are normalization factors resulting from Bayes' rule.
Approximate importance weight computation
We observed in practical experiments that the normalizing factors η1 and η2 in Eq. (8.28) have only a minor influence on the overall weight. In order to speed up the computation of the importance weights, we approximate Eq. (8.28) by
…EQ (8.29)
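With the η normalizers dropped, the approximate weight update reduces to multiplying each particle's previous weight by its observation likelihood and renormalizing over the whole particle set. A minimal sketch (the function name is an assumption):

```python
def update_weights(prev_weights, obs_likelihoods):
    """Approximate importance weights: ignore the per-particle normalizers
    (the minor-influence η terms), then renormalize across particles."""
    raw = [w * l for w, l in zip(prev_weights, obs_likelihoods)]
    total = sum(raw)
    return [r / total for r in raw]
```

The final renormalization makes the weights sum to one, so any constant factor common to all particles, such as the dropped η terms when they are nearly equal, cancels out anyway.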
Figure 7.17: SLAM mapping using Rao-Blackwellised particle filters
7.7 SLAM Pseudocode
Include a header file that includes all the header files and namespaces needed from the MRPT library.
Class SLAM
//define a class called SLAM whose input is a pointer to the critical section
MAKESLAM (shared resource)
//the function of the SLAM class is defined and has the shared resource as an input
Define map parameters like likelihood options and particle filter options.
Initialize the values of the map parameters.
Assign these values to the Tconstruction_options, which is the main input options to our RBPF map.
Define the grid map.
Define the action and observation variables and the Kinect.
While (true)
{
    Enter the critical section;
    If (exit signal is true)
    {
        Save the last map update to file and save it as an image in the shared resource;
    }
    Else if (there is a new odometry and observation input and the map isn't yet updated)
    {
        Get the last motion and observation update from the shared resource;
        Update the shared resource position variables and MAP variable and mapimage;
    }
    Leave the critical section;
    Update our map using the action and observation;
    Update the location of the robot on the updated map;
    Update the absolute position with the x, y, yaw of the mean value of the position;
    Clear action;
    Clear observation;
    Sleep 10 ms to give another thread the chance to execute;
}
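The control flow above can be sketched with standard threads. This is a toy mock rather than the MRPT-based implementation: the map update is replaced by a counter, the critical section by a plain lock, and all names are illustrative stand-ins for the shared resource elements:

```python
import threading
import time

class SharedResource:
    """Toy stand-in for the project's shared resource (names follow the
    pseudocode above; the actual map structures are mocked)."""
    def __init__(self):
        self.lock = threading.Lock()       # the critical section
        self.exit_signal = False
        self.action = None                 # incremental odometry (dx, dy, dyaw)
        self.observation = None            # latest Kinect observation
        self.map_updates = 0               # stands in for MAP / MAPimage
        self.position = (0.0, 0.0, 0.0)    # absolute x, y, yaw

def slam_thread(shared):
    """Same structure as the pseudocode: guard shared data with the critical
    section, consume fresh inputs, update, then yield the CPU."""
    while True:
        with shared.lock:                  # enter the critical section
            if shared.exit_signal:
                return                     # the real loop would save the map and its image here
            pending = shared.action is not None and shared.observation is not None
            if pending:
                action = shared.action
                shared.action = None       # clear action
                shared.observation = None  # clear observation
        if pending:                        # heavy update happens outside the lock
            shared.map_updates += 1        # mock of the map/observation update
            dx, dy, dyaw = action
            x, y, yaw = shared.position
            shared.position = (x + dx, y + dy, yaw + dyaw)
        time.sleep(0.01)                   # give another thread the chance to execute
```

Consuming the inputs inside the lock and doing the expensive update outside it keeps the critical section short, which is why the pseudocode leaves the critical section before updating the map.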
7.8 Passing SLAM to Other Modules
The SLAM section is highly related to many modules in the project, which are:
1- Path Planning Module:
The output of the RBPF SLAM is a grid map that is passed to the path planning module and used to determine which path to take from the initial state to the goal.
2- Kinect Grabber:
This extracts the Kinect 3D observation and puts it in the shared resource to be used by the SLAM.
3- Odometry Grabber:
Extracts the incremental motion from the encoder and puts it in the shared resource to be then used by the SLAM.
4- Shared Resource file:
Uses some of the shared resource elements like:
MAP: The map is updated each iteration.
Absolute position: Updated each iteration.
MAPimage: The map update is saved as an image.
Incremental motion: Motion that is grabbed by the odometry grabber.
Signals_to_exit: Tells the thread that it must stop working and exit the critical section.
Kinect_observation: Grabbed by the Kinect grabber.
Persistent_synchronization_mechanism: Used to make sure that each time we use SLAM to update the map and position there is a new action and observation.
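The synchronization mechanism described above can be sketched with a condition variable, so the SLAM thread only runs an update when the grabbers have published a fresh action/observation pair. A single-consumer sketch (names are illustrative; production code would loop on the predicate while waiting):

```python
import threading

class SyncedInputs:
    """Sketch of the persistent synchronization mechanism: SLAM blocks until
    the grabbers publish a fresh action/observation pair."""
    def __init__(self):
        self.cond = threading.Condition()
        self.fresh = False        # True only while an unconsumed pair exists
        self.action = None
        self.observation = None

    def publish(self, action, observation):
        """Called by the odometry/Kinect grabbers with new data."""
        with self.cond:
            self.action, self.observation = action, observation
            self.fresh = True
            self.cond.notify()    # wake the waiting SLAM thread

    def consume(self, timeout=None):
        """Called by SLAM; returns a fresh (action, observation) pair,
        or None if none arrives within the timeout."""
        with self.cond:
            if not self.fresh:
                self.cond.wait(timeout)
            if not self.fresh:
                return None
            self.fresh = False    # mark the pair as consumed
            return self.action, self.observation
```

Marking the pair as consumed guarantees the property stated above: every SLAM update sees a new action and observation, never a stale one twice.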