# Probabilistic Robotics

Artificial Intelligence and Robotics

2 Nov 2013


## Introduction

## Robot Environment Interaction

A robot can, at least hypothetically, keep a record of all past sensor measurements and control actions. Such a collection is referred to as *data*.

There are two different streams of data:

- Environment measurement data
- Control data

Control data corresponds to the change of state in the time interval (t-1; t].


Environment perception provides information about the environment's state, and it tends to increase the robot's knowledge.

Motion (control data), on the other hand, tends to induce a loss of knowledge due to noise (uncertainty).

The evolution of state and measurements is governed by probabilistic laws; hence *probabilistic robotics*.


For the state variable x_t, the general model is p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}).

If the state variable is complete, then

p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

This is an example of conditional independence (CI): given x_{t-1} and u_t, the current state is independent of all earlier states, measurements, and controls.


For the measurement data z_t, the general model is p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}).

If the state variable is complete, then

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)

This is another example of conditional independence (CI).


p(x_t | x_{t-1}, u_t) is called the *state transition probability*, and p(z_t | x_t) is called the *measurement probability*.


The state transition probability and the measurement probability together describe the dynamic stochastic system formed by the robot and its environment.

See Figure 2.2.


Besides measurements, controls, etc., another key concept in probabilistic robotics is that of a *belief*.

A belief reflects the robot's internal knowledge about the state of the environment, because the state of the environment is not directly observable by the robot.

How is a belief represented in probabilistic robotics?


The belief of a robot is represented in the form of a conditional probability distribution (CPD):

bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

Sometimes the following CPD, taken before incorporating the latest measurement z_t, is also of interest; it is called the *prediction*:

bel̄(x_t) = p(x_t | z_{1:t-1}, u_{1:t})

## Bayes Filter: the single most important algorithm in the book

It calculates the belief distribution bel from measurement and control data.

It is a recursive algorithm, and it is the basis of all other algorithms in the book.

## Simple Example of the Bayes Filter Algorithm

Suppose a robot obtains a measurement z. What is P(open | z)?

## Causal vs. Diagnostic Reasoning

P(open | z) is *diagnostic*; P(z | open) is *causal*.

Often causal knowledge is easier to obtain: count frequencies!

Bayes' rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)

## Example

Suppose:

- P(z | open) = 0.6, P(z | ¬open) = 0.3
- P(open) = P(¬open) = 0.5

Then

P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 0.30 / 0.45 = 2/3

so z *raises* the probability that the door is open.
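The arithmetic can be checked with a few lines of code. This is a minimal sketch of Bayes' rule for the door example, using the numbers above (the variable names are illustrative):

```python
# Bayes' rule for the door example: compute P(open | z) from causal knowledge.
p_z_given_open = 0.6       # P(z | open)
p_z_given_not_open = 0.3   # P(z | ¬open)
p_open = 0.5               # prior: P(open) = P(¬open) = 0.5

# Total probability gives the evidence term P(z).
p_z = p_z_given_open * p_open + p_z_given_not_open * (1 - p_open)
p_open_given_z = p_z_given_open * p_open / p_z
print(p_open_given_z)  # ≈ 2/3: z raises the probability that the door is open
```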

## Combining Evidence

Suppose our robot obtains another observation z_2. How can we integrate this new information?

More generally, how can we estimate P(x | z_1, ..., z_n)?

## Recursive Bayesian Updating

Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x.

Under this assumption, Bayes' rule gives the recursive update

P(x | z_1, ..., z_n) = η P(z_n | x) P(x | z_1, ..., z_{n-1})

where η is a normalizing constant, so each new measurement can be folded into the belief one at a time.

## Example: Second Measurement

Suppose:

- P(z_2 | open) = 0.5, P(z_2 | ¬open) = 0.6
- P(open | z_1) = 2/3

Then

P(open | z_1, z_2) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8 = 0.625

so z_2 *lowers* the probability that the door is open.
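The same calculation can be run as a recursive update. This sketch assumes the door example's numbers; `bayes_update` is an illustrative helper, not a function from the book:

```python
def bayes_update(prior_open, p_z_open, p_z_not_open):
    """Fold one measurement into the belief: P(open | z, ...) = η P(z | open) · prior."""
    unnorm_open = p_z_open * prior_open
    unnorm_not_open = p_z_not_open * (1 - prior_open)
    return unnorm_open / (unnorm_open + unnorm_not_open)  # the division normalizes (η)

belief = bayes_update(0.5, 0.6, 0.3)     # after z1: 2/3, z1 raises P(open)
belief = bayes_update(belief, 0.5, 0.6)  # after z2: 5/8, z2 lowers P(open)
print(belief)  # ≈ 0.625
```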

## Example

The previous examples are concerned only with measurements. What about control data (motion, actions)? How does control data play its role?

## Actions

Often the world is *dynamic*, since

- actions carried out by the robot,
- actions carried out by other agents,
- or just time passing by

change the world.

How can we incorporate such actions?

## Typical Actions

- The robot turns its wheels to move.
- The robot uses its manipulator to grasp an object.
- Plants grow over time.

Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.

## Modeling Actions

To incorporate the outcome of an action u into the current belief, we use the conditional pdf

P(x | u, x')

This term specifies the probability that executing u changes the state from x' to x.

## Example: Closing the Door

## State Transition (Probability Distribution)

P(x | u, x') for u = "close door":

- P(closed | u, open) = 0.9, P(open | u, open) = 0.1
- P(closed | u, closed) = 1, P(open | u, closed) = 0

If the door is open, the action "close door" succeeds in 90% of all cases; a door that is already closed stays closed.

## Integrating the Outcome of Actions

Continuous case:

P(x | u) = ∫ P(x | u, x') P(x') dx'

Discrete case:

P(x | u) = Σ_{x'} P(x | u, x') P(x')

What's going on here? The new belief in state x sums (or integrates) over every previous state x', weighting the prior probability of x' by the probability that action u carries the system from x' to x.
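The discrete case can be sketched for the "close door" example. The transition values below follow the 90%-success statement (and assume a closed door stays closed); the prior of 5/8 is the belief after the two measurements:

```python
# Prediction step (discrete case): P(x | u) = sum over x' of P(x | u, x') P(x').
# State transition P(x | u, x') for u = "close door", keyed as (x, x_prev).
p_trans = {
    ("open", "open"): 0.1, ("closed", "open"): 0.9,      # open door closes 90% of the time
    ("open", "closed"): 0.0, ("closed", "closed"): 1.0,  # closed door stays closed
}
prior = {"open": 5 / 8, "closed": 3 / 8}  # belief after measurements z1, z2

predicted = {
    x: sum(p_trans[(x, x_prev)] * prior[x_prev] for x_prev in prior)
    for x in ("open", "closed")
}
print(predicted)  # open ≈ 0.0625, closed ≈ 0.9375
```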

## Example: The Resulting Belief

## Bayes Filters: Framework

Given:

- Stream of observations z and action data u: d_t = {u_1, z_1, ..., u_t, z_t}
- Sensor model P(z | x)
- Action model P(x | u, x')
- Prior probability of the system state P(x)

Wanted:

- Estimate of the state x of the dynamical system
- The posterior of the state, called the *belief*:

Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)

In the new terms introduced earlier, the action model is the state transition probability and the sensor model is the measurement probability.

## Bayes Filters: The Algorithm

    Algorithm Bayes_filter(bel(x_{t-1}), u_t, z_t):
        for all x_t do
            bel̄(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}    (action model)
            bel(x_t) = η p(z_t | x_t) bel̄(x_t)                          (sensor model)
        endfor
        return bel(x_t)

## Bayes Filters

With z = observation, u = action, and x = state:

    Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
             = η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)       (Bayes)
             = η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)                           (Markov)
             = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_{t-1})
                               P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}             (Total prob.)
             = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1})
                               P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}             (Markov)
             = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}         (Markov)

What is η? A normalizing constant. P(x_t | u_t, x_{t-1}) is the action model, P(z_t | x_t) is the sensor model, and the last line is the recursion.
## Bayes Filters: An Example

See pages 28-31.

## Markov Assumption (the Complete State Assumption)

Underlying assumptions:

- Static world
- Independent noise
- Perfect model, no approximation errors

## Bayes Filter Algorithm

    1.  Algorithm Bayes_filter(Bel(x), d):
    2.      η = 0
    3.      If d is a perceptual data item z then
    4.          For all x do
    5.              Bel'(x) = P(z | x) Bel(x)
    6.              η = η + Bel'(x)
    7.          For all x do
    8.              Bel'(x) = η⁻¹ Bel'(x)
    9.      Else if d is an action data item u then
    10.         For all x do
    11.             Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
    12.     EndIf
    13.     Return Bel'(x)
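The numbered pseudocode above can be sketched as a tiny discrete Bayes filter over the two door states. The sensor and action models below are the values assumed in the running example; the function and dictionary names are illustrative:

```python
# Discrete Bayes filter over the states {"open", "closed"}.
STATES = ("open", "closed")

# Sensor model P(z | x) for the two measurements of the running example.
SENSOR = {
    "z1": {"open": 0.6, "closed": 0.3},
    "z2": {"open": 0.5, "closed": 0.6},
}
# Action model P(x | u, x') for u = "close door", keyed as (x, x_prev).
ACTION = {"close": {("open", "open"): 0.1, ("closed", "open"): 0.9,
                    ("open", "closed"): 0.0, ("closed", "closed"): 1.0}}

def bayes_filter(bel, d):
    """One step of the filter: d is ("z", name) or ("u", name)."""
    kind, value = d
    if kind == "z":  # perceptual data item: weight by likelihood, then normalize
        unnorm = {x: SENSOR[value][x] * bel[x] for x in STATES}
        eta = sum(unnorm.values())
        return {x: unnorm[x] / eta for x in STATES}
    else:            # action data item: prediction step, sum over previous states
        return {x: sum(ACTION[value][(x, xp)] * bel[xp] for xp in STATES)
                for x in STATES}

bel = {"open": 0.5, "closed": 0.5}  # uniform prior
for d in [("z", "z1"), ("z", "z2"), ("u", "close")]:
    bel = bayes_filter(bel, d)
print(bel["open"])  # ≈ 0.0625: two measurements then "close door" leave P(open) = 1/16
```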

## Bayes Filters Are Familiar!

- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)

## Summary

- Bayes' rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.