# CS 540 (Shavlik) HW 3 Probabilistic and Case-Based Reasoning



Assigned: 3/18/13
Due: 4/10/13 at 11:59pm (not accepted after 10:50am on 4/15/13)
Points: 125

Suggestion: you should consider doing this HW in a word processor, since cut-and-pasting is likely to be useful.

Problem 1: Full Joint Probability Distributions (15 points)

Consider this full joint probability distribution involving four Boolean-valued random variables (A-D):

A B C D   Prob
F F F F   ?
F F F T   0.002
F F T F   0.003
F F T T   0.004
F T F F   0.015
F T F T   0.016
F T T F   0.017
F T T T   0.018
T F F F   0.030
T F F T   0.031
T F T F   0.032
T F T T   0.033
T T F F   0.054
T T F T   0.055
T T T F   0.056
T T T T   0.057

i.   Compute P(A = false and B = false and C = false and D = false).

ii.  Compute P(A = true and C = false and D = true).

iii. Compute P(B = true).

iv.  Compute P(B = true | A = true and C = false and D = true).

v.   Compute P(A = false and B = true | C = true and D = true).
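Every part of this problem reduces to the same mechanical step: add up the probabilities of the table rows that match the event (and, for conditionals, divide two such sums). Here is a minimal Java sketch of that mechanism, using a hypothetical two-variable joint table rather than the one above:

```java
public class JointTableDemo {
    // Hypothetical full joint distribution over two Boolean variables X and Y.
    // Row order: (X, Y) = (F,F), (F,T), (T,F), (T,T).
    static final boolean[][] ROWS = {
        {false, false}, {false, true}, {true, false}, {true, true}
    };
    static final double[] PROB = {0.1, 0.2, 0.3, 0.4};

    // Marginal P(X = x): sum every row whose X value matches.
    static double probX(boolean x) {
        double sum = 0.0;
        for (int i = 0; i < ROWS.length; i++)
            if (ROWS[i][0] == x) sum += PROB[i];
        return sum;
    }

    // Conditional P(Y = y | X = x) = P(X = x and Y = y) / P(X = x).
    static double probYgivenX(boolean y, boolean x) {
        double joint = 0.0;
        for (int i = 0; i < ROWS.length; i++)
            if (ROWS[i][0] == x && ROWS[i][1] == y) joint += PROB[i];
        return joint / probX(x);
    }

    public static void main(String[] args) {
        System.out.println("P(X=true) = " + probX(true));                      // 0.3 + 0.4
        System.out.println("P(Y=true | X=true) = " + probYgivenX(true, true)); // 0.4 / 0.7
    }
}
```

For part (i), also recall that the sixteen entries of any full joint distribution must sum to 1, which pins down the missing entry.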


Problem 2: Bayesian Networks (20 points)

Consider the following Bayesian Network, where variables A-D are all Boolean-valued. Show your work for the following calculations.

i.   Compute P(A = true and B = true and C = true and D = true).

ii.  Compute P(A = true and B = true and D = false).

iii. Compute P(B = true | A = false and C = true and D = false).

iv.  Compute P(D = true | A = false and C = true).

v.   Compute P(A = false and B = false | D = false).

[Figure: a Bayesian Network with nodes A, B, C, and D; A and B are the parents of C, and B and C are the parents of D.]

P(A=true) = 0.2
P(B=true) = 0.9

A      B      P(C=true | A, B)
false  false  0.2
false  true   0.3
true   false  0.4
true   true   0.6

B      C      P(D=true | B, C)
false  false  0.7
false  true   0.8
true   false  0.1
true   true   0.5
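As a reminder of the mechanics, a joint probability in this network is the product of one entry per node, each conditioned on that node's parents: P(A, B, C, D) = P(A) · P(B) · P(C | A, B) · P(D | B, C). A sketch in Java, evaluated at a configuration that is deliberately not one of the questions above:

```java
public class BayesNetDemo {
    // Priors from the tables above.
    static final double P_A = 0.2;   // P(A = true)
    static final double P_B = 0.9;   // P(B = true)

    // P(C = true | A, B), indexed [a][b] with false = 0, true = 1.
    static final double[][] P_C = {{0.2, 0.3}, {0.4, 0.6}};
    // P(D = true | B, C), indexed [b][c].
    static final double[][] P_D = {{0.7, 0.8}, {0.1, 0.5}};

    static int idx(boolean v) { return v ? 1 : 0; }
    static double pOf(double pTrue, boolean v) { return v ? pTrue : 1.0 - pTrue; }

    // Chain rule for this network: P(A,B,C,D) = P(A) P(B) P(C|A,B) P(D|B,C).
    static double joint(boolean a, boolean b, boolean c, boolean d) {
        return pOf(P_A, a)
             * pOf(P_B, b)
             * pOf(P_C[idx(a)][idx(b)], c)
             * pOf(P_D[idx(b)][idx(c)], d);
    }

    public static void main(String[] args) {
        // 0.8 * 0.1 * 0.8 * 0.3 = 0.0192
        System.out.println("P(all false) = " + joint(false, false, false, false));
    }
}
```

Queries like (iii)-(v) then follow by summing such joint values over the unmentioned variables, exactly as in Problem 1.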


Problem 3: Bayes' Rule (20 points)

Define the following two variables about people:

shot = got flu shot last semester
flu  = caught flu this semester
Assume we know from past experience that:

P(shot)       = 0.75
P(flu)        = 0.25
P(flu | shot) = 0.10

i.  Given someone did not get a shot, what is the probability he or she gets the flu?

ii. Given you find out someone has the flu, what's the probability he or she got a shot?

Be sure to show and explain your calculations for both parts (i) and (ii). Start by writing out the above questions as conditional probabilities.
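Part (i) hinges on the law of total probability: P(flu) = P(flu | shot) P(shot) + P(flu | ¬shot) P(¬shot), which can be solved for the unknown P(flu | ¬shot); part (ii) is a direct application of Bayes' rule. A sketch with made-up numbers (deliberately not the values given above):

```java
public class TotalProbabilityDemo {
    // Hypothetical numbers for illustration only (NOT the values in the problem).
    static final double P_SHOT = 0.6;            // P(shot)
    static final double P_FLU = 0.3;             // P(flu)
    static final double P_FLU_GIVEN_SHOT = 0.2;  // P(flu | shot)

    // Law of total probability:
    //   P(flu) = P(flu|shot) P(shot) + P(flu|not shot) P(not shot),
    // solved here for P(flu | not shot).
    static double fluGivenNoShot() {
        return (P_FLU - P_FLU_GIVEN_SHOT * P_SHOT) / (1.0 - P_SHOT);
    }

    // Bayes' rule: P(shot | flu) = P(flu|shot) P(shot) / P(flu).
    static double shotGivenFlu() {
        return P_FLU_GIVEN_SHOT * P_SHOT / P_FLU;
    }

    public static void main(String[] args) {
        System.out.println("P(flu | no shot) = " + fluGivenNoShot()); // (0.3 - 0.12) / 0.4
        System.out.println("P(shot | flu)    = " + shotGivenFlu());   // 0.12 / 0.3
    }
}
```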

In the general population, 5 in 100,000 people have the dreaded Senioritis disease. Fortunately, there is a test (test4it) for this disease that is 99.9% accurate. That is, if one has the disease, 999 times out of 1000 test4it will turn out positive; if one does not have the disease, 1 time out of 1000 the test will turn out positive.

iii. You take test4it and the results come back true. Use Bayesian reasoning to calculate the probability that you actually have Senioritis. That is, compute:

P(haveSenioritis = true | test4it = true)

You may use HS for haveSenioritis and T4 for test4it if you wish.
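The point of this exercise is base-rate neglect: even a very accurate test yields a small posterior when the condition is rare. A sketch with illustrative numbers (again, not the Senioritis figures above):

```java
public class RareDiseaseDemo {
    // Illustrative numbers only (not the figures in the problem):
    static final double PRIOR = 0.001;       // P(disease): 1 in 1,000
    static final double SENSITIVITY = 0.99;  // P(positive | disease)
    static final double FALSE_POS = 0.01;    // P(positive | no disease)

    // Bayes' rule: P(disease | positive) =
    //   P(pos|disease) P(disease)
    //   / [P(pos|disease) P(disease) + P(pos|no disease) P(no disease)]
    static double posterior() {
        double truePos = SENSITIVITY * PRIOR;
        double falsePos = FALSE_POS * (1.0 - PRIOR);
        return truePos / (truePos + falsePos);
    }

    public static void main(String[] args) {
        // A "99% accurate" test still gives a posterior under 10% here,
        // because the disease is so rare.
        System.out.println("P(disease | positive) = " + posterior());
    }
}
```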


Problem 4: Naïve Bayes and ‘Bag-of-Words’ Text Processing (10 points)

Sally has divided her books into two groups, those she likes and those she doesn't. For simplicity, assume no book contains a given word more than once.

The 11 books that Sally likes contain (only) the following words:

animal (4 times), mineral (9 times), see (10 times), eat (5 times)

The 18 books that Sally dislikes contain (only) the following words:

animal (17 times), mineral (1 time), vegetable (13 times), see (4 times), eat (1 time)

Using Bayes' Rule and the Naive Bayes assumption, determine whether it is more probable that Sally likes the following book than that she dislikes it:

see animal eat vegetable   // These words are the entire contents of this new book.

That is, compute the ratio:

Prob(Sally likes book | ‘see’ in book ˄ ‘animal’ in book ˄ ‘eat’ in book ˄ ‘vegetable’ in book)
_______________________________________________________________________________________________
Prob(Sally dislikes book | ‘see’ in book ˄ ‘animal’ in book ˄ ‘eat’ in book ˄ ‘vegetable’ in book)

Be sure to show and explain your work. Ensure that none of your probabilities are zero by starting all your counters at 1 instead of 0 (the counts above result from starting at 0, i.e., imagine that there is one more book Sally likes that contains each of the above words exactly once and also one more book she dislikes that also contains each of the above words exactly once).
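The general pattern, sketched here with a tiny hypothetical two-word vocabulary rather than Sally's data: estimate each P(word | class) with add-one smoothed counts, multiply those estimates together with the class prior under the Naive Bayes independence assumption, and compare the two products (the shared P(words) denominator cancels in the ratio):

```java
import java.util.Map;

public class NaiveBayesDemo {
    // Hypothetical counts of books containing each word (NOT Sally's data above).
    static final Map<String, Integer> LIKED = Map.of("cat", 3, "dog", 1);
    static final Map<String, Integer> DISLIKED = Map.of("cat", 1, "dog", 4);
    static final int N_LIKED = 5, N_DISLIKED = 6;

    // Add-one smoothed estimate of P(word appears | class): pretend one extra
    // book in the class contains every word exactly once.
    static double pWord(Map<String, Integer> counts, int nBooks, String word) {
        return (counts.getOrDefault(word, 0) + 1.0) / (nBooks + 1.0);
    }

    // Naive Bayes ratio P(like | words) / P(dislike | words); the shared
    // P(words) denominator cancels, leaving priors times word likelihoods.
    static double likeOdds(String... words) {
        double total = N_LIKED + N_DISLIKED;
        double like = N_LIKED / total, dislike = N_DISLIKED / total;
        for (String w : words) {
            like *= pWord(LIKED, N_LIKED, w);
            dislike *= pWord(DISLIKED, N_DISLIKED, w);
        }
        return like / dislike;
    }

    public static void main(String[] args) {
        double odds = likeOdds("cat", "dog");
        System.out.println("odds = " + odds
            + (odds > 1 ? "  -> predict like" : "  -> predict dislike"));
    }
}
```

For the real problem, the missing words (e.g. ‘mineral’ not appearing in the new book) can be ignored or modeled explicitly; state whichever choice you make.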


Problem 5: Case-Based Reasoning (10 points)

Imagine that we have the following examples, represented using four Boolean-valued features:

ex1: F1 = false  F2 = false  F3 = true   F4 = false  category = +
ex2: F1 = true   F2 = false  F3 = true   F4 = false  category = +
ex3: F1 = false  F2 = true   F3 = false  F4 = false  category = +
ex4: F1 = true   F2 = true   F3 = true   F4 = true   category = -
ex5: F1 = false  F2 = false  F3 = false  F4 = true   category = -
ex6: F1 = true   F2 = false  F3 = false  F4 = false  category = -

Assume we have the following test-set example, whose category we wish to estimate based on a case-based approach where we find the three (3) nearest neighbors and use the most common category among them as our estimate:

F1 = false  F2 = true  F3 = false  F4 = true   category = ?

As a similarity function, count the number of features that have the same value. Determine and explain:

1. What is the similarity of the test example to each of the six training examples?

2. What are the three nearest neighbors?

3. What category should be predicted for the test example?
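The procedure, sketched on a small hypothetical training set (not the six examples above): score each training example by the number of matching feature values, take the three highest scorers, and output their majority category:

```java
import java.util.Arrays;
import java.util.Comparator;

public class NearestNeighborDemo {
    // Hypothetical training examples: four Boolean features plus a '+'/'-' label.
    static final boolean[][] FEATURES = {
        {true, true, false, false},
        {true, false, false, false},
        {false, true, true, false},
        {false, false, true, true},
    };
    static final char[] LABELS = {'+', '+', '-', '-'};

    // Similarity = number of features with the same value.
    static int similarity(boolean[] a, boolean[] b) {
        int same = 0;
        for (int i = 0; i < a.length; i++) if (a[i] == b[i]) same++;
        return same;
    }

    // Majority vote among the k most similar training examples.
    static char predict(boolean[] test, int k) {
        Integer[] order = {0, 1, 2, 3};
        Arrays.sort(order, Comparator.comparingInt(i -> -similarity(test, FEATURES[i])));
        int plus = 0;
        for (int j = 0; j < k; j++) if (LABELS[order[j]] == '+') plus++;
        return plus > k - plus ? '+' : '-';
    }

    public static void main(String[] args) {
        boolean[] test = {true, true, false, true};
        System.out.println("predicted category: " + predict(test, 3));
    }
}
```

Note that ties in similarity can occur (as they do for the actual six examples); say how you break them.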


Problem 6: Creating Probabilistic Reasoners that Play Nannon (50 points)

This problem involves writing Java code that implements two probabilistic reasoners to play the two-person board game called Nannon (http://nannon.com), which is a simplified version of the well-known game Backgammon (http://en.wikipedia.org/wiki/Backgammon). Instructions for Nannon are available at http://nannon.com/rules.html. Sample game traces have been posted to Moodle, in the News Forum on March 4, 2013.

Here is how we will formulate the task. At each turn, whenever there is more than one legal move, your chooseMove method will be given:

1) The current board configuration.

2) A list of legal moves; each move's effect is also provided, and you are able to determine the next board configuration from each move. (Explanations of which effects are computed for you appear in the chooseMove method of the provided RandomNannonPlayer and in the ManageMoveEffects.java file.)

Your chooseMove method needs to return one of the legal moves. It should do this by using Bayes’ Rule to estimate the odds each move will lead to a winning game, returning the one with the highest odds. That is, it should compute for each possible move:

Prob(will win game | current board, move, next board, and move’s effect)
________________________________________________________________________
Prob(will lose game | current board, move, next board, and move’s effect)

Your solution need not use ALL of these givens to estimate these probabilities, and you can choose to define whichever random variables you wish from the provided information. The specific design is up to you and we expect each student’s solution to be unique.
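One simple way to organize the bookkeeping is to keep win and loss counters for each value of your chosen random variable(s) and compare smoothed odds at move-selection time. The sketch below only illustrates that pattern and is not a required design: the single discretized ‘bucket’ feature is purely hypothetical, and a real player would describe boards by whatever variables you define.

```java
public class WinLossOddsDemo {
    // Hypothetical feature: a discretized description of the board after a
    // move, mapped to an index in [0, NUM_BUCKETS).
    static final int NUM_BUCKETS = 10;

    // Counters start at 1 (add-one smoothing) so no estimate is ever zero.
    final double[] winCounts = new double[NUM_BUCKETS];
    final double[] lossCounts = new double[NUM_BUCKETS];
    double wins = 0, losses = 0;

    WinLossOddsDemo() {
        java.util.Arrays.fill(winCounts, 1.0);
        java.util.Arrays.fill(lossCounts, 1.0);
    }

    // Called once per move of a finished game: every move of a won game is
    // counted as good, every move of a lost game as bad.
    void record(int bucket, boolean gameWon) {
        if (gameWon) { winCounts[bucket]++; wins++; }
        else         { lossCounts[bucket]++; losses++; }
    }

    // Odds ratio P(win | bucket) / P(lose | bucket). By Bayes' rule this is
    // [P(bucket|win) P(win)] / [P(bucket|lose) P(lose)]; P(bucket) cancels.
    double odds(int bucket) {
        double pBucketGivenWin  = winCounts[bucket]  / (wins + NUM_BUCKETS);
        double pBucketGivenLoss = lossCounts[bucket] / (losses + NUM_BUCKETS);
        double pWin = (wins + 1) / (wins + losses + 2);
        return (pBucketGivenWin * pWin) / (pBucketGivenLoss * (1 - pWin));
    }

    public static void main(String[] args) {
        WinLossOddsDemo model = new WinLossOddsDemo();
        model.record(7, true);   // a move from a won game
        model.record(2, false);  // a move from a lost game
        System.out.println("odds(bucket 7) = " + model.odds(7));
        System.out.println("odds(bucket 2) = " + model.odds(2));
    }
}
```

chooseMove would then score the board resulting from each legal move this way and return the move with the highest odds.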

You need to create two solutions. In one, you will create a full joint probability table. In the other you will create a Bayesian Network (one that is not equivalent to your full joint probability table). It is up to you to decide the specific random variables used and, for the Bayesian Network, which conditional independence assumptions you wish to make. The random variables in your two solutions need not be the same.

You need to place your two solutions in these files, where … is your actual Moodle login: FullJointPro…

Copy all the files in http://pages.cs.wisc.edu/~shavlik/cs540/HWs/HW3/.

The provided PlayNannon.java, NannonPlayer.java, and RandomNannonPlayer.java files contain substantial details on what you need to do. You should start by reading the comments in them; I suggest you read the files in the order they appear in the previous sentence.

So how do you get the necessary information to compute these probabilities? After each game, a method of your player is given information about the sequence of board configurations encountered and the moves chosen by your player in that game, as well as whether or not your player won that game. You should not try to figure out which moves were good or bad in any one specific game; instead, if a game was won, all moves in it should be considered good (i.e., led to a win), and if a game is lost, all moves should be considered bad (led to a loss). Obviously some moves in losing games were good and vice versa, but because we are using statistics, we are robust to this ‘noise.’

Your two players need to implement the reportLearnedModel method, which reports the value of the random variable (or values of the combination of random variables) where the following ratio is largest (i.e., most indicative of a win) and smallest (i.e., most indicative of a loss):

prob(randomVariable(s) | win) / prob(randomVariable(s) | loss)

For your full-joint-prob table, randomVariable(s) should be a setting of all the random variables other than the ‘win’ variable (i.e., loss = ¬win). For your Bayes Net approach, randomVariable(s) should be one of the entries in the product of probabilities you compute. (Recall that if we want to allow some dependencies among our random variables, the product in a Bayes Net calculation will include something like p(A | B ˄ win) × p(B | win), which is equivalent to p(A ˄ B | win), as explained in class.)

The reportLearnedModel method is automatically called (by code we have written) after a run of k games completes when Nannon.reportLearnedModels is set to true.

It is fine to print more about what was learned, but the above requested information should be
easy to find in your printout.

What needs to be turned in, in addition to the two Java files listed above (be sure to comment your code well):

A report (as part of your HW3.pdf file) containing:

1. A description of your design for your FullJointProbTablePlayer; should be 1-2 pages. Including an illustrative picture is highly recommended (the picture can be hand drawn).

2. A description of your design for your BayesNetPlayer; should be 1-2 pages. An illustrative picture is required (the picture can be hand drawn).

3. A 1-2 page report presenting and discussing how your two players performed against each other, as well as against the provided players (see the PlayNannon.java file). Be sure to include a table of your empirical results (i.e., do not ‘bury’ your results inside an ordinary sentence, because that will make them harder for the reader to understand).

If you do not know how to combine the earlier parts of HW3 and this report into one PDF file, please ask one of the course instructors for help.

Note that we will test your BayesNetPlayer under various conditions. The PlayNannon.java file provides details. Also note that you should not modify any of the provided Java files, since when we test your solution we will be using the original versions of these files.

A class-wide tournament will be conducted soon after this homework is due. It will be for fun and a learning experience (e.g., what did the winner(s) do that worked so well?); performance in this tournament will not be part of your grade on this homework.

Feel free to create players based on genetic algorithms and/or a nearest-neighbor approach (or even decision trees/forests), but that is not required, nor will any extra credit be given. I will be quite happy to talk to students about such approaches, though.

Be sure to monitor the HW3 Forum in Moodle for suggestions, extra information, and (hopefully very rarely, if at all) bug fixes.