A Variational Approach to Blind Image Deconvolution


Rob Fergus

Courant Institute of Mathematical Sciences

New York University


Overview

Much recent interest in blind deconvolution:
- Levin '06, Fergus et al. '06, Jia et al. '07, Joshi et al. '08, Shan et al. '08, Whyte et al. '10

This talk:
- Discuss the Fergus '06 algorithm
- Try to give some insight into the variational methods used

Removing Camera Shake from a Single Photograph

Rob Fergus, Barun Singh, Aaron Hertzmann,

Sam T. Roweis and William T. Freeman

Massachusetts Institute of Technology

and

University of Toronto

Image formation process

Blurry image (input to algorithm) = Sharp image (desired output) ⊗ Blur kernel,
where ⊗ denotes the convolution operator.

The model is an approximation: assume a static scene and a constant blur across the image.
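The formation model can be sketched numerically. Below is a minimal illustration (sizes, kernel shape, and noise level are arbitrary assumptions), using circular convolution via the FFT as a simplification of the linear blur model:

```python
import numpy as np

# Minimal numeric sketch of y = x (*) b + n on a synthetic image.
# Sizes, kernel shape, and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
x = rng.random((32, 32))                 # sharp image (desired output)

# Horizontal motion-blur kernel: positive, normalized to sum to 1,
# embedded in an image-sized array so we can convolve via the FFT.
b = np.zeros((32, 32))
b[0, :5] = 1.0 / 5.0

# Circular convolution (a common simplification of the blur model).
blur = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(b)))

n = 0.01 * rng.standard_normal(x.shape)  # additive Gaussian sensor noise
y = blur + n                             # blurry image (input to algorithm)
```

Because the kernel sums to 1, blurring preserves the image's mean intensity; only the noise perturbs it.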


Why such a hard problem?

Blurry image = Sharp image ⊗ Blur kernel

The problem is ill-posed: many different sharp image / blur kernel pairs are consistent with the same blurry image.

Statistics of gradients in natural images

Histogram of image gradients (log # pixels vs. gradient value): a characteristic distribution with heavy tails.

An image gradient is the difference between spatially adjacent pixel intensities.

Blurry images have different statistics

Histogram of image gradients for a blurry image (log # pixels vs. gradient value): the distribution differs from the sharp-image case.

Parametric distribution

Use a parametric model of sharp image statistics, fit to the histogram of image gradients (log # pixels vs. gradient value).
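A quick sketch of how such a histogram is formed. The random array here is only a stand-in for an image (a real photograph would show the heavy-tailed shape):

```python
import numpy as np

# Sketch: image gradients are differences of spatially adjacent pixel
# intensities; their histogram gives the "log # pixels" curve on the slide.
# A random array stands in for an image here (a real photo would show the
# heavy-tailed shape).
rng = np.random.default_rng(1)
img = rng.random((64, 64))

gx = img[:, 1:] - img[:, :-1]            # horizontal adjacent-pixel differences
gy = img[1:, :] - img[:-1, :]            # vertical adjacent-pixel differences
grads = np.concatenate([gx.ravel(), gy.ravel()])

counts, edges = np.histogram(grads, bins=41, range=(-1, 1))
log_counts = np.log(counts + 1)          # the slide's "log # pixels" axis
```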

Three sources of information

1. Reconstruction constraint:
   Input blurry image = Estimated sharp image ⊗ Estimated blur kernel

2. Image prior: distribution of gradients

3. Blur prior: positive & sparse

Three sources of information

y = observed image, b = blur kernel, x = sharp image

Posterior:  p(b, x | y) = k p(y | b, x) p(x) p(b)
Three sources of information

y = observed image, b = blur kernel, x = sharp image

Posterior:  p(b, x | y) = k p(y | b, x) p(x) p(b)

1. Likelihood (reconstruction constraint): p(y | b, x)
2. Image prior: p(x)
3. Blur prior: p(b)
Three sources of information

y = observed image, b = blur kernel, x = sharp image

1. Likelihood (reconstruction constraint):

p(y | b, x) = ∏_i N(y_i | (x ⊗ b)_i, σ²),   i = pixel index
Overview of model

2. Image prior: p(x) is a mixture of Gaussians fit to the empirical distribution of image gradients.

3. Blur prior: p(b) is an exponential distribution that keeps the kernel positive & sparse.
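To make the three terms concrete, here is a scalar sketch of each log-density. The mixture weights, variances, and exponential rate are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

# Scalar sketches of the three model terms. The mixture weights, variances,
# and exponential rate below are illustrative placeholders, not the paper's
# fitted values.

def log_likelihood(y, x, b, sigma2=0.1):
    # 1. Reconstruction constraint: y ~ N(x*b, sigma2) (scalar stand-in
    # for the per-pixel Gaussian around the convolution x (*) b).
    return -0.5 * np.log(2 * np.pi * sigma2) - (y - x * b) ** 2 / (2 * sigma2)

def log_image_prior(g, weights=(0.6, 0.4), variances=(0.01, 1.0)):
    # 2. Image prior: mixture of zero-mean Gaussians on a gradient value g.
    comps = [w * np.exp(-g ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
             for w, v in zip(weights, variances)]
    return np.log(sum(comps))

def log_blur_prior(b, lam=1.0):
    # 3. Blur prior: exponential density; its positive support and decay
    # encourage positive & sparse kernel elements.
    return np.log(lam) - lam * b if b >= 0 else -np.inf

# Unnormalized log-posterior for one toy setting of (y, x, b).
log_post = log_likelihood(2.0, 1.0, 2.0) + log_image_prior(1.0) + log_blur_prior(2.0)
```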

The obvious thing to do

- Combine the 3 terms into an objective function
- Run conjugate gradient descent
- This is Maximum a Posteriori (MAP)

No success!

Posterior:  p(b, x | y) = k p(y | b, x) p(x) p(b)

1. Likelihood (reconstruction constraint)
2. Image prior
3. Blur prior
Variational Independent Component Analysis

- Binary images
- Priors on intensities
- Small, synthetic blurs
- Not applicable to natural images

Miskin and Mackay, 2000

Variational Bayes

Variational Bayesian approach: keep track of uncertainty in the estimates of the image and blur by using a distribution instead of a single estimate.

Toy illustration: optimization surface for a single variable (score vs. pixel intensity), with the Maximum a Posteriori (MAP) point marked.

Simple 1-D blind deconvolution example

y = b x + n

y = observed image, b = blur, x = sharp image, n = noise ~ N(0, σ²)

Let y = 2 and σ² = 0.1.

Likelihood:  p(y | b, x) = N(y | b x, σ²)

Posterior:  p(b, x | y) = k p(y | b, x) p(x) p(b)
Gaussian prior on the sharp image:  p(x) = N(x | 0, 2)

Posterior:  p(b, x | y) = k p(y | b, x) p(x) p(b)
Marginal distribution p(b|y)

p(b | y) = ∫ p(b, x | y) dx = k ∫ p(y | b, x) p(x) dx

(Plot: Bayes p(b|y) vs. b over [0, 10].)
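For this toy model the integral can be done in closed form: since y = b x + n with x ~ N(0, 2) and n ~ N(0, σ²), marginalizing x gives y | b ~ N(0, 2b² + σ²). A short numerical check, assuming a flat prior on b over the plotted range:

```python
import numpy as np

# Numerical version of the toy marginal: y = b*x + n, x ~ N(0, 2),
# n ~ N(0, 0.1), observed y = 2. Marginalizing x gives y | b ~ N(0, 2b² + 0.1),
# so p(b|y) ∝ N(y; 0, 2b² + 0.1) under a flat prior on b over the plotted
# range (an assumption of this sketch).
y, sigma2 = 2.0, 0.1
b = np.linspace(0.0, 10.0, 1001)
db = b[1] - b[0]

var = 2.0 * b ** 2 + sigma2
lik = np.exp(-y ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

post = lik / (lik.sum() * db)    # normalized p(b|y) on the grid
b_mode = b[np.argmax(post)]      # peak of the marginal, near sqrt((y² - σ²)/2)
```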
MAP solution

Highest point on the surface:

(b̂, x̂) = argmax_{b, x} p(x, b | y)
Variational Bayes

- True Bayesian approach not tractable
- Approximate the posterior with a simple distribution q(x, b)
Fitting the posterior with a Gaussian

The approximating distribution q(x, b) is Gaussian.

Minimize  KL( q(x, b) || p(x, b | y) )
KL-distance vs. Gaussian width

(Plot: KL(q||p) as a function of the Gaussian width.)
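A curve of this kind is easy to reproduce numerically: fix the mean of a Gaussian q, sweep its width, and evaluate KL(q||p) on a grid. The target p below is an arbitrary two-component mixture chosen for illustration, not the talk's actual posterior surface:

```python
import numpy as np

# Sweep the width of a fixed-mean Gaussian q and evaluate KL(q || p) on a
# grid. The target p is an arbitrary two-component mixture chosen for
# illustration, not the talk's actual posterior surface.
t = np.linspace(-10.0, 10.0, 4001)
dt = t[1] - t[0]

def normal(u, mu, var):
    return np.exp(-(u - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

p = 0.7 * normal(t, 0.0, 1.0) + 0.3 * normal(t, 4.0, 0.25)

widths = np.linspace(0.3, 5.0, 95)
kl = np.array([
    np.sum(normal(t, 0.0, w ** 2)
           * (np.log(normal(t, 0.0, w ** 2) + 1e-300) - np.log(p + 1e-300)))
    * dt
    for w in widths
])
best_w = widths[np.argmin(kl)]   # width minimizing KL(q||p)
```

Because KL(q||p) penalizes q for putting mass where p has little, the best width hugs the local mode rather than spanning both components, which is exactly why this direction of the KL gives compact, mode-seeking fits.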
Variational Approximation of Marginal

(Plot: p(b|y) vs. b, comparing the variational approximation with the true marginal and the MAP point.)

Try sampling from the model

Let true b = 2. Repeat:
- Sample x ~ N(0, 2)
- Sample n ~ N(0, σ²)
- Form y = x b + n
- Compute p_MAP(b|y), p_Bayes(b|y) & p_Variational(b|y)
- Multiply with the existing density estimates (assume i.i.d.)
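A sketch of that experiment for the first two scores (illustrative code, not the talk's implementation; the variational density would need the full VB machinery and is omitted). Accumulating log-scores over i.i.d. draws, the marginal "Bayes" score concentrates near the true b, while this x-maximized joint "MAP"-style score drifts toward ever-larger b:

```python
import numpy as np

# Monte Carlo sketch of the slide's experiment (illustrative, not the talk's
# exact code). True b = 2; each draw scores a grid of b values with (a) the
# marginal "Bayes" density, x integrated out, and (b) an x-maximized joint
# "MAP"-style score; log-scores are summed across i.i.d. draws.
rng = np.random.default_rng(42)
sigma2, true_b = 0.1, 2.0
b = np.linspace(0.01, 10.0, 500)

log_bayes = np.zeros_like(b)
log_map = np.zeros_like(b)
for _ in range(1000):
    x = rng.normal(0.0, np.sqrt(2.0))
    y = x * true_b + rng.normal(0.0, np.sqrt(sigma2))

    # Bayes: integrating x out gives y | b ~ N(0, 2 b² + σ²).
    var = 2.0 * b ** 2 + sigma2
    log_bayes += -0.5 * np.log(2 * np.pi * var) - y ** 2 / (2 * var)

    # MAP-style: plug in the x maximizing p(y|b,x) p(x) for each b.
    x_hat = (y * b / sigma2) / (b ** 2 / sigma2 + 0.5)
    log_map += -(y - x_hat * b) ** 2 / (2 * sigma2) - x_hat ** 2 / 4.0

b_bayes = b[np.argmax(log_bayes)]   # concentrates near the true b = 2
b_map = b[np.argmax(log_map)]       # drifts to the largest b on the grid
```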


Actual Setup of Variational Approach

Work in the gradient domain:

x ⊗ b = y   →   ∇x ⊗ b = ∇y

Approximate the posterior p(∇x, b | ∇y) with q(∇x, b).

Assume q(∇x, b) = q(∇x) q(b), where q(∇x) is Gaussian on each pixel and q(b) is a rectified Gaussian on each blur kernel element.

Minimize  KL( q(∇x) q(b) || p(∇x, b | ∇y) )

Cost function



(Result figures: close-ups of the original and our output; original photographs alongside our output, with the recovered blur kernel shown for each.)

Recent comparison paper

(Chart: percent success, higher is better.)

IEEE CVPR 2009 conference

Related Problems

Bayesian Color Constancy (Brainard & Freeman, JOSA 1997)

- Given color pixels, deduce:
  - the reflectance spectrum of the surface
  - the illuminant spectrum
- Uses a Laplace approximation, similar to a Gaussian q(·)

(Figure from Foundations of Vision by Brian Wandell, Sinauer Associates, 1995)

Conclusions

- Variational methods seem to do the right thing for blind deconvolution.
- Interesting from an inference point of view: a rare case where Bayesian methods are needed.
- Can potentially be applied to other ill-posed problems in image processing.