Adaptive 3D Reconstruction System with Improved Recovery of Miscoded Region to Automatically Adjust Projected Light Patterns

H.-J. Chien¹, C.-Y. Chen¹, C.-F. Chen², Y.-M. Su² and Y.-Y. Chang²

¹Computer Science & Information Engineering, National University of Kaohsiung, Taiwan.
²Electrical Engineering, I-Shou University, Taiwan.

Email: ayen@nuk.edu.tw

Abstract

The paper proposes an adaptive system for 3D object reconstruction which analyzes the shape and colours of the object and automatically designs suitable projection patterns for more effective reconstruction. The system is also able to detect regions with details and adaptively provides higher resolution reconstructions for such regions. A method that interpolates the ambiguous or miscoded regions based on the monotonicity of the camera-projector correspondence relationship is also proposed to improve the accuracy of the system. Experiments have been performed to demonstrate the effectiveness of the proposed approaches and to show that the proposed system is able to overcome inflexibilities in traditional structured lighting approaches.

Keywords: adaptive 3D reconstruction, 3D shape recovery, structured lighting, coded light pattern


1 Introduction

Advances in recent technology have enabled more accurate, safe and efficient 3D measurement systems to be developed. These systems may acquire 3D measurements of scenes and/or objects via information extracted from image sensors, range finders, probes, or other types of sensors. Approaches such as structured lighting, binocular stereo, shape from shading, or other computer vision techniques are generally safer than most other methods, since they are non-tactile and use undisruptive equipment for data acquisition. Among computer-vision-based 3D measurement approaches, structured lighting is currently considered the most accurate [1], [3], [7], [8].

Although there are various systems for acquiring 3D information of an object using the structured lighting method, most of them still use simple, monochrome line or grid patterns, often requiring a large number of input images [4], [9], [10], [11], [12], [13]. Furthermore, traditional structured lighting systems cannot adaptively adjust the projected patterns based on the characteristics of the object's surface, and thus may be unable to provide satisfactory reconstructions in regions with fine details or combinations of colours. With the traditional structured lighting approach, physical movement of the system, such as the slight shifts or displacements associated with changing of projected light patterns, may cause measurement errors [14], [15], [16]. Also, the light patterns are usually projected onto the surface of the object by placing various masks in front of the light source. Hence, the projected light patterns are fixed unless the masks are physically changed, which may then lead to disturbances of the system. In addition, the light source is often of a single colour, which may not be clearly visible when it is projected onto objects with a multitude of colours. Such inflexibilities in previous structured lighting systems need to be overcome in order to provide effective 3D reconstruction for objects with various colours and details.

The paper proposes a method and system for adaptive reconstruction of 3D shapes in multiple resolutions. In particular, the proposed method aims to address the mentioned drawbacks of existing structured lighting systems for 3D object reconstruction by automatically determining regions with fine details, as well as the colours of object surfaces, and adaptively adjusting the projected light patterns and colours to improve the quality of reconstruction. As a result, target regions with fine details or different colours may be reconstructed with higher resolution and accuracy.

The paper also proposes an approach for handling the camera-projector correspondence in the structured lighting system to reduce errors caused by uncertainties at the boundary between illuminated and dark regions. Once the camera-projector correspondence relationship has been determined, it is then used to find the distance between the camera and points on the object surface from the acquired images. By analyzing the acquired images, the system further determines suitable patterns and colours for projection onto the object surface, as well as locates regions with details for higher resolution reconstruction. From the analysis, new light patterns with suitable geometry and colours are projected using the colour data projector. Thus the proposed system is able to avoid the drawbacks of inflexible light patterns in existing systems, and is more effective and accurate in acquiring the 3D surface information for a wider range of applications.

2 System setup

The proposed system uses 3D triangulation to find the angles between the camera, projector and the object. The physical setup and schematics of the system are described in this section.

2.1 Physical setup

To set up the system, the camera is placed in front of the object to be reconstructed, and the data projector is placed on the same baseline as the camera. The center of the image is assumed to be the origin of the 3D coordinate system, with the data projector on the X-axis. The setup of the system is shown in figure 1.

Figure 1: 3D relationships between the camera, data projector and the object.

As shown in figure 1, the camera is pointing towards the positive Z direction, with the focal length, f, along the Z-axis. The image plane is aligned with the XY-plane, and the projected point, p, has coordinates (x,y) in the acquired image. The coordinates of point P are (X0,Y0,Z0), and the relationship between the point in 3D space and the projected point is given by equation (1).

(1)

The angles between the object, projector and the camera are respectively represented by α, β, and γ. The distance between the projector and the camera is given by k. The distance d between the camera and the point P is derived and given by equation (2) [6].

(2)

By rearranging equations (1) and (2), the coordinates of the 3D point P, (X0,Y0,Z0), can be calculated as follows.

(3)

2.2 System schematics

Once the relationships between the parameters have been obtained using 3D triangulation, the proposed reconstruction system is set up and ready to execute the tasks shown in figure 2.

Figure 2: Schematics of the 3D reconstruction system.

The computer first transmits a set of designated light patterns to the projector and synchronizes the camera to acquire an initial series of images of the object. The acquired images are then processed in real time to analyze the shape and the colour(s) of the object and allow the system to adaptively adjust the resolution and the colours of the projected light patterns. At the same time, the computer may also control the turntable to rotate the object and acquire images of the object from different directions, such that a 360° 3D model can be reconstructed.

3 Establishing correspondence

Accurate correspondence between the camera and projector is crucial in any structured lighting system. Although some parameters may be provided by the device itself, for example the focal length of the camera, calibration of the system is still necessary to establish accurate correspondence between the camera and the projector.

In addition, a major concern in the application of a structured lighting system is the uncertainty in the boundary region between an illuminated and a dark area. Such regions may often be wrongly coded due to gradually changing intensity values. As demonstrated in figure 3(c), the binary image does not correctly give the boundary of the light stripes. Moreover, while the data projector used in the system is safer and more flexible, it has less power and more inconstant optical characteristics than a laser projector. Hence it is important that the correspondence for the system is accurately determined.


(a) (b) (c)

Figure 3: (a) original image with projected patterns, (b) boundary between dark and illuminated stripes and (c) result after binarization.

3.1 Monotonicity of camera-projector correspondence

Under normal applications, it is intuitive to assume that the camera-projector correspondences are monotonic, as shown in figure 4. Although this may not always be the case, exceptions will be further investigated in future work.


Figure 4: Ordering of points on projection plane and image plane is assumed to be monotonic.

Under the perspective projection and geometrical assumptions, the camera-projector correspondence relationship satisfies monotonicity; this is also known as the strict ordering property. Therefore, the recovery of correspondences in boundary regions can be performed by enforcing a monotonically increasing mapping. The problem can be formulated as follows. Given a discrete function F whose values are known in the integer interval [a,b], find the largest subset S ⊆ [a,b] on which F is non-decreasing, that is, for all i, j ∈ S, i ≤ j implies F(i) ≤ F(j). This problem can be solved by applying a modified longest increasing subsequence (LIS) finder to retrieve the longest non-decreasing subsequence [5], [2].

3.2 Implementation of the recovery procedure

The proposed recovery procedure for the horizontal direction is implemented with the pseudo code given in figure 5. In the implementation, decreasing values in the camera-projector pixel correspondence are interpolated from neighbouring valid values.

Figure 5: Pseudo code of the proposed correspondence recovery procedure.

Figure 6(a) shows the depth map of a recovered cup without the proposed method to determine correct correspondence in boundary regions. The vertical stripes on the cup indicate errors in recovered depth values. In figure 6(b), the proposed correspondence recovery method was used, giving a depth map with visibly fewer errors.


(a) (b)

Figure 6: Visualisation of recovered correspondence of cup (a) without proposed correspondence recovery and (b) with the proposed correspondence recovery.

The proposed approach is further evaluated by randomly selecting one thousand test spots in the image plane. The original correspondences (straightforward mapping without the proposed method) and the improved correspondences are obtained respectively by projecting series of plain-coded and Gray-coded patterns onto the projection plane. The absolute Euclidean differences between sensed and expected pixel positions are calculated to give the error measure.

Figure 7 shows the errors for the improved correspondences in comparison with the original approach. As seen from figure 7, the error is reduced when the camera-projector correspondence is determined using the proposed approach.

The pseudo code shown in figure 5 is as follows:

    FastRecovery(CpPixMappingX)
      For J = 1 to ImageHeight
        A = FindLNDS(CpPixMappingX[1:ImageWidth][J])
        K = 1
        For I = 1 to ImageWidth - 1
          If I is not in A
            P = CpPixMappingX[A[K]][J]
            Q = CpPixMappingX[A[K+1]][J]
            R = P + (Q - P) * (I - A[K]) / (A[K+1] - A[K])
            CpPixMappingX[I][J] = R
          Else
            K = K + 1
          End If
        End For
      End For
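Assuming the set of column indices kept by the LNDS step is already available, the interpolation step of the recovery procedure can be sketched for one image row as follows. The names and the exact linear-interpolation form are illustrative, not taken verbatim from figure 5.

```python
def interpolate_miscoded(row, keep):
    """Fill miscoded entries of one correspondence row (sketch of figure 5).

    `row` holds the camera-to-projector x-correspondence for one image row.
    `keep` lists, in increasing order, the column indices retained by the
    longest non-decreasing subsequence step; every other entry is replaced
    by linear interpolation between its nearest kept neighbours.
    """
    out = list(row)
    for a, b in zip(keep, keep[1:]):
        step = (row[b] - row[a]) / (b - a)      # slope between kept neighbours
        for i in range(a + 1, b):
            out[i] = row[a] + step * (i - a)    # linear interpolation
    return out
```

For example, with `row = [10, 12, 11, 16]` and kept indices `[0, 1, 3]`, the miscoded entry 11 is replaced by the interpolated value 14, restoring a monotonic row.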


Figure 7: Error between the original and improved correspondences.

4 Adaptive adjustment of colours and patterns

This section describes how the system automatically determines suitable colours and patterns for projection based on the object's surface to provide optimal recognition of the projected light patterns.

4.1 Colour selection

The Macbeth ColorChecker is used to calibrate the colour sensitivity of the camera and to demonstrate how different coloured surfaces may have different reflectance properties for the same coloured light. It is also used to generate a lookup table which determines the colour of projected light spots according to the colour of the surface for better pattern identification.

Figure 8 shows red stripes being projected onto the colour checker. It can be seen that the colour stripes are less distinguishable on darker colours, such as brown, purple, or black, and also on colours with significantly higher red components, such as red, pink or orange.


Figure 8: Projection of red coloured light stripe on the colour checker.
.

Since it is impractical to assume that the object to be reconstructed has only a single colour, the proposed system is designed to recognize the colours on the surface of the object and project the suitable colour pattern to improve the visibility of the projected light stripes.
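A minimal sketch of such a colour-selection rule follows. The paper builds its lookup table from Macbeth ColorChecker measurements; this toy version, an assumption for illustration only, simply projects the per-channel complement of the measured surface colour so that the projected stripe contrasts with the surface.

```python
def projection_colour(surface_rgb):
    """Choose a projection colour visible against `surface_rgb`.

    Illustrative stand-in for the ColorChecker-derived lookup table of
    section 4.1: the complementary colour maximises per-channel contrast,
    e.g. cyan light on a red surface. Channels are 0-255 integers.
    """
    r, g, b = surface_rgb
    return (255 - r, 255 - g, 255 - b)
```

In the real system this rule would be replaced by the calibrated table, which also accounts for the camera's measured colour sensitivity.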

4.2 Design of light patterns

To acquire the shape of an unknown object, an initial image is taken to analyze its silhouette and possible regions, that is, regions defined by differences in geometry or colours. The shape and regions on the object are then used to design the preliminary projection pattern. Figure 9 shows an example of a coloured cube and its edge image used for region generation.


(a)



(b)

Figure

9
:

A coloured
cube orange
-
red, white, cyan,
and yellow colours (a) with external illumination (b)
an edge image obtained from (a).

A suitable light pattern is designed for each detected object region, such that the designed pattern will be projected onto the designated region. Images of the object with the projected pattern are acquired and the 3D coordinates of the object regions are calculated.

Once the initial set of 3D coordinates is obtained, the values are analyzed with respect to variance in depth values. If the depth values for a given region have a high variance, the system classifies the surface region as being non-uniform with fine details, which may require reconstruction in higher resolution. The system then refines its design of light patterns to project patterns with closer intervals, such that more points in the regions may be reconstructed, and the region may bear a closer resemblance to the original object's surface.
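The variance-based refinement decision described above can be sketched as follows. The threshold and the interval-halving policy are illustrative assumptions; the paper does not state concrete values.

```python
import statistics

def needs_refinement(depths, var_threshold=4.0):
    """Sketch of the refinement test in section 4.2.

    A region whose sampled depth values have high variance is treated as
    non-uniform (fine details) and queued for a denser projection pattern.
    The threshold value is an illustrative assumption.
    """
    return statistics.pvariance(depths) > var_threshold

def refine_interval(interval, factor=2):
    """Tighten the projection interval for a flagged region (illustrative)."""
    return max(1, interval // factor)
```

A flat region such as depth samples [10.0, 10.1, 9.9] would be left at its current resolution, while a region with samples [5.0, 12.0, 20.0] would be re-projected with a tighter stripe interval.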


(a) (b)

Figure 10: (a) Initial points projected onto regions on the surface and (b) refined patterns for each region.

The image in figure 10(a) is acquired with a preliminary light pattern designed to project a point onto each of the nine regions identified from figure 9(b). At this stage, the system can estimate the depth values of each region from the projected points. Figure 10(b) shows a refined light pattern with more points projected into each region. The image in figure 10(b) will provide more information about the surface of the object than figure 10(a).

5 Experimental results

A coloured Rubik's cube has been used in the experiment for various reasons. Most significantly, the cube has colourful surfaces with distinct regions, and it is also able to change the offset angles between each layer. Hence the cube has been used to test the performance of the proposed system.

According to the procedures described in section 4, the adaptive adjustment of light patterns and reconstruction forms the main loop of the proposed system. Each region on the surface of the cube is reconstructed adaptively until all of the regions have been reconstructed to satisfactory resolutions.



Figure 11: Images of a coloured Rubik's cube with different viewing angles with projected light patterns.

Figure 11 shows sample images for a coloured Rubik's cube at two viewing positions 60° apart. The faces of the cube are rotated to provide each image with different depth values and colours. Figure 11 also shows one of the light patterns used for projection onto the different colour regions. The cube is reconstructed from six directions with 60° between each viewing direction. Intermediate results obtained from the projected patterns are shown in figure 12; lines with different colours and markers represent different layers on the cube.


(a)



(b)

Figure

12
:

Intermediate reconstructions of the
Rubik’s cube

in lines

(a) top view (b) side view.

From figure 12(a), we are able to compare the angle of rotation between each layer of the cube. By joining the data points represented by the markers, it has been found that the corners on each layer are at 90° angles. Furthermore, we are able to estimate the dimensions of the reconstructed 3D cube model using the parameters obtained from calibration. It has been found that the measured dimensions match the physical dimensions of the cube. The 3D cube is visualized in figure 13 in solid colour.

The proposed system is still being further refined and the work is in progress. Nevertheless, at this stage, the experimental results show promising outcomes.


Figure 13: Reconstructed 3D Rubik's cube.

6 Conclusions

An effective and flexible method and system for adaptive 3D model reconstruction based on structured lighting approaches has been proposed in this paper. In particular, the system is able to automatically determine the colours and geometry of the object and design suitable light patterns for projection onto the surface of the object. Furthermore, the system has a feedback loop to analyze depth values obtained for different regions on the object, such that regions with more details may be reconstructed in higher resolutions.

Overall, the proposed method and system provides a more flexible and refined approach to 3D measurement and reconstruction. Nevertheless, we hope to acquire a high accuracy measurement apparatus, for example a laser range finder, to establish the accuracy of our system, as well as perform more tests to verify our results.


References

[1] J. Batlle, E. Mouaddib, and J. Salvi, "Recent Progress in Coded Structured Light as a Technique to Solve the Correspondence Problem: A Survey", Pattern Recognition, vol. 31(7), pp. 963-982, 1998.

[2] L. Bergroth, H. Hakonen, and T. Raita, "A survey of longest common subsequence algorithms", SPIRE'2000, pp. 39-48, 2000.

[3] D. Caspi, N. Kiryati, and J. Shamir, "Range Imaging with Adaptive Color Structured Light", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20(5), pp. 470-480, 1998.

[4] C. Chen, Y. Hung, C. Chiang, and J. Wu, "Range Data Acquisition Using Color Structured Lighting and Stereo Vision", Image and Vision Computing, vol. 15(6), pp. 445-456, 1997.

[5] J. Hunt and T. Szymanski, "A fast algorithm for computing longest common subsequences", Comm. ACM, vol. 20(5), pp. 350-353, 1977.

[6] R. Klette, K. Schlüens, and A. Koschan, Computer Vision - Three-dimensional Data from Images. Singapore: Springer, 1998.

[7] J. Forest, J. M. Teixidor, J. Salvi, and E. Cabruja, "A Proposal for Laser Scanners Sub-Pixel Accuracy Peak Detector", Workshop on European Scientific and Industrial Collaboration, pp. 525-532, 2003.

[8] J. Gühring, "Dense 3-D Surface Acquisition by Structured Light using Off-The-Shelf Components", SPIE Photonics West, Videometrics VII, vol. 4309, pp. 22-23, 2001.

[9] P. Garbat, M. Wegiel, and M. Kujawinska, "Real time visualization of 3D Variable in Time Object based on Cloud of Points Data Gathered by Coloured Structure Light Projection System", Proc. of 3DPVT, pp. 623-630, 2004.

[10] E. Horn and N. Kiryati, "Towards Optimal Structured Light Patterns", Image and Vision Computing, vol. 17, pp. 87-97, 1998.

[11] J. Salvi, J. Pagès, and J. Batlle, "Codification Strategies in Structured Light Systems", Pattern Recognition, vol. 37, pp. 827-849, 2004.

[12] K. Sato and S. Inokuchi, "Three-Dimensional Surface Measurement by Space Encoding Range Imaging", Journal of Robotic Systems, vol. 2, pp. 27-39, 1985.

[13] F. M. Wahl, "A Coded Light Approach for Depth Map Acquisition", Proceedings 8th DAGM-Symposium, pp. 12-17, 1986.

[14] J. Davis, R. Ramamoorthi, and S. Rusinkiewicz, "Spacetime Stereo: A Unifying Framework for Depth from Triangulation", Computer Vision and Pattern Recognition, 2003.

[15] E. Horn and N. Kiryati, "Toward Optimal Structured Light Patterns", Int. Conf. on Recent Advances in Three Dimensional Digital Imaging and Modeling, pp. 28-35, 1997.

[16] P. Poulin, M. Stamminger, F. Duranleau, M.-C. Frasson, and G. Drettakis, "Interactive Point-based Modeling of Complex Objects from Images", Graphics Interface, pp. 11-20, 2003.