Research of Monocular 3D Measurement Based on Support Vector Machine

YIN Aijun, Instructor; QIN Shuren, Professor
Test Center, College of Mechanical Engineering, Chongqing University, Chongqing 400044, China

ABSTRACT: Building on research into the Shape From Shading (SFS) problem and the Support Vector Machine (SVM), this paper proposes a monocular 3D measurement method based on the SVM. The paper analyzes the theoretical basis and feasibility of the method and examines its key problems, such as determining the input parameters of the SVM and choosing or constructing the kernel function. Experiments show that, compared with traditional SFS methods, the proposed method can directly and accurately acquire the height of an object and is less sensitive to the environment, so there is no need to make many assumptions about the imaging conditions. In addition, the method avoids the image-matching difficulty of binocular-vision 3D measurement and simplifies the system structure. Finally, future improvements of the method are discussed.

KEYWORDS: SVM, SFS, depth prediction, monocular 3D measurement

1 INTRODUCTION

Currently, 3D measurement based on machine vision is widely researched; in western countries it has been planned as one of the strategic research projects for the next 15 years. Research and application of this technique have so far emphasized the binocular (multiple-camera) vision field. In 1970 Horn presented the method of recovering Shape From Shading (SFS), in which parameters such as the height of points on a surface are recovered from the variation of brightness of the object surface in a single image [1][2]; as a result, monocular-vision-based shape recovery became widely researched. Later, Pentland, Tsai et al. presented their own methods on this problem. However, in terms of Marr's theory, SFS is recognized as an ill-posed intermediate-vision problem, so traditional methods are all constrained by conditions such as a smooth and continuous surface [3]. Moreover, traditional SFS methods still cannot obtain the height of an object directly, but only an approximate value.

The support vector machine (SVM) is a new machine learning method based on statistical learning theory (SLT). It can solve difficult problems encountered in machine learning with small samples, nonlinearity and high-dimensional spaces, and it overcomes some essential problems of neural networks, which has driven the SVM into wide application [4].

This paper shows, from research on the SFS problem and the SVM, that 3D measurement can be realized by predicting the depth of a monocular image with an SVM, in combination with the 2D image; hence no extra conditions are needed, which makes the method widely applicable.

2 CRITICAL POINTS AND TYPICAL ALGORITHMS OF SFS

2.1 Reflectance model and normalized model

In ideal imaging conditions the grey level of the image satisfies the reflectance map equation [5][6]:

E(x, y) = R(p(x, y), q(x, y)) = \frac{\cos\sigma + p\cos\tau\sin\sigma + q\sin\tau\sin\sigma}{\sqrt{1 + p^2 + q^2}} = \frac{1 + p\,p_0 + q\,q_0}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_0^2 + q_0^2}}　　（1）

where E(x, y) represents the brightness (grey level) at pixel (x, y) of the normalized image, R(p(x, y), q(x, y)) is the reflectance function, and (p, q) = (\partial z/\partial x, \partial z/\partial y) is the surface gradient, where the height of the object surface is represented by z = z(x, y). σ and τ represent the surface slant and tilt respectively, while p_0 = \cos\tau\sin\sigma/\cos\sigma and q_0 = \sin\tau\sin\sigma/\cos\sigma.
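As an illustration of equation (1), the following sketch (my own, not from the paper) evaluates the reflectance map for a given surface gradient and light direction; a patch whose gradient matches the light direction receives maximal brightness.

```python
import math

def reflectance(p, q, slant, tilt):
    """Reflectance map R(p, q) of equation (1) for a surface lit from
    slant/tilt angles (sigma, tau), both in radians."""
    p0 = math.cos(tilt) * math.sin(slant) / math.cos(slant)
    q0 = math.sin(tilt) * math.sin(slant) / math.cos(slant)
    num = 1 + p * p0 + q * q0
    den = math.sqrt(1 + p**2 + q**2) * math.sqrt(1 + p0**2 + q0**2)
    return num / den

# A flat patch facing the camera (p = q = 0) under frontal light
# (slant = 0) has maximal brightness E = 1.
print(reflectance(0.0, 0.0, 0.0, 0.0))  # 1.0
```

The same maximum is reached whenever (p, q) = (p_0, q_0), i.e. the surface normal points at the light source.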

The SFS problem expressed by equation (1) is ill-posed, hence traditional algorithms introduce restrictive conditions, such as assuming the object surface is smooth, i.e. the height function of the surface is continuous. With this assumption, the boundary conditions of the object, and the reflectance function, a normalized model of the SFS problem is built. According to the different ways of building the normalized model, existing SFS algorithms can be divided into minimization methods, evolution methods, local methods and linearization methods.

In addition, many researchers have studied the reflectance model in depth, presented other kinds of reflectance models, and obtained corresponding SFS algorithms.

2.2 Typical SFS algorithms

(1) Minimization method. This method expresses both the reflectance equation (1) and the smooth-surface model as energy functions, and then combines them into a functional-extremum problem; the minimum of this functional is the solution of the SFS problem. The method includes the algorithms presented by Horn, Zheng, Chellappa, Lee and Kuo, etc.

For example, Horn converts the brightness equation (1) into an error function J_1, which is then combined with the smooth-surface model J_2 and an integrability constraint J_3 to form a functional-minimization problem, i.e.

J = J_1 + \lambda J_2 + \mu J_3 = \iint_\Omega \Big[ \big(E(x, y) - R(p(x, y), q(x, y))\big)^2 + \lambda\,(p_x^2 + p_y^2 + q_x^2 + q_y^2) + \mu\,\big((z_x - p)^2 + (z_y - q)^2\big) \Big]\, dx\, dy　　（2）

where p_x, p_y and q_x, q_y represent the partial derivatives of the functions p and q with respect to x and y respectively, z_x, z_y represent the partial derivatives of the function z with respect to x and y, and λ and μ are weighting coefficients.

(2) Evolution method. This method was presented by Oliensis, Bruckstein, Osher et al.; it treats equation (1) as a Hamiltonian system, so the problem turns into solving a Hamilton-Jacobi equation:

H = -E(x, y) + \frac{1 + p\,p_0 + q\,q_0}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_0^2 + q_0^2}} = 0　　（3）

When the boundary condition is given, the partial differential equation (3) becomes a Dirichlet (boundary-value) problem, which can be solved by characteristic strips and correlative analysis. The critical step of these SFS algorithms is to find one or more specific points in the image that uniquely determine the shape, and then to propagate the surface solution outward from these points.

(3) Local analysis method. The minimization and evolution methods employ iterative or evolutionary processes to expand the boundary or initial conditions to the whole object surface; hence the solution process covers the whole surface and cannot recover the local shape of the surface individually. The local analysis method, by contrast, combines the reflectance model with an assumed local surface shape to construct a linear partial differential equation in the local-shape parameters, and uses boundary conditions to obtain the unique solution of the equation. In these methods the local shape of the object is assumed to be spherical, as Lee and Rosenfeld did in their local analysis method.

(4) Linearization method. The linearization method linearizes the reflectance function in p and q, converting the original nonlinear SFS problem into a linear one that can be solved. Moreover, since p and q can be approximated in terms of the height z, Tsai and Shah et al. convert the linearization in p and q directly into a linearization in the height z. The procedure is to expand the reflectance function as a Taylor series, keep the constant and first-order terms in p and q, and eliminate the nonlinear terms to obtain a linear expression for R, as shown in the works of Pentland, Tsai and Shah.

3 THE PRINCIPLE AND ALGORITHM OF THE SUPPORT VECTOR MACHINE

The SVM is a statistics-based learning method that approximates the inductive principle of structural risk minimization; its theoretical foundation is statistical learning theory. Assume a training sample set (x_i, y_i), i = 1, 2, \dots, n, x_i \in R^d, y_i \in \{-1, 1\}, where n is the number of samples and d is the dimension. Under the linearly separable condition there may be a hyperplane that divides the samples completely into two classes. Assume that the hyperplane is

w \cdot x + b = 0　　（4）

which satisfies

y_i[(w \cdot x_i) + b] \ge 1, \quad i = 1, 2, \dots, n

where '·' denotes the dot product of vectors. What the SVM does is find a vector w capable of constructing an optimal hyperplane, which divides the training data without error. The optimal hyperplane is the one for which the distance to the closest training vectors is largest; the sample points closest to the hyperplane are the support vectors.

Finding the hyperplane and the support vectors means finding the solution (w, b) of the following optimization problem:

\min \frac{1}{2}\|w\|^2, \quad \text{s.t.}\ y_i[(w \cdot x_i) + b] \ge 1,\ i = 1, 2, \dots, n　　（5）

This quadratic programming problem has a unique minimum; converting it into the dual form with the Lagrange multiplier method gives

\max_a\ \sum_{i=1}^{n} a_i - \frac{1}{2}\sum_{i,j=1}^{n} a_i a_j y_i y_j (x_i \cdot x_j), \quad \text{s.t.}\ a_i \ge 0,\ \sum_{i=1}^{n} a_i y_i = 0, \qquad w = \sum_{i=1}^{n} a_i y_i x_i　　（6）

If a_i > 0 then the corresponding x_i is a support vector, and the optimal hyperplane is \sum_{x_i \in sv} a_i y_i (x_i \cdot x) + b = 0, where sv is the set of support vectors. The optimal separating function is then obtained as

f(x) = \operatorname{sgn}\left[\sum_{i=1}^{n} a_i y_i (x_i \cdot x) + b\right]　　（7）

where sgn(·) is the sign function.

However, in practical use the samples are often nonlinear; in that case the sample x can be mapped to a high-dimensional feature space H (a Hilbert space), i.e.

\phi: R^d \to H, \qquad x \to \phi(x) = (\phi_1(x), \phi_2(x), \dots)

to make it linearly separable; the input vector x is then replaced by the feature vector φ(x), and the hyperplane is constructed in the high-dimensional space H to realize the separation. This nonlinear mapping is represented by a kernel function. According to functional theory, as long as a kernel function K(x_i, x_j) satisfies the Mercer condition, it corresponds to an inner product in some transformed space, i.e. K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j); so only inner-product calculations are needed in the high-dimensional space, and equation (7) becomes

f(x) = \operatorname{sgn}[(w \cdot \phi(x)) + b] = \operatorname{sgn}\left[\sum_{i=1}^{n} a_i y_i K(x_i, x) + b\right] = \operatorname{sgn}\left[\sum_{i=1}^{n} a_i y_i (\phi(x_i) \cdot \phi(x)) + b\right]　　（8）

Commonly used kernel functions include the linear kernel, the polynomial kernel and the Gaussian radial basis kernel, etc. [7][8].
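The classification scheme of equations (4)-(8) can be tried out with an off-the-shelf SVM implementation; the following sketch uses scikit-learn (my own choice, not mentioned in the paper) to fit the three common kernels on toy two-class data.

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# The three kernels named in the text: linear, polynomial (with +1 offset,
# matching [(x.y)+1]^p), and Gaussian RBF.
for kernel, params in [("linear", {}),
                       ("poly", {"degree": 2, "coef0": 1}),
                       ("rbf", {"gamma": 0.5})]:
    clf = SVC(kernel=kernel, C=1.0, **params).fit(X, y)
    # decision_function gives the signed value inside eq. (8) before
    # the sign is taken; predict applies sgn().
    print(kernel, clf.predict([[-2, -2], [2, 2]]))
```

Each classifier assigns the cluster centres to their respective classes; the support vectors found are available via `clf.support_vectors_`.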

4 SVM-BASED MONOCULAR 3D MEASUREMENT

4.1 The application of SVM to 3D measurement

Current research on and application of the SVM concentrate on using it as a classifier, for example in pattern recognition and function approximation. Equations (7) and (8) indicate that the distance between a measured sample and the optimal hyperplane is discretized via the sign function; if this distance is taken directly as the output, then from equation (8) there is

f(x) = \sum_{i=1}^{n} a_i y_i (\phi(x_i) \cdot \phi(x)) + b　　（9）

whose output is a continuous value greater than or equal to zero.

This paper mainly researches recovering the height of the corresponding point from the grey level of a monocular image. Since the image grey level is affected jointly by the shape of the object, its colour, the position of the light source, etc., and these parameters are related to the grey level in a nonlinear and complicated way, traditional SFS algorithms introduce various assumptions about the imaging conditions, such as an infinitely distant light source and a smooth surface. This paper instead uses an image of a standard sphere to train a nonlinear SVM. The support vectors of the trained SVM can generally express the nonlinear relationship between the grey level and the factors affecting it; thus the trained SVM can be used to predict the depth (height) of an object imaged under the same imaging conditions, i.e. the wanted depth is the output of the function expressed by equation (9).
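The idea of training on a standard sphere and reading depth from a continuous SVM output, as in equation (9), is closely related to support vector regression. The following sketch is my own stand-in illustration: synthetic sphere pixels with a simplistic hypothetical shading (grey = z/r) are used to train an SVR that maps pixel features to height.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "standard sphere" training data: for pixels (x, y) inside a
# sphere of radius r, the height is z = sqrt(r^2 - x^2 - y^2).  As a
# stand-in for the image grey level we use a hypothetical frontal-light
# shading grey = z / r.
r = 10.0
xs, ys = np.meshgrid(np.linspace(-7, 7, 30), np.linspace(-7, 7, 30))
z = np.sqrt(r**2 - xs**2 - ys**2)
grey = z / r                      # hypothetical brightness in [0, 1]

features = np.column_stack([xs.ravel(), ys.ravel(), grey.ravel()])
heights = z.ravel()

# Train a support vector regressor mapping (x', y', grey) -> height,
# in the spirit of the continuous output of eq. (9).
model = SVR(kernel="rbf", C=100.0, gamma=0.1).fit(features, heights)

# Predict the height at the sphere's centre pixel (should be near r = 10).
print(model.predict([[0.0, 0.0, 1.0]]))
```

Under the paper's actual conditions the training image of the sphere would supply the grey levels, rather than the analytic shading assumed here.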

The other two-dimensional information of the surface can be obtained directly from the 2D image; the 2D information in the plane perpendicular to the camera axis may be regarded as simply linear. However, to improve measurement accuracy, image refinement should be performed.

Analysis shows that when the SVM is used to predict depth, the environmental requirements are not strict: it is only required that the reconstructed object be imaged under conditions similar to those of the standard sphere, which in practice is easily ensured, and that the two objects have similar surface features. The image of a standard sphere is selected as the training sample because the sphere's surface contains all imaging directions and a continuous range of heights, which enables the SVM to discriminate all values along the height of the sphere.

4.2 Selection of input characteristic parameters of SVM

SVM theory and the above analysis show that whether the SVM can effectively predict the depth of an image depends greatly on which features are taken as its input vector. The feature values should be as uncorrelated with each other as possible, which enables the SVM to obtain easily from the training samples the optimal hypersurface with the best separating (predicting) ability. Besides, to some degree these feature values should be correlated with the depth (height).

[Fig. 1 (diagram): the features of the light source (position, brightness), the object shape (x, y, z), the surface colour of the object g(r, g, b), the surface features of the object, and the camera (determining the viewpoint and other imaging parameters) jointly produce the image (x', y', g'(r', g', b')).]

Fig. 1 Factors affecting imaging

The imaging system indicates that the following parameters determine the image features: the imaging system itself, with parameters such as position; the features of the light source, such as position and brightness; and the object features, including position, shape, colour and surface features. Their relationship is shown in fig. 1, in which (x, y, z) represents the 3D coordinates of the object surface, g(r, g, b) represents the colour of the object surface, and (x', y', g'(r', g', b')) represents the coordinates of the image pixels and the corresponding pixel colour values. Therefore the difference between different images of the object is expressed by (x', y', g'(r', g', b')) or by other image parameters derived from them, e.g. substituting the grey value for the three primary colour values, or converting the RGB colour space to the Lab (lightness/chrominance) colour space or HSV space, etc.

The above analysis shows that the precondition for prediction is that the training samples have the same imaging conditions as the measured samples, i.e. the following assumptions hold in the practical case:

1) When imaging the training sphere and the measured object, both the camera position and the light-source properties such as position and brightness are unchanged.

2) The surface reflectance model of the training sphere is similar to that of the measured object.

Therefore the problem turns into predicting the coordinates (x, y, z) of the object surface from (x', y', g'(r', g', b')). Because (x, y) can be obtained directly from (x', y'), only the value of z remains unknown. It can be predicted from the output of the SVM separating function, and the predicted value is exactly the actual height of the object, since the other influencing parameters are constant and are expressed through the support vectors. Experimental comparisons have been made in various colour spaces, and herein (x', y', g'(r', g', b')) is taken as the input vector of the SVM.
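Assembling the per-pixel input vectors described above from an RGB image array can be sketched as follows (my own illustration; the array layout is an assumption):

```python
import numpy as np

def image_to_feature_vectors(rgb):
    """Flatten an H x W x 3 RGB image into the (x', y', r', g', b')
    input vectors described in the text, one row per pixel."""
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.column_stack([xs.ravel(), ys.ravel(), rgb.reshape(-1, 3)])

# A tiny 2 x 2 image: each row is one pixel's (x', y', r', g', b').
img = np.arange(12).reshape(2, 2, 3)
print(image_to_feature_vectors(img))
```

Each row of the result can be fed to the SVM directly, or first converted to another colour space (grey, Lab, HSV) as discussed above.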

4.3 Selection and Construction of the SVM Kernel Function

SVM theory shows that the performance of a nonlinear SVM is closely related to its kernel function; in SVM applications the main work is to select or construct an appropriate kernel function. Herein the following commonly used kernel functions are compared experimentally:

1) Linear kernel function

K(x, y) = x \cdot y

2) Polynomial kernel function

K(x, y) = [(x \cdot y) + 1]^p

where p is an input parameter of the kernel that must be predetermined.

3) Gaussian kernel function (radial basis kernel)

K(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right)

where σ is an input parameter that must be predetermined. Besides these, other kernels include the two-layer neural-network kernel function, etc. Experiments indicate that the linear kernel function and the polynomial kernel function are efficient in prediction.

Since the process from object to image is nonlinear and very complicated, the above kernel functions are all rough approximations to the nonlinear problem. Therefore, if a kernel function can be constructed that satisfies the requirements of this problem, the prediction accuracy may be greatly improved.

Following Amari and Wu's method [8], Riemannian geometric analysis of the kernel function can be employed to gradually modify an existing kernel with experimental data so that it fits the practical kernel well; e.g. for the Gaussian kernel function, assume

c(x) = \sum_{x_i \in sv} h_i\, e^{-\|x - x_i\|^2 / (2\tau^2)}　　（10）

where \tau \approx \sigma / n is a parameter, h_i is a weight coefficient and x_i is a support vector. Hence for the Gaussian kernel function K(x, z), according to the Mercer formula there is another kernel function

\tilde{K}(x, z) = c(x)\, K(x, z)\, c(z)　　（11）

thus the improved kernel function \tilde{K}(x, z) can be used as the SVM's kernel function to improve prediction accuracy and processing speed.
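The conformal modification of equations (10) and (11) can be sketched as a plain kernel callable (my own illustration; the support vectors, weights and parameter values are placeholders):

```python
import numpy as np

def make_modified_kernel(support_vectors, weights, sigma, tau):
    """Build the conformally modified Gaussian kernel of eqs. (10)-(11):
    K~(x, z) = c(x) K(x, z) c(z), with c(x) a sum of Gaussian bumps
    centred on the support vectors (after Amari and Wu [8])."""
    sv = np.asarray(support_vectors, dtype=float)
    h = np.asarray(weights, dtype=float)

    def c(x):
        # Eq. (10): conformal factor built from the support vectors.
        d2 = np.sum((sv - np.asarray(x, dtype=float)) ** 2, axis=1)
        return float(np.sum(h * np.exp(-d2 / (2 * tau**2))))

    def K(x, z):
        # Plain Gaussian kernel K(x, z).
        d2 = np.sum((np.asarray(x, dtype=float) - np.asarray(z, dtype=float)) ** 2)
        return float(np.exp(-d2 / (2 * sigma**2)))

    def K_mod(x, z):
        # Eq. (11): K~(x, z) = c(x) K(x, z) c(z).
        return c(x) * K(x, z) * c(z)

    return K_mod

# Example: two placeholder support vectors with unit weights.
K_mod = make_modified_kernel([[0.0, 0.0], [1.0, 1.0]], [1.0, 1.0],
                             sigma=1.0, tau=0.5)
print(K_mod([0.0, 0.0], [0.0, 0.0]))
```

Because c(x) is large near the support vectors, the modified kernel enlarges the effective resolution around the decision boundary, which is the motivation of [8].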

In addition, a kernel function can be constructed from a reflectance model such as the Lambertian surface model, or from an illumination model such as the Phong illumination model. Kernel functions constructed from these models can approximate the nonlinear relation more accurately.

5. EXPERIMENTAL COMPARISON

First the paper uses equations (12) and (13), i.e.

x^2 + y^2 + z^2 = 100　　（12）

z = \frac{\sin\sqrt{x^2 + y^2}}{\sqrt{x^2 + y^2}}　　（13）

where x \in [-10, 10], y \in [-10, 10], to construct two 3D curved surfaces of the same colour. For convenient inspection their heights are shown in different colours, as in fig. 2(a) and 2(b). Two images of the surfaces are then taken with the camera and light source both in the direction (1, 0, 0), as shown in fig. 2(c) and 2(d). Fig. 2(c) is taken as the SVM's training image, whose input vector is (x', y', r', g', b'), where (x', y') represents the position in the image plane and (r', g', b') represents the three primary colour components at the corresponding pixel positions; fig. 2(c) and 2(d) are then used as measured samples to obtain the depth (height) and finally to recover the 3D shape.
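The two test surfaces can be sampled numerically; the following sketch (my own illustration; the grid resolution is arbitrary, equation (12) is taken as its visible upper hemisphere, and equation (13) as the sombrero surface) generates both height maps with NumPy.

```python
import numpy as np

# Sample grid over x, y in [-10, 10], as in the experiment.
x, y = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
r2 = x**2 + y**2

# Eq. (12): the upper half of the sphere x^2 + y^2 + z^2 = 100
# (points outside the sphere's footprint are left undefined).
z_sphere = np.where(r2 <= 100, np.sqrt(np.clip(100 - r2, 0, None)), np.nan)

# Eq. (13): z = sin(sqrt(x^2 + y^2)) / sqrt(x^2 + y^2), with the
# removable singularity at the origin filled by its limit value 1.
r = np.sqrt(r2)
with np.errstate(invalid="ignore", divide="ignore"):
    z_sinc = np.where(r > 0, np.sin(r) / r, 1.0)

print(z_sphere.shape, float(np.nanmax(z_sphere)), float(z_sinc.max()))
```

Rendering these height maps under a light from direction (1, 0, 0) would then give the training and test images of fig. 2.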

Fig. 3 and fig. 4 show respectively the 3D surfaces recovered with the linear kernel function expressed by equation (14) and the polynomial kernel function expressed by equation (15), i.e.

K(x, y) = 2(x \cdot y) + 1　　（14）

K(x, y) = [(x \cdot y) + 1]^2　　（15）

Fig. 2 The measured curved surfaces (a), (b) and their plane images (c), (d) in the (1, 0, 0) direction

Fig. 3 Surfaces recovered with the linear kernel function: (a), (b)

Fig. 4 Surfaces recovered with the polynomial kernel function: (a), (b)

Fig. 5 Surfaces recovered with the SFS algorithm presented by M. Bichsel and A. P. Pentland: (a), (b)

Fig. 5 shows the 3D surfaces obtained by the SFS algorithm presented by M. Bichsel and A. P. Pentland. For convenient inspection, all the rebuilt surfaces are drawn as line graphs with different colours indicating height. The experimental results indicate that, compared with the traditional SFS algorithm, the surface recovered by the SVM-based 3D measurement presented in this paper is the surface of the actual object: the height obtained is the actual height of the object, and the relative shape is closer to the object's shape.

The surface recovered with the linear kernel function is more accurate at the two extremes of height, while large errors occur at mid-height. The surface obtained with the polynomial kernel function has a shape closer to that of the actual object but larger errors in magnitude. In addition, both the presented method and the traditional SFS method have large errors at the peak of the height, caused by the large highlight region in the image.

6. CONCLUSION

This paper presents an SVM-based 3D measurement method. Experiments demonstrate that, compared with the traditional SFS method, it can acquire the height of an object directly and more accurately, is less influenced by the environment, and needs no excessive constraints on the imaging conditions. In addition, this method avoids the image-matching difficulty of binocular (multiple) vision measurement. A measuring system founded on this method features a simple structure, high efficiency and wide applicability.

However, with the presented method the recovery accuracy is influenced by the kernel function, which constrains its further improvement. Besides, if the influence of the original colour of the object surface can be eliminated, the applicable range of this method will be expanded.

7 REFERENCES

[1] B. K. P. Horn. Hill shading and the reflectance map. Image Understanding Workshop, Palo Alto, CA, 1979.
[2] B. K. P. Horn. Height and gradient from shading. International Journal of Computer Vision, 1990, 5(1): 37-75.
[3] Pentland A. Shape information from shading: A theory about human perception. In: Proc. Intl. Conf. on Computer Vision, Tampa, 1988: 404-413.
[4] Vapnik V. The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1995.
[5] Szeliski R. Fast shape from shading. CVGIP: Image Understanding, 1991, 53(2): 129-153.
[6] Bichsel M, Pentland A P. A simple algorithm for shape from shading. Proceedings CVPR '92, 1992: 459-465.
[7] Cortes C, Vapnik V. Support vector networks. Machine Learning, 1995(20): 273-295.
[8] Amari S, Wu S. Improving support vector machine classifiers by modifying kernel functions. Neural Networks, 1999(12): 783-789.
