Segmentation via Graph Theory










Final project by Yaniv Goldyan
040404040




Segmentation via Graph Theory




Introduction


There are many systems that could use segmentation of the main objects in an image, such as traffic or security surveillance systems. In order to build such efficient, sophisticated systems we need a good basic system. In this project I explored the weight function of the graph-theoretic approach to segmentation and implemented a simple, efficient program that segments a color image into its main global objects using that approach.




Approach and Method


To obtain good results I chose the graph-theoretic approach to segmentation, which seemed to be an advanced and promising approach.












Graph representation




First we need to represent the original image as an undirected weighted graph G = (V, E). The nodes of the graph are the pixels of the image, and for each pair of nodes i, j we need to decide on a weight w(i, j) for the edge between them. This is an important question, because we will split groups of nodes apart according to the weights of the edges between them.

The weight of each edge is the core of the computation ahead. We want a small weight on edges between nodes that should not end up in the same segment and a large weight on edges between nodes that should, so we need to choose a weight that reflects the similarity between the two nodes.

Subjectively, we group parts of an image into different objects according to prior knowledge about those objects. This kind of segmentation requires huge databases covering every kind of possible object and is rather impractical. Instead I used the objective approach, which relies on the low-level properties of the image. But which low-level properties can we use when our image is just an array of numbers?

The value of each pixel expresses its intensity, so a good basis for an edge weight is the intensity difference, because we want to separate nodes with a meaningful intensity difference. We therefore give each node a value and derive the weight of the edge between two nodes from the difference between their values.

But is that all? Can we now just group together the nodes with similar intensity and get a good segmentation? Not exactly. If two nodes have the same intensity they do not necessarily belong to the same segmented object, because they could belong to different objects that have parts with the same intensity. Therefore the weight of the edge between two nodes should also be affected by the distance between the nodes.
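To make this concrete, here is a minimal sketch of how such a weight matrix could be built. The report does not give the exact formula, so the Gaussian form below, in which the weight decays with the value difference and with the spatial distance and vanishes beyond a radius r, is borrowed from the usual normalized-cut setup and should be read as an assumption; F stands for the per-pixel value discussed in the next paragraphs, and r, sigmaF and sigmaX are assumed parameters.

    % Sketch (not the report's code): sparse weight matrix for an image whose
    % pixels carry a scalar value F(y,x). Pixels are connected only within a
    % radius r, and the weight decays with both the value difference and the
    % spatial distance. r, sigmaF and sigmaX are assumed parameters.
    function W = build_weights(F, r, sigmaF, sigmaX)
        [h, w] = size(F);
        n = h * w;
        rows = []; cols = []; vals = [];
        for x = 1:w
            for y = 1:h
                i = (x-1)*h + y;                    % linear index of pixel (y,x)
                for dx = -r:r
                    for dy = -r:r
                        xx = x + dx; yy = y + dy;
                        if xx < 1 || xx > w || yy < 1 || yy > h, continue; end
                        j = (xx-1)*h + yy;
                        if j <= i || dx^2 + dy^2 > r^2, continue; end
                        dF  = F(y, x) - F(yy, xx);  % difference of node values
                        wij = exp(-dF^2 / sigmaF^2) * exp(-(dx^2 + dy^2) / sigmaX^2);
                        rows(end+1) = i; cols(end+1) = j; vals(end+1) = wij;
                    end
                end
            end
        end
        W = sparse(rows, cols, vals, n, n);
        W = W + W';                                 % undirected graph: symmetric W
    end

Connecting only nearby pixels keeps W sparse, which is what makes the eigenvector computation described later feasible.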



But there is still another low-level property of the image that we are not taking advantage of: color. While intensity distinguishes different levels of brightness, its ability to distinguish between different colors is limited.

In order to measure differences between colors I used the indexed image representation, which represents a color image I as a pair <X, colormap>, where each entry in the colormap is a different color and the colors are arranged in the colormap in ascending order. X is a two-dimensional matrix that represents the image, and each entry in that matrix is the number of the appropriate entry in the colormap. With this representation we can easily measure the difference between the colors of two pixels by subtracting their values in the X matrix.

Although it may seem that we can now ignore the intensity measure, that would be a mistake, because the number of possible colors is vast and the optimal length of the colormap would have to be huge in order to include a separate entry for every existing color. This representation can therefore cause different colors to be mapped to the same entry in the colormap. To overcome this problem we compute the value of each pixel by adding to its intensity value the color value multiplied by some 'colorful' parameter, which means that the more entries of the colormap an image occupies, the more the colormap value affects the value assigned to the node.
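A minimal sketch of this combined per-pixel value (not the report's exact code; the file name, the colormap size of 132 mentioned in the Conclusions, and the value of the 'colorful' parameter are assumptions):

    % Sketch: intensity plus the indexed-colour value scaled by an assumed
    % 'colorful' parameter. The input file and the parameter value are made up.
    I    = imread('input.png');                          % RGB image
    Id   = double(I);
    gray = 0.2989*Id(:,:,1) + 0.5870*Id(:,:,2) + 0.1140*Id(:,:,3);  % intensity
    [X, map] = rgb2ind(I, 132);                          % indexed pair <X, colormap>
    colorful = 0.5;                                      % assumed weighting parameter
    F = gray + colorful * double(X);                     % value assigned to each node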


Another aspect we can use to evaluate similarity is texture. But in order to find differences in texture one would need to search for texture patterns from some texture 'bank' by comparing each slice of the image to each texture, or to find repeated patterns by comparing slices of the image against each other. Such a search would dramatically harm the efficiency of the implementation and would not be worth the trouble. Instead we can convolve the image with a 5x5 pixel matrix (with values 1/25) and hope to obtain similar values for pixels that are surrounded by similar pixels, which suggests that they have a similar texture. This operation hurts the contours of the pixels and the precision of the previous criteria, so we only add to each pixel value the new convolved value multiplied by some small constant.
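Continuing the sketch above, the texture term can be folded into the node values with a single convolution; the small constant is again an assumption:

    % Sketch: 5x5 averaging convolution added to each node value with a small,
    % assumed weight.
    K = ones(5) / 25;                   % 5x5 averaging kernel (all entries 1/25)
    smoothF = conv2(F, K, 'same');      % local mean of F around every pixel
    textureWeight = 0.1;                % assumed small constant
    F = F + textureWeight * smoothF;    % add the texture cue to each node value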


Segmentation by Graph Cuts


The basic idea for breaking the graph into segments is:

• Delete edges that cross between segments

• Easiest to break edges that have low weight



similar pixels should be in the same segments



dissimilar pixels should be in different segments




Before trying to use cuts for segmentation, let's first define a cut.


Edge cut:

• a set of edges whose removal makes the graph disconnected

• weight of a cut: cut(A, B) = the sum of w(i, j) over all edges with i in A and j in B


It is very tempting to use the minimum cut here in order to decide which edges to delete. Finding a minimum cut is a well studied problem, and there exist efficient algorithms to solve it, but the minimum cut is not always the best cut: it tends to favor cutting off small, isolated groups of nodes. So instead of plain cuts we will use normalized cuts.



Normalized Cut



• a plain cut penalizes large segments (so the minimum cut tends to cut off small, isolated groups of nodes)

• fix by normalizing for the size of the segments:

  Ncut(A, B) = cut(A, B)/Volume(A) + cut(A, B)/Volume(B)

• where Volume(A) = the sum of the weights of all edges that touch A
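As a small illustration (my own sketch, not part of the report), the Ncut value of a given bipartition can be computed directly from the weight matrix W; this is also the quantity that the stopping threshold of the grouping algorithm below is compared against.

    % Sketch: Ncut value of a bipartition. inA is a logical vector marking the
    % nodes that belong to segment A; W is the (sparse) weight matrix.
    function val = ncut_value(W, inA)
        cutAB = full(sum(sum(W(inA, ~inA))));   % total weight crossing the cut
        volA  = full(sum(sum(W(inA, :))));      % weights of all edges touching A
        volB  = full(sum(sum(W(~inA, :))));     % weights of all edges touching B
        val   = cutAB/volA + cutAB/volB;
    end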



That is very nice indeed, but unfortunately finding the minimum Ncut is NP-hard, so we need to generalize (relax) the problem somehow in order to get a good approximation of the minimum Ncut.



Matrix representation of Ncut:

W is the weight matrix: W(i, j) = w(i, j).

D is a diagonal matrix holding, for each node i, the sum of the weights of its edges: D(i, i) = sum over j of W(i, j).

Now we can write the normalized cut as

  Ncut(A, B) = y'(D - W)y / (y'Dy)

where y is an indicator vector for the partition, constrained to take one of two discrete values and to satisfy y'D1 = 0. So this is the problem in its matrix representation:

  minimize y'(D - W)y / (y'Dy) over the discrete indicator vectors y.

The solution derived from the "generalization" of the problem (letting y take real values) gives us the generalized eigensystem

  (D - W)y = λDy

which can be rewritten as

  D^(-1/2) (D - W) D^(-1/2) z = λz,  with z = D^(1/2) y.

The solution of this relaxed version of the problem corresponds to the eigenvector of the second smallest eigenvalue (y1).


Now we just need to put it all together with a grouping algorithm.



Recursive Ncut grouping algorithm



1. Given an image, set up a weighted graph G = (V, E) and set the weight on the edge connecting two nodes to be a measure of the similarity between the two nodes.

2. Compute W and D.

3. Solve the generalized eigensystem (D - W)y = λDy for the eigenvectors with the smallest eigenvalues.

4. Use the eigenvector of the second smallest eigenvalue to bipartition the graph.

5. Decide if the current partition should be subdivided, and recursively repartition the segmented parts if necessary.


In step 5 we decide whether the current partition should be subdivided by checking the stability of the cut and by making sure the Ncut value that was found is below a pre-specified threshold. A sketch of the core of steps 2-4 follows.
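The following sketch of steps 2-4 is my own, not the report's MATLAB code. It assumes the sparse weight matrix W built as in the earlier sketch for an h-by-w image; the tiny numeric shift passed to eigs is only a convenience that keeps its internal factorization non-singular.

    % Sketch of steps 2-4: build D, solve (D - W)y = lambda*D*y, and bipartition
    % the graph with the eigenvector of the second smallest eigenvalue.
    n = size(W, 1);
    d = full(sum(W, 2));                % degree (sum of weights) of every node
    D = spdiags(d, 0, n, n);            % diagonal matrix D
    [V, E] = eigs(D - W, D, 2, 1e-6);   % two generalized eigenvalues closest to zero
    [vals, order] = sort(diag(E));
    y = V(:, order(2));                 % eigenvector of the second smallest eigenvalue
    % Simplest bipartition: split by the sign of y. Shi and Malik instead search
    % over several splitting points and keep the one with the lowest Ncut value.
    segA = reshape(y > 0, h, w);        % logical mask of one segment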


All the steps were implemented in MATLAB (version 6.5), and a MATLAB GUI that performs all of the above is the final result.


Results


"one picture is worth more than a thousand words". so here are some results
pictures.







Simple objects first:

Now a more realistic object:

Black and white images:

Face images:

An object that is similar to the background:

More complex images:

An image of a moving object:




Conclusions


As you can see, the program is far from being perfect. Nevertheless, it produces good results for simple objects, whether their background is uniform or not. We can attribute this success to the good weight assignment of the associated graph and to the Ncut algorithm, which chooses to divide only non-similar regions that are significant enough.

The application is also very convenient, simple and user friendly, thanks to the MATLAB GUI that wraps the program and provides this simplicity.

Regarding efficiency, the program's running time on a 2.5 GHz machine is up to 1.3 minutes, depending on the image resolution. Since the computation time of solving (D - W)y = λDy is O(n^3), I performed preprocessing on the original image that resizes the original resolution down to a certain threshold. This, of course, comes at the expense of the accuracy of the image processing. This is a very meaningful factor that could be improved in the future by simplifying the computation of (D - W)y = λDy (with the Lanczos method, for example).
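For completeness, a sketch of the resolution-limiting preprocessing mentioned above; the threshold value is an assumption, not the one used in the project, and imresize belongs to the Image Processing Toolbox.

    % Sketch: shrink the image before building the graph if it is too large.
    maxDim = 120;                                   % assumed resolution threshold
    scale  = maxDim / max(size(I, 1), size(I, 2));
    if scale < 1
        I = imresize(I, scale);
    end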


Using the indexed representation to distinguish between colors is effective only for nodes with very different colors. This can be attributed to the MATLAB default colormap, whose size is 132; adding more kinds of colors to the map, meaning using a bigger map, might improve the results.

Another possible improvement is finding a more efficient computation of texture similarity, which could improve the results by improving the similarity measures (the weight function).

All in all, the program produces satisfying results considering the limitation of time, and it could certainly be a good starting point for anyone who is interested in segmentation and its implementation.




References


J. Shi and J. Malik, "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.

J. Shi and J. Malik, "Normalized Cuts and Image Segmentation", Proc. IEEE Conf. on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, June 1997.

M. Fiedler, "A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory", Czech. Math. J. 25, pp. 619-637, 1975.

Z. Wu and R. Leahy, "An Optimal Graph Theoretic Approach to Data Clustering: Theory and Its Application to Image Segmentation", IEEE Trans. Pattern Analysis and Machine Intelligence, 15(11), pp. 1101-1113, 1993.