Color Space Analysis and Color Image Segmentation
Author: David X. Zhong
School of Electrical and Information Engineering,
The University of Sydney
Email: dzhong@ee.usyd.edu.au
Supervisor: Prof. Hong Yan
Abstract: This paper describes two classic-style methods to analyze and segment the color space. The RGB-space method includes color space pyramiding, low-pass filtering, 3-D object labeling and property calculation to acquire a proper number of colors and a good initial estimate of cluster center positions; the fuzzy c-means algorithm can then be used to optimally cluster the color space distribution points. The second method introduced is a less complicated YIQ~YθS system, which falls into the category of histogram thresholding, to achieve color segmentation.
1. Introduction
The problem that color image processing faces is a five-dimensional (5-D) problem: 2-D is geometrical and 3-D is chromatic. To extract information from a color image, it has to be decomposed into identifiable items using color image processing techniques plus gray image processing techniques. Color segmentation is an important step in the decomposition of a color image into less complicated component images.
At the hardware level, color images are usually captured, stored and displayed using elementary R, G, B component images. The color attributes of a color image pixel can be represented as a 3-D vector in the color space, as shown in Figure 1.
The color space distribution h(r,g,b) of a color image can be spread out, and can also be condensed. Its diversity and complexity are well acknowledged. The only constraint is that the visible color space volume is limited. For simple colors it tends to be condensed in a few locations, but the condensation itself may still be broadly spread. Because of its 3-D nature, it is not easy to visualize a color space distribution when we are analyzing the colors.
Color image segmentation methods can be roughly categorized as (a) histogram thresholding [1] [2]; (b) color space clustering [3] [4]; (c) edge detection and region extraction [5]; (d) Markov random field (MRF) and Gibbs random field (GRF) [6] [7] [8]; (e) neural network and learning theory [9] [10] [11]; (f) various combinations of the above techniques. Often color segmentation comprises two stages: coarse segmentation and refinement. It is interesting to look in detail at how these algorithms find the number of colors C. Ohta et al (1980) [1] and Shafarenko et al (1998) [2] decide C by major histogram peaks. Healey (1992) [5] extracts the spatial region to determine the number of colors. Lim & Lee (1990) [3], Huang et al (1992) [6] and Liu & Yang (1994) [7] decide C by using an SSF (scale space filter) in coarse segmentation. Xie & Beni (1991) [4] use a compactness and separation validity measure. Chang et al (1994) [8] divide the color space into 8 cubes, then eliminate unpopulated cubes, resulting in C < 8. Wu et al (1994) [9] and Littmann & Ritter (1997) [11] use only two clusters. Uchiyama & Arbib (1994) [10] pre-select C to be 8.
Figure 1: 3-D color space (the R, G, B axes, with the Black, White, Cyan, Magenta and Yellow corners marked, together with the Y, I, Q axes).
Basically, color segmentation differs from color quantization in that it is capable of finding the number of colors unsupervisedly or dynamically. However, this very question has not been solved satisfactorily. The existing color space clustering algorithms make use of various validity measures, such as entropy and the partition coefficient [12], and repeat the fuzzy c-means algorithm for different numbers of colors to find the optimal C. These validity measures may not always work. The best validity measure is probably Xie & Beni's compactness and separability [4], but it is still time consuming.
This paper describes two classic-style methods to analyze and segment the color space. The RGB-space method includes color space pyramiding, low-pass filtering, 3-D object labeling and property calculation. The second method introduced is a less complicated YIQ~YθS system, which falls into the category of histogram thresholding.
2. RGB Space Segmentation
2.1. Pyramiding
The color space is usually of size 256×256×256. The dots are usually disjoint from one another. The total number of dots mapped from an input image to the color space is equal to the size of the input image, i.e. Nr×Nc, where Nr is the row number and Nc is the column number of the input image. This total number of dots can be considered as the total weight of the color space objects.
In order to make analysis easier, a pyramid algorithm [13] is used to reduce the size of the 3-D color space. The pyramid algorithm adds, throughout the entire color space, the values of every eight voxels in a 2×2×2 cube into one voxel, thus reducing the size of the color space by a factor of eight, to 128×128×128. The reduction is iterated three times, effectively reducing the color space to the size of 32×32×32. The total weight of the objects in the color space, however, is not changed; it is still equal to Nr×Nc, since the pyramiding does not throw away any weight contained in the original color space.
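The reduction step above can be sketched in a few lines of numpy (the 8×8 random stand-in image, variable names and use of numpy are illustrative assumptions, not from the paper):

```python
import numpy as np

def pyramid_reduce(space):
    """Sum every 2x2x2 cube of voxels into one voxel, halving each dimension."""
    n = space.shape[0]
    return space.reshape(n // 2, 2, n // 2, 2, n // 2, 2).sum(axis=(1, 3, 5))

# Map a tiny stand-in image into a 256^3 color-space histogram.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8, 3))          # Nr = Nc = 8, RGB
space = np.zeros((256, 256, 256), dtype=np.uint32)
np.add.at(space, (image[..., 0], image[..., 1], image[..., 2]), 1)

# Three reductions: 256 -> 128 -> 64 -> 32; total weight Nr*Nc is preserved.
for _ in range(3):
    space = pyramid_reduce(space)
```

The reshape-and-sum step performs the 2×2×2 voxel summation without explicit loops, so the weight-preservation property is immediate.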
2.2. Low-Pass Filtering
The number of objects in the 32×32×32 color space is still very large. In order to make the dots join into blocks, a low-pass filter is applied to blur the distribution into connected component blocks. The low-pass filter averages each voxel's 5×5×5 neighborhood volume into a new 3-D buffer; then, after every voxel has been averaged, it assigns the new 3-D buffer back to the 32×32×32 color space. Because of this averaging, narrow gaps between heavy objects in the color space are no longer empty; rather, they are filled with certain average values, or clouds. Thus more isolated objects become joined. This low-pass filter is applied twice, and the number of objects is dramatically reduced from hundreds to fewer than 7. Figure 2 illustrates the process of the pyramid reduction and the low-pass filtering on 2-D sample data, which is a slice of a 3-D color space distribution. In Figure 2 there are four rows of resulting images. The first row is the pyramid reduction without any filtering. The second row is the result of applying the filter once. The third row is the result of applying the filter twice. The last row is the result of applying the filter three times. It can be seen that the number of clusters decreases as the number of pyramid reductions and filter applications increases.
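The 5×5×5 averaging can be sketched as a shifted-sum box filter (a minimal numpy version with zero padding at the edges; scipy.ndimage.uniform_filter(space, size=5) is an equivalent library call):

```python
import numpy as np
from itertools import product

def box_filter_3d(space, radius=2):
    """Average each voxel's (2*radius+1)^3 neighborhood (zero-padded edges)."""
    padded = np.pad(space, radius, mode="constant")
    out = np.zeros(space.shape, dtype=float)
    k = 2 * radius + 1
    for dx, dy, dz in product(range(k), repeat=3):
        out += padded[dx:dx + space.shape[0],
                      dy:dy + space.shape[1],
                      dz:dz + space.shape[2]]
    return out / k ** 3

# A lone heavy voxel is spread into a 5x5x5 "cloud", joining nearby objects.
demo = np.zeros((9, 9, 9))
demo[4, 4, 4] = 125.0
blurred = box_filter_3d(box_filter_3d(demo))   # applied twice, as in the paper
```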
2.3. 3-D Object Labeling
The objects in the 3-D color space of size 32×32×32 are labeled using 3-D labeling. The 3-D labeling is comprised of a stack of thirty-two 2-D labelings. 2-D binary object labeling is described in most image processing textbooks, such as Gonzalez and Woods [14]. The thirty-two 2-D labeled images are stacked together, and the connectedness between objects in neighboring 2-D images is searched for and recorded. Similar to 2-D 4-connectedness, a first-order (nearest-neighborhood) connectedness is used. The information about the connectedness between labeled objects in neighboring 2-D images is used to equalize the labels of objects separated in 2-D but connected in 3-D.
The equalization is done through a square sorting matrix. The square sorting matrix has a row number (and hence column number) equal to the total number of labels used in the thirty-two 2-D labelings. The matrix elements are initialized to zero. Each connectedness between objects sets a matrix element to 1. Then a column-by-column scan is carried out. At one particular column, if any two non-zero elements are encountered (which means the two rows are connected and not independent of each other), the two rows are merged into one row and the other row is reset to zero. Eventually every row is independent of the other rows. Each independent row is assigned a representative label which is consistent and consecutive throughout all independent rows. For example, suppose there are in total 7 objects labeled by numbers "1" to "7", where "1" is connected to "4" and "5", "2" is connected to "5" and "6", and "3" and "7" are isolated individual objects, as illustrated in Figure 3. Then the matrix will be set as shown in (1), where the arrow indicates the merge and reset operation:

         1 2 3 4 5 6 7
    1  [ 1 0 0 1 1 0 0 ]          a  [ 1 1 0 1 1 1 0 ]
    2  [ 0 1 0 0 1 1 0 ]          b  [ 0 0 1 0 0 0 0 ]
    3  [ 0 0 1 0 0 0 0 ]   -->    c  [ 0 0 0 0 0 0 1 ]        (1)
    4  [ 0 0 0 1 0 0 0 ]
    5  [ 0 0 0 0 1 0 0 ]
    6  [ 0 0 0 0 0 1 0 ]
    7  [ 0 0 0 0 0 0 1 ]

At the end there are in total three independent rows, representing three independent connected components. They are each assigned the representative labels a, b and c.

Figure 2: Pyramid reduction and low-pass filtering on 2-D sample data (columns: the 2-D sample data and its 1st to 5th pyramid reductions; rows: unfiltered, filter applied once, twice, and three times).

Figure 3: The stack example showing objects separated in 2-D but connected in 3-D (objects 1, 2, 3 and 7 in one 2-D image; objects 4, 5 and 6 in the neighboring image).

Using the consecutive representative labels, all objects in the 32×32×32 space can be alphabetically labeled and identified. Geometric centers (and other geometrical properties) of the labeled 3-D objects can then be calculated. The centers thus calculated can be used as good initial estimates of the cluster centers for the fuzzy c-means algorithm. The number of colors thus acquired is more physically meaningful and relevant than that acquired using other validity measures.
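The sorting-matrix merge of (1) can be sketched as follows (a minimal numpy version using the paper's 7-label example; repeating the scan until every row is independent is a safeguard the paper does not spell out):

```python
import numpy as np

def merge_label_rows(M):
    """Merge rows of the sorting matrix that share a non-zero column."""
    M = M.astype(bool)
    changed = True
    while changed:                                 # repeat until rows are independent
        changed = False
        for col in range(M.shape[1]):
            rows = np.flatnonzero(M[:, col])
            if len(rows) > 1:
                M[rows[0]] = M[rows].any(axis=0)   # merge into the first row
                M[rows[1:]] = False                # reset the merged-away rows
                changed = True
    return M

# The paper's example: labels 1..7, with 1~4, 1~5, 2~5, 2~6 connected.
M = np.eye(7, dtype=int)
for i, j in [(0, 3), (0, 4), (1, 4), (1, 5)]:      # 0-based index pairs
    M[i, j] = 1
merged = merge_label_rows(M)
components = [set(np.flatnonzero(r) + 1) for r in merged if r.any()]
```

Each surviving row is one connected component; assigning them consecutive labels a, b, c reproduces the result of (1).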
Once the number of colors is known, a k-means algorithm [15] or the fuzzy c-means algorithm [12], [16], [17] can be used to further cluster the color space objects. The fuzzy c-means algorithm can produce good results if the correct number of clusters (together with a good initial center estimate) is known a priori.
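As a sketch of this final clustering step, a plain fuzzy c-means iteration (standard Bezdek updates with fuzzifier m = 2; the two-blob demo data and all names are illustrative assumptions, not the paper's data):

```python
import numpy as np

def fuzzy_c_means(points, centers, m=2.0, n_iter=50):
    """Plain fuzzy c-means, seeded with centers such as those from 3-D labeling."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        # distances from every point to every center, shape (N, C)
        d = np.maximum(np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2),
                       1e-12)
        # membership: u[i, k] = 1 / sum_j (d[i, k] / d[i, j])^(2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]   # weighted means
    return centers, u

# Two tight color blobs; labeling-derived guesses land near the true centers.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(40, 2, (50, 3)), rng.normal(200, 2, (50, 3))])
c, u = fuzzy_c_means(pts, [[50.0, 50.0, 50.0], [180.0, 180.0, 180.0]])
```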
3. YIQ~YθS Segmentation
Other than the RGB system, from an engineering point of view, the television broadcasting YIQ system is a well-tried system for electronic color analysis (Pritchard 1977 [18]). The YIQ system is a perceptually consistent system which is a better representation of the human view of colors. It decouples color information from luminance information. Its luminance component is defined to be the same as the CIE Y primary (Foley et al 1990 [19]), or the black-and-white TV component. The Y component can be used as a means of color space dimensionality reduction, in comparison with that described by Yang and Tsai (1995) [20]. The color components of the YIQ system provide additional information about the image contents.
Viewed from the top of the YIQ system, a color phase graph can be obtained, as shown in Figure 4, where the arrows are the 2-D projections of the 3-D color vectors. Their length can be defined as the saturation S, and their angle as the hue angle θ. Thus the YθS coordinate system can be defined.
Figure 4: Top view of the YIQ system (I and Q axes, with hue angles: Magenta 28°, Red 70°, Yellow 134°, Green 208°, Cyan 250°, Blue 314°).
The total transformations from RGB coordinates to YθS coordinates are

    [Y]   [ 0.30  0.59  0.11 ] [R]
    [I] = [ 0.60 -0.28 -0.32 ] [G]        (2)
    [Q]   [ 0.21 -0.52  0.31 ] [B]

and

    Y = Y
    θ = arctg(I/Q)                        (3)
    S = sqrt(I*I + Q*Q)
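Equations (2) and (3) can be sketched per pixel as follows (atan2 is used instead of arctg(I/Q) to avoid division by zero and cover the full 0°-360° range, and the normalization of RGB to [0, 1] is an assumption):

```python
import math

def rgb_to_yts(r, g, b):
    """RGB -> (Y, theta, S) via the YIQ transform of Eqs. (2) and (3)."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    i = 0.60 * r - 0.28 * g - 0.32 * b
    q = 0.21 * r - 0.52 * g + 0.31 * b
    theta = math.degrees(math.atan2(i, q)) % 360.0   # hue angle in degrees
    s = math.hypot(i, q)                             # saturation
    return y, theta, s
```

Pure red, for example, lands at a hue angle near 70°, matching Figure 4; pure white gives S near zero, confirming that the color components carry no luminance.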
Using the transformation, an RGB space distribution h(r, g, b) can be transformed into a YθS space distribution h(y, θ, s). Such a distribution h(y, θ, s) usually constitutes a number of 3-D modes (major peak blocks). Ideally, color segmentation should be done by segmenting all these modes.
If the 3-D distribution is projected onto the three axes, there will be three 1-D distributions h(y), h(θ) and h(s), and each 1-D distribution will have a number of modes, Ny, Nθ and Ns respectively. The maximum number of possible 3-D modes is the product of the projected mode numbers:

    Nmax = Ny * Nθ * Ns        (4)

For example, if there is one mode in the Y component, two in the θ component and two in the S component, then the maximum possible number of color space modes is four. By looking for the significant modes, an input color image can be efficiently segmented. One application example of the method is human face extraction.
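Counting the modes of a 1-D projection can be sketched as a simple peak count (the min_height significance threshold is an assumption; in practice the projections would first be smoothed):

```python
def count_modes(hist, min_height):
    """Count local maxima (modes) in a 1-D histogram projection."""
    peaks = 0
    for k in range(1, len(hist) - 1):
        if hist[k] >= min_height and hist[k] > hist[k - 1] and hist[k] >= hist[k + 1]:
            peaks += 1
    return peaks

# Eq. (4): upper bound on 3-D modes from the three 1-D projections, e.g.
# n_max = count_modes(h_y, t) * count_modes(h_theta, t) * count_modes(h_s, t)
```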
4. Experimental Results

Two examples are shown in Figures 5 through 15 using the RGB space analysis method. Figure 5 is the input image for the 1st example and Figure 10 the input for the 2nd example. Figures 6~9 and Figures 11~15 are the segmentation results, presented on a gray background. The results show that important text information is clearly lifted from the original input images. The time components spent using the RGB space analysis method are: 5.55 seconds for the classic color space analysis (color number calculation and initial center estimation) and 3.02 seconds for fuzzy clustering in the 1st example; 5.67 seconds for the color analysis and 8.89 seconds for the fuzzy clustering in the 2nd example.
The 3rd example, applying the YIQ~YθS analysis, is shown in Figures 16~18. Figure 16 is the input image. Figure 17 and Figure 18 are the segmentation results. The parameters used are 16 ≤ S (saturation) ≤ 77 and 81 ≤ θ (the hue angle) ≤ 127.
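The thresholding step then reduces to a per-pixel range test; a minimal sketch using the 3rd example's parameters (the scales of θ and S are assumed to match those above):

```python
def select_mode(theta, s, theta_lo=81.0, theta_hi=127.0, s_lo=16.0, s_hi=77.0):
    """True if a pixel's hue angle and saturation fall inside the chosen mode."""
    return theta_lo <= theta <= theta_hi and s_lo <= s <= s_hi
```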
5. Conclusion

The paper has described two methods of color space analysis and their applications. The RGB space analysis method is comprised of three sub-processes: pyramiding, which reduces the color space to 32×32×32; low-pass filtering, which reduces the number of color space objects to fewer than seven; and 3-D labeling, which identifies each 3-D object and calculates its geometric properties. Once the proper number of colors and good initial center values are obtained, the fuzzy c-means algorithm can achieve the result for which it is well known. The YIQ~YθS space analysis is comprised of two transformations. Segmenting (thresholding) the modes in the θ~S plane can extract specific color patches from a color input image regardless of luminance diversity.
References

[1] Y.-I. Ohta, T. Kanade and T. Sakai: Color information for region segmentation. Computer Graphics and Image Processing 13, 222-241 (1980).
[2] L. Shafarenko, M. Petrou and J. Kittler: Histogram-based segmentation in a perceptually uniform color space. IEEE Transactions on Image Processing, Vol. 7, No. 9, September 1998.
[3] Y. W. Lim and S. U. Lee: On the color image segmentation algorithm based on the thresholding and the fuzzy c-means techniques. Pattern Recognition, Vol. 23, No. 9, pp. 935-952, 1990.
[4] X. L. Xie and G. Beni: A validity measure for fuzzy clustering. IEEE Trans. Pattern Anal. Machine Intell., 13(8):841-847, 1991.
[5] G. Healey: Segmenting images using normalized color. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22, No. 1, January/February 1992.
[6] C.-L. Huang, T.-Y. Cheng and C.-C. Chen: Color images' segmentation using scale space filter and Markov random field. Pattern Recognition, Vol. 25, No. 10, pp. 1217-1229, 1992.
[7] J. Liu and Y.-H. Yang: Multiresolution color image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 7, July 1994.
[8] M. M. Chang, M. I. Sezan and A. M. Tekalp: Adaptive Bayesian segmentation of color images. Journal of Electronic Imaging 3(4), 404-414 (October 1994).
[9] J. Wu, H. Yan and A. N. Chalmers: Color image segmentation using fuzzy clustering and supervised learning. Journal of Electronic Imaging 3(4), 397-403 (October 1994).
[10] T. Uchiyama and M. A. Arbib: Color image segmentation using competitive learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 12, December 1994.
[11] E. Littmann and H. Ritter: Adaptive color segmentation - a comparison of neural and statistical methods. IEEE Transactions on Neural Networks, Vol. 8, No. 1, January 1997.
[12] J. C. Bezdek: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981.
[13] C. L. Tan and P. O. Ng: Text extraction using pyramid. Pattern Recognition, Vol. 31, No. 1, pp. 63-72, 1998.
[14] R. C. Gonzalez and R. E. Woods: Digital Image Processing. Addison-Wesley Publishing Inc., 1992, pp. 40-45 and pp. 416-423.
[15] J. T. Tou and R. C. Gonzalez: Pattern Recognition Principles. Addison-Wesley, Reading, MA, 1974.
[16] Z. Chi, H. Yan and T. D. Pham: Fuzzy Algorithms: With Applications to Image Processing and Pattern Recognition. World Scientific, Singapore, 1996.
[17] J. C. Bezdek and S. K. Pal: Fuzzy Models for Pattern Recognition. IEEE Press, New York, 1992.
[18] D. H. Pritchard: U.S. color television fundamentals - a review. IEEE Trans. Consumer Electronics, vol. CE-23, no. 4, pp. 467-478, 1977.
[19] J. D. Foley, A. van Dam, S. K. Feiner and J. F. Hughes: Computer Graphics: Principles and Practice. Addison-Wesley, 1990, pp. 563-603.
[20] C.-K. Yang and W.-H. Tsai: Reduction of color space dimensionality by moment-preserving thresholding and its application for edge detection in color images. Pattern Recognition Letters, 17 (1996), 481-490.
Figure 5: 1st example.
Figure 6: Segment #1.
Figure 7: Segment #2.
Figure 8: Segment #3.
Figure 9: Segment #4.
Figure 10: 2nd example.
Figure 11: Segment #1.
Figure 12: Segment #2.
Figure 13: Segment #3.
Figure 14: Segment #4.
Figure 15: Segment #5.
Figure 16: 3rd example.
Figure 17: Segment #1.
Figure 18: Segment #2.