Separation of wheat kernels into different quality groups using computer vision and image shape analysis

Stanislav Penchev
Computer Vision and Media Technology Laboratory, Aalborg University, Denmark.
22. 06. 2004
The problem.

The problem which has to be solved is the following: we have images of healthy and damaged wheat kernels. They have to be separated into three general groups [1]: healthy kernels (HK), broken kernels with more than a half of the original size remaining (BrB), and broken kernels with less than a half of the original size remaining (BrS). The separation must be done using analysis of the shape of the objects.
The following algorithm for separation is proposed. It consists of a few general steps:

Step 1: Finding proper features and performing a classification in order to separate all the broken objects from all the healthy ones. As a result, the input set of images is divided into two groups: healthy and broken (no matter how) objects (kernels).

Step 2: Finding proper features and performing a classification in order to separate the second group from step 1 (broken objects) into two sub-groups: broken kernels with more than a half of the original size remaining (BrB) and those with less than a half of the original size remaining (BrS).

The following chart (fig. 1) briefly describes the algorithm presented.

fig. 1 Algorithm for separation of wheat kernels
The solution.

I. Image acquisition conditions and image processing.

As was said above, the problem must be solved by shape analysis of the objects in the images. For that reason, the images were taken under conditions that give information mostly about the shape of the objects. However, before the shape analysis part there are several important steps that have to be done. They are:
- Image acquisition;
- Image thresholding;
- Obtaining information about the object's contour.

Within this chapter each of these steps will be briefly explained.
Every image contains one object, which is placed on a semi-transparent horizontal plane. The CCD camera is mounted above, at a proper distance from the plane, and the illumination system is placed below the object and the plane. Under such conditions, the images taken consist of two general groups of pixels (fig. 2): bright pixels, which belong to the background, and dark pixels, which belong to the outline of the particular object. What is more, these conditions allow further image processing procedures to extract the object from the background more easily and to obtain the object's contour more correctly. Finally, the images are stored in bitmap (*.bmp) format using a grayscale color palette.

fig. 2 Images of different objects
After the images are acquired, Matlab's Image Processing Toolbox is used to perform the following operations. First, the function im2bw is used to convert an image to a binary image based on a threshold. The output binary image has values of 0 (black) for all pixels in the input image with luminance less than a certain level, and 1 (white) for all other pixels (fig. 3). The function graythresh can be used to compute the level argument automatically. It computes a global image threshold using Otsu's method, and the result is a normalized intensity value that lies in the range [0, 1].
fig. 3 Threshold results for the examples from fig. 2
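The document's implementation is in Matlab; the two functions can be sketched in numpy terms as follows (the names im2bw and graythresh are kept from the toolbox, but this is an illustrative re-implementation, not the toolbox code):

```python
import numpy as np

def graythresh(img):
    """Otsu's method: choose the normalized level in [0, 1] that
    maximizes the between-class variance of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # gray-level probabilities
    omega = np.cumsum(p)                      # probability of the dark class
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return np.nanargmax(sigma_b) / 255.0      # normalized threshold

def im2bw(img, level):
    """1 (white) for pixels with luminance greater than level, else 0."""
    return (img > level * 255).astype(np.uint8)
```

For the back-lit images described above, the histogram is strongly bimodal, so Otsu's criterion places the level between the object and the background modes.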
As a next step, the contour of the object in the image is determined. There are many well-known methods for that. Most of them deal with edge detection and further processing to connect boundary pixels. Here the contour pixels are found using a morphological operation performed by the function bwmorph(image, option). When using the option 'remove', that function removes interior pixels. This option sets a pixel to 0 if all of its 4-connected neighbors are 1, thus leaving only the boundary pixels on (fig. 4). However, before that the binary image must be inverted, so that the pixels important for the morphological operation (the object's pixels) become 1 (on) and all the others 0 (off).
fig. 4 Contour detection for the examples from fig. 3
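The effect of bwmorph(image, 'remove') can be mimicked with plain array operations; the sketch below is an illustration, not the toolbox code, and it clears every pixel whose four 4-connected neighbors are all on:

```python
import numpy as np

def remove_interior(bw):
    """Like bwmorph(bw, 'remove'): set a pixel to 0 if all four of its
    4-connected neighbors are 1, leaving only the boundary pixels on."""
    bw = bw.astype(bool)
    p = np.pad(bw, 1)                   # image border counts as off
    interior = p[1:-1, :-2] & p[1:-1, 2:] & p[:-2, 1:-1] & p[2:, 1:-1]
    return (bw & ~interior).astype(np.uint8)
```

Applied to the inverted binary image, this keeps exactly the one-pixel-wide outline of the kernel.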
To find the location of the boundary pixels, the function imfeature is used. It computes feature measurements for image regions. Two of them are most important when obtaining information about the object's contour: 'Centroid' and 'PixelList'. The first is a 1-by-2 vector which contains the x- and y-coordinates of the center of mass of the region; the second is a p-by-2 matrix with the actual pixels in the region, where each row contains the x- and y-coordinates of one pixel. In fact, since there is only one region in the image (fig. 4), the matrix contains the coordinates of the pixels from the object's boundary.
Due to the fact that the function imfeature finds features by scanning the image row by row, the pixels in the matrix are sorted in ascending order of the rows. On the other hand, it is good for the pixels that describe the contour to be consecutive. So the following procedure for putting the pixel list in proper order is proposed. The matrix rows are sorted according to the slope of the line that connects each pixel with the center of mass of the object. It is described by the equation:

φ_i = atan2(y_i − y_c, x_i − x_c),    (1)

where (x_c, y_c) is the center of mass and (x_i, y_i) is a particular pixel. The value of φ_i is calculated for every pixel in the list. Because the pixel list represents a closed contour, the slope value changes between −π and π radians. After that, the pixels are sorted in ascending order of φ_i. The result is a set of consecutive contour points in clockwise direction.
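The re-ordering step can be sketched in numpy as follows (function name illustrative); atan2 handles all quadrants and returns values in (−π, π], matching eq. (1):

```python
import numpy as np

def sort_contour(pixels, centroid):
    """Order boundary pixels by the angle (eq. 1) of the line joining
    each pixel to the center of mass, so the contour becomes consecutive."""
    cx, cy = centroid
    phi = np.arctan2(pixels[:, 1] - cy, pixels[:, 0] - cx)  # in (-pi, pi]
    return pixels[np.argsort(phi)]
```

In image coordinates, where the y axis points downward, ascending φ traverses the contour clockwise, as the text states.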
II. Separation of healthy and broken kernels.

An algorithm for finding a proper feature that could effectively separate broken kernels from healthy ones using the shape information is proposed in the following chapter. The main idea of the algorithm is to perform a comparison between a particular shape and some well-known theoretical curve, to assess the difference between them, and to use it as a possible feature. It is expected that the feature could have considerably different values for objects from different quality groups. Finally, a classification is performed in order to prove the usefulness of the proposed feature. The algorithm consists of the following main steps:
1.) Finding the description of the theoretical curve. An ellipse is chosen as the curve whose shape will be compared with the object's shape. That is due to the fact that the shape of healthy wheat kernels is very similar to an ellipse, so it would be easy to highlight any shape violations caused by different damages. Having a list of consecutive contour pixels as an input, the ellipse parameters are obtained by the direct least squares fitting method. That method combines the following advantages: ellipse-specificity, providing useful results under all noise and occlusion conditions; invariance to affine transformation of the data; high robustness to noise; high computational efficiency.
2.) Comparison between the object's and the ellipse's shape. After the parameters are obtained, the ellipse is constructed in 2D space using the same number of points as the number of the object's contour points. These two particular shapes are described using a one-dimensional distance function: the distance from each shape point to the centre of mass of the figure. As a result, we have two distance functions, one for the object and one for the corresponding ellipse. The comparison between the two shapes is performed by calculating the average squared difference K of the two distance functions.
3.) Classification using the feature proposed. The average squared difference K is calculated for all objects in the input set of images. After that, a classification into two classes (healthy and broken kernels) is performed in order to prove the usefulness of that feature. A naïve Bayesian classifier and a backpropagation neural network are used as classification procedures. The results show that the feature K is very useful for separation of broken kernels from healthy ones when shape information is used as an object description.
II. 1. Ellipse fitting.

The literature on ellipse fitting divides into two broad techniques: clustering (such as Hough-based methods) and least-squares fitting [2]. Least-squares techniques center on finding the set of parameters that minimize some distance measure between the data points and the ellipse. It could be said that ellipse fitting is closely related to another problem, conic fitting.
Let a general conic be represented by an implicit second order polynomial:

F(a, x) = a·x = ax² + bxy + cy² + dx + ey + f = 0,    (2)

where a = [a b c d e f]^T and x = [x² xy y² x y 1]^T. F(a; x_i) is called the "algebraic distance" of a point (x_i, y_i) to the conic F(a, x) = 0. The fitting of a general conic may be approached by minimizing the sum of squared algebraic distances

D_A(a) = Σ_{i=1}^{N} F(x_i)²    (3)

of the curve to the N data points x_i.
In order to avoid the trivial solution a = 0₆, and recognizing that any multiple of a solution a represents the same conic, the parameter vector a is constrained in some way. If a quadratic constraint is set on the parameters (e.g., ||a||² = 1, to avoid the trivial solution), the minimization (3) can be solved by the rank-deficient generalized eigenvalue system:

D^T D a = λ C a,    (4)

where D = [x₁ x₂ … x_N]^T is called the design matrix and C is the matrix that expresses the constraint.
In order to fit ellipses specifically, while retaining the efficiency of solution of the linear least-squares problem (3), it is good to constrain the parameter vector a so that the conic it represents is forced to be an ellipse. The appropriate constraint is well known, namely that the discriminant b² − 4ac be negative. However, this constrained problem is difficult to solve in general, as the Kuhn-Tucker conditions do not guarantee a solution. If the parameters are arbitrarily scaled and that scaling is incorporated into the constraint, the equality constraint 4ac − b² = 1 can be imposed. This is a quadratic constraint which may be expressed in matrix form as

a^T C a = 1.    (5)
Now the constrained ellipse fitting problem reduces to minimizing E = ||Da||², subject to the constraint

a^T C a = 1.    (6)

Introducing the Lagrange multiplier λ and differentiating, we arrive at the system of simultaneous equations

2 D^T D a − 2 λ C a = 0,  a^T C a = 1.    (7)

This may be rewritten as the system

S a = λ C a,    (8)

a^T C a = 1,    (9)

where S = D^T D is the scatter matrix. This system is readily solved by considering the generalized eigenvectors of (8). If (λ_i, u_i) solves (8), then so does (λ_i, μ u_i) for any μ, and from (9) we can find the value of μ_i as μ_i² u_i^T C u_i = 1, giving

μ_i = √(1 / (u_i^T C u_i)).    (10)

Finally, setting â_i = μ_i u_i solves (7). The minimization of E subject to (6) yields exactly one solution, which corresponds, by virtue of the constraint, to an ellipse.
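A numpy sketch of this solution path (eqs. (2), (8)-(10)) is given below; it is an illustration, not the fitellipse code the document uses, and it assumes slightly noisy contour data (for points lying exactly on a conic the scatter matrix S is singular):

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit: solve S a = lambda C a (eq. 8)
    under a'C a = 1 (eq. 9); returns the conic coefficients of eq. (2),
    a = [a, b, c, d, e, f] scaled so that 4ac - b^2 = 1."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                        # scatter matrix, eq. (8)
    C = np.zeros((6, 6))               # constraint matrix for 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Eigenvectors of S^-1 C: exactly one has a'C a > 0 and is the ellipse
    _, V = np.linalg.eig(np.linalg.solve(S, C))
    for i in range(6):
        u = np.real(V[:, i])
        q = u @ C @ u
        if q > 1e-8:
            return u / np.sqrt(q)      # mu of eq. (10) rescales a'C a to 1
    raise ValueError("no elliptical solution found")
```

Selecting the eigenvector with positive a^T C a is equivalent to taking the single positive generalized eigenvalue of (8), since a^T S a = λ a^T C a and S is positive definite for noisy data.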
A Matlab implementation of the theoretical approach presented above is used in order to determine the ellipse parameters. The function E = fitellipse(X, Y) takes as input arguments the coordinates of 2D data points, solves the system (7), finding in that way the coefficients of the polynomial (2), and then returns in E the set of ellipse parameters: the center, radii and orientation of the ellipse, stored as (Cx, Cy, Rx, Ry, theta).
After applying the function to the coordinates of the contour points in the image and obtaining the parameters of the fitted ellipse, the locations of its points need to be found. For that reason the Matlab function t = linspace(0, 2*pi, number of contour points) is used to generate a linearly spaced vector between 0 and 2π with as many values as there are contour points. After that, the coordinates of the ellipse points are calculated using

x(t) = Rx cos(t),  y(t) = Ry sin(t).    (11)

As a final step, these coordinates are rotated to the ellipse orientation and then translated according to the ellipse center:

x′ = Cx + x cos(θ) − y sin(θ),  y′ = Cy + x sin(θ) + y cos(θ).    (12)

Now we have a set of 2D points that belong to the fitted ellipse and are the same number as the object's contour points. The last action taken is to sort them in the same way as the contour pixels, obtaining a list of consecutive ellipse points which will be used by further procedures of the main algorithm.
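Steps (11)-(12) can be sketched as follows (function name illustrative):

```python
import numpy as np

def ellipse_points(cx, cy, rx, ry, theta, n):
    """Sample n points on the fitted ellipse: eq. (11) in canonical
    position, then rotation by theta and translation to the center (eq. 12)."""
    t = np.linspace(0, 2 * np.pi, n)           # as with Matlab's linspace
    x, y = rx * np.cos(t), ry * np.sin(t)      # eq. (11)
    ct, st = np.cos(theta), np.sin(theta)
    return np.column_stack([cx + ct * x - st * y,    # eq. (12)
                            cy + st * x + ct * y])
```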
II. 2. Comparison between the object and ellipse shape.

In order to perform a comparison between the object's shape and the corresponding ellipse, these two figures have to be described in the same way. One of the simplest description techniques which also gives information about any changes along the contour is the so-called centroid distance [3]. It is a 1D function which represents the distance from each shape point to the centre of mass of the figure. Having in mind that both the object and the fitted ellipse have the same orientation and the same number of points, which are consecutive, the centroid distance function can be used to compare the two shapes.
Starting from the first point, the distances between the contour points and the centre of mass of the object, derived from the function imfeature, and between the ellipse points and the centre of the ellipse, derived from the function fitellipse, are calculated. Then the two data sets are normalized in order to avoid scaling problems. A way to normalize the data is to center it to zero mean and scale it to unit standard deviation:

sdate = (cdate - mean(cdate))./std(cdate)    (13)

As a measure of the similarity of the two functions, an average squared difference K between their corresponding values is introduced:

K = (1/N) Σ_{i=1}^{N} (r_o(i) − r_e(i))²,    (14)

where r_o(i) is the centroid distance function for the object's contour, r_e(i) is the same for the fitted ellipse, and N is the number of points.
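The signature and the comparison of eqs. (13)-(14) can be sketched in numpy as follows (function names illustrative):

```python
import numpy as np

def centroid_distance(points, center):
    """1D shape signature: distance of each contour point to the centroid."""
    return np.hypot(points[:, 0] - center[0], points[:, 1] - center[1])

def shape_difference_K(obj_pts, obj_center, ell_pts, ell_center):
    """Average squared difference (eq. 14) of the two centroid distance
    functions, each normalized to zero mean and unit deviation (eq. 13)."""
    ro = centroid_distance(obj_pts, obj_center)
    re = centroid_distance(ell_pts, ell_center)
    ro = (ro - ro.mean()) / ro.std()
    re = (re - re.mean()) / re.std()
    return float(np.mean((ro - re) ** 2))
```

Identical point lists give K = 0; local damage that pulls part of the contour away from the fitted ellipse raises K.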
In fig. 5 and fig. 6, examples of the approach described above are presented. As can be seen in fig. 5a and b, the shape of a healthy wheat kernel is very close to an ellipse, so the two centroid distance functions are expected to be similar (fig. 5c) and the parameter K to have a small value. In that case K = 0.0224. On the other hand, when the kernel is broken somehow, its shape is considerably different from the shape of the fitted ellipse (fig. 6a and b). That is the reason why the difference between the two centroid distance functions is bigger than in the previous case (fig. 6c). For that particular example K = 1.1493.
fig. 5 Graphical example of the presented approach for healthy kernels: a.) the shape of the object; b.) the shape of the fitted ellipse; c.) the centroid distance functions for the two shapes: solid line – the object, dashed line – the ellipse.
fig. 6 Graphical example of the presented approach for broken kernels: a.) the shape of the object; b.) the shape of the fitted ellipse; c.) the centroid distance functions for the two shapes: solid line – the object, dashed line – the ellipse.
II. 3. Classification using K as a feature.

As was said before, an attempt to classify the objects using the parameter K as a feature is made, in order to prove its usefulness. Two classification procedures are used: a naïve Bayesian classifier and a backpropagation neural network. The task is to classify the objects into two classes: healthy kernels and broken ones (no matter how). Both procedures need some preliminary training before the actual classification, in order to determine the decision rules. For that reason, half of the input set of images is used as a training set; the classification is then performed over the whole sample. The input set consists of 150 kernel images (90 healthy and 60 broken), so the training sample combines 45 healthy and 30 broken kernels (75 objects in total).
In order to define the first classifier, the basic Bayes' rule is taken into consideration. After applying it to a two-class problem, the criterion for the decision to assign x to the class ω₁ becomes

L₂₁ P(ω₁) p(x|ω₁) > L₁₂ P(ω₂) p(x|ω₂),    (15)

where P(ω_i) is the a priori probability of class ω_i; L is the loss function; p(x|ω_i) is the likelihood function of class ω_i with respect to x: it is the probability density function for x given that the state of nature is ω_i (i.e. it is a pattern belonging to class ω_i). This is the so-called maximum likelihood rule.
Some assumptions are made before applying that rule to this particular case. First of all, the loss function L is considered to be symmetric, i.e. L_ik = 1 for i ≠ k and L_ii = 0. Then, having in mind that the training sample consists of 45 healthy and 30 broken kernels, P(ω₁) is set to 0.6 and P(ω₂) to 0.4. So, in that case the maximum likelihood rule becomes

0.6 p(x|ω₁) > 0.4 p(x|ω₂).    (16)
The probability density functions p(x|ω_i) are calculated using the expression for the normal distribution, where the mean value and standard deviation are obtained from the training samples for each of the classes.
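A sketch of the resulting decision rule (16) is given below; the class means and standard deviations here are made-up stand-ins, not the values estimated from the actual training sample:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density used as the class likelihood p(x | w_i)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(k, stats1=(0.05, 0.03), stats2=(1.0, 0.5)):
    """Decision rule (16): class 1 (healthy) if 0.6 p(x|w1) > 0.4 p(x|w2).
    stats1/stats2 are hypothetical (mean, std) pairs standing in for the
    values estimated from the training sample."""
    p1 = 0.6 * gaussian_pdf(k, *stats1)
    p2 = 0.4 * gaussian_pdf(k, *stats2)
    return 1 if p1 > p2 else 2
```

With these hypothetical statistics, the K values quoted for fig. 5 and fig. 6 fall on opposite sides of the decision boundary.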
The actual classification using the decision rule (16) is performed. The results show that only one of the healthy kernels and only two of the broken kernels are misclassified. The classification accuracy in that case is 98.9% for class 1 (healthy kernels) and 96.7% for class 2 (broken kernels).
As an alternative to the Bayesian classifier, a backpropagation neural network is used to solve the same task. It consists of three layers: an input layer with one neuron, because only one feature is used; an output layer with one neuron, because the classification is into two classes and the desired output in this case is 0 for objects from class 1 and 1 for objects from class 2; and one hidden layer with 9 neurons, where the number of neurons is obtained from the expression

(17)

where I is the number of inputs, O is the number of outputs and y is the dimension of the training sample. A graphical representation of the network structure is given in fig. 7.
fig. 7 The structure of the neural network used.
After the training using the objects from the training sample, a simulation of the network's behavior using all the objects from the input sample is performed. The results show that only one of the healthy kernels is misclassified, while all the broken kernels are correctly recognized. The classification accuracy in that case is 98.9% for class 1 (healthy kernels) and 100% for class 2 (broken kernels).
The analysis of the classification results leads to the conclusion that the feature K (a degree of the difference between the shape representations of the investigated object and some theoretical curve) is very useful when trying to solve the first task of the main algorithm: to separate all the broken objects from all the healthy ones. One of the main advantages of such an approach is that only one feature is needed to describe the objects for the needs of the classification procedure used. The comparative analysis of the results from both classifiers also shows that the backpropagation neural network could be more useful, because it separates all broken objects from the investigated sample correctly.
III. Determination of the degree of damage for broken kernels.

As was mentioned above, the main algorithm for separation of wheat kernels into three quality groups is divided into two main steps. The first step deals with separation of all the broken from all the healthy kernels in a sample, and its solution was described in chapter II. The second step considers all the broken kernels as an input, and the problem which has to be solved is how to distinguish kernels with less than a half of the original size remaining from those with more than a half remaining. In the following chapter an outline of a possible solution will be presented, together with an explanation of those of its steps which are already developed.
One of the ways to determine how big the missing part of the kernel is, is to compare its shape with that of a healthy kernel or some theoretical curve close to it. Thus, since we have information about the object's contour, we have to decide which part of it is "healthy" and which "broken". This means that the shape has to be analyzed in order to determine which part of it is non-damaged and where (along the contour) the damage is. If we succeed in that, we have to extract the "healthy" points from the contour and, using them as a basis, to perform a reconstruction of the original shape. Finally, by comparison between the reconstructed and the present shape, we have to assess the so-called "degree of damage": how big the missing part is compared to the possible shape of a healthy kernel. Unfortunately, the approach presented above is still under development, so in this chapter only the idea of how to extract the "healthy" points of the contour will be explained in detail.
III. 1. Extraction of the contour pieces which correspond to a non-damaged part of the object.

It was already mentioned that the shape of the object is represented by a list of consecutive contour points. Thus, it can be considered as a 2D function in Euclidean space. The main idea of the algorithm for determination of the "healthy" parts of the contour is to analyze its curvature and to extract those parts where the curvature changes in some unusual way. Because curvature assessment is based on calculation of first and second order derivatives of the function, it is very sensitive to "noise": even small and insignificant changes of the contour can cause relatively big changes in the magnitude or orientation of the curvature vector. To avoid such an effect, the contour has to be smoothed before the curvature is calculated for every point. Finally, the contour is segmented into several parts, and those for which the curvature changes undesirably are not considered "healthy".
III. 1. 1. Reducing the number of contour points and smoothing.

To obtain a smooth boundary and noise insensitivity, a Gaussian smoothing filter is used. Before that, the number of contour points is reduced in order to increase the effect of smoothing and to obtain a new, equally spaced set of points. The proposed procedure consists of the following steps. First, the length of the contour is calculated as the sum of the distances between all contiguous points:

P = Σ_{i=1}^{N−1} |p_{i+1} − p_i|,    (18)

where N is the number of points. Then a step d is obtained as 1% of P. Finally, starting from the first point, the new set of contour points is arranged. The criterion is

|q_{j+1} − q_j| ≥ d,    (19)

where q_j are the points in the new set, j = 1, …, M, and M is the number of points in the new set.
An example of this approach is given in fig. 8.

fig. 8 Results for contour point reduction: a.) original contour; b.) reduced contour.
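The reduction step (eqs. (18)-(19)) and a circular Gaussian smoothing of the closed contour can be sketched as follows (names and the smoothing width are illustrative choices):

```python
import numpy as np

def resample_contour(pts):
    """Reduce the contour to roughly equally spaced points: the step d is
    1% of the contour length P (eq. 18), and the next kept point is the
    first one at distance >= d from the previous kept point (eq. 19)."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = 0.01 * seg.sum()               # 1% of the contour length P
    kept = [pts[0]]
    for p in pts[1:]:
        if np.linalg.norm(p - kept[-1]) >= d:
            kept.append(p)
    return np.array(kept)

def smooth_contour(pts, sigma=2.0):
    """Gaussian smoothing of the closed contour, wrapping around the ends."""
    m = int(3 * sigma)
    g = np.exp(-0.5 * (np.arange(-m, m + 1) / sigma) ** 2)
    g /= g.sum()
    out = np.empty_like(pts, dtype=float)
    for k in (0, 1):                   # smooth x and y separately
        padded = np.concatenate([pts[-m:, k], pts[:, k], pts[:m, k]])
        out[:, k] = np.convolve(padded, g, mode="valid")
    return out
```

Wrapping the ends before convolving avoids the edge artifacts a plain filter would introduce at the (arbitrary) starting point of the closed contour.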
III. 1. 2. Boundary segmentation.

After the contour is reduced and smoothed, it is segmented into several parts, which later will be tested for whether or not they belong to the "healthy" side of the boundary. These parts are separated by several characteristic contour points, called "corners". Actually, these are the contour points for which the curvature is highest.
The proposed two-pass algorithm for extraction of these characteristic points defines a corner in a simple and intuitively appealing way, as a location where a triangle of specified size and opening angle can be inscribed in a curve [4]. A curve is represented by a sequence of points in the image plane. The ordered points are densely sampled along the curve, but no regular spacing between them is assumed. In the first pass the algorithm scans the sequence and selects candidate corner points. The second pass is post-processing to remove superfluous candidates.
First pass. At each curve point p the detector tries to inscribe in the curve a variable triangle (p⁻, p, p⁺) constrained by a set of simple rules:

d_min² ≤ |p − p⁺|² ≤ d_max²,
d_min² ≤ |p − p⁻|² ≤ d_max²,    (20)
|α| ≤ α_max,

where a = |p − p⁺| is the distance between p and p⁺, b = |p − p⁻| the distance between p and p⁻, and α the opening angle of the triangle (see fig. 9a). The latter is computed as

α = arccos((a² + b² − c²) / (2ab)),    (21)

where c = |p⁺ − p⁻|.
fig. 9 Detecting high curvature points: (a) determining if p is a candidate point; (b) testing p for sharpness non-maxima suppression.
Variations of the triangle that satisfy the conditions (20) are called admissible. The search for admissible variations starts from p outwards and stops if any of the conditions (20) is violated. (That is, only a limited number of neighboring points is considered.) Among the admissible variations, the one with the least opening angle α(p) is selected, and π − |α(p)| is assigned to p as the sharpness of the candidate. If no admissible triangle can be inscribed, p is rejected and no sharpness is assigned.
Second pass. The sharpness-based non-maxima suppression procedure is illustrated in fig. 9b. A corner detector can respond to the same corner in a few consecutive points. Similarly to edge detection, a post-processing step is needed to select the strongest response by discarding the non-maxima points. A candidate point p is discarded if it has a sharper valid neighbor p_v: α(p_v) < α(p). In the current implementation, a candidate point p_v is a valid neighbor of p if |p − p_v|² ≤ d_max². As alternative definitions, one can use |p − p_v|² ≤ d_min² or the points adjacent to p.
Parameters. d_min, d_max and α_max are the parameters of the algorithm. d_min sets the scale (resolution), with small values responding to fine corners. The upper limit d_max is necessary to avoid false sharp triangles formed by distant points in highly varying curves. α_max is the angle limit that determines the minimum sharpness accepted as high curvature. In practice, we often set d_max = d_min + 2 and tune the remaining two parameters, d_min and α_max. The default values are d_min = 7 and α_max = 150°.
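A compact sketch of the two-pass detector described above is given below; it is an illustration of [4], not the original implementation, and the defaults (d_min = 7, d_max = d_min + 2, α_max = 150°) are illustrative values:

```python
import numpy as np

def find_corners(pts, d_min=7.0, d_max=9.0, alpha_max=np.deg2rad(150)):
    """Pass 1: inscribe admissible triangles (eq. 20) at each point and
    keep the least opening angle alpha(p), computed by eq. (21).
    Pass 2: discard candidates with a sharper valid neighbor within d_max."""
    n = len(pts)
    dist = lambda i, j: np.linalg.norm(pts[i % n] - pts[j % n])
    alpha = np.full(n, np.inf)
    for i in range(n):
        for j in range(1, n):                    # forward arm to p+
            a = dist(i, i + j)
            if a > d_max:
                break
            if a < d_min:
                continue
            for k in range(1, n):                # backward arm to p-
                b = dist(i, i - k)
                if b > d_max:
                    break
                if b < d_min:
                    continue
                c = dist(i + j, i - k)
                ang = np.arccos(np.clip((a * a + b * b - c * c) / (2 * a * b),
                                        -1.0, 1.0))
                if ang <= alpha_max:
                    alpha[i] = min(alpha[i], ang)
    # Second pass: sharpness-based non-maxima suppression
    return [i for i in range(n)
            if np.isfinite(alpha[i]) and not any(
                j != i and np.isfinite(alpha[j]) and alpha[j] < alpha[i]
                and dist(i, j) <= d_max for j in range(n))]
```

On a polygonal contour the surviving indices are exactly the vertices, since every nearby candidate has a larger opening angle and lies within d_max of a vertex.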
In fig. 10, the result of the presented algorithm is illustrated using the example image from fig. 8. The positions of the corners are shown by black squares.

fig. 10 Corner detection.
After the positions of the N corners are determined, the overall contour is divided into N boundary segments and N separate pixel lists are created. Each list includes the pixel coordinates between corners c_i and c_{i+1}; the last list includes the pixels between c_N and c_1. These pixel lists are saved and will be used by the next procedure.
III. 1. 3. Assessment of the contour segments using their curvature.

Curvature is one of the most commonly used mathematical characteristics when describing a planar curve [5]. It can give information about the behavior of the curve and help to find the parts where the curve bends more sharply, which in our particular case could have some physical meaning.
The simplest form of curvature, and that usually first encountered in calculus, is extrinsic curvature. In two dimensions, let a plane curve be given by the Cartesian parametric equations x = x(t) and y = y(t). Then the curvature k is defined by

k = dφ/ds,    (22)

where φ is the tangential angle and s is the arc length. This can be represented as

k = (x′y″ − y′x″) / (x′² + y′²)^(3/2),    (23)

where x′, y′ and x″, y″ are, respectively, the first and second derivatives of x and y with respect to t.
When applying (23) to an object's contour, it can be seen that k has positive values where the contour is convex and negative values for the parts which are concave. In fig. 11 the curvature vectors for every contour pixel are shown. When the vector direction is towards the inside of the object, the value of k is positive; otherwise it is negative.
fig. 11 Curvature of the object’s contour.
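Eq. (23) with numerically estimated derivatives can be sketched as follows (function name illustrative):

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature along a sampled curve, eq. (23), with the
    derivatives estimated numerically (cf. [6])."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
```

Eq. (23) is invariant under uniform reparametrization, so using the point index as the parameter t is sufficient for an equally spaced contour.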
The observations preceding the development of the algorithm for extraction of the "healthy" parts of the contour show that the smoothed shape of a healthy kernel changes in a way similar to an ellipse: its curvature is greater for the parts which are most distant from the centre of mass, and its direction is always inward. Because of that, to decide whether a particular contour segment belongs to the "healthy" part of the object or not, both the magnitude and the direction of the curvature are taken into account.
So, the extraction of a "healthy" part of the contour is made using the following steps. For each contour segment, obtained as described in (III. 1. 2.), the curvature k for each of its points is calculated. Then the magnitude of the curvature is obtained using the following expression [6]:

T = |r′ × r″| / |r′|³.    (24)

The values of r′ and r″ are derived from

r′ = x′i + y′j,  r″ = x″i + y″j,    (25)

where r is

r = xi + yj    (26)

and x and y are the coordinates of the segment points. The values of T are normalized and centered to zero mean using expression (13). That means that the values less than the average T for a particular segment are now negative, while all the others are positive.
If in a particular contour segment there are pixels for which the value of k is negative and the value of T is positive, then we have to conclude that the curvature changes undesirably, because for these pixels the contour bends sharply (T > 0) while the curvature vector points outward (k < 0). Such behavior of the contour is not typical for healthy kernels, and in that case the contour segment must not be considered a "healthy" one.
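The per-segment decision rule can be sketched as follows (name illustrative; k is the signed curvature of eq. (23), T the magnitude of eq. (24) before normalization):

```python
import numpy as np

def is_healthy_segment(k, T):
    """Reject a contour segment if any of its points bends sharply
    (normalized magnitude T > 0, cf. eq. 13) while the curvature vector
    points outward (signed curvature k < 0)."""
    k = np.asarray(k, float)
    T = np.asarray(T, float)
    T = (T - T.mean()) / T.std()       # center to zero mean, unit deviation
    return not np.any((k < 0) & (T > 0))
```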
Applying the presented decision rule to each contour segment, a new set of contour points is constructed. For these points the curvature vector is directed to the inside of the object or, if it points outside, its magnitude is very small. That second case corresponds to slight changes of the contour in an image, which can be caused by some noise but not by any kind of damage to the kernel. Despite them, the behavior of the contour in general is still similar to that of a non-damaged kernel, so in that case the segment is considered "healthy".
fig. 12 Extraction of contour points which correspond to the healthy part of the kernel: (*) – accepted points; (o) – rejected points; (■) – "corner" points delimiting the contour segments.
The final result of the algorithm is a list of consecutive contour points which correspond to a non-damaged part of the object. That list is supposed to be used by further procedures for reconstruction of the original shape of the object, as well as for assessing the "degree of damage" for wheat kernels.
REFERENCES:

1. Bulgarian Government Standard 601-85, Seed. Rules for sample formation and methods for assessment of sowing properties. Sofia, 1985.
2. Fitzgibbon A., M. Pilu, R. B. Fisher, Direct Least Square Fitting of Ellipses, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, No. 5, May 1999.
3. Zhang D., G. Lu, Review of shape representation and description techniques. Pattern Recognition, 37 (2004), 1-19.
4. Chetverikov D., Z. Szabó, Detection of High Curvature Points in Planar Curves, http://visual.ipan.sztaki.hu/corner/.
5. Weisstein E., "Curvature." From MathWorld – A Wolfram Web Resource. http://mathworld.wolfram.com/Curvature.html
6. Arnold D., Curvature in Matlab. Math 50C – Multivariable Calculus, College of the Redwoods.