Project Documentation_1 - ITbuds Collections

Artificial Intelligence and Robotics
15 Nov 2013

1. INTRODUCTION


The explosive growth of digital image collections on the Web calls for efficient and intelligent methods of browsing, searching, and retrieving images. Content-based image retrieval (CBIR) is a technique that uses visual content to search for images in large-scale image databases according to users' interests. In this project, an artificial neural network (ANN)-based approach is proposed as a promising solution to Web image retrieval (IR). Compared with other image retrieval methods, this approach uses content-based features to improve retrieval performance.


1.1 OVERVIEW


Content-based image retrieval uses the visual contents of an image, such as color, shape, texture, and spatial layout, to represent and index the image. The feature vectors of the images in the database form a feature database. To retrieve images, users provide the retrieval system with example images or sketched figures. The system then converts these examples into its internal representation of feature vectors. The similarities/distances between the feature vectors of the query example or sketch and those of the images in the database are then calculated, and retrieval is performed with the aid of an indexing scheme. The indexing scheme provides an efficient way to search the image database. Recent retrieval systems have incorporated users' relevance feedback to modify the retrieval process in order to generate perceptually and semantically more meaningful retrieval results.







2. SYSTEM ANALYSIS


2.1. EXISTING SYSTEM

Early techniques were generally based not on visual features but on the textual annotation of images. Text-based image retrieval uses traditional database techniques to manage images. Through text descriptions, images can be organized by topical or semantic hierarchies to facilitate easy navigation and browsing based on standard Boolean queries. However, since automatically generating descriptive texts for a wide spectrum of images is not feasible, most text-based image retrieval systems rely on manual annotation of images. Obviously, annotating images manually is a cumbersome and expensive task for large image databases, and it is often subjective, context-sensitive, and incomplete. As a result, it is difficult for traditional text-based methods to support a variety of task-dependent queries.


2.2. PROPOSED SYSTEM

Content-based image retrieval uses the visual contents of an image, such as color, shape, and texture, to represent and index the image. To retrieve images, users provide the retrieval system with example images or sketched figures. The system then changes these examples into its internal representation of feature vectors. The similarities/distances between the feature vectors of the query example or sketch and those of the images in the database are then calculated, and retrieval is performed.








2.3. STUDY ON PROPOSED SYSTEM

2.3.1. Image Content Descriptors

Generally speaking, image content may include both visual and semantic content. Visual content can be very general or domain specific. General visual content includes color, texture, shape, spatial relationships, etc. Domain-specific visual content, like human faces, is application dependent and may involve domain knowledge. Semantic content is obtained either by textual annotation or by complex inference procedures based on visual content. Our proposed system is based on general visual content, which includes:

a. Color
b. Texture
c. Shape


2.3.1.1. Color

2.3.1.1.1. Color Histogram

The color histogram is a method for describing the color content of an image; it counts the number of occurrences of each color in an image [298]. The color histogram of an image is rotation, translation, and scale-invariant; therefore, it is very suitable for color-based CBIR: content-based image retrieval using solely global color features of images. However, the main drawback of using the color histogram for CBIR is that it uses only color information; texture and shape properties are not taken into account. This may lead to unexpected errors; for example, a CBIR engine using the color histogram as a feature is not able to distinguish between a red cup, a red plate, a red flower, and a red car.
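As an illustration of this definition (a sketch under our own naming, not code from the project), a global color histogram with a uniform per-channel quantization can be computed as follows; the rotation invariance noted above falls out directly:

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Global color histogram of an RGB image (H x W x 3, uint8).

    Each channel is uniformly quantized into `bins_per_channel` bins,
    giving bins_per_channel**3 color bins in total. The histogram is
    normalized so it is invariant to image size (scale-invariant).
    """
    # Map each 0..255 channel value to a bin index 0..bins_per_channel-1.
    q = (image.astype(np.uint32) * bins_per_channel) // 256
    # Combine the three channel indices into a single color-bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()

# A solid red image and its 90-degree rotation have identical histograms,
# illustrating the rotation invariance (and the red cup / red car ambiguity).
red = np.zeros((8, 8, 3), dtype=np.uint8)
red[..., 0] = 255
assert np.allclose(color_histogram(red), color_histogram(np.rot90(red)))
```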











Figure 2.3.1.1.1: Two distinct images are shown. However, when represented by their color histograms, they are judged as identical.




Several refinements have been proposed: color tuple histograms, color coherent vectors, color correlograms, local color regions, and blobs. These methods are concerned with optimizing color matching techniques on a spatial level, but disregard the basic issue of intuitive color coding. In other words, the way the engine processes color is not related to human color processing. In our opinion, the issue of color coding (or categorization) should be stressed prior to exploring these techniques. Therefore, we will focus on color-histogram-based image retrieval.






2.3.1.1.2. Color Quantization

In order to produce color histograms, color quantization has to be applied. Color quantization is the process of reducing the number of colors used to represent an image. A quantization scheme is determined by the color space and the segmentation (i.e., split-up) of the color space used. A color space is the representation of color in a three-dimensional space.


Figure 2.3.1.1.2: From top to bottom: the original image using 256³ colors, quantized in 8 bins, and quantized in 64 bins, using the RGB color space.

In applying a standard quantization scheme on a color space, each axis is divided into a number of parts. When the axes are divided into k, l, and m parts, the number of colors (n) used to represent an image will be n = k · l · m. A quantization of a color space in n colors is often referred to as an n-bins quantization scheme. Figure 2.3.1.1.2 illustrates the effect of quantizing color images. The segmentation of each axis depends on the color space used.
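The n = k · l · m scheme above can be sketched as follows; this is an illustrative implementation with hypothetical names, assuming a uniform split of each RGB axis:

```python
import numpy as np

def quantize_rgb(image, k=4, l=4, m=4):
    """Uniform quantization of an RGB image into n = k*l*m colors.

    Each axis of the RGB cube is split into k, l and m equal parts;
    every pixel is replaced by the centre of its cell (an n-bin scheme).
    """
    img = image.astype(np.float64)
    out = np.empty_like(img)
    for c, parts in enumerate((k, l, m)):
        # Bin index along this axis, clipped to the last bin.
        bin_idx = np.minimum((img[..., c] * parts / 256).astype(int), parts - 1)
        out[..., c] = (bin_idx + 0.5) * 256 / parts  # cell centre
    return out.astype(np.uint8)

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
q = quantize_rgb(img, 2, 2, 2)  # n = 2*2*2 = 8 colors
assert len(np.unique(q.reshape(-1, 3), axis=0)) <= 8
```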


2.3.1.1.3. Color Spaces

A color space specifies colors as tuples of (typically three) numbers, according to certain specifications. Color spaces lend themselves to (in principle) reproducible representations of color, particularly in digital representations such as digital printing or digital electronic displays. The purpose of a color space is to facilitate the specification of colors in some standard, generally accepted way.

One can describe color spaces using the notion of perceptual uniformity. Perceptually uniform means that two colors that are equally distant in the color space are perceptually equally distant. Perceptual uniformity is a very important notion when a color space is quantized. When a color space is perceptually uniform, there is less chance that the difference in color value due to the quantization will be noticeable on a display or on a hard copy.

In the remainder of this section, several color spaces with their quantization schemes will be described. In addition, the conversion of color images to gray-scale images, using the specific color space, will be described. The quantization of color images transformed into gray-scale images is independent of the color space: the gray-scale axis is divided into the number of bins needed for the specific quantization scheme. In this thesis, gray-scale images were quantized in 8, 16, 32, 64, and 128 bins.





2.3.1.1.4. The RGB Color Space

The RGB color space is the most used color space for computer graphics. Note that R, G, and B stand here for the intensities of the red, green, and blue guns in a CRT, not for primaries as meant in the CIE RGB space. It is an additive color space: red, green, and blue light are combined to create other colors. It is not perceptually uniform. The RGB color space can be visualized as a cube, as illustrated in Figure 2.3.1.1.4.

Each color axis (R, G, and B) is equally important. Therefore, each axis should be quantized with the same precision. So, when the RGB color space is quantized, the number of bins should always be a cube of an integer. In this thesis, 8 (2³), 64 (4³), 216 (6³), 512 (8³), and 4096 (16³) bins are used in quantizing the RGB color space. The conversion from an RGB image to a gray-value image simply takes the sum of the R, G, and B values and divides the result by three.
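The gray-value conversion and gray-scale binning described above can be sketched as follows (illustrative code with our own names, not the project's implementation):

```python
import numpy as np

def rgb_to_gray(image):
    """Gray-value image as the mean of R, G and B, as described above."""
    return (image.astype(np.uint32).sum(axis=-1) // 3).astype(np.uint8)

def quantize_gray(gray, bins):
    """Quantize a gray-scale image into `bins` bins (e.g. 8, 16, ... 128)."""
    return np.minimum((gray.astype(np.uint32) * bins) // 256, bins - 1)

img = np.full((2, 2, 3), 255, dtype=np.uint8)  # pure white
gray = rgb_to_gray(img)                        # (255+255+255)/3 = 255
assert gray.max() == 255
assert quantize_gray(gray, 8).max() == 7       # top bin of an 8-bin scheme
```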





Figure 2.3.1.1.4: The RGB color space visualized as a cube




Euclidean Distance Transformation (EDT)

Region growing algorithms can be applied to obtain distance transformations. A distance transformation creates an image in which the value of each pixel is its distance to the set of object pixels O in the original image:

D(p) = min { dist(p, q) : q ∈ O }


The Euclidean distance transform (EDT) has been extensively used in computer vision and pattern recognition, either by itself or as an important intermediate or ancillary method in applications ranging from trajectory planning to neuromorphometry. Examples of methods possibly involving the EDT are: (i) skeletonization; (ii) Voronoi tessellations; (iii) Bouligand-Minkowsky fractal dimension; (iv) watershed algorithms; and (v) robot navigation.





The process of erosion illustrated. The left figure is the original shape A. The square in the middle is the erosion marker B (the dot is its center). The middle of the marker runs over the boundary of A. The result of the erosion of A by B is given by the solid shape on the right, in which the outer (dotted) square projects the original object A.





Several methods for calculating the EDT have been described in the literature, both for sequential and parallel machines. However, most of these methods do not produce exact distances, but only approximations. A chamfer distance transformation using two raster scans on the image, which produces a coarse approximation of the exact EDT, can be used. To get a result that is exact at most points but can produce small errors at some points, four raster scans can be used.
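The definition D(p) = min{dist(p, q), q ∈ O} can be evaluated directly by brute force, which is exact but quadratic in the number of pixels; the sketch below is illustrative and uses our own names, while the raster-scan chamfer methods discussed above trade exactness for speed:

```python
import numpy as np

def brute_force_edt(obj_mask):
    """Exact Euclidean distance transform, straight from the definition.

    obj_mask: boolean array, True at object pixels O. Returns, for every
    pixel p, min over q in O of the Euclidean distance ||p - q||.
    O(n^2) in the number of pixels -- only for small images; the
    raster-scan chamfer passes approximate this much faster.
    """
    obj = np.argwhere(obj_mask)                 # coordinates of the set O
    pts = np.indices(obj_mask.shape).reshape(2, -1).T  # all pixel coords
    # Pairwise distances between every pixel and every object pixel.
    d = np.sqrt(((pts[:, None, :] - obj[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).reshape(obj_mask.shape)

mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True                               # a single object pixel
edt = brute_force_edt(mask)
assert edt[1, 1] == 0 and np.isclose(edt[0, 0], np.sqrt(2))
```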



2.3.1.2. Texture

Texture is another important property of images. Various texture representations have been investigated in pattern recognition and computer vision. Basically, texture representation methods can be classified into two categories: structural and statistical. Structural methods, including morphological operators and adjacency graphs, describe texture by identifying structural primitives and their placement rules. They tend to be most effective when applied to textures that are very regular. Statistical methods, including Fourier power spectra, co-occurrence matrices, shift-invariant principal component analysis (SPCA), Tamura features, Wold decomposition, Markov random fields, fractal models, and multi-resolution filtering techniques such as the Gabor and wavelet transforms, characterize texture by the statistical distribution of the image intensity. The structural techniques deal with the arrangement of image primitives, such as the description of texture based on regularly spaced, parallel lines. In our research, the co-occurrence matrix was used to perform texture analysis because it is an important gray-scale texture analysis method.


The co-occurrence matrix

The co-occurrence matrix is constructed from an image by estimating the pairwise statistics of pixel intensity. In order to (i) provide perceptually intuitive results and (ii) tackle the computational burden, intensity was quantized into an arbitrary number of clusters of intensity values, which we will call gray values.

The co-occurrence matrix C_d(i, j) counts the co-occurrences of pixels with gray values i and j at a given distance d. The distance d is defined in polar coordinates (d, a), with discrete length and orientation. In practice, a takes the values 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°. The co-occurrence matrix C_d(i, j) can now be defined as follows:

C_d(i, j) = Pr( I(p1) = i ∧ I(p2) = j  |  |p1 − p2| = d )

where Pr is probability, and p1 and p2 are positions in the gray-scale image I.

The algorithm yields a symmetric matrix, which has the advantage that only angles up to 180° need to be considered. A single co-occurrence matrix can be defined for each distance (d) by averaging four co-occurrence matrices of different angles (i.e., 0°, 45°, 90°, and 135°).


Let N be the number of gray values in the image; then the dimension of the co-occurrence matrix C_d(i, j) will be N × N. So, the computational complexity of the co-occurrence matrix depends quadratically on the number of gray-scales used for quantization.





Because of the high dimensionality of the matrix, the individual elements of the co-occurrence matrix are rarely used directly for texture analysis. Instead, a large number of textural features can be derived from the matrix, such as: energy, entropy, correlation, inverse difference moment, inertia, Haralick's correlation, cluster shade, and cluster prominence.
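A minimal sketch of the symmetric co-occurrence matrix and two of the derived features (energy and entropy); the function names are our own, and a real system would average matrices over several displacements as described above:

```python
import numpy as np

def cooccurrence_matrix(gray, n_levels, dx=1, dy=0):
    """Symmetric co-occurrence matrix C_d(i, j) for one displacement.

    gray: 2-D integer array of gray values in 0..n_levels-1.
    Counts pairs (i, j) at displacement (dy, dx), adds the transpose so
    the matrix is symmetric (angles up to 180 degrees suffice), and
    normalizes the counts to joint probabilities.
    """
    h, w = gray.shape
    a = gray[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = gray[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    C = np.zeros((n_levels, n_levels))
    np.add.at(C, (a.ravel(), b.ravel()), 1)  # count each pixel pair
    C = C + C.T
    return C / C.sum()

def energy(C):
    return (C ** 2).sum()

def entropy(C):
    p = C[C > 0]
    return -(p * np.log2(p)).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
C = cooccurrence_matrix(img, 4)
assert np.isclose(C.sum(), 1.0) and np.allclose(C, C.T)
```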


2.3.1.3. Shape

The shape extraction phase is divided into three stages: (i) coarse image segmentation, (ii) pixel-wise classification, and (iii) smoothing. The coarse image segmentation uses only texture information to segment the image into texture regions. In the pixel-wise classification phase, only color information is used, because the regions are too small for our texture descriptor to be informative.

There are plenty of shape descriptors available [15, 16] that can be divided into two main categories: region-based and contour-based methods. Region-based methods use the whole area of an object for shape description, while contour-based methods use only the information present in the contour of an object. There are some techniques, for example Fourier transforms and moments, that can be applied using both approaches with only small changes in the algorithms.



Using only contour information in shape analysis can be beneficial:

- Information inside the object's contour is lost when dealing only with the contour (whether this is an advantage or a disadvantage depends on the application).
- It takes less space to store different objects (data compression).
- Shape descriptors are faster to calculate because there are fewer image pixels to process (although the overhead that comes from contour tracking must be included in the total computation time).
- Variations in a contour are more easily detected.


In the case of surface defects, the main interest is in the shape of a defect's contour. A natural approach is thus to consider contour-based shape descriptors that can better capture the information that is present in contours.

Different kinds of edge histograms have been popular in various CBIR applications, since they are powerful, fast to extract, and do not require a segmentation mask (i.e., images do not need to be pre-segmented). The region-based shape descriptor, by contrast, is a standard descriptor that (like the SSD) needs a segmentation mask.
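As a simplified, hypothetical stand-in for the edge histograms mentioned above (not the project's descriptor), a gradient-orientation histogram can be computed without any segmentation mask:

```python
import numpy as np

def edge_histogram(gray, n_bins=8, threshold=10.0):
    """Simple edge-orientation histogram; needs no segmentation mask.

    Gradients are estimated with finite differences; the orientations of
    pixels whose gradient magnitude exceeds `threshold` are binned into
    n_bins directions in [0, pi), then normalized.
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # orientation in [0, pi)
    edges = mag > threshold
    if not edges.any():
        return np.zeros(n_bins)
    bins = np.minimum((ang[edges] * n_bins / np.pi).astype(int), n_bins - 1)
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()

# A vertical step edge gives horizontal gradients: all mass in bin 0.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
h = edge_histogram(img)
assert np.isclose(h.sum(), 1.0) and h[0] == h.max()
```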


2.3.2. ROLE OF NEURAL NETWORK

An artificial neural network (ANN), or commonly just neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing, based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.


An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.

An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward neural network that has been trained accordingly. During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.


The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.

- The activity of the input units represents the raw information that is fed into the network.
- The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
- The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
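The three-layer structure just described can be sketched as follows; this is an illustrative forward pass with random (untrained) weights and our own names, not the project's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    """Minimal input -> hidden -> output network.

    Hidden activity = f(inputs @ W1); output activity = f(hidden @ W2),
    matching the description above. Illustrative only: the weights are
    random here, whereas a real system would train them (e.g. with
    backpropagation) to associate outputs with input patterns.
    """
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))

    def forward(self, x):
        hidden = sigmoid(x @ self.W1)      # hidden units: weighted inputs
        return sigmoid(hidden @ self.W2)   # output units: weighted hiddens

net = ThreeLayerNet(n_in=4, n_hidden=3, n_out=2)
out = net.forward(np.array([0.2, 0.9, 0.1, 0.5]))
assert out.shape == (2,)
```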





The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e., there is no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computation times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research, such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.


2.4 PROJECT DESCRIPTION




For content-based image retrieval, user interaction with the retrieval system is crucial, since flexible formation and modification of queries can only be obtained by involving the user in the retrieval procedure. User interfaces in image retrieval systems typically consist of a query formulation part and a result presentation part.



Query Specification

Specifying what kind of images a user wishes to retrieve from the database can be done in many ways. Commonly used query formations are: category browsing, query by concept, query by sketch, and query by example. Category browsing is to browse through the database according to the category of the image. For this purpose, images in the database are classified into different categories according to their semantic or visual content.

Query by concept is to retrieve images according to the conceptual description associated with each image in the database. Query by sketch and query by example are to draw a sketch or provide an example image from which images with similar visual features will be extracted from the database. The first two types of queries are related to the semantic description of images, which will be introduced in the following chapters.

Query by sketch allows the user to draw a sketch of an image with a graphic editing tool provided either by the retrieval system or by some other software. Queries may be formed by drawing several objects with certain properties like color, texture, shape, sizes, and locations. In most cases, a coarse sketch is sufficient, as the query can be refined based on retrieval results.

Query by example allows the user to formulate a query by providing an example image. The system converts the example image into an internal representation of features. Images stored in the database with similar features are then searched. Query by example can be further classified into query by external image example, if the query image is not in the database, and query by internal image example otherwise. For query by internal image, all relationships between images can be pre-computed. The main advantage of query by example is that the user is not required to provide an explicit description of the target, which is instead computed by the system. It is suitable for applications where the target is an image of the same object or set of objects under different viewing conditions. Most of the current systems provide this form of querying.
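Query by example reduces to ranking the feature database by distance to the query's feature vector; a minimal sketch with hypothetical names:

```python
import numpy as np

def query_by_example(query_vec, db_vecs, k=3):
    """Rank database images by Euclidean distance to the query's feature
    vector and return the indices of the k best matches.

    db_vecs: (n_images, n_features) feature database, as described in
    the overview; query_vec: feature vector of the example image.
    """
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]

db = np.array([[1.0, 0.0],   # image 0: very close to the query
               [0.9, 0.1],   # image 1: close
               [0.0, 1.0]])  # image 2: far
best = query_by_example(np.array([1.0, 0.0]), db, k=2)
assert list(best) == [0, 1]
```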




Query by group example allows the user to select multiple images. The system will then find the images that best match the common characteristics of the group of examples. In this way, a target can be defined more precisely by specifying the relevant feature variations and removing irrelevant variations in the query. In addition, group properties can be refined by adding negative examples. Many recently developed systems provide queries by both positive and negative examples.


2.5 FEASIBILITY STUDY


The feasibility of the project can be ascertained in terms of technical factors, economic factors, or both. A feasibility study is documented with a report showing all the ramifications of the project. In project finance, the pre-financing work (sometimes referred to as due diligence) is to make sure there is no "dry rot" in the project and to identify project risks, ensuring they can be mitigated and managed, in addition to ascertaining "debt service" capability.



2.5.1. TECHNICAL FEASIBILITY


Generally, a new system brings new technology into the company. At times, the new system stretches the state of the art of the technology. Other projects utilize existing technology but combine it into new, untested configurations. Finally, even existing technology can pose the same challenges as new technology if there is a lack of expertise within the company. If an outside vendor is providing a capability, the client organization usually assumes that it is an expert in the area in which it provides the service. However, even an outside vendor is subject to the risk that the requested level of technology is too complicated.

The project management team needs to assess very carefully the proposed technological requirements and the available expertise. When these risks are identified, a solution is usually fairly straightforward.

The solutions to technical risks include:

- Providing additional training
- Hiring consultants



In our project, we chose the Java language, in which the neural networking concept can be utilized very efficiently with the help of the classes defined in javax.swing. Unlike AWT components, Swing components are not implemented by platform-specific code. Instead, they are written entirely in Java and are, therefore, platform-independent. This paradigm is useful for network communication, and it is useful for technical people in the organization.



2.5.2 ECONOMICAL FEASIBILITY


Economical feasibility is a process of identifying the financial benefits and costs associated with the development project. The installation cost and the cost of recruiting Java experts do not put the organization in a critical position to fund the project.

In this section, we had to minimize our costs and maximize our benefits/profits. In order to make this happen, we followed some suitable strategies, which led us quite smoothly to our target goals and allowed the organization to provide fewer funds to the project.








2.6 SYSTEM SPECIFICATION


Users require the following features to be available in the system:

1. Reliable system
2. Faster response



2.6.1 HARDWARE SPECIFICATION


To implement this project, “CONTENT BASED IMAGE RETRIEVAL”, the following configuration is recommended.

Processor       - Intel/AMD based processor, 2 GHz
Mother board    - Intel / Asus
RAM             - 128 MB DDR SDRAM
Hard disk drive - 40 GB
Monitor         - 15” color
Keyboard and mouse




2.6.2 SOFTWARE SPECIFICATION




Operating system     - Windows XP Professional
Application software - JDK 1.5
                       MySQL 5.0
                       GEL (Java IDE)
                       SQLyog (SQL editor)






2.7 TESTING AND IMPLEMENTATION


2.7.1 System Testing

Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing or non-testing leads to errors that may not appear until months later. This creates two problems:

1. The time lag between the cause and the appearance of the problem.
2. The effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors.




2.7.2 Testing methodologies



2.7.2.1. Unit Testing



A program represents the logical elements of a system. For a program to run satisfactorily, it must compile, process test data correctly, and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logical. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.



When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program down into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.


Unit testing has been performed on each module. The syntax and logical errors have been corrected then and there; all the syntax errors were rectified during compilation. The output has been tested with manual input, and all the data are stored correctly.




2.7.2.2. Functional testing

Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. The functional testing will need to prove that the application works for each client type and that the personalization functions work correctly.



2.7.2.3 Non-Functional testing


This testing is used to check that an application will work in the operational environment. Non-functional testing includes:

- Load testing
- Performance testing
- Usability testing
- Reliability testing



Load testing:

It is necessary to ascertain that the application behaves correctly under load, when a ‘server busy’ response is received.








Performance testing:

This is required to assure that the application performs adequately, having the capability to handle the expected workloads, delivering its results in the expected time, and using an acceptable level of resources; it is an aspect of operational management.



Usability testing:

It is necessary to prove that the application is usable in its intended role, considering human and environmental factors.


Reliability testing:

This is to check that the application is rugged and reliable and can handle the failure of any of the components involved in providing the application.



2.7.2.4 White Box Testing

White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases that:

1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to ensure their validity.

2.7.2.5 Black Box Testing




Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all the functional requirements of a program. Black box testing is not an alternative to white box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors in the following categories:

1. Incorrect or missing functions.
2. Interface errors.
3. Errors in data structures or external database access.
4. Behavior or performance errors.
5. Initialization and termination errors.


2.7.2.6 Integration Testing

Programs are invariably related to one another and interact in the total system. Each program is tested to see whether it conforms to the related programs in the system. Each portion of the system is tested against the entire module, with both the test data and the live data, before the entire system is tested as a whole.

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.




There are two types of integration steps:

- Top-down integration
- Bottom-up integration







2.7.2.7 Validation Testing


The validation testing is performed on all the data in the system. The data are completely validated according to the company’s request and requirements.



2.7.2.8 Output Testing


Various outputs have been produced by the system. All the outputs are correct, as the company desires. The total system is also tested for recovery and fallback after various major failures, to ensure that no data are lost during an emergency.


Quality Assurance

Greater emphasis on quality in an organization requires quality assurance to be an integral part of information system development. The development process must include checks throughout the process to ensure that the final product meets the original user requirements.

Quality assurance thus becomes an important component of the development process. It is included in industry standards (IEEE 1993) on the development process. The quality assurance process is integrated into the linear development cycle through validation and verification performed at crucial system development steps. The goal of management is to institute and monitor a quality assurance program within the development process. A quality assurance program includes:


- Validation of the system against requirements
- Checking for errors in design documents and in the system itself
- Checking for qualitative features such as portability
- Checking for usability.





Quality factors are:

- Correctness - The software satisfies the user's needs by providing correctness of the sequential pattern, with the advantage of multidimensionality, and fulfils the user objectives.
- Reliability - The extent to which a program can be expected to perform its intended function with the required precision.
- Efficiency - The software satisfies its requirements with adequate system resources.
- Integrity - Access to software or data by unauthorized persons is restricted through security technology.
- Maintainability - The effort taken to locate and fix an error in an operational program.
- Testability - The software is tested to ensure that it performs its intended functions.
- Portability - The software is portable to any hardware configuration and software system environment.

2.7.3 Generic Risks

Risk identification is a systematic attempt to specify threats to the project
plan (estimates, schedule, resource loading, and so on). By identifying known and
predictable risks, the first step is taken toward avoiding them when possible and
controlling them when necessary.

There are two types of risks:

Generic Risks

Product-Specific Risks



Generic risks are a potential threat to every software project. Product-specific
risks can be identified only by those with a clear understanding of the technology,
the people, and the environment that is specific to the project at hand. To identify
product-specific risks, the project plan and the software statement of scope are
examined, and an answer to the following question is developed:

"What special characteristics of this product may threaten our project plan?"

One method for identifying risks is creating a risk item checklist. The checklist can
be used for risk identification and focuses on some subset of known and predictable
risks in the following generic subcategories:

2.7.3.1 Product Size

Risks associated with the overall size of the software to be built or modified.

2.7.3.2 Business Impact

Risks associated with constraints imposed by management.

2.7.3.3 Customer Characteristics

Risks associated with the sophistication of the customer and the developer's
ability to communicate with the customer in a timely manner.

2.7.3.4 Process Definition

Risks associated with the degree to which the software process has been defined
and is followed by the development organization.

2.7.3.5 Development Environment

Risks associated with the availability and quality of the tools to be used to
build the project.
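The risk item checklist described above can be realized as a simple mapping from each generic subcategory to its check questions. The sketch below is illustrative only; the class name and the example questions are assumptions, not taken from this project.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative risk item checklist keyed by the generic subcategories.
public class RiskChecklistDemo {
    public static void main(String[] args) {
        Map<String, List<String>> checklist = new LinkedHashMap<>();
        checklist.put("Product Size",
            List.of("Is the size estimate confident to within 20%?"));
        checklist.put("Business Impact",
            List.of("Are the deadlines imposed by management realistic?"));
        checklist.put("Customer Characteristics",
            List.of("Can the customer answer questions within a day?"));
        checklist.put("Process Definition",
            List.of("Is a defined software process actually followed?"));
        checklist.put("Development Environment",
            List.of("Are the build tools available and stable?"));

        // Walk the checklist, printing one question per line.
        for (Map.Entry<String, List<String>> e : checklist.entrySet())
            for (String q : e.getValue())
                System.out.println(e.getKey() + ": " + q);
    }
}
```

A `LinkedHashMap` keeps the subcategories in the order they are reviewed, matching the order of sections 2.7.3.1 through 2.7.3.5.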




3. CONCLUSION

3.1 Conclusion

In this project, we introduced some fundamental techniques for content-based
image retrieval, including visual content description, similarity/distance measures,
indexing schemes, user interaction, and system performance evaluation. Our emphasis is
on visual feature description techniques. The general visual features most widely used
in content-based image retrieval are color, texture, shape, and spatial information.

Most CBIR research focuses on the utilization of advanced algorithms. An
important constraint for CBIR is the complexity of the algorithm chosen. Each of the
three features used for CBIR (i.e., color, texture, and shape) was developed from a
human-centered perspective, where, in parallel, improvements to algorithms were an
issue.


3.2 Future Enhancement

The CBIR techniques, as presented in this thesis, should be plugged into a
general framework from which an online CBIR system can be developed.

Next to images, video material can be retrieved based on its content. For this
purpose, so-called key frames are frequently selected, which can be analyzed as
images; henceforth, CBIR techniques can be utilized. Note that where with CBIR only
one image is present, in content-based video retrieval a set of images describes a
part of the video.








APPENDIX

Coding

CBIR.java

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.util.*;
import java.io.*;

public class CBIR extends JFrame implements ActionListener
{
    JLabel l1;
    JButton next, previous;
    String path;
    JMenu fileMenu, histogram, texture, shape;
    JMenuBar bar;
    JMenuItem averageRGB, coOccerence, global, open, addNew,
              edgeFrequency, localColor, geometricMoment;
    GridBagLayout grid;
    GridBagConstraints gbc;
    Container content;
    ImagePanel ipanel;
    File file;
    Viewer2 view;
    JPanel bottom;
    FeatureTester tester;
    ArrayList outputList, list;


    public CBIR()
    {
        super("CBIR");
        content = getContentPane();
        //Rectangle d=content.getBounds();
        //content.setBounds((int)d.getX(),(int)d.getY(),2000,150);
        content.setLayout(new BorderLayout());
        bar = new JMenuBar();
        setJMenuBar(bar);
        fileMenu = new JMenu("File");
        histogram = new JMenu("Histogram");
        texture = new JMenu("Texture");
        shape = new JMenu("Shape");
        bar.add(fileMenu);
        bar.add(histogram);
        bar.add(texture);
        bar.add(shape);
        averageRGB = new JMenuItem("AverageRGB");
        coOccerence = new JMenuItem("Co_Occurence");
        global = new JMenuItem("Global color Histogram");
        localColor = new JMenuItem("Local color Histogram");
        edgeFrequency = new JMenuItem("Edge Frequency");
        geometricMoment = new JMenuItem("Geometric Moment");
        open = new JMenuItem("Open");
        addNew = new JMenuItem("Add New Image");
        averageRGB.addActionListener(this);
        coOccerence.addActionListener(this);
        global.addActionListener(this);
        open.addActionListener(this);
        addNew.addActionListener(this);
        edgeFrequency.addActionListener(this);
        localColor.addActionListener(this);
        fileMenu.add(addNew);
        fileMenu.add(open);
        histogram.add(localColor);
        histogram.add(global);
        texture.add(edgeFrequency);
        texture.add(coOccerence);
        shape.add(geometricMoment);
        JPanel jp = new JPanel();
        jp.setLayout(new FlowLayout());
        ImageIcon icon = new ImageIcon("CBIR.jpg");
        //System.out.println(icon.getIconWidth());
        l1 = new JLabel(icon);
        jp.add(l1);
        content.add("North", jp);
        view = new Viewer2();
        content.add("Center", view);
        bottom = new JPanel();
        bottom.add(next = new JButton("Next"));
        bottom.add(previous = new JButton("Previous"));
        next.addActionListener(this);
        previous.addActionListener(this);
        geometricMoment.addActionListener(this);
        previous.setEnabled(false);
        next.setEnabled(false);
        content.add("South", bottom);
        setSize(650, 500);
        setVisible(true);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }


    public void actionPerformed(ActionEvent ae)
    {
        Object object = ae.getSource();
        if (object == open)
        {
            JFileChooser jfc = new JFileChooser(".");
            int approve = jfc.showOpenDialog(this);
            if (approve == JFileChooser.APPROVE_OPTION)
            {
                file = jfc.getSelectedFile();
                path = file.getPath();
                l1.setIcon(new ImageIcon(getImage(path)));
            }
        }
        if (object == addNew)
        {
            new AddNew(this, l1);
        }
        if (object == averageRGB)
        {
            if (path != null && !path.equals(""))
            {
                double d = 15, d1 = 0;
                String[] argv = {"AverageRGB", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == coOccerence)
        {
            if (path != null && !path.equals(""))
            {
                double d = 5, d1 = 0;
                String[] argv = {"Cooccurence", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == global)
        {
            if (path != null && !path.equals(""))
            {
                double d = 0.035, d1 = 0;
                String[] argv = {"GlobalColorHistogram", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == edgeFrequency)
        {
            if (path != null && !path.equals(""))
            {
                double d = 30, d1 = 0;
                String[] argv = {"Edgefrequency", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == localColor)
        {
            if (path != null && !path.equals(""))
            {
                double d = 0.075, d1 = 0;
                String[] argv = {"LocalColorHistogram", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == geometricMoment)
        {
            if (path != null && !path.equals(""))
            {
                double d = 0.007, d1 = 0;
                String[] argv = {"InvariantMoment", path};
                tester = new FeatureTester(argv, d, view);
                outputList = tester.getOutputList();
                list = new ArrayList();
                for (int i = 0; i < outputList.size(); i++)
                {
                    if (i == 5)
                        break;
                    list.add((String) outputList.get(i));
                }
                view.list = list;
                System.out.println("-------------------------" + list);
                view.repaint();
                next.setEnabled(true);
            }
            else
                JOptionPane.showMessageDialog(this, "Please select input image file");
        }
        if (object == next && !(list.size() == 0))
        {
            int index = outputList.indexOf(list.get(list.size() - 1));
            if (index != outputList.size() - 1)
            {
                list = null;
                list = new ArrayList();
                for (int i = index + 1; i < outputList.size(); i++)
                {
                    if (list.size() == 5)
                        break;
                    list.add((String) outputList.get(i));
                    System.out.println(i);
                }
                view.list = list;
                System.out.println("+++++++++++++++++" + list + "+++++++++++++++++");
                view.repaint();
                repaint();
                previous.setEnabled(true);
            }
            else
                next.setEnabled(false);
        }
        if (object == previous && !(list.size() == 0))
        {
            int index = outputList.indexOf(list.get(list.size() - 1));
            if (index > 0)
            {
                list = null;
                list = new ArrayList();
                for (int i = index - 1; i >= 0; i--)
                {
                    if (list.size() == 5)
                        break;
                    list.add((String) outputList.get(i));
                    System.out.println(i);
                }
                view.list = list;
                System.out.println("*****************" + list + "*****************");
                view.repaint();
                repaint();
            }
            else
                previous.setEnabled(false);
        }
    }


    public static void main(String args[])
    {
        new CBIR();
    }

    public Image getImage(String path)
    {
        String extension = "";
        Image theImage = null;
        FileInputStream in = null;
        // The extension is accumulated in reverse order (walking backwards
        // from the end of the path), so the comparisons below use reversed
        // strings: "PMB" = bmp, "gpj" = jpg, "fig" = gif.
        for (int i = (path.length() - 1); i >= 0; i--)
        {
            String ch = String.valueOf(path.charAt(i));
            if (ch.equals("."))
            {
                break;
            }
            extension += ch;
        }
        if (extension.equalsIgnoreCase("PMB"))
        {
            try
            {
                in = new FileInputStream(path);
            }
            catch (Exception e)
            {
            }
            BMPLoader bmp = new BMPLoader();
            theImage = bmp.read(in);
        }
        else if (extension.equalsIgnoreCase("gpj"))
        {
            theImage = new ImageIcon(path).getImage();
        }
        else if (extension.equalsIgnoreCase("fig"))
            theImage = new ImageIcon(path).getImage();
        else
            theImage = new ImageIcon("Null.jpg").getImage();
        return theImage;
    }
}
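A detail worth noting in `getImage` above: the extension is collected by walking backwards from the end of the path until the dot, so it comes out reversed, which is why the comparisons check "PMB", "gpj", and "fig" rather than "bmp", "jpg", and "gif". A minimal standalone sketch of that trick (the class and method names here are ours, not from the project):

```java
// Standalone demo of the reversed-extension extraction used in CBIR.getImage.
public class ExtensionDemo {
    // Collect characters from the end of the path until the '.' is reached;
    // the result is therefore the file extension reversed.
    static String reversedExtension(String path) {
        String extension = "";
        for (int i = path.length() - 1; i >= 0; i--) {
            char ch = path.charAt(i);
            if (ch == '.')
                break;
            extension += ch;
        }
        return extension;
    }

    public static void main(String[] args) {
        System.out.println(reversedExtension("photo.bmp"));  // pmb
        System.out.println(reversedExtension("scan.jpg"));   // gpj
    }
}
```

Comparing with `equalsIgnoreCase` against the reversed literals avoids reversing the string back, at the cost of the slightly surprising constants.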



SNAPSHOTS

[Screenshots of the application appear here in the original document.]





RETRIEVED IMAGES

[Screenshots of retrieved-image results appear here in the original document.]










BIBLIOGRAPHY




S. K. Chang, Q. Y. Shi, and C. Y. Yan, "Iconic indexing by 2-D strings,"
IEEE Trans. on Pattern Anal. Machine Intell., Vol. 9, No. 3, pp. 413-428, May 1987.

Tom Mitchell, Machine Learning, McGraw-Hill, 1997, ISBN: 0070428077.

J. Huang, Color-Spatial Image Indexing and Applications, PhD thesis,
Cornell Univ., 1998.


WEB REFERENCES

http://www.cse.unl.edu/~sscott

http://meru.cecs.missouri.edu/mm_seminar/cont_ret.html

http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html