A new approach for data clustering using hybrid artificial bee colony algorithm

Xiaohui Yan a,b,*, Yunlong Zhu a, Wenping Zou a,b, Liang Wang a,b

a Key Laboratory of Industrial Informatics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang, China
b Graduate School of the Chinese Academy of Sciences, 100039 Beijing, China

ARTICLE INFO

Article history:
Received 1 October 2011
Received in revised form 9 March 2012
Accepted 25 April 2012
Communicated by W.S. Hong
Available online 13 June 2012

Keywords:
Data clustering
Artificial bee colony
Hybrid artificial bee colony
Crossover operator

ABSTRACT

Data clustering is a popular data analysis technique needed in many fields. In recent years, several swarm intelligence-based approaches for clustering have been proposed and achieved encouraging results. This paper presents a Hybrid Artificial Bee Colony (HABC) algorithm for data clustering. The incentive mechanism of HABC is to enhance the information exchange (social learning) between bees by introducing the crossover operator of the Genetic Algorithm (GA) into ABC. In a test on ten benchmark functions, the proposed HABC algorithm shows significant improvement over the canonical ABC and several other comparison algorithms. The HABC algorithm is then employed for data clustering on six real datasets selected from the UCI machine learning repository. The results show that the HABC algorithm achieves better results than the other algorithms and is a competitive approach for data clustering.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Swarm Intelligence (SI) is an innovative artificial intelligence technique inspired by the intelligent behaviors of insect or animal groups in nature, such as ant colonies, bird flocks, bee colonies and bacterial swarms. In recent years, many SI algorithms have been proposed, such as Ant Colony Optimization (ACO) [1], Particle Swarm Optimization (PSO) [2], Immune Algorithm (IA) [3] and Bacterial Foraging Optimization (BFO) [4]. The Artificial Bee Colony (ABC) algorithm is a novel swarm intelligence algorithm inspired by the foraging behaviors of the honeybee colony. It was first introduced by Karaboga in 2005 [5]. Since ABC is simple in concept, easy to implement, and has few control parameters, it has attracted the attention of researchers and been widely used in solving many numerical optimization [6,7] and engineering optimization problems [8–10].

However, the convergence speed of the ABC algorithm decreases as the dimension of the problem increases [6]. This is easy to explain: in the ABC algorithm, a bee exchanges information on only one dimension with a random neighbor in each food source searching process. When the dimension increases, this information exchange is limited and its effect is weakened. In this paper, a Hybrid Artificial Bee Colony (HABC) algorithm is proposed to improve the optimization ability of the canonical ABC. In HABC, the crossover operator of the Genetic Algorithm (GA) is introduced to enhance the information exchange between bees. A large set of benchmark functions is used to compare the performance of the HABC algorithm with that of several other algorithms. The results show that the HABC algorithm clearly outperforms the other algorithms in terms of accuracy, robustness and convergence speed.

Clustering is a widely encountered problem that often needs to be solved as part of complicated tasks in data mining [11], pattern recognition [12], image analysis [13] and other fields of science and engineering. The aim of data clustering is to partition a set of data into several clusters according to some predefined attributes, such that data in the same cluster are similar to each other while data in different clusters are dissimilar. Existing clustering algorithms can be broadly classified into two categories: hierarchical clustering and partitional clustering [14]. The goal of hierarchical clustering is to partition the objects into successively fewer structures, while partitional clustering divides the objects into a predefined number of clusters according to some optimization criterion. In this paper, we focus on partitional clustering; hierarchical clustering will not be discussed in detail. The most popular algorithms for partitional clustering are the center-based clustering algorithms, among which the K-means algorithm is a typical one. Due to its simplicity and efficiency, the K-means algorithm has been widely used over the past years. However, it has its shortcomings: the algorithm is sensitive to its initial


http://dx.doi.org/10.1016/j.neucom.2012.04.025

* Corresponding author at: Key Laboratory of Industrial Informatics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang, China. Tel.: +86 024 23970695.
E-mail addresses: yanxiaohui@sia.cn, yxhsunshine@gmail.com (X. Yan), ylzhu@sia.cn (Y. Zhu), zouwp@sia.cn (W. Zou), wangliang1@sia.cn (L. Wang).

Neurocomputing 97 (2012) 241–250

cluster centers and is easily trapped in local minima. In order to overcome these problems, many heuristic clustering algorithms have been introduced. For example, Krishna and Murty proposed a novel approach called the genetic K-means algorithm for clustering analysis, in which a specific distance-based mutation derived from the mutation operator of GA was used [15]. Selim and Al-Sultan proposed a simulated annealing approach for solving the clustering problem [16].

Over the last decade, as swarm intelligence optimization technology has attracted many researchers' attention, different swarm intelligence-based clustering approaches have been proposed. Shelokar introduced an evolutionary algorithm based on ACO for the clustering problem [17], Merwe et al. used the PSO algorithm to solve the clustering problem [18,19], and Karaboga and Ozturk, and Zhang et al., used the ABC algorithm to solve the problem [20,21]. Zou et al. proposed a Cooperative Artificial Bee Colony (CABC) algorithm to solve the clustering problem [22], in which a cooperative search technique was introduced. In this paper, owing to the excellent performance of the HABC algorithm on benchmark functions, it is employed for data clustering. The algorithm is tested on six well-known real datasets provided by the UCI database [23]. Several other mentioned algorithms are tested as a comparison. The tests show that the proposed HABC algorithm achieves better results than the other algorithms on most datasets.

The rest of the paper is organized as follows. In Section 2, we introduce the canonical ABC algorithm. Section 3 discusses how the crossover operator is used in ABC; details of the HABC algorithm are presented in that section. In Section 4, the HABC algorithm is tested on a set of benchmark functions and compared with several other algorithms, and the results are presented and discussed. Section 5 introduces the data clustering problem and how the K-means and HABC algorithms are used for clustering. Tests of the algorithms, including HABC, on real-dataset clustering are given and discussed in Section 6. Finally, conclusions are drawn in Section 7.

2. Artificial bee colony algorithm

The Artificial Bee Colony algorithm is a recently proposed swarm intelligence algorithm inspired by the foraging behaviors of bee colonies. It was first proposed by Karaboga [5] and then further developed by Karaboga, Basturk, Akay et al. [6,7,24,25]. In the ABC algorithm, the search space is treated as the foraging environment, and each point in the search space corresponds to a food source (solution) that the artificial bees can exploit. The nectar amount of a food source represents the fitness of the solution.

Three kinds of bees exist in a bee colony: employed bees, onlooker bees and scout bees. Employed bees exploit the specific food sources they have explored before and pass the quality information of the food sources to the onlooker bees. Onlooker bees receive the information about the food sources and choose a food source to exploit depending on the nectar quality: the more nectar a food source contains, the larger the probability that onlooker bees will choose it. An employed bee whose food source has been abandoned becomes a scout bee. This is controlled by a parameter called "limit", which is the only parameter of the ABC algorithm apart from traditional parameters such as population size. Scout bees search the whole environment randomly. In the ABC algorithm, half of the colony comprises employed bees and the other half onlooker bees. Each food source is exploited by only one employed bee; that is, the number of employed bees or onlooker bees is equal to the number of food sources [6].

The pseudocode of the ABC algorithm is listed in Fig. 1 and the details are described below.

In the initialization phase, the algorithm generates a group of food sources corresponding to solutions in the search space. The food sources are produced randomly within the range of the boundaries of the variables:

x_{i,j} = x_j^{\min} + \mathrm{rand}(0,1)\,(x_j^{\max} - x_j^{\min}) \qquad (1)

where i = 1, 2, ..., SN and j = 1, 2, ..., D. SN is the number of food sources and equals half of the colony size. D is the dimension of the problem, representing the number of parameters to be optimized. x_j^{\min} and x_j^{\max} are the lower and upper bounds of the jth parameter. The fitness of the food sources is then evaluated. Additionally, the counters which store the number of trials of each bee are set to 0 in this phase.
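The initialization step of Eq. (1) can be sketched as follows. This is our own minimal illustration, not the authors' code; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def init_food_sources(sn, dim, lb, ub, seed=None):
    """Eq. (1): x_ij = x_j_min + rand(0,1) * (x_j_max - x_j_min)."""
    rng = np.random.default_rng(seed)
    # One row per food source, one column per parameter, uniform in [lb, ub].
    return lb + rng.random((sn, dim)) * (ub - lb)

sources = init_food_sources(sn=10, dim=30, lb=-5.12, ub=5.12, seed=0)
trials = np.zeros(10, dtype=int)  # trial counters, all reset to 0
```

With scalar bounds the same range applies to every dimension; per-dimension bounds would work equally well, since NumPy broadcasts `lb` and `ub` against the sampled matrix.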

In the employed bees' phase, each employed bee is sent to the food source in its memory and finds a neighboring food source. The neighboring food source is produced according to Eq. (2):

v_{i,j} = x_{i,j} + \phi\,(x_{i,j} - x_{k,j}) \qquad (2)

where k is a randomly selected food source different from i, and j is a randomly selected dimension. \phi is a random number uniformly distributed in the range [-1, 1]. The new food source v is determined by changing one dimension of x. If the value produced by this operation exceeds the predetermined boundaries of that dimension, it is set to the boundary.
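The neighbor-search step of Eq. (2) can be sketched in Python as below. This is a hypothetical sketch of the described procedure, not the authors' implementation; the helper name and NumPy usage are our own assumptions.

```python
import numpy as np

def neighbor_search(sources, i, lb, ub, seed=None):
    """Eq. (2): perturb one random dimension j of x_i toward/away from a
    randomly chosen other source x_k, with phi ~ U[-1, 1], then clamp."""
    rng = np.random.default_rng(seed)
    sn, dim = sources.shape
    k = rng.choice([s for s in range(sn) if s != i])  # neighbor k != i
    j = rng.integers(dim)                             # random dimension
    phi = rng.uniform(-1.0, 1.0)
    v = sources[i].copy()
    v[j] = sources[i, j] + phi * (sources[i, j] - sources[k, j])
    return np.clip(v, lb, ub)  # out-of-range values are set to the boundary

x = np.array([[0.5, 1.0], [2.0, -1.0], [0.0, 3.0]])
v = neighbor_search(x, i=0, lb=-5.0, ub=5.0, seed=1)
```

Note that only one coordinate of the candidate differs from the original source, which is exactly the limitation on information exchange that Section 3 sets out to address.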

Then the new food source is evaluated. A greedy selection is applied between the original food source and the new one, and the better one is kept in memory. The trials counter of this food source will

1: Initialization.
   Initialize the food sources and evaluate the nectar amount (fitness) of the food sources;
   Send the employed bees to the current food sources;
   Iteration = 0;
2: Do while (the termination conditions are not met)
3:   /* Employed Bees' Phase */
     For (each employed bee)
       Find a new food source in its neighborhood following Eq. (2);
       Evaluate the fitness of the new food source;
       Apply greedy selection on the original food source and the new one;
     End for
4:   Calculate the probability P_i for each food source;
5:   /* Onlooker Bees' Phase */
     t = 1, i = 1
     While (current number of onlooker bees t <= Sn/2)
       If (rand() < P_i)
         Send an onlooker bee to the food source of the ith employed bee;
         Find a new food source in its neighborhood following Eq. (2);
         Evaluate the fitness of the new food source;
         Apply greedy selection on the original food source and the new one;
         t = t + 1;
       Else
         i = i + 1;
         i = mod((i - 1), Sn/2) + 1;
       End if
     End while
6:   /* Scout Bees' Phase */
     If (any employed bee becomes a scout bee)
       Send the scout bee to a randomly produced food source;
     End if
7:   Memorize the best solution achieved so far;
     Iteration = Iteration + 1;
   End while
8: Output the best solution achieved

Fig. 1. Pseudocode of the original ABC algorithm.


be reset to zero if the food source is improved; otherwise, its value will be incremented by one.

In the onlooker bees' phase, the onlookers receive the information about the food sources shared by the employed bees. Then each of them chooses a food source to exploit with a probability related to the nectar amount of the food source (the fitness value of the solution). That is to say, more than one onlooker bee may choose the same food source if that source has a higher fitness. The probability is calculated according to Eq. (3):

P_i = \frac{\mathit{fitness}_i}{\sum_{j=1}^{SN} \mathit{fitness}_j} \qquad (3)

After the food sources have been chosen, each onlooker bee finds a new food source in its neighborhood following Eq. (2), just as the employed bee does. A greedy selection is applied between the new and original food sources, too.
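The fitness-proportional choice of Eq. (3) can be sketched as below; this is our own illustration under the assumption that all fitness values are positive, as is usual for ABC's scaled fitness.

```python
import numpy as np

def onlooker_probabilities(fitness):
    """Eq. (3): P_i = fitness_i / sum_j fitness_j (fitness assumed > 0)."""
    fitness = np.asarray(fitness, dtype=float)
    return fitness / fitness.sum()

p = onlooker_probabilities([4.0, 3.0, 2.0, 1.0])
# The fittest source gets the largest share of onlooker visits,
# which is how more than one onlooker can end up at the same source.
```

An onlooker's choice is then just a Bernoulli test of `rand() < p[i]` while cycling over the sources, as in step 5 of Fig. 1.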

In the scout bees' phase, if the value of the trials counter of a food source is greater than a parameter known as "limit", the food source is abandoned and its bee becomes a scout bee. A new food source is produced randomly in the search space using Eq. (1), as in the initialization phase, and the trials counter of the bee is reset to zero.

The employed, onlooker and scout bees' phases repeat until the termination condition is met. The best food source, which represents the best solution, is then output.

3. Hybrid artificial bee colony algorithm

Social learning is the most important factor in the formation of the collective knowledge of swarm intelligence. In the ABC algorithm, it is realized mainly through the employed and onlooker bees' neighbor searching procedure. However, as mentioned above, in the canonical ABC algorithm the new food source is produced by changing the value of one randomly chosen dimension, learning from a randomly chosen bee. This means that information about only one bee and one of its dimensions is exchanged in each neighborhood searching process. Two weaknesses may exist here. First, as the dimension increases, the information exchange is still limited to one dimension, so the convergence speed of the algorithm may become slower. Second, the neighbor bee and the dimension are both chosen randomly. As a result, food sources with higher fitness, which could guide the population towards better areas, are not utilized.

The Genetic Algorithm (GA) is a classic evolutionary algorithm first proposed in 1975 by Holland [26]. It was inspired by the phenomenon of evolution in the natural world. For a given problem, GA encodes a potential solution as an individual chromosome. The algorithm begins with an initial chromosome population which represents a set of initial solutions in the decision space of the problem. Then operators which simulate the reproduction and evolution procedures, such as selection, crossover and mutation, are applied to the chromosome population. In the selection procedure, individuals are selected as parents according to their fitness. Chromosomes with higher fitness are regarded as carrying better gene information and have a larger chance of being selected. The crossover procedure plays a core role in the Genetic Algorithm. With a crossover probability P_c, it crosses two parent chromosomes to produce new offspring. It is expected that good gene information will be inherited and that the newly produced offspring are good ones. There are a variety of crossover methods, such as single-point crossover, multiple-point crossover, arithmetic crossover and uniform crossover, among which single-point crossover and arithmetic crossover are most commonly used. The mutation procedure simulates gene mutation in nature; it introduces perturbation and avoids premature convergence. Under the mechanism of "selecting the superior and eliminating the inferior", the chromosome population improves and better solutions are found.

In recent years, several swarm intelligence algorithms have been proposed, such as ACO, PSO and ABC. They have a more profound intelligence background and perform better on most problems compared with GA. Nevertheless, the evolutionary ideas of GA and its operators are often used to improve these SI algorithms. For example, Juang proposed a hybrid of GA and PSO called HGAPSO for recurrent network design [27]. In HGAPSO, PSO and GA evolve the same population, and the new population is produced half by the enhanced PSO and half by crossover and mutation on the enhanced elites. Shi et al. proposed two hybrid evolutionary algorithms based on PSO and GA [28]; the main ideas of the two algorithms are to integrate the PSO and GA methods in parallel and series forms, respectively. The newly proposed algorithms offer better performance than the standard PSO. Zhao et al. proposed a hybrid algorithm of GA and ABC in which the two algorithms execute simultaneously and exchange information between the bee colony and the chromosome population with a given probability [29]. The hybrid algorithm outperformed ABC in their results. However, the improvement is not distinct, and only four benchmark functions were used in the experiment.

In this paper, the crossover operator, which is the core procedure of GA, is introduced into the original ABC to improve its optimizing ability. The new algorithm is named the Hybrid Artificial Bee Colony (HABC) algorithm. It is the same as the original ABC algorithm except that a crossover phase is added between the onlooker bees' and scout bees' phases. As only one extra operator is introduced, HABC is much easier to implement than the hybrid algorithms mentioned above. The pseudocode of HABC is listed in Fig. 2.

i. First, select a parent population P_p from the current food sources according to their fitness. The number of parents is set equal to the number of food sources. Food sources with higher fitness have a larger probability of being selected, which helps ensure that the newly produced offspring are good ones.

1: Initialization.
   Initialize the food sources and evaluate the nectar amount (fitness) of the food sources;
   Send the employed bees to the current food sources;
   Iteration = 0;
2: Do while (the termination conditions are not met)
3:   Employed Bees' Phase
4:   Calculate the probability P_i for each food source;
5:   Onlooker Bees' Phase
6:   /* Crossover Phase */
     Produce parent population P_p applying tournament selection;
     For (each food source s_i in original population P_o)
       Select two parents randomly from P_p;
       Produce new food source s_new by crossing the selected parents;
       Apply greedy selection on the original food source s_i and the newly produced food source s_new;
     End for
7:   Scout Bees' Phase
8:   Memorize the best solution achieved so far;
     Iteration = Iteration + 1;
   End while
9: Output the best solution achieved

Fig. 2. Pseudocode of the HABC algorithm.


ii. For each food source s_i in the original population P_o, select two parent food sources randomly from P_p and cross them. The newly produced offspring s_new is then compared with s_i; a greedy selection is applied and the better one remains in the population.

In the parent selection procedure, the parent population is produced using binary tournament selection. Each time, two food sources are selected randomly from the current population and the one with higher fitness is added to the parent population. The selection continues until the required number of parents is reached. The arithmetic crossover method is used in the crossover operator. The offspring is produced following Eq. (4), where child represents the newly produced offspring, parent_1 and parent_2 are the two selected parents, and rand(0,1) is a randomly produced number between 0 and 1. Only one child is produced in this way. It is worth noting that the two random coefficients are generated independently; we have tested and confirmed that using two independent coefficients works better than constraining their sum to one.

\mathit{child} = \mathrm{rand}(0,1)\cdot \mathit{parent}_1 + \mathrm{rand}(0,1)\cdot \mathit{parent}_2 \qquad (4)

After the offspring is generated, a greedy selection is applied between the original food source and the newly produced offspring. If the fitness of the offspring is higher than that of the original, it replaces the original and the trials counter for this food source is reset to zero. Otherwise, the memory does not change and the counter's value is incremented by one, just as in the employed bees' or onlooker bees' phase.
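Steps i and ii above can be sketched together as one crossover phase. This is a minimal sketch of the described procedure, assuming a maximization-style fitness function; the helper names and NumPy usage are our own, not the authors' code.

```python
import numpy as np

def crossover_phase(sources, fitness_fn, seed=None):
    """HABC crossover phase: binary tournament builds the parent pool P_p,
    arithmetic crossover (Eq. (4)) produces one child per source, and a
    greedy selection keeps the child only if it is fitter."""
    rng = np.random.default_rng(seed)
    fit = np.array([fitness_fn(s) for s in sources])

    # Binary tournament: |P_p| equals the number of food sources.
    parents = []
    for _ in range(len(sources)):
        a, b = rng.integers(len(sources), size=2)
        parents.append(sources[a] if fit[a] >= fit[b] else sources[b])
    parents = np.array(parents)

    for i in range(len(sources)):
        i1, i2 = rng.integers(len(parents), size=2)
        # Eq. (4): two independent rand(0,1) coefficients, one child.
        child = rng.random() * parents[i1] + rng.random() * parents[i2]
        if fitness_fn(child) > fit[i]:       # greedy selection
            sources[i], fit[i] = child, fitness_fn(child)
    return sources

# Toy usage: maximize fitness = -Sphere, so higher fitness is better.
pop = np.random.default_rng(0).uniform(-5.12, 5.12, (10, 5))
before = np.array([-np.sum(s**2) for s in pop])
pop = crossover_phase(pop, lambda s: -np.sum(s**2), seed=0)
after = np.array([-np.sum(s**2) for s in pop])
```

Because replacement is greedy, the fitness of every slot is non-decreasing across the phase, mirroring the trial-counter bookkeeping described above.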

By applying the crossover operator, the information exchange of the HABC algorithm is enhanced, and the food sources with higher fitness are fully utilized. It is therefore expected that HABC will perform well. Tests on a set of benchmark functions against several other algorithms are presented in the next section.

4. Experiment

The proposed HABC algorithm is tested on a set of benchmark functions. Five other algorithms are used for comparison: the canonical ABC, PSO, GA, CABC by Zou et al. [22] and Cooperative Particle Swarm Optimization (CPSO) by van den Bergh and Engelbrecht [30], i.e., three classic original algorithms and two variations. CABC is a well-performing algorithm proposed recently, in which a cooperative search strategy is introduced. A virtual super-best solution g_b is recorded, whose component in each dimension is the best among all individuals. In the employed bees' and onlooker bees' phases, the value in each dimension of g_b is replaced by the corresponding dimension of the newly produced individual, to see whether g_b can be improved. By this method, the best value in each dimension is expected to be found. The CPSO algorithm works with a similar mechanism.

4.1. Benchmark functions

Ten well-known benchmark functions are used in the test. They comprise three unimodal functions, four multimodal functions and three rotated functions.

The first function is the Sphere function, whose global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-5.12, 5.12]. It is a unimodal function with non-separable variables.

f_1(x) = \sum_{i=1}^{D} x_i^2

The second function is the Rosenbrock function, whose global minimum value is 0 at (1, 1, ..., 1). The initialization range for the function is [-15, 15]. It is a unimodal function with non-separable variables. Its global optimum lies inside a long, narrow, parabolic-shaped flat valley, so it is difficult to converge to the global optimum.

f_2(x) = \sum_{i=1}^{D-1} \left[ 100\,(x_i^2 - x_{i+1})^2 + (1 - x_i)^2 \right]

The third function is the Quadric function, whose global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-10, 10]. It is a unimodal function with non-separable variables.

f_3(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2

The fourth function is the Rastrigin function, whose global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-15, 15]. It is a multimodal function with separable variables.

f_4(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]

The fifth function is the Schwefel function, whose global minimum value is 0 at (420.9867, 420.9867, ..., 420.9867). The initialization range for the function is [-500, 500]. It is a multimodal function with separable variables.

f_5(x) = D \cdot 418.9829 - \sum_{i=1}^{D} x_i \sin\!\left( \sqrt{\lvert x_i \rvert} \right)

The sixth function is the Ackley function, whose global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-32.768, 32.768]. It is a multimodal function with non-separable variables.

f_6(x) = 20 + e - 20\,\exp\!\left( -0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2} \right) - \exp\!\left( \frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i) \right)

The seventh function is the Griewank function, whose global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-600, 600]. It is a multimodal function with non-separable variables.

f_7(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) + 1
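A few of these benchmarks can be sketched directly from their definitions; this is our own illustration in NumPy, not the authors' test code.

```python
import numpy as np

def sphere(x):        # f1, unimodal
    return np.sum(x ** 2)

def rastrigin(x):     # f4, multimodal, separable
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):      # f7, multimodal, non-separable
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

zero = np.zeros(30)   # global optimum of all three lies at the origin
```

Evaluating each function at the origin recovers the stated global minimum value of 0, which is a convenient sanity check before running any optimizer.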

Functions f_1–f_7 are basic functions that are widely used in many works [25,31]. To further test the proposed algorithm, we use another three rotated functions employed in Liang's work [32]. In a rotated function, a rotated variable y, produced by left-multiplying the original variable x by an orthogonal matrix M, is used to calculate the fitness instead of x. The orthogonal rotation matrix does not affect the shape of the function. However, when one dimension of vector x is changed, all dimensions of vector y are affected, so the rotated functions are much more difficult to solve. The orthogonal rotation matrix is generated using Salomon's method [33] in this paper. The three rotated functions are:

Rotated Rastrigin function, which uses Mx instead of x in the Rastrigin function. The global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-15, 15].

f_8(x) = f_4(y), \quad y = Mx

Rotated Ackley function, which uses Mx instead of x in the Ackley function. The global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-32.768, 32.768].

f_9(x) = f_6(y), \quad y = Mx


Rotated Griewank function, which uses Mx instead of x in the Griewank function. The global minimum value is 0 at (0, 0, ..., 0). The initialization range for the function is [-600, 600].

f_{10}(x) = f_7(y), \quad y = Mx
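The rotation wrapper y = Mx can be sketched as below. Note the hedge: the paper generates M with Salomon's method [33]; here a QR decomposition of a Gaussian matrix is used only as a common, readily available substitute for producing a random orthogonal matrix.

```python
import numpy as np

def random_orthogonal_matrix(dim, seed=None):
    """A random orthogonal matrix M via QR of a Gaussian matrix
    (substitute for Salomon's method used in the paper)."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))  # fix column signs so M is well defined

def rotated(func, m):
    """Wrap a benchmark so it is evaluated on y = Mx instead of x."""
    return lambda x: func(m @ x)

def rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

m = random_orthogonal_matrix(30, seed=0)
f8 = rotated(rastrigin, m)  # rotated Rastrigin, per f_8(x) = f_4(Mx)
```

Since M is orthogonal, the rotation preserves distances and hence the shape of the landscape, yet a change in one component of x now perturbs every component of y, which is what makes the rotated variants harder.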

4.2. Parameter settings for the involved algorithms

In the experiment, all functions were tested with 30 dimensions and each algorithm was run 30 times. The population sizes of all algorithms were 100. In HABC, ABC and CABC, the numbers of employed bees and onlooker bees were half of the population size and the number of scout bees was set to one, with limit = 100. In the PSO and CPSO algorithms, the inertia weight \omega decreased linearly from 0.9 to 0.7 with the iterations. The learning factors were c1 = c2 = 2.0 [34]. Vmin = 0.1 Lb and Vmax = 0.1 Ub, where Lb and Ub refer to the lower and upper bounds of x. A simple GA is employed in this paper: single-point crossover is used, the crossover probability is 0.95 and the mutation probability is 0.1, the same as in Reference [6].

The iteration count is not a reasonable measure for comparison, as different algorithms may incur different computational costs per iteration. In order to compare the different algorithms fairly, we use the number of function evaluations (FEs) as the measurement criterion [22,35,36]. All algorithms were terminated after 100,000 FEs.

All algorithms were implemented in Matlab 2010b on a computer with an Intel Core 2 Duo CPU E4500 (2.20 GHz) and 1 GB RAM, running Windows XP.

4.3. Simulation results for benchmark functions

The means and standard deviations of the function values obtained by the HABC, ABC, CABC, PSO, CPSO and GA algorithms over 30 runs after 100,000 FEs are given in Table 1. The best values obtained by the algorithms for each function are marked in bold. Rank represents the performance order of the six algorithms on each benchmark function.

As shown in Table 1, the HABC algorithm performed best on eight of the ten benchmark functions, while ABC and CPSO each performed best on one. GA performed worst on nine functions and PSO on one. It is worth noting that the HABC algorithm obtained exactly the global minimum on six functions; its convergence speed and accuracy are indeed much better than those of the other algorithms. The mean best function value profiles of the six algorithms are shown in Fig. 3.

On the Sphere function, all algorithms obtained acceptable results except GA. The result achieved by HABC improved continually and reached the global minimum 0 at about 90,000 FEs, as seen in Fig. 3(a). The performances of CABC and ABC on this function are better than those of PSO and CPSO.

On the Rosenbrock function, no algorithm performed well. ABC performed best on this function; CABC, CPSO, HABC and PSO performed a little worse than ABC, while GA performed worst. As can be seen from Fig. 3(b), the HABC algorithm converged fairly fast at the very beginning and then became trapped in a local minimum. CPSO and PSO became trapped in local minima later, while only ABC seemed able to keep improving.

On the Quadric function, the performance of HABC is much like its performance on Sphere: the result improved continually and reached the global minimum 0 at about 90,000 FEs. For this reason, Fig. 3(c) is similar to Fig. 3(a). The difference is that the performances of CABC and ABC on this function are worse than those of PSO and CPSO.

On the Rastrigin function, the HABC algorithm converged very fast and reached the global minimum at about 6,000 FEs, as seen in Fig. 3(d). CABC and CPSO achieved acceptable results and took rank 2 and rank 3, respectively. ABC, PSO and GA did not perform well, and GA was the worst.

Table 1
Performances of the HABC, ABC, CABC, PSO, CPSO and GA algorithms on benchmark functions f_1–f_10.

Func.         HABC          ABC           CABC          PSO           CPSO          GA
f_1    Mean   0             4.54316e-012  3.12169e-013  1.03497e-005  7.77685e-009  1.38989e+000
       Std    0             2.80216e-012  3.34301e-013  7.01478e-006  7.71643e-009  5.15608e-001
       Rank   1             3             2             5             4             6
f_2    Mean   2.81228e+001  4.27623e-001  6.2047e+000   3.4052e+001   3.16607e+000  1.4398e+003
       Std    5.43474e-001  3.18182e-001  1.43227e+001  2.11113e+001  2.6112e+000   7.81296e+002
       Rank   4             1             3             5             2             6
f_3    Mean   0             1.03861e+002  8.57809e+001  4.02959e-002  8.86096e-003  2.05850e+002
       Std    0             2.84448e+001  2.0114e+001   1.9606e-002   8.85094e-002  5.77667e+001
       Rank   1             5             4             3             2             6
f_4    Mean   0             1.35600e-001  2.8432e-007   5.99753e+001  9.81891e-006  1.37988e+002
       Std    0             3.27338e-001  6.2806e-007   1.65137e+001  9.88098e-006  3.24693e+001
       Rank   1             4             2             5             3             6
f_5    Mean   7.91626e+002  1.40091e+002  3.84785e-004  6.00488e+003  1.30206e-004  1.02275e+003
       Std    3.54901e+002  9.23071e+001  6.91784e-006  8.0045e+002   2.63398e-006  4.21222e+002
       Rank   4             3             2             6             1             5
f_6    Mean   8.88178e-016  5.96747e-006  2.61759e-005  2.11595e+000  8.10679e-004  1.91905e+001
       Std    2.00587e-031  3.13744e-006  1.19183e-005  4.70844e-001  3.84077e-004  8.46434e-001
       Rank   1             2             3             5             4             6
f_7    Mean   0             1.06029e-007  4.13795e-004  3.13796e-002  4.82023e-002  5.05984e+000
       Std    0             4.6102e-007   1.84187e-003  1.45468e-002  3.04478e-002  1.63389e+000
       Rank   1             2             3             4             5             6
f_8    Mean   0             9.02396e+001  1.01876e+002  7.74909e+001  1.63648e+002  2.57888e+002
       Std    0             1.28850e+001  2.41752e+001  2.0803e+001   4.6345e+001   4.09012e+001
       Rank   1             2             4             3             5             6
f_9    Mean   8.88178e-016  2.44863e+000  2.28923e+000  2.85518e+000  6.19628e+000  1.97429e+001
       Std    2.00587e-031  1.27872e+000  8.48221e-001  6.09806e-001  7.21582e+000  5.0853e-001
       Rank   1             3             2             4             5             6
f_10   Mean   0             2.2141e-005   5.82031e-005  3.33974e-002  3.50708e-002  5.18582e+000
       Std    0             1.78621e-005  1.82399e-004  1.68016e-002  3.36502e-002  1.31005e+000
       Rank   1             2             3             4             5             6


On the Schwefel function, CPSO performed best in both convergence speed and final result. CABC is a little worse than CPSO. ABC, HABC, PSO and GA all failed to perform well on this function, with PSO worst, as shown in Fig. 3(e).

On the Ackley function, HABC converged fast at the beginning and then became trapped in a local minimum after about 10,000 FEs, as can be seen in Fig. 3(f). Though trapped in a local minimum, HABC obtained the best result among all the algorithms. GA and PSO performed worst on this function, while the other algorithms differed little from each other.

On the Griewank function, similar to its performance on Rastrigin, the HABC algorithm converged very fast and reached the global minimum at about 7,000 FEs, as seen in Fig. 3(g). ABC and CABC obtained acceptable results on this function, while CPSO and PSO did not perform well. GA performed worst.

On the three rotated functions, the performance of HABC is much like its performance on the non-rotated ones. This shows that the HABC algorithm is not sensitive to rotation and can maintain its excellent performance on these functions. The performances of the other algorithms on these three functions are not the same. As mentioned above, CABC and CPSO obtained acceptable results on Ackley and Rastrigin, and ABC obtained acceptable results on Ackley. However, these three algorithms all performed poorly on Rotated Rastrigin and Rotated Ackley, as seen from Table 1 and Fig. 3. PSO maintained its poor performance on the two functions, too. On the Rotated Griewank function, the performance of ABC became a little worse than on the original Griewank. The results obtained by the other algorithms are close to their results on Griewank, as shown in Table 1.

The average time consumption of a single run of each algorithm on the ten functions is listed in Table 2. It is clear that the time consumptions of the PSO and GA algorithms are close to each other, and they cost the least time on all functions. The CPSO algorithm costs the most time. The time consumptions of HABC, ABC and CABC are intermediate and similar to each other. Overall, most of them can obtain their results within ten seconds, which is already fairly fast. Though HABC uses more time than the PSO and GA algorithms, its results are much better. Moreover, from Fig. 3 we can see that on several functions HABC obtained the global optimum well before the total function evaluations were exhausted, which indicates that on these functions it needs less time than reported in Table 2.

Overall, the HABC algorithm outperforms the other five algorithms on eight of the ten benchmark functions. In particular, on six functions it converged fast and obtained the global minimum of zero. On the three rotated functions it showed great robustness, as the algorithm maintained its good performance. It can be concluded that the proposed HABC is an efficient algorithm for numerical function optimization. Given its excellent performance on the benchmark functions, we employed it for data clustering.

5.Data clustering

As mentioned above, in this paper we mainly focus on partitional clustering. In a partitional clustering problem, we need to divide a set of n objects into k clusters. Let $O = (o_1, o_2, \ldots, o_n)$ be the set of n objects. Each object has p characters, and each character is quantified with a real value. Let $X_{n \times p}$ be the character data matrix; it has n rows and p columns. Each row

Fig. 3. The mean best function value profiles of HABC, ABC, CABC, PSO, CPSO and GA: (a) Sphere function, (b) Rosenbrock function, (c) Quadric function, (d) Rastrigin function, (e) Schwefel function, (f) Ackley function, (g) Griewank function, (h) Rotated Rastrigin, (i) Rotated Ackley, (j) Rotated Griewank.

Table 2
Average time consumption (in seconds) of a single run of the HABC, ABC, CABC, PSO, CPSO and GA algorithms on the benchmark functions.

Function  HABC    ABC     CABC    PSO     CPSO     GA
f1        4.2763  4.0491  4.3404  2.3877   9.4691  2.6607
f2        4.9939  4.9341  4.9934  2.6079   9.9794  2.7370
f3        6.5073  6.3608  6.9296  3.5684  12.2584  3.5133
f4        4.7157  4.7118  4.7991  2.5939  10.2066  2.6949
f5        5.2126  5.1006  5.2661  2.7789  10.7178  2.7299
f6        5.4825  5.5731  5.6621  3.1682  11.3551  3.1755
f7        8.2412  8.3197  8.2317  4.0181  13.7551  4.1085
f8        6.5093  6.5663  6.5678  3.4882  12.1548  3.4766
f9        6.9664  7.1036  6.9710  3.7816  12.5786  3.8437
f10       9.7017  9.7811  9.6632  4.2812  14.3786  4.1545


represents one data object, and $x_{i,j}$ corresponds to the jth feature of the ith object ($i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, p$).

Let $C = (C_1, C_2, \ldots, C_k)$ denote the k clusters. Then

$$C_i \neq \emptyset, \qquad C_i \cap C_j = \emptyset, \qquad \bigcup_{i=1}^{k} C_i = O, \qquad i, j = 1, 2, \ldots, k,\ i \neq j$$

The goal of a clustering algorithm is to find a partition C that makes the objects within the same cluster as similar as possible while objects in different clusters are dissimilar. This can be measured by criteria such as the total within-cluster variance or the total mean-square quantization error (MSE) [37]:

$$\mathrm{Perf}(O, C) = \sum_{i=1}^{n} \min\left\{ \lVert o_i - c_j \rVert^2,\ j = 1, 2, \ldots, k \right\} \qquad (5)$$

where $\lVert o_i - c_j \rVert^2$ denotes the similarity between the ith object and the center of the jth cluster. The similarity metric most widely used in clustering is the Euclidean distance, which is derived from the Minkowski metric, as in Eq. (6):

$$d(o_i, c_j) = \left( \sum_{m=1}^{p} \lvert x_{im} - c_{jm} \rvert^{r} \right)^{1/r} \;\Rightarrow\; d(o_i, c_j) = \sqrt{\sum_{m=1}^{p} (x_{im} - c_{jm})^2} \qquad (6)$$

where $c_j$ is the center of the jth cluster $C_j$ and m indexes the p dimensions.
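As a concrete illustration, the criterion of Eq. (5) with the Euclidean distance of Eq. (6) can be sketched in Python (a minimal sketch with our own names and toy data, not the authors' code):

```python
from math import sqrt

def euclidean(o, c):
    """Eq. (6) with r = 2: Euclidean distance between object o and center c."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(o, c)))

def within_cluster_variance(objects, centers):
    """Eq. (5): for each object take the squared distance to its
    nearest center, then sum over all objects."""
    return sum(min(euclidean(o, c) for c in centers) ** 2 for o in objects)

# toy data: 4 objects with p = 2 characters, k = 2 centers
objects = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = [(0.0, 0.5), (10.0, 10.5)]
print(within_cluster_variance(objects, centers))  # 4 * 0.5**2 = 1.0
```

Here each object lies 0.5 away from its nearest center, so the criterion sums four squared distances of 0.25; a lower value means a tighter partition.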

5.1. K-means algorithm for clustering

The K-means algorithm is a classic center-based clustering algorithm proposed three decades ago. It has remained popular for a long time due to its simplicity and efficiency. The main steps of the K-means algorithm are as follows:

i. First, randomly choose k cluster centers $(c_1, c_2, \ldots, c_k)$ from the n objects, i.e., $c_i = o_j$ with a different j for each i.
ii. Then calculate the distances between all objects and all cluster centers following Eq. (6), and assign each object to the nearest cluster center to form k clusters $(C_1, C_2, \ldots, C_k)$.
iii. After assigning, recalculate the centers of the clusters following Eq. (7), where $n_i$ is the number of objects belonging to cluster $C_i$.
iv. Repeat steps ii and iii until the cluster centers no longer change or another termination criterion is satisfied.

$$c_i = \frac{1}{n_i} \sum_{\forall o_j \in C_i} x_j \qquad (7)$$

The K-means algorithm has linear time complexity, but it depends strongly on the initial cluster centers and is easily trapped in local optima.
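The four steps above can be sketched in Python as follows (a minimal illustration under our own naming, not the authors' implementation):

```python
import random
from math import dist  # Euclidean distance, Eq. (6) with r = 2

def kmeans(objects, k, max_iter=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(objects, k)  # step i: k distinct objects as initial centers
    clusters = []
    for _ in range(max_iter):
        # step ii: assign each object to its nearest cluster center
        clusters = [[] for _ in range(k)]
        for o in objects:
            nearest = min(range(k), key=lambda j: dist(o, centers[j]))
            clusters[nearest].append(o)
        # step iii: recompute each center as the mean of its cluster, Eq. (7)
        new_centers = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[j]
            for j, c in enumerate(clusters)
        ]
        # step iv: stop once the centers no longer change
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

objects = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers, clusters = kmeans(objects, k=2)
print(sorted(centers))  # the two cluster means
```

On this well-separated toy data any random initialization converges to the same two means, but on harder data the result depends on step i, which is exactly the weakness noted above.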

5.2. HABC for clustering

As with the other swarm intelligence algorithms for clustering, it is easy to apply the HABC algorithm to clustering. Only two changes are needed:

i. Solution representation
In HABC for numerical optimization, each food source represents a solution of the problem, while in HABC for clustering, each food source represents a set of cluster centers, as shown in Eq. (8). The food source can be decoded into the cluster centers using Eq. (9).

$$X_i = \left( x_1, x_2, \ldots, x_p, x_{p+1}, \ldots, x_{kp} \right) \qquad (8)$$

$$c_m = \left( x_{(m-1)p+1}, x_{(m-1)p+2}, \ldots, x_{mp} \right), \quad m = 1, 2, \ldots, k \qquad (9)$$

$X_i$ represents a food source in the HABC algorithm, k is the number of clusters, and p is the number of characters of the data clustering problem to be solved. The actual dimension of HABC for a k-center clustering problem with p characters is therefore kp.

The colony size of the HABC algorithm is independent of the clustering problem. The clustering data are first scanned to find the upper and lower bound of each character. In the initialization phase and the scout bees' phase, when a new food source is produced, its value on the jth dimension is restricted within the boundary of the lth character, where l is calculated following Eq. (10):

$$l = \mathrm{mod}(j - 1, p) + 1 \qquad (10)$$
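The representation can be illustrated as follows (a sketch with hypothetical helper names: `decode` applies Eq. (9) and `character_index` applies Eq. (10), both with the paper's 1-based indices):

```python
def decode(food_source, k, p):
    """Eq. (9): split a kp-dimensional food source into k centers of p characters."""
    assert len(food_source) == k * p
    return [food_source[(m - 1) * p:m * p] for m in range(1, k + 1)]

def character_index(j, p):
    """Eq. (10): l = mod(j - 1, p) + 1 gives the character whose
    bounds constrain dimension j."""
    return (j - 1) % p + 1

# a food source for k = 2 clusters and p = 3 characters (kp = 6 dimensions)
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
print(decode(x, k=2, p=3))      # [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
print(character_index(5, p=3))  # dimension 5 is bounded by character 2
```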

ii. Fitness calculation
Different from the numerical optimization problem, when solving a data clustering problem the total within-cluster variance of Eq. (5) is used to evaluate the quality of the cluster partition. The pseudo code of the fitness calculation of the HABC algorithm for clustering problems is listed in Fig. 4. For each food source, decode it into the k cluster centers, calculate the distance between the objects and each center, assign the objects to the nearest cluster center, and then compute the total within-cluster variance as the food source's fitness.

The remaining parts are the same as those presented in Section 3.
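This fitness evaluation (listed as pseudo code in Fig. 4) can be rendered as a short Python sketch (our own minimal version, using 0-based slicing in place of Eq. (9)'s 1-based indices):

```python
from math import dist  # Euclidean distance, Eq. (6)

def clustering_fitness(food_sources, data, k, p):
    """For each food source: decode it to k centers (Eq. (9)), assign each
    object to its nearest center, and return the total within-cluster
    variance (Eq. (5)) as the fitness to be minimized."""
    fitness = []
    for x in food_sources:
        centers = [x[m * p:(m + 1) * p] for m in range(k)]            # Eq. (9)
        v = sum(min(dist(o, c) for c in centers) ** 2 for o in data)  # Eq. (5)
        fitness.append(v)
    return fitness

data = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
# one food source encoding k = 2 centers with p = 2 characters each
print(clustering_fitness([[0.0, 0.5, 10.0, 10.5]], data, k=2, p=2))  # [1.0]
```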

6. Experiment of HABC for data clustering

6.1. Datasets and parameter settings

To evaluate the performance of the HABC algorithm for data clustering, we compared it with the ABC, CABC, PSO, CPSO, GA and classic K-means algorithms on six real datasets selected from the UCI machine learning repository [23]. The datasets are as follows, where N is the number of data records, P is the number of characters of each record, and K is the number of clusters to divide into.

Iris data (N=150, P=4, K=3): this dataset contains 150 random samples of flowers from the iris species setosa, versicolor and virginica, collected by Anderson. For each species there are 50 observations of sepal length, sepal width, petal length and petal width, in cm [38].

Wine data (N=178, P=13, K=3): this is the wine dataset from the MCI laboratory. These data are the results of a chemical analysis of wines grown in the same region of Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. There are 178 instances with 13 numerical attributes in the wine dataset. All attributes are continuous, and there are no missing attribute values [38].

Contraceptive Method Choice (N=1473, P=10, K=3): the CMC dataset is a subset of the 1987 National Indonesia Contraceptive Prevalence Survey. The samples are married women who were either not pregnant or did not know whether they were at the time of interview. The problem is to predict the current contraceptive method choice (no use, long-term methods, or

Input: food sources X, data D
Output: fitness F
For each food source X_i
    Decode X_i into the k cluster centers following Eq. (9)
    Calculate the distance between all objects in D and each cluster center following Eq. (6)
    Assign objects to the nearest cluster centers
    Compute the total within-cluster variance V_i following Eq. (5)
    F_i = V_i
End for
Return F

Fig. 4. Pseudo code of the fitness calculation of the HABC algorithm for clustering problems.


short-term methods) of a woman based on her demographic and

socioeconomic characteristics [38].

Wisconsin Breast Cancer (N=683, P=9, K=2): the WBC dataset consists of 683 objects characterized by nine features: clump thickness, cell size uniformity, cell shape uniformity, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses. There are two categories in the data: malignant (444 objects) and benign (239 objects) [38].

Glass data (N=214, P=9, K=6): the data were sampled from six different types of glass: building windows float processed (70 objects), building windows non-float processed (76 objects), vehicle windows float processed (17 objects), containers (13 objects), tableware (9 objects), and headlamps (29 objects), each with nine features: refractive index, sodium, magnesium, aluminum, silicon, potassium, calcium, barium, and iron [38].

Liver Disorders (N=345, P=6, K=2): the Liver Disorders data is also known as the BUPA Liver Disorders dataset. Prepared by the BUPA Medical Research Company, it includes 345 samples consisting of six attributes and two classes. Each sample is taken from an unmarried man. Two hundred of these samples belong to one class, and the remaining 145 belong to the other. The first five attributes of each sample are the results of blood tests, while the last attribute is the daily alcohol consumption [39].

In this experiment, each algorithm was run 30 times with randomly initialized solutions on every dataset. The parameters of the HABC, ABC, CABC, PSO, CPSO and GA algorithms are the same as in Section 4, except that the six algorithms were terminated after 10,000 FEs because clustering is time-consuming. As the K-means algorithm needs only the initial cluster centers, it has no extra parameters.

6.2.Results and analysis of data clustering

Clustering results by HABC,ABC,CABC,PSO,CPSO,GA and

K-means algorithms are given in Table 3.Mean represents the

average total within-cluster variance for 30 runs and the Std

represents the standard deviation.Rank represents performance

order of the seven algorithms on each dataset.As shown in

Table 3,HABC algorithm obtained the best mean and standard

deviation of total within-cluster variance criterion on ﬁve

data sets.K-means algorithm performed worst on all datasets.

The mean minimum total within-cluster variance proﬁles of the

HABC,ABC,PSO,CPSO and GA algorithms are shown in Fig.5.

On the Iris dataset, the performance order of the algorithms is HABC > CABC > ABC > PSO > CPSO > GA > K-means. The results obtained by HABC, ABC and CABC are close to each other. However, the standard deviation of HABC is 2.86938e-006, much smaller than those of the other algorithms, which means that HABC is robust and converged to the minimum every time. PSO, CPSO and GA are a little worse than the above three algorithms. GA converged noticeably more slowly than the other five algorithms, as seen from Fig. 5(a).

On the Wine dataset, the performance order of the algorithms is HABC > CABC > ABC > CPSO > PSO > GA > K-means. The HABC algorithm converged fast and achieved the best mean and standard deviation results. CABC is a little better than ABC; their mean fitness values are 1.62992e+004 and 1.63060e+004, respectively. CPSO performed better than PSO. GA converged fast at first but achieved the worst results among the six heuristic algorithms. The convergence speed of most algorithms slowed down after 7000 FEs.

On the Contraceptive Method Choice data, the performance order of the algorithms is HABC > ABC > CABC > CPSO > PSO > GA > K-means. HABC obtained the best results in both mean and standard deviation. The GA algorithm converged fast at the beginning but fell behind in the end, as seen from Fig. 5(c).

On the Wisconsin Breast Cancer data, the performance order of the algorithms is ABC > CABC > HABC > CPSO > GA > PSO > K-means. The result obtained by HABC is worse than those of ABC and CABC on this dataset, which is obvious from the standard deviation values. It is nevertheless much better than the results of the CPSO, PSO, GA and K-means algorithms.

On the Glass data, the performance order of the algorithms is HABC > CABC > ABC > GA > CPSO > PSO > K-means. The mean total within-cluster variances of HABC and CABC are similar, and ABC performed a little worse. CPSO and PSO performed worse than GA on this dataset; though these two algorithms converged fast at the beginning, their final results are not good and are close to the result of K-means.

On the Liver Disorders data, the performance order of the algorithms is HABC > ABC > CABC > PSO > GA > CPSO > K-means. The mean total within-cluster variances and convergence plots of HABC, ABC and CABC are nearly the same, but the standard deviation of HABC is smaller than those of the other two algorithms, which means it has better convergence ability. The results of CPSO, GA and PSO are better than K-means, but much worse than those of the above three algorithms.

The time consumptions of the six intelligence algorithms on each data clustering problem are nearly the same. On the Iris, Wine, Contraceptive Method Choice, Wisconsin Breast Cancer, Glass and Liver Disorders data, the average time consumptions of a single run

Table 3
Average total within-cluster variances of the HABC, ABC, CABC, PSO, CPSO, GA and K-means algorithms on six datasets.

Dataset  Metric  HABC          ABC           CABC          PSO           CPSO          GA            K-means
Iris     Mean    9.46034e+001  9.46106e+001  9.46039e+001  9.52257e+001  9.62048e+001  9.73895e+001  1.08303e+002
         Std     2.86938e-006  9.68198e-003  5.62618e-004  6.56006e-001  8.79374e-001  2.80244e+000  1.95741e+001
         Rank    1             3             2             4             5             6             7
Wine     Mean    1.62977e+004  1.63060e+004  1.62992e+004  1.63371e+004  1.63147e+004  1.64061e+004  1.87911e+004
         Std     5.27099e+000  3.64531e+001  8.10866e+000  6.36983e+001  5.14605e+001  1.31552e+002  7.45520e+002
         Rank    1             3             2             5             4             6             7
CMC      Mean    5.69486e+003  5.69567e+003  5.69628e+003  5.71398e+003  5.70341e+003  5.80445e+003  5.95809e+003
         Std     9.01368e-001  1.96629e+000  5.49646e+000  2.4463e+001   2.99744e+001  1.07516e+002  1.82840e+002
         Rank    1             2             3             5             4             6             7
WBC      Mean    2.9713e+003   2.96439e+003  2.96483e+003  3.17228e+003  3.00496e+003  3.13073e+003  3.31858e+003
         Std     2.45086e+001  1.33911e-002  8.78505e-001  1.47687e+002  8.63002e+001  1.42091e+002  2.8978e+002
         Rank    3             1             2             6             4             5             7
Glass    Mean    2.21660e+002  2.30550e+002  2.23393e+002  2.54035e+002  2.49245e+002  2.46061e+002  2.55300e+002
         Std     4.06821e+000  1.14880e+001  6.17516e+000  1.0107e+001   1.19932e+001  1.36449e+001  1.6353e+001
         Rank    1             3             2             6             5             4             7
LD       Mean    9.85173e+003  9.85175e+003  9.85179e+003  9.90415e+003  1.00240e+004  9.93828e+003  1.2063e+004
         Std     1.12273e-002  8.25978e-002  1.46685e-001  2.34147e+002  4.40462e+002  5.37967e+002  6.53482e+002
         Rank    1             2             3             4             6             5             7


are about 1.0, 1.2, 9.2, 2.8, 5.0 and 1.4 min, respectively. This is mainly because, in data clustering, most of the time is spent calculating the total within-cluster variances in the fitness evaluation process. As all algorithms stopped after the same number of function evaluations (10,000), their time consumptions are close to each other. The K-means algorithm is an approximate algorithm and can obtain results in seconds, but its results are much worse.

7. Conclusion

This paper presents a Hybrid Artificial Bee Colony (HABC) algorithm, in which the crossover operator of GA is introduced to improve the original ABC algorithm. With the new operator, information is exchanged fully between bees and good individuals are exploited. In the early stage of the algorithm, the search ability is enhanced; near the end, as the differences between individuals decrease, the perturbation of the crossover operator decreases as well, so convergence is maintained. To demonstrate the performance of the HABC algorithm, we tested it on ten benchmark functions against the ABC, CABC, PSO, CPSO and GA algorithms. The results show that the proposed HABC algorithm outperforms the canonical ABC and the other compared algorithms on eight functions in terms of convergence accuracy and convergence speed. The test on rotated functions further shows that HABC is robust and maintains its superiority on rotated functions while the other algorithms degrade.

Given its excellent optimization ability on numerical functions, we applied the HABC algorithm to the data clustering problem. Six well-known real datasets selected from the UCI machine learning repository were used for testing, with the algorithms mentioned above as well as the K-means algorithm employed for comparison. The results show that HABC obtained the best total within-cluster variance value on five datasets, which proves that the HABC algorithm is a competitive approach for data clustering.

However, the algorithm can still be trapped in local minima on a few functions, as can be seen from both the benchmark functions and the data clustering results. Identifying the features of the functions on which HABC does not work well, and improving the algorithm on them, is future work.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grant nos. 61174164, 61003208, 61105067). The

Fig. 5. The mean minimum total within-cluster variance profiles of HABC, ABC, CABC, PSO, CPSO and GA: (a) Iris data, (b) Wine data, (c) Contraceptive Method Choice, (d) Wisconsin Breast Cancer, (e) Glass data, (f) Liver Disorders.


authors are very grateful to the anonymous reviewers for their

valuable suggestions and comments to improve the quality of

this paper.

References

[1] M.Dorigo,L.M.Gambardella,Ant colony system:a cooperating learning

approach to the travelling salesman problem,IEEE Trans.Evol.Comput.1 (1)

(1997) 53–66.

[2] J.Kennedy,R.C.Eberhart,Particle swarmoptimization,In:Proceedings of the 1995

IEEE International Conference on Neural Networks,4 (1995),pp.1942–1948.

[3] M. Gong, L. Jiao, X. Zhang, A population-based artificial immune system for numerical optimization, Neurocomputing 72 (1–3) (2008) 149–161.

[4] K.M.Passino,Biomimicry of bacterial foraging for distributed optimization

and control,IEEE Control Syst.Mag.22 (2002) 52–67.

[5] D.Karaboga,An idea based on honey bee swarmfor numerical optimization,

technical report-TR06,Erciyes University,Engineering Faculty,Comput.Eng.

Dep.(2005).

[6] D.Karaboga,B.Basturk,A powerful and efﬁcient algorithm for numerical

function optimization:artiﬁcial bee colony (abc) algorithm,J.Global Optim.

39 (3) (2007) 459–471.

[7] D.Karaboga,B.Basturk,Artiﬁcial bee colony (ABC) optimization algorithm for

solving constrained optimization problems,Lec.Notes Comput.Sci 4529 (2007)

789–798.

[8] D.Karaboga,B.Akay,C.Ozturk,Artiﬁcial bee colony (ABC) optimization

algorithmfor training feed-forward neural networks,Modeling Decisions for

Artif.Intell.4617 (2007) 318–329.

[9] A. Baykasoglu, L. Ozbakır, P. Tapkan, Artificial bee colony algorithm and its application to generalized assignment problem, in: Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, I-Tech Education and Publishing, Vienna, Austria, 2007, pp. 113–144.

[10] M.F.Tasgetiren,Q.-K.Pan,P.N.Suganthan,A.H.-L.Chen,A discrete artiﬁcial

bee colony algorithm for the total ﬂowtime minimization in permutation

ﬂow shops,Inf.Sci.181 (2011) 3459–3475.

[11] I.E.Evangelou,D.G.Hadjimitsis,A.A.Lazakidou,C.Clayton,Data mining and

knowledge discovery in complex image data using artiﬁcial neural networks,

In:Proceedings of the Workshop on Complex Reasoning an Geographical

Data,Paphos,Cyprus,2001.

[12] M.S.Kamel,S.Z.Selim,New algorithms for solving the fuzzy clustering

problem,Pattern Recognition 27 (3) (1994) 421–428.

[13] M.Omran,A.Salman,A.P.Engelbrecht,Image classiﬁcation using particle

swarm optimization,In Proceedings of the 4th Asia-Paciﬁc Conference on

Simulated Evolution and Learning,Singapore,(2002),pp.370–374.

[14] J.Han,M.Kamber,Data Mining:Concepts and Techniques,Academic Press,

New York,NY,USA,2001.

[15] K.Krishna Murty,Genetic K-means Algorithm,IEEE Trans.Syst.,Man and

Cybernetics:Part B 29 (3) (1999) 433–439.

[16] S.Z.Selim,K.Al-Sultan,A simulated annealing algorithm for the clustering

problem,Pattern Recognition 24 (10) (1991) 1003–1008.

[17] P.S.Shelokar,V.K.Jayaraman,B.D.Kulkarni,An ant colony approach for

clustering,Anal.Chim.Acta 509 (2) (2004) 187–195.

[18] V.D.Merwe and A.P.Engelbrecht,Data clustering using particle swarm

optimization,in Proceedings of IEEE Congress on Evolutionary Computation

(CEC 03),Canbella,Australia,(2003),pp.215–220.

[19] M.Omran,A.P.Engelbrecht,A.Salman,Particle swarm optimization method for

image clustering,Int.J.Pattern Recognition and Artif.Intell.19 (3) (2005)

297–321.

[20] D.Karaboga,C.Ozturk,A novel clustering approach:artiﬁcial bee colony

(ABC) algorithm,Appl.Soft Comput.11 (2010) 652–657.

[21] C.Zhang,D.Ouyang,J.Ning,An artiﬁcial bee colony approach for clustering,

Expert Syst.Appl.37 (7) (2010) 4761–4767.

[22] W.Zou,Y.Zhu,H.Chen,X.Sui,A clustering approach using cooperative

artiﬁcial bee colony algorithm,Discrete Dynamics in Nat.Soc.(2010) 16,

Article ID 459796.

[23] C.L. Blake, C.J. Merz, UCI Repository of Machine Learning Databases, <http://archive.ics.uci.edu/ml/datasets.html>.

[24] D.Karaboga,B.Basturk,On the performance of artiﬁcial bee colony (ABC)

algorithm,Appl.Soft Comput.8 (1) (2008) 687–697.

[25] D.Karaboga,B.Akay,A comparative study of artiﬁcial bee colony algorithm,

Appl.Math.Comput.214 (2009) 108–132.

[26] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.

[27] C.F.Juang,A hybrid genetic algorithm and particle swarm optimization for

recurrent network design,IEEE Trans.Syst.,Man,and Cybernetics,Part B 34

(2004) 997–1006.

[28] X.H.Shi,Y.H.Lu,C.G.Zhou,H.P.Lee,W.Z.Lin,Y.C.Liang,Hybrid evolutionary

algorithms based on PSO and GA,In:Proceedings of the IEEE Congress on

Evolutionary Computation(CEC’03),4 (2003),pp.2393–2399.

[29] H.Zhao,Z.Pei,J.Jiang,R.Guan,C.Wang,X.Shi,A hybrid swarm intelligent

method based on geneticalgorithm and artiﬁcial bee colony,Lect.Notes

Comput.Sci.6145 (2010) 558–565.

[30] F.van den Bergh,A.P.Engelbrecht,A cooperative approach to particle swam

optimization,IEEE Trans.Evol.Comput.8 (3) (2004) 225–239.

[31] H.Chen,Y.Zhu,K.Hu,Discrete and continuous optimization based on multi-

swarm coevolution,Nat.Comput.9 (3) (2010) 659–682.

[32] J.J.Liang,A.K.Qin,P.N.Suganthan,S.Baskar,Comprehensive learning particle

swarm optimizer for global optimization of multimodal functions,IEEE

Trans.Evol.Comput.10 (2006) 281–295.

[33] R.Salomon,Reevaluating genetic algorithm performance under coordinate

rotation of benchmark functions,Bio Syst.39 (1996) 263–278.

[34] Y.Shi,R.C.Eberhart,Empirical study of particle swarm optimization,In

Proceedings of the IEEE Congress on Evolutionary Computation(CEC’99),

Piscataway,NJ,USA,3 (1999),pp.1945–1950.

[35] B.Akay,D.Karaboga,Parameter tuning for the artiﬁcial bee colony algorithm.

In:Proceeding of the First International Conference (ICCCI’09),Wroclaw,

Poland,(2009),pp.608–619.

[36] D. Karaboga, B. Akay, A modified artificial bee colony (ABC) algorithm for constrained optimization problems, Appl. Soft Comput. 11 (3) (2011) 3021–3031.

[37] Z. Güngör, A. Ünler, K-harmonic means data clustering with simulated annealing heuristic, Appl. Math. Comput. 184 (2) (2007) 199–209.

[38] T.Niknam,B.Bahmani Firouzi,M.Nayeripour,An efﬁcient hybrid evolutionary

algorithmfor cluster analysis,World Appl.Sci.J.4 (2) (2008) 300–307.

[39] K. Polat, S. Sahan, H. Kodaz, S. Güneş, Breast cancer and liver disorders classification using artificial immune recognition system (AIRS) with performance evaluation by fuzzy resource allocation mechanism, Expert Syst. Appl. 32 (17) (2007) 172–183.

Xiaohui Yan received his B.S.degree in Industry

Engineering from Huazhong University of Science and

Technology,Wuhan,China,in 2007.He is currently

pursuing the Ph.D.degree at Shenyang Institute of

Automation,Chinese Academy of Sciences,Shenyang,

China.His current research interests include swarm

intelligence,bioinformatics and computational biol-

ogy,neural networks,and the application of the

intelligent optimization methods on data mining and

scheduling.

Yunlong Zhu is the Director of the Key Laboratory of

Industrial Informatics,Shenyang Institute of Automa-

tion,Chinese Academy of Sciences.He received his

Ph.D.in 2005 from the Chinese Academy of Sciences,

China.He has research interests in various aspects of

Enterprise Information Management but he has

ongoing interests in artiﬁcial intelligence,data mining,

complex systems and related areas.Prof.Zhu’s

research has led to a dozen professional publications

in these areas.

Wenping Zou earned his B.S.degree in Computer

Sciences and Technology from Shenyang University

of Technology in Shenyang,Liaoning,China,in 2006.

He is now pursuing his Ph.D.in Shenyang Institute of

Automation of the Chinese Academy of Sciences.His

current research interests include swarm intelligence,

bioinformatics and computational biology,with an

emphasis on evolutionary and other stochastic opti-

mization methods.

Liang Wang obtained his M.S. degree in automatic control from Northeastern University, Shenyang, China, in

2009.He is currently pursuing the Ph.D.degree at

Shenyang Institute of Automation,Chinese Academy

of Sciences,Shenyang,China.His current research

interests include data mining,social computing and

decision support systems.

