Neural Information Processing - Letters and Reviews Vol.10, No.3, March 2006


Neural Networks for Solving Quadratic Assignment Problems

Gamil A. Azim

College of Computer, Qassim University, Saudi Arabia

Email: gazim3@hotmail.com

(Submitted on December 14, 2005; Accepted on March 31, 2006)

Abstract— In this paper the Hopfield neural network is adopted to solve the quadratic assignment problem, which is a generalization of the traveling salesman problem (TSP), the graph partitioning problem (GPP), and the matching problem. When the Hopfield neural network was applied alone, a sub-optimal solution was obtained. By adding the 2-exchange improvement method we obtained a solution very close to the optimum. The relationship between the gain of the neuron λ and the penalty coefficient α of the Hopfield model has been experimentally analyzed.

Keywords— Neural networks, combinatorial optimization, quadratic assignment problem

1. Introduction

Neural networks can achieve high computational performance through massive parallelism. As shown in [1-3], they usually provide good solutions to different combinatorial optimization problems, including the NP-complete traveling salesman problem. The quadratic assignment problem (QAP) belongs to a class of optimization problems well suited for neural network applications. Furthermore, the QAP is a generalization of several other important optimization problems, such as the TSP, the graph partitioning problem (GPP), and the matching problem. There exist conventional algorithms for solving these problems, such as the branch-and-bound algorithm; however, the computational complexity of these algorithms is non-polynomial. On the other hand, the computational time of neural networks is not expected to grow rapidly with problem size. Also, the technology is available to implement neural network models on very large-scale integrated (VLSI) chips. That is why we are interested in neural network solutions.

This paper is organized as follows. In Section 2 and Section 3 the Hopfield neural network model and the quadratic assignment problem are summarized, respectively. In Section 4 we show how to map the QAP onto the Hopfield model, and computational examples are given in Section 5.

2. Hopfield Neural Network

The Hopfield neural network is topologically characterized by the presence of feedback between each pair of neurons. It is worth noting that we will focus on the continuous Hopfield model [2]. Each feedback connection is associated with a weight that expresses the influence of one neuron on another, and each neuron may be supplied with an external bias current. The feedback and bias currents determine the net input of a neuron as

y_i^{new} = y_i^{old} + \Delta t \left( \sum_{j=1}^{n} T_{ij} x_j + I_i - \eta\, y_i^{old} \right) ,   (1)

where the subscript i denotes the index of the neuron, η is a user-selected decay constant, ∆t is the time step-size, x_j is the output of neuron j, I_i is the bias current of neuron i, and T_ij is the weight of the connection between neuron i and neuron j. The output of each neuron x_j is then obtained through a nonlinear activation function and is given by

x_j = \frac{1}{2} \left( 1 + \tanh \lambda y_j \right) .   (2)
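As a minimal illustration, the synchronous update of Eqs. (1) and (2) can be sketched as follows; this is a toy sketch assuming NumPy, and the weights, gain, and step size below are arbitrary illustrative values, not the parameters used later in the paper:

```python
import numpy as np

def hopfield_step(y, T, I, dt=0.01, eta=1.0, lam=5.0):
    """One synchronous update: Eq.(2) activation, then Eq.(1) integration."""
    x = 0.5 * (1.0 + np.tanh(lam * y))        # Eq.(2): outputs lie in (0,1)
    y_new = y + dt * (T @ x + I - eta * y)    # Eq.(1): net-input update
    return y_new, x

# Toy network: 3 neurons, symmetric weights with zero diagonal.
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3)); T = (T + T.T) / 2.0; np.fill_diagonal(T, 0.0)
I = np.zeros(3)
y = rng.normal(size=3)
for _ in range(1000):
    y, x = hopfield_step(y, T, I)
```

All neuron outputs remain inside the unit interval throughout the iteration, as required by the sigmoid of Eq. (2).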


The parameter λ determines the slope of the sigmoid function and is user-selectable. The updating of neural activations can be either synchronous or asynchronous; we will assume that it is synchronous. Hopfield and others [2] have shown that a recurrent network is stable if the weight matrix is symmetric and has only zeros on the main diagonal; of course, this is only a sufficient condition. For stable recurrent networks it is possible to verify that the energy function (Lyapunov function) is given by

E(x) = -\frac{1}{2} \sum_{i,j=1}^{n} T_{ij} x_i x_j - \sum_{i=1}^{n} I_i x_i + \frac{1}{\lambda} \sum_{i=1}^{n} \int_{0.5}^{x_i} g^{-1}(x)\, dx ,   (3)

where n is the number of neurons in the network. The network converges to local minima corresponding to some vertices of the n-dimensional hypercube [1,2,3].
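The Lyapunov property can be checked numerically. The following sketch, assuming NumPy with arbitrary sizes and constants, integrates the dynamics underlying Eq. (1) and evaluates the energy of Eq. (3); with g^{-1}(x) = (1/λ) artanh(2x - 1), the integral from 0.5 to x_i has the closed form (1/2)(u artanh u + ln(1 - u^2)/2) with u = 2x_i - 1:

```python
import numpy as np

def g(y, lam):
    # Eq.(2): sigmoid activation
    return 0.5 * (1.0 + np.tanh(lam * y))

def energy(x, T, I, lam):
    """E(x) of Eq.(3), with the gain integral evaluated in closed form."""
    u = 2.0 * x - 1.0
    integ = 0.5 * (u * np.arctanh(u) + 0.5 * np.log1p(-u * u))
    return -0.5 * x @ T @ x - I @ x + integ.sum() / lam

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 4)) / 3.0; T = (T + T.T) / 2.0; np.fill_diagonal(T, 0.0)
I = np.zeros(4)
lam, dt, eta = 2.0, 0.01, 1.0
y = 0.1 * rng.normal(size=4)
E0 = energy(g(y, lam), T, I, lam)
for _ in range(2000):
    y = y + dt * (T @ g(y, lam) + I - eta * y)   # Euler step of the dynamics
E1 = energy(g(y, lam), T, I, lam)
```

With a symmetric zero-diagonal weight matrix, the energy at the end of the trajectory does not exceed the initial energy.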

3. Quadratic Assignment Problem

The goal of the quadratic assignment problem (QAP) of order n is to assign n objects to n positions. It is a classic combinatorial optimization problem and belongs to the class of NP-hard problems; it is easy to state but very difficult to solve [13]. The QAP has been used to develop useful models for many different real-world situations. For an excellent review of QAPs and their mathematical treatment, see the references [6,7,8,9,14].

The QAP can be described mathematically as follows. Given a finite set N = {1,…,n} and two matrices A = [a_ij] and B = [b_ij], we have to find a permutation ρ of the set N which minimizes the quadratic functional

Z(\rho) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}\, b_{\rho(i)\rho(j)} .   (4a)

For every permutation ρ there is a corresponding permutation matrix X = [x_ij] with

\sum_{j=1}^{n} x_{ij} = 1 \quad \text{for } i = 1,\dots,n ,   (4b)

\sum_{i=1}^{n} x_{ij} = 1 \quad \text{for } j = 1,\dots,n ,   (4c)

where x_ij ∈ {0,1} for i, j = 1,…,n. Eqs. (4b) and (4c) are called the permutation constraints. Hence the QAP can be reformulated as an integer program with a quadratic objective:

\arg\min_{X} Z = \arg\min_{X} \sum_{i,j=1}^{n} \sum_{k,l=1}^{n} a_{ik}\, b_{jl}\, x_{ij}\, x_{kl} ,   (5)

with the two conditions in Eqs. (4b) and (4c), and

x_{ij} = \begin{cases} 1 & \text{if object } i \text{ is assigned to position } j \\ 0 & \text{otherwise.} \end{cases}

The product a_ik b_jl in Eq. (5) is the cost associated with assigning object i to position j and object k to position l. The objective function Z is defined as the total cost of the assignment. The condition in Eq. (4b) ensures that every object is given a single position, and the condition in Eq. (4c) ensures that each position is occupied by only one object.
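To make Eqs. (4a) and (5) concrete, the objective can be evaluated directly and, for very small n, minimized by brute force. A sketch assuming NumPy, where the 4x4 matrices are hypothetical illustrative data:

```python
import numpy as np
from itertools import permutations

def qap_cost(A, B, perm):
    """Z(rho) of Eq.(4a): sum over i,j of a_ij * b_{rho(i),rho(j)}."""
    n = len(perm)
    return sum(A[i, j] * B[perm[i], perm[j]] for i in range(n) for j in range(n))

# Hypothetical flow (A) and distance (B) matrices: symmetric, zero diagonal.
A = np.array([[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]])
B = np.array([[0, 2, 5, 3], [2, 0, 1, 6], [5, 1, 0, 2], [3, 6, 2, 0]])

# Exhaustive search over all n! permutations (feasible only for tiny n).
best = min(permutations(range(4)), key=lambda p: qap_cost(A, B, p))
```

Exhaustive search is of course impractical beyond small n, which is precisely why heuristic and neural approaches are of interest.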

We briefly show some examples in this section. Other applications concern the design of typewriter keyboards and control panels; see for example [5]. Many exact and heuristic algorithms have been developed in the field of operations research (OR) to solve QAPs. Interested readers should consult Ref. [15].

3.1 Facility layout

A typical application is the facility layout problem, in which n facilities have to be assigned to n locations. In this particular case a_ik is the flow from facility i to facility k, and b_jl is the distance between location j and location l. Mathematically we get the following model:

x_{ij} = \begin{cases} 1 & \text{if facility } i \text{ is placed in position } j \\ 0 & \text{otherwise.} \end{cases}

3.2 Backboard wiring

This application concerns placing certain modules on a board and connecting the modules by wires. The modules should be placed such that the total wire length becomes minimal. Mathematically we get the following model:

x_{ij} = \begin{cases} 1 & \text{if module } i \text{ is placed in position } j \\ 0 & \text{otherwise,} \end{cases}

where a_ik is the number of connections between modules i and k, and b_jl is the length of the connection between location j and location l.

3.3 Graph partitioning problem (GPP)

In the graph partitioning problem one tries to partition a graph G = (V,E) into two equal-sized sets with the minimum number of edges between the two sets. The GPP can be expressed as a QAP by defining

a_{ij} = \begin{cases} 1 & \text{if } i \le n/2 < j \text{ or } j \le n/2 < i \\ 0 & \text{otherwise} \end{cases}

and

b_{ij} = \begin{cases} 1 & \text{if } (i,j) \in E \\ 0 & \text{otherwise.} \end{cases}

A partition represented by a permutation ρ is defined as follows: a vertex j belongs to the first set if ρ(j) ≤ n/2, and to the second set otherwise.

4. Mapping Quadratic Assignment Problem onto Hopfield Model

To map a problem onto a Hopfield neural network we need to perform the following:

(a) Choose a representation scheme which allows the outputs of the neurons (the states of the network) to be decoded into a solution of the problem.

(b) Choose an energy function whose minimum value corresponds to the best solutions of the problem to be mapped.

(c) Derive the parameters of the neural network (the connection weights T and the external bias currents I) from the energy function. These should appropriately represent the instance of the specific problem to be solved.

The first step in the mapping of a combinatorial optimization problem is the representation of the objective function, as well as the problem constraints, in the form of a single energy function. There is no direct method for mapping a constrained optimization problem onto Hopfield neural networks except through additional terms in the energy function E which penalize the violation of the constraints.

For solving the QAP by the Hopfield model [3], we arrange the neurons in a matrix, in which each neuron is identified by a pair of indices i and j indicating its row and column, respectively. The input-output relationship of the neuron in row i and column j is given by x_ij = g_λ(y_ij). There is a feedback path between pairs of neurons, designated by T_{ij,kl} and referred to as the connection matrix. Furthermore, there is an external bias I_ij supplied to each neuron. The differential equation describing the dynamics of the neurons is

\frac{dy_{ij}}{dt} = -\frac{y_{ij}}{\gamma} + \sum_{k=1}^{n} \sum_{l=1}^{n} T_{ij,kl}\, x_{kl} + I_{ij} ,

where γ is the time constant of the neuron. It has been shown that the quadratic energy function may be defined as

E(x) = -\frac{1}{2} \sum_{i,j=1}^{n} \sum_{k,l=1}^{n} T_{ij,kl}\, x_{ij}\, x_{kl} - \sum_{i,j=1}^{n} I_{ij}\, x_{ij} + \frac{1}{\lambda} \sum_{i,j=1}^{n} \int_{1/2}^{x_{ij}} g^{-1}(x)\, dx .   (6)

The state of the neural network always lies in the interior of the hypercube defined by 0 ≤ x_ij ≤ 1. Furthermore, it has been proven that the local minima of this energy function occur only at corners of the hypercube. We construct an energy function in the form of Eq. (6) whose local minima correspond to the solutions of the particular QAP. Then, from the correspondence between the constructed energy function and Eq. (6), the parameter values of the neural network are determined. We used the penalty function

\Pi(x) = \alpha \sum_{i=1}^{n} \left\{ \sum_{j=1}^{n} x_{ij} - 1 \right\}^2 + \beta \sum_{j=1}^{n} \left\{ \sum_{i=1}^{n} x_{ij} - 1 \right\}^2 + \gamma \left\{ \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij} - n \right\}^2 + \xi \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij} (1 - x_{ij}) .   (7)

The first three terms on the right-hand side are equal to 0 for permutation matrices, but may also be zero in other cases, such as for doubly stochastic matrices. Therefore we add the fourth term to ensure that the final solution consists of only binary values. Thus,


\Pi(x) = \begin{cases} 0 & \text{if } x \text{ is a permutation matrix} \\ > 0 & \text{otherwise.} \end{cases}

Other penalty functions can be formulated using the permutation constraints in Eqs. (4b) and (4c) [11,12].
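As a quick check of Eq. (7) and the property above, consider the following sketch, assuming NumPy; the unit coefficients are arbitrary choices:

```python
import numpy as np

def penalty(x, alpha=1.0, beta=1.0, gamma=1.0, xi=1.0):
    """Pi(x) of Eq.(7): row-sum, column-sum, total-sum and binarity terms."""
    n = x.shape[0]
    row = np.sum((x.sum(axis=1) - 1.0) ** 2)   # each object gets one position
    col = np.sum((x.sum(axis=0) - 1.0) ** 2)   # each position gets one object
    tot = (x.sum() - n) ** 2                   # exactly n assignments in total
    binar = np.sum(x * (1.0 - x))              # zero only for binary entries
    return alpha * row + beta * col + gamma * tot + xi * binar

P = np.eye(4)[[2, 0, 3, 1]]        # a permutation matrix
D = np.full((4, 4), 0.25)          # doubly stochastic but not a permutation
```

The doubly stochastic matrix D satisfies the first three terms exactly; only the fourth (binarity) term makes its penalty positive, which is the point of adding it.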

We show that by a suitable choice of ξ_ij the diagonal of the matrix T becomes null and no local minimum exists in the interior of the hypercube [0,1]^{n^2}. After relaxation of the permutation constraints by the penalty method, the QAP can be mathematically described as

\arg\min_{x} Z = \arg\min_{x} \left( -\frac{1}{2} \sum_{i,j=1}^{n} \sum_{k,l=1}^{n} T_{ij,kl}\, x_{ij}\, x_{kl} - \sum_{i,j=1}^{n} I_{ij}\, x_{ij} \right) , \quad x_{ij} \in \{0,1\} .   (8)

The relaxation of the permutation constraints means the addition of Π(x) to the energy function, which penalizes the violation of the constraints. By comparing Eq. (6) and Eq. (8), we find the elements of the connection matrix and the external biases as

T_{ij,kl} = -2 \left\{ a_{ik}\, b_{jl} + \alpha\, \delta_{ik} + \beta\, \delta_{jl} + \gamma - \xi_{ij}\, \delta_{ik} \delta_{jl} \right\} ,   (9a)

I_{ij} = 2\alpha + 2\beta + 2\gamma n + \xi_{ij} - \varepsilon_{ij}\, a_{ij}\, b_{ij} ,   (9b)

with

\varepsilon_{ij} = \begin{cases} 1 & \text{if } a_{ij} b_{ij} \ge 0 \\ -1 & \text{otherwise.} \end{cases}

The diagonal terms of the connection matrix become

T_{ij,ij} = -2 \left( \alpha + \beta + \gamma - \xi_{ij} \right) .   (10)

If we choose ξ_ij ≥ α + β + γ, the diagonal terms of the connection matrix become positive or zero. The connection matrix T and the external bias currents I of the network become

T_{ij,kl} = -2 \left\{ \alpha\, \delta_{ik} (1 - \delta_{jl}) + \beta\, \delta_{jl} (1 - \delta_{ik}) + \gamma (1 - \delta_{ik} \delta_{jl}) + a_{ik}\, b_{jl} - \left( \xi_{ij} - \varepsilon_{ij}\, a_{ij}\, b_{ij} \right) \delta_{ik} \delta_{jl} \right\} ,   (11)

I_{ij} = \alpha + \beta + (2n - 1)\gamma - \varepsilon_{ij}\, a_{ij}\, b_{ij} .   (12)

In the following section, we apply the Hopfield model with the connection matrix T and the bias currents I determined by Eqs. (11) and (12).

5. Computation Tests

We have treated several examples of different sizes taken from the literature, where the matrices A and B are symmetric with zero diagonal elements. The parameter λ (the gain of the neuron) is increased at regular intervals as

\lambda(m+1) = a\, \lambda(m) , \quad a > 1 ,   (13)

where m is the interval size. It should be noted that we adopt the idea of simulated annealing, where λ corresponds to the inverse of the temperature. We have chosen the penalty coefficients as follows:

\alpha = \beta = \gamma , \quad \xi_{ij} = \alpha + \beta + \gamma , \quad \forall\, i,j = 1,\dots,n .

The modification of the penalty coefficient is made in the following way:

\alpha_{k+1} = d\, \alpha_k , \quad d > 1 .

Here α_0 is an arbitrary positive real number. To improve the neural solution, we have applied the 2-exchange improvement method, which consists of choosing the best value of the objective function among all permutations obtained by exchanging two elements of the neural solution [4].
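The 2-exchange step can be sketched as follows. This sketch assumes NumPy, uses hypothetical matrices, and repeats the improving-swap pass until no exchange helps, a straightforward iteration of the single pass described above:

```python
import numpy as np
from itertools import permutations

def qap_cost(A, B, perm):
    # Z(rho) of Eq.(4a)
    n = len(perm)
    return sum(A[i, j] * B[perm[i], perm[j]] for i in range(n) for j in range(n))

def two_exchange(A, B, perm):
    """Greedy 2-exchange: accept improving swaps until none remains."""
    perm, n = list(perm), len(perm)
    improved = True
    while improved:
        improved, best = False, qap_cost(A, B, perm)
        for i in range(n):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]      # try a swap
                c = qap_cost(A, B, perm)
                if c < best:
                    best, improved = c, True             # keep the swap
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo it
    return perm

# Hypothetical small instance and an arbitrary starting permutation.
A = np.array([[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]])
B = np.array([[0, 2, 5, 3], [2, 0, 1, 6], [5, 1, 0, 2], [3, 6, 2, 0]])
start = (0, 1, 2, 3)
improved_perm = two_exchange(A, B, start)
```

The loop terminates because the cost strictly decreases with each accepted swap, and the returned permutation is 2-exchange locally optimal: no single swap of two elements can reduce Z further.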

For the examples of Nugent [10] we have tested several instances with different sizes n = 12, 20, and 30, and show only the results of the simulation for n = 30 in Tables 1 and 2. The optimal solution has the cost 6124.

The model parameters are defined as follows: λ_0 is the initial value of λ, and MIT is the maximum number of iterations.


Table 1. Experimental results for n = 30, λ_0 = 0.5, MIT = 10000, m = 100, α_0 = 600, and d = 1.1

CPUI   IMP    ITP   FON
4.32   6328   186   7094
4.32   6328   197   7160
4.55   6672   179   7290
4.35   6480   178   7514
4.34   6334   139   7322

Table 2. Experimental results for n = 30, λ_0 = 0.5, MIT = 10000, m = 300, α_0 = 450, and d = 1.1

CPUI   IMP    ITP   FON
4.34   6328   241   7094
4.33   6328   190   7094
4.34   6372   212   7140
4.39   6256   170   7318
4.34   6328   241   7094

m: interval size of the λ change.
α_0: initial value of the penalty coefficient.
α_f: final penalty coefficient.
FON: value of the cost function.
ITP: number of iterations used to obtain the permutation (feasible solution).
IMP: cost function after improvement.
CPUI: computation time for the improvement.

We observed that the computational times are similar with and without the improvement method. This may show that the obtained solution belongs to the neighborhood of the optimal solutions. As shown in Figure 1, we also found the relationship between λ and α for feasible permutation solutions.

Figure 1. Three solution regions in the λ (gain of the neuron) and α (penalty coefficient) space. P: set of feasible solutions (permutations); B: set of binary but not feasible solutions; C: set of continuous solutions.


6. Conclusion

We have developed a neural mapping method for finding good solutions to quadratic assignment problems. We have shown that recurrent neural networks provide an interesting alternative to classical algorithms for optimization problems. However, the neural approach does not ensure convergence deterministically; it has to be studied by more effective means, such as a characterization of the problem and its surrounding conditions.

We have noted the difficulty of finding effective definitions and weightings for the numerous coefficients associated with the Hopfield network, which are essential for good behavior. This may be the reason why, so far, only limited applications to optimization problems have been investigated.

This work has shown that the solutions found by using only the Hopfield method are not satisfactory. However, if we consider them as initial solutions for our simple improvement method, the obtained results are much closer to the optimal solutions. We also observed that the additional computational time for the improvement method is negligible; this observation shows that the neural solution belongs to the neighborhood of the optimal solution. We also experimentally found the relationship between λ and α for admissible permutation solutions.

References

[1] J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat. Acad. Sci. U.S.A., vol. 79, pp. 2554-2558, 1982.

[2] J.J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Nat. Acad. Sci. U.S.A., vol. 81, pp. 3088-3092, 1984.

[3] J.J. Hopfield and D.W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.

[4] G.C. Armour and E.S. Buffa, "A heuristic algorithm and simulation approach to relative location of facilities," Management Science, vol. 9, pp. 294-309, Jan. 1963.

[5] R.E. Burkard and J. Offerman, "Entwurf von Schreibmaschinentastaturen mittels quadratischer Zuordnungsprobleme," Z. Oper. Res., vol. 21, pp. B121-B132, 1977.

[6] R.E. Burkard and U. Derigs, "Assignment and matching problems: solution methods with FORTRAN programs," Lecture Notes in Economics and Mathematical Systems, vol. 184, Springer, Berlin, 1980.

[7] R.E. Burkard and T. Bonniger, "A heuristic for quadratic Boolean programs with applications to quadratic assignment problems," European J. Operational Res., vol. 13(4), pp. 374-386, 1983.

[8] R.E. Burkard and F. Rendl, "A thermodynamically motivated simulation procedure for combinatorial optimization problems," European J. Operational Res., vol. 17, pp. 169-174, 1984.

[9] R.E. Burkard, "Quadratic assignment problems," European J. Operational Res., vol. 15(3), pp. 283-289, 1984.

[10] C.E. Nugent, T.E. Vollman, and J. Ruml, "An experimental comparison of techniques for the assignment of facilities to locations," Operations Research, vol. 16, pp. 150-173, 1968.

[11] G.A. Azim, "Applications of neural networks to combinatorial optimization," Ph.D. thesis, Paris Dauphine University, France, May 1992.

[12] G.A. Azim, "Mapping the bi-partition graph and quadratic assignment problems into Hopfield neural networks," Far East Journal of Applied Mathematics, vol. 2, 1998.

[13] T.C. Koopmans and M.J. Beckmann, "Assignment problems and the location of economic activities," Econometrica, vol. 25, pp. 53-76, 1957.

[14] R.E. Burkard, S. Karisch, and F. Rendl, "QAPLIB - a quadratic assignment problem library," European J. Operational Res., pp. 115-119, 1991.

[15] G. Finke, R.E. Burkard, and F. Rendl, "Quadratic assignment problems," Annals of Discrete Mathematics, vol. 31, pp. 61-82, 1987.


Gamil A. Azim received the M.S. and Ph.D. degrees from Paris Dauphine University, France, in 1988 and 1992, respectively. He is currently an Assistant Professor in the Department of Computer Sciences, College of Computer, Qassim University, K.S.A. His current research interests include neural networks, combinatorial optimization, meta-heuristic algorithms, pattern recognition, and evolutionary computation (genetic algorithms and genetic programming).
