
ANT COLONY OPTIMISATION APPLIED TO JOB SHOP
SCHEDULING PROBLEM





A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS


FOR THE DEGREE OF




Bachelor of Technology


in


Mechanical Engineering




By

DEBASISH DAS



Under the Guidance of

Prof. B.B. Biswal




Department of Mechanical Engineering

National Institute of Technology

Rourkela



2009







National Institute of Technology

Rourkela



CERTIFICATE


This is to certify that the thesis entitled "ACO Applied to Job Shop Scheduling Problem", submitted by Mr. Debasish Das in partial fulfillment of the requirements for the award of Bachelor of Technology Degree in Mechanical Engineering at National Institute of Technology, Rourkela (Deemed University), is an authentic work carried out by him under my guidance.

To the best of my knowledge, the matter embodied in the thesis has not been submitted to any University/Institute for the award of any Degree or Diploma.



Date:










Prof. B.B. Biswal

Dept. of Mechanical Engg.

National Institute of Technology

Rourkela - 769008



Acknowledgement





I would like to express my deep sense of gratitude and respect to my supervisor, Prof. B.B. Biswal, for his excellent guidance, suggestions and constructive criticism. I consider myself extremely lucky to be able to work under the guidance of such a dynamic personality.

I am also thankful to Prof. K.P. Maity and Prof. P.J. Rath (Project Coordinators) for the smooth completion of the project curriculum. I extend my gratitude to all staff members of the Department of Mechanical Engineering and other departments of NIT Rourkela.

Lastly, I would like to render heartiest thanks to the M.Tech (ME) students, whose ever-helping nature and suggestions have helped me to complete the present work.














Debasish Das


CONTENTS



Sl. No.   Topic
1.        Certificate
2.        Acknowledgement
3.        Contents
4.        Abstract
5.        Chapter 1: General Introduction
6.        Chapter 2: Literature Survey
7.        Chapter 3: Present Work and Problem Formulation
8.        Results and Discussion
9.        Conclusion
10.       References












ABSTRACT




The problem of efficiently scheduling production jobs on several machines is an important consideration when attempting to make effective use of a multi-machine system such as a flexible job shop scheduling production system (FJSP). In most of its practical formulations, the FJSP is known to be NP-hard, so exact solution methods are infeasible for most problem instances and heuristic approaches must therefore be employed to find good solutions within reasonable search time. In this work, two closely related approaches to the resolution of the flexible job shop scheduling production system are described. These approaches combine the Ant System optimisation meta-heuristic (AS) with local search methods, including tabu search. The efficiency of the developed method is compared with that of other approaches.



















CHAPTER 1

GENERAL INTRODUCTION



Ant Colony Optimization (ACO) is a metaheuristic inspired by the foraging behavior of ants which has been used to solve combinatorial optimization problems; the Ant System (AS) was the first algorithm within this class.

In the classical Job Shop Scheduling Problem (JSSP), a finite number of jobs are to be processed by a finite number of machines. Each job consists of a predetermined sequence of operations, each of which is processed without interruption for a given period of time on its machine. The operations belonging to the same job are processed according to their technological sequence, and none of them can begin before the preceding operation has finished. A feasible schedule is an assignment of operations in time on the machines that does not violate the job shop constraints. The makespan is defined as the maximum completion time over all jobs. The objective of the JSSP is to find a schedule that minimizes the makespan.

Modern hybrid heuristics are by their nature non-exhaustive, so there is often scope for different approaches to improve on previous solution methods in terms of execution speed or the quality of feasible solutions. Traditional approaches to the FJSP are as varied as the different formulations of the problem, and include fast, simple heuristics [2][12], tabu search [15], evolutionary approaches [5] and modern hybrid meta-heuristics that consolidate the advantages of various approaches [1][13]. Ant colony optimisation (ACO) was described by Dorigo in his PhD thesis [6] and was inspired by the ability and organisation of real ant colonies, which use external chemical pheromone trails as a means of communication. Ant system algorithms have since been widely employed on NP-hard combinatorial optimisation problems, including problems over continuous design spaces [4] and job shop scheduling [16]; however, they have not previously been applied to the FJSP described in what follows. Local search methods encompass many optimisation approaches, and their use together with an ant system approach has been shown to be efficient [7]. The approach described here for the FJSP is assessed by the quality of the solutions it finds on benchmark problems. The performance of the proposed approach is evaluated and compared with the results obtained by other methods. In this work, an application of ant system algorithms combined with the tabu search heuristic is proposed for solving the FJSP. The FJSP is described and formulated in section 2; in section 3, the suggested approach combining ACO with tabu search is described; an illustrative example is given in section 4; the last section is devoted to the presentation of some results and conclusions relating to this research work.






















CHAPTER 2

LITERATURE SURVEY



Ant Colony Optimization (ACO) is a paradigm for designing metaheuristic algorithms for combinatorial optimization problems. The first algorithm that can be classified within this framework was presented in 1991 and, since then, many diverse variants of the basic principle have been reported in the literature. The essential trait of ACO algorithms is the combination of a priori information about the structure of a promising solution with a posteriori information about the structure of previously obtained good solutions.

Metaheuristic algorithms are algorithms which, in order to escape from local optima, drive some basic heuristic: either a constructive heuristic, starting from a null solution and adding elements to build a good complete one, or a local search heuristic, starting from a complete solution and iteratively modifying some of its elements in order to achieve a better one. The metaheuristic part permits the low-level heuristic to obtain solutions better than those it could have achieved alone, even if iterated. Usually the controlling mechanism is achieved either by constraining or by randomizing the set of local neighbor solutions considered in local search (as is the case of simulated annealing or tabu search), or by combining elements taken from different solutions (as is the case of evolution strategies and genetic or bionomic algorithms).

The characteristic of ACO algorithms is their explicit use of elements of previous solutions. In fact, they drive a constructive low-level solution, as GRASP [30] does, but include it in a population framework and randomize the construction in a Monte Carlo way. A Monte Carlo combination of different solution elements is also suggested by genetic algorithms, but in the case of ACO the probability distribution is explicitly defined by previously obtained solution components. The particular way of defining components and the associated probabilities is problem-specific and can be designed in different ways, facing a trade-off between the specificity of the information used for the conditioning and the number of solutions which need to be constructed before effectively biasing the probability distribution to favor the emergence of good solutions. Different applications have favored either the use of conditioning at the level of decision variables, thus requiring a huge number of iterations before getting a precise distribution, or computational efficiency, thus using very coarse conditioning information.

The chapter is structured as follows. Section 2 describes the common elements of the heuristics following the ACO paradigm and outlines some of the variants proposed. Section 3 presents the application of ACO algorithms to a number of different combinatorial optimization problems and ends with a wider overview of the problems attacked by means of ACO up to now. Section 4 outlines the most significant theoretical results so far published about the convergence properties of ACO variants.



5.2.1 Ant System

The importance of the original Ant System (AS) resides mainly in being the prototype of a number of ant algorithms which collectively implement the ACO paradigm. AS already follows the outline presented in the previous subsection, specifying its elements as follows.

The move probability distribution defines the probabilities p_ιψ^k to be equal to 0 for all moves which are infeasible (i.e., they are in the tabu list of ant k, that is, a list containing all moves which are infeasible for ant k starting from state ι); otherwise they are computed by means of formula (5.1), where α and β are user-defined parameters (0 ≤ α, β ≤ 1):

p_ιψ^k = [τ_ιψ]^α [η_ιψ]^β / Σ_(ιμ)∉tabu_k [τ_ιμ]^α [η_ιμ]^β,   if (ιψ) ∉ tabu_k      (5.1)

In formula (5.1), tabu_k is the tabu list of ant k, while parameters α and β specify the impact of trail and attractiveness, respectively.

After each iteration t of the algorithm, i.e., when all ants have completed a solution, trails are updated by means of formula (5.2):

τ_ιψ(t) = ρ τ_ιψ(t-1) + Δτ_ιψ      (5.2)

where ρ, 0 ≤ ρ ≤ 1, is a user-defined parameter called the evaporation coefficient, and Δτ_ιψ represents the sum of the contributions of all ants that used move (ιψ) to construct their solution. The ants' contributions are proportional to the quality of the solutions achieved, i.e., the better a solution is, the higher the trail contributions added to the moves it used.

For example, in the case of the TSP, moves correspond to arcs of the graph; thus a state ι could correspond to a path ending in node i, the state ψ to the same path but with the arc (ij) added at the end, and the move would be the traversal of arc (ij). The quality of the solution of ant k would be the length L_k of the tour found by the ant, and formula (5.2) would become τ_ij(t) = ρ τ_ij(t-1) + Δτ_ij, with

Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k

where m is the number of ants and Δτ_ij^k is the amount of trail laid on edge (ij) by ant k, which can be computed as

Δτ_ij^k = Q / L_k   if ant k used edge (ij) in its tour, and 0 otherwise,

Q being a constant parameter.

The ant system simply iterates a main loop in which m ants construct their solutions in parallel and then update the trail levels. The performance of the algorithm depends on the correct tuning of several parameters, namely: α and β, the relative importance of trail and attractiveness; ρ, the trail persistence; τ_ij(0), the initial trail level; m, the number of ants; and Q, used for defining high-quality solutions as those with low cost. The algorithm is the following.


1. {Initialization}
   Initialize τ_ιψ and η_ιψ for every move (ιψ).
2. {Construction}
   For each ant k (currently in state ι) do
      repeat
         choose in probability the state to move into;
         append the chosen move to the k-th ant's tabu list tabu_k;
      until ant k has completed its solution.
   end for
3. {Trail update}
   For each ant move (ιψ) do
      compute Δτ_ιψ;
      update the trail matrix.
   end for
4. {Terminating condition}
   If not(end test) go to step 2.
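As an illustration, the following is a minimal Python sketch of the AS loop just described, applied to a small symmetric TSP. The instance, the parameter values and all function names are illustrative assumptions, not part of the original text.

import random

# Illustrative 4-city symmetric TSP; distances are assumed, not from the thesis.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n, m, Q, rho, alpha, beta = 4, 5, 1.0, 0.5, 1.0, 1.0
tau = [[1.0] * n for _ in range(n)]          # initial trail levels
eta = [[0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]

def build_tour():
    """Construction step: each ant chooses moves with probability (5.1)."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        i = tour[-1]
        allowed = [j for j in range(n) if j not in tour]       # not in the tabu list
        weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in allowed]
        tour.append(random.choices(allowed, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for t in range(50):                                   # main AS loop
    tours = [build_tour() for _ in range(m)]
    # Trail update, formula (5.2): evaporation plus the ants' contributions Q/L_k
    delta = [[0.0] * n for _ in range(n)]
    for tour in tours:
        L = tour_length(tour)
        if best is None or L < tour_length(best):
            best = tour
        for k in range(n):
            i, j = tour[k], tour[(k + 1) % n]
            delta[i][j] += Q / L
            delta[j][i] += Q / L
    for i in range(n):
        for j in range(n):
            tau[i][j] = rho * tau[i][j] + delta[i][j]

print(best, tour_length(best))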


5.2.2 Ant Colony System


AS was the first algorithm inspired by real ant behavior. It was initially applied to the solution of the traveling salesman problem but was not able to compete with the state-of-the-art algorithms in the field. On the other hand, it has the merit of having introduced ACO algorithms and of showing the potential of using artificial pheromone and artificial ants to drive the search for better solutions to complex optimization problems. Subsequent research was motivated by two goals: the first was to improve the performance of the algorithm, and the second was to investigate and better explain its behavior. Gambardella and Dorigo proposed the Ant-Q algorithm in 1995, an extension of AS which integrates some ideas from Q-learning, and in 1996 the Ant Colony System (ACS), a simplified version of Ant-Q which maintained approximately the same level of performance, measured by algorithm complexity and by computational results. Since ACS is the basis of many algorithms defined in the following years, we focus attention on ACS rather than Ant-Q. ACS differs from the earlier AS in three main aspects:


Pheromone

In ACS, once all ants have computed their tour (i.e., at the end of each iteration), AS updates the pheromone trail using all the solutions produced by the ant colony. Each edge belonging to one of the computed solutions is modified by an amount of pheromone proportional to its solution value. At the end of this phase the pheromone of the entire system evaporates and the process of construction and update is iterated. On the contrary, in ACS only the best solution computed since the beginning of the computation is used to globally update the pheromone. As was the case in AS, global updating is intended to increase the attractiveness of promising routes, but the ACS mechanism is more effective since it avoids long convergence times by directly concentrating the search in a neighborhood of the best tour found up to the current iteration of the algorithm.

In ACS, the final evaporation phase is substituted by a local updating of the pheromone applied during the construction phase. Each time an ant moves from the current city to the next, the pheromone associated with the edge is modified in the following way:

τ_ij(t) = ρ τ_ij(t-1) + (1 - ρ) τ_0

where 0 ≤ ρ ≤ 1 is a parameter (usually set at 0.9) and τ_0 is the initial pheromone value. τ_0 is defined as τ_0 = (n · L_nn)^(-1), where L_nn is the tour length produced by the execution of one ACS iteration without the pheromone component (this is equivalent to a probabilistic nearest neighbor heuristic). The effect of local updating is to make the desirability of edges change dynamically: every time an ant uses an edge it becomes slightly less desirable, and only for the edges which never belonged to a global best tour does the pheromone remain τ_0. An interesting property of these local and global updating mechanisms is that the pheromone τ_ij(t) of each edge is bounded below by τ_0. A similar approach was proposed with the Max-Min-AS, which explicitly introduces lower and upper bounds on the value of the pheromone trails.
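The following short Python sketch illustrates the ACS local updating rule described above; the parameter values and function name are assumptions made for illustration.

def acs_local_update(tau, i, j, rho=0.9, tau0=0.01):
    """Apply the ACS local update tau_ij = rho*tau_ij + (1-rho)*tau0
    each time an ant traverses edge (i, j)."""
    tau[i][j] = rho * tau[i][j] + (1.0 - rho) * tau0
    tau[j][i] = tau[i][j]          # symmetric TSP: keep both directions equal
    return tau[i][j]

# Example: an edge whose trail is above tau0 is pulled back towards tau0,
# making it slightly less desirable for the ants that follow.
tau = [[0.05, 0.05], [0.05, 0.05]]
acs_local_update(tau, 0, 1)
print(tau[0][1])   # 0.046, moving towards tau0 = 0.01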


State Transition Rule

During the construction of a new solution, the state transition rule is the phase in which each ant decides which state to move to next. In ACS a new state transition rule, called pseudo-random-proportional, is introduced. The pseudo-random-proportional rule is a compromise between the pseudo-random state choice rule typically used in Q-learning [76] and the random-proportional action choice rule typically used in Ant System. With the pseudo-random rule, the chosen state is the best one with probability q_0 (exploitation), while a random state is chosen with probability 1 - q_0 (exploration). Using the AS random-proportional rule, the next state is chosen randomly with a probability distribution depending on η_ij and τ_ij. The ACS pseudo-random-proportional state transition rule provides a direct way to balance the exploration of new states against the exploitation of a priori and accumulated knowledge. The best state is chosen with probability q_0 (a parameter 0 ≤ q_0 ≤ 1, usually fixed to 0.9), and with probability (1 - q_0) the next state is chosen randomly with a probability distribution based on η_ij and τ_ij weighted by α (usually equal to 1) and β (usually equal to 2).
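A minimal Python sketch of this choice rule is given below; the city indices, trail and heuristic values are assumed purely for illustration.

import random

def acs_next_city(i, allowed, tau, eta, q0=0.9, alpha=1.0, beta=2.0):
    """ACS pseudo-random-proportional rule: exploit the best move with
    probability q0, otherwise sample as in the AS random-proportional rule."""
    scores = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in allowed}
    if random.random() < q0:                       # exploitation
        return max(scores, key=scores.get)
    cities = list(scores)                          # exploration
    return random.choices(cities, weights=[scores[j] for j in cities])[0]

# Illustrative call on a 3-city fragment.
tau = [[0, 1.0, 1.0], [1.0, 0, 1.0], [1.0, 1.0, 0]]
eta = [[0, 0.5, 0.2], [0.5, 0, 0.9], [0.2, 0.9, 0]]
print(acs_next_city(0, [1, 2], tau, eta))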





5.2.3 ANTS


ANTS is an extension of AS which specifies some under-defined elements of the general algorithm, such as the attractiveness function to use or the initialization of the trail distribution. This turns out to be a variation of the general ACO framework that makes the resulting algorithm similar in structure to tree search algorithms. In fact, the essential trait that distinguishes ANTS from a tree search algorithm is the lack of a complete backtracking mechanism, which is substituted by a probabilistic (non-deterministic) choice of the state to move into and by an incomplete (approximate) exploration of the search tree: this is the rationale behind the name ANTS, which is an acronym of Approximated Nondeterministic Tree Search. In the following, we outline two distinctive elements of the ANTS algorithm within the ACO framework, namely the attractiveness function and the trail updating mechanism.


Attractiveness

The attractiveness of a move can be effectively estimated by means of lower bounds (upper bounds in the case of maximization problems) on the cost of the completion of a partial solution. In fact, if a state ι corresponds to a partial problem solution, it is possible to compute a lower bound on the cost of a complete solution containing ι. Therefore, for each feasible move (ι, ψ), it is possible to compute the lower bound on the cost of a complete solution containing ψ: the lower the bound, the better the move. Since a large part of research in ACO is devoted to the identification of tight lower bounds for the different problems of interest, good lower bounds are usually available.

When the bound value becomes greater than the current upper bound, it is obvious that the considered move leads to a partial solution which cannot be completed into a solution better than the current best one. The move can therefore be discarded from further analysis. A further advantage of lower bounds is that in many cases the values of the decision variables, as they appear in the bound solution, can be used as an indication of whether each variable will appear in good solutions. This provides an effective way of initializing the trail values.




Trail update

A good trail updating mechanism avoids stagnation, the undesirable situation in which all ants repeatedly construct the same solutions, making any further exploration of the search process impossible. Stagnation derives from an excessive trail level on the moves of one solution, and can be observed in advanced phases of the search process if parameters are not well tuned to the problem.

The trail updating procedure evaluates each solution against the last k solutions globally constructed by ANTS. As soon as k solutions are available, their moving average z̄ is computed; each new solution z_curr is compared with z̄ (and then used to compute the new moving average value). If z_curr is lower than z̄, the trail level of the last solution's moves is increased, otherwise it is decreased. Formula (5.6) specifies how this is implemented:

Δτ_ιψ = τ_0 · (1 - (z_curr - LB) / (z̄ - LB))      (5.6)

where z̄ is the average of the last k solutions and LB is a lower bound on the optimal problem solution cost. The use of a dynamic scaling procedure permits discrimination of small achievements in the later stages of the search, while avoiding focusing the search only around good achievements in the earliest stages.
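The sketch below illustrates, in Python, how the moving-average trail update of formula (5.6) might be implemented; the window size, bound value and data structures are illustrative assumptions.

from collections import deque

def ants_trail_update(tau, moves, z_curr, recent, LB, tau0=1.0):
    """ANTS-style update: compare the new solution cost z_curr with the moving
    average of the last k costs and reinforce or weaken its moves accordingly,
    following formula (5.6): dtau = tau0 * (1 - (z_curr - LB)/(z_avg - LB))."""
    if len(recent) == recent.maxlen:               # only once k costs are known
        z_avg = sum(recent) / len(recent)
        dtau = tau0 * (1.0 - (z_curr - LB) / (z_avg - LB))
        for move in moves:                         # moves of the last solution
            tau[move] = tau.get(move, tau0) + dtau
    recent.append(z_curr)                          # update the moving window

# Illustrative usage: a solution better than the recent average gets dtau > 0.
tau = {}
recent = deque([120, 118, 125, 119, 121], maxlen=5)
ants_trail_update(tau, [(0, 1), (1, 3)], z_curr=110, recent=recent, LB=100)
print(tau)     # both moves receive a positive trail increment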

One of the most difficult aspects to be considered in metaheuristic algorithms is the trade-off between exploration and exploitation. To obtain good results, an agent should prefer actions that it has tried in the past and found to be effective in producing desirable solutions (exploitation); but to discover such actions, it has to try actions not previously selected (exploration). Neither exploration nor exploitation can be pursued exclusively without failing in the task: for this reason, the ANTS algorithm integrates the stagnation avoidance procedure, which facilitates exploration, with the probability definition mechanism based on attractiveness and trails, which determines the desirability of moves.

Based on the elements described, the ANTS algorithm is as follows.

1. Compute a (linear) lower bound LB to the problem.
   Initialize τ_ιψ for every (ι, ψ) with the primal variable values.
2. For k = 1, ..., m (m = number of ants) do
      repeat
         2.1 compute η_ιψ for every (ιψ);
         2.2 choose in probability the state to move into;
         2.3 append the chosen move to the k-th ant's tabu list;
      until ant k has completed its solution;
      2.4 carry the solution to its local optimum.
   end for
3. For each ant move (ιψ), compute Δτ_ιψ and update trails by means of (5.6).
4. If not(end_test) go to step 2.


It can be noted that the general structure of the ANTS algorithm is closely akin to that of a standard tree search procedure. At each stage we have in fact a partial solution which is expanded by branching on all possible offspring; a bound is then computed for each offspring, possibly fathoming dominated ones, and the current partial solution is selected from among those associated with the surviving offspring on the basis of lower bound considerations. By simply adding backtracking and eliminating the Monte Carlo choice of the node to move to, we revert to a standard branch and bound procedure. An ANTS code can therefore be easily turned into an exact procedure.










































Ant Colony System: A Cooperative Learning Approach to the Traveling
Salesman Problem





The state transition rule used by Ant System, called the random-proportional rule, is given by (1), which gives the probability with which ant k in city r chooses to move to city s:

p_k(r, s) = [τ(r, s)] · [η(r, s)]^β / Σ_{u ∈ J_k(r)} [τ(r, u)] · [η(r, u)]^β,   if s ∈ J_k(r), and 0 otherwise      (1)

where τ(r, s) is the pheromone, η(r, s) is the inverse of the distance δ(r, s), J_k(r) is the set of cities that remain to be visited by ant k positioned on city r (to make the solution feasible), and β is a parameter which determines the relative importance of pheromone versus distance.

In (1) we multiply the pheromone on edge (r, s) by the corresponding heuristic value η(r, s). In this way we favor the choice of edges which are shorter and which have a greater amount of pheromone.

In Ant System, the global updating rule is implemented as follows. Once all ants have built their tours, pheromone is updated on all edges according to

τ(r, s) ← (1 - α) · τ(r, s) + Σ_{k=1}^{m} Δτ_k(r, s)      (2)

where α is a pheromone decay parameter, Δτ_k(r, s) = 1/L_k if edge (r, s) belongs to the tour performed by ant k and 0 otherwise, L_k is the length of the tour performed by ant k, and m is the number of ants.

Pheromone updating is intended to allocate a greater amount of pheromone to shorter tours. In a sense, this is similar to a reinforcement learning scheme, in which better solutions get a higher reinforcement (as happens, for example, in genetic algorithms under proportional selection). The pheromone updating formula was meant to simulate the change in the amount of pheromone due both to the addition of new pheromone deposited by ants on the visited edges and to pheromone evaporation.

Pheromone placed on the edges plays the role of a distributed long-term memory: this memory is not stored locally within the individual ants, but is distributed on the edges of the graph. This allows an indirect form of communication called stigmergy.














JOB SHOP SCHEDULING PROBLEM (JSSP)


The classic JSSP is composed of n jobs and m machines and is denoted by n/m/T/C_max, where the parameter n represents the number of jobs, m is the number of machines, T is the technological sequence of the jobs on each machine, and C_max indicates the performance measure to be minimized (i.e., the maximum time taken to complete all jobs). An instance of the JSSP can be represented by a matrix, as shown in Table I.





In the example of Table I, we have 3 jobs, 3 machines and a technological sequence represented in each row for the jobs. In the case of job 1 in Table I, we can see that it should be processed on machine 1 first, with a processing time of 3 (in the matrix, this time is given between parentheses). After that, job 1 is processed on machine 2 with a processing time of 3 and finishes on machine 3 with a processing time of 3. This description is called the technological sequence of job 1. When a job i is processed on a machine j, this is called operation (i, j).

To apply the AS algorithm to the JSSP we use the graph representation G = (V, C ∪ D) described in [11], where:

• V is a set of nodes representing the operations of the jobs together with two special nodes: a start (0) node and an end (*) node, representing the beginning and the end of the schedule, respectively.

• C is a set of conjunctive arcs representing the technological sequences of the operations.

• D is a set of disjunctive arcs representing pairs of operations which must be processed on the same machine.

Figure 1 shows the corresponding graph for the instance of the JSSP described in Table I, whose nodes represent each operation (i, j), where i is the current job and j its corresponding machine (except for the nodes marked (0) and (*), which indicate the start and end of the graph). The processing time of each operation is denoted by t_ij on each node. The conjunctive arcs give the technological sequence connecting all operations of the same job, and the disjunctive arcs indicate pairs of operations on the same machine.
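To make the representation concrete, here is a small Python sketch that builds such a disjunctive graph for a 3 × 3 instance; the processing times used are assumed for illustration and are not taken from Table I, which is not reproduced in this text.

# Assumed 3x3 instance: jobs[i] is the technological sequence of job i,
# given as (machine, processing_time) pairs. Values are illustrative only.
jobs = [
    [(1, 3), (2, 3), (3, 3)],   # job 1
    [(1, 2), (3, 3), (2, 4)],   # job 2
    [(2, 3), (1, 2), (3, 1)],   # job 3
]

V = ["0", "*"]                  # start and end nodes
C = []                          # conjunctive arcs (technological sequence)
D = []                          # disjunctive arcs (same-machine pairs)
times = {}                      # processing time t_ij attached to each node
on_machine = {}                 # machine -> list of operation nodes

for i, seq in enumerate(jobs, start=1):
    prev = "0"
    for machine, p in seq:
        node = (i, machine)                 # operation (i, j): job i on machine j
        V.append(node)
        times[node] = p
        C.append((prev, node))              # precedence within the job
        on_machine.setdefault(machine, []).append(node)
        prev = node
    C.append((prev, "*"))

for machine, ops in on_machine.items():    # every pair sharing a machine
    for a in range(len(ops)):
        for b in range(a + 1, len(ops)):
            D.append((ops[a], ops[b]))

print(len(V), "nodes,", len(C), "conjunctive arcs,", len(D), "disjunctive arcs")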


ANT SYSTEM (AS)


In this section we describe the operation of the classical AS for the JSSP, in which a population of m artificial ants builds solutions by iteratively applying a probabilistic decision policy n times until a solution for the problem is obtained. In order to communicate their individual search experience to the colony, the ants mark the corresponding paths with an amount of pheromone that depends on the type of solution found. This amount is inversely proportional to the cost of the path generated (i.e., if the path found is long, the amount of pheromone deposited is low; otherwise it is high). Therefore, in the following iterations more ants will be attracted to the most promising paths. Besides the pheromone, the ants are guided by a heuristic value that helps them in the construction process. All the decisions taken by an ant (the path found, or solution) are stored in a tabu list (TL). As indicated above, to apply the AS algorithm the problem instance must first be expressed in the graphical representation G. The AS starts with a small amount of pheromone c along each edge of G. Each ant is then assigned a starting position, which is added to its tabu list. The initial ant position is usually chosen at random.

Once the initialization phase is completed, each ant independently constructs a solution by using equation (1) at each decision point until a complete solution has been found. After every ant's tabu list is full, the cost C_max of the obtained solution is calculated. The pheromone amount along each edge (i, j) is then updated according to equation (2). Finally, all tabu lists are emptied. If the stopping criterion has not been reached, the algorithm continues with a new iteration.

The decision of each ant is based not only on the amount of pheromone τ_ij located along edge (i, j), but also on the heuristic value η_ij along this edge. The transition probability of moving from node i to node j for the k-th ant at iteration t is defined as:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{l ∈ allowed_k} [τ_il(t)]^α · [η_il]^β,   if j ∈ allowed_k, and 0 otherwise      (1)

where α and β are parameters which allow the user to balance the importance given to the heuristic (parameter β) against the pheromone trails (parameter α). Setting β = 0 results in only the pheromone information being considered in the ant's decision, whereas if α = 0, only the heuristic information is used by the ant.

The pheromone trail levels to be used in the next iteration of the algorithm are given by the formula:

τ_ij(t + 1) = ρ · τ_ij(t) + Δτ_ij      (2)

where ρ is a coefficient such that (1 - ρ) can be interpreted as a trail evaporation coefficient; that is, (1 - ρ) · τ_ij(t) represents the amount of trail which evaporates on each edge (i, j) in the period between iteration t and t + 1. The total amount of pheromone laid by the m ants, Δτ_ij, is calculated by:

Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k

where Δτ_ij^k is calculated as:

Δτ_ij^k = Q / C_max^k   if ant k used edge (i, j), and 0 otherwise,

where Q is a positive real-valued constant and C_max^k is the cost of the solution of the k-th ant, so that Q / C_max^k gives the quantity of pheromone per unit of time. It is important to note that pheromone evaporation causes the amount of pheromone on each edge of G to decrease over time. The evaporation process is important because it prevents AS from prematurely converging to a sub-optimal solution. In this way, the AS has the capability of forgetting bad (or even partially good) solutions, which favors a more in-depth exploration of the search space.
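The following Python fragment sketches the evaporation-and-deposit update of equation (2) for a set of ant solutions; Q, ρ and the edge and cost data are illustrative assumptions.

def as_pheromone_update(tau, ant_solutions, rho=0.8, Q=1.0):
    """Equation (2): tau_ij(t+1) = rho * tau_ij(t) + sum_k dtau_ij^k,
    where each ant deposits Q / Cmax_k on every edge of its solution."""
    for edge in tau:                                   # evaporation on all edges
        tau[edge] *= rho
    for edges, cmax in ant_solutions:                  # deposit by each ant k
        for edge in edges:
            tau[edge] = tau.get(edge, 0.0) + Q / cmax
    return tau

# Illustrative usage: two ants, the shorter schedule reinforces its edges more.
tau = {("0", (1, 1)): 0.1, ((1, 1), (1, 2)): 0.1}
ants = [([("0", (1, 1)), ((1, 1), (1, 2))], 9.0),
        ([("0", (1, 1))], 12.0)]
print(as_pheromone_update(tau, ants))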

CHAPTER 3

PROBLEM FORMULATION


The FJSP may be formulated as follows. Consider a set of n independent jobs, denoted J = {J_1, J_2, ..., J_n}, 1 ≤ j ≤ n, which are carried out by K machines M_k, with M = {M_1, M_2, ..., M_K}, 1 ≤ k ≤ K. Each job J_j consists of a sequence of n_j operations O_{i,j}, i = 1, 2, ..., n_j. Each routing has to be performed to achieve a job. The execution of each operation i of a job J_j requires one resource selected from a set of available machines. The assignment of the operation O_{i,j} to the machine M_k ∈ M entails the occupation of that machine during a processing time, denoted p_{i,j,k}. The problem is thus to determine both an assignment scheme and a sequence of the operations on all machines that minimize some criteria.

• A set of J independent jobs is given.

• Each job is characterized by its earliest starting time r_j and its latest finishing time d_j.

• Denote by pt_{i,j} and r_{i,j} respectively the processing time and the ready date of the operation O_{i,j}; p_{i,j,k} represents the processing time pt_{i,j} on the machine M_k.

• A started operation cannot be interrupted.

• Each machine cannot perform more than one operation at the same time.

• The objective is to find an operation ordering set satisfying a cost function under the problem constraints. The considered objective is to minimize the makespan C_max.
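As an illustration of this formulation, a minimal Python representation of an FJSP instance could look as follows; the job, machine and processing-time values are invented for the example and are not taken from the thesis data.

# Assumed FJSP instance: for each job, an ordered list of operations, and for
# each operation a dictionary of candidate machines with processing times p_ijk.
fjsp = {
    "J1": [{"M1": 3, "M2": 5}, {"M2": 4, "M3": 2}],   # O_{1,1}, O_{2,1}
    "J2": [{"M1": 2, "M3": 6}, {"M2": 3}],            # O_{1,2}, O_{2,2}
    "J3": [{"M3": 4}, {"M1": 5, "M2": 2}],            # O_{1,3}, O_{2,3}
}

def makespan(assignment):
    """Compute C_max for a fixed assignment given as a list of
    (job, op_index, machine) triples, scheduled greedily in list order."""
    job_ready = {j: 0 for j in fjsp}        # when each job's previous op ends
    mach_ready = {}                         # when each machine becomes free
    for job, idx, mach in assignment:
        p = fjsp[job][idx][mach]
        start = max(job_ready[job], mach_ready.get(mach, 0))
        job_ready[job] = mach_ready[mach] = start + p
    return max(job_ready.values())

# One feasible assignment and operation order (illustrative only).
order = [("J1", 0, "M1"), ("J2", 0, "M3"), ("J3", 0, "M3"),
         ("J1", 1, "M2"), ("J2", 1, "M2"), ("J3", 1, "M1")]
print(makespan(order))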


ACO and Tabu search for FJSP Scheduling


In this stage, the application of the combined ant system and tabu search techniques to the resolution of the FJSP is described.


Construction Graph and Constraints


Generally, the FJSP can be represented by a bipartite graph with two categories of nodes: O_{i,j} and M_k. A task is mapped to an O_{i,j} node; a machine is mapped to an M_k node. There is an edge between the O_{i,j} node and the M_k node if and only if the corresponding task can be assigned to the corresponding machine while respecting the availability of the machine and the precedence constraints among the operations of the different jobs. The cost of an assignment is directly related to the processing time of the task on the machine.

To model the process in a more straightforward manner, we use the construction graph that is derived from the utilization matrix. Below is a sample construction graph.

Table 1: Construction graph of 4 machines and 7 tasks.



With this construction graph, we can transform the FJSP into a travelling-ant problem. Specifically, we are given a representative table of n rows and m columns, each of whose cells is associated with p_{i,j,k}, representing the distance between O_{i,j} and M_k. An ant seeks to travel across the table in such a way that the following constraint is satisfied: one and only one cell is visited in each of the rows. In the rest of this work, "tour" and "solution" are used interchangeably; a pair (operation, machine) means that the operation is assigned to that machine (Table 2).






Table 2: Solution of the construction graph of Table 1.




Ant systems scheduling


The ant system approach was inspired by the behaviour of real ants. Ants deposit a chemical, pheromone, as they move through their environment, and they are also able to detect and follow pheromone trails. In our case, the pheromone trail describes how the ant system builds a solution of the FJSP. The probability of choosing a branch at a certain time depends on the total amount of pheromone on the branch, which in turn is proportional to the number of ants that have used the branch up to that time. P^f_{ijk} denotes the probability that an ant will assign an operation O_{i,j} of job J_j to an available machine M_k. Each ant builds a solution using a combination of the information provided by the pheromone trail τ_{ijk} and by the heuristic function defined by η_{ijk} = p_{i,j,k}.

Formally, the probability that the f-th ant will assign an operation O_{i,j} of job J_j to the machine M_k is given in equation (1).




In this equation, D denotes the set of available, not-yet-executed operations, and a and b are parameters that control the relative importance of trail versus visibility. The transition probability is therefore a trade-off between visibility and trail intensity at the given time.
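A Python sketch of such an assignment-probability rule, assuming the standard AS form with the trail raised to a and the visibility raised to b (the exact expression of equation (1) is not reproduced in this text), could be:

import random

def choose_assignment(D, tau, eta, a=1.0, b=1.0):
    """Pick one (operation, machine) pair from the set D of available,
    not-yet-executed assignments, with probability proportional to
    tau[pair]**a * eta[pair]**b (an assumed AS-style rule)."""
    pairs = list(D)
    weights = [(tau[p] ** a) * (eta[p] ** b) for p in pairs]
    return random.choices(pairs, weights=weights)[0]

# Illustrative data: O_{1,1} may go to M1 or M2, O_{1,2} may go to M3.
D = [("O11", "M1"), ("O11", "M2"), ("O12", "M3")]
tau = {p: 1.0 for p in D}
eta = {("O11", "M1"): 1 / 3, ("O11", "M2"): 1 / 5, ("O12", "M3"): 1 / 4}
print(choose_assignment(D, tau, eta))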


Updating the pheromone trail


To allow the ants to share information about good solutions, an update of the pheromone trail must be established. After each iteration of the ant system algorithm, equation (2) describes in detail the pheromone update applied when all ants have completed their own scheduling solutions; L_ants denotes the length of an ant's tour. In order to guide the ant system towards good solutions, a mechanism is required to assess the quality of the best solution. The obvious choice is to use the best makespan L_min = C_max of all solutions given by the set of ants.
of all solutions given by a set of ant.





After all of the ants have completed their tours, the trail levels on all of the arcs need to be updated. The evaporation factor ρ ensures that pheromone is not accumulated indefinitely and denotes the proportion of 'old' pheromone that is carried over to the next iteration of the algorithm. Then, for each edge, the pheromone deposited by each ant that used this edge is added up, resulting in the following pheromone level-update equation:

where
NBA
defines the numb
er of ants to use in the colony.

Tabu search optimisation


A simple tabu search was also implemented for this FJSP optimisation problem. The proposed approach is to allow the ants to build their solutions; the resulting solutions are then taken to a local optimum by the local search mechanism. Each of these ant solutions is then used in the pheromone update stage. The local search is performed on every ant solution at every iteration, so it needs to be fairly fast. In the case of the FJSP, the method is to pick the machine responsible for C_max and check whether any operations O_{i,j} could be swapped to other machines in a way that would result in a lower makespan.

Following this concept, the local search considers one problem machine at a time and attempts to swap one operation from the problem machine with any other (non-problem) machine in the solution (non-problem operations). The ants are thus used to generate promising scheduling solutions, and the tabu search algorithm is used to try to improve these solutions. The tabu search is performed on each problem machine and continues until there is no further improvement in the makespan value of the solution.
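A minimal Python sketch of this local move is given below; the instance data and function names are illustrative assumptions, not the thesis implementation.

# Assumed data: each scheduled operation is (job, op_index, machine), and
# alternatives[(job, op_index)] maps candidate machines to processing times.
alternatives = {
    ("J1", 0): {"M1": 3, "M2": 5}, ("J1", 1): {"M2": 4, "M3": 2},
    ("J2", 0): {"M1": 2, "M3": 6}, ("J2", 1): {"M2": 3, "M5": 4},
}

def makespan(schedule):
    """Greedy left-shift schedule in list order; returns C_max and the
    machine that finishes last (the 'problem' machine)."""
    job_end, mach_end = {}, {}
    for job, idx, mach in schedule:
        start = max(job_end.get(job, 0), mach_end.get(mach, 0))
        job_end[job] = mach_end[mach] = start + alternatives[(job, idx)][mach]
    worst = max(mach_end, key=mach_end.get)
    return mach_end[worst], worst

def swap_off_problem_machine(schedule):
    """Try to move one operation off the machine defining C_max to another
    candidate machine; keep the first swap that reduces the makespan."""
    best_cmax, problem = makespan(schedule)
    for pos, (job, idx, mach) in enumerate(schedule):
        if mach != problem:
            continue
        for other in alternatives[(job, idx)]:
            if other == mach:
                continue
            trial = list(schedule)
            trial[pos] = (job, idx, other)
            cmax, _ = makespan(trial)
            if cmax < best_cmax:
                return trial, cmax
    return schedule, best_cmax

schedule = [("J1", 0, "M1"), ("J2", 0, "M1"), ("J1", 1, "M2"), ("J2", 1, "M2")]
print(swap_off_problem_machine(schedule))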

The set up parameter values


The set-up parameter values used in the ant system scheduling algorithms are often very important in getting good results; however, the appropriate values are very often entirely problem dependent and cannot always be derived from features of the problem itself:

• α determines the degree to which the pheromone trail is used as the ants build their solution. The lower the value, the less 'attention' the ants pay to the pheromone trail, but higher values imply that the ants perform too little exploration. After testing values in the range 0.1-0.75, this algorithm was found to work well with relatively high values (around 0.5-0.75).

• β determines the extent to which heuristic information is used by the ants. Again, values between 0.1 and 0.75 were tested, and a value around 0.5 appeared to offer the best trade-off between following the heuristic and allowing the ants to explore the search space.

• Γ is the value to which the pheromone trail values are initialized. Initially the value of this parameter should be moderately high to encourage initial exploration, while the pheromone evaporation procedure will gradually stabilise the pheromone trail.

• ρ is the pheromone evaporation parameter and is always set in the range 0 < ρ < 1. It defines how quickly the ants 'forget' past solutions. A higher value makes for a more aggressive search; values of around 0.5-0.75 were found to give good solutions.

• NBA defines the number of ants used in the colony. A low value speeds the algorithm up because less search is done; a high value slows the search down, as more ants run before each pheromone update is performed. A value of 10 appeared to be a good compromise between execution speed and the quality of the solutions achieved.

It is interesting to note that for each of these parameter values the ant system scheduling meta-heuristic yields a good solution. Moreover, its convergence speed depends essentially on the number of ants used, NBA.
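For illustration only, the parameter ranges quoted above could be collected in a small Python configuration; none of the exact values below are prescribed by the thesis.

# Illustrative parameter set for the ant system scheduling runs, using the
# ranges discussed above (assumed defaults, not values fixed by the thesis).
params = {
    "alpha": 0.6,    # pheromone trail influence, tested range 0.1-0.75
    "beta": 0.5,     # heuristic influence, tested range 0.1-0.75
    "gamma": 1.0,    # initial pheromone level (moderately high)
    "rho": 0.6,      # evaporation parameter, 0 < rho < 1
    "NBA": 10,       # number of ants in the colony
    "iterations": 1000,
}
print(params)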


Building a solution: steps

The main steps in the strategy of the FJSP system by ant systems and the tabu search algorithm are given below.

• Initialize the parameters NBA, a, b, τ_0, ρ.

• Create an initial solution and an empty tabu list of a given size. In order to generate feasible and diverse solutions, the initial ants are represented by solutions issued from heuristic rules (SPT, DL, FIFO, etc.) and a random method. Heuristics are used to approximate an optimal solution as closely as possible.

• Repeat the following steps until the termination criteria are met:

  - Find a new solution by the ant system scheduling procedure given in section 3.2.
  - Evaluate the quality of the new solution.
  - If the new solution is an improvement, then the current best solution becomes the new solution; otherwise, if no new solution was improved, apply the tabu search optimisation given in section 3.4.
  - Add the solution to the tabu list; if the tabu list is full, delete the oldest entry in the list.
  - Apply the pheromone trail updating procedure given in section 3.3.

• END Repeat
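The listing below is a condensed Python sketch of this overall loop; every function it calls (build_ant_solution, evaluate, tabu_improve, update_pheromone) is a placeholder for the corresponding step above and is stubbed here with purely illustrative behaviour.

import random

def build_ant_solution(pheromone):
    """Stub for the ant system construction step (section 3.2)."""
    return [random.random() for _ in range(5)]

def evaluate(solution):
    """Stub objective: pretend the makespan is the sum of the encoded values."""
    return sum(solution)

def tabu_improve(solution):
    """Stub for the tabu search optimisation step (section 3.4)."""
    return [x * 0.95 for x in solution]

def update_pheromone(pheromone, best):
    """Stub for the pheromone trail update of section 3.3."""
    return [0.9 * p + 0.1 for p in pheromone]

NBA, iterations, tabu_size = 10, 50, 20
pheromone = [1.0] * 5
tabu_list = []
best, best_cost = None, float("inf")

for _ in range(iterations):
    improved = False
    for _ in range(NBA):
        sol = build_ant_solution(pheromone)
        cost = evaluate(sol)
        if cost < best_cost:                      # keep the improved solution
            best, best_cost, improved = sol, cost, True
    if not improved:                              # no ant improved: run tabu search
        candidate = tabu_improve(best)
        if evaluate(candidate) < best_cost:
            best, best_cost = candidate, evaluate(candidate)
    tabu_list.append(best)
    if len(tabu_list) > tabu_size:                # bounded tabu list
        tabu_list.pop(0)
    pheromone = update_pheromone(pheromone, best)

print(round(best_cost, 3))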

Illustration example

Let us consider a flexible job shop scheduling problem in which three jobs J_j (j = 1, 2, 3) are to be executed on six machines M_k (k = 1, ..., 6), as described in Table 1. Applying the ant system meta-heuristic, the simulation proposes four different schedules with C_max = 19 ut (units of time), shown in Tables 2 to 7.

The solution given in Table 7 has a makespan equal to 19 ut, and machine M_5 is the cause of this makespan value. To improve this solution, the tabu search optimisation is applied to it. This method finds that the operation O_{2,2} of job J_2 on M_2 can be swapped with other machines, which reduces the makespan to 18 ut: the operation O_{1,3} of job J_1 executed by M_2 can be swapped with M_5, which will then execute the operation O_{2,2} of job J_2. Finally, the solution obtained by the tabu search is better than before (Table 8).








RESULTS AND DISCUSSIONS


All ant system and tabu search optimisation results presented are for 1000 iterations with 10 ants, and each run was performed 10 times. The algorithms were coded in Matlab and C++ and tested on a P4 Pentium 2.4 GHz processor under Windows XP.

To illustrate the effectiveness and performance of the algorithm proposed in this work, six representative benchmark FJSP instances (denoted n × m) based on practical data were selected for computation.

Concerning the FJSP instances, the different results show that the solutions obtained are generally acceptable and satisfactory. The values of the different objective functions show the efficiency of the suggested approach (Table 9). Moreover, the proposed method enables us to obtain good results in polynomial computation time. The efficiency of this approach can be explained by the quality of the ant system algorithms combined with the tabu search heuristic for the optimization of solutions.




















CONCLUSION

In this work, a new approach based on the combination of the ant system with a tabu search algorithm for solving flexible job-shop scheduling problems is presented. The results for the reformulated problems show that the ant system with local search meta-heuristic can find optimal solutions for different problems and can be adapted to deal with the FJSP. The performance of the new approach is evaluated and compared with the results obtained by other methods. The obtained results show the effectiveness of the proposed method. The ant system algorithms and the tabu search techniques described are very effective, and together they can outperform the alternative techniques considered.



























REFERENCES



1. M. Dorigo and T. Stützle, Ant Colony Optimization, Cambridge, Massachusetts, USA: The MIT Press, 2004.

2. J. Montgomery, C. Fayad and S. Petrovic, "Solution representation for job shop scheduling problems in Ant Colony Optimisation", in Ant Colony Optimization and Swarm Intelligence, 5th International Workshop, ANTS 2006, Springer, vol. 4150, pp. 484-491.

3. Andrea Rossi and Gino Dini, "Flexible job shop scheduling with routing flexibility and separable setup times using ant colony optimization method", Robotics and Computer-Integrated Manufacturing 23 (2007), pp. 503-516.

4. Jacek Blazewicz, Wolfgang Domschke and Erwin Pesch, "The job shop scheduling problem: Conventional and new solution techniques", European Journal of Operational Research 93 (1996), pp. 1-33.

5. A.S. Jain and S. Meeran, "Deterministic job-shop scheduling: Past, present and future", European Journal of Operational Research 113 (1999), pp. 390-434.

6. Emanuel Téllez-Enríquez, Efrén Mezura-Montes and Carlos A. Coello Coello, "An Ant System with steps counter for the Job Shop Scheduling Problem", in 2007 IEEE Congress on Evolutionary Computation (CEC 2007).

7. Marco Dorigo and Luca Maria Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem", IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, April 1997.