INTRODUCTION
In recent years, the use of evolutionary algorithms to solve constrained optimization
problems has attracted much research attention, since such problems are very common
in real-world applications. Particle Swarm Optimization (PSO) (Eberhart
and Kennedy, 1995; Kennedy and Eberhart, 1995; Kennedy
et al., 2001; Hu et al., 2003b) is
one of the evolutionary algorithms that have been adopted to solve such problems.
PSO has been widely used to solve several types of optimization problems. Kennedy
and Spears (1998) proposed a matching algorithm for the multimodal problem
generator. Hu et al. (2003a) applied PSO to the n-queens
problem and Cagnina et al. (2004) focused on sequencing
problems. Hooshmand (2008) proposed an optimal design
using PSO in power systems. Shakiba et al. (2008)
studied short-time prediction of traffic rate. Wang et
al. (2008) investigated structure learning of product unit neural networks.
Ren et al. (2010) presented a new global optimization
algorithm for mixed-integer-discrete-continuous variables based on PSO. Hernane
et al. (2010a, b) focused on scheduling problems.
Nevertheless, PSO is essentially an unconstrained search technique and lacks an explicit mechanism
to bias the search in constrained search spaces (Parsopoulos
and Vrahatis, 2002; Coath and Halgamuge, 2003; Michalewicz
and Schoenauer, 1996). This has motivated the development of a considerable
number of approaches to incorporate constraints into the fitness function of
PSO (for example, Runarsson and Yao (2000), Mezura-Montes
and Coello (2005), Liang and Suganthan (2006), Takahama
and Sakai (2006), Cagnina et al. (2007) and
Zhao et al. (2008)).
PSO was inspired by the movement of a group of animals such as a bird flock
or fish school. PSO explores the search space using a population of individuals
and the best performers (either within a group or with respect to the entire
population) affect the performance of the others. Each individual is called a particle
and represents a possible solution within a multidimensional search space. The
particles have their own position and velocity, which are constantly updated.
They record their past behavior and use it to move towards promising regions
of the search space (Kennedy and Eberhart, 1999).
This study presents a new PSO algorithm designed to solve constrained optimization problems. To that end, our approach combines a constraint-handling technique with a mechanism to update the velocity and position of each particle, extended with a dynamic inertia weight as a way to avoid premature convergence.
STATEMENT OF THE PROBLEM
Without loss of generality, this study considers the general nonlinear programming
problem (Mezura-Montes and Coello, 2005), in which the objective is:

Minimize f(x) (1)

Subject to:

g_{i}(x) ≤ 0, i = 1,..., n (2)

h_{j}(x) = 0, j = 1,..., p (3)

where x is the vector of solutions x = [x_{1}, x_{2},..., x_{D}], n is the number of inequality constraints and p is the number of equality constraints (in both cases, constraints may be linear or nonlinear). Any x satisfying all inequality and equality constraint functions is a feasible solution.
If F denotes the feasible region and S the whole search space, a D-dimensional rectangle defined by the lower and upper bounds of each variable x_{d}, then it should be clear that F⊆S.
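The feasibility condition above can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' code: the function name is ours and, as is common in practice (and consistent with the tolerance factor mentioned later in this study), equality constraints are checked within a small tolerance eps.

```python
# Sketch: feasibility test for the general nonlinear programming problem.
# g holds the n inequality constraints (feasible when g_i(x) <= 0) and
# h the p equality constraints (feasible when |h_j(x)| <= eps).

def is_feasible(x, g, h, eps=1e-4):
    """Return True if x satisfies every constraint of the problem."""
    if any(gi(x) > 0.0 for gi in g):
        return False
    return all(abs(hj(x)) <= eps for hj in h)

# Toy example: the unit disc intersected with the line x0 = x1.
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0]   # inside the unit circle
h = [lambda x: x[0] - x[1]]                   # on the diagonal
print(is_feasible([0.5, 0.5], g, h))   # -> True
print(is_feasible([1.0, 0.0], g, h))  # -> False (violates the equality)
```

Here F is exactly the set of points for which this predicate returns True.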
PREVIOUS RELATED WORK
There exist many studies on solving unconstrained optimization problems using particle swarm optimization. However, proposals of constraint-handling mechanisms for PSO are relatively scarce. Next, this study reviews the most representative work done in this area.
Takahama and Sakai (2006) presented a PSO algorithm
that incorporates the ε constrained method for handling constraints. The ε
constrained method is an algorithm transformation method which can convert algorithms
for unconstrained problems into algorithms for constrained problems using the
ε level comparison, which compares search points based on their constraint
violations. In the ε PSO, agents that satisfy the constraints
move to optimize the objective function, while agents that do not satisfy
the constraints move to satisfy them. Sometimes, however, the velocity
of agents becomes too large and they fly away from the feasible region.
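The ε level comparison described above can be sketched as follows. This is a minimal illustration of the comparison rule, not Takahama and Sakai's implementation: the pairs (f, φ) stand for (objective value, total constraint violation) and the function name is ours.

```python
# Sketch of the epsilon-level comparison: when both candidates' total
# constraint violations are within the epsilon level (or equal), they are
# ranked by objective value; otherwise the smaller violation wins.

def eps_better(a, b, eps):
    """True if candidate a = (f_a, phi_a) is better than b = (f_b, phi_b)."""
    f_a, phi_a = a
    f_b, phi_b = b
    if (phi_a <= eps and phi_b <= eps) or phi_a == phi_b:
        return f_a < f_b           # both "feasible enough": compare objectives
    return phi_a < phi_b           # otherwise: smaller violation wins

print(eps_better((1.0, 0.0), (0.5, 0.3), eps=0.1))    # -> True: b violates more
print(eps_better((1.0, 0.05), (0.5, 0.08), eps=0.1))  # -> False: both within eps, b has lower f
```

With eps = 0 this reduces to the ordinary lexicographic feasibility rule: feasible points always beat infeasible ones.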
Mezura-Montes and Coello (2005) proposed a simple multimembered
evolution strategy to solve global nonlinear optimization problems, which uses
a simple diversity mechanism based on allowing infeasible solutions to remain
in the population. The authors concluded that their results are very
competitive when comparing the proposed approach against other state-of-the-art
techniques and that its computational cost is lower than that required by
the other techniques compared.
Liang and Suganthan (2006) proposed a novel constraint-handling
mechanism based on multiple swarms. The whole population is divided into a large
number of sub-swarms, these sub-swarms are regrouped frequently using various
regrouping schedules and information is exchanged among the particles in the
whole swarm. By combining exploitation and exploration, this
neighborhood structure gives better performance on complex problems. The authors
reported that the proposed algorithm can find reasonable solutions for all of
the problems taken from Mezura-Montes and Coello (2005).
Previous studies on particle swarm optimization have observed that the standard PSO has difficulty fine-tuning solutions. In order to improve the performance of the standard PSO, a dynamic way of changing the inertia weight that can control the exploitation and exploration ability of PSO is proposed here and some improvement in performance has been obtained. The inertia weight not only changes with iteration time but is also fine-tuned with respect to the best position found by the swarm. The Dynamic Inertia Weight Particle Swarm Optimization (DIWPSO) algorithm adopts some features of the algorithms cited above: the constraint-handling mechanism, the variant tolerance factor and the velocity update formula.
DYNAMIC INERTIA WEIGHT PARTICLE SWARM OPTIMIZATION ALGORITHM
Here, we describe in detail our proposed approach, called the Dynamic Inertia Weight Particle Swarm Optimization (DIWPSO) algorithm.
General model: As in the standard model, a PSO algorithm operates on a population of particles. The particles are D-dimensional real-valued vectors, where D is the number of decision variables of the problem to optimize. The particles evolve using two update formulas, one for position and another for velocity. The best position found so far by any particle in the swarm (for the gbest model) or in the neighborhood (for the lbest model) is recorded. The best value reached by each particle (pbest) is stored, too.
Particular model: As stated in previous studies (Cagnina
et al., 2004; Liang and Suganthan, 2006),
the gbest model tends to converge to a local optimum. Motivated by this,
some papers proposed a formula to update the velocity and position using a
combination of both the gbest and the lbest models (Cagnina
et al., 2006, 2007). Such a formula is adopted
here as well and is shown in Eq. 4 and 5:

v_{id} = ω(v_{id} + c_{1}r_{1}(p_{id} - par_{id}) + c_{2}r_{2}(p_{ld} - par_{id}) + c_{3}r_{3}(p_{gd} - par_{id})) (4)

par_{id} = par_{id} + v_{id} (5)

where, v_{id} is the velocity of particle i at dimension d;
ω is the inertia weight, whose goal is to balance global exploration and
exploitation (Shi and Eberhart, 1998); c_{1} is
the personal learning factor and c_{2}, c_{3} are the social
learning factors. These three values are multiplied by three different random numbers
r_{1}, r_{2}, r_{3} within the range [0, 1]. p_{id} is the best position reached by particle
i, p_{ld} is the best position reached by any particle in the neighborhood,
p_{gd} is the best position reached by any particle in the swarm and
par_{id} is the value of particle i at dimension d (Cagnina
et al., 2007).
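The update of Eq. 4 and 5 can be sketched directly. The sketch below is an illustration, not the authors' code: variable names mirror the text (p_i for pbest, p_l for lbest, p_g for gbest, par_i for the current position) and the default learning factors follow the experimental settings reported later in this study.

```python
import random

def update_particle(par_i, v_i, p_i, p_l, p_g, w, c1=2.05, c2=2.05, c3=2.05):
    """One application of Eq. 4 (velocity) and Eq. 5 (position)."""
    new_v, new_par = [], []
    for d in range(len(par_i)):
        r1, r2, r3 = (random.random() for _ in range(3))
        vd = w * (v_i[d]
                  + c1 * r1 * (p_i[d] - par_i[d])    # personal (pbest) term
                  + c2 * r2 * (p_l[d] - par_i[d])    # neighborhood (lbest) term
                  + c3 * r3 * (p_g[d] - par_i[d]))   # global (gbest) term
        new_v.append(vd)
        new_par.append(par_i[d] + vd)                # Eq. 5
    return new_par, new_v

# If all best positions lie ahead of the particle, the new velocity points
# toward them and the particle moves in that direction.
random.seed(42)
par, v = update_particle([0.0, 0.0], [0.0, 0.0],
                         [1.0, 1.0], [1.0, 1.0], [1.0, 1.0], w=0.7)
```

Note that ω multiplies the whole bracket, including the previous velocity, so a small ω damps the particle's momentum as well as the attraction terms.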
This study uses a wheel topology (Kennedy, 1999) to
compute the lbest value, in which one individual is connected
to all others and they are connected only to that one. The wheel topology effectively
isolates individuals from one another, as all information has to be communicated
through the focal individual. This focal individual compares the performance of
all individuals in the neighborhood and adjusts its trajectory toward the very
best. If adjustments result in an improvement in the focal individual's performance,
then that improvement is communicated to the rest of the population. Thus
the focal individual serves as a kind of buffer or filter, slowing the speed
of transmission of good solutions through the population (Kennedy,
1999). This concept is illustrated in Fig. 1.
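The lbest lookup under a wheel topology can be sketched as follows. This is an illustrative sketch (the function name and the choice of index 0 as the focal individual are ours): a peripheral particle sees only itself and the focal individual, while the focal individual sees the whole swarm.

```python
def wheel_lbest(fitness, i, focal=0):
    """Index of the best particle in i's wheel neighborhood (minimization)."""
    if i == focal:
        # The focal individual is connected to everyone.
        return min(range(len(fitness)), key=lambda j: fitness[j])
    # A peripheral particle is connected only to itself and the focal one.
    return i if fitness[i] < fitness[focal] else focal

fit = [5.0, 2.0, 9.0, 1.0]
print(wheel_lbest(fit, 0))  # -> 3: the focal individual sees the global best
print(wheel_lbest(fit, 2))  # -> 0: particle 2 sees only itself and the focal
```

This makes the buffering effect described above concrete: a good solution at particle 3 reaches particle 2 only after the focal individual has adopted it.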
Handling constraints: A typical constraint-handling scheme is the Lagrange-based
method (Du and Pardalos, 1995). The Lagrange-based method
is a classical approach to formulating constrained optimization problems. By introducing
the Lagrangian formulation, the primal problem (Eq. 1) can
be written as:

min_{x∈S} max_{μ, λ} L(x, μ, λ) (6)

Fig. 1: 
Wheel topology illustration with a population of size 12 
Subject to:

μ ≥ 0 (7)

Where:

L(x, μ, λ) = f(x) + Σ_{i=1}^{n} μ_{i}g_{i}(x) + Σ_{j=1}^{p} λ_{j}h_{j}(x) (8)

where, μ is an n×1 multiplier vector for the inequality constraints and λ is a p×1 multiplier vector for the equality constraints. If the problem in Eq. 1 satisfies the convexity conditions over S, then the solution of the primal problem (Eq. 1) is the vector x* of the saddle point {x*, μ*, λ*} of L(x, μ, λ), so that:

L(x*, μ, λ) ≤ L(x*, μ*, λ*) ≤ L(x, μ*, λ*)
The saddle point can be obtained by minimizing L(x, μ*, λ*) with
the optimal Lagrange multipliers (μ*, λ*) held as a fixed vector of parameters.
In general, the optimal values of the Lagrange multipliers are unknown a priori
(Krohling and Coelho, 2006).
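For fixed multipliers, the Lagrangian above is just an augmented objective and can be evaluated directly. The sketch below is a minimal illustration (the toy problem, function names and multiplier values are ours, not from the study):

```python
# Sketch of L(x, mu, lam) = f(x) + sum_i mu_i*g_i(x) + sum_j lam_j*h_j(x)
# for a fixed multiplier vector, as in the formulation above.

def lagrangian(f, g, h, mu, lam):
    def L(x):
        return (f(x)
                + sum(m * gi(x) for m, gi in zip(mu, g))
                + sum(l * hj(x) for l, hj in zip(lam, h)))
    return L

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# The optimal multiplier happens to be mu* = 2 (from 2x - mu = 0 at x* = 1).
f = lambda x: x[0] ** 2
g = [lambda x: 1.0 - x[0]]
L = lagrangian(f, g, [], mu=[2.0], lam=[])
print(L([1.0]))  # -> 1.0: at x* = 1 the constraint term vanishes
```

Minimizing L over x with these fixed multipliers recovers x* = 1, which illustrates the remark above that the difficulty lies in not knowing (μ*, λ*) a priori.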
Dynamic inertia weight: The inertia weight is a very important parameter
in the standard PSO algorithm which can be used to control the exploitation and
exploration ability of the algorithm. Its value determines how much of the
current velocity is retained: the larger the inertia weight, the greater the
speed of the particle and thus the stronger its exploration ability; the
smaller the inertia weight, the stronger the exploitation ability of the particle
(Tian et al., 2008). In studies of modified PSO algorithms, the inertia weight
is usually divided into two types: static and dynamic. A PSO algorithm with a
static inertia weight maintains a constant value throughout evolution, so the
exploitation and exploration abilities of the algorithm cannot reach a balance;
the algorithm easily falls into local optima and, in the later stage of evolution,
the greatly decreased convergence rate prevents the algorithm from converging
to the global optimal solution. Therefore, in order to overcome these problems,
it is particularly important to study dynamic inertia weights.
This study proposes a new dynamic way of changing the inertia weight. During the
iteration, the search space around p_{g} has a high probability of
containing the global optimal solution, so a high-intensity search around p_{g},
the best position reached by any particle in the swarm, should be considered.
Shi and Eberhart (1998) used a linear decrease to adjust the inertia weight and
Cui and Zhu (2007) found that, with a decreasing inertia weight, the
algorithm has a good global search capability early on and better convergence later
but the convergence rate is relatively slow. The algorithm should perform a good
local search in the region around p_{g}, while particles outside that region
should retain a good global search capability. In that way, the dynamic
inertia weight can control the exploitation and exploration abilities in a good
manner.
Let d_{ig} denote the Euclidean distance between par_{i} and p_{g}. The inertia weight ω is a function of the iteration time k that satisfies the following equation:

where, d_{max} is the maximum Euclidean distance between any particle and the particle p_{g}, MaxNumber is the maximal iteration time and the value of the inertia weight dynamically decreases from ω_{max} to ω_{min}.
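The authors' exact equation for ω is not reproduced in this excerpt. The sketch below is one plausible scheme consistent with the description, and the particular combination of the time decay and the distance ratio is our assumption, not the paper's formula: ω decays from ω_{max} to ω_{min} over the run, while a particle close to p_{g} (small d_{ig}/d_{max}) gets a smaller ω for fine local search and a distant particle keeps a larger ω for global search.

```python
# Illustrative dynamic inertia weight: NOT the authors' exact equation.
# k is the current iteration, max_number = MaxNumber, d_ig the particle's
# Euclidean distance to p_g and d_max the largest such distance in the swarm.

def dynamic_inertia(k, max_number, d_ig, d_max, w_max=1.2, w_min=0.4):
    distance_scale = d_ig / d_max if d_max > 0 else 0.0
    return w_min + (w_max - w_min) * (1.0 - k / max_number) * distance_scale

print(dynamic_inertia(0, 500, d_ig=1.0, d_max=1.0))    # early, distant: near w_max
print(dynamic_inertia(500, 500, d_ig=0.0, d_max=1.0))  # late, close: w_min
```

Whatever its exact form, the key property is the one stated in the text: ω stays within [ω_{min}, ω_{max}], shrinks over time and shrinks faster for particles already near p_{g}.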
DIWPSO algorithm pseudo code: Figure 2 shows the
pseudo code of the proposed DIWPSO algorithm. At the beginning of the algorithm, DIWPSO
randomly initializes the position and the velocity of each particle.

Fig. 2: 
Pseudo code of DIWPSO 
After evaluating the particles and obtaining the best values (pbest, lbest,
gbest) and the Euclidean distances d_{ig}, the population begins to evolve. During
the evolutionary process, the position and the velocity of each particle are
updated. In the evolutionary loop, a new dynamic inertia weight mechanism is
applied to improve the global search capability and convergence of the PSO algorithm.
Finally, the result (gbest) is returned.
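The loop just described (initialize, evaluate, update the bests, compute d_{ig} and ω, update velocity and position, return gbest) can be sketched compactly. This is an illustrative sketch, not the authors' implementation: it uses an unconstrained sphere objective, omits the constraint handling and the lbest term for brevity and uses a simplified time- and distance-scaled inertia weight of our own choosing.

```python
import math
import random

def diwpso(f, dim=2, pop=50, max_k=500, lo=-5.0, hi=5.0,
           w_max=1.2, w_min=0.4, c=2.05):
    # Random initialization of positions and velocities.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                       # personal bests (pbest)
    G = min(P, key=f)[:]                        # global best (gbest)
    d_span = math.dist([lo] * dim, [hi] * dim)  # distance normalization
    for k in range(max_k):
        for i in range(pop):
            # Dynamic inertia weight: decays over time, scaled by the
            # particle's distance to gbest (illustrative scheme).
            scale = min(math.dist(X[i], G) / d_span, 1.0)
            w = w_min + (w_max - w_min) * (1.0 - k / max_k) * scale
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = w * (V[i][d]
                               + c * r1 * (P[i][d] - X[i][d])
                               + c * r2 * (G[d] - X[i][d]))
                # Keep positions inside the search space S.
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(G):
                    G = P[i][:]
    return G, f(G)

random.seed(1)
best, val = diwpso(lambda x: sum(xi * xi for xi in x))
```

The population size, generation count and learning factor match the experimental settings reported below; everything else is a placeholder for the mechanisms Fig. 2 spells out in full.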
SIMULATION RESULTS AND STATISTICAL ANALYSIS
To evaluate the performance of the proposed approach, this paper used the 13
test functions described in Runarsson and Yao (2000).
The test functions chosen contain characteristics that are representative of
what can be considered “difficult” global optimization problems for
an evolutionary algorithm. The detailed description of the test problems can
be consulted in their original source (Runarsson and Yao, 2000).
Thirty independent runs were performed per problem. The maximum number of generations was set to 500 and the population size to 50. The maximum inertia factor was ω_{max} = 1.2, the minimum inertia factor ω_{min} = 0.4 and the personal and social learning factors c_{1}, c_{2} and c_{3} were all set to 2.05. These parameter settings (learning factors, swarm size and maximum and minimum inertia factors) were derived empirically after numerous experiments.
The simulation results using the standard PSO and DIWPSO are shown in Table 1. Our approach found a better best solution in eight problems (G02, G03, G04, G05, G06, G09, G10, G13) and the same best result in four others (G01, G07, G08, G11); in these problems it also reached better mean and worst results. For problem G12, DIWPSO found a slightly worse best result than PSO.
For problem G01, the optimal solution is -15. The convergence curves of the best particle using DIWPSO and PSO are shown in Fig. 3. From Table 1, it can be seen that both DIWPSO and PSO find the optimal solution; however, the median and mean fitness found by DIWPSO are much better than the results obtained by PSO. Using DIWPSO, the optimal solution is found in approximately 150 generations, while PSO finds the optimal solution in approximately 300 generations. From the convergence curves of the best particle, it can be seen that the solution found by DIWPSO decreases faster than that of PSO, which demonstrates the robustness of the DIWPSO algorithm.
For problems G04, G05, G06, G07, G09 and G10, the convergence curves of the best particle using DIWPSO and PSO resemble those for G01, as shown in detail in Fig. 4-9.
Table 1: 
Results using PSO and DIWPSO 


Fig. 3: 
Fitness of the best particle for problem G01 

Fig. 4: 
Fitness of the best particle for problem G04 
For problems G02, G11 and G13, the convergence curves of the best fitness using
DIWPSO and PSO are shown in Fig. 10-12,
respectively. From Table 1, it can be observed that DIWPSO
finds solutions very close to the optimal, better than the
solutions found using PSO.

Fig. 5: 
Fitness of the best particle for problem G05 

Fig. 6: 
Fitness of the best particle for problem G06 

Fig. 7: 
Fitness of the best particle for problem G07 

Fig. 8: 
Fitness of the best particle for problem G09 

Fig. 9: 
Fitness of the best particle for problem G10 
In these cases, DIWPSO performed significantly better than PSO in terms of
the median as well as the mean. In these problems, the convergence curves of
PSO oscillate violently, whereas DIWPSO can find the best solution in fewer generations.
For problem G03, the optimal solution is -1.000. The convergence curves of the
best fitness using DIWPSO and PSO are shown in Fig. 13.

Fig. 10: 
Fitness of the best particle for problem G02 

Fig. 11: 
Fitness of the best particle for problem G11 

Fig. 12: 
Fitness of the best particle for problem G13 
From Table 1, it can be observed that DIWPSO finds the optimal
solution, while PSO does not. Using DIWPSO, the best fitness curve converges
in approximately 300 generations, while PSO had still not converged after
approximately 450 generations.
For problem G08, the optimal solution is -0.095825. The convergence curves of the
best fitness using DIWPSO and PSO are shown in Fig. 14.
From Fig. 14, it can be observed that the curve of the best
fitness using PSO does not converge at first.

Fig. 13: 
Fitness of the best particle for problem G03 

Fig. 14: 
Fitness of the best particle for problem G08 

Fig. 15: 
Fitness of the best particle for problem G12 
After about 150 generations of violent oscillation, the curve finally converges.
DIWPSO, in contrast, converges from the beginning.
For problem G12, the optimal solution is -1.000. The convergence curves
of the best fitness using DIWPSO and PSO are shown in Fig. 15.
In this case, according to Table 1, PSO finds the optimal
solution and DIWPSO finds a value very close to the optimal. The solution found
by PSO presents a smaller standard deviation (0.0126) than the results obtained
by DIWPSO (0.0233). The convergence curve of PSO oscillates more violently than
that of DIWPSO.
Table 2: 
Statistical comparison between DIWPSO and PSO using the Wilcoxon rank-sum test 

For problems G01, G02, G03, G04, G05, G06, G07, G08, G09, G10, G11 and G13, it can be observed that DIWPSO converges much faster than PSO, obtains solutions closer to the optimal and presents a smaller standard deviation. For problem G12, however, PSO provides slightly better results than DIWPSO.
To validate the significance of the results and compare the DIWPSO and PSO algorithms, the Wilcoxon rank-sum test was carried out using the statistical toolbox provided in Matlab. The statistical test of the results given in Table 1 is provided in Table 2. The Wilcoxon rank-sum test indicates that the means of DIWPSO and PSO are significantly different at the 95% confidence level, since h is equal to 1 in Table 2. The Wilcoxon rank-sum test confirms that DIWPSO performs better than the standard PSO except for problem G12.
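The significance test described above can be reproduced outside Matlab. The sketch below is a pure-Python version of the Wilcoxon rank-sum test using the normal approximation (as Matlab's ranksum does for larger samples) and assuming no tied values; the two samples stand in for 30-run results on one problem and the numbers are made up for illustration only.

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation, no ties)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    w = sum(pooled.index(v) + 1 for v in x)        # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2.0                  # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))      # two-sided p-value

# Illustrative (made-up) run results for one problem.
diwpso_runs = [0.012, 0.011, 0.013, 0.010, 0.014, 0.015]
pso_runs = [0.052, 0.049, 0.061, 0.055, 0.050, 0.058]

p = rank_sum_test(diwpso_runs, pso_runs)
h = int(p < 0.05)   # h = 1: significantly different at the 95% level
print(h)  # -> 1
```

Since the two samples here are completely separated, the test rejects the null hypothesis, which is the situation Table 2 reports for most problems.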
CONCLUSIONS
In this study, a PSO with a dynamic inertia weight (DIWPSO) has been presented for solving constrained numerical optimization problems. The use of the Euclidean distance for dynamically changing the inertia weight of PSO seems to provide a good compromise between a high probability of small amplitudes around the current points and a small probability of larger amplitudes, which may allow particles to move away from the current point and escape from local minima.
The DIWPSO algorithm was compared with the standard PSO on benchmark constrained optimization problems. The simulation results showed that DIWPSO outperforms the standard PSO: it presents faster convergence and obtains solutions closer to the optimal.