Dragonfly algorithm


Inspiration

An aggregation of dragonflies in a dynamic swarm (migration)

The main inspiration of the Dragonfly Algorithm (DA), proposed in 2015 [1], originates from the static and dynamic swarming behaviours of dragonflies. These two swarming behaviours are very similar to the two main phases of optimization using meta-heuristics: exploration and exploitation. In a static swarm, dragonflies create sub-swarms and fly over different areas, which is the main objective of the exploration phase. In a dynamic swarm, however, dragonflies fly in one bigger swarm and along one direction, which is favourable in the exploitation phase.

Five primitive principles of swarming

For simulating the swarming behaviour of dragonflies, three primitive principles of swarming in insects proposed by Reynolds [2], as well as two new concepts, have been utilized: separation, alignment, cohesion, attraction towards a food source, and distraction from enemies.

These five concepts allow us to simulate the behaviour of dragonflies in both dynamic and static swarms. The DA algorithm is developed based on the framework of the Particle Swarm Optimization (PSO) algorithm, so there are two main vectors: step vector and position vector. These vectors store the movement directions/speed and position of dragonflies, respectively. The main equations for these two vectors are as follows:

ΔX_{t+1} = (s·S_i + a·A_i + c·C_i + f·F_i + e·E_i) + w·ΔX_t        (3.6)

where s shows the separation weight, S_i indicates the separation of the i-th individual, a is the alignment weight, A_i is the alignment of the i-th individual, c indicates the cohesion weight, C_i is the cohesion of the i-th individual, f is the food factor, F_i is the food source of the i-th individual, e is the enemy factor, E_i is the position of the enemy of the i-th individual, w is the inertia weight, and t is the iteration counter.

The equations for S, A, C, F, and E are as follows:

S_i = -Σ_{j=1}^{N} (X - X_j)        (3.1)
A_i = (Σ_{j=1}^{N} V_j) / N        (3.2)
C_i = (Σ_{j=1}^{N} X_j) / N - X        (3.3)
F_i = X⁺ - X        (3.4)
E_i = X⁻ + X        (3.5)

Swarm simulation of dragonflies when s=0.1, a=0.1, c=0.7, f=1, and e=1. The green asterisk is the food source, the red asterisk indicates the enemy, black circles are individuals, and blue lines are the step vectors of the dragonflies.


where X is the position of the current dragonfly, X⁺ is the position of the food source, X⁻ is the position of the enemy, N is the number of neighbouring dragonflies, X_j is the position of the j-th neighbouring dragonfly, and V_j is the step (velocity) vector of the j-th neighbouring dragonfly.


With the step vector, the positions of the dragonflies are updated with the following equation:

X_{t+1} = X_t + ΔX_{t+1}        (3.7)

The figure on the right shows how the proposed model moves the individuals around the search space with respect to each other as well as the food source and the enemy (the green asterisk is the food source, the red asterisk indicates the enemy, black circles are individuals, and blue lines are the step vectors of the dragonflies; to replay the animation, click on the figure). With the parameters s, a, c, f, and e, different swarming behaviours can be simulated. For this figure, s=0.1, a=0.1, c=0.7, f=1, and e=1 were used. The flying speed of the dragonflies slows down because w is linearly decreased from 0.9 to 0.4; otherwise, the dragonflies would only explore the search space without converging towards a point (exploitation). Note that there is no random walk in the movement of isolated individuals in this figure, whereas in DA an individual moves using a random walk (Lévy flight) if it has no neighbouring solutions at all.
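
To make the update concrete, the following sketch (Python with NumPy; the function name, argument names, and default coefficient values are illustrative assumptions, not taken from a reference implementation) computes S, A, C, F, and E for a single dragonfly and applies the step and position updates of Eqs. (3.1) to (3.7):

import numpy as np

def da_single_update(X, dX, neigh_X, neigh_dX, X_food, X_enemy,
                     s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """One DA update for a dragonfly that has at least one neighbour.

    X, dX              : position and step vector of the current dragonfly (1-D arrays)
    neigh_X, neigh_dX  : positions and step vectors of the N neighbours, shape (N, dim)
    X_food, X_enemy    : positions of the food source and the enemy
    """
    S = -np.sum(X - neigh_X, axis=0)        # separation, Eq. (3.1)
    A = np.mean(neigh_dX, axis=0)           # alignment, Eq. (3.2)
    C = np.mean(neigh_X, axis=0) - X        # cohesion, Eq. (3.3)
    F = X_food - X                          # attraction to food, Eq. (3.4)
    E = X_enemy + X                         # distraction from enemy, Eq. (3.5)

    dX_new = s*S + a*A + c*C + f*F + e*E + w*dX   # step vector, Eq. (3.6)
    X_new = X + dX_new                            # position, Eq. (3.7)
    return X_new, dX_new

# Example with made-up two-dimensional values:
X_new, dX_new = da_single_update(
    X=np.array([0.5, -1.0]), dX=np.zeros(2),
    neigh_X=np.array([[0.6, -0.8], [0.3, -1.2]]),
    neigh_dX=np.array([[0.1, 0.0], [0.0, 0.1]]),
    X_food=np.array([1.0, 1.0]), X_enemy=np.array([-1.0, -1.0]))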

DA algorithm

The DA algorithm has been proposed for solving single-objective optimization problems. The pseudocode of this algorithm is as follows (for details of the equations, please refer to the paper or the equations above):

Initialize the dragonflies population Xi (i = 1, 2, ..., n)
Initialize step vectors ΔXi (i = 1, 2, ..., n)
while the end condition is not satisfied
       Calculate the objective values of all dragonflies
       Update the food source and enemy
       Update w, s, a, c, f, and e
       Calculate S, A, C, F, and E using Eqs. (3.1) to (3.5) in the paper (or above the page)
       Update neighbouring radius
       if a dragonfly has at least one neighbouring dragonfly
               Update step vector using Eq. (3.6) in the paper (or above the page)
               Update position vector using Eq. (3.7) in the paper (or above the page)
       else
               Update position vector using Eq. (3.8) in the paper (or above the page)
       end if
       Check and correct the new positions based on the boundaries of variables
end while
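
As a minimal sketch of this loop (Python/NumPy on a simple sphere function; the linear schedules for w, s, a, c, f, and e and the heavy-tailed random walk are illustrative assumptions rather than the exact settings of the paper), one possible implementation is:

import numpy as np

def dragonfly_algorithm(obj, dim, lb, ub, n=30, max_iter=200, seed=0):
    """Sketch of single-objective DA (illustrative, not the reference code)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, dim))    # positions
    dX = np.zeros((n, dim))                   # step vectors
    fitness = np.array([obj(x) for x in X])

    for t in range(max_iter):
        # Illustrative linear schedules: inertia decreases, swarm weights shrink over time.
        w = 0.9 - t * (0.9 - 0.4) / max_iter
        ratio = max(0.1 - t * 0.1 / (max_iter / 2), 0.0)
        s, a, c = 2 * rng.random(3) * ratio   # separation, alignment, cohesion weights
        f, e = 2 * rng.random(), ratio        # food and enemy factors

        food = X[np.argmin(fitness)].copy()   # best solution in this iteration
        enemy = X[np.argmax(fitness)].copy()  # worst solution in this iteration

        # Neighbourhood radius grows so the swarm eventually merges into one group.
        r = (ub - lb) / 4 + (ub - lb) * 2 * (t + 1) / max_iter

        for i in range(n):
            dist = np.abs(X - X[i])
            neigh = np.where(np.all(dist <= r, axis=1) & (np.arange(n) != i))[0]
            if len(neigh) > 0:
                S = -np.sum(X[i] - X[neigh], axis=0)        # Eq. (3.1)
                A = np.mean(dX[neigh], axis=0)              # Eq. (3.2)
                C = np.mean(X[neigh], axis=0) - X[i]        # Eq. (3.3)
                F = food - X[i]                             # Eq. (3.4)
                E = enemy + X[i]                            # Eq. (3.5)
                dX[i] = s*S + a*A + c*C + f*F + e*E + w*dX[i]   # Eq. (3.6)
                X[i] = X[i] + dX[i]                             # Eq. (3.7)
            else:
                # Random walk when there is no neighbour (a heavy-tailed step standing
                # in for the Levy flight of Eq. (3.8)).
                X[i] = X[i] + 0.01 * rng.standard_cauchy(dim) * X[i]
                dX[i] = 0.0

            X[i] = np.clip(X[i], lb, ub)      # keep positions inside the variable bounds

        fitness = np.array([obj(x) for x in X])

    best = np.argmin(fitness)
    return X[best], fitness[best]

# Example: minimise the sphere function in 5 dimensions.
x_best, f_best = dragonfly_algorithm(lambda x: float(np.sum(x**2)), dim=5, lb=-10.0, ub=10.0)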

Binary Dragonfly Algorithm (BDA)

A V-shaped transfer function

Since the DA algorithm is only able to solve continuous problems, it must be modified to solve binary problems. In discrete binary spaces, updating a position means switching between the values 0 and 1. The binary version of this algorithm has been named Binary DA (BDA), which is suitable for solving discrete problems. To perform the conversion, a V-shaped transfer function is used. A transfer function maps a continuous search space to a binary one; such functions are computationally cheap tools for converting a continuous algorithm to a binary one and define the probability of changing the elements of a position vector from 0 to 1 or vice versa. The transfer function that has been used is illustrated on the right (for the equation, please refer to the paper).
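
As an illustration, a commonly used V-shaped transfer function has the form T(Δx) = |Δx / √(Δx² + 1)|; the small sketch below (Python; the function name is an assumption) shows how it maps step-vector components to flip probabilities:

import numpy as np

def v_transfer(delta_x):
    """V-shaped transfer function: the larger the magnitude of a step-vector
    component, the higher the probability of flipping the corresponding bit."""
    return np.abs(delta_x / np.sqrt(delta_x ** 2 + 1.0))

print(v_transfer(np.array([0.0, 0.5, 2.0, -5.0])))   # approx. [0.0, 0.447, 0.894, 0.981]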

Finally, the pseudocode of the BDA algorithm is as follows:

Initialize the dragonflies population Xi (i = 1, 2, ..., n)
Initialize step vectors ΔXi (i = 1, 2, ..., n)
while the end condition is not satisfied
      Calculate the objective values of all dragonflies
      Update the food source and enemy
      Update w, s, a, c, f, and e 
      Calculate S, A, C, F, and E using Eqs. (3.1) to (3.5) in the paper (or above the page)
      Update step vectors using Eq. (3.6) in the paper (or above the page)
      Calculate the probabilities using Eq. (3.11) in the paper 
      Update position vectors using Eq. (3.12) in the paper (or above the page)
end while
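
Putting these steps together, a minimal sketch of the BDA loop on a toy binary problem is shown below (Python/NumPy; the one-max objective, the fixed coefficient values, and the treatment of all dragonflies as neighbours are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
n, dim = 10, 8                                   # population size and number of bits
X = rng.integers(0, 2, size=(n, dim))            # binary positions
dX = np.zeros((n, dim))                          # continuous step vectors
obj = lambda x: -np.sum(x)                       # toy objective: maximise the number of ones
s, a, c, f, e, w = 0.1, 0.1, 0.7, 1.0, 1.0, 0.9  # fixed illustrative coefficients

for t in range(50):
    fit = np.array([obj(x) for x in X])
    food, enemy = X[np.argmin(fit)].copy(), X[np.argmax(fit)].copy()

    for i in range(n):
        others = np.delete(np.arange(n), i)          # here: every other dragonfly is a neighbour
        S = -np.sum(X[i] - X[others], axis=0)        # Eq. (3.1)
        A = np.mean(dX[others], axis=0)              # Eq. (3.2)
        C = np.mean(X[others], axis=0) - X[i]        # Eq. (3.3)
        F = food - X[i]                              # Eq. (3.4)
        E = enemy + X[i]                             # Eq. (3.5)
        dX[i] = s*S + a*A + c*C + f*F + e*E + w*dX[i]        # Eq. (3.6)

        prob = np.abs(dX[i] / np.sqrt(dX[i] ** 2 + 1))       # V-shaped transfer function, Eq. (3.11)
        flip = rng.random(dim) < prob
        X[i] = np.where(flip, 1 - X[i], X[i])                # Eq. (3.12)

print(X[np.argmin(np.array([obj(x) for x in X]))])           # best binary vector found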

Multi-objective Dragonfly Algorithm (MODA)

Solving a multi-objective problem using a meta-heuristic requires special considerations. In contrast to single-objective optimization, there is no single solution when multiple objectives are considered as the goal of the optimization process. In this case, the optimal solutions of a multi-objective problem form a set of solutions that represents various trade-offs between the objectives. Before 1984, mathematical multi-objective optimization techniques were popular among researchers in different fields of study such as applied mathematics, operations research, and computer science. However, since the majority of the conventional approaches (including deterministic methods) suffered from stagnation in local optima, such techniques are not as widely applicable nowadays. This is why stochastic optimization algorithms have become reliable alternatives, owing to their high local optima avoidance.

As mentioned above, there is no longer a single solution for a multi-objective problem. The existence of multiple objectives prevents us from comparing solutions with relational operators such as > and <. Therefore, we have to use the definition of Pareto optimality to compare solutions. In this case, a solution is better than (dominates) another solution if and only if it shows a better or equal value on all of the objectives and a strictly better value in at least one of the objective functions. The answer to such problems is a set of solutions called the Pareto optimal solution set. This set includes the Pareto optimal solutions that represent the best trade-offs between the objectives.
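
As a concrete illustration of this comparison, a dominance check can be written as follows (a minimal Python sketch; the function name and the minimisation convention are assumptions):

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b (assuming all
    objectives are minimised): no worse on every objective, better on at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

# Example: (1, 2) dominates (2, 2); (1, 3) and (2, 2) are non-dominated w.r.t. each other.
print(dominates((1, 2), (2, 2)), dominates((1, 3), (2, 2)), dominates((2, 2), (1, 3)))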

In order to solve multi-objective problems using meta-heuristics, an archive (repository) is widely used in the literature to maintain the Pareto optimal solutions during optimization. Two key points in finding a proper set of Pareto optimal solutions for a given problem are convergence and coverage. Convergence refers to the ability of a multi-objective algorithm to determine accurate approximations of the Pareto optimal solutions. Coverage is the distribution of the obtained Pareto optimal solutions along the objectives. Since most of the current multi-objective algorithms in the literature are of a posteriori type, the coverage and number of solutions are very important for decision making after the optimization process. The ultimate goal of a multi-objective optimizer is to find the most accurate approximation of the true Pareto optimal solutions (convergence) with a uniform distribution (coverage) across all objectives.

For solving multi-objective problems with the DA algorithm, it is first equipped with an archive to store and retrieve the best approximations of the true Pareto optimal solutions during optimization. The position-updating mechanism of the search agents is identical to that of DA, but the food sources are selected from the archive. In order to find a well-spread Pareto optimal front, a food source is chosen from the least populated region of the obtained front, similarly to the Multi-Objective Particle Swarm Optimization (MOPSO) [3] algorithm. To find the least populated area of the Pareto optimal front, the search space is segmented: the best and worst objective values of the obtained Pareto optimal solutions are found, a hyper-sphere is defined to cover all the solutions, and this hyper-sphere is divided into equal sub hyper-spheres in each iteration. After the creation of the segments, the selection is done by a roulette-wheel mechanism, which gives the MODA algorithm a higher probability of choosing food sources from the less populated segments. The artificial dragonflies are therefore encouraged to fly around such regions, improving the distribution of the whole Pareto optimal front. For selecting enemies from the archive, however, the worst (most populated) hyper-sphere is chosen in order to discourage the artificial dragonflies from searching around non-promising, crowded areas. This selection is again done by a roulette-wheel mechanism.
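
A minimal sketch of this selection step is given below (Python/NumPy; the inverse-crowding weights and all names are illustrative assumptions, not the exact probabilities used in the paper):

import numpy as np

def select_food_and_enemy(archive_positions, segment_ids, rng=np.random.default_rng()):
    """Roulette-wheel selection from hyper-sphere segments:
    food from a sparsely populated segment, enemy from a crowded one.
    archive_positions : NumPy array of archive solutions, shape (m, dim)
    segment_ids       : NumPy array of length m assigning each member to a segment
    """
    segments, counts = np.unique(segment_ids, return_counts=True)

    # Food source: probability inversely proportional to segment population.
    p_food = (1.0 / counts) / np.sum(1.0 / counts)
    food_seg = rng.choice(segments, p=p_food)
    food = archive_positions[rng.choice(np.where(segment_ids == food_seg)[0])]

    # Enemy: probability proportional to segment population.
    p_enemy = counts / np.sum(counts)
    enemy_seg = rng.choice(segments, p=p_enemy)
    enemy = archive_positions[rng.choice(np.where(segment_ids == enemy_seg)[0])]

    return food, enemy

# Example: 6 archive members assigned (by the segmentation step) to 3 segments.
positions = np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85], [0.5, 0.5], [0.9, 0.1], [0.8, 0.2]])
seg_ids = np.array([0, 0, 0, 1, 2, 2])
food, enemy = select_food_and_enemy(positions, seg_ids)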

The conceptual model of the best hyper-spheres for selecting a food source or removing a solution from the archive is shown in the figure on the right.

Multi-objective optimization

The archive should be updated regularly in each iteration and may become full during optimization. Therefore, there should be a mechanism to manage the archive. If a solution is dominated by at least one of the archive residents, it is prevented from entering the archive. If a solution dominates some of the Pareto optimal solutions in the archive, those solutions are removed from the archive and the new solution enters it. If a solution is non-dominated with respect to all of the solutions in the archive, it is added to the archive. If the archive is full, one or more solutions may be removed from the most populated segments to accommodate the new solution(s). These rules are taken from the original MOPSO paper by Coello Coello and his colleagues [4]. The figure above shows the best candidate hyper-spheres (segments) from which to remove solutions (enemies) in case the archive becomes full. All the parameters of the MODA algorithm are identical to those of the DA algorithm, except for two new parameters defining the maximum number of hyper-spheres and the archive size.
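
These archive rules can be sketched as follows (Python; all names are illustrative assumptions, and the removal step simply drops an archive member from the most crowded segment, whose index is assumed to be supplied by the segmentation step sketched above):

def dominates(f_a, f_b):
    """f_a Pareto-dominates f_b (minimisation): no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def update_archive(archive, candidate, max_size, most_crowded_index):
    """Apply the archive rules to one candidate objective vector.

    archive            : list of objective vectors (the current non-dominated set)
    candidate          : objective vector of the new solution
    most_crowded_index : index of an archive member in the most populated segment
    """
    # Rule 1: a candidate dominated by any archive member is rejected.
    if any(dominates(member, candidate) for member in archive):
        return archive

    # Rule 2: archive members dominated by the candidate are removed.
    archive = [m for m in archive if not dominates(candidate, m)]

    # Rule 3: if the archive is still full, make room by removing a solution from the
    # most crowded segment (index handling simplified here), then insert the candidate.
    if len(archive) >= max_size:
        archive.pop(most_crowded_index % len(archive))
    archive.append(candidate)
    return archive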

Finally, the pseudocode of MODA is as follows (for the equations, please refer to the paper):

Initialize the dragonflies population Xi (i = 1, 2, ..., n) 
Initialize step vectors ΔXi (i = 1, 2, ..., n)
Define the maximum number of hyper spheres (segments)
Define the archive size
while the end condition is not satisfied 
      Calculate the objective values of all dragonflies
      Find the non-dominated solutions
      Update the archive with respect to the obtained non-dominated solutions
      if the archive is full 
             Run the archive maintenance mechanism to omit one of the current archive members
             Add the new solution to the archive
      end if
      if any of the newly added solutions to the archive is located outside the hyper spheres
             Update and re-position all of the hyper spheres to cover the new solution(s)
      end if
      Select a food source from the archive: X⁺ = SelectFood(archive)
      Select an enemy from the archive: X⁻ = SelectEnemy(archive)
      Update step vectors using Eq. (3.11) in the paper (or above the page)
      Update position vectors using Eq. (3.12) in the paper (or above the page)
      Check and correct the new positions based on the boundaries of variables
end while

References

  1. S. Mirjalili, "Dragonfly Algorithm: A New Meta-heuristic Optimization Technique for Solving Single-objective, Discrete, and Multi-objective Problems", Neural Computing and Applications, in press, 2015, DOI: http://dx.doi.org/10.1007/s00521-015-1920-1
  2. Reynolds, Craig W. "Flocks, herds and schools: A distributed behavioral model." ACM SIGGRAPH computer graphics. Vol. 21. No. 4. ACM, 1987.
  3. Coello, Carlos A. Coello, and Maximino Salazar Lechuga. "MOPSO: A proposal for multiple objective particle swarm optimization." Evolutionary Computation, 2002. CEC'02. Proceedings of the 2002 Congress on. Vol. 2. IEEE, 2002.
  4. Coello, Carlos A. Coello, Gregorio Toscano Pulido, and M. Salazar Lechuga. "Handling multiple objectives with particle swarm optimization." Evolutionary Computation, IEEE Transactions on 8.3 (2004): 256-279.

External links