# Metaheuristic

In computer science, **metaheuristic** designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee an optimal solution is ever found. Many metaheuristics implement some form of stochastic optimization.

Other terms with a meaning similar to *metaheuristic* are derivative-free, direct search, black-box, or simply heuristic optimizer. Several books and survey papers have been published on the subject.^{[1]}^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}^{[7]}

Metaheuristics are used for combinatorial optimization, in which an optimal solution is sought over a discrete search-space. An example problem is the travelling salesman problem, where the search-space of candidate solutions grows factorially, i.e. faster than exponentially, as the size of the problem increases, which makes an exhaustive search for the optimal solution infeasible. Additionally, multidimensional combinatorial problems, including most design problems in engineering such as form-finding and behavior-finding, suffer from the curse of dimensionality, which also makes them infeasible for exhaustive search or analytical methods. Popular metaheuristics for combinatorial problems include simulated annealing by Kirkpatrick et al.,^{[8]} genetic algorithms by Holland et al.,^{[9]} ant colony optimization by Dorigo,^{[10]} scatter search^{[11]} and tabu search^{[12]} by Glover.
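As an illustration, simulated annealing for the travelling salesman problem can be sketched in a few lines. This is a minimal sketch, not Kirkpatrick et al.'s original formulation: the move operator (a two-city swap), the exponential cooling schedule, and all parameter values are illustrative assumptions.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, iters=20000, t_start=10.0, t_end=1e-3, seed=0):
    """Minimize tour length by simulated annealing with 2-city swap moves.

    `dist` is an n-by-n symmetric distance matrix; the hyperparameters are
    illustrative defaults, not values from the literature.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    for k in range(iters):
        # Exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / iters)
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]          # propose a swap
        new_len = tour_length(tour, dist)
        # Always accept improvements; accept worsenings with Boltzmann probability.
        if new_len < cur_len or rng.random() < math.exp((cur_len - new_len) / t):
            cur_len = new_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        else:
            tour[i], tour[j] = tour[j], tour[i]      # undo the swap
    return best, best_len
```

Note how the method only ever *evaluates* candidate tours and accepts occasional worsening moves; it never enumerates the search-space, which is what makes it applicable where exhaustive search is not.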

Metaheuristics are also used for problems over real-valued search-spaces, where the classic way of optimization is to derive the gradient of the function to be optimized and then employ gradient descent or a quasi-Newton method. Metaheuristics do not use the gradient or Hessian matrix so their advantage is that the function to be optimized need not be continuous or even differentiable. Popular metaheuristic optimizers for real-valued search-spaces include particle swarm optimization by Eberhart and Kennedy,^{[13]} differential evolution by Storn and Price^{[14]} and evolution strategies by Rechenberg^{[15]} and Schwefel.^{[16]}
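The gradient-free character of these optimizers can be seen in a minimal sketch of differential evolution (the DE/rand/1/bin variant): only function values are evaluated, so the objective need not be continuous or differentiable. The control parameters and the box-clipping rule below are common illustrative choices, not prescriptions from Storn and Price.

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_weight=0.8, cr=0.9,
                           generations=200, seed=0):
    """Minimize `f` over the box `bounds` with a DE/rand/1/bin sketch.

    `bounds` is a list of (lo, hi) pairs, one per dimension. Only values
    of `f` are used; no gradient or Hessian is ever computed.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)       # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < cr:
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip back into the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fitness[i]:              # greedy one-to-one selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]
```

Because `f` is treated as a black box, the same code runs unchanged on noisy, discontinuous, or non-differentiable objectives where gradient descent and quasi-Newton methods do not apply.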

Metaheuristics based on decomposition techniques have also been proposed for tackling hard combinatorial problems of large size.^{[17]}

*Timeline of main contributions.*

Many different metaheuristics are in existence and new variants are continually being proposed. Some of the most significant contributions to the field are:

- 1952: Robbins and Monro work on stochastic optimization methods.^{[18]}
- 1952: Fermi and Metropolis develop an early form of pattern search as described belatedly by Davidon.^{[19]}
- 1954: Barricelli carries out the first simulations of the evolution process and uses them on general optimization problems.^{[20]}
- 1963: Rastrigin proposes random search.^{[21]}
- 1965: Matyas proposes random optimization.^{[22]}
- 1965: Rechenberg proposes evolution strategy.^{[23]}
- 1965: Nelder and Mead propose a simplex heuristic, which was shown by Powell to converge to non-stationary points on some problems.^{[24]}
- 1966: Fogel et al. propose evolutionary programming.^{[25]}
- 1970: Hastings proposes the Metropolis–Hastings algorithm.^{[26]}
- 1970: Cavicchio proposes adaptation of control parameters for an optimizer.^{[27]}
- 1970: Kernighan and Lin propose a graph partitioning method, related to variable-depth search and prohibition-based (tabu) search.^{[28]}
- 1975: Holland proposes the genetic algorithm.^{[9]}
- 1977: Glover proposes scatter search.^{[11]}
- 1978: Mercer and Sampson propose a metaplan for tuning an optimizer's parameters by using another optimizer.^{[29]}
- 1980: Smith describes genetic programming.^{[30]}
- 1983: Kirkpatrick et al. propose simulated annealing.^{[8]}
- 1986: Glover proposes tabu search, first mention of the term *metaheuristic*.^{[12]}
- 1986: Farmer et al. work on the artificial immune system.^{[31]}
- 1986: Grefenstette proposes another early form of metaplan for tuning an optimizer's parameters by using another optimizer.^{[32]}
- 1988: First conference on genetic algorithms is organized at the University of Illinois at Urbana-Champaign.
- 1988: Koza registers his first patent on genetic programming.^{[33]}^{[34]}
- 1989: Goldberg publishes a well-known book on genetic algorithms.^{[1]}
- 1989: Evolver, the first optimization software using the genetic algorithm.
- 1989: Moscato proposes the memetic algorithm.^{[35]}
- 1991: Interactive evolutionary computation.
- 1992: Dorigo proposes the ant colony algorithm.^{[10]}
- 1993: The journal *Evolutionary Computation* begins publication by the Massachusetts Institute of Technology.
- 1993: Fonseca and Fleming propose MOGA for multiobjective optimization.^{[36]}
- 1994: Battiti and Tecchiolli propose Reactive Search Optimization (RSO) principles for the online self-tuning of heuristics.^{[37]}^{[38]}
- 1994: Srinivas and Deb propose NSGA for multiobjective optimization.^{[39]}
- 1995: Kennedy and Eberhart propose particle swarm optimization.^{[13]}
- 1995: Wolpert and Macready prove the no free lunch theorems.^{[40]}^{[41]}
- 1996: Mühlenbein and Paaß work on the estimation of distribution algorithm.^{[42]}
- 1996: Hansen and Ostermeier propose CMA-ES.^{[43]}
- 1997: Storn and Price propose differential evolution.^{[14]}
- 1997: Rubinstein proposes the cross-entropy method.^{[44]}
- 1999: Taillard and Voss propose POPMUSIC.^{[17]}
- 2001: Geem et al. propose harmony search.^{[45]}
- 2002: Deb et al. propose NSGA-II for multiobjective optimization.^{[46]}
- 2004: Nakrani and Tovey propose bees optimization.^{[47]}
- 2005: Krishnanand and Ghose propose glowworm swarm optimization.^{[48]}^{[49]}^{[50]}
- 2005: Karaboga proposes the artificial bee colony algorithm (ABC).^{[51]}
- 2006: Haddad et al. introduce honey-bee mating optimization.^{[52]}
- 2007: Hamed Shah-Hosseini introduces intelligent water drops.
- 2008: Wierstra et al. propose natural evolution strategies based on the natural gradient.^{[53]}
- 2008: Yang introduces the firefly algorithm.^{[54]}
- 2008: Mucherino and Seref propose monkey search.
- 2009: Yang and Deb introduce cuckoo search.^{[55]}
- 2011: Hamed Shah-Hosseini proposes the galaxy-based search algorithm.^{[56]}
- 2011: Tamura and Yasuda propose spiral optimization.^{[57]}^{[58]}

Mathematical analyses of metaheuristics have been presented in the literature, see e.g. Holland's schema theorem^{[9]} for the genetic algorithm, Rechenberg's work^{[15]} on evolution strategies, the work by Trelea,^{[59]} amongst others, for analysis of particle swarm optimization, and Zaharie^{[60]} for analysis of differential evolution. These analyses make a number of assumptions about the optimizer variants and the simplicity of the optimization problems, which limits their validity in real-world optimization scenarios. Performance and convergence aspects of metaheuristic optimizers are therefore often demonstrated empirically in the research literature. This practice is put in perspective by the no free lunch set of theorems of Wolpert and Macready,^{[41]} which, among other things, prove that all optimizers perform equally well when averaged over all problems. The practical relevance of the no free lunch theorems is minor, however, because their assumptions generally do not hold for the limited collection of problems a practitioner actually faces.

For the practitioner the most relevant issue is that metaheuristics are not guaranteed to find the optimum or even a satisfactory near-optimal solution. All metaheuristics will eventually encounter problems on which they perform poorly and the practitioner must gain experience in which optimizers work well on different classes of problems.

## See also

Metaheuristic methods are, generally speaking, sub-fields of:

- Stochastic optimization

Sub-fields of metaheuristics include:

- Local search
- Evolutionary computation, which includes, amongst others:
  - Genetic algorithms
  - Evolution strategies
  - Evolutionary programming
  - Genetic programming

Other fields of interest:

- Reactive Search Optimization
- Meta-optimization
- Hyper-heuristics
