Simulated annealing

From Encyclopedia of Mathematics
Revision as of 08:14, 6 June 2020


For $ {\mathcal N} {\mathcal P} $-hard optimization problems, the use of exact algorithms to find an optimal solution is computationally intensive, requiring an effort that increases exponentially with the size of the problem (cf. also Algorithm, computational complexity of an). In practice, exact algorithms are used only for moderately sized problem instances. This has motivated the development of heuristic optimization techniques, which provide good-quality solutions in a reasonable amount of computational time. One such popular technique is simulated annealing, which has been widely applied to both discrete and continuous optimization problems [a1], [a6], [a11]. Simulated annealing is a stochastic search method modeled on the physical annealing process from thermodynamics. Annealing refers to the process in which a thermal system is first melted at high temperature and then cooled slowly by lowering the temperature until it reaches a stable state (the ground state), in which the system has its lowest energy. S. Kirkpatrick, C.D. Gelatt and M.P. Vecchi [a7] first proposed the connection between simulated annealing and combinatorial optimization, building on the original work of N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller [a8].

The main advantage of the simulated annealing algorithm is its ability to escape from local optima through a mechanism that allows deterioration in the objective function value. That is, during the optimization process, simulated annealing accepts not only better-than-previous solutions but also worse-quality solutions, controlled probabilistically through the temperature parameter $ T $. More specifically, in the first stages of simulated annealing, where $ T $ is relatively high, the solution space is explored widely, so that different solution directions are identified and "bad" solutions are often accepted with high probability. During the course of the algorithm the temperature $ T $ decreases, in order to steadily reduce the probability $ {\mathsf P} $ of accepting solutions that lead to worse objective function values. By allowing such controlled "uphill" moves, the algorithm can avoid entrapment in local optima and, eventually, higher-quality solutions can be obtained.

There are many factors that have a strong impact on the performance of the simulated annealing algorithm:

The initial temperature $ T _ {1} $. A high value of $ T _ {1} $ means that the probability $ {\mathsf P} $ of accepting inferior solutions is also high, leading to a diversified search in the first iterations of the algorithm. With low values of the initial temperature the search becomes more localized.

The thermal equilibrium. This is the condition in which further improvement in the solution cannot be expected with high probability.

The annealing schedule, which determines at which point of the algorithm, and by how much, the temperature $ T $ is to be reduced.
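For illustration, one widely used choice is a geometric schedule, which reduces the temperature by a constant factor at each stage (this particular schedule is an assumption for the example, not prescribed above):

```python
def geometric_schedule(t1, alpha, stages):
    """Yield the temperatures T_1, alpha*T_1, alpha^2*T_1, ...

    t1     -- initial temperature T_1
    alpha  -- cooling factor in (0, 1)
    stages -- number of temperature stages f
    """
    t = t1
    for _ in range(stages):
        yield t
        t *= alpha
```

Smaller values of `alpha` cool the system faster, trading solution quality for running time.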

Now, consider a minimization process. Let $ \Delta E $ denote the change in the objective function value between the current state and the state under consideration, which occurs as $ T $ decreases. This change corresponds to the change in the energy level in the analogy with physical annealing. Then the probability $ {\mathsf P} $ of accepting a worse-quality solution is equal to $ e^{-\Delta E / (k_{B} T)} $, where $ k_{B} $ is the Boltzmann constant (cf. Boltzmann equation). Simulated annealing is presented below in pseudo-code:
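This acceptance rule can be sketched directly (a minimal sketch; in optimization applications the Boltzmann constant $ k_{B} $ is commonly absorbed into $ T $, i.e. set to $ 1 $, as assumed here):

```python
import math
import random

def accept(delta_e, t, rng=random.random):
    """Metropolis acceptance rule for minimization.

    Improving moves (delta_e < 0) are always accepted; a worsening move
    is accepted with probability exp(-delta_e / t), which shrinks as the
    temperature t decreases.
    """
    if delta_e < 0:
        return True
    return rng() < math.exp(-delta_e / t)
```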

 PROCEDURE simulated annealing()
  InputInstance();
  Generate randomly an initial solution;
  initialize $ T $;
  DO $ T > 0 $ $ \rightarrow $
   DO thermal equilibrium not reached $ \rightarrow $
    Generate a neighbour state randomly;
    evaluate $ \Delta E $;
    update current state
     IF $ \Delta E < 0 $ $ \rightarrow $ with new state;
     IF $ \Delta E \geq 0 $ $ \rightarrow $ with new state with probability $ e^{-\Delta E / (k_{B} T)} $;
   OD;
   Decrease $ T $ using annealing schedule;
  OD;
  RETURN(solution with the lowest energy)
 END simulated annealing;

A pseudo-code for a simulated annealing procedure
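The pseudo-code above can be turned into a runnable sketch (a minimal illustration on a toy one-dimensional problem; the neighbour move, the geometric schedule, the fixed number of moves per stage standing in for thermal equilibrium, and $ k_{B} = 1 $ are all assumptions made for the example):

```python
import math
import random

def simulated_annealing(energy, neighbour, x0, t1=10.0, alpha=0.95,
                        t_min=1e-3, moves_per_stage=50, seed=0):
    """Generic simulated annealing for minimizing `energy`.

    energy          -- objective function to minimize
    neighbour       -- maps (state, rng) to a random neighbouring state
    x0              -- initial solution
    t1, alpha, t_min, moves_per_stage -- illustrative schedule parameters
    """
    rng = random.Random(seed)
    current, best = x0, x0
    t = t1
    while t > t_min:                      # DO T > 0
        for _ in range(moves_per_stage):  # stands in for thermal equilibrium
            cand = neighbour(current, rng)
            delta_e = energy(cand) - energy(current)
            if delta_e < 0 or rng.random() < math.exp(-delta_e / t):
                current = cand            # accept the move
                if energy(current) < energy(best):
                    best = current
        t *= alpha                        # decrease T (geometric schedule)
    return best                           # solution with the lowest energy seen

# Toy usage: minimize (x - 3)^2 over the reals.
f = lambda x: (x - 3.0) ** 2
step = lambda x, rng: x + rng.uniform(-1.0, 1.0)
x_best = simulated_annealing(f, step, x0=0.0)
```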

The following example (the quadratic assignment problem) illustrates the basic principles of simulated annealing in combinatorial optimization. The quadratic assignment problem is defined as follows: Given a set $ N = \{ 1 \dots n \} $ and two $ ( n \times n ) $-matrices $ F = ( f _ {ij } ) $ and $ D = ( d _ {kl } ) $, find a permutation $ p $ of the set $ N $ that minimizes the following function:

$$ \sum _ {i = 1 } ^ { n } \sum _ {j = 1 } ^ { n } f _ {ij } d _ {p ( i ) p ( j ) } . $$
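This objective can be evaluated directly (a small sketch; the permutation $ p $ is represented as a Python list with $ p[i] $ the location of facility $ i $, and the tiny matrices are made up for the example):

```python
def qap_cost(F, D, p):
    """Quadratic assignment cost: sum over i, j of f_ij * d_{p(i) p(j)}."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

# Tiny 2x2 instance: identity assignment vs. swapped assignment.
F = [[0, 3], [2, 0]]
D = [[0, 1], [5, 0]]
qap_cost(F, D, [0, 1])  # 3*d_01 + 2*d_10 = 3*1 + 2*5 = 13
qap_cost(F, D, [1, 0])  # 3*d_10 + 2*d_01 = 3*5 + 2*1 = 17
```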

In the context of location theory, the quadratic assignment problem formulation models the problem of allocating $ n $ facilities to $ n $ locations with the objective of minimizing a cost that depends not only on the distances between locations but also on the flows between facilities. $ F $ and $ D $ correspond to the flow and distance matrices, respectively [a10].

Let $ T _ {i} $ represent the temperature at stage $ i $ of the procedure and let $ T _ {1} > \dots > T _ {f} $ represent the annealing schedule. Then the application of simulated annealing to the quadratic assignment problem [a2], [a12] can be described with the following steps:

Start with a feasible solution (a permutation). Make an exchange between two randomly selected permutation elements (a $ 2 $-exchange). Evaluate the resulting change $ \Delta E $.

While $ \Delta E < 0 $, repeat the above step. If $ \Delta E \geq 0 $, then randomly select a variable $ x $ from a uniform distribution $ U ( 0,1 ) $. Accept the pair exchange if $ x < {\mathsf P} ( \Delta E ) = e^{-\Delta E / T _ {i} } $, and repeat the process.

The system remains at stage $ i $ until a fixed number of pair exchanges (equilibrium) has taken place before going to the next stage.

The procedure stops when all the temperatures in the annealing schedule have been used, i.e. when $ i > f $.
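Putting the steps together gives the following sketch (illustrative only: the stage length, the tiny instance, and the cost function are assumptions made for the example; as in the steps above, $ k_{B} $ is absorbed into $ T _ {i} $):

```python
import math
import random

def qap_cost(F, D, p):
    """Quadratic assignment cost: sum over i, j of f_ij * d_{p(i) p(j)}."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def sa_qap(F, D, schedule, exchanges_per_stage=100, seed=0):
    """Simulated annealing for the QAP with random 2-exchanges.

    schedule -- decreasing temperatures T_1 > ... > T_f
    """
    rng = random.Random(seed)
    n = len(F)
    p = list(range(n))
    rng.shuffle(p)                        # random feasible permutation
    best, best_cost = p[:], qap_cost(F, D, p)
    for t in schedule:                    # stages i = 1, ..., f
        for _ in range(exchanges_per_stage):
            a, b = rng.sample(range(n), 2)
            q = p[:]
            q[a], q[b] = q[b], q[a]       # 2-exchange
            delta_e = qap_cost(F, D, q) - qap_cost(F, D, p)
            if delta_e < 0 or rng.random() < math.exp(-delta_e / t):
                p = q                     # accept the pair exchange
                c = qap_cost(F, D, p)
                if c < best_cost:
                    best, best_cost = p[:], c
    return best, best_cost

F = [[0, 3], [2, 0]]
D = [[0, 1], [5, 0]]
best, cost = sa_qap(F, D, schedule=[10.0, 1.0, 0.1])
```

Recomputing the full cost after each exchange keeps the sketch short; practical implementations compute $ \Delta E $ incrementally from the two swapped elements.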

Simulated annealing has been used to solve a wide variety of combinatorial optimization problems, such as graph partitioning and graph colouring [a3], [a4], VLSI design [a7], quadratic assignment problems [a2], image processing [a5] and many others. In addition, implementations of simulated annealing in parallel environments have recently appeared, with applications in cell placement problems, traveling-salesman problems, quadratic assignment problems, and others [a9]. General references on simulated annealing can be found in [a1] and in [a11].

References

[a1] E.H.L. Aarts, J.H.M. Korst, "Simulated annealing and Boltzmann machines" , Wiley (1989)
[a2] R.E. Burkard, F. Rendl, "A thermodynamically motivated simulation procedure for combinatorial optimization problems" European J. Operational Research , 17 (1984) pp. 169–174
[a3] D.S. Johnson, C.R. Aragon, L.A. McGeoch, C. Schevon, "Optimization by simulated annealing: an experimental evaluation. Part I: graph partitioning" Operations Research , 37 (1989) pp. 865–892
[a4] D.S. Johnson, C.R. Aragon, L.A. McGeoch, C. Schevon, "Optimization by simulated annealing: an experimental evaluation. Part II: graph coloring and number partitioning" Operations Research , 39 (1991) pp. 378–395
[a5] S. Geman, D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images" IEEE Trans. Pattern Analysis and Machine Intelligence , 6 (1984) pp. 721–741
[a6] R. Horst, P.M. Pardalos, "Handbook of global optimization" , Kluwer Acad. Publ. (1995)
[a7] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, "Optimization by simulated annealing" Science , 220 (1983) pp. 671–680
[a8] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, E. Teller, "Equation of state calculations by fast computing machines" J. Chem. Phys. , 21 (1953) pp. 1087–1092
[a9] P.M. Pardalos, L.S. Pitsoulis, T.D. Mavridou, M.G.C. Resende, "Parallel search for combinatorial optimization: genetic algorithms, simulated annealing, tabu search and GRASP" , Lecture Notes in Computer Science , 980 , Springer (1995) pp. 317–331
[a10] "Quadratic assignment and related problems" P.M. Pardalos (ed.) H. Wolkowicz (ed.) , Discrete Math. and Theor. Comput. Sci. , 16 , Amer. Math. Soc. (1994)
[a11] P.J.M. van Laarhoven, E.H.L. Aarts, "Simulated annealing, theory and practice" , Kluwer Acad. Publ. (1987)
[a12] M.R. Wilhelm, T.L. Ward, "Solving quadratic assignment problems by simulated annealing" IIE Trans. , 19 (1987) pp. 107–119
How to Cite This Entry:
Simulated annealing. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Simulated_annealing&oldid=17162
This article was adapted from an original article by P.M. Pardalos, T.D. Mavridou (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article