==Convergence and Prospective strategies==

[[Image:SA_perform.jpg|thumb|Figure 2: An example of convergence for a Simulated Annealing Run]]
Convergence, in the context of simulated annealing, refers to how (if at all) a particular algorithm approaches the correct solution (or, for very difficult problems, a near-optimal solution).  There are many proofs of convergence for certain types of simulated annealing algorithms ([[#References|[3]]] and [[#References|[7]]]), each with its own twist on cooling and other aspects of the algorithm's implementation.  To understand the subject of convergence fully in a mathematical sense, one must look into the properties of Markov chains and their connections to Monte Carlo-like algorithms; that topic is reserved for another wiki page.  However, citing the article by [[#References|[2]]], the ParSA library suggests that convergence speed is governed by the following equation:
{|width="50%"
|align="center"|P(X<sub>n</sub> &notin; Cost<sub>min</sub>) &asymp; ''K''/''n''<sup>&alpha;</sup>
|align="center" width="80"|(1)
|}

Here X<sub>n</sub> denotes the state reached after a Markov chain of length ''n'', Cost<sub>min</sub> the set of minimum-cost solutions, and ''K'' and &alpha; problem-dependent constants.
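Since P(X<sub>n</sub> &notin; Cost<sub>min</sub>) is simply the probability that a chain of length ''n'' has not yet reached a minimum-cost state, it can be estimated empirically by repetition.  A minimal sketch in Python (''run_chain'', ''cost_min'', and the tolerance are hypothetical stand-ins for an actual annealing routine and a known optimal cost, not part of ParSA):

<syntaxhighlight lang="python">
def estimate_miss_probability(run_chain, cost_min, n, trials=200, tol=1e-9):
    """Empirically estimate P(X_n not in Cost_min) as the fraction of
    independent chains of length n whose final cost stays above the
    known minimum cost.  run_chain is a hypothetical routine that runs
    one chain of length n and returns its final cost."""
    misses = sum(1 for _ in range(trials) if run_chain(n) > cost_min + tol)
    return misses / trials
</syntaxhighlight>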
The primary objective for potential cooling strategies lies in determining the &alpha; and ''K'' factors given in the ParSA section on improving solution quality in less time.  By tracking both the chain length ''n'' and the speed of convergence P(X<sub>n</sub> &notin; Cost<sub>min</sub>), one can produce a linear plot relating all four quantities through the following relationship:

[[Image:SA_perform_2.jpg|thumb|Figure 3: Another example of convergence for a Simulated Annealing Run]]
{|width="50%"
|align="center"|log P(X<sub>n</sub> &notin; Cost<sub>min</sub>) = log ''K'' &minus; &alpha; log ''n''
|align="center" width="80"|(2)
|}
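Equation (2) says that plotting log P(X<sub>n</sub> &notin; Cost<sub>min</sub>) against log ''n'' yields a straight line with slope &minus;&alpha; and intercept log ''K'', so both factors can be recovered by ordinary least-squares regression.  A short Python sketch (the measurement values below are hypothetical placeholders; this routine is not part of the ParSA library):

<syntaxhighlight lang="python">
import numpy as np

def fit_convergence(chain_lengths, miss_probabilities):
    """Fit P(X_n not in Cost_min) ~ K / n**alpha by linear regression
    on log-log data: log P = log K - alpha * log n."""
    log_n = np.log(np.asarray(chain_lengths, dtype=float))
    log_p = np.log(np.asarray(miss_probabilities, dtype=float))
    slope, intercept = np.polyfit(log_n, log_p, 1)
    return -slope, np.exp(intercept)  # alpha, K

# Hypothetical measurements: fraction of runs that missed the optimum
# after chains of length n (e.g. from estimate_miss_probability above).
n_values = [100, 200, 400, 800, 1600]
p_values = [0.62, 0.41, 0.27, 0.18, 0.12]
alpha, K = fit_convergence(n_values, p_values)
print(f"alpha = {alpha:.3f}, K = {K:.3f}")
</syntaxhighlight>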
      
Once a sufficient number of runs have been completed, the &alpha; and ''K'' factors will be known and can thereby be exploited to find the most effective chain length for running multiple independent Markov chains.  Given the potential size of the search space, one can surmise that the &alpha; factor will most likely be closer to one than to zero, because with the multiple-run strategy, faster cooling makes a particular chain settle very quickly to a minimum (which may only be a local minimum).  After one chain settles, the cluster can move on to a new chain, which settles to another minimum.  If this is repeated, the chance of finding the global minimum among the resulting solutions is much greater than if only one chain were used [[#References|[4]]].
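The multiple-run strategy is straightforward to express in code.  The sketch below (Python; the annealing routine, its parameters, and the ''cost'' and ''neighbor'' functions are hypothetical stand-ins, not ParSA's API) runs several short, independent chains with fast geometric cooling and keeps the best solution found:

<syntaxhighlight lang="python">
import math
import random

def anneal_once(cost, neighbor, x0, chain_length, t0=1.0, cooling=0.95):
    """One independent annealing chain with fast geometric cooling
    (hypothetical helper; a fast schedule settles quickly, possibly
    in a local minimum)."""
    x, fx = x0, cost(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(chain_length):
        y = neighbor(x)
        fy = cost(y)
        # Metropolis rule: always accept improvements; accept uphill
        # moves with probability exp(-(fy - fx) / t).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

def multi_start(cost, neighbor, starts, chain_length):
    """Run one chain per starting point and keep the overall best;
    more independent chains raise the odds that at least one of them
    reaches the global minimum."""
    results = [anneal_once(cost, neighbor, x0, chain_length) for x0 in starts]
    return min(results, key=lambda r: r[1])
</syntaxhighlight>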
[[Image:SA_perform_3.jpg|thumb|Figure 4: Another example of convergence for a Simulated Annealing Run]]

The figures to the right show convergence graphs for different optimization algorithms.  In Figure 2 [[#References|[6]]], the curves marked with triangles represent simulated annealing algorithms similar to ParSA: the curve whose triangles point downward (base up) represents adaptive simulated annealing, while the curve whose triangles point upward (base down) represents non-adaptive simulated annealing.  In Figures 3 and 4 [[#References|[5]]], the dotted line labeled SA represents simulated annealing.
      
==References==