==Convergence and Prospective strategies==

Convergence, when spoken of in the context of simulated annealing, refers to how (if at all) a particular algorithm approaches the correct solution (or, for very difficult problems, a nearly correct solution).  Many proofs of convergence have been given for certain types of simulated annealing algorithms ([[#References|[3]]] and [[#References|[7]]]), each with its own twist on cooling and other aspects of the algorithm's implementation.  To understand the subject of convergence fully (in a mathematical sense), one must look into the properties of Markov chains and their connections to Monte Carlo-like algorithms; that topic is reserved for another wiki page.  However, citing the article by [[#References|[2]]], the ParSA library suggests that convergence speed is governed by the following equation:
{align="center
+
 
 
{|width="50%"
 
{|width="50%"
 
|align="right"|
 
|align="right"|
Line 92: Line 92:  
|align="center" width="80"|(1)
 
|align="center" width="80"|(1)
 
|}
 
|}
|}
  −
      
where ''P'' is the convergence speed, ''n'' is the subchain length, and "''K'' and &alpha; are constants specific to the problem" [[#References|[4]]].
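To make equation (1) concrete, the probability P(X<sub>n</sub> &notin; Cost<sub>min</sub>) can be estimated empirically by running many independent subchains of a given length and counting how many fail to end in the global minimum.  The following Python sketch does this for a deliberately tiny toy problem; the objective f(x) = x<sup>2</sup>, the geometric cooling schedule, and every parameter value are illustrative assumptions, not part of ParSA.

<syntaxhighlight lang="python">
import math
import random

def anneal(n, t0=10.0, cooling=0.95):
    """Run one simulated annealing subchain of length n on the toy
    objective f(x) = x**2 over integer states in [-50, 50].
    All parameters here are illustrative assumptions."""
    f = lambda x: x * x
    x = random.randint(-50, 50)
    t = t0
    for _ in range(n):
        x_new = max(-50, min(50, x + random.choice((-1, 1))))
        delta = f(x_new) - f(x)
        # Metropolis acceptance: always take improvements, sometimes
        # accept an uphill move with probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = x_new
        t *= cooling  # geometric cooling, one schedule among many
    return x

def estimate_p(n, runs=500):
    """Estimate P(X_n not in Cost_min): the fraction of independent
    subchains of length n whose final state is not the minimum x = 0."""
    misses = sum(1 for _ in range(runs) if anneal(n) != 0)
    return misses / runs

if __name__ == "__main__":
    for n in (50, 100, 200, 400):
        print(n, estimate_p(n))
</syntaxhighlight>

Plotting these estimates against ''n'' on log-log axes is what motivates the fitting procedure described next.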
The primary objective for potential cooling strategies lies in determining the &alpha; and ''K'' factors given in the ParSA section on improving solution quality in less time.  By tracking both the chain length ''n'' and the speed of convergence P(X<sub>n</sub> &notin; Cost<sub>min</sub>), one can produce a linear plot relating all four quantities through the following relationship:
{|align="center"
+
 
 
{|width="50%"
 
{|width="50%"
 
|align="right"|
 
|align="right"|
Line 105: Line 103:  
|align="center" width="80"|(2)
 
|align="center" width="80"|(2)
 
|}
 
|}
|}
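Because equation (2) is linear in log-log coordinates (slope &minus;&alpha;, intercept log ''K''), the two constants can be recovered from measured (''n'', ''P'') pairs with an ordinary least-squares fit.  A minimal sketch, using hypothetical measurements of the kind produced by the estimator above:

<syntaxhighlight lang="python">
import math

# Hypothetical (n, P) measurements; in practice these would come from
# repeated annealing runs such as the estimate_p sketch above.
samples = [(50, 0.62), (100, 0.41), (200, 0.26), (400, 0.17)]

# Equation (2): log P = log K - alpha * log n, a straight line in
# log-log coordinates. Fit slope and intercept by least squares.
xs = [math.log(n) for n, _ in samples]
ys = [math.log(p) for _, p in samples]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

alpha = -slope           # the slope of the fitted line is -alpha
K = math.exp(intercept)  # the intercept is log K
print(f"alpha = {alpha:.3f}, K = {K:.3f}")
</syntaxhighlight>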
Once a sufficient number of runs has been completed, the &alpha; and ''K'' factors will be known and can thereby be exploited to find the most effective chain length at which to run multiple independent Markov chains.  Given the potential size of the search space, one can surmise that the &alpha; factor will most likely be closer to one than to zero, because with the multiple-run strategy faster cooling will result in a particular chain settling very quickly into a minimum (which may be a local minimum).  After settling, the cluster can then move on to a new chain, which settles into another minimum.  If this is repeated, the chance of finding the global minimum among one of the solutions is much greater than if only one chain were used [[#References|[4]]].
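The multiple-run strategy described above amounts to launching several independent chains and keeping the best final state, so that a chain trapped in a local minimum is overruled by a luckier one.  A self-contained sketch, again on an assumed toy objective, with a chain count and length that would in practice be tuned using the fitted &alpha; and ''K'':

<syntaxhighlight lang="python">
import math
import random

def anneal(n, t0=10.0, cooling=0.95):
    """One subchain of length n on the illustrative toy objective
    f(x) = x**2 over integer states in [-50, 50]."""
    x, t = random.randint(-50, 50), t0
    for _ in range(n):
        x_new = max(-50, min(50, x + random.choice((-1, 1))))
        delta = x_new * x_new - x * x
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = x_new
        t *= cooling
    return x

def best_of_chains(chains=8, n=200):
    """Run several independent chains and return the final state with
    the lowest cost. The chain count and length are assumed values."""
    return min((anneal(n) for _ in range(chains)), key=lambda x: x * x)

print(best_of_chains())
</syntaxhighlight>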