== ParSA and Runlength ==
Once we have generated two surfaces and their interferogram, we can let the algorithm work on them. The algorithm meanders through solution space, assigning a <i>Costfunction</i> to each solution it encounters: a poor solution receives a high Costfunction, a good solution a low one. After assigning a Costfunction to a solution, the algorithm either accepts that solution and moves to it or rejects it and tries another. Whether a solution with a given Costfunction is accepted depends on the <i>temperature</i> of the algorithm at that point in time: the temperature sets the probability that the algorithm will accept a poor solution. When the algorithm is "hot" (high temperature), it is more likely to accept a poor solution; when it is "cool" (low temperature), it is less likely to do so. As it runs, the algorithm completes a cycle of first "heating up," i.e., increasing its temperature so that it accepts more and more poor solutions, and then "cooling down," i.e., decreasing its temperature so that it accepts fewer and fewer. Heating up first ensures that each cycle begins from an effectively random position, so that the solution ultimately accepted as the best does not depend on the starting position. Because the MIR (Multiple Independent Runs) program is used, this cycle is completed many times during a single execution of the algorithm; each cycle is called a run. Each run heats up and then cools down until it reaches a specifiable parameter called the <i>endtemperature</i>. Cooling proceeds by a factor of <math>\alpha</math>, so that at each step <math>n</math> the temperature <math>T_{n}</math> equals <math>\alpha T_{n-1}</math>.
The first run is 100,000 steps long, and each subsequent run is longer by a factor called <i>Beta_Runtime</i>. This continues until the final run, which is the longest run shorter than the product of <i>Runfactor</i> and 100,000. Once the longest run is finished, the algorithm has completed one <i>Sample</i>, and the next Sample is begun, until the algorithm has completed the specified number of samples.
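The run-schedule and acceptance logic described above can be sketched as follows. This is a minimal illustration, not the actual ParSA/MIR code: the Metropolis-style acceptance rule, the cost and neighbor functions, and all parameter values are assumptions, and the initial heat-up phase of each run is omitted.

```python
import math
import random

def run_schedule(first_len=100_000, beta_runtime=1.5, runfactor=10):
    """Run lengths grow by Beta_Runtime; the final run is the longest
    run shorter than Runfactor * first_len."""
    lengths = [first_len]
    while lengths[-1] * beta_runtime < runfactor * first_len:
        lengths.append(int(lengths[-1] * beta_runtime))
    return lengths

def anneal_one_run(cost, neighbor, x0, t_start, alpha, endtemperature):
    """One cooling cycle: geometric cooling T_n = alpha * T_{n-1},
    accepting a worse solution with a temperature-dependent probability
    (a Metropolis rule is assumed here for illustration)."""
    x, t = x0, t_start
    best = x
    while t > endtemperature:
        y = neighbor(x)
        dc = cost(y) - cost(x)              # change in Costfunction
        if dc < 0 or random.random() < math.exp(-dc / t):
            x = y                           # accept: move to the new solution
            if cost(x) < cost(best):
                best = x
        t *= alpha                          # cool down by factor alpha
    return best
```

With <i>Beta_Runtime</i> = 2 and <i>Runfactor</i> = 10, for example, `run_schedule` yields runs of 100,000, 200,000, 400,000, and 800,000 steps, since the next run (1,600,000 steps) would exceed the cap.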

== Analysis ==
For a given problem, there is an optimal Run Length that allows the algorithm to solve the problem most accurately in the least time. This optimal Run Length can be found by analyzing the statistics of many runs of many different lengths. The analysis goes as follows:

<math>P=(\kappa/n)^\alpha</math>
is the probability of failure, where <math>n</math> is the Run Length and <math>\kappa</math> and <math>\alpha</math> are constants determined by the experiment in question, and

<math>p=(n_{f}+1)/(N+2)</math>

is the observed probability of non-convergence, i.e., the estimated probability of failure, where <math>n_{f}</math> is the number of runs that failed to converge upon a solution and <math>N</math> is the total number of runs. Therefore, if we set these two expressions equal and examine the log-log plot of <math>p</math> against <math>n</math>,

<math>\log(p)=\log((\kappa/n)^\alpha)=\alpha\log(\kappa)-\alpha\log(n)</math>,

the slope of the plot will be equal to <math>-\alpha</math> and the y-intercept to <math>\alpha\log(\kappa)</math>, from which <math>\kappa</math> can be recovered. Then we can plug these values back into <math>P=(\kappa/n)^\alpha</math> in order to find the optimal <math>n</math>.

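The fitting procedure above can be sketched as follows. The helper names and the plain least-squares fit are illustrative assumptions, not the thesis code: any fitting routine that extracts the slope and intercept of the log-log plot would serve.

```python
import math

def laplace_failure_rate(n_failed, n_total):
    """Estimated failure probability p = (n_f + 1) / (N + 2)."""
    return (n_failed + 1) / (n_total + 2)

def fit_power_law(run_lengths, failure_probs):
    """Least-squares line through (log n, log p).
    Since log(p) = alpha*log(kappa) - alpha*log(n),
    the slope is -alpha and the intercept is alpha*log(kappa)."""
    xs = [math.log(n) for n in run_lengths]
    ys = [math.log(p) for p in failure_probs]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    alpha = -slope                       # recover alpha from the slope
    kappa = math.exp(intercept / alpha)  # recover kappa from the intercept
    return alpha, kappa
```

With estimates of <math>\alpha</math> and <math>\kappa</math> in hand, <math>P=(\kappa/n)^\alpha</math> can then be evaluated for candidate Run Lengths <math>n</math>.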
== References ==
<div class="references-small">
1. Matthew Demas, [http://zeus.phys.uconn.edu/halld/diamonds/MattDemasThesis-5-2008.pdf "Analysis of Synthetic Diamond Wafer Interferograms Using a Parallel Simulated Annealing Algorithm"], p. 40.