The regularization method was created for the specific purpose of automatically demodulating "noisy" fringe patterns without any further unwrapping of the phase.  Regularization algorithms evaluate the estimated phase field against the actual pattern with a cost function and then impose a smoothness criterion.  This process is repeated for each pixel of the phase field until the cost function reaches a global minimum [[#References|[5]]],[[#References|[6]]].
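As a minimal sketch of the idea (not any of the published algorithms — the cosine fringe model, the gradient-descent update, and the weight `lam` are illustrative assumptions), the cost of an estimated phase field can be written as a fidelity term plus a smoothness penalty and lowered iteratively:

```python
import numpy as np

def regularized_cost(phase, fringe, lam=1.0):
    """Fidelity: how well cos(phase) reproduces the observed fringe;
    smoothness: squared phase differences between neighboring pixels."""
    fidelity = np.sum((fringe - np.cos(phase)) ** 2)
    smooth = np.sum(np.diff(phase, axis=0) ** 2) + np.sum(np.diff(phase, axis=1) ** 2)
    return fidelity + lam * smooth

def demodulate(fringe, steps=300, lr=0.05, lam=1.0, seed=0):
    """Crude whole-field gradient descent on the cost; the published
    algorithms instead sweep pixel by pixel toward a global minimum."""
    phase = 0.1 * np.random.default_rng(seed).standard_normal(fringe.shape)
    for _ in range(steps):
        grad = 2.0 * (fringe - np.cos(phase)) * np.sin(phase)  # d(fidelity)/d(phase)
        dx = np.diff(phase, axis=1)
        dy = np.diff(phase, axis=0)
        # d(smoothness)/d(phase): each difference pulls on both of its endpoints
        grad[:, 1:] += 2.0 * lam * dx
        grad[:, :-1] -= 2.0 * lam * dx
        grad[1:, :] += 2.0 * lam * dy
        grad[:-1, :] -= 2.0 * lam * dy
        phase -= lr * grad
    return phase
```

The smoothness weight `lam` trades noise rejection against fidelity: larger values favor smooth phase fields even where the fringe data disagrees.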
Variations of the regularization method involve demodulating certain points in the low-frequency region of the fringe pattern.  These points can then be used to seed the estimated phase field.  The algorithm then proceeds somewhat analogously to crystal growth.
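The seed-and-grow variation can be sketched as follows (an illustrative sketch only: the interface, the breadth-first order, and the assumption of a clean, normalized cosine fringe are not from the cited work). Starting from one demodulated seed pixel, each unvisited neighbor takes whichever arccos branch of its intensity best continues the already-grown phase, much like a crystal front advancing:

```python
import numpy as np
from collections import deque

def grow_phase(fringe, seed, seed_phase):
    """Grow the phase field outward from one demodulated seed pixel,
    choosing at each new pixel the arccos branch of its intensity
    closest to an already-demodulated neighbor (phase continuity)."""
    phase = np.full(fringe.shape, np.nan)
    phase[seed] = seed_phase
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < fringe.shape[0] and 0 <= nj < fringe.shape[1]
                    and np.isnan(phase[ni, nj])):
                base = np.arccos(np.clip(fringe[ni, nj], -1.0, 1.0))
                k = round(phase[i, j] / (2 * np.pi))
                # both signs of arccos, shifted by nearby multiples of 2*pi
                candidates = [s * base + 2 * np.pi * m
                              for s in (1.0, -1.0)
                              for m in (k - 1, k, k + 1)]
                phase[ni, nj] = min(candidates,
                                    key=lambda c: abs(c - phase[i, j]))
                queue.append((ni, nj))
    return phase
```

Note that a single cosine fringe is sign-ambiguous near its extrema, so this naive continuity rule can pick the mirrored branch where the phase crosses a multiple of pi; real implementations need extra handling there.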
    
A drawback of this method is "that a low-pass filtering and a binary threshold operation are required."
 
== Artificial Neural Network Method ==
 
 
[[Image:annschem.jpg|thumb|A schematic of the interactions between artificial neurons]]
 
The human brain is composed of billions of neurons, each of which has the simple job of receiving, integrating and transmitting nerve impulses.  From these simple functions neurons give rise to the functionality of the brain.  The same principle was postulated in the 1940s by McCulloch and Pitts, who theorized that a similar approach could be applied to computing [[#References|[7]]].
Using a network of simple computerized "neurons", one might be able to mimic the brain and thereby create complex behavior from simple components.  Unlike most computer-based programs, artificial neural network software requires a learning phase to adapt itself to the problem it will undertake.  The neurons are arranged into three main categories: input, hidden and output neurons.  After its creation, the network of neurons is trained under one of three main training regimens: supervised, reinforced or unsupervised learning.  Supervised learning involves constant feedback being given to the neural network during the training sequence.  Reinforced learning uses simple "good" and "bad" remarks to the program after each "run."  A neural network with unsupervised learning receives no feedback from the "trainer."  Given time constraints and the desire for a reasonable output, supervised or reinforced learning schedules are usually adopted; thus, for a given input, "weights" are assigned to guide the neurons' "reaction" to a given stimulus [[#References|[7]]].
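To make the supervised regimen concrete, here is a minimal sketch (the network size, learning rate and XOR task are illustrative assumptions, not taken from the cited work): a two-layer network receives the trainer's feedback as an error signal after every pass and adjusts its weights accordingly:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_supervised(inputs, targets, hidden=4, epochs=3000, lr=0.5, seed=0):
    """Supervised learning: after every pass the trainer's feedback
    (target minus output) is backpropagated to adjust the weights."""
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((inputs.shape[1], hidden))  # input -> hidden
    w2 = rng.standard_normal((hidden, 1))                # hidden -> output
    for _ in range(epochs):
        h = sigmoid(inputs @ w1)
        out = sigmoid(h @ w2)
        err = targets - out                     # constant feedback from the "trainer"
        d_out = err * out * (1.0 - out)         # error signal at the output layer
        d_hid = (d_out @ w2.T) * h * (1.0 - h)  # error signal at the hidden layer
        w2 += lr * h.T @ d_out
        w1 += lr * inputs.T @ d_hid
    return w1, w2

# XOR: a classic task a single neuron cannot learn but a small network can attempt
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
w1, w2 = train_supervised(X, y)
```

A reinforced regimen would replace the per-output error `err` with a single scalar "good"/"bad" score per run; an unsupervised one would drop the feedback term entirely.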
    
This method can either involve a system of many neurons linked together to analyze an entire image, or a small number of neurons that analyze the image section by section.  The former requires both a long learning period and a large number of neurons, making it computationally expensive and time-intensive.  The latter, given a relatively slow processor, a section size of six pixels and 36 neurons in total, can analyze an image in 0.5 s.  This was demonstrated by Tipper et al. and was shown to be more effective than the Fourier transform method and Schafer's algorithm [[#References|[7]]].
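The section-by-section variant amounts to sliding a small window over the image and applying the same fixed network at each position. A sketch of the data flow (the six-pixel section follows the text; the stand-in "network" below is a placeholder, not Tipper et al.'s trained 36-neuron net):

```python
import numpy as np

def analyze_by_sections(image_row, net, section=6):
    """Slide a six-pixel window along one row of the fringe image and
    let a small fixed network emit one estimate per window position."""
    return np.array([net(image_row[start:start + section])
                     for start in range(len(image_row) - section + 1)])

# A placeholder "network" (the window mean) just to show the data flow;
# in practice `net` would be the trained network applied to each section.
estimates = analyze_by_sections(np.linspace(0.0, 1.0, 12), lambda w: w.mean())
```

Because the same small network is reused at every position, only the window, not the whole image, determines its size, which is what keeps this variant cheap.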
