The purpose of this page is to describe the work being done in the construction of a tabletop Michelson Interferometer.  The purpose of this device within the scope of the project is twofold.  First, the interferometer can be used to analyze the vibrational characteristics of a diamond wafer suspended from a wire frame.  Second, the interferometer can be used to study the surface profile of a diamond wafer, allowing us to see the effects that different cutting and mounting techniques have on the final product.  The main work done this semester has been on the vibrational aspects of the diamond mounting.
    
== Construction ==  
 
== Camera Calibration ==
 
Analysis of videos taken of the vibration of the diamond wafer can lead to information about both the frequency and the amplitude of vibration.  To perform simultaneous measurements of both parameters, a precise camera calibration was necessary.
    
=== Calibration Set-Up ===
 
The following is an image of the device used for the camera calibration:
    
IMAGE OF CALIBRATION DEVICE
 
This calibration device was used to determine the change in pixel location of the center of the lens flares based on angular displacement.  To do this, the camera was mounted horizontally on a 1.40 m long bar.  The camera end was free to rotate about a pivot located at the opposite end of the rod, above which a mirror was mounted.  A linear micro-adjustment translation stage was placed below the camera end.
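With this geometry, the angular displacement of the camera follows from the translation-stage reading and the 1.40 m lever arm through the small-angle approximation.  The snippet below is only an illustration; the stage displacement value is a hypothetical example, not a measured one.

     % Convert a translation-stage displacement into an angular displacement
     % of the camera about the pivot (small-angle approximation).
     L  = 1.40;               % lever arm from pivot to camera end, in meters
     dx = 0.5e-3;             % stage displacement in meters (hypothetical example)
     theta_mrad = 1000*dx/L;  % angular displacement, ~0.357 mrad for this example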
    
=== Image Analysis ===   
 
To find the location of the center of the flare on each image, several lines of code were implemented:
    
MATLAB CODE
 
The first line imports the image, in this case IMAG0001.jpg, converts it to a matrix, and separates the green channel.  Next, this matrix is imaged in false color in terms of intensity from blue (low) to red (high).
 
INSERT FALSE COLOR IMAGE ON RIGHT OF THIS PARAGRAPH
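A minimal sketch of these first steps is shown below, assuming the file name IMAG0001.jpg from the description above; the variable name a and the use of imagesc for the false-color display are illustrative choices, not necessarily the original script.

     % Import the photo, separate the green channel, and display it in false
     % color (intensity mapped from blue = low to red = high).
     im = imread('IMAG0001.jpg');   % read the image into an RGB matrix
     a  = double(im(:,:,2));        % keep only the green channel
     imagesc(a); colormap(jet); colorbar;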
 
The last line is the actual search function.  The search works by finding the location with the smallest difference between the intensity, a(m,n), and the fit function we apply to the image.  We use a Gaussian distribution with user-defined parameters as our fit.  The code for this function is:

       function out=gauss2(x,y,p)
         [xx,yy]=meshgrid(x,y);   % build 2-D coordinate grids (assumed line; needed so xx and yy are defined below)
         out=p(1)*exp(-0.5*((xx-p(2))/p(3)).^2-0.5*((yy-p(4))/p(5)).^2) + p(6);

The parameters p(1) to p(6) are, in numerical order, (1) the amplitude of the function, (2) the x location, (3) sigma x, (4) the y location, (5) sigma y, and (6) an offset applied to eliminate background noise.  The function takes user inputs as an initial guess, then loops the search process to find more accurate values for the parameters.  Arbitrary values are chosen for p(1), p(3), p(5) and p(6) to begin, and are then readjusted based on the first output parameters.  Typically, the closer these fit parameters are to the true parameters, the better the function is at defining p(2) and p(4).
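As an illustration of how the search described above might be implemented, the sketch below uses MATLAB's fminsearch to minimize the squared difference between the image intensity and gauss2; fminsearch, the initial-guess values, and the repeat count are assumptions, not the author's confirmed code.

     % Illustrative search loop (assumed implementation): refine the six
     % gauss2 parameters by minimizing the squared residual between the
     % green-channel intensity a(m,n) and the Gaussian model.
     im = imread('IMAG0001.jpg');
     a  = double(im(:,:,2));                 % green channel, as above
     x  = 1:size(a,2);   y = 1:size(a,1);    % pixel coordinate axes
     p  = [200, 640, 20, 360, 20, 10];       % initial guess: amp, x0, sx, y0, sy, offset (hypothetical)
     cost = @(q) sum(sum((a - gauss2(x,y,q)).^2));   % sum of squared residuals
     for k = 1:3                             % loop the search to tighten the fit
         p = fminsearch(cost, p);
     end
     center = [p(2) p(4)];                   % fitted flare center, in pixels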
By knowing the change in pixel location between photos, as well as the angular displacement, a value for pixel change per mrad in the x direction can be determined.  This process is then repeated for the y direction.  Once these two parameters are known, the video analysis can be used to find both frequency and amplitude.
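For illustration only (the pixel coordinates and angular step below are placeholder numbers, not measured values), the calibration factor follows directly from the shift in the fitted center between two photos and the known angular displacement:

     % Illustrative calibration-factor calculation with placeholder values.
     x1 = 412.3;   x2 = 430.8;      % fitted flare center (x) in two photos, in pixels
     dtheta_mrad = 0.357;           % known angular displacement between the photos, in mrad
     pix_per_mrad_x = (x2 - x1)/dtheta_mrad;   % ~51.8 pixels per mrad in x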
 
== Video Analysis ==
 
The same process used for the calibration images can be used to analyze the motion of the center of the lens flare over time.  Because of the number of images involved in a video at 1200 fps, a batching process was developed to automate the process.  First, a Linux command-line program, ffmpeg, is used to separate the images from the video.  Next, the automation script is applied to the folder containing the video stills.  This script uses the same code as the image analysis, but loops the search process through a large group of images.
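A minimal sketch of such a batch is given below, assuming the frames have already been extracted with ffmpeg (for example, ffmpeg -i video.mp4 frames/frame%05d.png) and reusing the gauss2 fit from above; the folder layout, file names, and initial guess are hypothetical, not the author's script.

     % Illustrative batch loop (assumed implementation): apply the same gauss2
     % fit to every still extracted from the high-speed video.
     files  = dir('frames/*.png');            % stills produced by ffmpeg
     center = zeros(numel(files), 2);         % fitted flare center per frame, in pixels
     p      = [200, 640, 20, 360, 20, 10];    % initial guess (hypothetical values)
     for k = 1:numel(files)
         a = double(imread(fullfile('frames', files(k).name)));
         a = a(:,:,2);                        % green channel
         x = 1:size(a,2);   y = 1:size(a,1);
         cost = @(q) sum(sum((a - gauss2(x,y,q)).^2));
         p = fminsearch(cost, p);             % each fit seeds the next frame
         center(k,:) = [p(2) p(4)];
     end
     % center, the 1200 fps frame rate, and the pixels-per-mrad calibration
     % together give the frequency and amplitude of the wafer's motion.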
    
   
 
   