There is a GEANT-based Monte Carlo simulation program for the Radphi experiment called Gradphi. Gradphi was developed by UConn student Andriy Kurylov, and later adapted by Tom Bogue to incorporate the Jetset barrel upgrade in 1999. It is written in a combination of Fortran 77 and ANSI C, and has been tested primarily on Sparc Solaris and Intel Linux platforms. The Gradphi source code is stored in the Radphi CVS repository and is available for anyone to download and run. The UConn group has primary responsibility for the maintenance of this code. A processor farm is also available at UConn to generate Monte Carlo data for Radphi.
To be a useful tool, a Monte Carlo program requires ongoing development. Inconsistencies between real and simulated data are expected to show up, sometimes due to bugs, other times due to approximations, misalignment, poorly understood calibrations, etc. This means that the simulation must evolve, and the data sets used for studies must be kept up to date. This logbook has been set up as a record of that development. It is organized as a running dialog about the behaviour of the program. Plots are included as links to keep the page compact. Useful links to other documentation are listed at the bottom of the page. A linked index to the major issues covered in the logbook is maintained below.
While looking at /export/data1/radphi/eta3gamma(S).itape, in particular at the shower positions, I found a curiosity. Please see the following eps files. The generator plot is a projection from generator information, requiring that the reconstruction find 3 makehit photons. The Monte Carlo plot is the shower position as given in the photon bank for the same 3-photon events. I don't see the same thing in the data. Unfortunately, the work disk was offline this morning, so I couldn't fetch the same plot from the data there. I updated and rebuilt all libraries on grendl and jlabs1, so everything should be the same except for the operating systems. Any ideas?
I tried to reproduce your results, and got something that looks more reasonable than what you saw. To make these plots I started from grendl:/export/data1/radphi/eta3gammaS.itape and generated the standard Monte Carlo ntuple with the program cvs/Examples/mctuplet. I stored the output file on grendl:/radphi/eta3gammaS.hbook and opened that file in PAW. To look at the LGD impact point for the generated photons I used the command
PAW > n/pl 1.momF(3,1:3)*110/momF(4,1:3)%momF(2,1:3)*110/momF(4,1:3)
Here is the resulting plot. Then to look at the reconstructed values from the clusterizer, I substituted momF above with pvect and repeated the command. Here is the resulting plot. You can see in my plots that the Monte Carlo is generating signals in blocks that were not instrumented in the real detector. I will fix this in make_lgd_hits by adding a check that the gain constant cc[row][column] for a block is non-zero before including the hit in the LGD hits table. But the problem you are seeing seems to be something else. Can you tell me how you generated your plots?
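For the record, here is a minimal sketch of the gain-constant check described above. Only the cc[row][column] test reflects the intended change; the function name, hit structure, array dimensions, and table interface are hypothetical stand-ins for the real make_lgd_hits code.

```c
/* Sketch of the proposed fix: drop hits in LGD blocks that were not
 * instrumented in the real detector, identified by a zero gain constant.
 * All names and dimensions here are illustrative, not the actual Gradphi code.
 */
#define NROWS 28   /* illustrative array dimensions */
#define NCOLS 28

extern float cc[NROWS][NCOLS];   /* calibration gain constants, zero if absent */

typedef struct {
   int row, col;
   float pulse;    /* simulated pulse height for this block */
} lgd_hit_t;

/* append a hit to the LGD hits table, unless the block is uninstrumented */
int add_lgd_hit(lgd_hit_t *table, int *nhits, int row, int col, float pulse)
{
   if (cc[row][col] == 0) {
      return 0;                  /* no gain constant: block not instrumented */
   }
   table[*nhits].row = row;
   table[*nhits].col = col;
   table[*nhits].pulse = pulse;
   return ++(*nhits);
}
```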
It turns out I was double-filling the same histogram.
I was investigating the claim by Mihajlo that the shower simulation in Gradphi seems to produce showers that are laterally smaller and more sharply peaked than those seen in the real LGD. To study this effect I enabled the FOLLOW_CERENKOV compiler switch in gustep.F, which tells the simulation to generate individual Cerenkov photons and record in an ntuple those that produce photoelectrons on the photocathode. At the same time the simulation continues to output the normal hits in the itape file, and when I analyzed this output I found that the showers in this dataset had both larger pulse heights and larger RMS sizes than those generated without FOLLOW_CERENKOV for the same input events. To find out more, I simply recorded the sum of all path lengths traversed by e+/e- particles in the showers and ran several dozen simulations of a single 2 GeV shower at 5°. The results for the total path length are shown in the table below.
compiler option | average total charged-particle path length
---|---
FOLLOW_CERENKOV=true | 310 cm
FOLLOW_CERENKOV=false | 272 cm
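The quantity tabulated above is just the summed length of all e+/e- tracking steps in the shower. A minimal sketch of such a tally is shown below; in Gradphi the accumulation actually happens inside the GEANT stepping routine gustep.F, so this C rendering and all of its names are purely illustrative.

```c
/* Illustrative tally of the total e+/e- path length in a shower.
 * In Gradphi this sum is accumulated step by step in gustep.F; the
 * structure and names below are hypothetical stand-ins for the GEANT
 * step information (GEANT particle codes: 2 = e+, 3 = e-).
 */
#include <stdio.h>

typedef struct {
   int ipart;        /* GEANT particle code of the track being stepped */
   double step;      /* length of this tracking step (cm) */
} shower_step_t;

double total_epm_path(const shower_step_t *steps, int nsteps)
{
   int i;
   double sum = 0;
   for (i = 0; i < nsteps; i++) {
      if (steps[i].ipart == 2 || steps[i].ipart == 3)
         sum += steps[i].step;
   }
   return sum;
}

int main(void)
{
   shower_step_t demo[] = {{3, 0.05}, {2, 0.12}, {3, 0.30}};
   printf("total e+/e- path length = %.2f cm\n", total_epm_path(demo, 3));
   return 0;
}
```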
To understand this anomaly (a deficit of roughly 12% in the total path length), I enabled debug output and compared a trace of the tracking of particles in a shower. What I discovered is that with FOLLOW_CERENKOV the electrons and positrons in a shower were consistently followed until their kinetic energy crossed below ECUT, which I had set to 100 keV. However, when FOLLOW_CERENKOV was turned off, electrons and positrons were frequently being stopped early, while they still had a few MeV of kinetic energy left. The length of the terminal step was roughly 0.5 mm, which is the normal step size in the LGD for electrons of a few MeV but is a factor of 4 or more too small to stop a few-MeV electron. This premature stopping of tracks is doubtless the cause of the anomaly. But why?
A perusal of the source code in gtelec.F turned up the answer. The code incorporates a switch called IABAN which enables or disables what is called overstopping of e+/e- tracks that are predicted to stop before their next interaction or boundary crossing. The IABAN switch is controlled by an undocumented control card, ABANdon 0/1, which gives permission to overstop (abandon) shower tracks that are going to do nothing except multiple-scatter and ionize until they stop. The default value is IABAN=1, presumably for the sake of efficiency. For detectors whose response is keyed off the energy loss this is fine, because the total energy deposition in the current volume is not affected by overstopping. For lead glass, however, the simulation is based on track length, which does get truncated by overstopping. The code in gtelec.F makes an exception and disables overstopping if there is explicit Cerenkov generation taking place in the medium, which explains why turning on FOLLOW_CERENKOV had the effect that it did. I reran the simulation with the card ABANdon 0 in the control file, and the correct aggregate shower path length of 310 cm was recovered even without FOLLOW_CERENKOV. I conclude that all future simulations should be done with the card ABANdon 0 in the control deck.
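For anyone editing their control deck, this is roughly how the card would appear, assuming the usual four-character FFREAD keyword; the comment style and any neighbouring cards shown here are assumptions, not an excerpt from the actual Gradphi deck.

```
C  Gradphi control deck (FFREAD cards) -- illustrative excerpt
C  Disable overstopping (abandoning) of e+/e- shower tracks so that the
C  full charged-particle path length in the lead glass is simulated.
ABAN 0
```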