April 27, 2000: S. Teige issued an internal note [1] describing a surprising result that he came across while studying the ω sample from the July, 2000 run. Using the LGD to measure the total momentum of the forward system, and the minimum-missing-momentum method to estimate the momentum of the recoil proton, he was able to reconstruct the incident photon energy and compare it with what was seen in the tagger. The surprise was that the beam energies obtained this way came out so low that they not only disagreed with the tagger values, they fell outside the nominal tagging range. Scott concludes his note with the following request for comments, to which I reply below.
These points, taken together, show the importance of an independent determination of the beam endpoint energy. In other words, I think I've predicted the electron beam energy. We should check this out to see if I'm all wet or something is wrong with this analysis. So if anyone knows the answer, please tell me.
Three possible sources for the discrepancy come to mind after reading the note:
1. a mistake somewhere in the tagger analysis itself;
2. nuclear effects from the beryllium target, which violate the free-proton-at-rest assumption underlying the missing-momentum calculation;
3. an error in the overall energy scale of the LGD calibration.
Possibility 1 is not very likely, because Scott would not have overlooked that. But it would be useful to see the tagger spectra anyway.
Possibility 2 is certainly there at some level, but can it explain a whole 1.2GeV shift in the apparent beam energy? Not likely. To check it out, I generated 10000 Monte Carlo events of the reaction γp → ωp with ω → π°γ on a beryllium target, using a photon beam of endpoint 5.5GeV. Using the exact Monte Carlo (not smeared) values of the beam and final-state photon momenta, and the recoil proton momentum from the minimum-missing-momentum method, I calculated the missing momentum in the final state. The z-component of the missing momentum for these events is shown in Fig. 1. This is the quantity that Scott plots vs a variable electron beam energy in Figure 5 of his note. Scott's caption on that figure states that M is zero for a perfectly measured, perfectly contained event. However, this assumes a proton target, not a nuclear one. All of the events in Fig. 1 are perfectly measured and contained, but there is some offset from zero and some width because of the recoil of the A=8 residual system. We can convert this average M of 100MeV into an apparent beam energy by looking up the beam energy that corresponds to 0.1GeV in Scott's Figure 5. It goes in the right direction but not far enough; this can explain at most 30% of the discrepancy.
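The role of target-nucleon motion in the missing momentum can be illustrated with a toy calculation. The sketch below is my own construction, not Scott's analysis code, and every name in it is hypothetical: it solves exact 2→2 kinematics for γN → ωN on a moving nucleon and shows that, for a perfectly measured exclusive event, the longitudinal missing momentum seen by an analysis that assumes a free proton at rest is just the negative of the target's longitudinal momentum.

```python
import math

M_P = 0.938      # proton mass (GeV)
M_OMEGA = 0.783  # omega mass (GeV)

def boost_z(E, pz, beta):
    """Lorentz boost of (E, pz) along the z axis."""
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return g * (E + beta * pz), g * (pz + beta * E)

def missing_pz(e_beam, pz_target, cos_cm=0.9):
    """Toy gamma N -> omega N event: photon of energy e_beam along +z hits
    a target nucleon carrying longitudinal momentum pz_target.  Returns
    the longitudinal missing momentum computed as if the target were a
    free proton at rest (illustrative model only)."""
    e_n = math.sqrt(M_P ** 2 + pz_target ** 2)
    E_tot, Pz_tot = e_beam + e_n, e_beam + pz_target
    s = E_tot ** 2 - Pz_tot ** 2
    w = math.sqrt(s)
    beta = Pz_tot / E_tot                  # velocity of the CM frame
    # two-body breakup momentum in the CM frame
    p_cm = math.sqrt((s - (M_OMEGA + M_P) ** 2) *
                     (s - (M_OMEGA - M_P) ** 2)) / (2.0 * w)
    e_o = math.sqrt(M_OMEGA ** 2 + p_cm ** 2)
    e_p = math.sqrt(M_P ** 2 + p_cm ** 2)
    # boost the omega (forward) and the recoil nucleon back to the lab
    _, pz_o = boost_z(e_o, p_cm * cos_cm, beta)
    _, pz_p = boost_z(e_p, -p_cm * cos_cm, beta)
    # a free stationary proton would give beam pz == summed final pz;
    # the difference is the "missing" longitudinal momentum
    return e_beam - (pz_o + pz_p)
```

For a stationary target the missing pz vanishes identically; give the nucleon 100MeV/c of longitudinal momentum and the missing pz comes out at -100MeV/c, independent of scattering angle. Energy left behind in the residual nucleus would add a further offset of the kind seen in Fig. 1.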
If we accept the hypothesis that the beam energy was 5.5GeV and that the final states we reconstruct carry only 80% of the beam energy even after we correct for recoil effects, then we have possibility 3. To show how it is possible to have an excellent calibration of the masses of the known neutral mesons and still be off in the overall scale, consider the following. The mass-squared of an nγ state can be written as
m² = (Σᵢ Eᵢ)² − |Σᵢ Eᵢ r̂ᵢ|²    (1)

where Eᵢ is the energy of cluster i and r̂ᵢ is the unit vector from the target to the cluster. Expanding the squares gives

m² = Σᵢ<ⱼ 2 Eᵢ Eⱼ (1 − cos θᵢⱼ)    (2)

Let D be the target-to-glass distance and ρ⃗ᵢ = (xᵢ, yᵢ)/D the transverse cluster position in units of D, so that r̂ᵢ = (ρ⃗ᵢ, 1)/√(1 + ρᵢ²). Then

cos θᵢⱼ = (1 + ρ⃗ᵢ·ρ⃗ⱼ) / √((1 + ρᵢ²)(1 + ρⱼ²))    (3)

1 − cos θᵢⱼ = ½ |ρ⃗ᵢ − ρ⃗ⱼ|² + O(ρ⁴)    (4)

m² = Σᵢ<ⱼ Eᵢ Eⱼ |ρ⃗ᵢ − ρ⃗ⱼ|² (1 + O(ρ²))    (5)

m = √( Σᵢ<ⱼ Eᵢ Eⱼ |ρ⃗ᵢ − ρ⃗ⱼ|² ) (1 + O(ρ²))    (6)
Looking at Eq. 6, we see that m is homogeneous (of degree 1) in the cluster energies, so the mass scale is proportional to the overall energy scale factor inherent in the calibration. If we scale up all of the per-block gain factors by a factor a then all masses scale up by the same factor a. What may be less obvious is that, up to an error that amounts in our case to about 3% at the mass of the ω(783), the mass is also homogeneous in the scale of the lead glass block spacing. Of course we know the physical block spacing very well, but here it is expressed in units of the target-to-glass distance, a factor we might have a mistaken value for, which also carries an uncertainty from the average depth of the shower maximum in the lead glass. My central point is that if we have the wrong scale for the ρᵢ's then that fact can be very effectively covered up by a compensating error in the energy scale factor a mentioned above, so that the masses come out right.
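This compensation can be demonstrated numerically. The sketch below is my own construction with illustrative numbers: it computes the nγ mass from exact cluster energies and positions, then checks that scaling the energies by a scales the mass by exactly a, while scaling the coordinates by a and the energies by 1/a returns the mass to its original value up to the small O(ρ²) terms.

```python
import math

def n_gamma_mass(energies, xy, D=100.0):
    """Invariant mass (GeV) of n photon clusters given energies (GeV)
    and transverse positions xy (cm) on a plane a distance D (cm) from
    the target, assuming massless photons that point back to the target."""
    dirs = []
    for x, y in xy:
        r = math.sqrt(x * x + y * y + D * D)
        dirs.append((x / r, y / r, D / r))
    m2 = 0.0
    for i in range(len(energies)):
        for j in range(i + 1, len(energies)):
            cos_ij = sum(a * b for a, b in zip(dirs[i], dirs[j]))
            m2 += 2.0 * energies[i] * energies[j] * (1.0 - cos_ij)
    return math.sqrt(m2)

E = [1.0, 1.5]                      # toy cluster energies
XY = [(10.0, 0.0), (-8.0, 6.0)]    # toy cluster positions
a = 1.2                            # deliberate 20% scale error

m0 = n_gamma_mass(E, XY)
m_gain = n_gamma_mass([a * e for e in E], XY)           # gains scaled up
m_comp = n_gamma_mass([e / a for e in E],               # gains down ...
                      [(a * x, a * y) for x, y in XY])  # ... coords up
```

Here m_gain/m0 equals a exactly, while m_comp/m0 differs from 1 only at the O(ρ²) level (a few tenths of a percent for these numbers): a 20% coordinate-scale error is almost perfectly hidden by a compensating gain error.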
They won't come out exactly right, because the higher-order terms in ρ break the homogeneity. In our case, however, these corrections are small. So the question is: how big an error in a would be required before the masses of the π°, η and ω would be noticeably out of alignment?
[RTJ] May 9, 2000: To answer this question, I took a Monte Carlo sample where I know the exact cluster parameters and inflated all of the cluster coordinates by a factor a. I then tried to compensate by adjusting one overall gain factor for all blocks to get the best fit to the masses of the Big Three. Using a=1.20 and adjusting the gain downward, I found a fit with the mass of the π° 900 keV too high, the mass of the ω 800 keV too low and the mass of the η right on. The results are shown in Table 1.
meson | mass with a=1 (MeV) | mass with a=1.2 (MeV)
---|---|---
π° | 135.0 | 135.9
η | 547.3 | 547.3
ω | 781.9 | 781.1
These are very small shifts, which I argue are comparable with other uncertainties in the real data, coming from things like how you fit the combinatoric background under the resonance and small asymmetries in the lineshape. This is to say that I think we may have a systematic error as large as 20% in the scale of our calibration constants, which we have been ignoring up until now, and which first becomes visible when we look at the tagger.
In the presence of small shifts like this, the depth correction gives us another chance to "calibrate away" systematic errors in our energy scale. This is not to say that the depth correction is a fudge factor - it needs to be included - but if the only guide to the appropriateness of a given method is that it improves the mass alignment, that by itself is not sufficient. By producing small relative shifts in the mass spectrum, the depth correction can obscure the tell-tale signs of a large error in the energy scale.
What we want to do about this is partly philosophy. I have heard the argument that if all we want is the masses, then anything that doesn't affect the masses very much doesn't matter very much. But if we want to make use of the tagger then this argument is out. I propose that we re-think our calibration method to include determining the overall energy scale accurately. This might include some low-rate running to get a sample of omegas with a good tag.

When you put in Be instead of H as the target, is the effect to include the Fermi-momentum smearing?
Yes, that is all I put in. But the effect is more than just smearing, because the √s of the reaction depends on the momentum of the target nucleon, and the cross section depends in turn on √s. There is also some energy left behind in the residual nucleus -- it is not just a recoiling Li bound state. The underlying model is just a harmonic-oscillator nucleus.
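The size of this effect can be sketched in a few lines. This is my own illustration: the 80MeV/c Gaussian width stands in for the harmonic-oscillator momentum distribution and is an assumed number, not the parameter used in the actual simulation.

```python
import math, random

M_P = 0.938  # nucleon mass (GeV)

def sqrt_s(e_beam, pz):
    """Invariant collision energy for a photon of energy e_beam (GeV)
    incident on a nucleon with longitudinal momentum pz (GeV)."""
    e_n = math.sqrt(M_P ** 2 + pz ** 2)
    E_tot, Pz_tot = e_beam + e_n, e_beam + pz
    return math.sqrt(E_tot ** 2 - Pz_tot ** 2)

random.seed(1)
# smear the target's longitudinal momentum with an assumed 80 MeV/c width
vals = [sqrt_s(5.5, random.gauss(0.0, 0.080)) for _ in range(20000)]
mean = sum(vals) / len(vals)
rms = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
```

For a free proton at rest √s ≈ 3.35GeV; the Fermi motion spreads this by roughly (E_beam/√s)·σ_pz ≈ 130MeV r.m.s., which is why the cross section, through its √s dependence, reshapes the event sample rather than merely smearing it.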
January 11, 2001 R. Jones wrote a note [2] recommending a modified formula for calculating the average depth of a shower in the LGD. This depth is needed, together with the shower position in (x,y), to determine the direction of the photon momentum vector. In that study it was assumed that the Cerenkov yield is proportional to the length of charged tracks in the glass. It was pointed out that attenuation of the Cerenkov light on the way to the phototube might produce important modifications to these results.
[JG] January 14, 2001: Still lurking on the radphi list, I noticed your note on shower depths. Good stuff ... makes me wonder if geant has been improved in the decade since serious glass simulations were attempted for E852. Here are a couple of comments from the peanut gallery: first, how sensitive are the track lengths to the tracking cutoffs in geant? And second, could attenuation of the Cerenkov light on its way to the phototube shift the effective shower depth?
Jeff, thanks for keeping an eye on this stuff. On your first point, how sensitive the results are to the tracking cutoff depends on what you are measuring. Below about 400keV the electrons stop making light. It turns out that the range of a 400keV electron in lead glass is about 50 microns, so cutting off there has essentially no effect on the track length. This does not prove that Cerenkov yield and track length are proportional, but at least neither one is affected by the tracking cutoffs in GEANT. On your second point, my feeling is that attenuation will not shift the depth centroid by an appreciable fraction of the r.m.s. fluctuations. But since I cannot come up with a better argument than that, I will do a study. I take the following attenuation model:
I(ℓ) = I₀ e^(−ℓ/λ)    (7)

where ℓ is the path length traveled by the Cerenkov photon in the glass and λ is the bulk attenuation length.
The color spectrum of the detected Cerenkov photons is shown in Fig. 4. The spectrum falls to zero at the edges of the window, showing that the choice of window was adequate. Fig. 5 plots the originating z coordinate of every detected photon in a set of 100 showers from 1GeV gamma rays at normal incidence. This result will not be sensitive to the exact angle of incidence, because the track directions are randomized in the bulk of the shower. The average z-origin of detected Cerenkov photons is within a few mm of the depth predicted in Ref. [2]. I conclude that the effective shower depth, after attenuation is taken into account, does not differ appreciably from the physical one.
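The insensitivity has a simple explanation: for exponential bulk attenuation the detected-light depth centroid shifts downstream by approximately Var(z)/λ, which is small whenever the attenuation length is long compared to the shower's longitudinal spread. The toy weighting exercise below is my own sketch; the block length, attenuation length, and shower-profile parameters are assumed for illustration, not the measured Radphi values.

```python
import math, random

def shower_depth(rng, a=2.0, b=0.5):
    """Sample a photon-origin depth z (cm) from a toy longitudinal
    shower profile proportional to z**a * exp(-b*z) (a gamma
    distribution with mean (a+1)/b = 6 cm here)."""
    return rng.gammavariate(a + 1.0, 1.0 / b)

def depth_centroids(n=200000, L=45.0, lam=100.0, seed=7):
    """Compare the raw depth centroid with the centroid weighted by
    exp(-(L-z)/lam), i.e. bulk attenuation on the way to a phototube
    at z = L (block length L and attenuation length lam assumed)."""
    rng = random.Random(seed)
    zs = [min(shower_depth(rng), L) for _ in range(n)]
    ws = [math.exp(-(L - z) / lam) for z in zs]
    raw = sum(zs) / n
    att = sum(z * w for z, w in zip(zs, ws)) / sum(ws)
    return raw, att
```

With a mean depth of 6cm, an r.m.s. spread of √12 ≈ 3.5cm, and λ = 100cm, the shift att − raw comes out near Var/λ ≈ 0.12cm, a millimeter-scale effect consistent in size with the few-mm shift reported above.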
Since radphi is getting into the kinematic fitting business, one thing that I don't think people have worried about is the statistical uncertainty in the depth correction from shower to shower. In E852 the depth correction was a few percent of the target-to-glass distance (iirc about 20cm/535cm), and people never really worried about including fluctuations in the interaction-to-shower distance in fits ... especially for all-neutral final states with no good vertex information anyway ... the target-to-lgd distance always had a sigma of order ~10cm (30 inch LH_2 target). This could be different for radphi, since the target-lgd distance is much smaller and the target position is well known, even though its momentum isn't. I don't have a feel for this, but what kinds of shower-to-shower fluctuations in the depth are there? If they are large (? what's that mean ?) then the x and y uncertainties should probably be increased before the kinematic fit.
If you remember from the second figure in my note, the r.m.s. depth fluctuations for a 1GeV shower are 4.3cm. The r.m.s. depth of the initial gamma conversion is 9/7 × X0 = 4.0cm, so you can see that the fluctuations are essentially given by where the first pair in the shower originates. And since the radiation length is not a strong function of energy above 100MeV, we can take this number 4.3cm to be a constant.
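The arithmetic here can be checked directly. In this sketch of mine, X0 ≈ 3.11cm is an approximate radiation length for lead glass inferred from the quoted 9/7·X0 = 4.0cm: the conversion depth is exponentially distributed with mean (9/7)X0, and an exponential has r.m.s. equal to its mean, so fluctuations of about 4cm follow immediately.

```python
import math, random

X0 = 3.11  # approximate radiation length of lead glass (cm), assumed
MEAN_CONV = (9.0 / 7.0) * X0  # mean depth of the first pair conversion

random.seed(3)
depths = [random.expovariate(1.0 / MEAN_CONV) for _ in range(100000)]
mean = sum(depths) / len(depths)
rms = math.sqrt(sum((z - mean) ** 2 for z in depths) / len(depths))
# for an exponential distribution rms == mean, so both sit near 4.0 cm
```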
At 20° this translates into an error in x,y from depth fluctuations alone of 1.5cm. This is larger than the error from shower-shape fluctuations, which was about 1cm r.m.s. in E852, if I remember my earlier work correctly. This may require us to use an elliptical error matrix on the (x,y) of each shower in the kinematic fits. That is not hard to do, but it introduces more knobs that we have to know how to control. I think that before we go that far we should have a study that shows observational evidence for off-diagonal error matrix elements in (x,y) for individual showers. Brent Evans is working with me on a study to look for this effect.
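A worked sketch of both the projection and the resulting error matrix (my own construction; the function names and the 1cm shape term are illustrative assumptions):

```python
import math

def transverse_error(sigma_z, theta_deg):
    """Transverse position error (same units as sigma_z) induced at the
    LGD face by a longitudinal depth fluctuation sigma_z for a shower
    at polar angle theta."""
    return sigma_z * math.tan(math.radians(theta_deg))

def xy_covariance(sigma_depth, sigma_shape, theta_deg, phi_deg):
    """2x2 (x,y) error matrix for one shower: depth fluctuations act
    along the radial direction phi, while shower-shape fluctuations
    are taken to be isotropic (a simplifying assumption)."""
    sr2 = transverse_error(sigma_depth, theta_deg) ** 2
    ss2 = sigma_shape ** 2
    c = math.cos(math.radians(phi_deg))
    s = math.sin(math.radians(phi_deg))
    return [[sr2 * c * c + ss2, sr2 * c * s],
            [sr2 * c * s, sr2 * s * s + ss2]]
```

transverse_error(4.3, 20) gives roughly 1.5-1.6cm, consistent with the estimate above; for a shower at phi = 45° the off-diagonal element reaches half of the radial variance, so the error ellipse is maximally tilted there.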
Under item 2 above, a detailed simulation was described that followed Cerenkov photons from their source in the shower down the length of the block to the phototube. For simplicity, the simulation assumed that the blocks are surrounded by a layer of air and that any photons escaping from the block are absorbed. It also assumed no air gap between the phototube and the glass at the end of the block. Under these conditions it was seen that the effective shower depth centroid is surprisingly insensitive to attenuation in the glass, shifting only a few mm when attenuation is turned on and off. This is small compared to the scale of shower depth fluctuations, about 4.5cm r.m.s.
[DA] January 29, 2001: That result may depend on the way the glass is wrapped. I have seen that wrapping scintillator in aluminized mylar can make a paddle very sensitive to light emitted near the end close to the tube. This would not be exponential, and it might produce a larger effect than what you are seeing with bulk attenuation.
Yes, there may be an effect due to wrapping. To see this I will revise the model described above, replacing the absorbing surroundings with an aluminum wrapping. I consider the thin layer of mylar to be irrelevant because its refractive index differs little from that of lead glass. I take the reflectivity of aluminum to be 90% at all wavelengths and incidence angles. I also insert a 5mm air gap between the back end of the lead glass block and the phototube. There is a noticeable effect from the wrapping! This can be seen in Fig. 6. The open histogram is the profile of the generated light, rescaled for comparison with the profile of the light that is eventually detected at the phototube. As David said, the wrapping produces an enhancement in the sensitivity to light generated near the downstream end of the block. As a consequence, the depth centroid shifts downstream by about 2cm.
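The mechanism can be caricatured in one dimension. This is a crude toy of my own, not the full simulation: straight-line rays, a bounce count estimated from the ray's transverse excursion, one factor of 0.90 per side-wall bounce, plus the bulk attenuation of Eq. 7; all dimensions are assumed.

```python
import math, random

def collection_weight(z, rng, L=45.0, half_width=2.0, lam=100.0,
                      refl=0.90):
    """Toy probability that a Cerenkov photon born on the block axis at
    depth z reaches the phototube at z = L: pick a random forward-going
    ray, attenuate over its path length, and pay one reflectivity
    factor per estimated side-wall bounce (all parameters assumed)."""
    cos_t = rng.uniform(0.3, 1.0)
    path = (L - z) / cos_t
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    bounces = int(path * sin_t / (2.0 * half_width))
    return (refl ** bounces) * math.exp(-path / lam)

def detected_centroid(refl, n=100000, seed=11):
    """Depth centroid of detected light for a toy shower light profile."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        z = min(rng.gammavariate(3.0, 2.0), 45.0)  # toy light profile
        w = collection_weight(z, rng, refl=refl)
        num += z * w
        den += w
    return num / den
```

With perfect walls (refl=1.0) only bulk attenuation acts and the detected centroid sits close to the generated one; with refl=0.90 the photons born far from the tube pay many bounce factors, and the centroid moves downstream, the same direction as the 2cm shift seen in the full simulation.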
Fig. 7 shows the shower depth centroid vs incident gamma energy. The error bars show the r.m.s. fluctuations from shower to shower. These showers were generated uniformly over the face of the block to avoid any bias from the alignment of the shower with the phototube. The black curve in Fig. 7 is Eq. 3 of [2]; the red curve is the same with 2cm added.
Ref. [4] describes a procedure that introduces an angle dependence into the conversion from shower pulse heights to total shower energy. This is necessary in the Radphi forward calorimeter because at angles beyond 20° a rapidly increasing fraction of the shower energy leaks out the sides of the detector. In the analyses before this study was performed, it was understood that a simple linear model of pulse heights vs shower energy was inadequate, and would produce a bias towards higher masses for decaying particles with larger energy in the lab frame. A nonlinear correction was used to cancel this bias, by making the gain constant proportional to a small power (called epsilon in the code) of the total pulse height. By trial and error D. Armstrong found that the systematic shift of the π° peak position with π° lab energy could be approximately nulled with a value of ε = -0.06, using the default calibration constants that have been in place since the summer 2000 run. Now that this nonlinearity correction has been superseded by the new angle-dependent procedure, the question arises whether the new algorithm exhibits any mass/energy bias.
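Schematically, the old correction looked like this. The sketch is mine: the normalization convention and the function name are assumptions for illustration, not necessarily what the Radphi code does.

```python
def corrected_energy(raw, epsilon=-0.06, ref=1.0):
    """Old-style nonlinearity correction (schematic): multiply the
    summed pulse height `raw` by a small power epsilon of itself,
    normalized at an assumed reference pulse height `ref`."""
    return raw * (raw / ref) ** epsilon
```

Relative to a 1GeV-equivalent shower, a 4GeV-equivalent shower gets its effective gain reduced by a factor 4^(-0.06) ≈ 0.92, about 8%, which is the sense needed to null a reconstructed mass that rises with lab energy.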
[DA] March 9, 2001
Initial results from testing lgdtune, Richard's new calibration code. Based on run 8342, 10 iterations, 500K events. The starting point for the iterations was the gain constants from the database (therefore based on the older nonlinearity-correction code). The following test was done to look for a walk of the π° peak position with total cluster multiplicity (comment: for constant total energy in the LGD, the average photon energy is inversely proportional to the total multiplicity). The total energy in the LGD was required to be at least 4.0GeV. Any pair was counted as a π° if its invariant mass fell within 20MeV of the physical value. The resulting mass spectra were fitted to a Gaussian plus a cubic background. The sigmas (widths) appear significantly better than with the previous calibrator; however, the energy cut was different, so the comparison may not be fair. The following fits were made.
This result of Dave's is surprising, since comparisons of the energy curves for fixed angle in the region below 10° show good agreement between the new procedure and the old formula with ε = -0.06. This is shown in Fig. 8. Between 10° and 20° the shape remains the same, but the gains move to lower values, reflecting the fact that at larger angles the shower leakage decreases by as much as 10%. This kind of systematic shift in gain with angle is simply absorbed into the gains of individual blocks, and so is not observable if the calibration is done with the same procedure as the analysis. The comparison between the old epsilon treatment and the new procedure for angles between 10° and 20° is shown in Fig. 9. At least the showers within the forward 20° cone should show the same mass/energy bias as with the old treatment, and for the clusters beyond 20° the new treatment should be a significant improvement. At UConn we will try to reproduce the effect seen by D. Armstrong.
The following test was performed on the data from run 8600. First Richard ran lgdtune, with the angle-dependent depth correction in place, to set new calibration constants for the counters. These calibration constants were then used to generate a sample of π°s. We were careful to use the same makehits calls in the analysis as were used for the lgdtune calibration (with lgd_cluster_cleanup and a total energy cut at 4.5GeV). The calibration was based only on π°s from 3-cluster events, but in the analysis π°s from all events were collected and the mass spectra were fitted. The centroids are given below:
If our interpretation is correct, it is not the topology of these events that is shifting the mass but merely that the average energy of the two photons is anticorrelated to cluster multiplicity. If this is correct then the same effect should be visible by selecting only 2-cluster events and binning them in total energy. The following results taken from the same analysis (2 clusters only) give evidence that this reasoning is correct.
We conclude that the present calibration procedure is not producing these large mass shifts through any procedural error; rather, they are the consequence of our detector acceptance. A confirmation of this can be seen in the above data tabulated for higher cluster multiplicities, where the typical E12 values are lower and opening angles tend to be larger than the acceptance cutoff. There one sees the downward trend converging to a fixed value. The calibration should be trained to tune this fixed value to the physical mass of the π°.

The lgdtune program has now been modified in this way, and results will be forthcoming shortly.
[MK,RTJ] May 7, 2001
The dependence of the mass on cluster multiplicity corresponds simply to its dependence on average cluster energy, so we decided to examine further the "slope" of the η's iso-mass curve in the E12 plot, Fig. 11. We switched to a hyperbolic coordinate system and looked at the η mass in regions bounded by different iso-normals (called f) of the η's iso-mass curve. The following table shows how the η mass depends on f:
f | < 0.0 | (0.0, 1.0) | (1.0, 2.0) | (2.0, 3.0) | (3.0, 4.0) | (4.0, 5.0) | > 5.0
---|---|---|---|---|---|---|---
η mass [MeV] | 524.7 | 540.4 | 549.0 | 548.9 | 552.1 | 554.6 | 564.0
The walk of the η mass in the low and high f regions does not support the conclusion that this is due to LGD acceptance. Thus there is still some room for tuning the angle-dependent energy nonlinearity corrections.
cluster multiplicity | π° mass (MeV) | width (MeV) | η mass (MeV) | width (MeV)
---|---|---|---|---
2 | 142.9 ± 0.02 | 18.3 ± 0.02 | 537.6 ± 0.08 | 38.2 ± 0.1
3 | 136.4 ± 0.02 | 16.7 ± 0.02 | 541.6 ± 0.3 | 40.0 ± 0.4
4 | 134.3 ± 0.02 | 16.4 ± 0.02 | 543.7 ± 2.1 | 37.4 ± 2.8
5 | 133.8 ± 0.05 | 17.4 ± 0.06 | |
6 | 133.7 ± 0.1 | 17.6 ± 0.2 | |
7 | 133.5 ± 0.3 | 17.7 ± 0.5 | |
f | (-1.0, 0.0) | (0.0, 1.0) | (1.0, 2.0) | (2.0, 3.0) | (3.0, 4.0) | (4.0, 5.0) | > 5.0
---|---|---|---|---|---|---|---
η mass [MeV] | 524.7 | 535.3 | 540.3 | 538.3 | 539.1 | 540.6 | 549.9
The η mass is shifted down, but more significant is that the π° mass is stabilized around 134 MeV for the higher cluster multiplicities, and the f-dependence of the η mass shows less walk than before. We concluded that the energy nonlinearity is not the major thing that has to be tuned (we got the right π°/η ratio with the previous nonlinearity) but rather the depth correction. For higher-energy showers Z is pushed backwards, which increases the angle, so the η lies above the iso-mass line. For low-energy hits Z is pushed forward and θ comes out smaller, so the η walks down below the iso-mass line.