Monte Carlo acceptance for φ→5γ

Richard Jones
Radphi collaboration meeting
November 17, 2001
Jefferson Lab, Newport News, VA

Outline

  1. How can we estimate the number of φ→5γ we should see?
  2. What is the Radphi acceptance for a fully reconstructed 5γ event?
  3. What is the break-down of the losses in terms of cuts?
  4. How much extra signal might we see from photoproduction off neutrons?
  5. What assumption is being made in all of this about tagging coincidences?

1. How can we estimate the number of φ→5γ we should see?
The integral of the luminosity for the summer 2K run is shown in Fig. 1, broken down by run number. Only periods where the taggerOR was running at 20 MHz or higher were counted in the integral, which covers essentially all runs performed with the physics trigger. The luminosity is measured in units of live-hours at a nominal 5e7 tagged photons/s and has a total integral of 424 hours. This is converted to inverse microbarns by multiplying the total number of tagged photons by the tagging efficiency and by the effective target thickness. The numbers are shown in Table 1.
Table 1
  total tagged photons while experiment was "live"    7.63e+13
  estimated tagging efficiency                         90%
  target thickness                                     2.62 cm
  target density                                       1.85 g/cm^3
  protons per gram of target                           2.67e+23
  effective target thickness                           1.30 protons/barn
  integrated luminosity                                8.9e+7 /μb
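
As a cross-check of Table 1, the short sketch below (Python, with all inputs taken directly from the table) reproduces the effective target thickness and the integrated luminosity.

    # Cross-check of Table 1: effective target thickness and integrated
    # luminosity from the inputs listed above.
    live_hours       = 424.0     # live-hours at the nominal tagging rate
    nominal_rate     = 5.0e7     # tagged photons per second
    tagging_eff      = 0.90      # estimated tagging efficiency
    target_length    = 2.62      # target thickness, cm
    target_density   = 1.85      # g/cm^3
    protons_per_gram = 2.67e23

    tagged_photons   = live_hours * 3600.0 * nominal_rate             # ~7.63e+13
    protons_per_cm2  = target_length * target_density * protons_per_gram
    protons_per_barn = protons_per_cm2 * 1.0e-24                      # ~1.30
    lumi_inv_microbarn = tagged_photons * tagging_eff * protons_per_barn * 1.0e-6

    print("tagged photons     : %.2e" % tagged_photons)               # 7.63e+13
    print("protons/barn       : %.2f" % protons_per_barn)             # 1.30
    print("luminosity (1/ub)  : %.1e" % lumi_inv_microbarn)           # 8.9e+7
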
Within statistical errors, the Novosibirsk results for the φ radiative decay branching ratios to π0π0γ and ηπ0γ are both 1e-4. Putting this together with the approximate φ photoproduction cross section of 0.45 μb gives an expected 4000 π0π0γ events during the live time of the summer 2K Radphi run. The same branching ratio gives a comparable number of ηπ0γ events; taking the roughly 39% η→2γ branching ratio into account yields about 1600 5γ events in the a0 channel. Of course only a fraction of these will have passed the trigger and been written to tape, and only a fraction of what is on tape will be reconstructible. The acceptance of our trigger and reconstruction is considered next.
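
Continuing the same arithmetic, a rough yield estimate under the stated assumptions (0.45 μb photoproduction cross section, 1e-4 radiative branching ratios, an η→2γ branching ratio of about 39%, and the integrated luminosity copied from Table 1):

    # Rough yield estimate for the summer 2K live time, using the numbers
    # quoted in the text.  The pi0 -> 2 gamma branching ratio is taken as ~1.
    lumi_inv_microbarn = 8.9e7    # integrated luminosity, 1/ub (Table 1)
    sigma_phi_ub       = 0.45     # approximate phi photoproduction cross section, ub
    br_radiative       = 1.0e-4   # phi -> pi0 pi0 gamma and phi -> eta pi0 gamma
    br_eta_2gamma      = 0.39     # approximate eta -> 2 gamma branching ratio

    n_phi = lumi_inv_microbarn * sigma_phi_ub          # ~4.0e+7 phi photoproduced
    n_f0  = n_phi * br_radiative                       # ~4000 pi0 pi0 gamma (5 gamma) events
    n_a0  = n_phi * br_radiative * br_eta_2gamma       # ~1600 eta pi0 gamma -> 5 gamma events
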

2. What is the Radphi acceptance for a fully reconstructed 5γ event?
For this part of the study, we simulated 4000 φ→f0γ photoproduction events and subjected them to the trigger and offline reconstruction. Only beam photons within the tagging range were generated. As described in the previous section, this sample size represents the number of such events photoproduced by tagged photons interacting with protons in the Radphi target during the live-time of the experiment. Requiring only that the event satisfy the online trigger and that at least 2 clusters be reconstructed in the LGD leads to the effective mass spectrum shown by the black histogram in Fig. 2. If events with fewer than 4 reconstructed clusters are rejected then the mass spectrum shaded in yellow is obtained, containing about 400 events. A similar analysis of the φ→a0γ channel leads to the mass spectra shown in Fig. 3. About 100 events are found in the yellow shaded area of Fig. 3.

There is a shift of about 10% downward in the peak position in Figs. 2-3 relative to the φ mass. This has not been studied yet, but is probably a bias introduced by the forward acceptance. The apparent φ peak seen in the data in the ηγ analysis was shifted downward by a similar amount.

3. What is the break-down of the losses in terms of cuts?
Although not very much analysis was applied to the events in Figs. 2-3, it is instructive to look at how we manage to lose 90% of our events before the analysis really gets started. The trigger consists of 2 conditions: the requirement of a hit in all three layers of the BSD and at least 2.5 GeV deposited in the LGD. Experience during the 1999 test run showed that the "BSD AND" was approximately equivalent to the requirement of at least one pixel. This is borne out in Figs. 4 and 5, which show the dominance of 1-pixel events for the f0 and a0 channels respectively. The statistics in these plots reflect the total yield of events passing the online trigger and cluster cleanup cuts. The losses from these two sources are approximately equal; the trigger acceptance is about 50% and the cluster cleanup acceptance is 40-50%, somewhat higher for the a0 than for the f0 channel. This is explained by the tendency of the π0's to produce nearby clusters that fail the cluster separation condition in cluster cleanup.
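
For concreteness, here is a minimal sketch of the online trigger condition as it might be applied to a Monte Carlo event record; the field names (bsd_layer_hits, lgd_total_energy) are hypothetical placeholders, not the actual Radphi data structures.

    # Sketch of the online trigger described above: a hit in all three BSD
    # layers ("BSD AND") plus at least 2.5 GeV deposited in the LGD.
    def passes_online_trigger(event, lgd_threshold_gev=2.5):
        bsd_and = all(event["bsd_layer_hits"])                 # all 3 BSD layers hit
        mam_ok  = event["lgd_total_energy"] >= lgd_threshold_gev
        return bsd_and and mam_ok

    # Example with a hypothetical event record:
    event = {"bsd_layer_hits": [True, True, True], "lgd_total_energy": 3.1}
    print(passes_online_trigger(event))                        # True
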

The total energy found in reconstructed clusters for all triggered events is shown in Figs. 6 and 7 for the respective channels. Note that the total LGD energy (MAM condition) is well above the effective online threshold and appears to be irrelevant for these channels, as designed. It was believed from the data (I have not checked this in the Monte Carlo, but expect it to hold there as well) that the single-block threshold (DOR) is similarly irrelevant for our signal reactions. This means that the 50% loss observed for the online trigger in Monte Carlo must be due almost entirely to the recoil proton condition. Whatever its origin, the online trigger acceptance cannot be changed now, but part of the loss at the cluster cleanup stage can probably be recovered by a more sophisticated algorithm.

The forward cluster multiplicity distribution for these events is shown in Fig. 8 and Fig. 9 respectively for the f0 and a0 decay channels. These plots show clearly that, at least with the present LGD clusterizer, requiring that all 5 clusters be reconstructed in the forward calorimeter essentially kills our acceptance. It is imperative that we develop our reconstruction algorithm so that it is more flexible in its criteria for allowing an event to be classified as 5γ.

4. How much extra signal might we see from photoproduction off neutrons?
In the previous section it was noted that the requirement of a charged particle detected in all three layers of the BSD was satisfied about 50% of the time in γp→φp. We have always assumed that the trigger would effectively exclude the comparable process γn→φn, but this should be checked. To look at this, we generated 5000 and 2000 events in the f0 and a0 photoneutron channels, respectively. Of these, 465 and 202 events, respectively, passed the online trigger requirement. After cluster cleanup these figures are reduced to 156 and 106, respectively. Not all of these events have a true BSD pixel, as can be seen in Fig. 10 and Fig. 11. Requiring a single pixel costs essentially nothing in the photoproton signal, but further reduces the photoneutron φ yield to 67 and 51 events, respectively. The total cluster energy distribution for these events is shown in Fig. 12 for the f0 and Fig. 13 for the a0. The corresponding invariant mass distributions are shown in Fig. 14 and Fig. 15, where the unshaded histogram describes all events with at least 2 clusters and the yellow shaded area comes from events with at least 4 clusters in the LGD.
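
The cut-by-cut survival of these photoneutron samples can be summarized with a few lines of bookkeeping; the counts below are the ones quoted in the text.

    # Cumulative survival of the photoneutron Monte Carlo samples through the
    # cut stages quoted above.
    stages    = ["generated", "online trigger", "cluster cleanup", ">=1 BSD pixel"]
    f0_counts = [5000, 465, 156, 67]
    a0_counts = [2000, 202, 106, 51]

    for stage, nf, na in zip(stages, f0_counts, a0_counts):
        print("%-16s  f0: %5d (%5.1f%%)   a0: %5d (%5.1f%%)"
              % (stage, nf, 100.0 * nf / f0_counts[0],
                 na, 100.0 * na / a0_counts[0]))
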

5. What assumption is being made in all of this about tagging coincidences?
The estimate in section 1 above for the yield of f0γ and a0γ events in the summer 2K data set assumed that only the tagged flux in the photon beam was producing analysable triggers. What is analysable depends on the analysis, of course, so we begin with the question of what we have on tape. The BSD-TAG timing distribution shows that a coincidence window of about 25 ns was used in the trigger. The accidental tagging probability is given by

    P = 1 - exp(-R G)

where R is the tagged photon rate of 5e+7/s and G is the gate width of 25 ns. This comes out to a 71% accidental probability, and it means that an untagged photon that produced a triggerable signal during the Radphi live-time interval had a 71% probability of getting onto tape. Assuming that 80% or more of the energy in a neutral decay is deposited in the LGD (see Fig. 6 for example), this means that the Radphi trigger had essentially uninterrupted acceptance all the way down to φ threshold. We know from early studies that below 3.5 GeV the likelihood of finding all 5 photons in the LGD acceptance drops rapidly, but above 3.5 GeV the geometric acceptance levels off. Thus it might be advantageous to attempt an analysis that does not require knowledge of the tagged photon energy.
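
As a one-line check of this number, assuming Poisson statistics for the tagged beam:

    import math

    # Accidental tagging probability P = 1 - exp(-R*G) for a Poisson beam,
    # with R the tagged photon rate and G the coincidence gate width.
    def accidental_probability(rate_hz, gate_s):
        return 1.0 - math.exp(-rate_hz * gate_s)

    print(accidental_probability(5.0e7, 25.0e-9))   # ~0.71 for the 25 ns trigger gate
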

The problem with this is that any fully-reconstructed φ events from beam photons below the tagged range would be buried under a large background of partially-reconstructed events produced by photons nearer to the end point. This suspicion is borne out in Fig. 16, which shows the inclusive spectrum of all photoproduction reactions generated by beam photons within the tagged band. Most of these are incomplete reconstructions, or contain charged particles whose energies are underestimated in the LGD. Thus it is unlikely that a φ signal will be visible anywhere except within the tagged band; even there we may have to bias the analysis toward the high-energy end. The demands of signal/background separation will almost certainly require us to make use of every piece of information we have about an event, including the tagged photon energy.

In the offline analysis the tagging coincidence window can be made much narrower than 25 ns. Currently, with a 10 ns window, we are able to contain essentially all of the BSD-TAG coincidence peak. Reducing G to 10 ns lowers the accidentals rate to about 40%. If we were willing to take a small hit in acceptance, the window might be reduced as far as 8 ns, which brings the accidentals rate down to 33%, at which point it begins to look reasonable to use the tagger information in the analysis. Accidentals subtraction will have to be used to isolate the tagged signal, but this can be done on our data with essentially no cost in terms of statistical precision, i.e. without discarding events with multiple tags.
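
The same Poisson expression evaluated at the narrower offline windows reproduces the figures quoted above:

    import math

    # 1 - exp(-R*G) at the narrower offline coincidence windows.
    R = 5.0e7                                       # tagged photons per second
    for gate_ns in (10.0, 8.0):
        print(gate_ns, 1.0 - math.exp(-R * gate_ns * 1.0e-9))   # ~0.39 and ~0.33
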