
upper-level trigger improvements

Figure 16: Decomposition of signal losses among various stages of the event pipeline as a function of the width of the CPV veto window $t_{_C}$ at a beam current of 250nA, with the upper-level trigger improvements foreseen for 1999 running in place. Values of $t_{_C}$ below 15ns represent an extrapolation of the model beyond its range of validity because in that regime the CP vetoes cannot be treated as fully efficient.
\begin{figure}\begin{center}\mbox{\epsfxsize =9.0cm\epsffile{extrap2.eps}}\end{center}\end{figure}
Figure 17: Comparison between ideal and actual rates of recording signal events on tape as a function of beam intensity, under the conditions of a level 1 trigger and improved DAQ foreseen for 1999 running.
\begin{figure}\begin{center}\mbox{\epsfxsize =9.0cm\epsffile{sit2.eps}}\end{center}\end{figure}

From Fig. 15 it is seen that level 2 processor dead-time is a leading source of loss in our system. Several people have suggested that this loss could be reduced by a pre-level-2 decision based simply upon a count of lead glass blocks over some threshold. This idea was tested, first by Phil Rubin and then by Elton Smith and Kyle Burchesky[1], using data collected during the June period. The conclusion is that this simple criterion rejects 95% or more of the level 0 triggers, while preserving almost all of what the level 2 processor would keep. In Table 1 of their report, Elton and Kyle show that for a particular choice of threshold, more than 97% of the level 0 triggers are rejected while 98% of the events with MAM values of 64 or higher (the level 2 threshold used during the June period) are preserved.
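The proposed pre-level-2 criterion amounts to a simple cut: accept the event only if at least one lead glass block exceeds a pulse-height threshold. The following sketch illustrates the logic; the events, amplitudes, and threshold are invented for illustration, and the real rejection and retention fractions come from the June-period data of Ref. [1].

```python
# Toy sketch of the proposed pre-level-2 cut: reject an event unless at
# least one lead glass block is over threshold.  All numbers here are
# hypothetical placeholders, not values from the actual study.

def passes_level1(block_amplitudes, threshold):
    """Accept the event if any LGD block amplitude exceeds the threshold."""
    return any(a > threshold for a in block_amplitudes)

# hypothetical events: lists of per-block amplitudes (adc counts)
events = [
    [12, 3, 0],        # soft background event -> rejected
    [140, 22, 5],      # event with one large hit -> accepted
    [7, 7, 7],         # diffuse low-energy event -> rejected
]
threshold = 100        # assumed discriminator level, adc counts
kept = [e for e in events if passes_level1(e, threshold)]
print(len(kept))       # number of toy events surviving the cut
```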

The way that this would work is as follows. The adc modules connected to the lead glass detector produce a logic signal on the Fastbus auxiliary backplane a short time after the integration gate has closed. This signal is the output of a comparator between the charge on the integrator for that channel and some programmable level. A fast OR of the outputs from all instrumented blocks in the wall could be formed, and a fast-clear issued if the OR signal is absent. Those familiar with the adc say that it could be ready for the next event as soon as 250ns after the fast-clear is received. The LeCroy 1877 multihit tdc specification gives 290ns as the required fast-clear settling time, but the 1875A high-resolution tdc that we use to digitize the tagger and RPD signals requires 950ns between the fast-clear and the earliest subsequent gate. Therefore the earliest that the acquisition could be re-enabled following a fast-clear from a level 1 decision would be 1.2$\mu s$ after the receipt of the level 0 trigger. This is equivalent to a 10-fold reduction in the level 2 dead-time.
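The timing budget above is set by the slowest module in the system. The sketch below reproduces the arithmetic; the 250ns delay of the fast-clear relative to the level 0 trigger is an assumption made for illustration, chosen so that the total matches the 1.2$\mu s$ quoted in the text.

```python
# Back-of-the-envelope re-enable time after a level 1 fast-clear, using
# the module settling times quoted in the text.  The fast-clear delay
# relative to the level 0 trigger is an assumed value for illustration.

settle_ns = {
    "LGD adc":          250,   # ready for next event after fast-clear
    "LeCroy 1877 tdc":  290,   # multihit tdc fast-clear settling time
    "LeCroy 1875A tdc": 950,   # high-resolution tdc (tagger, RPD)
}

recovery_ns = max(settle_ns.values())   # slowest module dominates
fast_clear_delay_ns = 250               # assumed: level 1 decision latency
reenable_ns = fast_clear_delay_ns + recovery_ns
print(reenable_ns)                      # ns after the level 0 trigger
```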

In order to incorporate this improvement into the model, it is necessary to break up the level 1 passing fraction ${\cal F}_1$ of 3% into a passing fraction for signal and one for nonsignal, similar to what was done in Eq. 14 for ${\cal F}_2$.

\begin{displaymath}
{\cal F}_1 = \frac{
f_{1s}\,f_s\,{\cal F}_{true} +
f_{1n}\,(f_n-f_s)\,{\cal F}_{acc.}
}{
f_s\,{\cal F}_{true} +
(f_n-f_s)\,{\cal F}_{acc.}
}
\end{displaymath} (17)

The data in Ref. [1] were taken at beam intensities where the triggers were more than 95% nonsignal; therefore the value of 3% that they report for ${\cal F}_1$ is essentially a determination of $f_{1n}$, leaving $f_{1s}$ unconstrained by the data. In what follows I have been conservative and set $f_{1s}$ to 1. While it is surely an overestimate to say that every signal event has at least one large hit in the LGD, at the beam intensities of interest for this model the value used for $f_{1s}$ makes very little difference to the dead-time.
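Eq. 17 is just the trigger-weighted average of the signal and nonsignal passing fractions. A minimal sketch, with $f_{1s}=1$ and $f_{1n}=0.03$ from the text but with placeholder values for the rate factors $f_s$, $f_n$, ${\cal F}_{true}$, and ${\cal F}_{acc.}$ (the actual values come from the model developed earlier in the note):

```python
# Level 1 passing fraction per Eq. 17: a weighted average of the signal
# passing fraction f1s and the nonsignal passing fraction f1n.  The rate
# factors below are illustrative placeholders, not numbers from the model.

def level1_passing_fraction(f1s, f1n, fs, fn, F_true, F_acc):
    signal = fs * F_true              # signal contribution to level 0 rate
    nonsignal = (fn - fs) * F_acc     # nonsignal (accidental) contribution
    return (f1s * signal + f1n * nonsignal) / (signal + nonsignal)

# f1s = 1 (conservative choice from the text), f1n = 0.03 (Ref. [1])
F1 = level1_passing_fraction(f1s=1.0, f1n=0.03,
                             fs=0.05, fn=1.0, F_true=1.0, F_acc=1.0)
print(F1)
```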

Another area where losses may be reduced is the data acquisition dead-time. Phil Rubin and David Abbott have found a way to reduce the time it takes to read out and store an event from the 670$\mu$s observed during 1998 running to less than 300$\mu$s. Being cautious, I have taken 300$\mu$s as the new estimate for the readout dead-time per event $d_{_{DAQ}}$. With the level 1 trigger and data acquisition speed-up taken into account, Fig. 15 is replaced by Fig. 16. Fixing the value of $t_{_C}$ at 15ns (the minimum value for a fully efficient veto), the yield of signal vs. beam intensity is shown in Fig. 17. A significant improvement has been obtained over the 1998 situation shown in Fig. 12, but we are still not in a position to make effective use of beams much above 150nA.
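The benefit of the faster readout can be seen in a toy non-paralyzable dead-time model, in which the live fraction falls as the product of trigger rate and per-event dead-time grows. The trigger rate used below is an assumed round number for illustration only; the full model of this note includes the other loss terms shown in Figs. 15 and 16.

```python
# Toy dead-time model showing why faster DAQ helps.  Only the 670 us and
# 300 us readout times come from the text; the trigger rate is assumed.

def live_fraction(trigger_rate_hz, dead_time_s):
    """Non-paralyzable dead-time model: live fraction = 1 / (1 + R*d)."""
    return 1.0 / (1.0 + trigger_rate_hz * dead_time_s)

rate = 1000.0                          # accepted trigger rate, Hz (assumed)
old = live_fraction(rate, 670e-6)      # 1998 readout: 670 us per event
new = live_fraction(rate, 300e-6)      # improved DAQ: 300 us per event
print(old, new)
```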


Richard T. Jones 2003-02-12