With two weeks until the next video meeting, I thought it might be
productive for me to share my recent work on Monte Carlo issues. I
think a consistent picture has finally emerged.
To summarize, I get a detectable-event rate difference of around 10%
between the Lipari and NEUGEN generators, with Lipari giving more
events (contrary to what I was claiming at GS in November). My
analysis is very sensitive to the parameters of the detector
simulation, most importantly the hadron tracking cut, and next most
importantly the parameters affecting streamer tube efficiency. I
believe my analysis could easily be reworked to be much
less sensitive to the hadron tracking cut (perhaps Antonio's analysis
is already relatively insensitive), but the dependency on the ST
efficiency is real and could contribute to our systematic error.
To summarize in more detail -- at GS in November, I erroneously
reported that my analysis got about 10% more detectable events from
the NEUGEN generator than from Lipari. This was due to some sort of
procedural error I made when generating my high-statistics (65k)
Lipari sample. (I had generated 7 files of 10,000 events each and one
of the files was rather bogus -- all the other files had about 350
detectable events per 10,000 generated, but the bad file had only
about 25. My guess is I generated it with the wrong ERP database. I
have since regenerated that file and it is now consistent with the
others.) After correcting this error, I believe I should have
reported that I got about 10% more detectable events from Lipari.
Also at GS, Antonio reported he got many more detectable events from
Lipari events he had generated than from NEUGEN events I had
generated. What we did not realize at the time was that much of this
difference was attributable to differences in the detector simulation,
rather than differences in the generator. After the GS meeting,
Antonio reported that Lipari events generated by him were almost
compatible with Lipari events generated by me, while I reported a
large discrepancy. This is probably because his analysis is less
sensitive than mine to additional low-energy neutrons which are
wandering around the detector in events generated by me due to the
lower hadron cut in my detector simulation. Even in Antonio's
analysis, there was a somewhat significant difference between Lipari
events generated by him (46.4 +/-0.7 ev/y detectable) and by me (43.5
+/-1.9 ev/y) which I now conjecture is largely due to differences in
the ST efficiencies in the detector simulation. With my analysis, the
difference was much larger -- 39.01 +/- 0.68 ev/y when generated by
Antonio and 30.41 +/- 0.48 ev/y when generated by me.
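For scale, these differences can be put in units of the combined
statistical errors. A quick sketch (the "sigma" here is purely the
quoted statistical error, ignoring any systematics):

```python
import math

def significance(r1, e1, r2, e2):
    """Difference of two rates, in units of the combined statistical error."""
    return abs(r1 - r2) / math.hypot(e1, e2)

# Antonio's analysis: Lipari generated by him vs. generated by me
print(round(significance(46.4, 0.7, 43.5, 1.9), 1))      # 1.4 sigma

# My analysis: the same comparison
print(round(significance(39.01, 0.68, 30.41, 0.48), 1))  # 10.3 sigma
```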
So I now think all the very puzzling inconsistencies we were seeing
have plausible explanations.
I have discovered that in my analysis, many of the detectable events
are only marginally detectable, depending on fluctuations in the GEANT
tracking. To make this quantitative, I generated 10,000 events and
then fed them to GMACRO 4 times, changing nothing but the random
number seed for GEANT. So the primary particles to be tracked by
GEANT were the same (same momentum and same location in MACRO) all
four times. I ran the 4 resulting files through my analysis. Of the
10,000 events, the great majority were never detectable by my analysis
-- most are going the wrong way, or produce just short tracks in the
rock without exiting the detector. Some of the events were detectable
in all four files. But many of the detectable events were only
detectable in some of the analyses -- that is, they might or might not
be detectable depending on the fluctuations in how their GEANT
tracking developed. This is plausible because most upgoing muons
range out in the absorber, but those that straggle out could be
detectable. To be quantitative, the breakdown of the 10,000 events
was:
9259 events detectable 0 times out of 4
199 events detectable 1 time out of 4
134 events detectable 2 times out of 4
149 events detectable 3 times out of 4
249 events detectable 4 times out of 4
So each file contained about 486 events, of which 249 were events that
were detectable 4 times out of 4, about 120 were events that were
detectable 3 times out of 4, about 67 were events that were detectable
2 times out of 4, and about 50 were events that were detectable 1
time out of 4. (The actual numbers of events detectable in each file
were 469, 493, 481, 497.) So in each file, about half the events
detected in that file were events that might not have been detected if
the tracking had developed differently. This does not necessarily
mean anything is wrong, but it does suggest the possibility that a
slightly inaccurate detector simulation could give a big error in
predicted event rates.
Next, I did some work to determine, of the various cards which Antonio
and I had chosen differently to control the detector simulation, which
were most important. Here is a summary of the FFREAD cards that were
different in the two simulations:
CUTS: This card determines at what energy GEANT stops tracking a
particle and dumps all its remaining energy into the volume it is
currently in. It is different for different kinds of particles. I
used a value of 1 MeV for all particle types. Antonio used 0.5 MeV
for most particles, but 10 MeV for hadrons. Antonio says the GEANT
manual claims the algorithm for hadrons is inaccurate below 10 MeV.
THRS, THRW: strip and wire thresholds. I believe these control how
much energy has to be deposited in a streamer tube gas volume in order
for the streamer tube to fire. I used the GMACRO built-in values.
Antonio used some values that had been determined by some of our
collaborators by tuning the parameters to match downgoing muon data.
For the strips, the threshold is given plane-by-plane, with the planes
under scintillator tanks and under the track etch having a smaller
threshold. All the wires used a single value.
TROT, TRST, ELAT: I don't really know what these parameters are, but I
believe they again affect streamer tube efficiencies. I used built-in
values for TRST and ELAT, and 1000 for TROT. Antonio used TRST=5,
ELAT=0.95, TROT=200. Again, Antonio's values came from the work done
to match GMACRO to data.
BACK: controls whether random background ST hits are generated. I had
this off and Antonio had it on.
PFIS, PHOT: control whether photofission and the photonuclear effect are
considered as energy-loss mechanisms while tracking a particle. I had
them off and Antonio had them on.
I generated 100,000 events, and fed them all to GEANT 7 times: a
baseline run, then 6 more runs each changing one group of parameters
of the simulation. I used the
same random number generator seeds to begin each event, so the total
fluctuations from step to step were much less than would occur if I
had simply changed the random number generator without changing any
other parameter. However, if even one particle tracked differently in
one step than in a previous step, all subsequent particles tracked in
the same event would start with a different random number seed, and so
would be tracked independently of how they were tracked in the
previous step. I believe that only the CUTS and PFIS, PHOT cards
could affect GEANT tracking in any way. The other cards only affect
how the detector response is calculated after all the particles have
been tracked.
Each step I changed one thing, and left that change in effect for all
subsequent steps, so I was morphing the simulation from my way to
Antonio's way. That is, if `b' represents a parameter set my way and
`s' Surdo's way, the various steps were:
bbbbbb
sbbbbb
ssbbbb
sssbbb
ssssbb
sssssb
ssssss
I kept track of the event numbers that passed my analysis in each
step. I then counted how many events that were detectable in one step
went undetected at the next step, as well as how many events that were
not detected in one step became detectable at the next step. For
example, in step0 3792 events passed, and in step1 3711 events
passed. However, this was not simply 81 events going away. Instead,
1457 events went away (were detected in step0 but not step1) and 1376
events appeared (were not detected in step0 but were in step1).
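This bookkeeping is just set differences on the lists of passing
event numbers. A minimal sketch with toy event IDs (the real lists
have thousands of entries):

```python
# Toy lists of event numbers passing the analysis at two steps
step0_pass = {2, 3, 5, 8, 13, 21}
step1_pass = {2, 5, 8, 9, 21, 34}

went_away = step0_pass - step1_pass  # detected in step0 but not step1
appeared = step1_pass - step0_pass   # detected in step1 but not step0

print(sorted(went_away), sorted(appeared))  # [3, 13] [9, 34]

# The net change is the small difference of two larger churn terms:
assert len(step1_pass) == len(step0_pass) - len(went_away) + len(appeared)
```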
In the following table, I give the step number, a description of how
the step differs from the previous step, the number of events (out of
100,000) that passed the first stage of my analysis (had a track
reconstructed in space using the standard MACRO tracking packages,
with 2 matching ERPs along the track), how many events passed all the
cuts of my semicontained upgoing muon analysis, and finally how many
of these events appeared or disappeared to yield the number in the
next step's PASS column.
-----------------------------------------------------------------
| STEP | What changed? | TRACK | PASS |
|-------|---------------------------------------|-------|-------|
| 0 | bob's way | 9276 | 3792 | -1457 + 1376
| 1 | cuts (except hadrons) reduced by 2 | 9251 | 3711 | -565 + 1333
| 2 | hadron cuts increased by 10 | 9090 | 4479 | -62 + 183
| 3 | plane-by-plane strip thresholds | 9284 | 4600 | -66 + 469
| 4 | used Antonio's TROT, TRST, ELAT | 9873 | 5003 | -319 + 229
| 5 | ST background hits turned on | 9612 | 4913 | -218 + 255
| 6 | Turned on PFIS and PHOT (surdo's way) | 9634 | 4950 |
-----------------------------------------------------------------
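As a sanity check, the PASS column is arithmetically consistent with
the appeared/disappeared counts at every step:

```python
# PASS column and the (disappeared, appeared) churn from the table above
pass_counts = [3792, 3711, 4479, 4600, 5003, 4913, 4950]
churn = [(1457, 1376), (565, 1333), (62, 183),
         (66, 469), (319, 229), (218, 255)]

for i, (gone, new) in enumerate(churn):
    # each step's count, minus disappearances, plus appearances,
    # must reproduce the next step's count
    assert pass_counts[i] - gone + new == pass_counts[i + 1]
print("PASS column consistent at every step")
```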
It appears that only between step0 and step1 are the fluctuations as
great as a complete retracking. By far the biggest net effect is
raising the hadron tracking cuts. I believe this is primarily due to
my analysis throwing out events where a low energy neutron produced an
ERP hit in an unexpected place. Now, Antonio says that lowering the
cut below 10 MeV gives inaccurate results. However, that does not
mean that setting the cut at 10 MeV gives accurate results -- it could
well be that neutrons below 10 MeV can wander around the detector and
give stray ST and SC hits, but we just don't have software to
accurately simulate that. Now there are things I can do to my
analysis to make it much less sensitive to these wandering neutrons,
and I suspect Antonio's analysis is already much less sensitive to
them than mine. However, he should check that lowering this cut does
not affect his event rate. We should also check whether 0.5 MeV is
low enough for the cut on other particles.
The next most important effect has to do with the streamer tube
efficiencies, which together caused a 10% effect in my analysis.
While Antonio's numbers are a better choice than mine, they are
undoubtedly not perfect. Some work must be done to estimate the error
on the numbers used and to check our sensitivity to that error. This
could be tricky because the numbers have been determined with a sample
of almost-exclusively MIPs (downgoing muons) but the efficiency to
detect recoil hadrons and electrons could increase or decrease the
number of badly-reconstructed muon tracks.
ST background should of course be turned on (I had intended to turn it
on later, after I had the Monte Carlo debugged). The PFIS and PHOT
processes are not very important, but may as well be turned on (I had
left them off for fear they would be CPU hogs, but this is not a
problem).
Regarding the meeting between Paolo Lipari, Hugh Gallagher (one of
the principal authors of NEUGEN), and other interested parties at GS
in November, there were not many immediately clear outcomes. There
was a lot of discussion aimed at understanding what the two programs
are doing, and
some code was exchanged so that Paolo, Hugh and I could begin looking
at some differential distributions and see how the models differ. I
don't know if Paolo or Hugh have made any progress since then; I have
been consumed with these detector simulation issues.
However, I think there were at least two clear outcomes. First, Paolo
had developed a new way of handling nuclear effects in quasielastic
events which is more plausible and should be incorporated into any
Lipari calculations. Second, there is no reason to continue using
Morfin-Tung distribution functions; instead something like GRV94-LO
should be used. I calculated step0 through step6 above using
Morfin-Tung but with the new nuclear effects prescription. I can send
a cradle to anyone interested in using the new nuclear effects.
When I generated 100,000 events with Antonio's FFREAD cards and the
old nuclear effects prescription, I get a detectable event rate that
agrees with what I get analyzing a file of 65,000 events that Antonio
generated (my generation 38.87 +/- 0.54 ev/year; Antonio's generation
39.01 +/- 0.68 ev/year). When I use the new nuclear effects
prescription, I see about 2% more detectable events (about what I had
expected/guessed).
I have also generated 100,000 events with Antonio's FFREAD cards
running with the NEUGEN generator. I see around 9% fewer events than
Lipari with new nuclear effects.
Here is a summary of the event rates of various MC simulations:
Neugen with Surdo cards: 36.51 +/- 0.66 ev/year.
2984 ev out of 100k, 81.722 yr
Lipari oldnuc with Surdo cards: 38.87 +/- 0.54 ev/year
4847 ev out of 100k, 124.689 yr
Lipari done by Surdo: 39.01 +/- 0.68 ev/yr
3109 ev out of 65k, 79.7 yr
Lipari newnuc with Surdo cards: 39.70 +/- 0.55 ev/year
4950 ev out of 100k, 124.689 yr
Lipari newnuc with Bob cards: 30.41 +/- 0.48 ev/year
3792 ev out of 100k, 124.689 yr
Neugen with Bob cards: 27.74 +/- 0.58 ev/year
2208 ev out of 100k, 79.595 yr. Different generation volume.
Bogus Lipari oldnuc with Bob cards: 25.14 +/- 0.55 ev/year
1996 ev out of 65541, 79.39 yr. Different generation volume.
Reported at Nov GS meeting
Correct Lipari oldnuc with Bob cards: 29.22 +/- ~1 ev/year
2320 ev out of 65541, 79.39 yr (the bogus file gmacro.lip.10k.2.zeb
was missing about 325 detectable events; the error is a wild guess
because I'm not sure how much to add)
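For reference, the quoted errors appear to be binomial counting
errors, sqrt(n(1-n/N))/T rather than plain sqrt(n)/T. A sketch that
reproduces a few table entries under that assumption:

```python
import math

def rate(n_pass, n_gen, livetime_years):
    """Rate and binomial error for n_pass detectable events out of
    n_gen generated, at the given equivalent livetime (assumed form)."""
    err = math.sqrt(n_pass * (1 - n_pass / n_gen)) / livetime_years
    return n_pass / livetime_years, err

print("%.2f +/- %.2f" % rate(2984, 100000, 81.722))   # 36.51 +/- 0.66
print("%.2f +/- %.2f" % rate(4950, 100000, 124.689))  # 39.70 +/- 0.55
print("%.2f +/- %.2f" % rate(3109, 65000, 79.7))      # 39.01 +/- 0.68
```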
In the near future, I want to verify that I understand why events
appear and disappear when I change the detector simulation, and I want
to modify my analysis to be less sensitive to these details of
detector simulation. Antonio should grab some of the Lipari events I
generated with his FFREAD cards and old nuclear effects, and verify
that they give identical results to the Lipari events he has been
generating. If so, he should then grab some of my NEUGEN events with
his FFREAD cards and see what rate they give in his analysis. (I will
send Antonio separate instructions on accessing the files.)
Antonio should also get the code for Paolo's new nuclear effects
prescription, and we should both generate some events with GRV
structure functions. Antonio, could you take the lead on trying to
quantify the uncertainty on the ST simulation parameters?
Bob