Generating Monte Carlo samples with Nuance



Table of Contents:

A. Generating Nuance vectors
B. Copy vector file to suketto
C. Simulating the events (using skdetsim)
D. PC reduction
E. FC reduction
F. Reconstruction
G. Creating ntuples
H. General little hints




A. Generating Nuance vectors

First you have to download the nuance code at:
http://nuint.ps.uci.edu/nuance/default.htm

The procedure to download is fairly well explained on the website, so I won't comment on it.

(Once you have your directories set up, it is a good idea to copy the directory /Nuance/v2/data into /Nuance/v2/mydata so that if you erase something by mistake you don't have to redownload the whole thing)

There is a good README file in:
/Nuance/v2/src

In /data you have the following kinds of files:
.fzc files are the results of the rate computations.
.kin files are the vector files you will use as input to skdetsim.
.cards files are the cards to run nuance (easy!)
        There are two kinds of runs:    - generating the rates
                                        - generating the vectors
.flux files are the files containing the info for the solar fluxes.
.hbook files are optional output files (see the .cards files).

-1-
First you need to generate the rates (i.e. create the .fzc files). To do that you can use a card containing the word 'rate',
for example: honda_water_rate.cards

It is pretty easy to modify those cards to do what you want (see the README file).

A typical command line will be:

../Linux-i686/nuanceMc.exe honda_water_rate.cards

(NB: you don't have to use a card file; you can also use command-line flags, as shown later.)

Generating the rates takes about 8 hours on neutrino, using one CPU at 99%.

-2-
Then you have to generate the vector files.
(It is recommended to put no more than 1 year of events in a file, because the files get pretty big at the simulation steps and after.)

Here you have to use the card containing the word 'event', for example:
honda_water_evts.cards

Again, it is pretty easy to modify those cards to get the right solar fluxes, the right number of events, to use the right rate .fzc files, etc.

A typical command will be:
../Linux-i686/nuanceMc.exe honda_water_evts.cards

But if you want to create 60 years of MC split into 60 1-yr files, it is useful to use a shell script such as:

/work/fdufour/nuanceMc/Nuance/v2/60yrs/create_60yrs.sh
(and a copy here:/work/fdufour/documentation/MC-generation)

This script puts the vectors in the directory nuance_vector, which you must have created beforehand.
It is good hygiene to rename this directory (once you have generated the vectors) to a name containing more info, like the flux you used, etc.,
for example: nu_honda3d_avg.

In this script you can see that I have used commands like
../Linux-i686/nuanceMc.exe -k test.kin -r 976356 common.cards
where I use a card for everything that is common to every file and flags for what is not.
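
If you don't have access to that script, here is a minimal sketch of the same idea (a sketch only: the output file names and the seed arithmetic are my assumptions; the -k and -r flags are used as in the command above):

#!/bin/sh
# Generate 60 one-year vector files, one nuanceMc pass per file.
# Assumes common.cards holds everything shared between the files
# and that nuance_vector/ already exists.
i=1
while [ $i -le 60 ]; do
    # -k names the output .kin file, -r sets the random seed
    # (a different seed for each file, here derived from the index)
    ../Linux-i686/nuanceMc.exe -k nuance_vector/year_$i.kin \
        -r `expr 976356 + $i` common.cards
    i=`expr $i + 1`
done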


Once everything is done, it is good to verify that the number of events in each file is OK (about 10,000 events per year).
You can grep for 'begin', like so:

grep -ic begin file.kin
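
To check all the files at once, a little loop along these lines does the job (the directory and file naming are assumptions):

#!/bin/sh
# Print the event count of every vector file; each should be ~10,000.
for f in nuance_vector/*.kin; do
    echo "$f: `grep -ic begin $f` events"
done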


B. Copy vector file to suketto

(NB: for 100 years of MC you will need about 500 GB by the end of the reconstruction, so it is useful to have that space available on suketto when you start.)
To copy the vector files, log on to suketto and do:

scp -r login@neutrino.bu.edu:/../../Nuance/v2/60yrs/nu_honda3d_avg/ name_of_directory

Here as well, check that everything was copied properly using
grep -ic begin file.kin
and compare with what you had on neutrino.



C. Simulating the events (using skdetsim)

-1-
First you need to get skdetsim from the repository:

cvs checkout skdetsim

will give you the latest version of skdetsim. Make sure this is really the version you want to use. You can check the "history" of some files using:

cvs log name.F

-2-
Once you have skdetsim, you need to compile it, and for that you need to have your environment variables set correctly.
To do that you can use the command:
source /home/atmpd/skrep/##/SOURCEME

where ## is the version you want (for example 05a).
If you are using skdetsim a lot, it is useful to add this line to your
.tcshrc file.

Then to compile you do (you need to be on a sukap machine):

imake_boot
gmake clean (for hygiene)
gmake skdetsim

Sometimes you might still be missing a library or another .o file; this is because your environment settings might be wrong.
Try doing:
setenv PRIVATE_ROOT your_directory
This might solve the problem.

-3-
To run skdetsim on the condor system:

- don't forget to log on a sukap machine
- create a condor file.

Here is an example of a condor file:

/work/fdufour/documentation/MC-generation/skdetsim_example.condor

In the condor file you will specify:
- which card file to use (CARDFILE = sk2_odtune.card)
- where the input vectors are (VECTORDIR = /net/sukatmd1/work24/fdufour/vectors_h3d_avg)
- where the output should go (OUTPUTDIR = /net/sukatmd1/work24/fdufour/zbs_nuance_h3d_avg)
- where the binary is (WORKDIR = /net/sukatmd1/work24/fdufour/skdetsim)

(Before running the condor job, go into your output directory and create a directory called logs; as you can see in the .condor file, you need it.)
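
For orientation, here is a minimal sketch of what such a submit file can look like (the argument order passed to skdetsim and the output file naming are my assumptions; the real skdetsim_example.condor is the reference):

CARDFILE  = sk2_odtune.card
VECTORDIR = /net/sukatmd1/work24/fdufour/vectors_h3d_avg
OUTPUTDIR = /net/sukatmd1/work24/fdufour/zbs_nuance_h3d_avg
WORKDIR   = /net/sukatmd1/work24/fdufour/skdetsim

universe   = vanilla
executable = $(WORKDIR)/skdetsim
# argument order (card, output, input) is an assumption -- check the
# real example file
arguments  = $(CARDFILE) $(OUTPUTDIR)/year_$(Process).zbs $(VECTORDIR)/year_$(Process).kin
output     = $(OUTPUTDIR)/logs/$(Process).out
error      = $(OUTPUTDIR)/logs/$(Process).err
log        = $(OUTPUTDIR)/logs/$(Process).log
queue 60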

Once everything is set up, you can start the condor job using:
condor_submit skdetsim_example.condor
One year needs about 3 days on one CPU to be processed, so using condor
with 60 CPUs for 60 years, 3 days is usually enough.

You can check the status of all condor CPUs using:
- condor_status
You can check the status of your own jobs using:
- condor_q
You can kill your jobs using:
- condor_rm id# or username

BE CAREFUL List:

- in sk2_odtune.card:
make sure that the number of events generated is bigger than the number
you have in your vector files (the default number is 5000 and it is not
enough!):

C
C number of generated event
C

VECT-NEVT 50000

-4-
Once all files are simulated, it is good to count the number of events and compare with the vector files: it should be the same.
There should be around 10,000 events per year.

To count the number of events you can use
the program evt_stat that Wei created; there is a copy in:

/work/fdufour/documentation/MC-generation/tools/zbs-tools

(NB: I have not tried to compile those programs on neutrino, but they work on suketto. If you want the binaries directly, you can get them here:
/home/ww/tools/zbs-tools/solaris_sparc)
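
For example, to count everything at once (assuming evt_stat takes the zbs file as its only argument, and assuming the output directory name from section C; check the actual usage in zbs-tools):

#!/bin/sh
# Count events in every simulated file and compare by eye with the
# vector-file counts from section A (~10,000 per year).
for f in zbs_nuance_h3d_avg/*.zbs; do
    echo "$f:"
    ./evt_stat $f
done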




D. PC reduction


PC1-4:

First get the PC-reduction code from the repository (make sure it is the latest version...). (In my case I used Mitsuka-san's binaries... so if the repository is not up to date, ask him where the right binaries are.)

In your work directory, you have to create MANUALLY the following directories:

pat1st
pat2nd
pat3rd
pat4th
patmue
log

In the "log" directory you have to create the following directories:
event   flag    hbk     logs   muesel  run  sub

In muesel do a
touch muesel.log
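
In shell commands, the whole setup is simply (directory names as listed above):

#!/bin/sh
# One-time setup of the PC reduction directory tree
mkdir pat1st pat2nd pat3rd pat4th patmue log
cd log
mkdir event flag hbk logs muesel run sub
touch muesel/muesel.log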

Now you can start running PC1-4 by doing a:

 ./patall.sh 1 60 >& pc1_4-1_1-60.out &

(In my case it was ./patall_mitsuka.sh 1 60 >& pc1_4-60.out &)

I didn't use CONDOR for this part because it takes only 1 day to run 60 years.
(One year takes about 40 minutes to run.)

Here, to give you an idea, is the size of a file after PC4:
-rw-r--r--   1 fdufour  sk       31343760 Jul  7 05:06 patrd5.run000057.001

And after PC4 each year contains about 500 events.


PC5:

PC5 is a two-step process. This might have changed; check with the PC reduction expert first.

You can see the two shell scripts that I used in

/work/fdufour/documentation/MC-generation/patall_pc5a.sh
and patall_pc5b.sh

As for PC1-4, you have to manually create a directory pat5th and some appropriate log directories.
Then, using a condor script like 60yrs_pc5.condor, you can start by running
patall_pc5a.sh,

then move the output directory:
mv pat5th pat5th_a

recreate an empty directory pat5th,

and run patall_pc5b.sh, changing 60yrs_pc5.condor accordingly. (The whole sequence is sketched below.)
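
Put together, the PC5 sequence looks like this (a sketch; the condor script has to be edited by hand between the two submissions):

#!/bin/sh
# Step 1: first half of PC5 on all years
mkdir pat5th
condor_submit 60yrs_pc5.condor      # set up to run patall_pc5a.sh
# ... wait for all jobs to finish, then:

# Step 2: stash the first-step output and run the second half
mv pat5th pat5th_a
mkdir pat5th
# edit 60yrs_pc5.condor so that it runs patall_pc5b.sh, then:
condor_submit 60yrs_pc5.condor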

PC5 takes about 25 hours to run for 1 year, but I have seen some years take up to 48 hours.

Here is the typical size of a file at the end of PC5:

-rw-r--r--   1 fdufour  sk       75433680 Jul 15 04:00 patrd6.run000055.001



E. FC reduction

When I generated the FC sample, the repository was not up to date, so I had to use Okumura-san's binaries and help.
It is probably wise to ask him where the latest binaries and related files are.

In my case, I had to copy:

  fccomb_sk2          : binary file for the reduction
  fccomb_all.sh       : shell script to run fccomb_sk2
  reduc_mc.sh         : shell script for the whole reduction procedure
  fc4_data_reject.dat : flasher data for the FC4 reduction program
                        to learn the flasher charge pattern

from his directory.

Here is a copy of his explanation about FC reduction:
________________________________________________________________
when you apply this reduction, all reductions from FC1 to FC5 will be applied at once.
...
but FC reduction is a little complicated because FC4, the so-called flasher scan, needs to learn the charge pattern of flasher events in advance and match the event charge pattern for all pairs among the input events. so you have to reject events after the reduction program has finished. when this reduction is finished, two reduction log files, named "fccomb.log" and "fcscan-cut.evt", will appear.
events which should be rejected are written in these log files.
...
but you do not have to worry about this. this whole reduction procedure is written in "reduc_mc.sh". I think it will work with small changes, such as changing the filename configuration, "infiles", "output", "hbook", etc.
please copy these files into your directory and try to get "reduc_mc.sh" to work.

Regards,
Okumura

___________________________________________________________________


So you have to change reduc_mc.sh to fit your directory settings. (It is pretty obvious what to do.)
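
As a rough idea, the edits are along these lines (a sketch only: the variable names come from Okumura-san's note above and the input path is hypothetical; only the output names, taken from the directory listing below, are real examples -- the actual script may spell things differently):

# inside reduc_mc.sh -- point the file settings at your own areas
infiles=/net/sukatmd1/work24/fdufour/zbs_nuance_h3d_avg/year_0026.zbs
output=apfit.reduc.0026.dat
hbook=fccomb.0026.hbk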

One year takes about 1 day to run, so here it is worth using CONDOR to run 60 years in parallel.
You can see an example of my condor script in:

/work/fdufour/documentation/MC-generation/60yrs_fc1-5.condor

You must create MANUALLY a "log" directory in the same place where you are
running reduc_mc.sh, so that the condor log files can be written.

When the program runs, it creates one directory per year, and don't be surprised
if the .dat file doesn't appear right away; it is created only at the end!
Here is what the structure of a one-year directory looks like:

/net/sukatmd1/work24/fdufour/reduction/fc1-5/26@sukap05[47]_% ls -l
total 496482
-rw-r--r--      1 fdufour  sk       247030560 Jul  7 08:21 apfit.reduc.0026.dat
lrwxrwxrwx    1 fdufour  sk            22 Jul  6 23:05 fc4_data_reject.dat -> ../fc4_data_reject.dat
-rw-r--r--      1 fdufour  sk       5943296 Jul  7 08:20 fccomb.0026.hbk
-rw-r--r--      1 fdufour  sk        455312 Jul  7 08:20 fccomb.log
lrwxrwxrwx   1 fdufour  sk            16 Jul  6 23:05 fccomb_all.sh -> ../fccomb_all.sh
lrwxrwxrwx   1 fdufour  sk            13 Jul  6 23:05 fccomb_sk2 -> ../fccomb_sk2
-rw-r--r--     1 fdufour  sk         44519 Jul  7 08:20 fscan-cut.evt
-rw-r--r--    1 fdufour  sk           552 Jul  7 08:20 fscan-nu.evt
-rw-r--r--    1 fdufour  sk        155302 Jul  7 08:20 fscan.log
-rw-r--r--    1 fdufour  sk        184800 Jul  7 08:20 reject.list
-rw-r--r--    1 fdufour  sk        184096 Jul  7 08:20 tmp1.txt
-rw-r--r--    1 fdufour  sk           736 Jul  7 08:20 tmp2.txt


Also be careful: the log files from condor are pretty significant in size.
-rw-r--r--   1 fdufour  sk       14649848 Jul  7 08:21 26.log

At the end of the FC reduction, one year contains about 4500 events.



F. Reconstruction


The main thing about the reconstruction is that it takes a long time to run: 500 events take about a day. So be careful to plan ahead!

You might want to cut your 1-year files into 1000-event chunks to avoid having jobs that run too long.
To do that you can use a DASH kumac. You can find an example of such a kumac in:
/work/fdufour/documentation/MC-generation/splitfiles.kumac

-1-
First do a 'cvs get aplib' to get the reconstruction code from the repository.

apfit_sample.F is the main program calling everything else.
Inside apfit_sample.F there is a 'call apfit(##)', where ## is a number corresponding to a flag in apfit.F. You can look in apfit.F to find which flag you need. For example, 0 = FC and 15360 = PC.

You will need to create different executables for FC and for PC.

One good way to do this is to compile the code with one flag, say 0 for FC:

imake_boot
gmake all
gmake apfit_sample

then do
mv solaris_sparc/apfit_sample solaris_sparc/apfit_fc

Then change the flag to the PC value.
Do a "rm solaris_sparc/apfit_sample.o" and "rm solaris_sparc/apfit.o".
(If you want to be sure, doing a "gmake clean" is safest.)
Recompile and do
mv solaris_sparc/apfit_sample solaris_sparc/apfit_pc

(NB: make sure that your environment settings are OK by doing setenv PRIVATE_ROOT your_local_directory/aplib; otherwise you might have some permission problems.)
 
-2-
Then there is a whole bunch of shell scripts that you have to use. The reason for so many shell scripts is that we have to be careful about the size of the files we are working with.

All those shell scripts are available in
/work/fdufour/documentation/MC-generation as examples.

Let's summarize what is done (taking FC as the example):


- apfit_fc.sh is the shell script which calls the executable apfit_fc.

- apfit_fc_condor.sh calls apfit_fc.sh.
The goal of this script is to send the output first to the work directory
on the sukap machine and bring everything back to where you want it only at the
end (see the sketch after this list).

- 60yrs_fc.condor calls apfit_fc_condor.sh,
passing the input and output file names.
With this script you can choose how many files you want to deal with at once.
60 files is a good number.
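
As a rough sketch, the staging logic in apfit_fc_condor.sh amounts to something like this (the argument convention and the scratch path are my assumptions; see the real script):

#!/bin/sh
# apfit_fc_condor.sh (sketch): run the reconstruction on the sukap
# node's local work disk, then copy the result back only at the end.
input=$1
output=$2
scratch=/work/$USER/apfit_tmp_$$
mkdir -p $scratch
./apfit_fc.sh $input $scratch/`basename $output`
mv $scratch/`basename $output` $output
rmdir $scratch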

Once the reconstruction is done, you need to merge each set of split files back into one file.

For that you can use the program 'merge.F' (one copy is in /work/fdufour/documentation/MC-generation on neutrino, and one copy is in /home/fdufour/zbs-tools/merge.F on suketto; like evt_stat, these programs are known to compile and work only on suketto).

You'll need to adapt merge.sh for your own files, but by now it should be a piece of cake!

This program merges 5 files into 1, but the code in merge.F is quite stupid (i.e. easy to modify for more or fewer files).

Once you are done merging your files, it is a good idea to compare the number of events (using evt_stat as usual for zbs files) with the results you had at the end of the reduction.



G. Creating ntuples


You need to get the code from the repository:
cvs checkout official_ntuple
WARNING the README file is NOT up to date!!!
... but the code is quite easy to use.
Compile by doing:
imake_boot
gmake clean (we never know)
gmake install
gmake fillnt
Then do cp fillnt_simple.sh.in fillnt_simple.sh (so that the original .in script stays as a backup).
Edit fillnt_simple.sh and change %%BINDIR%% to the directory where the executable is, and %%PACKAGE_ROOT%% to your package_root directory. (You can check your package_root by doing 'printenv'.)
Then the command line to run fillnt is:
./fillnt_simple.sh -o output.hbk input.dat
5000 events take about 20 minutes to run, so there is no need for condor.
You can find an example of script file in:
/work/fdufour/documentation/MC-generation/all_fillnt.sh
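
A minimal version of such a script is just a loop over the merged files (the directory layout and .dat naming are assumptions):

#!/bin/sh
# Run fillnt over every merged reconstruction file, one ntuple each.
for f in merged/*.dat; do
    ./fillnt_simple.sh -o ntuples/`basename $f .dat`.hbk $f
done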


H. General Little Hints


- to run a condor job: make sure you are on a sukap machine!
- when creating a new script (.sh file): don't forget to do chmod a+x file.sh