For all parameters, except for the location of DST analysis output files, the order does not matter.
- Location of DST analysis output files (input for the program). First parameter in order, input without an index prefacing it. Example: `../../output_dir_Pre_fRPC_Cal_New/be2104506572506.hld_dst_feb21.root` (multiple files can be selected using asterisks or question-marks, e.g. `../../output_dir_Pre_fRPC_Cal_New/be21045*.hld_dst_feb21.root`).
- Location of parameters file, prefaced with index `-p`. If not specified defaults to `./feb21_dst_params.txt`. Example: `-p ../../feb21_dst_params.txt`.
- Name of the root output file containing histograms (the name is also used to name the .txt file with calculated calibration parameters) prefaced with index `-o`. If not specified, defaults to `output.root`. Example: `-o output.root`.
- Path to where both output files (root file with histograms and txt file with calculated calibration parameters) should be saved (given folder needs to be manually created, if path to nonexistent folder is given, the files won't be saved) prefaced with index `-d`. If not specified defaults to the same directory the program was executed in. Example: `-d ./out`.
- Number of events to analyse, prefaced with index `-e`. If not specified, all events in selected files will be analysed. Example: `-e 500000`.
- Stage of calibration. Select one of:
- `--cal-frpc-stage1` - First step of calibration (thresholds).
...
...
Optionally, `2>&1 | tee fRPC_progress.txt` can also be added at the end of the command.
```bash
./build/bin/elastics_monitor inputFiles
```
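Combining the options described above, a full invocation might look like the following sketch (the input path, parameter file, and output names are the examples given earlier, not fixed defaults):

```bash
# Analyse 500000 events from the matching input files, using the given
# parameter file, and write output.root plus the calibration .txt to ./out.
# The stderr redirect at the end also records the progress output to a file.
./build/bin/elastics_monitor "../../output_dir_Pre_fRPC_Cal_New/be21045*.hld_dst_feb21.root" \
  -p ../../feb21_dst_params.txt \
  -o output.root \
  -d ./out \
  -e 500000 \
  --cal-frpc-stage1 \
  2>&1 | tee fRPC_progress.txt
```

Note that `./out` must already exist, since the program does not create the output directory itself.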
## Online full-dst production
The setup provides tools to monitor new hld files and submit dst production jobs to the farm. All the scripts will be installed with the `make install` command into the `$CMAKE_INSTALL_PREFIX/bin` directory.
Before starting, one needs to prepare a config file; a default one, `config.sh.example`, is provided. It includes several variables, some of which are worth mentioning:
* `DATA_DIR` - directory where results will be written; this path will be prepended to the following variables:
* `HLD_LIST_FILE` - keeps the list of all found hld files
* `HLD_DST_SUBMIT_FILE` - keeps the list of all submitted dst jobs (files)
* `HLD_DST_STATUS_FILE` - keeps the status of each dst job, whether the dst succeeded or failed
* `DST_LIST_FILE` - keeps the list of all dst files
* `DST_DIR` - place where dst output files will be stored
* `LUMI_LIST_FILE` - list of the found dst inputs
* `DST_LUMI_SUBMIT_FILE` - keeps the list of all submitted lumi inputs (files)
* `LUMI_DIR` - place where lumi files will be stored
* `*_PID_FILE` and `*_LOG_FILE` - store the process id and output of apps running as daemons
* `*_LOG_PREFIX` - prefix for slurm log files, which will be stored in `$DATA_DIR/slurm_log/$XXX_LOG_PREFIX-slurm-%j.log`
* `HLD_DIR` - directory to browse or monitor for new hld files
* `HLD_MASK` - search mask (glob); the files are searched as `$HLD_DIR/$HLD_MASK`
* `DST_EVENTS` - number of events to analyze from each hld file
* `*_SUBMIT_SCRIPT` - script which executes the `sbatch` command; it takes a file as input, the other variables are read from the config file
* `*_BATCH_SCRIPT` - script which is executed on the batch farm
* `DST_CHECK_SCRIPT` - program to check whether the dst production is correct
* `DST_MASK` - similar to `HLD_MASK`
* `LUMI_TOOL` - luminosity monitoring tool
* `LUMIDIR` - output directory for the lumi tool, relative to `DATA_DIR`
* `LUMI_LIST_FILE` - list of run dst files for the lumi tool
In most cases, one only needs to adjust `HLD_DIR` and `DATA_DIR`.
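As a sketch, a minimal `config.sh` adapted from `config.sh.example` could then look like this (the directory paths and mask value below are illustrative assumptions, not project defaults):

```shell
# Minimal config sketch -- only the variables that usually need adjusting.
# The concrete paths below are illustrative assumptions, not defaults.
HLD_DIR=/data/hades/raw/feb21      # directory monitored for new hld files
HLD_MASK="be21*.hld"               # glob; files are searched as $HLD_DIR/$HLD_MASK
DATA_DIR=/data/hades/online_dst    # the list/status/output paths are placed under this
```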
### Online dst
It is recommended to set the installation path with `cmake -DCMAKE_INSTALL_PREFIX=some_dir` and install the files with `make install`.
The basic usage is to start the hld monitoring tool. Assuming that we are located in `some_dir/bin`:
```bash
./hld_watcher.py
```
This will monitor `HLD_DIR` for every newly created file matching `HLD_MASK` and submit a job when a new file appears. The app can also be run as a daemon with the `-d` option; then the PID will be stored in `HLD_WATCH_PID_FILE` and the output log in `HLD_WATCH_LOG_FILE`.
In case some files were missed (e.g. the watcher was not running), one can run
```bash
./hld_watcher.py -l
```
which will list the directory and submit those files which have not been processed yet. It is recommended to do this only a few times per day, e.g. using cron.
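For example, a crontab entry running the listing mode three times per day could look like this (the installation path is an assumption; replace it with your actual `some_dir/bin`):

```shell
# m h dom mon dow  command
# Run hld_watcher.py in listing mode at 06:00, 14:00 and 22:00.
0 6,14,22 * * *  cd /path/to/some_dir/bin && ./hld_watcher.py -l
```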
Each submitted file is recorded in `HLD_DST_SUBMIT_FILE`, so the tools above know which ones were already sent and will not submit them a second time. If some jobs need to be restarted, one can remove the corresponding files from `HLD_DST_SUBMIT_FILE` and run
```bash
./hld_watcher.py -s
```
which will go through `HLD_LIST_FILE` (so it will not list the directory again) and submit those files which are not present in `HLD_DST_SUBMIT_FILE`.
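The resubmission workflow can be sketched as follows; the submit-list file name and the hld file names here are hypothetical stand-ins for the file pointed to by `HLD_DST_SUBMIT_FILE` and the actual beam files:

```shell
# Hypothetical submit list with three already-submitted hld files.
submit_list=hld_dst_submit.txt
printf '%s\n' be21045001.hld be21045002.hld be21045003.hld > "$submit_list"

# Drop the entry for the file whose job should be restarted...
grep -v 'be21045002' "$submit_list" > "$submit_list.tmp" \
  && mv "$submit_list.tmp" "$submit_list"
cat "$submit_list"

# ...then let the watcher resubmit everything missing from the list:
# ./hld_watcher.py -s
```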
See
```bash
./hld_watcher.py -h
```
for the full list of options.
### Luminosity monitoring
The lumi tool runs on the DST files. The `lumi_watcher.py` program monitors `DST_OUT` for new files and submits jobs. It works in the same way as `hld_watcher.py`, including the options.