
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.


Commits on Source: 506 (406 additional commits have been omitted to prevent performance issues)

186 files changed: +19567 −5136

Files
.clang-format

+10 −107
 ---
-Language:        Cpp
-BasedOnStyle:  LLVM
+BasedOnStyle:  Chromium
+BreakConstructorInitializersBeforeComma: true
+ConstructorInitializerAllOnOneLineOrOnePerLine: true
-AccessModifierOffset: -2
-AlignAfterOpenBracket: Align
-AlignConsecutiveAssignments: true
-AlignConsecutiveDeclarations: true
-AlignEscapedNewlines: Right
-AlignOperands:   true
-AlignTrailingComments: true
-AllowAllParametersOfDeclarationOnNextLine: true
-AllowShortBlocksOnASingleLine: false
-AllowShortCaseLabelsOnASingleLine: false
-AllowShortFunctionsOnASingleLine: All
-AllowShortIfStatementsOnASingleLine: false
-AllowShortLoopsOnASingleLine: false
-AlwaysBreakAfterDefinitionReturnType: None
-AlwaysBreakAfterReturnType: None
-AlwaysBreakBeforeMultilineStrings: false
-AlwaysBreakTemplateDeclarations: true
-BinPackArguments: true
-BinPackParameters: true
-BraceWrapping:
-  AfterClass:      false
-  AfterControlStatement: false
-  AfterEnum:       false
-  AfterFunction:   true
-  AfterNamespace:  false
-  AfterObjCDeclaration: false
-  AfterStruct:     false
-  AfterUnion:      false
-  AfterExternBlock: false
-  BeforeCatch:     false
-  BeforeElse:      false
-  IndentBraces:    false
-  SplitEmptyFunction: true
-  SplitEmptyRecord: true
-  SplitEmptyNamespace: true
-BreakBeforeBinaryOperators: None
-BreakBeforeBraces: Custom
-BreakBeforeInheritanceComma: false
-BreakBeforeTernaryOperators: true
-BreakConstructorInitializersBeforeComma: false
-BreakConstructorInitializers: BeforeColon
-BreakAfterJavaFieldAnnotations: false
-BreakStringLiterals: true
-ColumnLimit:     120
-CommentPragmas:  '^ IWYU pragma:'
-CompactNamespaces: false
-ConstructorInitializerAllOnOneLineOrOnePerLine: false
-ConstructorInitializerIndentWidth: 4
-ContinuationIndentWidth: 4
 Cpp11BracedListStyle: true
-DerivePointerAlignment: false
-DisableFormat:   false
-ExperimentalAutoDetectBinPacking: false
-FixNamespaceComments: true
-ForEachMacros:
-  - foreach
-  - Q_FOREACH
-  - BOOST_FOREACH
-IncludeBlocks:   Preserve
-IncludeCategories:
-  - Regex:           '^"(llvm|llvm-c|clang|clang-c)/'
-    Priority:        2
-  - Regex:           '^(<|"(gtest|gmock|isl|json)/)'
-    Priority:        3
-  - Regex:           '.*'
-    Priority:        1
-IncludeIsMainRegex: '(Test)?$'
-IndentCaseLabels: false
-IndentPPDirectives: None
-IndentWidth:     2
-IndentWrappedFunctionNames: false
-JavaScriptQuotes: Leave
-JavaScriptWrapImports: true
-KeepEmptyLinesAtTheStartOfBlocks: true
-MacroBlockBegin: ''
-MacroBlockEnd:   ''
-MaxEmptyLinesToKeep: 1
-NamespaceIndentation: All
-ObjCBlockIndentWidth: 2
-ObjCSpaceAfterProperty: false
-ObjCSpaceBeforeProtocolList: true
-PenaltyBreakAssignment: 2
-PenaltyBreakBeforeFirstCallParameter: 19
-PenaltyBreakComment: 300
-PenaltyBreakFirstLessLess: 120
-PenaltyBreakString: 1000
-PenaltyExcessCharacter: 1000000
-PenaltyReturnTypeOnItsOwnLine: 60
-PointerAlignment: Left
-ReflowComments:  true
-SortIncludes:    true
-SortUsingDeclarations: true
-SpaceAfterCStyleCast: false
-SpaceAfterTemplateKeyword: true
-SpaceBeforeAssignmentOperators: true
-SpaceBeforeParens: ControlStatements
-#SpaceBeforeRangeBasedForLoopColon: true
-SpaceInEmptyParentheses: false
-SpacesBeforeTrailingComments: 1
-SpacesInAngles:  false
-SpacesInContainerLiterals: true
-SpacesInCStyleCastParentheses: false
-SpacesInParentheses: false
-SpacesInSquareBrackets: false
 Standard: Cpp11
-TabWidth:        8
+#SpaceBeforeParens: ControlStatements
+SpaceAfterControlStatementKeyword: true
+PointerBindsToType: true
+IncludeBlocks:   Preserve
 UseTab:          Never
+ColumnLimit:     100
+NamespaceIndentation: Inner
+AlignConsecutiveAssignments: true
 ...

.clang-tidy

0 → 100644
+15 −0
HeaderFilterRegex: ''
Checks: '
  bugprone-*,
  concurrency-*,
  cppcoreguidelines-*,
  modernize-*,
  portability-*,
  readability-*,
  -modernize-use-trailing-return-type,
  -modernize-avoid-c-arrays,
  -modernize-use-nodiscard,
  -readability-magic-numbers,
  -cppcoreguidelines-avoid-magic-numbers
'
FormatStyle: file
+48 −0
name: linux-eic-shell

on: [push, pull_request]

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: cvmfs-contrib/github-action-cvmfs@v2
    - uses: eic/run-cvmfs-osg-eic-shell@main
      with:
        platform-release: "jug_xl:nightly"
        run: |
          PREFIX=${PWD}/install
          # install this repo
          cmake -B build -S . -DCMAKE_INSTALL_PREFIX=${PREFIX}
          cmake --build build -- install
    - uses: actions/upload-artifact@v3
      with:
        name: install-eic-shell
        path: install/
        if-no-files-found: error
    - uses: actions/upload-artifact@v3
      with:
        name: build-eic-shell
        path: build/
        if-no-files-found: error

  clang-tidy:
    runs-on: ubuntu-latest
    needs: build-test
    steps:
    - uses: actions/checkout@v2
    - uses: cvmfs-contrib/github-action-cvmfs@v2
    - uses: actions/download-artifact@v3
      with:
        name: build-eic-shell
        path: build/
    - uses: eic/run-cvmfs-osg-eic-shell@main
      with:
        platform-release: "jug_xl:nightly"
        run: |
          run-clang-tidy -p build -export-fixes clang_tidy_fixes.yml -extra-arg='-std=c++20'    
    - uses: actions/upload-artifact@v3
      with:
        name: clang-tidy-fixes.yml
        path: clang_tidy_fixes.yml
.gitignore

+5 −1
@@ -18,11 +18,12 @@ DEBUG*/*
 BUILD*/*
 RELEASE*/*
 TEST*/*
+doc/docs/*

 # cmake
 CMakeCache.txt
 CMakeFiles
-Makefile
+#Makefile
 cmake_install.cmake
 install_manifest.txt

@@ -39,3 +40,6 @@ __pycache__/

 # ROOT file
 *.root
+*.sif
+**old
+install/*
.gitlab-ci.yml

+91 −26
-image: eicweb.phy.anl.gov:4567/containers/eic_container/eic:latest
+image: eicweb.phy.anl.gov:4567/containers/image_recipes/ubuntu_dind:latest

-default:
-  artifacts:
-    paths:
-      - build/
+variables:
+  VERSION: "${CI_COMMIT_REF_NAME}"

 stages:
   - build
-  - run
+  - analysis
+  - config
+  - docker    ## build new version of juggler
+  - deploy    ## repo-local singularity image for development work
+
+workflow:
+  ## Only rebuild on MRs and on commits to the main, as in other cases
+  ## we should use the tagged jug_xl releases from eic_container.
+  rules:
+    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
+    - if: '$CI_COMMIT_BRANCH == "main"'

-compile:
+default:
+
+## plan:
+##  Workflows:
+##  - main        --> config + docker (eic_container) + singularity (this repo)
+##      --> trigger eic_container master
+##  - MR          --> config + docker (eic_container) + singularity (this repo)
+##  - upstream trigger from eic_container (nightly) --> run main
+##
+##  Container images tags
+##  - main        --> nightly on eicweb & DH, and export to eic_container
+##  - MR          --> unstable-mr-XXX (on eicweb only, untag at end of pipeline)
+
+juggler:local:
+  image: eicweb.phy.anl.gov:4567/containers/eic_container/jug_xl:nightly
   stage: build
-  tags:
-    - silicon
-  before_script:
-    - pwd &&  ls -lrth
+  parallel:
+    matrix:
+      - CMAKE_CXX_STANDARD:
+          - 20
   script:
-    - export homedir=$(pwd) &&  pwd && cd /tmp && git clone --depth=1 https://eicweb.phy.anl.gov/EIC/NPDet.git && mkdir build && cd build && cmake ../NPDet/. && make -j20 install
-    - cd /tmp && git clone --depth=1 https://eicweb.phy.anl.gov/EIC/eicd.git && mkdir eicd_build && cd eicd_build && cmake ../eicd/. && make -j20 install
-    - cd $homedir && ls -lrth && mkdir build && cd build && cmake .. && make -j10
+    - |
+      cmake -Bbuild -S. -DCMAKE_CXX_STANDARD=${CMAKE_CXX_STANDARD} -DCMAKE_INSTALL_PREFIX=/usr/local
+      cmake --build build -j20
+  artifacts:
+    expire_in: 1 hour
+    paths:
+      - build/

-run_example:
-  stage: run
-  tags:
-    - silicon
+analysis:clang-tidy:
+  image: eicweb.phy.anl.gov:4567/containers/eic_container/jug_xl:nightly
+  stage: analysis
+  needs:
+    - juggler:local
   script:
-    - ./build/run gaudirun.py Examples/options/hello_world.py
+    - |
+      run-clang-tidy -p build -j20 -export-fixes clang_tidy_fixes.yml -extra-arg='-std=c++20'
+  artifacts:
+    expire_in: 1 week
+    paths:
+      - clang_tidy_fixes.yml
+  allow_failure: true
+
+version:
+  stage: config
+  rules:
+    - if: '$CI_SERVER_HOST == "eicweb.phy.anl.gov"'
+  script:
+    - |
+      if [ "x${CI_PIPELINE_SOURCE}" == "xmerge_request_event" ]; then
+        VERSION="unstable-mr-${CI_MERGE_REQUEST_PROJECT_ID}-${CI_MERGE_REQUEST_IID}"
+      fi
+      echo "VERSION=$VERSION" >> build.env
+      cat build.env
+  artifacts:
+    reports:
+      dotenv: build.env
+
+eic_container:
+  stage: deploy
+  needs:
+    - version
+  variables:
+    VERSION: "${VERSION}"
+    JUGGLER_VERSION: "${CI_COMMIT_REF_NAME}"
+  trigger:
+    project: containers/eic_container
+    strategy: depend
+  allow_failure: false

-run_example2:
-  image: eicweb.phy.anl.gov:4567/eic/npdet/npdet:latest
-  stage: run
-  tags:
-    - silicon
+pages:
+  image: eicweb.phy.anl.gov:4567/containers/eic_container/alpine
+  stage: deploy
+  rules:
+    - if: '$CI_SERVER_HOST == "gitlab.phy.anl.gov" && $CI_COMMIT_BRANCH == "main"'
   script:
-    - ./build/run gaudirun.py JugBase/tests/options/simple_reader.py
+    - apk update && apk add doxygen graphviz ttf-ubuntu-font-family
+    - cd doc && doxygen Doxyfile && cd ..
+    - mkdir -p public && cp -r doc/docs/html/* public/.
+  artifacts:
+    paths:
+      - public
CMakeLists.txt

+107 −20
-cmake_minimum_required(VERSION 3.2.0)
-set(CMAKE_PREFIX_PATH $ENV{HOME}/lib CMAKE_PREFIX_PATH)
-find_package(NPDet  REQUIRED)
-#---------------------------------------------------------------
-# to use ROOT targets, find_package ROOT must come before find_package Gaudi
-# (for now) # see issue Gaudi#103
-set(PODIO $ENV{PODIO})
-set(CMAKE_MODULE_PATH CMAKE_MODULE_PATH PODIO)
-find_package(podio 0.11.0 REQUIRED)
-include_directories(${podio_INCLUDE_DIR})
-find_package(ROOT COMPONENTS MathCore GenVector Geom REQUIRED)
-find_package(DD4hep COMPONENTS DDG4 DDG4IO DDRec REQUIRED)
-find_package(Geant4)
-find_package(GaudiProject)
-gaudi_project(Juggler v1r0
-              USE Gaudi v33r1)
+# SPDX-License-Identifier: LGPL-3.0-or-later
+# Copyright (C) 2022 Wouter Deconinck, Whitney Armstrong
+
+cmake_minimum_required(VERSION 3.19)
+
+# CMP0074: find_package() uses <PackageName>_ROOT variables
+cmake_policy(SET CMP0074 NEW)
+
+project(Juggler VERSION 4.3.0)
+
+option(JUGGLER_BUILD_TRACKING "Build tracking algorithms" TRUE)
+
+set(CMAKE_CXX_STANDARD 20 CACHE STRING "")
+if(NOT CMAKE_CXX_STANDARD MATCHES "20")
+  message(FATAL_ERROR "Unsupported C++ standard: ${CMAKE_CXX_STANDARD}")
+endif()
+set(CMAKE_CXX_STANDARD_REQUIRED ON)
+set(CMAKE_CXX_EXTENSIONS OFF)
# Export compile commands as json for run-clang-tidy
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

# Also use clang-tidy integration in CMake
option(ENABLE_CLANG_TIDY "Enable clang-tidy integration in cmake" OFF)
if(ENABLE_CLANG_TIDY)
  find_program(CLANG_TIDY_EXE NAMES "clang-tidy")
  if (CLANG_TIDY_EXE)
    message(STATUS "clang-tidy found: ${CLANG_TIDY_EXE}")
    set(CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_EXE}" CACHE STRING "" FORCE)
  else()
    set(CMAKE_CXX_CLANG_TIDY "" CACHE STRING "" FORCE)
  endif()
endif()

# Set default build type
set(default_build_type "Release")
if(EXISTS "${CMAKE_SOURCE_DIR}/.git")
  set(default_build_type "RelWithDebInfo")
endif()
if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  message(STATUS "Setting build type to '${default_build_type}' as none was specified.")
  set(CMAKE_BUILD_TYPE "${default_build_type}" CACHE
      STRING "Choose the type of build." FORCE)
  # Set the possible values of build type for cmake-gui
  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS
    "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
endif()

# Set all warnings
if(NOT CMAKE_BUILD_TYPE MATCHES Release)
  add_compile_options(-Wall -Wextra -Werror -Wno-error=deprecated-declarations)
endif()

find_package(Microsoft.GSL CONFIG)

find_package(EDM4EIC REQUIRED)
find_package(EDM4HEP 0.4.1 REQUIRED)

find_package(podio 0.16.3)
if(NOT podio_FOUND)
  find_package(podio 1.0 REQUIRED)
endif()
add_definitions("-Dpodio_VERSION_MAJOR=${podio_VERSION_MAJOR}")
add_definitions("-Dpodio_VERSION_MINOR=${podio_VERSION_MINOR}")
add_definitions("-Dpodio_VERSION_PATCH=${podio_VERSION_PATCH}")

find_package(ROOT COMPONENTS Core RIO Tree MathCore GenVector Geom REQUIRED)
find_package(DD4hep COMPONENTS DDRec REQUIRED)

if(JUGGLER_BUILD_TRACKING)
  find_package(Acts REQUIRED COMPONENTS Core PluginTGeo PluginDD4hep PluginJson)
  set(Acts_VERSION_MIN "20.2.0")
  set(Acts_VERSION "${Acts_VERSION_MAJOR}.${Acts_VERSION_MINOR}.${Acts_VERSION_PATCH}")
  if(${Acts_VERSION} VERSION_LESS ${Acts_VERSION_MIN}
    AND NOT "${Acts_VERSION}" STREQUAL "9.9.9")
    message(FATAL_ERROR "Acts version ${Acts_VERSION_MIN} or higher required, but ${Acts_VERSION} found")
  endif()
  add_definitions("-DActs_VERSION_MAJOR=${Acts_VERSION_MAJOR}")
  add_definitions("-DActs_VERSION_MINOR=${Acts_VERSION_MINOR}")
  add_definitions("-DActs_VERSION_PATCH=${Acts_VERSION_PATCH}")
  # Get ActsCore path for ActsExamples include
  get_target_property(ActsCore_LOCATION ActsCore LOCATION)
  get_filename_component(ActsCore_PATH ${ActsCore_LOCATION} DIRECTORY)
endif()

## Dependencies
find_package(algorithms)
find_package(Gaudi)
find_package(k4FWCore)

## Components
add_subdirectory(JugBase)
add_subdirectory(JugAlgo)
add_subdirectory(JugDigi)
add_subdirectory(JugFast)
add_subdirectory(JugPID)
add_subdirectory(JugReco)
if(JUGGLER_BUILD_TRACKING)
  add_subdirectory(JugTrack)
endif()

## CMake config
gaudi_install(CMAKE)

# create and install Juggler.xenv file as it still has a use-case
# TODO: update workflow to not need xenv files anymore
include(cmake/xenv.cmake)
+87 −0
Original line number Original line Diff line number Diff line
import os
import ROOT
from Gaudi.Configuration import *
from GaudiKernel import SystemOfUnits as units

from GaudiKernel.DataObjectHandleBase import DataObjectHandleBase
from Configurables import ApplicationMgr, EICDataSvc, PodioOutput, GeoSvc

from Configurables import PodioInput
from Configurables import Jug__Base__InputCopier_dd4pod__Geant4ParticleCollection_dd4pod__Geant4ParticleCollection_ as MCCopier
from Configurables import Jug__Base__InputCopier_dd4pod__CalorimeterHitCollection_dd4pod__CalorimeterHitCollection_ as CalCopier
from Configurables import Jug__Digi__CalorimeterHitDigi as CalorimeterHitDigi
from Configurables import Jug__Reco__ImagingPixelReco as ImagingPixelReco
from Configurables import Jug__Reco__ImagingTopoCluster as ImagingTopoCluster
from Configurables import Jug__Reco__ImagingClusterReco as ImagingClusterReco


# input arguments through environment variables
kwargs = dict()
kwargs['sf'] = float(os.environ.get('CB_EMCAL_SAMP_FRAC', '1.0'))
kwargs['input'] = os.environ.get('CB_EMCAL_SIM_FILE', '../topside/barrel_pion0_5GeV.root')
kwargs['output'] = os.environ.get('CB_EMCAL_REC_FILE', 'barrel_pion0_5GeV_cluster.root')
kwargs['compact'] = os.environ.get('CB_EMCAL_COMPACT_PATH', '../topside/test.xml')
kwargs['nev'] = int(os.environ.get('CB_EMCAL_NUMEV', 10000))

if kwargs['nev'] < 1:
    f = ROOT.TFile(kwargs['input'])
    kwargs['nev'] = f.events.GetEntries()

print(kwargs)

# get sampling fraction from system environment variable, 1.0 by default
sf = float(os.environ.get('CB_EMCAL_SAMP_FRAC', '1.0'))

geo_service = GeoSvc("GeoSvc", detectors=kwargs['compact'].split(','))
podioevent = EICDataSvc("EventDataSvc", inputs=kwargs['input'].split(','), OutputLevel=DEBUG)
out = PodioOutput("out", filename=kwargs['output'])

podioinput = PodioInput("PodioReader", collections=["mcparticles", "EcalBarrelHits"], OutputLevel=DEBUG)

copier = MCCopier("MCCopier",
                  inputCollection="mcparticles",
                  outputCollection="mcparticles2",
                  OutputLevel=DEBUG)
calcopier = CalCopier("CalCopier",
                      inputCollection="EcalBarrelHits",
                      outputCollection="EcalBarrelHits2",
                      OutputLevel=DEBUG)

imcaldigi = CalorimeterHitDigi("imcal_digi",
                               inputHitCollection="EcalBarrelHits",
                               outputHitCollection="DigiEcalBarrelHits",
                               inputEnergyUnit=units.GeV,
                               inputTimeUnit=units.ns,
                               energyResolutions=[0., 0.02, 0.],
                               dynamicRangeADC=3*units.MeV,
                               pedestalSigma=40,
                               OutputLevel=DEBUG)
imcalreco = ImagingPixelReco("imcal_reco",
                             inputHitCollection="DigiEcalBarrelHits",
                             outputHitCollection="RecoEcalBarrelHits",
                             dynamicRangeADC=3.*units.MeV,
                             pedestalSigma=40,
                             readoutClass="EcalBarrelHits",
                             layerField="layer",
                             sectorField="module")
imcalcluster = ImagingTopoCluster(inputHitCollection="RecoEcalBarrelHits",
                                  outputClusterCollection="EcalBarrelClusters",
                                  localRanges=[2.*units.mm, 2*units.mm],
                                  adjLayerRanges=[10*units.mrad, 10*units.mrad],
                                  adjLayerDiff=2,
                                  adjSectorDist=3.*units.cm)
clusterreco = ImagingClusterReco(inputClusterCollection="EcalBarrelClusters",
                                 outputLayerCollection="EcalBarrelClustersLayers",
                                 samplingFraction=sf,
                                 OutputLevel=DEBUG)

out.outputCommands = ["keep *"]

ApplicationMgr(
    TopAlg=[podioinput, copier, calcopier, imcaldigi, imcalreco, imcalcluster, clusterreco, out],
    EvtSel='NONE',
    EvtMax=kwargs['nev'],
    ExtSvc=[podioevent],
    OutputLevel=DEBUG
)
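The CalorimeterHitDigi step above converts deposited energy into ADC counts using a dynamic range and a Gaussian pedestal. A toy sketch of that kind of conversion, not Juggler's actual algorithm; the capacity and pedestal values here are hypothetical illustration:

```python
import numpy as np

def digitize(edep_mev, dyn_range_mev=3.0, capacity_adc=8096,
             pedestal_mean=400, pedestal_sigma=40, rng=None):
    """Toy ADC conversion: linear scaling of the deposit over the dynamic
    range, plus Gaussian pedestal noise, clipped at the ADC capacity."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, pedestal_sigma)
    adc = pedestal_mean + edep_mev / dyn_range_mev * capacity_adc + noise
    return int(min(adc, capacity_adc))

# a hit at half the dynamic range, with noise disabled for reproducibility
adc = digitize(1.5, pedestal_sigma=0)
```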
+67 −0
Original line number Original line Diff line number Diff line
import os
from Gaudi.Configuration import *
from GaudiKernel import SystemOfUnits as units

from GaudiKernel.DataObjectHandleBase import DataObjectHandleBase
from Configurables import ApplicationMgr, EICDataSvc, PodioOutput, GeoSvc

from Configurables import PodioInput
from Configurables import Jug__Digi__EcalTungstenSamplingDigi as EcalTungstenSamplingDigi
from Configurables import Jug__Reco__EcalTungstenSamplingReco as EcalTungstenSamplingReco
from Configurables import Jug__Reco__CalorimeterHitsMerger as CalorimeterHitsMerger
from Configurables import Jug__Reco__CalorimeterIslandCluster as IslandCluster
from Configurables import Jug__Reco__ClusterRecoCoG as RecoCoG

# get sampling fraction from system environment variable, 1.0 by default
sf = float(os.environ.get('CB_EMCAL_SAMP_FRAC', '1.0'))

geo_service  = GeoSvc("GeoSvc", detectors=["../athena/athena.xml"])
podioevent = EICDataSvc("EventDataSvc", inputs=["../reconstruction_benchmarks/sim_barrel_clusters.root"], OutputLevel=DEBUG)

podioinput = PodioInput("PodioReader", collections=["mcparticles", "EcalBarrelHits"], OutputLevel=DEBUG)
emcaldigi = EcalTungstenSamplingDigi("ecal_digi",
                                     inputHitCollection="EcalBarrelHits",
                                     outputHitCollection="DigiEcalBarrelHits",
                                     inputEnergyUnit=units.GeV,
                                     inputTimeUnit=units.ns,
                                     energyResolutions=[0., 0.02, 0.],
                                     dynamicRangeADC=700*units.keV,
                                     pedestalSigma=40,
                                     OutputLevel=DEBUG)
emcalreco = EcalTungstenSamplingReco("ecal_reco",
                                     inputHitCollection="DigiEcalBarrelHits",
                                     outputHitCollection="RecoEcalBarrelHits",
                                     dynamicRangeADC=700*units.keV,
                                     pedestalSigma=40,
                                     OutputLevel=DEBUG)
# readout id definition for barrel ecal
# <id>system:8,barrel:3,module:4,layer:10,slice:5,x:32:-16,y:-16</id>
# sum layers/slices, masking (8+3+4, 8+3+4+5+10-1)
xymerger = CalorimeterHitsMerger("ecal_xy_merger",
                                 fields=["layer", "slice"],
                                 inputHitCollection="RecoEcalBarrelHits",
                                 outputHitCollection="RecoEcalBarrelHitsXY")
# sum modules, masking (8+3+4+5+10, 8+3+4+5+10+32-1)
zmerger = CalorimeterHitsMerger("ecal_z_merger",
                                fields=["x", "y"],
                                inputHitCollection="RecoEcalBarrelHits",
                                outputHitCollection="RecoEcalBarrelHitsZ")
emcalcluster = IslandCluster(inputHitCollection="RecoEcalBarrelHitsXY",
                             outputClusterCollection="EcalBarrelClusters",
                             minClusterCenterEdep=0.5*units.MeV,
                             splitCluster=False,
                             groupRanges=[5.*units.cm, 5*units.cm, 5.*units.cm])
clusterreco = RecoCoG(clusterCollection="EcalBarrelClusters", logWeightBase=6.2, samplingFraction=sf, OutputLevel=DEBUG)


out = PodioOutput("out", filename="barrel_cluster.root")
out.outputCommands = ["keep *"]

ApplicationMgr(
    TopAlg = [podioinput, emcaldigi, emcalreco, xymerger, zmerger, emcalcluster, clusterreco, out],
    EvtSel = 'NONE',
    EvtMax   = 5,
    ExtSvc = [podioevent],
    OutputLevel=DEBUG
 )
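The ClusterRecoCoG step above takes a logWeightBase parameter; a log-weighted center of gravity weights each hit by max(0, w0 + ln(E_i/E_total)). A standalone sketch of that weighting scheme, not Juggler's implementation; the hit values are made up:

```python
import numpy as np

def log_weighted_cog(positions, energies, w0=6.2):
    """Center of gravity with logarithmic energy weights:
    w_i = max(0, w0 + ln(E_i / E_total))."""
    positions = np.asarray(positions, dtype=float)
    energies = np.asarray(energies, dtype=float)
    weights = np.maximum(0.0, w0 + np.log(energies / energies.sum()))
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

# two equal-energy hits get equal weights, so the CoG is the midpoint
center = log_weighted_cog([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]], [1.0, 1.0])
```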
'''
    A script to visualize calorimeter clusters.
    It reads the output of the Juggler component ImagingClusterReco, which is expected to contain
    clusters of hits after digitization, reconstruction, and clustering.

    Author: Chao Peng (ANL)
    Date: 04/30/2021
'''

import os
import numpy as np
import pandas as pd
import argparse
import matplotlib
from matplotlib import cm
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
from mpl_toolkits.axes_grid1 import make_axes_locatable
from utils import *
import sys
import ROOT  # used below for TDatabasePDG


# draw cluster in a 3d axis; expects a numpy array of shape (nhits, 4) where each row contains (x, y, z, E)
# note z and x axes are switched
def draw_hits3d(axis, data, cmap, units=('mm', 'mm', 'mm', 'MeV'), fontsize=24, **kwargs):
    # normalize energy to get colors
    x, y, z, edep = np.transpose(data)
    cvals = (edep - min(edep)) / (max(edep) - min(edep))
    cvals[np.isnan(cvals)] = 1.0
    colors = cmap(cvals)

    # hits
    axis.scatter(z, y, x, c=colors, marker='o', **kwargs)
    axis.tick_params(labelsize=fontsize)
    axis.set_zlabel('x ({})'.format(units[2]), fontsize=fontsize + 2, labelpad=fontsize)
    axis.set_ylabel('y ({})'.format(units[1]), fontsize=fontsize + 2, labelpad=fontsize)
    axis.set_xlabel('z ({})'.format(units[0]), fontsize=fontsize + 2, labelpad=fontsize)
    cb = plt.colorbar(cm.ScalarMappable(norm=matplotlib.colors.Normalize(vmin=min(edep), vmax=max(edep)), cmap=cmap),
                      ax=axis, shrink=0.85)
    cb.ax.tick_params(labelsize=fontsize)
    cb.ax.get_yaxis().labelpad = fontsize
    cb.ax.set_ylabel('Energy Deposit ({})'.format(units[3]), rotation=90, fontsize=fontsize + 4)
    return axis
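draw_hits3d maps energy deposits to colors via min-max normalization; the parenthesization of the numerator is easy to get wrong. A minimal standalone check with made-up deposits:

```python
import numpy as np

edep = np.array([0.5, 1.0, 3.0, 3.0])  # hypothetical energy deposits (MeV)

# (edep - min) / (max - min) maps the deposits onto [0, 1];
# without the outer parentheses, only min(edep) would be divided
cvals = (edep - edep.min()) / (edep.max() - edep.min())
```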


# draw a cylinder in 3d axes
# note z and x axes are switched
def draw_cylinder3d(axis, r, z, order=['x', 'y', 'z'], rsteps=500, zsteps=500, **kwargs):
    x = np.linspace(-r, r, rsteps)
    z = np.linspace(-z, z, zsteps)
    Xc, Zc = np.meshgrid(x, z)
    Yc = np.sqrt(r**2 - Xc**2)

    axis.plot_surface(Zc, Yc, Xc, alpha=0.1, **kwargs)
    axis.plot_surface(Zc, -Yc, Xc, alpha=0.1, **kwargs)
    return axis


# fit the track of cluster and draw the fit
def draw_track_fit(axis, dfh, length=200, stop_layer=8, scat_kw=dict(), line_kw=dict()):
    dfh = dfh[dfh['layer'] <= stop_layer]
    data = dfh.groupby('layer')[['z', 'y','x']].mean().values
    # data = dfh[['z', 'y', 'x']].values
    # ax.scatter(*data.T, **scat_kw)
    datamean = data.mean(axis=0)
    uu, dd, vv = np.linalg.svd(data - datamean)
    linepts = vv[0] * np.mgrid[-length:length:2j][:, np.newaxis]
    linepts += datamean
    axis.plot3D(*linepts.T, 'k:')
    return axis
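draw_track_fit obtains the line direction from the first right-singular vector of the mean-subtracted layer positions. The same SVD line fit on synthetic collinear points:

```python
import numpy as np

# points exactly on a line through the origin along (1, 2, 2) / 3
direction_true = np.array([1.0, 2.0, 2.0]) / 3.0
points = np.outer(np.linspace(-5.0, 5.0, 11), direction_true)

datamean = points.mean(axis=0)
_, _, vv = np.linalg.svd(points - datamean)
direction_fit = vv[0]  # unit vector along the best-fit line (sign arbitrary)
```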


# color map
def draw_heatmap(axis, x, y, weights, bins=1000, cmin=0., cmap=plt.get_cmap('rainbow'), pc_kw=dict()):
    w, xedg, yedg = np.histogram2d(x, y, weights=weights, bins=bins)
    xsz = np.mean(np.diff(xedg))
    ysz = np.mean(np.diff(yedg))
    wmin, wmax = w.min(), w.max()
    recs, clrs = [], []
    for i in np.arange(len(xedg) - 1):
        for j in np.arange(len(yedg) - 1):
            if w[i][j] > cmin:
                recs.append(Rectangle((xedg[i], yedg[j]), xsz, ysz))
                clrs.append(cmap((w[i][j] - wmin) / (wmax - wmin)))
    axis.add_collection(PatchCollection(recs, facecolor=clrs, **pc_kw))
    axis.set_xlim(xedg[0], xedg[-1])
    axis.set_ylim(yedg[0], yedg[-1])
    return axis, cm.ScalarMappable(norm=matplotlib.colors.Normalize(vmin=wmin, vmax=wmax), cmap=cmap)
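draw_heatmap relies on np.histogram2d to accumulate hit weights on a regular grid before drawing one rectangle per bin above cmin; the accumulation step in isolation, with made-up hits:

```python
import numpy as np

x = np.array([0.1, 0.1, 0.9])
y = np.array([0.1, 0.1, 0.9])
weights = np.array([1.0, 2.0, 5.0])

# 2x2 grid over [0, 1] x [0, 1]: the first two hits share the lower-left bin
w, xedg, yedg = np.histogram2d(x, y, weights=weights, bins=2,
                               range=[[0.0, 1.0], [0.0, 1.0]])
```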


# execute this script
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Visualize the cluster from analysis')
    parser.add_argument('file', type=str, help='path to root file')
    parser.add_argument('-e', type=int, default=0, dest='iev', help='event number to plot')
    parser.add_argument('-c', type=int, default=0, dest='icl', help='cluster number to plot (0: all, -1: no cluster)')
    parser.add_argument('-s', type=int, default=8, dest='stop', help='stop layer for track fit')
    parser.add_argument('-o', type=str, default='./plots', dest='outdir', help='output directory')
    parser.add_argument('--compact', type=str, default='', dest='compact', help='compact file')
    parser.add_argument('-m', '--macros', type=str, default='rootlogon.C', dest='macros',
                        help='root macros to load (accept multiple paths separated by \",\")')
    parser.add_argument('-b', '--branch-name', type=str, default='RecoEcalBarrelHits', dest='branch',
                        help='branch name in the root file (outputLayerCollection from ImagingClusterReco)')
    parser.add_argument('--topo-size', type=float, default=2.0, dest='topo_size',
                        help='bin size for projection plot (mrad)')
    parser.add_argument('--topo-range', type=float, default=50.0, dest='topo_range',
                        help='half range for projection plot (mrad)')
    parser.add_argument('--zoom-factor', type=float, default=1.0, dest='zoom_factor',
                        help='factor for zoom-in')
    args = parser.parse_args()


    # we can read these values from xml file
    desc = compact_constants(args.compact, [
        'cb_ECal_RMin',
        'cb_ECal_ReadoutLayerThickness',
        'cb_ECal_ReadoutLayerNumber',
        'cb_ECal_Length'
    ])
    if not len(desc):
        # or define Ecal shapes
        rmin, thickness, length = 890, 20*(10. + 1.65), 860*2+500
    else:
        # convert cm to mm
        rmin = desc[0]*10.
        thickness = desc[1]*desc[2]*10.
        length = desc[3]*10.


    # read data
    load_root_macros(args.macros)
    df = get_hits_data(args.file, args.iev, branch=args.branch)
    if args.icl != 0:
        df = df[df['cluster'] == args.icl]
    if not len(df):
        print("Error: did not find any hits for cluster {:d} in event {:d}".format(args.icl, args.iev))
        exit(-1)
    # convert to polar coordinates (mrad), and stack all r values
    df['r'] = np.sqrt(df['x'].values**2 + df['y'].values**2 + df['z'].values**2)
    df['phi'] = np.arctan2(df['y'].values, df['x'].values)*1000.
    df['theta'] = np.arccos(df['z'].values/df['r'].values)*1000.
    df['eta'] = -np.log(np.tan(df['theta'].values/1000./2.))
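The conversion above expresses hits in polar angles (in mrad) and pseudorapidity eta = -ln(tan(theta/2)); a hit perpendicular to the beam axis (theta = pi/2) should give eta = 0:

```python
import numpy as np

# hypothetical hit at (x, y, z) = (30, 40, 0), i.e. perpendicular to the beam axis
x, y, z = 30.0, 40.0, 0.0
r = np.sqrt(x**2 + y**2 + z**2)
theta = np.arccos(z / r) * 1000.0            # polar angle in mrad
eta = -np.log(np.tan(theta / 1000.0 / 2.0))  # pseudorapidity
```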

    # truth
    dfmcp = get_mcp_simple(args.file, args.iev, 'mcparticles2').iloc[0]
    pdgbase = ROOT.TDatabasePDG()
    inpart = pdgbase.GetParticle(int(dfmcp['pid']))
    print("Incoming particle = {}, pdgcode = {}, charge = {}, mass = {}"\
          .format(inpart.GetName(), inpart.PdgCode(), inpart.Charge(), inpart.Mass()))
    # neutral particle: no need to consider the magnetic field
    if np.isclose(inpart.Charge(), 0., atol=1e-5):
        vec = dfmcp[['px', 'py', 'pz']].values
    # charged particle: use the cluster center in the first layer instead
    else:
        flayer = df[df['layer'] == df['layer'].min()]
        vec = flayer[['x', 'y', 'z']].mean().values
    vec = vec/np.linalg.norm(vec)

    # particle line from (0, 0, 0) to the inner Ecal surface
    plen = rmin/np.sqrt(vec[0]**2 + vec[1]**2)  # do not shadow the detector length used below
    pline = np.transpose(vec*np.mgrid[0:plen:2j][:, np.newaxis])
    cmap = truncate_colormap(plt.get_cmap('jet'), 0.1, 0.9)

    os.makedirs(args.outdir, exist_ok=True)
    # cluster plot
    fig = plt.figure(figsize=(15, 12), dpi=160)
    ax = fig.add_subplot(111, projection='3d')
    # draw particle line
    ax.plot(*pline[[2, 1]], '--', zs=pline[0], color='green')
    # draw hits
    draw_hits3d(ax, df[['x', 'y', 'z', 'edep']].values, cmap, s=5.0)
    draw_cylinder3d(ax, rmin, length, rstride=10, cstride=10, color='royalblue')
    draw_cylinder3d(ax, rmin + thickness, length, rstride=10, cstride=10, color='forestgreen')
    ax.set_zlim(-(rmin + thickness), rmin + thickness)
    ax.set_ylim(-(rmin + thickness), rmin + thickness)
    ax.set_xlim(-length, length)
    fig.tight_layout()
    fig.savefig(os.path.join(args.outdir, 'e{}_cluster.png'.format(args.iev)))


    # zoomed-in cluster plot
    fig = plt.figure(figsize=(15, 12), dpi=160)
    ax = fig.add_subplot(111, projection='3d')
    # draw particle line
    ax.plot(*pline[[2, 1]], '--', zs=pline[0], color='green')
    # draw hits
    draw_hits3d(ax, df[['x', 'y', 'z', 'edep']].values, cmap, s=20.0)
    # draw_track_fit(ax, df, stop_layer=args.stop,
    #                scat_kw=dict(color='k', s=50.0), line_kw=dict(linestyle=':', color='k', lw=3))
    # view range
    center = (length + thickness/2.)*vec
    ranges = np.vstack([center - thickness/args.zoom_factor, center + thickness/args.zoom_factor]).T
    ax.set_zlim(*ranges[0])
    ax.set_ylim(*ranges[1])
    ax.set_xlim(*ranges[2])

    fig.tight_layout()
    fig.savefig(os.path.join(args.outdir, 'e{}_cluster_zoom.png'.format(args.iev)))


    # projection plot
    # convert to mrad
    vecp = np.asarray([np.arccos(vec[2]), np.arctan2(vec[1], vec[0])])*1000.
    phi_rg = np.asarray([vecp[1] - args.topo_range, vecp[1] + args.topo_range])
    th_rg = np.asarray([vecp[0] - args.topo_range, vecp[0] + args.topo_range])
    eta_rg = np.resize(-np.log(np.tan(vecp[0]/1000./2.)), 2) + np.asarray([-args.topo_range, args.topo_range])/1000.

    fig, axs = plt.subplots(1, 2, figsize=(13, 12), dpi=160, gridspec_kw={'wspace':0., 'width_ratios': [12, 1]})
    ax, sm = draw_heatmap(axs[0], df['eta'].values, df['phi'].values, weights=df['edep'].values,
                          bins=(np.arange(*eta_rg, step=args.topo_size/1000.), np.arange(*phi_rg, step=args.topo_size)),
                          cmap=cmap, cmin=0., pc_kw=dict(alpha=0.8, edgecolor='k'))

    ax.set_ylabel(r'$\phi$ (mrad)', fontsize=32)
    ax.set_xlabel(r'$\eta$', fontsize=32)
    ax.tick_params(labelsize=28)
    ax.xaxis.set_minor_locator(MultipleLocator(5))
    ax.yaxis.set_minor_locator(MultipleLocator(5))
    ax.grid(linestyle=':', which='both')
    ax.set_axisbelow(True)
    cb = plt.colorbar(sm, cax=axs[1], shrink=0.85, aspect=1.2*20)
    cb.ax.tick_params(labelsize=28)
    cb.ax.get_yaxis().labelpad = 10
    cb.ax.set_ylabel('Energy Deposit (MeV)', rotation=90, fontsize=32)
    fig.savefig(os.path.join(args.outdir, 'e{}_topo.png'.format(args.iev)))
'''
    A script to visualize the cluster layer-by-layer
    It reads the output from the Juggler component ImagingClusterReco, which is supposed to be clusters of hits after
    digitization, reconstruction, and clustering

    Author: Chao Peng (ANL)
    Date: 04/30/2021
'''

import os
import numpy as np
import pandas as pd
import argparse
import matplotlib
from matplotlib import cm
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.backends.backend_pdf import PdfPages
from utils import *
import sys
import imageio


# color map
def draw_heatmap(axis, x, y, weights, bins=1000, vmin=None, vmax=None, cmap=plt.get_cmap('rainbow'), pc_kw=dict()):
    w, xedg, yedg = np.histogram2d(x, y, weights=weights, bins=bins)
    xsz = np.mean(np.diff(xedg))
    ysz = np.mean(np.diff(yedg))
    if vmin is None:
        vmin = w.min()
    if vmax is None:
        vmax = w.max()
    recs, clrs = [], []
    for i in np.arange(len(xedg) - 1):
        for j in np.arange(len(yedg) - 1):
            if w[i][j] > vmin:
                recs.append(Rectangle((xedg[i], yedg[j]), xsz, ysz))
                clrs.append(cmap((w[i][j] - vmin) / (vmax - vmin)))
    axis.add_collection(PatchCollection(recs, facecolor=clrs, **pc_kw))
    axis.set_xlim(xedg[0], xedg[-1])
    axis.set_ylim(yedg[0], yedg[-1])
    return axis, cm.ScalarMappable(norm=matplotlib.colors.Normalize(vmin=vmin, vmax=vmax), cmap=cmap)
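draw_heatmap above is built on np.histogram2d plus a manual color normalization; a minimal standalone sketch of that binning step (synthetic hits, no plotting):

```python
import numpy as np

# synthetic hit positions and energy weights (illustrative only)
rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 1000))
w = rng.uniform(0.0, 1.0, size=1000)

# energy-weighted 2D histogram, as used inside draw_heatmap
hist, xedg, yedg = np.histogram2d(x, y, weights=w, bins=20)

# per-bin color value normalized to [0, 1], mirroring the vmin/vmax logic
vmin, vmax = hist.min(), hist.max()
norm = (hist - vmin)/(vmax - vmin)

print(np.isclose(hist.sum(), w.sum()))  # binning conserves the total weight
print(0.0 <= norm.min() <= norm.max() <= 1.0)
```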


# execute this script
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Visualize the cluster (layer-wise) from analysis')
    parser.add_argument('file', type=str, help='path to root file')
    parser.add_argument('-e', type=int, default=0, dest='iev', help='event number to plot')
    parser.add_argument('-c', type=int, default=0, dest='icl', help='cluster number to plot (0: all, -1: no cluster)')
    parser.add_argument('-s', type=int, default=8, dest='stop', help='stop layer for track fit')
    parser.add_argument('-o', type=str, default='./plots', dest='outdir', help='output directory')
    parser.add_argument('-d', type=float, default=1.0, dest='dura', help='duration of gif')
    parser.add_argument('--compact', type=str, default='', dest='compact', help='compact file')
    parser.add_argument('-m', '--macros', type=str, default='rootlogon.C', dest='macros',
                        help='root macros to load (accept multiple paths separated by \",\")')
    parser.add_argument('-b', '--branch-name', type=str, default='RecoEcalBarrelHits', dest='branch',
                        help='branch name in the root file (outputLayerCollection from ImagingClusterReco)')
    parser.add_argument('--topo-size', type=float, default=2.0, dest='topo_size',
                        help='bin size for projection plot (mrad)')
    parser.add_argument('--topo-range', type=float, default=50.0, dest='topo_range',
                        help='half range for projection plot (mrad)')
    args = parser.parse_args()


    # read the geometry constants from the compact xml file, if provided
    desc = compact_constants(args.compact, [
        'cb_ECal_RMin',
        'cb_ECal_ReadoutLayerThickness',
        'cb_ECal_ReadoutLayerNumber',
        'cb_ECal_Length'
    ])
    if not len(desc):
        # fall back to hard-coded Ecal dimensions (mm)
        rmin, thickness, length = 890, 20*(10. + 1.65), 860*2+500
    else:
        # convert cm to mm
        rmin = desc[0]*10.
        thickness = desc[1]*desc[2]*10.
        length = desc[3]*10.


    # read data
    load_root_macros(args.macros)
    df = get_hits_data(args.file, args.iev, args.branch)
    if args.icl != 0:
        df = df[df['cluster'] == args.icl]
    if not len(df):
        print("Error: did not find any hits for cluster {:d} in event {:d}".format(args.icl, args.iev))
        exit(-1)
    # convert to spherical coordinates (angles in mrad) and compute pseudorapidity
    df['r'] = np.sqrt(df['x'].values**2 + df['y'].values**2 + df['z'].values**2)
    df['phi'] = np.arctan2(df['y'].values, df['x'].values)*1000.
    df['theta'] = np.arccos(df['z'].values/df['r'].values)*1000.
    df['eta'] = -np.log(np.tan(df['theta'].values/1000./2.))

    # truth
    dfmcp = get_mcp_simple(args.file, args.iev, 'mcparticles2').iloc[0]
    pdgbase = ROOT.TDatabasePDG()
    inpart = pdgbase.GetParticle(int(dfmcp['pid']))
    print("Incoming particle = {}, pdgcode = {}, charge = {}, mass = {}"\
          .format(inpart.GetName(), inpart.PdgCode(), inpart.Charge(), inpart.Mass()))
    # neutral particle: no need to consider the magnetic field
    if np.isclose(inpart.Charge(), 0., atol=1e-5):
        vec = dfmcp[['px', 'py', 'pz']].values
    # charged particle: use the cluster center in the first layer instead
    else:
        flayer = df[df['layer'] == df['layer'].min()]
        vec = flayer[['x', 'y', 'z']].mean().values
    vec = vec/np.linalg.norm(vec)

    # particle line from (0, 0, 0) to the inner Ecal surface
    plen = rmin/np.sqrt(vec[0]**2 + vec[1]**2)  # do not shadow the detector length
    pline = np.transpose(vec*np.mgrid[0:plen:2j][:, np.newaxis])
    cmap = truncate_colormap(plt.get_cmap('jet'), 0.1, 0.9)

    # convert truth to mrad
    vecp = np.asarray([np.arccos(vec[2]), np.arctan2(vec[1], vec[0])])*1000.
    phi_rg = np.asarray([vecp[1] - args.topo_range, vecp[1] + args.topo_range])
    th_rg = np.asarray([vecp[0] - args.topo_range, vecp[0] + args.topo_range])
    eta_rg = np.resize(-np.log(np.tan(vecp[0]/1000./2.)), 2) + np.asarray([-args.topo_range, args.topo_range])/1000.

    os.makedirs(args.outdir, exist_ok=True)
    # cluster plot by layers (rebinned)
    pdf = PdfPages(os.path.join(args.outdir, 'e{}_c{}_layers.pdf'.format(args.iev, args.icl)))
    bpos, bprops = (0.5, 0.95), dict(boxstyle='round', facecolor='wheat', alpha=0.2)
    frames = []
    for i in np.arange(1, df['layer'].max() + 1, dtype=int):
        data = df[df['layer'] == i]
        fig, axs = plt.subplots(1, 2, figsize=(17, 16), dpi=160, gridspec_kw={'wspace':0., 'width_ratios': [16, 1]})
        ax, sm = draw_heatmap(axs[0], data['eta'].values, data['phi'].values, weights=data['edep'].values,
                              bins=(np.arange(*eta_rg, step=args.topo_size/1000.), np.arange(*phi_rg, step=args.topo_size)),
                              cmap=cmap, vmin=0.0, vmax=1.0, pc_kw=dict(alpha=0.8, edgecolor='k'))
        ax.set_ylabel(r'$\phi$ (mrad)', fontsize=28)
        ax.set_xlabel(r'$\eta$', fontsize=28)
        ax.tick_params(labelsize=24)
        ax.xaxis.set_minor_locator(MultipleLocator(5))
        ax.yaxis.set_minor_locator(MultipleLocator(5))
        ax.grid(linestyle=':', which='both')
        ax.set_axisbelow(True)
        ax.text(*bpos, 'Layer {:d}'.format(i), transform=ax.transAxes,
                fontsize=26, va='top', ha='center', bbox=bprops)
        cb = plt.colorbar(sm, cax=axs[1], shrink=0.85)
        cb.ax.tick_params(labelsize=24)
        cb.ax.get_yaxis().labelpad = 24
        cb.ax.set_ylabel('Energy Deposit (MeV)', rotation=90, fontsize=28)
        pdf.savefig(fig)
        # gif frames
        fig.savefig('ltmp.png')
        plt.close(fig)
        frames.append(imageio.imread('ltmp.png'))
    pdf.close()
    os.remove('ltmp.png')

    # build GIF
    imageio.mimsave(os.path.join(args.outdir, 'e{:d}_c{:d}_layers.gif'.format(args.iev, args.icl)),
                    frames, 'GIF', duration=args.dura)


    # cluster plot by layers (scatters)
    pdf = PdfPages(os.path.join(args.outdir, 'e{}_c{}_layers2.pdf'.format(args.iev, args.icl)))
    bpos, bprops = (0.5, 0.95), dict(boxstyle='round', facecolor='wheat', alpha=0.2)
    frames = []
    for i in np.arange(1, df['layer'].max() + 1, dtype=int):
        data = df[df['layer'] == i]
        fig, axs = plt.subplots(1, 2, figsize=(17, 16), dpi=160, gridspec_kw={'wspace':0., 'width_ratios': [16, 1]})
        ax = axs[0]
        colors = cmap(data['edep']/1.0)
        ax.scatter(data['eta'].values, data['phi'].values, c=colors, marker='s', s=15.0)
        ax.set_xlim(*eta_rg)
        ax.set_ylim(*phi_rg)
        ax.set_ylabel(r'$\phi$ (mrad)', fontsize=28)
        ax.set_xlabel(r'$\eta$', fontsize=28)
        ax.tick_params(labelsize=24)
        ax.xaxis.set_minor_locator(MultipleLocator(5))
        ax.yaxis.set_minor_locator(MultipleLocator(5))
        ax.grid(linestyle=':', which='both')
        ax.set_axisbelow(True)
        ax.text(*bpos, 'Layer {:d}'.format(i), transform=ax.transAxes,
                fontsize=26, va='top', ha='center', bbox=bprops)
        cb = plt.colorbar(sm, cax=axs[1], shrink=0.85)
        cb.ax.tick_params(labelsize=24)
        cb.ax.get_yaxis().labelpad = 24
        cb.ax.set_ylabel('Energy Deposit (MeV)', rotation=90, fontsize=28)
        pdf.savefig(fig)
        # gif frames
        fig.savefig('ltmp.png')
        plt.close(fig)
        frames.append(imageio.imread('ltmp.png'))
    pdf.close()
    os.remove('ltmp.png')

    # build GIF
    imageio.mimsave(os.path.join(args.outdir, 'e{:d}_c{:d}_layers2.gif'.format(args.iev, args.icl)),
                    frames, 'GIF', duration=args.dura)
'''
    A script to generate the energy profile (layer-wise)
    It reads the output from the Juggler component ImagingClusterReco, which is supposed to be clusters of hits after
    digitization, reconstruction, and clustering

    Author: Chao Peng (ANL)
    Date: 04/30/2021
'''

import os
import numpy as np
import pandas as pd
import ROOT
from ROOT import gROOT, gInterpreter
import argparse
import matplotlib
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator, MaxNLocator
import sys
from utils import *


def find_start_layer(grp, min_edep=0.5):
    ids, edeps = grp.sort_values('layer')[['layer', 'edep']].values.T
    edeps = np.cumsum(edeps)
    return min(ids[edeps > min_edep]) if sum(edeps > min_edep) > 0 else 20
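find_start_layer above returns the first layer at which the cumulative energy deposit crosses the min_edep threshold, with 20 as the "never started" fallback; a small self-contained check of that logic (synthetic values, threshold in MeV as assumed by the caller below):

```python
import numpy as np
import pandas as pd

# same logic as find_start_layer above
def find_start_layer(grp, min_edep=0.5):
    ids, edeps = grp.sort_values('layer')[['layer', 'edep']].values.T
    edeps = np.cumsum(edeps)
    return min(ids[edeps > min_edep]) if sum(edeps > min_edep) > 0 else 20

# three layers of 0.2 MeV each: the cumulative sum crosses 0.5 MeV at layer 3
grp = pd.DataFrame({'layer': [1, 2, 3], 'edep': [0.2, 0.2, 0.2]})
print(find_start_layer(grp))        # 3.0
print(find_start_layer(grp, 10.0))  # 20 (threshold never reached)
```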


# execute this script
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Generate energy profiles')
    parser.add_argument('file', type=str, help='path to root file')
    parser.add_argument('-o', '--plot-dir', type=str, default='./plots', dest='outdir', help='output directory')
    parser.add_argument('-t', '--type', type=str, default='unknown', dest='type', help='profile type (used in save)')
    parser.add_argument('-e', '--energy', type=float, default=5., dest='energy', help='incident particle energy (GeV)')
    parser.add_argument('-s', '--save', type=str, default='', dest='save', help='path to save profile')
    parser.add_argument('-c', '--color', type=str, default='royalblue', dest='color', help='colors for bar plots')
    parser.add_argument('-b', '--branch-name', type=str, default='EcalBarrelClustersLayers', dest='branch',
                        help='branch name in the root file (outputLayerCollection from ImagingClusterReco)')
    parser.add_argument('-m', '--macros', type=str, default='rootlogon.C', dest='macros',
                        help='root macros to load (accept multiple paths separated by \",\")')
    args = parser.parse_args()

    load_root_macros(args.macros)
    # prepare data
    dfe = get_layers_data(args.file, branch=args.branch)
    dfe.loc[:, 'total_edep'] = dfe.groupby(['event', 'cluster'])['edep'].transform('sum')
    # dfe.loc[:, 'efrac'] = dfe['edep']/dfe['total_edep']
    dfe.loc[:, 'efrac'] = dfe['edep']/(args.energy*1000.)
    dfe = dfe[dfe['cluster'] == 0]
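The transform('sum') call above broadcasts the per-(event, cluster) total back onto every row; a small synthetic illustration of that pandas pattern:

```python
import pandas as pd

# toy layer records for two events (illustrative values)
dfe = pd.DataFrame({'event':   [0, 0, 0, 1],
                    'cluster': [0, 0, 1, 0],
                    'edep':    [1.0, 2.0, 4.0, 8.0]})

# each row receives the summed edep of its (event, cluster) group
dfe['total_edep'] = dfe.groupby(['event', 'cluster'])['edep'].transform('sum')
print(dfe['total_edep'].tolist())  # [3.0, 3.0, 4.0, 8.0]
```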

    os.makedirs(args.outdir, exist_ok=True)

    slayer = dfe.groupby('event').apply(lambda x: find_start_layer(x, 1.0)).astype(int)
    dfe = dfe.merge(slayer.to_frame(name='slayer'), on='event')
    # dfe.loc[:, 'eff_layer'] = dfe['layer'] - dfe['slayer']
    # prof = dfe[dfe['eff_layer'] > 0].groupby('eff_layer')['edep'].describe()
    prof = dfe.groupby('layer')['efrac'].describe()

    # print(prof['mean'])
    # plot profiles
    bpos, bprops = (0.5, 0.95), dict(boxstyle='round', facecolor='wheat', alpha=0.3)
    fig, ax = plt.subplots(figsize=(16, 9), dpi=160)
    nev = len(dfe['event'].unique())
    ax.hist(dfe.groupby('event')['slayer'].min(), weights=[1/float(nev)]*nev,
            ec='black', bins=np.arange(0.5, 10.5, step=1.0))
    ax.xaxis.set_major_locator(MaxNLocator(integer=True))
    ax.xaxis.set_minor_locator(MultipleLocator(1))
    ax.yaxis.set_minor_locator(MultipleLocator(1))
    ax.grid(linestyle=':', which='both')
    ax.tick_params(labelsize=24)
    ax.set_xlabel('Start Layer', fontsize=26)
    ax.set_ylabel('Normalized Counts', fontsize=26)
    ax.text(*bpos, 'Minimum Edep\n' + '{:.1f} MeV'.format(1.0),
            transform=ax.transAxes, fontsize=26, verticalalignment='top', bbox=bprops)
    fig.savefig(os.path.join(args.outdir, 'edep_start_{}_{}.png'.format(args.type, int(args.energy))))


    fig, ax = plt.subplots(figsize=(16, 9), dpi=160)
    ax.plot(prof.index, prof['mean'].values*100., color=args.color, lw=2)
    # ax.fill_between(prof.index, prof['25%'], prof['75%'], color=args.color, alpha=0.3)
    ax.fill_between(prof.index, (prof['mean'] - prof['std'])*100., (prof['mean'] + prof['std'])*100.,
                    color=args.color, alpha=0.3)
    ax.xaxis.set_major_locator(MaxNLocator(integer=True))
    ax.xaxis.set_minor_locator(MultipleLocator(1))
    ax.yaxis.set_minor_locator(MultipleLocator(1))
    ax.grid(linestyle=':', which='both')
    ax.tick_params(labelsize=24)
    ax.set_xlabel('Layer', fontsize=26)
    ax.set_ylabel('Energy Deposit Percentage', fontsize=26)
    fig.savefig(os.path.join(args.outdir, 'efrac_{}_{}.png'.format(args.type, int(args.energy))))

    # edep fraction by layers
    layers = np.asarray([
        [1, 5, 8,],
        [10, 15, 20],
    ])
    layers_bins = np.linspace(0, 0.5, 50)

    fig, axs = plt.subplots(*layers.shape, figsize=(16, 9), dpi=160, sharex='col', sharey='all',
                            gridspec_kw=dict(hspace=0.05, wspace=0.05))

    for ax, layer in zip(axs.flat, layers.flatten()):
        data = dfe[dfe['layer'] == layer]
        ax.hist(data['efrac'].values*100., weights=[1/float(len(data))]*len(data), bins=layers_bins,
                ec='black', color=args.color)
        ax.tick_params(labelsize=24)
        ax.xaxis.set_minor_locator(MultipleLocator(1))
        ax.grid(linestyle=':', which='both')
        # ax.set_xlabel('Energy Deposit (MeV)', fontsize=26)
        # ax.set_ylabel('Normalized Counts', fontsize=26)
        mean, std = data.describe().loc[['mean', 'std'], 'edep'].values
        label = 'Layer {}'.format(layer)
    #            + '\n' + r'$\mu = {:.3f}$ MeV'.format(mean)
    #            + '\n' + r'$\sigma = {:.3f}$ MeV'.format(std)
        ax.text(*bpos, label, transform=ax.transAxes, fontsize=26, verticalalignment='top', bbox=bprops)
        ax.set_axisbelow(True)
        ax.set_yscale('log')
    fig.text(0.5, 0.02, 'Energy Deposit Percentage', fontsize=26, ha='center')
    fig.text(0.02, 0.5, 'Normalized Counts', fontsize=26, va='center', rotation=90)
    fig.savefig(os.path.join(args.outdir, 'efrac_layers_{}_{}.png'.format(args.type, int(args.energy))))


    if args.save:
        prof.loc[:, 'energy'] = args.energy*1000.
        prof.loc[:, 'type'] = args.type
        if os.path.exists(args.save):
            prev = pd.read_csv(args.save).set_index('layer', drop=True)
            prof = pd.concat([prof, prev])
        prof.to_csv(args.save)
'''
    A script to use the energy profile for e-pi separation
    It reads the output from the Juggler component ImagingClusterReco, which is supposed to be clusters of hits after
    digitization, reconstruction, and clustering

    Author: Chao Peng (ANL)
    Date: 04/30/2021
'''

import os
import numpy as np
import pandas as pd
import ROOT
from ROOT import gROOT, gInterpreter
import argparse
import matplotlib
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator, MaxNLocator
import sys
from utils import *


def prepare_data(path, **kwargs):
    tmp = get_layers_data(path, **kwargs)
    tmp.loc[:, 'total_edep'] = tmp.groupby(['event', 'cluster'])['edep'].transform('sum')
    tmp.loc[:, 'efrac'] = tmp['edep']/tmp['total_edep']
    # tmp = tmp[tmp['cluster'] == 0]
    return tmp


def calc_chi2(grp, pr, lmin=5, lmax=12):
    grp2 = grp[(grp['layer'] >= lmin) & (grp['layer'] <= lmax)]
    mean, std = pr.loc[grp2['layer'], ['mean', 'std']].values.T*pr['energy'].mean()
    edeps = grp2['edep'].values
    return np.sqrt(np.sum((edeps - mean)**2/std**2)/float(len(edeps)))
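calc_chi2 above compares per-layer deposits to a fractional profile scaled by the beam energy; a self-contained check with a two-layer toy profile (all numbers illustrative):

```python
import numpy as np
import pandas as pd

# same figure of merit as calc_chi2 above
def calc_chi2(grp, pr, lmin=5, lmax=12):
    grp2 = grp[(grp['layer'] >= lmin) & (grp['layer'] <= lmax)]
    mean, std = pr.loc[grp2['layer'], ['mean', 'std']].values.T*pr['energy'].mean()
    edeps = grp2['edep'].values
    return np.sqrt(np.sum((edeps - mean)**2/std**2)/float(len(edeps)))

# fractional profile indexed by layer, scaled by a 5 GeV (5000 MeV) beam
pr = pd.DataFrame({'mean': [0.02, 0.03], 'std': [0.01, 0.01],
                   'energy': [5000., 5000.]}, index=[5, 6])
# deposits exactly on the profile mean (100 and 150 MeV) give chi2 = 0
grp = pd.DataFrame({'layer': [5, 6], 'edep': [100., 150.]})
print(calc_chi2(grp, pr, 5, 6))  # 0.0
```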


# execute this script
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='epi_separation')
    parser.add_argument('efile', type=str, help='path to root file (electrons)')
    parser.add_argument('pifile', type=str, help='path to root file (pions)')
    parser.add_argument('--prof', type=str, default='profile.csv', help='path to electron profile')
    parser.add_argument('--plot-dir', type=str, default='./plots', dest='outdir', help='output directory')
    parser.add_argument('-b', '--branch-name', type=str, default='EcalBarrelClustersLayers', dest='branch',
                        help='branch name in the root file (outputLayerCollection from ImagingClusterReco)')
    parser.add_argument('-m', '--macros', type=str, default='rootlogon.C', dest='macros',
                        help='root macros to load (accept multiple paths separated by \",\")')
    args = parser.parse_args()

    os.makedirs(args.outdir, exist_ok=True)


    load_root_macros(args.macros)
    # prepare data
    dfe = prepare_data(args.efile, branch=args.branch)
    dfpi = prepare_data(args.pifile, branch=args.branch)

    colors = ['royalblue', 'indianred', 'limegreen']
    # index by layer so that .loc lookups in calc_chi2 and the x-axis use the layer number
    prof = pd.concat([pd.read_csv(p.strip()) for p in args.prof.split(',')]).set_index('layer', drop=True)

    # profile comparison
    fig, ax = plt.subplots(figsize=(16, 9), dpi=160)
    for title, color in zip(sorted(prof['type'].unique()), colors):
        pr = prof[prof['type'] == title]
        ax.plot(pr.index, pr['mean'], color=color, lw=2)
        ax.fill_between(pr.index, (pr['mean'] - pr['std']), (pr['mean'] + pr['std']),
                        color=color, alpha=0.3, label=title)
    ax.xaxis.set_major_locator(MaxNLocator(integer=True))
    ax.xaxis.set_minor_locator(MultipleLocator(1))
    ax.yaxis.set_minor_locator(MultipleLocator(1))
    ax.grid(linestyle=':', which='both')
    ax.tick_params(labelsize=24)
    ax.set_xlabel('Layer', fontsize=26)
    ax.set_ylabel('Energy Deposit Percentage', fontsize=26)
    ax.legend(fontsize=26, loc='upper left')
    fig.savefig(os.path.join(args.outdir, 'compare_prof.png'))


    # check profile
    layer_range = (4, 12)
    pre = prof[prof['type'].str.lower() == 'electron']
    fig, ax = plt.subplots(figsize=(16, 9), dpi=160)
    chi2 = dfe.groupby(['event', 'cluster']).apply(lambda x: calc_chi2(x, pre, *layer_range))
    ax.hist(chi2, bins=np.linspace(0, 5, 200), weights=[1/float(len(chi2))]*len(chi2),
            ec='royalblue', color='royalblue', alpha=0.5, label='electrons')
    chi2 = dfpi.groupby(['event', 'cluster']).apply(lambda x: calc_chi2(x, pre, *layer_range))
    # print(chi2[chi2 < 0.7])
    ax.hist(chi2, bins=np.linspace(0, 5, 200), weights=[1/float(len(chi2))]*len(chi2),
            ec='indianred', color='indianred', alpha=0.5, label='pions')
    ax.grid(linestyle=':', which='major')
    ax.set_axisbelow(True)
    ax.tick_params(labelsize=24)
    ax.set_xlabel(r'$\chi^2$', fontsize=26)
    ax.set_ylabel('Normalized Counts', fontsize=26)
    # ax.set_yscale('log')
    # ax.set_ylim(1e-3, 1.)
    ax.legend(title=r'$\chi^2$ for $E_{{dep}}$ in layer {:d} - {:d}'.format(*layer_range), title_fontsize=28, fontsize=26)
    fig.savefig(os.path.join(args.outdir, 'efrac_chi2.png'))
'''
    A utility script to help the analysis of imaging calorimeter data

    Author: Chao Peng (ANL)
    Date: 04/30/2021
'''

import os
import ROOT
import numpy as np
import pandas as pd
import matplotlib
import DDG4
from ROOT import gROOT, gInterpreter


# helper function to truncate color map (for a better view from the rainbow colormap)
def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
    new_cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
        'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
        cmap(np.linspace(minval, maxval, n)))
    return new_cmap
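truncate_colormap above resamples a parent colormap onto a sub-interval; a standalone check that the truncated map's endpoints match the parent's colors at minval and maxval:

```python
import numpy as np
import matplotlib
from matplotlib import pyplot as plt

def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
    # sample the parent map on [minval, maxval] and rebuild a map on [0, 1]
    return matplotlib.colors.LinearSegmentedColormap.from_list(
        'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
        cmap(np.linspace(minval, maxval, n)))

base = plt.get_cmap('jet')
trunc = truncate_colormap(base, 0.1, 0.9)

# endpoints of the truncated map are the parent's colors at 0.1 and 0.9
print(np.allclose(trunc(0.0), base(0.1), atol=0.02))
print(np.allclose(trunc(1.0), base(0.9), atol=0.02))
```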


# load root macros, input is an argument string
def load_root_macros(arg_macros):
    for path in arg_macros.split(','):
        path = path.strip()
        if os.path.exists(path):
            gROOT.Macro(path)
        else:
            print('"{}" does not exist, skipping it.'.format(path))


# read mc particles from root file
def get_mcp_data(path, evnums=None, branch='mcparticles2'):
    f = ROOT.TFile(path)
    events = f.events
    if evnums is None:
        evnums = np.arange(events.GetEntries())
    elif isinstance(evnums, int):
        evnums = [evnums]

    dbuf = np.zeros(shape=(5000*len(evnums), 6))
    idb = 0
    for iev in evnums:
        if iev >= events.GetEntries():
            print('Error: event {:d} is out of range (0 - {:d})'.format(iev, events.GetEntries() - 1))
            continue

        events.GetEntry(iev)
        # extract full mc particle data
        for part in getattr(events, branch):
            dbuf[idb] = (iev, part.psx, part.psy, part.psz, part.pdgID, part.status)
            idb += 1
    return pd.DataFrame(data=dbuf[:idb], columns=['event', 'px', 'py', 'pz', 'pid', 'status'])


# read the primary mc particle (one row per event) from root file
def get_mcp_simple(path, evnums=None, branch='mcparticles2'):
    f = ROOT.TFile(path)
    events = f.events
    if evnums is None:
        evnums = np.arange(events.GetEntries())
    elif isinstance(evnums, int):
        evnums = [evnums]

    dbuf = np.zeros(shape=(len(evnums), 6))
    idb = 0
    for iev in evnums:
        if iev >= events.GetEntries():
            print('Error: event {:d} is out of range (0 - {:d})'.format(iev, events.GetEntries() - 1))
            continue

        events.GetEntry(iev)
        # extract only the mc particle at index 2 (assumed primary)
        part = getattr(events, branch)[2]
        dbuf[idb] = (iev, part.psx, part.psy, part.psz, part.pdgID, part.status)
        idb += 1
    return pd.DataFrame(data=dbuf[:idb], columns=['event', 'px', 'py', 'pz', 'pid', 'status'])


# read hits data from root file
def get_hits_data(path, evnums=None, branch='RecoEcalBarrelHits'):
    f = ROOT.TFile(path)
    events = f.events
    if evnums is None:
        evnums = np.arange(events.GetEntries())
    elif isinstance(evnums, int):
        evnums = [evnums]

    dbuf = np.zeros(shape=(2000*len(evnums), 7))
    idb = 0
    for iev in evnums:
        if iev >= events.GetEntries():
            print('Error: event {:d} is out of range (0 - {:d})'.format(iev, events.GetEntries() - 1))
            continue

        events.GetEntry(iev)
        for hit in getattr(events, branch):
            dbuf[idb] = (iev, hit.clusterID, hit.layerID, hit.position.x, hit.position.y, hit.position.z, hit.edep)
            idb += 1

    return pd.DataFrame(data=dbuf[:idb], columns=['event', 'cluster', 'layer', 'x', 'y', 'z', 'edep'])


# read layers data from root file
def get_layers_data(path, evnums=None, branch="EcalBarrelClustersLayers"):
    f = ROOT.TFile(path)
    events = f.events
    if evnums is None:
        evnums = np.arange(events.GetEntries())
    elif isinstance(evnums, int):
        evnums = [evnums]

    dbuf = np.zeros(shape=(2000*len(evnums), 7))
    idb = 0
    for iev in evnums:
        if iev >= events.GetEntries():
            print('Error: event {:d} is out of range (0 - {:d})'.format(iev, events.GetEntries() - 1))
            continue

        events.GetEntry(iev)
        for layer in getattr(events, branch):
            dbuf[idb] = (iev, layer.clusterID, layer.layerID,
                         layer.position.x, layer.position.y, layer.position.z, layer.edep)
            idb += 1

    return pd.DataFrame(data=dbuf[:idb], columns=['event', 'cluster', 'layer', 'x', 'y', 'z', 'edep'])


# read clusters data from root file
def get_clusters_data(path, evnums=None, branch='EcalBarrelClustersReco'):
    f = ROOT.TFile(path)
    events = f.events
    if evnums is None:
        evnums = np.arange(events.GetEntries())
    elif isinstance(evnums, int):
        evnums = [evnums]

    dbuf = np.zeros(shape=(20*len(evnums), 6))
    idb = 0
    for iev in evnums:
        if iev >= events.GetEntries():
            print('Error: event {:d} is out of range (0 - {:d})'.format(iev, events.GetEntries() - 1))
            continue

        events.GetEntry(iev)
        for k, cl in enumerate(getattr(events, branch)):
            dbuf[idb] = (iev, k, cl.nhits, cl.edep, cl.cl_theta, cl.cl_phi)
            idb += 1

    return pd.DataFrame(data=dbuf[:idb], columns=['event', 'cluster', 'nhits', 'edep', 'cl_theta', 'cl_phi'])


def compact_constants(path, names):
    if not os.path.exists(path):
        print('Cannot find compact file \"{}\".'.format(path))
        return []
    kernel = DDG4.Kernel()
    description = kernel.detectorDescription()
    kernel.loadGeometry("file:{}".format(path))
    try:
        vals = [description.constantAsDouble(n) for n in names]
    except Exception:
        print('Failed to extract values from {}, returning empty list.'.format(names))
        vals = []
    kernel.terminate()
    return vals


JugAlgo/CMakeLists.txt

0 → 100644
+41 −0
# SPDX-License-Identifier: LGPL-3.0-or-later
# Copyright (C) 2022 Sylvester Joosten

################################################################################
# Package: JugAlgo
################################################################################

gaudi_add_header_only_library(JugAlgo
  LINK
  Gaudi::GaudiKernel
  algorithms::core
  JugBase
)

file(GLOB JugAlgoPlugins_sources CONFIGURE_DEPENDS src/components/*.cpp)
gaudi_add_module(JugAlgoPlugins
  SOURCES
  ${JugAlgoPlugins_sources}
  LINK
  Gaudi::GaudiKernel
  JugBase
  JugAlgo
  algorithms::core
)

target_include_directories(JugAlgoPlugins PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}>
  $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
)

target_compile_options(JugAlgoPlugins PRIVATE -Wno-suggest-override)

install(TARGETS JugAlgo JugAlgoPlugins
  EXPORT JugAlgoTargets
  RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT bin
  LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT shlib
  COMPONENT dev)

if(BUILD_TESTING)
  enable_testing()
endif()

JugAlgo/README.md

0 → 100644
+1 −0
Original line number Original line Diff line number Diff line
Juggler bindings for the `algorithms` library.
+138 −0
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Sylvester Joosten

#include <string>
#include <type_traits>

#include <algorithms/algorithm.h>
#include <algorithms/type_traits.h>

#include <Gaudi/Algorithm.h>
#include <GaudiKernel/Service.h>
#include <JugAlgo/IAlgoServiceSvc.h>
#include <JugAlgo/detail/DataProxy.h>

#define JUGALGO_DEFINE_ALGORITHM(algorithm, impl, ns)                                              \
  using algorithm##Base = Jug::Algo::Algorithm<impl>;                                              \
  namespace ns {                                                                                   \
  struct algorithm : algorithm##Base {                                                             \
    algorithm(const std::string& name, ISvcLocator* svcLoc) : algorithm##Base(name, svcLoc) {}     \
  };                                                                                               \
  DECLARE_COMPONENT(algorithm)                                                                     \
  }

namespace Jug::Algo {

template <class AlgoImpl> class Algorithm : public Gaudi::Algorithm {
public:
  using algo_type   = AlgoImpl;
  using input_type  = typename algo_type::input_type;
  using output_type = typename algo_type::output_type;
  using Input       = typename algo_type::Input;
  using Output      = typename algo_type::Output;

  Algorithm(const std::string& name, ISvcLocator* svcLoc)
      : Gaudi::Algorithm(name, svcLoc)
      , m_algo{name}
      , m_output{this, m_algo.outputNames()}
      , m_input{this, m_algo.inputNames()} {
    defineProperties();
  }

  StatusCode initialize() override {
    debug() << "Initializing " << name() << endmsg;

    // The algorithms library uses exceptions, Gaudi uses StatusCode --> catch and propagate
    try {
      // Grab the AlgoServiceSvc
      m_algo_svc = service("AlgoServiceSvc");
      if (!m_algo_svc) {
        error() << "Unable to get an instance of the AlgoServiceSvc" << endmsg;
        return StatusCode::FAILURE;
      }

      // Forward the log level of this algorithm
      const algorithms::LogLevel level{
          static_cast<algorithms::LogLevel>(msgLevel() > 0 ? msgLevel() - 1 : 0)};
      debug() << "Setting the logger level to " << algorithms::logLevelName(level) << endmsg;
      m_algo.level(level);

      // Init our data structures
      debug() << "Initializing data structures" << endmsg;
      m_input.init();
      m_output.init();

      // configure properties
      debug() << "Configuring properties" << endmsg;
      initProperties();

      // validate properties
      debug() << "Validating properties" << endmsg;
      m_algo.validate();

      // call the internal algorithm init
      debug() << "Initializing underlying algorithm " << m_algo.name() << endmsg;
      m_algo.init();
    } catch (const std::exception& e) {
      fatal() << e.what() << endmsg;
      return StatusCode::FAILURE;
    }
    return StatusCode::SUCCESS;
  }

  StatusCode execute(const EventContext&) const override {
    try {
      m_algo.process(m_input.get(), m_output.get());
    } catch (const std::exception& e) {
      error() << e.what() << endmsg;
      return StatusCode::FAILURE;
    }
    return StatusCode::SUCCESS;
  }

protected:
  template <typename T> void setAlgoProp(std::string_view name, T&& value) {
    m_algo.template setProperty<T>(name, std::forward<T>(value));
  }
  template <typename T> T getAlgoProp(std::string name) const {
    return m_algo.template getProperty<T>(name);
  }
  bool hasAlgoProp(std::string_view name) const { return m_algo.hasProperty(name); }

private:
  // to be called during construction (before init())
  void defineProperties() {
    for (const auto& [key, prop] : m_algo.getProperties()) {
      std::visit(
          [this, key = key](auto&& val) {
            using T = std::decay_t<decltype(val)>;
            this->m_props.emplace(
                key, std::make_unique<Gaudi::Property<T>>(this, std::string(key), val));
          },
          prop.get());
    }
  }

  // to be called during init() --> will actually set the underlying algorithm properties
  void initProperties() {
    for (const auto& [key, prop] : m_algo.getProperties()) {
      std::visit(
          [this, key = key](auto&& val) {
            using T                = std::decay_t<decltype(val)>;
            const auto* gaudi_prop = static_cast<Gaudi::Property<T>*>(this->m_props.at(key).get());
            const auto prop_val    = gaudi_prop->value();
            this->m_algo.setProperty(key, prop_val);
          },
          prop.get());
    }
  }

  algo_type m_algo;
  SmartIF<IAlgoServiceSvc> m_algo_svc;
  detail::DataProxy<output_type> m_output;
  detail::DataProxy<input_type> m_input;
  std::map<std::string_view, std::unique_ptr<Gaudi::Details::PropertyBase>> m_props;
};

} // namespace Jug::Algo
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Sylvester Joosten

#pragma once

#include <GaudiKernel/IService.h>

// Juggler bindings for all required algorithms services
// Will setup all necessary services (using the algorithms::ServiceSvc)

class GAUDI_API IAlgoServiceSvc : virtual public IService {
public:
  DeclareInterfaceID(IAlgoServiceSvc, 1, 0);
  virtual ~IAlgoServiceSvc() {}
  // No actual API needed, as this service will do all the magic behind the screens
};
#pragma once

#include <memory>
#include <string>
#include <tuple>
#include <vector>

#include <gsl/gsl>

#include <algorithms/algorithm.h>
#include <algorithms/type_traits.h>

#include <Gaudi/Algorithm.h>
#include <k4FWCore/DataHandle.h>

namespace Jug::Algo::detail {

enum class DataMode : unsigned { kInput, kOutput };

// Generate properties for each of the data arguments
template <class T, DataMode kMode> class DataElement {

public:
  using value_type = std::conditional_t<kMode == DataMode::kInput, algorithms::input_type_t<T>,
                                        algorithms::output_type_t<T>>;
  using data_type  = algorithms::data_type_t<T>;
  constexpr static const bool kIsOptional = algorithms::is_optional_v<T>;

  template <class Owner>
  DataElement(Owner* owner, std::string_view name)
      : m_data_name{std::make_unique<Gaudi::Property<std::string>>(owner, std::string(name), "")}
      , m_owner{owner} {}
  void init() {
    if (m_handle) {
      // treat error: already initialized
    }
    if (!m_data_name->empty()) {
      m_handle = std::make_unique<DataHandle<data_type>>(
          *m_data_name,
          ((kMode == DataMode::kInput) ? Gaudi::DataHandle::Reader : Gaudi::DataHandle::Writer),
          m_owner);
    } else if (!algorithms::is_optional_v<T>) {
      // treat error: member not optional but no collection name given
    }
  }
  value_type get() const {
    if constexpr (kIsOptional) {
      if (!m_handle) {
        return nullptr;
      }
    }
    if constexpr (kMode == DataMode::kInput) {
      return m_handle->get();
    } else {
      return m_handle->createAndPut();
    }
  }

private:
  std::unique_ptr<Gaudi::Property<std::string>>
      m_data_name; // This needs to be a pointer, else things go wrong once we go through
                   // createElements - probably something about passing the Property through an
                   // rvalue (or copy) constructor
  std::unique_ptr<DataHandle<data_type>> m_handle;
  gsl::not_null<Gaudi::Algorithm*> m_owner;
};

// Specialization for vectors
template <class T, class A, DataMode kMode> class DataElement<std::vector<T, A>, kMode> {
public:
  using value_type = std::conditional_t<kMode == DataMode::kInput, algorithms::input_type_t<T>,
                                        algorithms::output_type_t<T>>;
  using data_type  = algorithms::data_type_t<T>;

  template <class Owner>
  DataElement(Owner* owner, std::string_view name)
      : m_data_names{std::make_unique<Gaudi::Property<std::vector<std::string>>>(
            owner, std::string(name), {})}
      , m_owner{owner} {}
  void init() {
    if (!m_handles.empty()) {
      // treat error: already initialized
    }
    if (!m_data_names->empty()) {
      for (const auto& name : *m_data_names) {
        if (!name.empty()) {
          m_handles.emplace_back(std::make_unique<DataHandle<data_type>>(
              name,
              (kMode == DataMode::kInput ? Gaudi::DataHandle::Reader : Gaudi::DataHandle::Writer),
              m_owner));
        } else {
          // treat error: empty name
        }
      }
    } else {
      // OK if nothing given here, no error
    }
  }
  std::vector<value_type> get() const {
    std::vector<value_type> ret;
    for (auto& handle : m_handles) {
      if constexpr (kMode == DataMode::kInput) {
        ret.emplace_back(handle->get());
      } else {
        ret.emplace_back(handle->createAndPut());
      }
    }
    return ret;
  }

private:
  std::unique_ptr<Gaudi::Property<std::vector<std::string>>> m_data_names;
  std::vector<std::unique_ptr<DataHandle<data_type>>> m_handles;
  gsl::not_null<Gaudi::Algorithm*> m_owner;
};

template <DataMode kMode, class Owner, class NamesArray, class Tuple, size_t... I>
auto createElements(Owner* owner, const NamesArray& names, const Tuple&, std::index_sequence<I...>)
    -> std::tuple<DataElement<std::tuple_element_t<I, Tuple>, kMode>...> {
  return {DataElement<std::tuple_element_t<I, Tuple>, kMode>(owner, std::get<I>(names))...};
}

// Call ::get() on each element of the HandleTuple, and return the result in the format of
// ReturnTuple
template <class ReturnTuple, class HandleTuple, size_t... I>
ReturnTuple getElements(HandleTuple& handles, std::index_sequence<I...>) {
  return {std::get<I>(handles).get()...};
}

// Create data handle structure for all members

template <class Data> class DataProxy {
public:
  static constexpr DataMode kMode =
      (algorithms::is_input_v<Data> ? DataMode::kInput : DataMode::kOutput);
  using value_type              = typename Data::value_type;
  using data_type               = typename Data::data_type;
  constexpr static size_t kSize = Data::kSize;
  using names_type              = typename Data::key_type;
  using elements_type =
      decltype(createElements<kMode>(std::declval<Gaudi::Algorithm*>(), names_type(), data_type(),
                                     std::make_index_sequence<kSize>()));

  template <class Owner>
  DataProxy(Owner* owner, const names_type& names)
      : m_elements{
            createElements<kMode>(owner, names, data_type(), std::make_index_sequence<kSize>())} {}
  void init() {
    std::apply([](auto&&... el) { (el.init(), ...); }, m_elements);
  }
  value_type get() const {
    return getElements<value_type>(m_elements, std::make_index_sequence<kSize>());
  }

private:
  elements_type m_elements;
};

} // namespace Jug::Algo::detail
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Sylvester Joosten

#include <string>

#include <GaudiKernel/Service.h>
#include <JugAlgo/IAlgoServiceSvc.h>
#include <k4Interface/IGeoSvc.h>

#include <algorithms/geo.h>
#include <algorithms/logger.h>
#include <algorithms/random.h>
#include <algorithms/service.h>

// too many services? :P
class AlgoServiceSvc : public extends<Service, IAlgoServiceSvc> {
public:
  AlgoServiceSvc(const std::string& name, ISvcLocator* svc) : base_class(name, svc) {}
  virtual ~AlgoServiceSvc() = default;

  virtual StatusCode initialize() final;
  virtual StatusCode finalize() final { return StatusCode::SUCCESS; }

private:
  SmartIF<IGeoSvc> m_geoSvc;
  Gaudi::Property<size_t> m_randomSeed{this, "randomSeed", 1};
};

DECLARE_COMPONENT(AlgoServiceSvc)

// Implementation

StatusCode AlgoServiceSvc::initialize() {
  StatusCode sc = Service::initialize();
  if (!sc.isSuccess()) {
    fatal() << "Error initializing AlgoServiceSvc" << endmsg;
    return sc;
  }

  try {
    auto& serviceSvc = algorithms::ServiceSvc::instance();
    info() << "ServiceSvc declared " << serviceSvc.services().size() << " services" << endmsg;
    // Now register custom initializers
    // Starting with the logger
    const algorithms::LogLevel level{
        static_cast<algorithms::LogLevel>(msgLevel() > 0 ? msgLevel() - 1 : 0)};
    info() << "Setting up algorithms::LogSvc with default level " << algorithms::logLevelName(level)
           << endmsg;
    serviceSvc.setInit<algorithms::LogSvc>([this,level](auto&& logger) {
      this->info() << "Initializing the algorithms::LogSvc using the Gaudi logger" << endmsg;
      logger.defaultLevel(level);
      logger.init(
          [this](const algorithms::LogLevel l, std::string_view caller, std::string_view msg) {
            const std::string text = fmt::format("[{}] {}", caller, msg);
            if (l == algorithms::LogLevel::kCritical) {
              this->fatal() << text << endmsg;
            } else if (l == algorithms::LogLevel::kError) {
              this->error() << text << endmsg;
            } else if (l == algorithms::LogLevel::kWarning) {
              this->warning() << text << endmsg;
            } else if (l == algorithms::LogLevel::kInfo) {
              this->info() << text << endmsg;
            } else if (l == algorithms::LogLevel::kDebug) {
              this->debug() << text << endmsg;
            } else if (l == algorithms::LogLevel::kTrace) {
              this->verbose() << text << endmsg;
            }
          });
    });
    // geo Service
    info() << "Setting up algorithms::GeoSvc" << endmsg;
    m_geoSvc = service("GeoSvc");
    if (!m_geoSvc) {
      error() << "Unable to locate Geometry Service. "
              << "Make sure you have GeoSvc in the right order in the configuration." << endmsg;
      return StatusCode::FAILURE;
    }
    serviceSvc.setInit<algorithms::GeoSvc>([this](auto&& g) {
      this->info() << "Initializing algorithms::GeoSvc with the Juggler GeoSvc" << endmsg;
      g.init(m_geoSvc->getDetector());
    });
    // setup random service
    info() << "Setting up algorithms::RandomSvc\n"
           << "  --> using internal STL 64-bit MT engine\n"
           << "  --> seed set to " << m_randomSeed << endmsg;
    serviceSvc.setInit<algorithms::RandomSvc>([this](auto&& r) {
      this->info() << "Initializing the algorithms::RandomSvc" << endmsg;
      r.setProperty("seed", m_randomSeed);
      r.init();
    });
    info() << "Initializing services" << endmsg;

    // first set own log level to verbose so we actually display everything that is requested
    // (this overrides what was initially set through the OutputLevel property)
    updateMsgStreamOutputLevel(MSG::VERBOSE);

    // Validate our service setup (this will throw if we encounter issues)
    serviceSvc.init();

  } catch (const std::exception& e) {
    fatal() << e.what() << endmsg;
    return StatusCode::FAILURE;
  }

  info() << "AlgoServiceSvc initialized successfully" << endmsg;
  return StatusCode::SUCCESS;
}
# SPDX-License-Identifier: LGPL-3.0-or-later
# Copyright (C) 2022 Wouter Deconinck, Whitney Armstrong

################################################################################
# Package: JugBase
################################################################################

gaudi_add_header_only_library(JugBase
  LINK
  Gaudi::GaudiKernel
  podio::podioRootIO
  ROOT::Core ROOT::RIO ROOT::Tree
  DD4hep::DDRec
  ActsCore ActsPluginDD4hep
  k4FWCore::k4FWCore
)
target_compile_options(JugBase PRIVATE -Wno-suggest-override)

file(GLOB JugBasePlugins_sources CONFIGURE_DEPENDS src/components/*.cpp)
gaudi_add_module(JugBasePlugins
  SOURCES
  ${JugBasePlugins_sources}
  LINK
  Gaudi::GaudiKernel
  ROOT::Core ROOT::RIO ROOT::Tree
  JugBase
  EDM4HEP::edm4hep
  DD4hep::DDRec
  ActsCore ActsPluginDD4hep ActsPluginJson
  EDM4EIC::edm4eic
)

target_include_directories(JugBasePlugins PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}>
  $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>
)

target_compile_options(JugBasePlugins PRIVATE -Wno-suggest-override)

install(TARGETS JugBase JugBasePlugins
  EXPORT JugBaseTargets
  RUNTIME DESTINATION "${CMAKE_INSTALL_BINDIR}" COMPONENT bin
  LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}" COMPONENT shlib
  COMPONENT dev)

if(BUILD_TESTING)
  enable_testing()
endif()

#add_test(NAME ProduceForReadTest
#         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
#         COMMAND ${CMAKE_BINARY_DIR}/run ${PROJECT_SOURCE_DIR}/JugBase/scripts/gaudirun tests/options/simple_producer.py)
#add_test(NAME ReadTest
#         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
#         COMMAND ${CMAKE_BINARY_DIR}/run ${PROJECT_SOURCE_DIR}/JugBase/scripts/gaudirun tests/options/simple_reader.py)
#add_test(NAME ReadGeoTest
#         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
#         COMMAND ${CMAKE_BINARY_DIR}/run ${PROJECT_SOURCE_DIR}/JugBase/scripts/gaudirun tests/options/reader_with_geosvc.py)
#gaudi_add_test(CheckReadCollectionSize
#         ENVIRONMENT PYTHONPATH+=${PODIO_PYTHON_DIR}
#         WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
#         COMMAND python JugBase/tests/scripts/check_coll_after_read.py

JugBase/JugBase/DataHandle.h

deleted 100644 → 0
+0 −199
#ifndef JUGBASE_DATAHANDLE_H
#define JUGBASE_DATAHANDLE_H

#include "JugBase/DataWrapper.h"
#include "JugBase/PodioDataSvc.h"

#include "GaudiKernel/AlgTool.h"
#include "GaudiKernel/Algorithm.h"
#include <GaudiKernel/DataObjectHandle.h>
#include <GaudiKernel/GaudiException.h>
#include <Gaudi/Property.h>
#include <GaudiKernel/ServiceLocatorHelper.h>

#include "TTree.h"

#include <type_traits>

/**
 * Specialisation of the Gaudi DataHandle
 * for use with podio collections.
 */ 
template <typename T>
class DataHandle : public DataObjectHandle<DataWrapper<T>> {

public:
  friend class Algorithm;
  friend class AlgTool;

public:
  DataHandle();
  ~DataHandle();

  /// Initialises mother class
  DataHandle(DataObjID& descriptor, Gaudi::DataHandle::Mode a, IDataHandleHolder* fatherAlg);

  DataHandle(const std::string& k, Gaudi::DataHandle::Mode a, IDataHandleHolder* fatherAlg);
   
  /// Retrieve object from transient data store
  const T* get();

  /**
   * Register object in transient store
   */
  void put(T* object);

  /**
  * Create and register object in transient store
  */
  T* createAndPut();

private:
  ServiceHandle<IDataProviderSvc> m_eds;
  bool m_isGoodType{false};
  bool m_isCollection{false};
  T* m_dataPtr;
};

template <typename T>
DataHandle<T>::~DataHandle() {
  // release memory allocated for primitive types (see comments in ctor)
  if constexpr (std::is_integral_v<T> || std::is_floating_point_v<T>) {
    delete m_dataPtr;
  }
}

//---------------------------------------------------------------------------
template <typename T>
DataHandle<T>::DataHandle(DataObjID& descriptor, Gaudi::DataHandle::Mode a, IDataHandleHolder* fatherAlg)
    : DataObjectHandle<DataWrapper<T>>(descriptor, a, fatherAlg), m_eds("EventDataSvc", "DataHandle") {
      
}
/// The DataHandle::Writer constructor is used to create the corresponding branch in the output file
template <typename T>
DataHandle<T>::DataHandle(const std::string& descriptor, Gaudi::DataHandle::Mode a, IDataHandleHolder* fatherAlg)
    : DataObjectHandle<DataWrapper<T>>(descriptor, a, fatherAlg), m_eds("EventDataSvc", "DataHandle") {

  if (a == Gaudi::DataHandle::Writer) {
    m_eds.retrieve();
    PodioDataSvc* pds;
    pds = dynamic_cast<PodioDataSvc*>( m_eds.get());
    m_dataPtr = nullptr;
  if (nullptr != pds) {
    if (std::is_convertible<T*,podio::CollectionBase*>::value) {
       // case 1: T is a podio collection
       // for this case creation of branches is still handled in PodioOutput
       // (but could be moved here in the future) 
    } else if constexpr (std::is_integral_v<T>) {
       // case 2: T is some integer type
       // the call signature for TTree Branch is different for primitive types
       // in particular, we pass the pointer, not the address of the pointer
       // and have to append a char indicating type (see TTree documentation)
       // therefore space needs to be allocated for the integer
       m_dataPtr = new T();
       TTree* tree = pds->eventDataTree();
       tree->Branch(descriptor.c_str(),  m_dataPtr, (descriptor + "/I").c_str());
    } else if constexpr (std::is_floating_point_v<T>) {
       // case 3: T is some floating point type
       // similar to case 2; distinguish floats and doubles by size
       m_dataPtr = new T();
       TTree* tree = pds->eventDataTree();
       if (sizeof(T) > 4) {
         tree->Branch(descriptor.c_str(),  m_dataPtr, (descriptor + "/D").c_str());
       } else {
         tree->Branch(descriptor.c_str(),  m_dataPtr, (descriptor + "/F").c_str());
       }
    } else {
       // case 4: T is any other type (for which exists a root dictionary,
       // otherwise i/o will fail)
       // this includes std::vectors of ints, floats
       TTree* tree = pds->eventDataTree();
       tree->Branch(descriptor.c_str(),  &m_dataPtr);
      }
    }
  }
}

/**
 * Try to retrieve from the transient store. If the retrieval succeeded and
 * this is the first time we retrieve, perform a dynamic cast to the desired
 * object. Then finally set the handle as Read.
 * If this is not the first time we cast and the cast worked, just use the
 * static cast: we do not need the checks of the dynamic cast for every access!
 */
template <typename T>
const T* DataHandle<T>::get() {
  DataObject* dataObjectp = nullptr;
  auto sc = m_eds->retrieveObject(DataObjectHandle<DataWrapper<T>>::fullKey().key(), dataObjectp);

  if (LIKELY(sc.isSuccess())) {
    if (UNLIKELY(!m_isGoodType && !m_isCollection)) {
      // only do this once (if both are false after this, we throw exception)
      m_isGoodType = nullptr != dynamic_cast<DataWrapper<T>*>(dataObjectp);
      if (!m_isGoodType) {
        auto tmp = dynamic_cast<DataWrapper<podio::CollectionBase>*>(dataObjectp);
        if (tmp != nullptr) {
          m_isCollection = nullptr != dynamic_cast<T*>(tmp->collectionBase());
        }
      }
    }
    if (LIKELY(m_isGoodType)) {
      DataObjectHandle<DataWrapper<T>>::setRead();
      return static_cast<DataWrapper<T>*>(dataObjectp)->getData();
    } else if (m_isCollection) {
      // The reader does not know the specific type of the collection. So we need a reinterpret_cast if the handle was
      // created by the reader.
      DataWrapper<podio::CollectionBase>* tmp = static_cast<DataWrapper<podio::CollectionBase>*>(dataObjectp);
      DataObjectHandle<DataWrapper<T>>::setRead();
      return reinterpret_cast<const T*>(tmp->collectionBase());
    } else {
      std::string errorMsg("The type provided for " + DataObjectHandle<DataWrapper<T>>::toString() +
                           " is different from the one of the object in the store.");
      throw GaudiException(errorMsg, "wrong product type", StatusCode::FAILURE);
    }
  }
  std::string msg("Could not retrieve product " + DataObjectHandle<DataWrapper<T>>::toString());
  throw GaudiException(msg, "wrong product name", StatusCode::FAILURE);
}

//---------------------------------------------------------------------------
template <typename T>
void DataHandle<T>::put(T* objectp) {
  DataWrapper<T>* dw = new DataWrapper<T>();
  // in case T is of primitive type, we must not change the pointer address
  // (see comments in ctor) instead copy the value of T into allocated memory
  if constexpr (std::is_integral_v<T> || std::is_floating_point_v<T>) {
    *m_dataPtr = *objectp;
  } else {
    m_dataPtr = objectp;
  }
  dw->setData(objectp);
  DataObjectHandle<DataWrapper<T>>::put(dw);

}
//---------------------------------------------------------------------------
/**
 * Create the collection, put it in the DataObjectHandle and return the
 * pointer to the data. Call this function if you create a collection and
 * want to save it.
 */
template <typename T>
T* DataHandle<T>::createAndPut() {
  T* objectp = new T();
  this->put(objectp);
  return objectp;
}

// temporary to allow property declaration
namespace Gaudi {
template <class T>
class Property<::DataHandle<T>&> : public ::DataObjectHandleProperty {
public:
  Property(const std::string& name, ::DataHandle<T>& value) : ::DataObjectHandleProperty(name, value) {}

  /// virtual Destructor
  virtual ~Property() {}
};
}

#endif

JugBase/JugBase/DataWrapper.h

deleted 100644 → 0
+0 −46
#ifndef JUGBASE_DATAWRAPPER_H
#define JUGBASE_DATAWRAPPER_H

#include <type_traits>

// Include files
#include "GaudiKernel/DataObject.h"
#include "podio/CollectionBase.h"

class GAUDI_API DataWrapperBase : public DataObject {
public:
  // ugly hack to circumvent the usage of boost::any yet
  // DataSvc would need a templated register method
  virtual podio::CollectionBase* collectionBase() = 0;
  virtual ~DataWrapperBase(){};
};

template <class T>
class GAUDI_API DataWrapper : public DataWrapperBase {
public:
  DataWrapper() : DataWrapperBase(), m_data(nullptr) {}
  virtual ~DataWrapper();

  const T* getData() { return m_data; }
  void setData(T* data) { m_data = data; }
  /// try to cast to collectionBase; may return nullptr;
  virtual podio::CollectionBase* collectionBase();

private:
  T* m_data;
};

template <class T>
DataWrapper<T>::~DataWrapper() {
  if (m_data != nullptr) delete m_data;
}

template <class T>
podio::CollectionBase* DataWrapper<T>::collectionBase() {
  if constexpr (std::is_base_of_v<podio::CollectionBase, T>) {
    return static_cast<podio::CollectionBase*>(m_data);
  } else {
    return nullptr;
  }
}

#endif

JugBase/JugBase/IEDMMergeTool.h

deleted 100644 → 0
+0 −28
#ifndef JUGBASE_IEDMMERGETOOL_H
#define JUGBASE_IEDMMERGETOOL_H

#include "GaudiKernel/IAlgTool.h"

namespace podio {
class EventStore;
}

/** @class IEDMMergeTool 
 *
 * Interface for the tool used in the overlay algorithm. 
 * Must implement the correct collection and I/O and merging behavior,
 * especially for the case when there are associations between parts of the EDM.
 */
class IEDMMergeTool : virtual public IAlgTool {
public:
  DeclareInterfaceID(IEDMMergeTool, 1, 0);
  /// read pileup collection from store and handle them internally
  virtual StatusCode readPileupCollection(podio::EventStore& store) = 0;
  /// merge internally stored pileup (and signal) collections and make them accessible (typically via event store)
  virtual StatusCode mergeCollections() = 0;

  /// read signal collection from event store and
  virtual StatusCode readSignal() = 0;
};

#endif  // JUGBASE_IEDMMERGETOOL_H

JugBase/JugBase/IGeoSvc.h

deleted 100644 → 0
+0 −44
//
//  IGeoSvc.h
//
//
//  Created by Julia Hrdinka on 30/03/15.
//
//

#ifndef IGEOSVC_H
#define IGEOSVC_H

#include "GaudiKernel/IService.h"

namespace dd4hep {
class Detector;
class DetElement;
namespace rec {
class CellIDPositionConverter;
}
}

namespace Acts {
class TrackingGeometry;
}

class G4VUserDetectorConstruction;

class GAUDI_API IGeoSvc : virtual public IService {

public:
  /// InterfaceID
  DeclareInterfaceID(IGeoSvc, 1, 0);
  // receive DD4hep Geometry
  virtual dd4hep::DetElement getDD4HepGeo() = 0;
  virtual dd4hep::Detector* detector() = 0;
  virtual std::shared_ptr<const dd4hep::rec::CellIDPositionConverter> cellIDPositionConverter() const = 0;
  virtual std::shared_ptr<const Acts::TrackingGeometry> trackingGeometry() const = 0;
  // receive Geant4 Geometry
  //virtual G4VUserDetectorConstruction* getGeant4Geo() = 0;

  virtual ~IGeoSvc() {}
};

#endif  // IGEOSVC_H

JugBase/JugBase/IPileUpTool.h

deleted 100644 → 0
+0 −38
#ifndef GENERATION_IPILEUPTOOL_H
#define GENERATION_IPILEUPTOOL_H

#include "GaudiKernel/IAlgTool.h"

/** @class IPileUpTool IPileUpTool.h "Generation/IPileUpTool.h"
 *
 *  Abstract interface to pile up tools. Generates the number of pile-up
 *  interactions to generate for each event.
 *
 *  @author Patrick Robbe
 *  @date   2005-08-17
 */

namespace HepMC {
class GenEvent;
}

static const InterfaceID IID_IPileUpTool("IPileUpTool", 3, 0);

class IPileUpTool : virtual public IAlgTool {
public:
  static const InterfaceID& interfaceID() { return IID_IPileUpTool; }

  /** Computes the number of pile-up interactions in the event.
   *  @return Number of pile-up interactions to generate.
   */
  virtual unsigned int numberOfPileUp() = 0;

  virtual double getMeanPileUp() = 0;

  /// Print various counters at the end of the job
  virtual void printPileUpCounters() = 0;

};

#endif  // GENERATION_IPILEUPTOOL_H

JugBase/JugBase/KeepDropSwitch.h

deleted 100644 → 0
+0 −27
#ifndef EXAMPLES_KEEPDROPSWITCH_H
#define EXAMPLES_KEEPDROPSWITCH_H

#include <map>
#include <string>
#include <vector>

std::vector<std::string> split(const std::string& s, char delim);

int wildcmp(const char* wild, const char* string);

class KeepDropSwitch {
public:
  enum Cmd { KEEP, DROP, UNKNOWN };
  typedef std::vector<std::string> CommandLines;
  KeepDropSwitch() {}
  explicit KeepDropSwitch(const CommandLines& cmds) { m_commandlines = cmds; }
  bool isOn(const std::string& astring) const;

private:
  bool getFlag(const std::string& astring) const;
  Cmd extractCommand(const std::string cmdLine) const;
  CommandLines m_commandlines;
  mutable std::map<std::string, bool> m_cache;
};

#endif

JugBase/JugBase/PodioDataSvc.h

deleted 100644 → 0
+0 −90
#ifndef JUGBASE_PODIODATASVC_H
#define JUGBASE_PODIODATASVC_H

#include "GaudiKernel/DataSvc.h"
#include "GaudiKernel/IConversionSvc.h"
// PODIO
#include "podio/CollectionBase.h"
#include "podio/CollectionIDTable.h"
#include "podio/EventStore.h"
#include "podio/ROOTReader.h"

#include <utility>
// Forward declarations

/** @class PodioDataSvc PodioDataSvc.h
 *
 *   An EvtDataSvc for PODIO classes
 *
 *  @author B. Hegner
 */
class PodioDataSvc : public DataSvc {
public:
  typedef std::vector<std::pair<std::string, podio::CollectionBase*>> CollRegistry;

  /** Initialize the service.
   *  - attaches data loader
   *  - registers input filenames  
   */
  virtual StatusCode initialize();
  virtual StatusCode reinitialize();
  virtual StatusCode finalize();
  virtual StatusCode clearStore();

  /// Standard Constructor
  PodioDataSvc(const std::string& name, ISvcLocator* svc);

  /// Standard Destructor
  virtual ~PodioDataSvc();

  // Use DataSvc functionality except where we override
  using DataSvc::registerObject;
  /// Overriding standard behaviour of evt service
  /// Register object with the data store.
  virtual StatusCode registerObject(std::string_view parentPath,
                                    std::string_view fullPath,
                                    DataObject* pObject) override final;

  StatusCode readCollection(const std::string& collectionName, int collectionID);

  virtual const CollRegistry& getCollections() const { return m_collections; }
  virtual const CollRegistry& getReadCollections() const { return m_readCollections; }
  virtual podio::CollectionIDTable* getCollectionIDs() { return m_collectionIDs; }

  /// Set the collection IDs (if reading a file)
  void setCollectionIDs(podio::CollectionIDTable* collectionIds);
  /// Resets caches of reader and event store, increases event counter
  void endOfRead();


  TTree* eventDataTree() {return m_eventDataTree;}


private:

  // eventDataTree
  TTree* m_eventDataTree;
  /// PODIO reader for ROOT files
  podio::ROOTReader m_reader;
  /// PODIO EventStore, used to initialise collections
  podio::EventStore m_provider;
  /// Counter of the event number
  int m_eventNum{0};
  /// Number of events in the file / to process
  int m_eventMax{-1};


  SmartIF<IConversionSvc> m_cnvSvc;

  // special members for podio handling
  std::vector<std::pair<std::string, podio::CollectionBase*>> m_collections;
  std::vector<std::pair<std::string, podio::CollectionBase*>> m_readCollections;
  podio::CollectionIDTable* m_collectionIDs;

protected:
  /// ROOT file name the input is read from. Set by option filename
  std::vector<std::string> m_filenames;
  std::string m_filename;
  std::string m_treename;
};
#endif  // CORE_PODIODATASVC_H
+40 −0
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong

#ifndef IParticleSvc_H
#define IParticleSvc_H

#include <GaudiKernel/IService.h>
#include <map>

namespace Jug::Base {

  /** Simple particle data.
   *
   */
  struct ParticleData {
    int         pdgCode;
    int         charge;
    double      mass; //std::string name;
  };
} // namespace Jug::Base

/** Particle interface.
 *
 * \ingroup base
 */
class GAUDI_API IParticleSvc : virtual public IService {
public:
  using Particle    = Jug::Base::ParticleData;
  using ParticleMap = std::map<int, Particle>;

public:
  /// InterfaceID
  DeclareInterfaceID(IParticleSvc, 1, 0);
  virtual ~IParticleSvc() {}

  virtual ParticleMap particleMap() const = 0;
  virtual Particle    particle(int pdg) const = 0;
};

#endif  // IParticleSvc_H
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Wouter Deconinck

#pragma once

#include "Math/Vector4D.h"
using ROOT::Math::PxPyPzEVector;

#include "edm4hep/MCParticleCollection.h"
#include "edm4eic/ReconstructedParticleCollection.h"

#include <cmath>
#include <set>
#include <vector>

namespace Jug::Base::Beam {

  template<class collection>
  auto find_first_with_pdg(
      const collection& parts,
      const std::set<int32_t>& pdg) {
    std::vector<decltype(parts[0])> c;
    for (const auto& p: parts) {
      if (pdg.count(p.getPDG()) > 0) {
        c.push_back(p);
        break;
      }
    }
    return c;
  }

  template<class collection>
  auto find_first_with_status_pdg(
      const collection& parts,
      const std::set<int32_t>& status,
      const std::set<int32_t>& pdg) {
    std::vector<decltype(parts[0])> c;
    for (const auto& p: parts) {
      if (status.count(p.getGeneratorStatus()) > 0 &&
          pdg.count(p.getPDG()) > 0) {
        c.push_back(p);
        break;
      }
    }
    return c;
  }

  inline auto find_first_beam_electron(const edm4hep::MCParticleCollection& mcparts) {
    return find_first_with_status_pdg(mcparts, {4}, {11});
  }

  inline auto find_first_beam_hadron(const edm4hep::MCParticleCollection& mcparts) {
    return find_first_with_status_pdg(mcparts, {4}, {2212, 2112});
  }

  inline auto find_first_scattered_electron(const edm4hep::MCParticleCollection& mcparts) {
    return find_first_with_status_pdg(mcparts, {1}, {11});
  }

  inline auto find_first_scattered_electron(const edm4eic::ReconstructedParticleCollection& rcparts) {
    return find_first_with_pdg(rcparts, {11});
  }

  template<typename Vector3>
  PxPyPzEVector
  round_beam_four_momentum(
      const Vector3& p_in,
      const float mass,
      const std::vector<float>& pz_set,
      const float crossing_angle = 0.0) {
    PxPyPzEVector p_out;
    for (const auto& pz : pz_set) {
      if (fabs(p_in.z / pz - 1) < 0.1) {
        p_out.SetPz(pz);
        break;
      }
    }
    p_out.SetPx(p_out.Pz() * sin(crossing_angle));
    p_out.SetPz(p_out.Pz() * cos(crossing_angle));
    p_out.SetE(std::hypot(p_out.Px(), p_out.Pz(), mass));
    return p_out;
  }

} // namespace Jug::Base::Beam
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Wouter Deconinck, Barak Schmookler

#pragma once

#include "Math/Vector4D.h"
using ROOT::Math::PxPyPzEVector;

#include "Math/LorentzRotation.h"
#include "Math/LorentzVector.h"
#include "Math/RotationX.h"
#include "Math/RotationY.h"
#include "Math/Boost.h"

namespace Jug::Base::Boost {

  using ROOT::Math::LorentzRotation;

  inline LorentzRotation determine_boost(PxPyPzEVector ei, PxPyPzEVector pi) {

    using ROOT::Math::RotationX;
    using ROOT::Math::RotationY;
    using ROOT::Math::Boost;

    // Step 1: Find the needed boosts and rotations from the incoming lepton and hadron beams
    // (note, this will give you a perfect boost, in principle you will not know the beam momenta exactly and should use an average)

    // Define the Boost to make beams back-to-back
    const auto cmBoost = (ei + pi).BoostToCM();

    const Boost boost_to_cm(-cmBoost);

    // This will boost beams from a center of momentum frame back to (nearly) their original energies
    const Boost boost_to_headon(cmBoost); // FIXME

    // Boost and rotate the incoming beams to find the proper rotations

    // Boost to COM frame (note: Boost::operator() returns the transformed
    // vector, so the result must be assigned back)
    pi = boost_to_cm(pi);
    ei = boost_to_cm(ei);
    // Rotate to head-on
    RotationY rotAboutY(-1.0*atan2(pi.Px(), pi.Pz())); // Rotate to remove x component of beams
    RotationX rotAboutX(+1.0*atan2(pi.Py(), pi.Pz())); // Rotate to remove y component of beams

    LorentzRotation tf;
    tf *= boost_to_cm;
    tf *= rotAboutY;
    tf *= rotAboutX;
    tf *= boost_to_headon;
    return tf;
  }

  inline PxPyPzEVector apply_boost(const LorentzRotation& tf, PxPyPzEVector part) {

    // Step 2: Apply boosts and rotations to any particle 4-vector
    // (here too, choices will have to be made as to what the 4-vector is for reconstructed particles)
  
    // Boost and rotate the particle 4-momentum into the head-on frame
    // (LorentzRotation::operator() returns the transformed vector)
    part = tf(part);
    return part;
  }

} // namespace Jug::Base::Boost
#pragma once
#include <tuple>

namespace Jug::Utils {

  /** Enumerate helper.
   *
   * ref: https://www.reedbeta.com/blog/python-like-enumerate-in-cpp17/
   */
  template <typename T, typename TIter = decltype(std::begin(std::declval<T>())),
            typename = decltype(std::end(std::declval<T>()))>
  constexpr auto Enumerate(T&& iterable)
  {
    struct iterator
    {
        size_t i;
        TIter iter;
        bool operator != (const iterator & other) const { return iter != other.iter; }
        void operator ++ () { ++i; ++iter; }
        auto operator * () const { return std::tie(i, *iter); }
    };
    struct iterable_wrapper
    {
        T iterable;
        auto begin() { return iterator{ 0, std::begin(iterable) }; }
        auto end() { return iterator{ 0, std::end(iterable) }; }
    };
    return iterable_wrapper{ std::forward<T>(iterable) };
  }

} // namespace Jug::Utils

JugBase/src/KeepDropSwitch.cpp

deleted 100644 → 0
+0 −100
#include "JugBase/KeepDropSwitch.h"

#include <iostream>
#include <sstream>
#include <stdexcept>

int wildcmp(const char* wild, const char* string) {
  // Written by Jack Handy - <A href="mailto:jakkhandy@hotmail.com">jakkhandy@hotmail.com</A>
  const char *cp = nullptr, *mp = nullptr;
  while ((*string) && (*wild != '*')) {
    if ((*wild != *string) && (*wild != '?')) {
      return 0;
    }
    wild++;
    string++;
  }
  while (*string) {
    if (*wild == '*') {
      if (!*++wild) {
        return 1;
      }
      mp = wild;
      cp = string + 1;
    } else if ((*wild == *string) || (*wild == '?')) {
      wild++;
      string++;
    } else {
      wild = mp;
      string = cp++;
    }
  }
  while (*wild == '*') {
    wild++;
  }
  return !*wild;
}

std::vector<std::string> split(const std::string& s, char delim) {
  std::vector<std::string> elems;
  std::stringstream ss(s);
  std::string item;
  while (std::getline(ss, item, delim)) {
    if (item != "") elems.push_back(item);
  }
  return elems;
}

bool KeepDropSwitch::isOn(const std::string& astring) const {
  typedef std::map<std::string, bool>::const_iterator MIter;
  MIter im = m_cache.find(astring);
  if (im != m_cache.end())
    return im->second;
  else {
    bool val = getFlag(astring);
    m_cache.insert(std::pair<std::string, bool>(astring, val));
    return val;
  }
}

bool KeepDropSwitch::getFlag(const std::string& astring) const {
  bool flag = true;
  for (const auto& cmdline : m_commandlines) {
    std::vector<std::string> words = split(cmdline, ' ');
    if (words.size() != 2) {
      std::ostringstream msg;
      msg << "malformed command string : " << cmdline;
      throw std::invalid_argument(msg.str());
    }
    std::string cmd = words[0];
    std::string pattern = words[1];
    Cmd theCmd = UNKNOWN;
    if (cmd == "keep")
      theCmd = KEEP;
    else if (cmd == "drop")
      theCmd = DROP;
    else {
      std::ostringstream msg;
      msg << "malformed command in line: " << std::endl;
      msg << cmdline << std::endl;
      msg << "should be keep or drop, lower case" << std::endl;
      throw std::invalid_argument(msg.str());
    }
    bool match = wildcmp(pattern.c_str(), astring.c_str());
    if (not match)
      continue;
    else if (theCmd == KEEP)
      flag = true;
    else
      flag = false;
  }
  return flag;
}

KeepDropSwitch::Cmd KeepDropSwitch::extractCommand(const std::string cmdline) const {
  std::vector<std::string> words = split(cmdline, ' ');
  for (auto& word : words)
    std::cout << "'" << word << "' ";
  std::cout << std::endl;
  return UNKNOWN;
}

JugBase/src/PodioDataSvc.cpp

deleted 100644 → 0
+0 −118
#include "JugBase/PodioDataSvc.h"
#include "GaudiKernel/IConversionSvc.h"
#include "GaudiKernel/IEventProcessor.h"
#include "GaudiKernel/ISvcLocator.h"

#include "JugBase/DataWrapper.h"

#include "TTree.h"

StatusCode PodioDataSvc::initialize() {
  // Nothing to do: just call base class initialisation
  StatusCode status = DataSvc::initialize();
  ISvcLocator* svc_loc = serviceLocator();

  // Attach data loader facility
  m_cnvSvc = svc_loc->service("EventPersistencySvc");
  status = setDataLoader(m_cnvSvc);

  if (m_filename != "") {
    m_filenames.push_back(m_filename);
  }

  if (m_filenames.size() > 0) {
    if (m_filenames[0] != "") {
      m_reader.openFiles(m_filenames);
      m_eventMax = m_reader.getEntries();
      auto idTable = m_reader.getCollectionIDTable();

      setCollectionIDs(idTable);
      m_provider.setReader(&m_reader);
    }
  }
  return status;
}

StatusCode PodioDataSvc::reinitialize() {
  // Do nothing for this service
  return StatusCode::SUCCESS;
}

StatusCode PodioDataSvc::finalize() {
  m_cnvSvc = 0; // release
  DataSvc::finalize().ignore();
  return StatusCode::SUCCESS;
}

StatusCode PodioDataSvc::clearStore() {
  for (auto& collNamePair : m_collections) {
    if (collNamePair.second != nullptr) {
      collNamePair.second->clear();
    }
  }
  for (auto& collNamePair : m_readCollections) {
    if (collNamePair.second != nullptr) {
      collNamePair.second->clear();
    }
  }
  DataSvc::clearStore().ignore();
  m_collections.clear();
  m_readCollections.clear();
  return StatusCode::SUCCESS;
}

void PodioDataSvc::endOfRead() {
  if (m_eventMax != -1) {
    m_provider.clearCaches();
    m_reader.endOfEvent();
    if (m_eventNum++ > m_eventMax) {
      info() << "Reached end of file with event " << m_eventMax << endmsg;
      IEventProcessor* eventProcessor;
      service("ApplicationMgr", eventProcessor);
      eventProcessor->stopRun();
    }
  }
}

void PodioDataSvc::setCollectionIDs(podio::CollectionIDTable* collectionIds) {
  if (m_collectionIDs != nullptr) {
    delete m_collectionIDs;
  }
  m_collectionIDs = collectionIds;
}

/// Standard Constructor
PodioDataSvc::PodioDataSvc(const std::string& name, ISvcLocator* svc)
    : DataSvc(name, svc), m_collectionIDs(new podio::CollectionIDTable()) {
  m_eventDataTree = new TTree("EVENT", "Events tree");
}

/// Standard Destructor
PodioDataSvc::~PodioDataSvc() {}

StatusCode PodioDataSvc::readCollection(const std::string& collName, int collectionID) {
  podio::CollectionBase* collection(nullptr);
  m_provider.get(collectionID, collection);
  auto wrapper = new DataWrapper<podio::CollectionBase>;
  int id = m_collectionIDs->add(collName);
  collection->setID(id);
  wrapper->setData(collection);
  m_readCollections.emplace_back(std::make_pair(collName, collection));
  return DataSvc::registerObject("/Event", "/" + collName, wrapper);
}

StatusCode PodioDataSvc::registerObject(std::string_view parentPath, std::string_view fullPath, DataObject* pObject) {
  DataWrapperBase* wrapper = dynamic_cast<DataWrapperBase*>(pObject);
  if (wrapper != nullptr) {
    podio::CollectionBase* coll = wrapper->collectionBase();
    if (coll != nullptr) {
      size_t pos = fullPath.find_last_of("/");
      std::string shortPath(fullPath.substr(pos + 1, fullPath.length()));
      int id = m_collectionIDs->add(shortPath);
      coll->setID(id);
      m_collections.emplace_back(std::make_pair(shortPath, coll));
    }
  }
  return DataSvc::registerObject(parentPath, fullPath, pObject);
}
+0 −24
#include "ConstPileUp.h"

DECLARE_COMPONENT(ConstPileUp)

ConstPileUp::ConstPileUp(const std::string& type, const std::string& name, const IInterface* parent)
    : GaudiTool(type, name, parent) {
  declareInterface<IPileUpTool>(this);
}

ConstPileUp::~ConstPileUp() { ; }

StatusCode ConstPileUp::initialize() {
  StatusCode sc = GaudiTool::initialize();
  printPileUpCounters();
  return sc;
}

unsigned int ConstPileUp::numberOfPileUp() { return m_numPileUpEvents; }

double ConstPileUp::getMeanPileUp() { return m_numPileUpEvents; }

void ConstPileUp::printPileUpCounters() {
  info() << "Current number of pileup events: " << m_numPileUpEvents << endmsg;
}
+0 −33
#ifndef GENERATION_CONSTPILEUP_H
#define GENERATION_CONSTPILEUP_H

#include "JugBase/IPileUpTool.h"
#include "GaudiAlg/GaudiTool.h"

/** @class ConstPileUp 
 *
 *  Tool to generate number of pile-up events to be mixed with signal event.
 *  Concrete implementation of a IPileUpTool -- the most trivial one, actually,
 *  returning just a constant (that can be specified as a property).
 *  The interface is kept close to the originals in LHCb's Gauss(ino). 
 *
 *  @author Valentin Volkl
 *  @date   2015-12-16
 */
class ConstPileUp : public GaudiTool, virtual public IPileUpTool {
public:
  ConstPileUp(const std::string& type, const std::string& name, const IInterface* parent);
  virtual ~ConstPileUp();
  virtual StatusCode initialize();
  /// in the special case of constant pileup,
  /// this method is identical to getMeanPileUp
  virtual unsigned int numberOfPileUp();
  virtual double getMeanPileUp();
  virtual void printPileUpCounters();

private:
  /// Number of min bias events to pile on signal event.
  Gaudi::Property<unsigned int> m_numPileUpEvents{this, "numPileUpEvents", 0, "number of pile-up events"};
};

#endif  // GENERATION_CONSTPILEUP_H
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong, Wouter Deconinck

#include "EICDataSvc.h"

// Instantiation of a static factory class used by clients to create
// instances of this service
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DECLARE_COMPONENT(EICDataSvc)

/// Standard Constructor
EICDataSvc::EICDataSvc(const std::string& name, ISvcLocator* svc) : PodioDataSvc(name, svc) {
  declareProperty("tree", m_treename = "", "Name of the tree to read");
  declareProperty("inputs", m_filenames = {}, "Names of the files to read");
  declareProperty("input", m_filename = "", "Name of the file to read");
}

/// Standard Destructor
EICDataSvc::~EICDataSvc() {}
EICDataSvc::~EICDataSvc() = default;
#ifndef JUGBASE_FCCDATASVC_H
#define JUGBASE_FCCDATASVC_H
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong

#include "JugBase/PodioDataSvc.h"
#ifndef JUGBASE_EICDATASVC_H
#define JUGBASE_EICDATASVC_H

#include <k4FWCore/PodioDataSvc.h>

class EICDataSvc : public PodioDataSvc {

@@ -12,4 +15,4 @@ public:
  /// Standard Destructor
  virtual ~EICDataSvc();
};
#endif  // JUGBASE_FCCDATASVC_H
#endif  // JUGBASE_EICDATASVC_H
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong, Wouter Deconinck
//
//  GeoSvc.cxx
//
//  Created by Julia Hrdinka on 30/03/15.
//

#include "GeoSvc.h"
#include "GaudiKernel/Service.h"
//#include "GeoConstruction.h"
#include "TGeoManager.h"

#include "DD4hep/Printout.h"

#include "JugBase/ACTSLogger.h"

#include "Acts/Geometry/TrackingGeometry.hpp"
#include "Acts/Plugins/DD4hep/ConvertDD4hepDetector.hpp"

using namespace Gaudi;

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DECLARE_COMPONENT(GeoSvc)

GeoSvc::GeoSvc(const std::string& name, ISvcLocator* svc)
    : base_class(name, svc)
    //, m_incidentSvc("IncidentSvc", "GeoSvc")
    , m_trackingGeo(nullptr)
    , m_dd4hepgeo(0)
    //, m_geant4geo(0)
    , m_log(msgSvc(), name) {}

GeoSvc::~GeoSvc() {
  if (m_dd4hepgeo) {
  if (m_dd4hepGeo != nullptr) {
    try {
      m_dd4hepgeo->destroyInstance();
      m_dd4hepGeo->destroyInstance();
      m_dd4hepgeo = 0;
      m_dd4hepGeo = nullptr;
    } catch (...) {
    }
  }
@@ -41,52 +28,21 @@ GeoSvc::~GeoSvc() {


StatusCode GeoSvc::initialize() {
  StatusCode sc = Service::initialize();
  if (!sc.isSuccess())
  if (!sc.isSuccess()) {
    return sc;
  }
  // Turn off TGeo printouts if appropriate for the msg level
  if (msgLevel() >= MSG::INFO) {
    TGeoManager::SetVerboseLevel(0);
  }
  uint printoutLevel = msgLevel();
  dd4hep::setPrintLevel(dd4hep::PrintLevel(printoutLevel));
  // m_incidentSvc->addListener(this, "GeometryFailure");
  if (buildDD4HepGeo().isFailure()) {
  if (buildDD4HepGeo().isFailure())
    m_log << MSG::ERROR << "Could not build DD4Hep geometry" << endmsg;
  else
  } else {
    m_log << MSG::INFO << "DD4Hep geometry SUCCESSFULLY built" << endmsg;

  // if (buildGeant4Geo().isFailure())
  //  m_log << MSG::ERROR << "Could not build Geant4 geometry" << endmsg;
  // else
  //  m_log << MSG::INFO << "Geant4 geometry SUCCESSFULLY built" << endmsg;
  // if (m_failureFlag) {
  //  return StatusCode::FAILURE;
  //}
  Acts::Logging::Level geoMsgLevel;
  switch (msgLevel()) {
  case (MSG::DEBUG):
    geoMsgLevel = Acts::Logging::DEBUG;
    break;
  case (MSG::VERBOSE):
    geoMsgLevel = Acts::Logging::VERBOSE;
    break;
  case (MSG::INFO):
    geoMsgLevel = Acts::Logging::INFO;
    break;
  case (MSG::WARNING):
    geoMsgLevel = Acts::Logging::WARNING;
    break;
  case (MSG::FATAL):
    geoMsgLevel = Acts::Logging::FATAL;
    break;
  case (MSG::ERROR):
    geoMsgLevel = Acts::Logging::ERROR;
    break;
  default:
    geoMsgLevel = Acts::Logging::VERBOSE;
  }
  m_trackingGeo = std::move(Acts::convertDD4hepDetector(m_dd4hepgeo->world(), geoMsgLevel, Acts::equidistant,
                                                        Acts::equidistant, Acts::equidistant));
  return StatusCode::SUCCESS;
}


@@ -94,38 +50,19 @@ StatusCode GeoSvc::finalize() { return StatusCode::SUCCESS; }


StatusCode GeoSvc::buildDD4HepGeo() {
  // we retrieve the static instance of the DD4HEP::Geometry
  m_dd4hepgeo = &(dd4hep::Detector::getInstance());
  m_dd4hepGeo = &(dd4hep::Detector::getInstance());
  m_dd4hepgeo->addExtension<IGeoSvc>(this);
  m_dd4hepGeo->addExtension<IGeoSvc>(this);

  // load geometry
  for (auto& filename : m_xmlFileNames) {
    m_log << MSG::INFO << "loading geometry from file:  '" << filename << "'" << endmsg;
    m_dd4hepgeo->fromCompact(filename);
    m_dd4hepGeo->fromCompact(filename);
  }
  m_dd4hepgeo->volumeManager();
  m_dd4hepGeo->volumeManager();
  m_dd4hepgeo->apply("DD4hepVolumeManager", 0, 0);
  m_dd4hepGeo->apply("DD4hepVolumeManager", 0, nullptr);
  m_cellid_converter = std::make_shared<const dd4hep::rec::CellIDPositionConverter>(*m_dd4hepgeo);
  return StatusCode::SUCCESS;
}

dd4hep::Detector* GeoSvc::detector() { return (m_dd4hepgeo); }
dd4hep::Detector* GeoSvc::getDetector() { return (m_dd4hepGeo); }

dd4hep::DetElement GeoSvc::getDD4HepGeo() { return (detector()->world()); }
dd4hep::DetElement GeoSvc::getDD4HepGeo() { return (getDetector()->world()); }

//StatusCode GeoSvc::buildGeant4Geo() {
//  std::shared_ptr<G4VUserDetectorConstruction> detector(new det::GeoConstruction(*lcdd()));
//  m_geant4geo = detector;
//  if (m_geant4geo) {
//    return StatusCode::SUCCESS;
//  } else
//    return StatusCode::FAILURE;
//}

//G4VUserDetectorConstruction* GeoSvc::getGeant4Geo() { return (m_geant4geo.get()); }

//void GeoSvc::handle(const Incident& inc) {
//  error() << "Handling incident '" << inc.type() << "'" << endmsg;
//  if (!inc.type().compare("GeometryFailure")) {
//    m_failureFlag = true;
//  }
//}
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong, Wouter Deconinck

//
//  GeoSvc.h
//
@@ -10,7 +13,7 @@
#define GEOSVC_H

// Interface
#include "JugBase/IGeoSvc.h"
#include <k4Interface/IGeoSvc.h>

// Gaudi
#include "GaudiKernel/MsgStream.h"
@@ -22,8 +25,28 @@
#include "DDRec/CellIDPositionConverter.h"
#include "DDRec/SurfaceManager.h"
#include "DDRec/Surface.h"
#include "DD4hep/DD4hepUnits.h"

class GeoSvc : public extends<Service, IGeoSvc> {
private:

  /** DD4hep detector interface class.
   * This is the main dd4hep detector handle.
   * <a href="https://dd4hep.web.cern.ch/dd4hep/reference/classdd4hep_1_1Detector.html">See DD4hep Detector documentation</a>
   */
  dd4hep::Detector* m_dd4hepGeo = nullptr;

  /// XML-files with the detector description
  Gaudi::Property<std::vector<std::string>> m_xmlFileNames{
      this, "detectors", {}, "Detector descriptions XML-files"};

  Gaudi::Property<std::string> _{
      this, "materials", "", "(unused)"};

  /// Gaudi logging output
  MsgStream m_log;

public:
  GeoSvc(const std::string& name, ISvcLocator* svc);
@@ -44,38 +67,24 @@ public:
   */
  virtual dd4hep::DetElement getDD4HepGeo() override;

  virtual std::shared_ptr<const dd4hep::rec::CellIDPositionConverter> cellIDPositionConverter() const {
    return m_cellid_converter;
  }

  /** Get the main dd4hep Detector.
   * Returns the pointer to the main dd4hep detector class.
   * <a href="https://dd4hep.web.cern.ch/dd4hep/reference/classdd4hep_1_1Detector.html">See DD4hep Detector documentation</a>
   */
  virtual dd4hep::Detector* detector() override;
  virtual dd4hep::Detector* getDetector() override;

  virtual dd4hep::Detector* lcdd() {
    return getDetector();
  };

  virtual G4VUserDetectorConstruction* getGeant4Geo() override {
    assert(false && "getGeant4Geo() not implemented");
    return nullptr;
  };

  virtual std::string constantAsString(std::string const& name) override {
    assert(false && "constantAsString() not implemented");
    (void)name;
    return "";
  };

  /** Gets the ACTS tracking geometry.
   */
  virtual std::shared_ptr<const Acts::TrackingGeometry> trackingGeometry() const;

private:

  /// ACTS Tracking Geometry
  std::shared_ptr<const Acts::TrackingGeometry> m_trackingGeo;

  /// Pointer to the interface to the DD4hep geometry
  dd4hep::Detector* m_dd4hepgeo;

  std::shared_ptr<const dd4hep::rec::CellIDPositionConverter> m_cellid_converter = nullptr;

  /// XML-files with the detector description
  Gaudi::Property<std::vector<std::string>> m_xmlFileNames{this, "detectors", {}, "Detector descriptions XML-files"};

  /// output
  MsgStream m_log;
};

inline std::shared_ptr<const Acts::TrackingGeometry> GeoSvc::trackingGeometry() const { return m_trackingGeo; }

#endif // GEOSVC_H
+267 −0
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Sylvester Joosten

#include "ParticleSvc.h"

// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
DECLARE_COMPONENT(ParticleSvc)

namespace {
const IParticleSvc::ParticleMap kParticleMap = {
    {           0, {           0,   0,   0.0            }},  // unknown
    {          11, {          11,  -1,   0.000510998928 }},  // e-
    {         -11, {         -11,   1,   0.000510998928 }},  // e+
    {          13, {          13,  -1,   0.105658357    }},  // mu-
    {         -13, {         -13,   1,   0.105658357    }},  // mu+
    {          22, {          22,   0,   0.0            }},  // gamma
    {         111, {         111,   0,   0.1349766      }},  // pi0
    {         113, {         113,   0,   0.76850        }},  // rho(770)^0
    {         115, {         115,   0,   1.31800        }},  // a_2(1320)^0
    {         211, {         211,   1,   0.1395701      }},  // pi+
    {        -211, {        -211,  -1,   0.1395701      }},  // pi-
    {         213, {         213,   1,   0.76690        }},  // rho+
    {        -213, {        -213,  -1,   0.76690        }},  // rho-
    {         215, {         215,   1,   1.31800        }},  // a_2(1320)+
    {        -215, {        -215,  -1,   1.31800        }},  // a_2(1320)-
    {         221, {         221,   0,   0.54745        }},  // eta
    {         223, {         223,   0,   0.78194        }},  // omega
    {         225, {         225,   0,   1.27500        }},  // f_2(1270)
    {         130, {         130,   0,   0.49767        }},  // KL_0
    {         310, {         310,   0,   0.49767        }},  // KS_0
    {         311, {         311,   0,   0.49767        }},  // K^0
    {        -311, {        -311,   0,   0.49767        }},  // K~^0
    {         313, {         313,   0,   0.89610        }},  // K*(892)^0
    {        -313, {        -313,   0,   0.89610        }},  // K*(892)~^0
    {         315, {         315,   0,   1.43200        }},  // K*_2(1430)^0
    {        -315, {        -315,   0,   1.43200        }},  // K*_2(1430)~^0
    {         321, {         321,   1,   0.49360        }},  // K^+
    {        -321, {        -321,  -1,   0.49360        }},  // K^-
    {         323, {         323,   1,   0.89160        }},  // K*(892)^+
    {        -323, {        -323,  -1,   0.89160        }},  // K*(892)^-
    {         325, {         325,   1,   1.42500        }},  // K*_2(1430)^+
    {        -325, {        -325,  -1,   1.42500        }},  // K*_2(1430)^-
    {         331, {         331,   0,   0.95777        }},  // eta'(958)
    {         333, {         333,   0,   1.01940        }},  // phi(1020)
    {         335, {         335,   0,   1.52500        }},  // f'_2(1525)
    {         411, {         411,   1,   1.86930        }},  // D+
    {        -411, {        -411,  -1,   1.86930        }},  // D-
    {         413, {         413,   1,   2.01000        }},  // D*(2010)^+
    {        -413, {        -413,  -1,   2.01000        }},  // D*(2010)^-
    {         415, {         415,   1,   2.46000        }},  // D*_2(2460)^+
    {        -415, {        -415,  -1,   2.46000        }},  // D*_2(2460)^-
    {         421, {         421,   0,   1.86450        }},  // D^0
    {        -421, {        -421,   0,   1.86450        }},  // D~^0
    {         423, {         423,   0,   2.00670        }},  // D*(2007)^0
    {        -423, {        -423,   0,   2.00670        }},  // D*(2007)~^0
    {         425, {         425,   0,   2.46000        }},  // D*_2(2460)^0
    {        -425, {        -425,   0,   2.46000        }},  // D*_2(2460)~^0
    {         431, {         431,   1,   1.96850        }},  // D_s^+
    {        -431, {        -431,  -1,   1.96850        }},  // D_s^-
    {         433, {         433,   1,   2.11240        }},  // D*_s^+
    {        -433, {        -433,  -1,   2.11240        }},  // D*_s^-
    {         435, {         435,   1,   2.57350        }},  // D*_s2(2573)^+
    {        -435, {        -435,  -1,   2.57350        }},  // D*_s2(2573)^-
    {         441, {         441,   0,   2.97980        }},  // eta_c(1S)
    {         443, {         443,   0,   3.09688        }},  // J/psi(1S)
    {         445, {         445,   0,   3.55620        }},  // chi_c2(1P)
    {         511, {         511,   0,   5.27920        }},  // B^0
    {        -511, {        -511,   0,   5.27920        }},  // B~^0
    {         513, {         513,   0,   5.32480        }},  // B*^0
    {        -513, {        -513,   0,   5.32480        }},  // B*~^0
    {         515, {         515,   0,   5.83000        }},  // B*_2^0
    {        -515, {        -515,   0,   5.83000        }},  // B*_2~^0
    {         521, {         521,   1,   5.27890        }},  // B^+
    {        -521, {        -521,  -1,   5.27890        }},  // B^-
    {         523, {         523,   1,   5.32480        }},  // B*^+
    {        -523, {        -523,  -1,   5.32480        }},  // B*^-
    {         525, {         525,   1,   5.83000        }},  // B*_2^+
    {        -525, {        -525,  -1,   5.83000        }},  // B*_2^-
    {         531, {         531,   0,   5.36930        }},  // B_s^0
    {        -531, {        -531,   0,   5.36930        }},  // B_s~^0
    {         533, {         533,   0,   5.41630        }},  // B*_s^0
    {        -533, {        -533,   0,   5.41630        }},  // B*_s~^0
    {         535, {         535,   0,   6.07000        }},  // B*_s2^0
    {        -535, {        -535,   0,   6.07000        }},  // B*_s2~^0
    {         541, {         541,   1,   6.59400        }},  // B_c^+
    {        -541, {        -541,  -1,   6.59400        }},  // B_c^-
    {         543, {         543,   1,   6.60200        }},  // B*_c^+
    {        -543, {        -543,  -1,   6.60200        }},  // B*_c^-
    {         545, {         545,   1,   7.35000        }},  // B*_c2^+
    {        -545, {        -545,  -1,   7.35000        }},  // B*_c2^-
    {         551, {         551,   0,   9.40000        }},  // eta_b(1S)
    {         553, {         553,   0,   9.46030        }},  // Upsilon(1S)
    {         555, {         555,   0,   9.91320        }},  // chi_b2(1P)
    {         990, {         990,   0,   0.00000        }},  // pomeron
    {        1114, {        1114,  -1,   1.23400        }},  // Delta^-
    {       -1114, {       -1114,   1,   1.23400        }},  // Delta~^+
    {        2112, {        2112,   0,   0.93957        }},  // n
    {       -2112, {       -2112,   0,   0.93957        }},  // n~^0
    {        2114, {        2114,   0,   1.23300        }},  // Delta^0
    {       -2114, {       -2114,   0,   1.23300        }},  // Delta~^0
    {        2212, {        2212,   1,   0.93827        }},  // p^+
    {       -2212, {       -2212,  -1,   0.93827        }},  // p~^-
    {        2214, {        2214,   1,   1.23200        }},  // Delta^+
    {       -2214, {       -2214,  -1,   1.23200        }},  // Delta~^-
    {        2224, {        2224,   2,   1.23100        }},  // Delta^++
    {       -2224, {       -2224,  -2,   1.23100        }},  // Delta~^--
    {        3112, {        3112,  -1,   1.19744        }},  // Sigma^-
    {       -3112, {       -3112,   1,   1.19744        }},  // Sigma~^+
    {        3114, {        3114,  -1,   1.38720        }},  // Sigma*^-
    {       -3114, {       -3114,   1,   1.38720        }},  // Sigma*~^+
    {        3122, {        3122,   0,   1.11568        }},  // Lambda^0
    {       -3122, {       -3122,   0,   1.11568        }},  // Lambda~^0
    {        3212, {        3212,   0,   1.19255        }},  // Sigma^0
    {       -3212, {       -3212,   0,   1.19255        }},  // Sigma~^0
    {        3214, {        3214,   0,   1.38370        }},  // Sigma*^0
    {       -3214, {       -3214,   0,   1.38370        }},  // Sigma*~^0
    {        3222, {        3222,   1,   1.18937        }},  // Sigma^+
    {       -3222, {       -3222,  -1,   1.18937        }},  // Sigma~^-
    {        3224, {        3224,   1,   1.38280        }},  // Sigma*^+
    {       -3224, {       -3224,  -1,   1.38280        }},  // Sigma*~^-
    {        3312, {        3312,  -1,   1.32130        }},  // Xi^-
    {       -3312, {       -3312,   1,   1.32130        }},  // Xi~^+
    {        3314, {        3314,  -1,   1.53500        }},  // Xi*^-
    {       -3314, {       -3314,   1,   1.53500        }},  // Xi*~^+
    {        3322, {        3322,   0,   1.31490        }},  // Xi^0
    {       -3322, {       -3322,   0,   1.31490        }},  // Xi~^0
    {        3324, {        3324,   0,   1.53180        }},  // Xi*^0
    {       -3324, {       -3324,   0,   1.53180        }},  // Xi*~^0
    {        3334, {        3334,  -1,   1.67245        }},  // Omega^-
    {       -3334, {       -3334,   1,   1.67245        }},  // Omega~^+
    {        4112, {        4112,   0,   2.45210        }},  // Sigma_c^0
    {       -4112, {       -4112,   0,   2.45210        }},  // Sigma_c~^0
    {        4114, {        4114,   0,   2.50000        }},  // Sigma*_c^0
    {       -4114, {       -4114,   0,   2.50000        }},  // Sigma*_c~^0
    {        4122, {        4122,   1,   2.28490        }},  // Lambda_c^+
    {       -4122, {       -4122,  -1,   2.28490        }},  // Lambda_c~^-
    {        4132, {        4132,   0,   2.47030        }},  // Xi_c^0
    {       -4132, {       -4132,   0,   2.47030        }},  // Xi_c~^0
    {        4212, {        4212,   1,   2.45350        }},  // Sigma_c^+
    {       -4212, {       -4212,  -1,   2.45350        }},  // Sigma_c~^-
    {        4214, {        4214,   1,   2.50000        }},  // Sigma*_c^+
    {       -4214, {       -4214,  -1,   2.50000        }},  // Sigma*_c~^-
    {        4222, {        4222,   2,   2.45290        }},  // Sigma_c^++
    {       -4222, {       -4222,  -2,   2.45290        }},  // Sigma_c~^--
    {        4224, {        4224,   2,   2.50000        }},  // Sigma*_c^++
    {       -4224, {       -4224,  -2,   2.50000        }},  // Sigma*_c~^--
    {        4232, {        4232,   1,   2.46560        }},  // Xi_c^+
    {       -4232, {       -4232,  -1,   2.46560        }},  // Xi_c~^-
    {        4312, {        4312,   0,   2.55000        }},  // Xi'_c^0
    {       -4312, {       -4312,   0,   2.55000        }},  // Xi'_c~^0
    {        4314, {        4314,   0,   2.63000        }},  // Xi*_c^0
    {       -4314, {       -4314,   0,   2.63000        }},  // Xi*_c~^0
    {        4322, {        4322,   1,   2.55000        }},  // Xi'_c^+
    {       -4322, {       -4322,  -1,   2.55000        }},  // Xi'_c~^-
    {        4324, {        4324,   1,   2.63000        }},  // Xi*_c^+
    {       -4324, {       -4324,  -1,   2.63000        }},  // Xi*_c~^-
    {        4332, {        4332,   0,   2.70400        }},  // Omega_c^0
    {       -4332, {       -4332,   0,   2.70400        }},  // Omega_c~^0
    {        4334, {        4334,   0,   2.80000        }},  // Omega*_c^0
    {       -4334, {       -4334,   0,   2.80000        }},  // Omega*_c~^0
    {        4412, {        4412,   1,   3.59798        }},  // Xi_cc^+
    {       -4412, {       -4412,  -1,   3.59798        }},  // Xi_cc~^-
    {        4414, {        4414,   1,   3.65648        }},  // Xi*_cc^+
    {       -4414, {       -4414,  -1,   3.65648        }},  // Xi*_cc~^-
    {        4422, {        4422,   2,   3.59798        }},  // Xi_cc^++
    {       -4422, {       -4422,  -2,   3.59798        }},  // Xi_cc~^--
    {        4424, {        4424,   2,   3.65648        }},  // Xi*_cc^++
    {       -4424, {       -4424,  -2,   3.65648        }},  // Xi*_cc~^--
    {        4432, {        4432,   1,   3.78663        }},  // Omega_cc^+
    {       -4432, {       -4432,  -1,   3.78663        }},  // Omega_cc~^-
    {        4434, {        4434,   1,   3.82466        }},  // Omega*_cc^+
    {       -4434, {       -4434,  -1,   3.82466        }},  // Omega*_cc~^-
    {        4444, {        4444,   2,   4.91594        }},  // Omega*_ccc^++
    {       -4444, {       -4444,  -2,   4.91594        }},  // Omega*_ccc~^--
    {        5112, {        5112,  -1,   5.80000        }},  // Sigma_b^-
    {       -5112, {       -5112,   1,   5.80000        }},  // Sigma_b~^+
    {        5114, {        5114,  -1,   5.81000        }},  // Sigma*_b^-
    {       -5114, {       -5114,   1,   5.81000        }},  // Sigma*_b~^+
    {        5122, {        5122,   0,   5.64100        }},  // Lambda_b^0
    {       -5122, {       -5122,   0,   5.64100        }},  // Lambda_b~^0
    {        5132, {        5132,  -1,   5.84000        }},  // Xi_b^-
    {       -5132, {       -5132,   1,   5.84000        }},  // Xi_b~^+
    {        5142, {        5142,   0,   7.00575        }},  // Xi_bc^0
    {       -5142, {       -5142,   0,   7.00575        }},  // Xi_bc~^0
    {        5212, {        5212,   0,   5.80000        }},  // Sigma_b^0
    {       -5212, {       -5212,   0,   5.80000        }},  // Sigma_b~^0
    {        5214, {        5214,   0,   5.81000        }},  // Sigma*_b^0
    {       -5214, {       -5214,   0,   5.81000        }},  // Sigma*_b~^0
    {        5222, {        5222,   1,   5.80000        }},  // Sigma_b^+
    {       -5222, {       -5222,  -1,   5.80000        }},  // Sigma_b~^-
    {        5224, {        5224,   1,   5.81000        }},  // Sigma*_b^+
    {       -5224, {       -5224,  -1,   5.81000        }},  // Sigma*_b~^-
    {        5232, {        5232,   0,   5.84000        }},  // Xi_b^0
    {       -5232, {       -5232,   0,   5.84000        }},  // Xi_b~^0
    {        5242, {        5242,   1,   7.00575        }},  // Xi_bc^+
    {       -5242, {       -5242,  -1,   7.00575        }},  // Xi_bc~^-
    {        5312, {        5312,  -1,   5.96000        }},  // Xi'_b^-
    {       -5312, {       -5312,   1,   5.96000        }},  // Xi'_b~^+
    {        5314, {        5314,  -1,   5.97000        }},  // Xi*_b^-
    {       -5314, {       -5314,   1,   5.97000        }},  // Xi*_b~^+
    {        5322, {        5322,   0,   5.96000        }},  // Xi'_b^0
    {       -5322, {       -5322,   0,   5.96000        }},  // Xi'_b~^0
    {        5324, {        5324,   0,   5.97000        }},  // Xi*_b^0
    {       -5324, {       -5324,   0,   5.97000        }},  // Xi*_b~^0
    {        5332, {        5332,  -1,   6.12000        }},  // Omega_b^-
    {       -5332, {       -5332,   1,   6.12000        }},  // Omega_b~^+
    {        5334, {        5334,  -1,   6.13000        }},  // Omega*_b^-
    {       -5334, {       -5334,   1,   6.13000        }},  // Omega*_b~^+
    {        5342, {        5342,   0,   7.19099        }},  // Omega_bc^0
    {       -5342, {       -5342,   0,   7.19099        }},  // Omega_bc~^0
    {        5412, {        5412,   0,   7.03724        }},  // Xi'_bc^0
    {       -5412, {       -5412,   0,   7.03724        }},  // Xi'_bc~^0
    {        5414, {        5414,   0,   7.04850        }},  // Xi*_bc^0
    {       -5414, {       -5414,   0,   7.04850        }},  // Xi*_bc~^0
    {        5422, {        5422,   1,   7.03724        }},  // Xi'_bc^+
    {       -5422, {       -5422,  -1,   7.03724        }},  // Xi'_bc~^-
    {        5424, {        5424,   1,   7.04850        }},  // Xi*_bc^+
    {       -5424, {       -5424,  -1,   7.04850        }},  // Xi*_bc~^-
    {        5432, {        5432,   0,   7.21101        }},  // Omega'_bc^0
    {       -5432, {       -5432,   0,   7.21101        }},  // Omega'_bc~^0
    {        5434, {        5434,   0,   7.21900        }},  // Omega*_bc^0
    {       -5434, {       -5434,   0,   7.21900        }},  // Omega*_bc~^0
    {        5442, {        5442,   1,   8.30945        }},  // Omega_bcc^+
    {       -5442, {       -5442,  -1,   8.30945        }},  // Omega_bcc~^-
    {        5444, {        5444,   1,   8.31325        }},  // Omega*_bcc^+
    {       -5444, {       -5444,  -1,   8.31325        }},  // Omega*_bcc~^-
    {        5512, {        5512,  -1,   10.42272       }},  // Xi_bb^-
    {       -5512, {       -5512,   1,   10.42272       }},  // Xi_bb~^+
    {        5514, {        5514,  -1,   10.44144       }},  // Xi*_bb^-
    {       -5514, {       -5514,   1,   10.44144       }},  // Xi*_bb~^+
    {        5522, {        5522,   0,   10.42272       }},  // Xi_bb^0
    {       -5522, {       -5522,   0,   10.42272       }},  // Xi_bb~^0
    {        5524, {        5524,   0,   10.44144       }},  // Xi*_bb^0
    {       -5524, {       -5524,   0,   10.44144       }},  // Xi*_bb~^0
    {        5532, {        5532,  -1,   10.60209       }},  // Omega_bb^-
    {       -5532, {       -5532,   1,   10.60209       }},  // Omega_bb~^+
    {        5534, {        5534,  -1,   10.61426       }},  // Omega*_bb^-
    {       -5534, {       -5534,   1,   10.61426       }},  // Omega*_bb~^+
    {        5542, {        5542,   0,   11.70767       }},  // Omega_bbc^0
    {       -5542, {       -5542,   0,   11.70767       }},  // Omega_bbc~^0
    {        5544, {        5544,   0,   11.71147       }},  // Omega*_bbc^0
    {       -5544, {       -5544,   0,   11.71147       }},  // Omega*_bbc~^0
    {        5554, {        5554,  -1,   15.11061       }},  // Omega*_bbb^-
    {       -5554, {       -5554,   1,   15.11061       }},  // Omega*_bbb~^+
    {  1000010020, {  1000010020,   1,   1.87561        }},  // Deuterium
    {  1000010030, {  1000010030,   1,   2.80925        }},  // Tritium
    {  1000020030, {  1000020030,   2,   2.80923        }},  // He-3
    {  1000020040, {  1000020040,   2,   3.72742        }},  // Alpha
};
}

ParticleSvc::ParticleSvc(const std::string& name, ISvcLocator* svc)
    : base_class(name, svc), m_particleMap{kParticleMap} {}

ParticleSvc::~ParticleSvc() = default;

StatusCode ParticleSvc::initialize()
{
  StatusCode sc = Service::initialize();
  if (!sc.isSuccess())
  {
    fatal() << "Error initializing ParticleSvc" << endmsg;
    return sc;
  }
  info() << "ParticleSvc initialized successfully" << endmsg;
  return StatusCode::SUCCESS;
}
// SPDX-License-Identifier: LGPL-3.0-or-later
// Copyright (C) 2022 Whitney Armstrong, Sylvester Joosten

#ifndef PARTICLESVC_H
#define PARTICLESVC_H

#include <map>

#include "GaudiKernel/Service.h"

#include "JugBase/IParticleSvc.h"

/** Simple particle service.
 *
 *  This is meant to provide basic particle information for reconstruction purposes.
 *  If particle data is needed, be sure to grab everything you can in an initialization
 *  step. Currently the returned Particle/ParticleMap are by value.
 */
class ParticleSvc : public extends<Service, IParticleSvc> {
public:
  using Particle    = Jug::Base::ParticleData;
  using ParticleMap = std::map<int, Particle>;

  const ParticleMap m_particleMap;

public:
  ParticleSvc(const std::string& name, ISvcLocator* svc);

  virtual ~ParticleSvc();

  virtual StatusCode initialize() final;
  virtual StatusCode finalize() final { return StatusCode::SUCCESS; }

  virtual ParticleMap particleMap() const { return m_particleMap; }
  virtual Particle particle(int pdg) const {
    if (m_particleMap.count(pdg) == 0) {
      // unknown PDG code: fall back to the invalid-particle entry stored at key 0
      return m_particleMap.at(0);
    }
    return m_particleMap.at(pdg);
  }
};

#endif
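The `particle(int)` accessor above implements a lookup-with-fallback: unknown PDG codes return the invalid entry stored under key 0. A minimal standalone sketch of the same pattern, using a hypothetical three-entry subset of the table (the pi^+ entry is not in the excerpt shown here; its mass is the standard PDG value):

```cpp
#include <cassert>
#include <map>

// Sketch of the ParticleSvc lookup pattern: a map keyed by PDG ID,
// with entry 0 acting as the fallback for unknown codes.
struct ParticleData {
  int pdgCode;
  int charge;
  double mass;  // GeV
};

// Hypothetical subset of the full table.
const std::map<int, ParticleData> kMap = {
    {0,    {0,    0, 0.0}},      // fallback entry for unknown PDG codes
    {2212, {2212, 1, 0.93827}},  // proton
    {211,  {211,  1, 0.13957}},  // pi^+
};

ParticleData lookup(int pdg) {
  auto it = kMap.find(pdg);  // single lookup instead of count() followed by at()
  return (it != kMap.end()) ? it->second : kMap.at(0);
}
```

Using `find` avoids the double map lookup (`count` followed by `at`) in the header above; the observable behavior is identical.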
#include "PileupDigiTrackHitMergeTool.h"

#include "podio/EventStore.h"

#include "datamodel/DigiTrackHitAssociationCollection.h"
#include "datamodel/MCParticle.h"
#include "datamodel/MCParticleCollection.h"
#include "datamodel/PositionedTrackHitCollection.h"
#include "datamodel/TrackHitCollection.h"

DECLARE_COMPONENT(PileupDigiTrackHitMergeTool)
DECLARE_COMPONENT_WITH_ID(PileupDigiTrackHitMergeTool, "PileupDigiTrackHitMergeTool")

PileupDigiTrackHitMergeTool::PileupDigiTrackHitMergeTool(const std::string& aType, const std::string& aName,
                                                         const IInterface* aParent)
    : GaudiTool(aType, aName, aParent) {
  declareInterface<IEDMMergeTool>(this);
  declareProperty("signalPositionedHits", m_hitsSignal);
  declareProperty("signalDigiHits", m_posHitsSignal);
  declareProperty("signalParticles", m_particlesSignal);
  declareProperty("mergedPositionedHits", m_hitsMerged);
  declareProperty("mergedDigiHits", m_digiHitsMerged);
  declareProperty("mergedParticles", m_particlesMerged);
}

StatusCode PileupDigiTrackHitMergeTool::initialize() { return StatusCode::SUCCESS; }

StatusCode PileupDigiTrackHitMergeTool::finalize() { return StatusCode::SUCCESS; }

StatusCode PileupDigiTrackHitMergeTool::readPileupCollection(podio::EventStore& store) {
  // local pointers, to be filled by the event store
  const fcc::PositionedTrackHitCollection* hitCollection;
  const fcc::DigiTrackHitAssociationCollection* digiHitCollection;
  const fcc::MCParticleCollection* particleCollection;

  // get collection address and store it in container
  bool hitCollectionPresent = store.get(m_pileupHitsBranchName, hitCollection);
  if (hitCollectionPresent) {
    m_hitCollections.push_back(hitCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupHitsBranchName << endmsg;
    return StatusCode::FAILURE;
  }

  /// as above, for the digi hit association collection
  bool digiHitCollectionPresent = store.get(m_pileupPosHitsBranchName, digiHitCollection);
  if (digiHitCollectionPresent) {
    m_digiTrackHitsCollections.push_back(digiHitCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupPosHitsBranchName << endmsg;
    return StatusCode::FAILURE;
  }

  /// as above, for the particle collection (if requested)
  if (m_mergeParticles) {
    bool particleCollectionPresent = store.get(m_pileupParticleBranchName, particleCollection);
    if (particleCollectionPresent) {
      m_particleCollections.push_back(particleCollection);
    } else {
      warning() << "No collection could be read from branch " << m_pileupParticleBranchName << endmsg;
      return StatusCode::FAILURE;
    }
  }

  return StatusCode::SUCCESS;
}

StatusCode PileupDigiTrackHitMergeTool::readSignal() {
  // get collection from event store
  auto collHitsSig = m_hitsSignal.get();
  auto collPosHitsSig = m_posHitsSignal.get();
  auto collPartSig = m_particlesSignal.get();
  // store them in internal container
  m_hitCollections.push_back(collHitsSig);
  m_digiTrackHitsCollections.push_back(collPosHitsSig);
  // check if particles should be merged
  if (m_mergeParticles) {
    m_particleCollections.push_back(collPartSig);
  }

  return StatusCode::SUCCESS;
}

StatusCode PileupDigiTrackHitMergeTool::mergeCollections() {

  // ownership given to data service at end of execute
  fcc::PositionedTrackHitCollection* collHitsMerged = new fcc::PositionedTrackHitCollection();
  fcc::DigiTrackHitAssociationCollection* collDigiHitsMerged = new fcc::DigiTrackHitAssociationCollection();
  fcc::MCParticleCollection* collParticlesMerged = new fcc::MCParticleCollection();

  unsigned int collectionCounter = 0;
  for (size_t j = 0; j < m_digiTrackHitsCollections.size(); j++) {
    unsigned int trackIDCounter = 0;
    std::unordered_map<unsigned, unsigned> oldToNewTrackIDs;

    auto offset = collectionCounter * m_trackIDCollectionOffset;
    auto digiHitColl = m_digiTrackHitsCollections.at(j);
    auto hitColl = m_hitCollections.at(j);
    for (int i = 0; i < digiHitColl->size(); i++) {
      auto clon = hitColl->at(i).clone();
      // add pileup vertex counter with an offset
      // i.e. for the signal event, 'bits' is just the trackID taken from geant
      // for the n-th pileup event, 'bits' is the trackID + n * offset
      // offset needs to be big enough to ensure uniqueness of trackID
      if (isTrackerHit(hitColl->at(i).cellId())) {
        if (trackIDCounter > m_trackIDCollectionOffset) {
          error() << "Event contains too many tracks to guarantee a unique trackID";
          error() << " The offset width or trackID field size needs to be adjusted!" << endmsg;
          return StatusCode::FAILURE;
        }
        auto search = oldToNewTrackIDs.find(clon.bits());
        if (search != oldToNewTrackIDs.end()) {
          // was already present - use already created new trackID
          unsigned newTrackID = (search->second + offset);
          clon.bits(newTrackID);
        } else {
          // was not already present - create new trackID
          oldToNewTrackIDs[clon.bits()] = trackIDCounter;

          unsigned newTrackID = (trackIDCounter + offset);
          clon.bits(newTrackID);

          trackIDCounter++;
        }
      }
      auto newDigiHit = digiHitColl->at(i).clone();
      newDigiHit.hit(clon);
      collHitsMerged->push_back(clon);
      collDigiHitsMerged->push_back(newDigiHit);
    }
    ///@todo handle the case where not only tracker hits are produced
    if (m_mergeParticles) {
      auto partColl = m_particleCollections.at(j);
      for (int i = 0; i < partColl->size(); i++) {
        auto clonParticle = partColl->at(i).clone();
        auto search = oldToNewTrackIDs.find(clonParticle.bits());
        if (search != oldToNewTrackIDs.end()) {
          // was already present - reuse the previously assigned trackID
          clonParticle.bits(search->second + offset);
        } else {
          // was not already present - create new trackID
          // this can happen if particles do not reach the tracker
          oldToNewTrackIDs[clonParticle.bits()] = trackIDCounter;

          unsigned newTrackID = (trackIDCounter + offset);
          clonParticle.bits(newTrackID);

          trackIDCounter++;
        }
        collParticlesMerged->push_back(clonParticle);
      }
    }
    ++collectionCounter;
  }
  m_hitsMerged.put(collHitsMerged);
  m_digiHitsMerged.put(collDigiHitsMerged);
  m_particlesMerged.put(collParticlesMerged);

  m_hitCollections.clear();
  m_digiTrackHitsCollections.clear();
  m_particleCollections.clear();

  return StatusCode::SUCCESS;
}
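The trackID bookkeeping in `mergeCollections` above — compact per-collection IDs via `oldToNewTrackIDs`, plus a per-collection block offset — can be sketched in isolation. The `Remapper` helper below is hypothetical (the real tool stores the result in the hit's `bits()` field), and the toy offset of 100 stands in for the tool's 2.5e6:

```cpp
#include <unordered_map>

// Sketch of the trackID remapping used above: the n-th merged collection gets
// the ID range [n*offset, (n+1)*offset), so IDs stay unique across pileup
// events. Within one collection, old Geant trackIDs are compacted to 0,1,2,...
struct Remapper {
  unsigned offset;                                 // per-collection ID range width
  unsigned counter = 0;                            // next compact ID in this collection
  std::unordered_map<unsigned, unsigned> oldToNew; // old Geant ID -> compact ID

  unsigned remap(unsigned oldID, unsigned collectionIndex) {
    auto search = oldToNew.find(oldID);
    if (search == oldToNew.end()) {
      // first time this old ID is seen: assign the next compact ID
      search = oldToNew.emplace(oldID, counter++).first;
    }
    return search->second + collectionIndex * offset;
  }
};
```

Repeated hits from the same Geant track map to the same new ID, while the same old ID in a different pileup event lands in a disjoint block.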
#ifndef JUGBASE_PILEUPDIGITRACKERHITSMERGETOOL_H
#define JUGBASE_PILEUPDIGITRACKERHITSMERGETOOL_H

// Gaudi
#include "GaudiAlg/GaudiTool.h"

// FCCSW
#include "JugBase/DataHandle.h"
#include "JugBase/IEDMMergeTool.h"

namespace fcc {
class PositionedTrackHit;
class PositionedTrackHitCollection;
class DigiTrackHitAssociation;
class DigiTrackHitAssociationCollection;
class MCParticle;
class MCParticleCollection;
}

/** @class PileupDigiTrackHitMergeTool
 *
 * Implementation of the MergeTool for digitization hits and, if requested, simulated particles.
 * While merging, this algorithm tries to keep the trackIDs unique by adding the pileup event number with an offset.
 * This should be transparent, but the trackIDs will be non-consecutive.
 *
 */

class PileupDigiTrackHitMergeTool : public GaudiTool, virtual public IEDMMergeTool {
public:
  explicit PileupDigiTrackHitMergeTool(const std::string& aType, const std::string& aName, const IInterface* aParent);
  virtual ~PileupDigiTrackHitMergeTool() = default;

  virtual StatusCode initialize() override final;

  virtual StatusCode finalize() override final;

  /// fill the member containers of collection pointers in this class with a collection from the event store
  virtual StatusCode readPileupCollection(podio::EventStore& store) override final;

  /// merge the collections in the member container and put them on the event store
  virtual StatusCode mergeCollections() override final;

  /// fill the member container of collection pointers with the signal collections from the event store
  virtual StatusCode readSignal() override final;

private:
  /// Name of the branch from which to read pileup collection
  Gaudi::Property<std::string> m_pileupHitsBranchName{this, "pileupHitsBranch", "positionedHits",
                                                      "Name of the branch from which to read pileup collection"};
  /// Name of the branch from which to read pileup collection
  Gaudi::Property<std::string> m_pileupPosHitsBranchName{this, "pileupDigiHitsBranch", "digiHits",
                                                         "Name of the branch from which to read pileup collection"};

  /// Name of the branch from which to read pileup collection
  Gaudi::Property<std::string> m_pileupParticleBranchName{this, "pileupParticleBranch", "simParticles",
                                                          "Name of the branch from which to read pileup collection"};

  /// Flag indicating if particles are present and should be merged
  Gaudi::Property<bool> m_mergeParticles{this, "mergeParticles", false,
                                         "Flag indicating if particles are present and should be merged"};

  /// internal container to keep track of collections to be merged
  std::vector<const fcc::PositionedTrackHitCollection*> m_hitCollections;
  /// internal container to keep track of collections to be merged
  std::vector<const fcc::DigiTrackHitAssociationCollection*> m_digiTrackHitsCollections;
  /// internal container to keep track of collections to be merged
  std::vector<const fcc::MCParticleCollection*> m_particleCollections;

  /// Output of this tool: merged collection
  DataHandle<fcc::PositionedTrackHitCollection> m_hitsMerged{"overlay/hits", Gaudi::DataHandle::Writer, this};
  /// Output of this tool: merged collection
  DataHandle<fcc::DigiTrackHitAssociationCollection> m_digiHitsMerged{"overlay/digiHits", Gaudi::DataHandle::Writer,
                                                                      this};
  /// Output of this tool: merged collection
  DataHandle<fcc::MCParticleCollection> m_particlesMerged{"overlay/particles", Gaudi::DataHandle::Writer, this};

  /// input to this tool: signal collection
  DataHandle<fcc::PositionedTrackHitCollection> m_hitsSignal{"overlay/signalHits", Gaudi::DataHandle::Reader, this};
  /// input to this tool: signal collection
  DataHandle<fcc::DigiTrackHitAssociationCollection> m_posHitsSignal{"overlay/signalDigiHits",
                                                                     Gaudi::DataHandle::Reader, this};
  /// input to this tool: signal collection
  DataHandle<fcc::MCParticleCollection> m_particlesSignal{"overlay/signalParticles", Gaudi::DataHandle::Reader, this};

  /// offset with which the pileup event number is added to the trackID
  const unsigned int m_trackIDCollectionOffset = 2500000;  // 2.5e6, written as an integer literal

  /// Mask to obtain system ID
  const unsigned long long m_systemMask = 0xf;

  /// Private function internally used to check if a hit is a tracker hit
  bool isTrackerHit(unsigned long long cellId) const;
};

inline bool PileupDigiTrackHitMergeTool::isTrackerHit(unsigned long long cellId) const {
  return ((cellId & m_systemMask) < 5);
}

#endif  // JUGBASE_PILEUPDIGITRACKERHITSMERGETOOL_H
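`isTrackerHit` above decodes the system ID from the lowest four bits of the 64-bit cellID (`m_systemMask = 0xf`) and classifies systems 0–4 as trackers. A standalone sketch of the same bit test (the 4-bit field width and the `< 5` cut come from the code above; the specific cellID values are illustrative only):

```cpp
// Sketch of the isTrackerHit() check above: the system ID occupies the
// lowest 4 bits of the cellID, and systems 0..4 are tracker systems.
constexpr unsigned long long kSystemMask = 0xf;

constexpr bool isTrackerHit(unsigned long long cellId) {
  return (cellId & kSystemMask) < 5;
}
```

Everything above bit 3 (subdetector layer, module, cell, ...) is masked off, so only the system field decides the classification.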
#include "PileupHitMergeTool.h"

#include "podio/EventStore.h"

#include "datamodel/CaloHitCollection.h"
#include "datamodel/PositionedCaloHitCollection.h"
#include "datamodel/PositionedTrackHitCollection.h"
#include "datamodel/TrackHitCollection.h"

typedef PileupHitMergeTool<fcc::CaloHitCollection, fcc::PositionedCaloHitCollection> PileupCaloHitMergeTool;
typedef PileupHitMergeTool<fcc::TrackHitCollection, fcc::PositionedTrackHitCollection> PileupTrackHitMergeTool;
DECLARE_COMPONENT(PileupTrackHitMergeTool)
DECLARE_COMPONENT_WITH_ID(PileupTrackHitMergeTool, "PileupTrackHitMergeTool")
DECLARE_COMPONENT(PileupCaloHitMergeTool)
DECLARE_COMPONENT_WITH_ID(PileupCaloHitMergeTool, "PileupCaloHitMergeTool")

template <class Hits, class PositionedHits>
PileupHitMergeTool<Hits, PositionedHits>::PileupHitMergeTool(const std::string& aType, const std::string& aName,
                                                             const IInterface* aParent)
    : GaudiTool(aType, aName, aParent) {
  declareInterface<IEDMMergeTool>(this);
  declareProperty("signalHits", m_hitsSignal);
  declareProperty("signalPositionedHits", m_posHitsSignal);
  declareProperty("mergedHits", m_hitsMerged);
  declareProperty("mergedPositionedHits", m_posHitsMerged);
}

template <class Hits, class PositionedHits>
StatusCode PileupHitMergeTool<Hits, PositionedHits>::initialize() {
  return StatusCode::SUCCESS;
}

template <class Hits, class PositionedHits>
StatusCode PileupHitMergeTool<Hits, PositionedHits>::finalize() {
  return StatusCode::SUCCESS;
}

template <class Hits, class PositionedHits>
StatusCode PileupHitMergeTool<Hits, PositionedHits>::readPileupCollection(podio::EventStore& store) {
  // local pointers, to be filled by the event store
  const Hits* hitCollection;
  const PositionedHits* posHitCollection;

  // get collection address and store it in container
  bool hitCollectionPresent = store.get(m_pileupHitsBranchName, hitCollection);
  if (hitCollectionPresent) {
    m_hitCollections.push_back(hitCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupHitsBranchName << endmsg;
    return StatusCode::FAILURE;
  }

  /// as above, for the positioned collection
  bool posHitCollectionPresent = store.get(m_pileupPosHitsBranchName, posHitCollection);
  if (posHitCollectionPresent) {
    m_posHitCollections.push_back(posHitCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupPosHitsBranchName << endmsg;
    return StatusCode::FAILURE;
  }

  return StatusCode::SUCCESS;
}

template <class Hits, class PositionedHits>
StatusCode PileupHitMergeTool<Hits, PositionedHits>::readSignal() {
  // get collection from the event store
  auto collHitsSig = m_hitsSignal.get();
  auto collPosHitsSig = m_posHitsSignal.get();

  // store them in internal container
  m_hitCollections.push_back(collHitsSig);
  m_posHitCollections.push_back(collPosHitsSig);

  return StatusCode::SUCCESS;
}

template <class Hits, class PositionedHits>
StatusCode PileupHitMergeTool<Hits, PositionedHits>::mergeCollections() {

  // ownership given to data service at end of execute
  Hits* collHitsMerged = new Hits();
  PositionedHits* collPosHitsMerged = new PositionedHits();

  unsigned int collectionCounter = 0;
  for (auto hitColl : m_hitCollections) {
    // copy hits
    for (const auto elem : *hitColl) {

      auto clon = elem.clone();
      // add pileup vertex counter with an offset
      // i.e. for the signal event, 'bits' is just the trackID taken from geant
      // for the n-th pileup event, 'bits' is the trackID + n * offset
      // offset needs to be big enough to ensure uniqueness of trackID
      if (elem.bits() > m_trackIDCollectionOffset) {
        error() << "Event contains too many tracks to guarantee a unique trackID";
        error() << " The offset width or trackID field size needs to be adjusted!" << endmsg;
        return StatusCode::FAILURE;
      }

      clon.bits(clon.bits() + collectionCounter * m_trackIDCollectionOffset);
      collHitsMerged->push_back(clon);
    }
    ++collectionCounter;
  }
  for (auto posHitColl : m_posHitCollections) {
    // copy positioned hits
    for (const auto elem : *posHitColl) {
      collPosHitsMerged->push_back(elem.clone());
    }
  }

  m_hitsMerged.put(collHitsMerged);
  m_posHitsMerged.put(collPosHitsMerged);

  m_hitCollections.clear();
  m_posHitCollections.clear();
  return StatusCode::SUCCESS;
}
#ifndef JUGBASE_PILEUPTRACKERHITSMERGETOOL_H
#define JUGBASE_PILEUPTRACKERHITSMERGETOOL_H

// Gaudi
#include "GaudiAlg/GaudiTool.h"

// FCCSW
#include "JugBase/DataHandle.h"
#include "JugBase/IEDMMergeTool.h"

namespace fcc {
class HitCollection;
class PositionedHitCollection;
}

/** @class PileupHitMergeTool
 *
 * Implementation of the MergeTool for *Hits and *PositionedHits, templated to give versions for Tracker / Calorimeter.
 * While merging, this algorithm tries to keep the trackIDs unique by adding the pileup event number with an offset.
 * This should be transparent, but the trackIDs will be non-consecutive.
 *
 *
 */

template <class Hits, class PositionedHits>
class PileupHitMergeTool : public GaudiTool, virtual public IEDMMergeTool {
public:
  explicit PileupHitMergeTool(const std::string& aType, const std::string& aName, const IInterface* aParent);
  virtual ~PileupHitMergeTool() = default;

  virtual StatusCode initialize() override final;

  virtual StatusCode finalize() override final;

  /// fill the member containers of collection pointers in this class with a collection from the event store
  virtual StatusCode readPileupCollection(podio::EventStore& store) override final;

  /// merge the collections in the member container and put them on the event store
  virtual StatusCode mergeCollections() override final;

  /// fill the member container of collection pointers with the signal collections from the event store
  virtual StatusCode readSignal() override final;

private:
  /// Name of the branch from which to read pileup collection
  Gaudi::Property<std::string> m_pileupHitsBranchName{this, "pileupHitsBranch", "hits",
                                                      "Name of the branch from which to read pileup collection"};
  /// Name of the branch from which to read pileup collection
  Gaudi::Property<std::string> m_pileupPosHitsBranchName{this, "pileupPositionedHitsBranch", "positionedHits",
                                                         "Name of the branch from which to read pileup collection"};

  /// internal container to keep track of the hit collections to be merged
  std::vector<const Hits*> m_hitCollections;
  /// internal container to keep track of the positioned hit collections to be merged
  std::vector<const PositionedHits*> m_posHitCollections;

  /// Output of this tool: merged collection
  DataHandle<Hits> m_hitsMerged{"overlay/hits", Gaudi::DataHandle::Writer, this};
  /// Output of this tool: merged collection
  DataHandle<PositionedHits> m_posHitsMerged{"overlay/positionedHits", Gaudi::DataHandle::Writer, this};

  /// input to this tool: signal collection
  DataHandle<Hits> m_hitsSignal{"overlay/signalHits", Gaudi::DataHandle::Reader, this};
  /// input to this tool: signal collection
  DataHandle<PositionedHits> m_posHitsSignal{"overlay/signalPositionedHits", Gaudi::DataHandle::Reader, this};

  /// offset with which the pileup event number is added to the trackID 
  const unsigned int m_trackIDCollectionOffset = 2 << 20;
};

#endif  // JUGBASE_PILEUPTRACKERHITSMERGETOOL_H
PileupOverlayAlg.cpp
+0 −88
#include "PileupOverlayAlg.h"

#include "JugBase/IEDMMergeTool.h"

DECLARE_COMPONENT(PileupOverlayAlg)

PileupOverlayAlg::PileupOverlayAlg(const std::string& aName, ISvcLocator* aSvcLoc)
    : GaudiAlgorithm(aName, aSvcLoc), m_store(), m_reader() {
  declareProperty("PileUpTool", m_pileUpTool);
}

StatusCode PileupOverlayAlg::initialize() {

  for (auto& toolname : m_saveToolNames) {
    m_mergeTools.push_back(tool<IEDMMergeTool>(toolname));
  }
  m_minBiasEventIndex = 0;
  StatusCode sc = GaudiAlgorithm::initialize();
  if (sc.isFailure()) return sc;
  IRndmGenSvc* randSvc = svc<IRndmGenSvc>("RndmGenSvc", true);
  sc = m_flatDist.initialize(randSvc, Rndm::Flat(0., 1.));

  m_pileupFileIndex = 0;
  if (m_doShuffleInputFiles) {
    m_pileupFileIndex = static_cast<unsigned int>(m_flatDist() * m_pileupFilenames.size());
  }
  m_reader.openFiles({m_pileupFilenames[m_pileupFileIndex]});
  m_store.setReader(&m_reader);
  return sc;
}

StatusCode PileupOverlayAlg::finalize() {
  StatusCode sc = GaudiAlgorithm::finalize();
  return sc;
}

StatusCode PileupOverlayAlg::execute() {
  unsigned nEvents = m_reader.getEntries();

  if (!m_noSignal) {
    for (auto& tool : m_mergeTools) {
      tool->readSignal();
    }
  }

  const unsigned int numPileUp = m_pileUpTool->numberOfPileUp();
  for (unsigned iev = 0; iev < numPileUp; ++iev) {

    // introduce some randomness into min bias event selection
    // to avoid bias
    if (m_randomizePileup) {
      if (m_flatDist() < m_skipProbability) {
        ++m_minBiasEventIndex;
      }
    }
    m_reader.goToEvent(m_minBiasEventIndex);

    verbose() << "Reading in pileup event #" << m_minBiasEventIndex << " from pool #" << m_pileupFileIndex << " ..."
              << endmsg;
    for (auto& tool : m_mergeTools) {
      tool->readPileupCollection(m_store);
    }

    ++m_minBiasEventIndex;
    if (m_minBiasEventIndex >= nEvents) {
      m_minBiasEventIndex = 0;
      if (m_doShuffleInputFiles) {
        m_pileupFileIndex = static_cast<unsigned int>(m_flatDist() * m_pileupFilenames.size());
      } else {
        m_pileupFileIndex = (m_pileupFileIndex + 1) % m_pileupFilenames.size();
      }
      m_store.clearCaches();
      m_reader.closeFiles();
      verbose() << "switching to pileup file " << m_pileupFilenames[m_pileupFileIndex] << endmsg;
      m_reader.openFiles({m_pileupFilenames[m_pileupFileIndex]});
      nEvents = m_reader.getEntries();
    }
    m_store.clearCaches();
  }

  for (auto& tool : m_mergeTools) {
    tool->mergeCollections();
  }

  return StatusCode::SUCCESS;
}
PileupOverlayAlg.h
+0 −73
#ifndef JUGBASE_PILEUPOVERLAYALG_H
#define JUGBASE_PILEUPOVERLAYALG_H

#include "JugBase/DataHandle.h"
#include "JugBase/IEDMMergeTool.h"
#include "JugBase/IPileUpTool.h"
#include "GaudiAlg/GaudiAlgorithm.h"
#include "GaudiKernel/RndmGenerators.h"

#include "podio/EventStore.h"
#include "podio/ROOTReader.h"

// forward declarations
class IEDMMergeTool;

/** @class PileupOverlayAlg
 *  Algorithm for pileup I/O and merging
 *
 *  The main purpose of this class is to keep a list of merge tools corresponding to different
 *  datatypes, open a file, and let the tools read pileup data and merge it with the signal.
 *  It is thus flexible with regard to the data collection types, as everything is delegated to
 *  tools with a very general interface. The algorithm itself keeps track of the number of
 *  pileup events (via the pileup tool) and of the current position in the minimum bias pool
 *  (which can be randomized by skipping events with a probability set in the options file).
 */

class PileupOverlayAlg : public GaudiAlgorithm {
public:
  explicit PileupOverlayAlg(const std::string&, ISvcLocator*);
  virtual ~PileupOverlayAlg() = default;
  virtual StatusCode initialize() override final;
  virtual StatusCode execute() override final;
  virtual StatusCode finalize() override final;

private:
  /// flag to determine whether to randomly skip events when reading the pileup store
  Gaudi::Property<bool> m_randomizePileup{
      this, "randomizePileup", false,
      "flag to determine whether to randomly skip events when reading the pileup store"};
  /// probability of skipping the next event in randomized read
  Gaudi::Property<float> m_skipProbability{this, "skipProbability", 0.5, "Probability with which to skip next event"};
  /// random number generator for randomizing pileup reads
  Rndm::Numbers m_flatDist;
  /// Pileup Tool to specify the number of minimum bias events
  ToolHandle<IPileUpTool> m_pileUpTool{"ConstPileUp/PileUpTool", this};

  /// vector of tool interface pointers, to be replaced with ToolHandleArray
  std::vector<IEDMMergeTool*> m_mergeTools;
  /// names of the merge tools
  Gaudi::Property<std::vector<std::string>> m_saveToolNames{this, "mergeTools", {}, "names of the merge tools"};

  // list of minimum bias data readers / stores
  // once the root reader supports multiple files
  // this algo should take advantage of it
  /// list of filenames for the minimum bias pools
  Gaudi::Property<std::vector<std::string>> m_pileupFilenames{
      this, "pileupFilenames", {"min_bias_pool.root"}, "Name of File containing pileup events"};
  Gaudi::Property<bool> m_doShuffleInputFiles{this, "doShuffleInputFiles", false, "Shuffle list of input files for additional randomization"};

  Gaudi::Property<bool> m_noSignal{this, "noSignal", false, "Set to true if you don't want to provide a signal collection"};
  /// store for the minimum bias file
  podio::EventStore m_store;
  /// reader for the minimum bias file
  podio::ROOTReader m_reader;

  /// event index within current minimum bias pool
  unsigned int m_minBiasEventIndex;
  /// index of the current minimum bias pool
  unsigned int m_pileupFileIndex;
};

#endif /* JUGBASE_PILEUPOVERLAYALG_H */
PileupParticlesMergeTool.cpp
+0 −106
#include "PileupParticlesMergeTool.h"

#include "podio/EventStore.h"

#include "datamodel/GenVertexCollection.h"
#include "datamodel/MCParticleCollection.h"

DECLARE_COMPONENT(PileupParticlesMergeTool)

PileupParticlesMergeTool::PileupParticlesMergeTool(const std::string& aType, const std::string& aName,
                                                   const IInterface* aParent)
    : GaudiTool(aType, aName, aParent) {
  declareInterface<IEDMMergeTool>(this);
  declareProperty("signalGenVertices", m_vertIn);
  declareProperty("signalGenParticles", m_partIn);
  declareProperty("mergedGenParticles", m_partOut);
  declareProperty("mergedGenVertices", m_vertOut);
}

StatusCode PileupParticlesMergeTool::initialize() { return StatusCode::SUCCESS; }

StatusCode PileupParticlesMergeTool::finalize() { return StatusCode::SUCCESS; }

StatusCode PileupParticlesMergeTool::readPileupCollection(podio::EventStore& store) {
  const fcc::MCParticleCollection* mcParticleCollection;
  const fcc::GenVertexCollection* genVertexCollection;

  bool mcParticleCollectionPresent = store.get(m_pileupGenParticlesBranchName, mcParticleCollection);
  if (mcParticleCollectionPresent) {
    debug() << "size of collection about to be read " << mcParticleCollection->size() << endmsg;
    m_MCParticleCollections.push_back(mcParticleCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupGenParticlesBranchName << endmsg;
  }

  bool genVertexCollectionPresent = store.get(m_pileupGenVerticesBranchName, genVertexCollection);
  if (genVertexCollectionPresent) {
    m_GenVertexCollections.push_back(genVertexCollection);
  } else {
    warning() << "No collection could be read from branch " << m_pileupGenVerticesBranchName << endmsg;
  }

  return StatusCode::SUCCESS;
}

StatusCode PileupParticlesMergeTool::readSignal() {
  auto collVSig = m_vertIn.get();
  auto collPSig = m_partIn.get();

  m_MCParticleCollections.push_back(collPSig);
  m_GenVertexCollections.push_back(collVSig);

  return StatusCode::SUCCESS;
}

StatusCode PileupParticlesMergeTool::mergeCollections() {

  debug() << "merge " << m_GenVertexCollections.size() << "  collections ..." << endmsg;

  // ownership given to data service at end of execute
  fcc::MCParticleCollection* collPOut = new fcc::MCParticleCollection();
  fcc::GenVertexCollection* collVOut = new fcc::GenVertexCollection();

  // need to keep track of the accumulated length
  // of collections already merged
  unsigned int collectionOffset = 0;

  // merge vertices

  for (auto genVertexColl : m_GenVertexCollections) {
    debug() << " traverse collection of size " << genVertexColl->size() << endmsg;
    for (const auto elem : *genVertexColl) {
      collVOut->push_back(elem.clone());
    }
  }
  // merge particles, keeping the references up to date
  for (unsigned int collCounter = 0; collCounter < m_MCParticleCollections.size(); ++collCounter) {
    auto mcPartColl = m_MCParticleCollections[collCounter];
    for (const auto elem : *mcPartColl) {
      auto newPart = fcc::MCParticle(elem.core());
      if (elem.startVertex().isAvailable()) {
        // update reference: find new index in merged collection
        // add offset since the signal particles are copied in new collection first
        unsigned int newIdStart = elem.startVertex().getObjectID().index + collectionOffset;
        // update startVertex
        newPart.startVertex((*collVOut)[newIdStart]);
      }
      if (elem.endVertex().isAvailable()) {
        // update reference: find new index in merged collection
        // add offset since the signal particles are copied in new collection first
        unsigned int newIdEnd = elem.endVertex().getObjectID().index + collectionOffset;
        // update endVertex
        newPart.endVertex((*collVOut)[newIdEnd]);
      }
      collPOut->push_back(newPart);
    }
    collectionOffset += m_GenVertexCollections[collCounter]->size();
  }

  m_vertOut.put(collVOut);
  m_partOut.put(collPOut);

  m_MCParticleCollections.clear();
  m_GenVertexCollections.clear();
  return StatusCode::SUCCESS;
}
PileupParticlesMergeTool.h
+0 −51
#ifndef JUGBASE_PILEUPPARTICLESMERGETOOL_H
#define JUGBASE_PILEUPPARTICLESMERGETOOL_H

// Gaudi
#include "GaudiAlg/GaudiTool.h"

// FCCSW
#include "JugBase/DataHandle.h"
#include "JugBase/IEDMMergeTool.h"

namespace fcc {
class MCParticleCollection;
class GenVertexCollection;
}

/** @class PileupParticlesMergeTool
 *
 * Implementation of the EDMMergeTool interface used for pileup overlay
 * of generated particles. Keeps track of the association between particles and vertices.
 */

class PileupParticlesMergeTool : public GaudiTool, virtual public IEDMMergeTool {
public:
  explicit PileupParticlesMergeTool(const std::string& aType, const std::string& aName, const IInterface* aParent);
  virtual ~PileupParticlesMergeTool() = default;

  virtual StatusCode initialize() override final;

  virtual StatusCode finalize() override final;

  virtual StatusCode readPileupCollection(podio::EventStore& store) override final;

  virtual StatusCode mergeCollections() override final;

  virtual StatusCode readSignal() override final;

private:
  Gaudi::Property<std::string> m_pileupGenParticlesBranchName{this, "genParticlesBranch", "genParticles", ""};
  Gaudi::Property<std::string> m_pileupGenVerticesBranchName{this, "genVerticesBranch", "genVertices", ""};

  std::vector<const fcc::MCParticleCollection*> m_MCParticleCollections;
  std::vector<const fcc::GenVertexCollection*> m_GenVertexCollections;

  DataHandle<fcc::GenVertexCollection> m_vertOut{"overlay/allGenVertices", Gaudi::DataHandle::Writer, this};
  DataHandle<fcc::MCParticleCollection> m_partOut{"overlay/allGenParticles", Gaudi::DataHandle::Writer, this};

  DataHandle<fcc::GenVertexCollection> m_vertIn{"overlay/signalGenVertices", Gaudi::DataHandle::Reader, this};
  DataHandle<fcc::MCParticleCollection> m_partIn{"overlay/signalGenParticles", Gaudi::DataHandle::Reader, this};
};

#endif  // JUGBASE_PILEUPPARTICLESMERGETOOL_H
PodioInput.cpp
+0 −50
#include "PodioInput.h"

#include "TFile.h"
#include "TROOT.h"

#include "JugBase/DataWrapper.h"
#include "JugBase/PodioDataSvc.h"

DECLARE_COMPONENT(PodioInput)

PodioInput::PodioInput(const std::string& name, ISvcLocator* svcLoc) : GaudiAlgorithm(name, svcLoc) {}

StatusCode PodioInput::initialize() {
  if (GaudiAlgorithm::initialize().isFailure()) return StatusCode::FAILURE;

  // check whether we have the PodioEvtSvc active
  m_podioDataSvc = dynamic_cast<PodioDataSvc*>(evtSvc().get());
  if (nullptr == m_podioDataSvc) return StatusCode::FAILURE;

  auto idTable = m_podioDataSvc->getCollectionIDs();
  for (auto& name : m_collectionNames) {
    debug() << "Finding collection " << name << " in collection registry." << endmsg;
    if (!idTable->present(name)) {
      error() << "Requested product " << name << " not found." << endmsg;
      return StatusCode::FAILURE;
    }
    m_collectionIDs.push_back(idTable->collectionID(name));
  }
  return StatusCode::SUCCESS;
}

StatusCode PodioInput::execute() {
  size_t cntr = 0;
  // Re-create the collections from ROOT file
  for (auto& id : m_collectionIDs) {
    const std::string& collName = m_collectionNames.value().at(cntr++);
    debug() << "Registering collection to read " << collName << " with id " << id << endmsg;
    if (m_podioDataSvc->readCollection(collName, id).isFailure()) {
      return StatusCode::FAILURE;
    }
  }
  // Tell data service that we are done with requested collections
  m_podioDataSvc->endOfRead();
  return StatusCode::SUCCESS;
}

StatusCode PodioInput::finalize() {
  if (GaudiAlgorithm::finalize().isFailure()) return StatusCode::FAILURE;
  return StatusCode::SUCCESS;
}
PodioInput.h
+0 −42
#ifndef JUGBASE_PODIOINPUT_H
#define JUGBASE_PODIOINPUT_H
// Gaudi
#include "GaudiAlg/GaudiAlgorithm.h"

// STL
#include <string>
#include <vector>

// forward declarations
// from JugBase:
class PodioDataSvc;

/** @class PodioInput JugBase/components/PodioInput.h PodioInput.h
 *
 *  Class that allows to read ROOT files written with PodioOutput
 *
 *  @author J. Lingemann
 */

class PodioInput : public GaudiAlgorithm {

public:
  /// Constructor.
  PodioInput(const std::string& name, ISvcLocator* svcLoc);
  /// Initialization of PodioInput. Acquires the data service, opens root file and creates trees.
  virtual StatusCode initialize();
  /// Execute. Re-creates collections that are specified to be read and sets references.
  virtual StatusCode execute();
  /// Finalize. Closes ROOT file.
  virtual StatusCode finalize();

private:
  /// Names of the collections to read. Set by the "collections" option (this is temporary)
  Gaudi::Property<std::vector<std::string>> m_collectionNames{this, "collections", {}, "Places of collections to read"};
  /// Collection IDs (retrieved with CollectionIDTable from ROOT file, using collection names)
  std::vector<int> m_collectionIDs;
  /// Data service: needed to register objects and get collection IDs. Just an observing pointer.
  PodioDataSvc* m_podioDataSvc;
};

#endif
RangePileUp.cpp
+0 −28
#include "RangePileUp.h"

DECLARE_COMPONENT(RangePileUp)

RangePileUp::RangePileUp(const std::string& type, const std::string& name, const IInterface* parent)
    : GaudiTool(type, name, parent) {
  declareInterface<IPileUpTool>(this);
}

RangePileUp::~RangePileUp() = default;

StatusCode RangePileUp::initialize() { return GaudiTool::initialize(); }

unsigned int RangePileUp::numberOfPileUp() {
  m_currentNumPileUpEvents = m_pileupRange[m_rangeIndex];
  m_rangeIndex = (m_rangeIndex + 1) % m_pileupRange.size();
  return m_currentNumPileUpEvents;
}

double RangePileUp::getMeanPileUp() { return m_currentNumPileUpEvents; }

void RangePileUp::printPileUpCounters() {
  info() << "Current number of pileup events:  " << m_currentNumPileUpEvents << endmsg;
}

Further changes (file previews collapsed by size limit):

JugFast/CMakeLists.txt (added, +27 −0)
JugPID/CMakeLists.txt (added, +24 −0)
(unnamed file) (added, +30 −0)
LICENSE (added, +166 −0)
LICENSE.spdx (added, +6 −0)
(unnamed file) (changed, +77 −1)
VERSION (added, +1 −0)
cmake/Juggler.xenv.in (added, +10 −0)
cmake/xenv.cmake (added, +11 −0)
demo/compact/elements.xml (deleted, +0 −884)
demo/compact/materials.xml (deleted, +0 −149)
derp.root (deleted, −300 KiB)
doc/Doxyfile (added, +2669 −0)
(unnamed file) (added, +1470 −0)
doc/page.dox (added, +56 −0)
doc/pages.md (added, +12 −0)