Guide to Tracking Development in Juggler for ATHENA Collaboration

Overview

This guide assumes the reader has some basic knowledge of tracking and computing. Notably:

  • How to use git effectively (and that the reader has an eicweb account).
  • Some basic shell scripting and Python.
  • How to use ROOT with modern C++ (RDataFrame is particularly important; see the sketch after this list).
  • Some familiarity with juggler.
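
Since RDataFrame comes up throughout the benchmarks, here is a minimal sketch of the kind of usage assumed. The file, tree, and branch names ("sim_track_sample.root", "events", "nHits", "p") are hypothetical placeholders, not actual benchmark outputs.

```python
import ROOT

# Open a hypothetical ROOT file containing an "events" tree.
df = ROOT.RDataFrame("events", "sim_track_sample.root")

# Select tracks with more than three hits and histogram their momentum.
h = (df.Filter("nHits > 3")
       .Histo1D(("p", "track momentum;p [GeV]", 100, 0, 30), "p"))

# Drawing triggers the event loop and fills the histogram.
c = ROOT.TCanvas()
h.Draw()
c.SaveAs("momentum.png")
```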

What this guide is not

  • This is not a demonstration of ultimate track reconstruction (nowhere near it).
  • This is not an analysis tutorial.

What this guide is

  • A summary and overview of the basic framework components.
  • A resource for algorithm developers.
  • A guide to the development workflow driven by CI/CD.

Data Model

The data model is called eicd.

Track finding and fitting are built around ACTS, with the detector geometry constructed via DD4hep.
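
As a rough illustration of the DD4hep side, a compact XML detector description can be loaded through the DD4hep Python bindings. This is a minimal sketch, assuming those bindings are available in the environment; the file name "athena.xml" is a placeholder for the compact file that lives in the detectors/athena repository.

```python
import dd4hep

# Load a compact XML detector description (file name is a placeholder).
description = dd4hep.Detector.getInstance()
description.fromXML("athena.xml")

# The detector element tree is now available, starting from the world
# volume; this geometry is what the reconstruction hands to ACTS.
print(description.world().name())
```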

Repositories and Workflow

Repositories

The collaboration uses the EIC group on eicweb, which contains the subgroups detectors and benchmarks.

The main software components locally developed are:

  • juggler - Event processing framework (i.e., where the algorithms live).
  • eicd - EIC data model.
  • npdet - Collection of DD4hep simulation plugins and tools.

The key collaboration/user code repositories are:

  • detectors/ip6 - IP6 specifics (forward and backward beamline and detectors).
  • detectors/athena - ATHENA detector
  • Detector benchmarks - Set of analysis scripts run on the Geant4 output before any digitization or reconstruction. Also contains some detector calibrations.
  • Reconstruction benchmarks - Analysis of the many aspects of reconstruction. This is where the tracking performance benchmarks and plots live. Also a good place for developing new algorithms.
  • Physics benchmarks - Analysis of reconstructed data for physics performance. The goal is to provide metrics for optimizing detector design and reconstruction.

Pipelines and Artifacts

The SWG leverages GitLab's CI/CD features heavily in its workflow. Here are some simplified explanations of these features.

Pipeline

A pipeline is an automated set of jobs/scripts that is triggered by certain actions, such as pushing to a merge request or merging into the master/main branch of a repository. Typically there is one pipeline per repository, but there can be multiple, and a pipeline can trigger downstream pipelines ("child" pipelines) or be triggered by an upstream pipeline. Pipelines can also be triggered manually.
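
For example, a pipeline can be started by hand through the GitLab API. Here is a minimal sketch using the python-gitlab package; the token is a placeholder, and the project path is illustrative (see the repository list above).

```python
import gitlab

# Authenticate against eicweb; the token here is a placeholder.
gl = gitlab.Gitlab("https://eicweb.phy.anl.gov", private_token="YOUR_TOKEN")

# Look up a project by its path (illustrative).
project = gl.projects.get("EIC/benchmarks/reconstruction_benchmarks")

# Manually trigger a pipeline on the master branch and report its status.
pipeline = project.pipelines.create({"ref": "master"})
print(pipeline.id, pipeline.status)
```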

The graph below shows some of the downstream pipeline triggers (arrows) between different repositories.

graph TD;
  ip[IP6<br>detectors/ip6] --> athena[ATHENA<br>detectors/athena]
  athena-->db[Detector Benchmarks<br>benchmarks/detector_benchmarks];
  db-->rb[Reconstruction Benchmarks<br>benchmarks/reconstruction_benchmarks];
  db-->pb[Physics Benchmarks<br>benchmarks/physics_benchmarks];
  juggler[juggler<br>algorithms]-->rb;
  juggler-->pb;

Note that any change to the detectors will cause all the benchmarks to be run.

"OK, pipelines run automatically. What is the big deal?

Artifacts

All pipeline jobs have "artifacts", which are simply selected files that are saved and can be downloaded individually or as a zip archive.

Note that artifacts are not the output data, which is far too big to store this way. Artifacts are small files such as images, plots, text files, and reports.
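
Artifacts can also be fetched programmatically through the GitLab API. A minimal sketch follows; the job name "report" and the project path are hypothetical placeholders.

```python
import requests

# Download the artifacts archive from the latest successful pipeline on
# master for a named job. Project path and job name are placeholders.
base = "https://eicweb.phy.anl.gov/api/v4"
project = "EIC%2Fbenchmarks%2Freconstruction_benchmarks"  # URL-encoded path
url = f"{base}/projects/{project}/jobs/artifacts/master/download"

resp = requests.get(url, params={"job": "report"})
resp.raise_for_status()

with open("artifacts.zip", "wb") as f:
    f.write(resp.content)
```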

Here is an image and a link to a PDF of the latest ATHENA detector version.