The discovery of the Higgs boson by the CMS and ATLAS experiments at the Large Hadron Collider represented the culmination of decades of intellectual investment and billions of dollars in detector and algorithm development. The particle detectors used to measure the aftermath of the creation and decay of a Higgs boson are cameras roughly 10 stories tall and tens of meters long, with half a billion pixels that record the passage of particles escaping these violent collisions. These images are reconstructed with hand-crafted algorithms, which can take decades to develop and optimize, to identify the detected particles along with their total energies and directions of motion. With deep learning greatly outperforming hand-crafted algorithms at object recognition in 2D images, this project aims to extend those techniques to 3D particle detectors in order to reduce the development time and improve the performance of reconstruction algorithms for current and future experiments. This is not straightforward due to the detector geometry and size: 2D AI methods for object detection typically depend on images with uniform 2D lattice structures, whereas particle detectors have pixels that are non-uniformly shaped, spaced, and positioned in three dimensions. This requires the standard Convolutional Neural Networks to be replaced by Graph or Point Cloud Neural Networks. We have been working with computer scientists at the Rochester Institute of Technology to apply their research in these networks to ATLAS data.
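The shift from image convolutions to graph methods can be illustrated with a minimal sketch: detector hits become an unordered point cloud, and the "neighborhood" that a convolution window would provide on a 2D lattice is instead built as a k-nearest-neighbor graph, as in DGCNN-style networks. The hit positions and energies below are synthetic, and the helper name `knn_graph` is an illustrative assumption, not part of any specific library.

```python
import numpy as np

# Synthetic stand-in for detector data: each hit is an irregular 3D position
# plus an energy deposit -- there is no fixed lattice for a standard CNN.
rng = np.random.default_rng(0)
hits = rng.normal(size=(256, 3))           # irregular 3D hit positions
energies = rng.exponential(size=(256, 1))  # per-hit energy deposits (features)

def knn_graph(points, k):
    """Edge index of the k-nearest-neighbor graph that graph networks use
    to define each point's neighborhood."""
    # Pairwise squared distances between all hits.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]   # k closest hits for each hit
    src = np.repeat(np.arange(len(points)), k)
    dst = nbrs.reshape(-1)
    return np.stack([src, dst])            # shape (2, N*k)

edges = knn_graph(hits, k=16)
```

A graph network then learns by aggregating features (here, the energies) across these edges, so the architecture is indifferent to pixel shape, spacing, and position.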
Initial investigations are promising: the previously developed models PointNet++ and DGCNN are able to classify detector pixels as triggered by electrons, jets, or background, with total mean Intersection-over-Union scores of 0.776 and 0.82, respectively (see paper for details). Next steps include exploring custom networks to improve performance and incorporating more particle signatures.