Julia Gonski, Jenni Ott, Shiva Abbaszadeh, Sagar Addepalli, Matteo Cremonesi, Jennet Dickinson, Giuseppe Di Guglielmo, Erdem Yigit Ertorer, Lindsey Gray, Ryan Herbst, Christian Herwig, Tae Min Hong, Benedikt Maier, Maryam Bayat Makou, David Miller, Mark S. Neubauer, Cristián Peña, Dylan Rankin, Seon-Hee Seo, Giordon Stark, Alexander Tapper, Audrey Corbeil Therrien, Ioannis Xiotidis, Keisuke Yoshihara, G Abarajithan, Sagar Addepalli, Nural Akchurin, Carlos Argüelles, Saptaparna Bhattacharya, Lorenzo Borella, Christian Boutan, Tom Braine, James Brau, Martin Breidenbach, Antonio Chahine, Talal Ahmed Chowdhury, Yuan-Tang Chou, Seokju Chung, Alberto Coppi, Mariarosaria D'Alfonso, Abhilasha Dave, Chance Desmet, Angela Di Fulvio, Karri DiPetrillo, Javier Duarte, Auralee Edelen, Jan Eysermans, Yongbin Feng, Emmett Forrestel, Dolores Garcia, Loredana Gastaldo, Julián García Pardiñas, Lino Gerlach, Loukas Gouskos, Katya Govorkova, Carl Grace, Christopher Grant, Philip Harris, Ciaran Hasnip, Timon Heim, Abraham Holtermann, Tae Min Hong, Gian Michele Innocenti, Koji Ishidoshiro, Miaochen Jin, Jyothisraj Johnson, Stephen Jones, Andreas Jung, Georgia Karagiorgi, Ryan Kastner, Nicholas Kamp, Doojin Kim, Kyoungchul Kong, Katie Kudela, Jelena Lalic, Bo-Cheng Lai, Yun-Tsung Lai, Tommy Lam, Jeffrey Lazar, Aobo Li, Zepeng Li, Haoyun Liu, Vladimir Lončar, Luca Macchiarulo, Christopher Madrid, Benedikt Maier, Zhenghua Ma, Prashansa Mukim, Mark S. Neubauer, Victoria Nguyen, Sungbin Oh, Isobel Ojalvo, Hideyoshi Ozaki, Simone Pagan Griso, Myeonghun Park, Christoph Paus, Santosh Parajuli, Benjamin Parpillon, Sara Pozzi, Ema Puljak, Benjamin Ramhorst, Amy Roberts, Larry Ruckman, Kate Scholberg, Sebastian Schmitt, Noah Singer, Eluned Anne Smith, Alexandre Sousa, Michael Spannowsky, Sioni Summers, Yanwen Sun, Daniel Tapia Takaki, Antonino Tumeo, Caterina Vernieri, Belina von Krosigk, Yash Vora, Linyan Wan, Michael H. L. S. Wang, Amanda Weinstein, Andy White, Simon Williams, Felix Yu
The next generation of particle physics experiments will face a new era of challenges in data acquisition, due to unprecedented data rates and volumes along with extreme environments and operational constraints. Harnessing this data for scientific discovery demands real-time inference and decision-making, intelligent data reduction, and efficient processing architectures beyond current capabilities. Crucial to the success of this experimental paradigm are several emerging technologies, such as artificial intelligence and machine learning (AI/ML), silicon microelectronics, and quantum algorithms and processing. Their intersection includes areas of research such as low-power and low-latency devices for edge computing, heterogeneous accelerator systems, reconfigurable hardware, novel codesign and synthesis strategies, readout for cryogenic or high-radiation environments, and analog computing. This white paper presents a community-driven vision to identify and prioritize research and development opportunities in hardware-based ML systems and corresponding physics applications, contributing towards a successful transition to the new data frontier of fundamental science.
Benedikt Maier, Siddharth M. Narayanan, Gianfranco de Castro, Maxim Goncharov, Christoph Paus, Matthias Schott
Particle production from secondary proton-proton collisions, commonly referred to as pile-up, impairs the sensitivity of both new physics searches and precision measurements at LHC experiments. We propose a novel algorithm, PUMA, for identifying pile-up objects with the help of deep neural networks based on sparse transformers. These attention mechanisms were developed for natural language processing but have become popular in other applications. In a realistic detector simulation, our method outperforms classical benchmark algorithms for pile-up mitigation in key observables. It provides a perspective for mitigating the effects of pile-up in the high luminosity era of the LHC, where up to 200 proton-proton collisions are expected to occur simultaneously.
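As a schematic illustration of an attention-based per-particle pile-up classifier of the kind described above (not the authors' PUMA implementation; feature counts and layer sizes are placeholders), a minimal PyTorch sketch could look as follows:

```python
# Minimal sketch of a transformer-based per-particle pile-up classifier.
# Illustrative only: this is NOT the PUMA architecture from the paper.
import torch
import torch.nn as nn

class PerParticleTagger(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # per-particle pile-up score

    def forward(self, x, pad_mask):
        # x: (batch, n_particles, n_features); pad_mask: True where padded
        h = self.encoder(self.embed(x), src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.head(h)).squeeze(-1)

# toy usage: 2 events, up to 100 particles, 8 features each
x = torch.randn(2, 100, 8)
mask = torch.zeros(2, 100, dtype=torch.bool)
scores = PerParticleTagger()(x, mask)  # (2, 100) pile-up probabilities
```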
Benedikt Maier
The production of the Higgs boson in association with a single top quark is sensitive to the relative sign of the coupling parameters describing its interaction with fermions and gauge bosons. The tHq production mode therefore provides a good handle on the top Yukawa coupling $y_t$. The first searches for single-top + Higgs production in the $H\to b\bar{b}$, $H\to\gamma\gamma$, $H\to\tau^+\tau^-$ and $H\to W^+W^-$ decay channels are presented, using the full 8 TeV dataset recorded with the CMS detector. Special emphasis is put on the analyses' peculiarities and their dominant systematic uncertainties, and a combination of all individual channels is performed. The analyses are optimized for a scenario with $y_t=-1$, whose production rate is enhanced by a factor of 13 with respect to the Standard Model. The observed combined upper exclusion limit is 2.8 times the cross section of this exotic scenario (2.0 expected).
Benedikt Maier
Measurements of the cross section and of the interactions at the tWb vertex are performed in the t-channel of single top quark production at center-of-mass energies of 7 and 8 TeV. Results of both the ATLAS and CMS collaborations are presented. No indications of new physics and no deviations from the Standard Model predictions are found within the experimental and theoretical uncertainties.
Aritra Bal, Tristan Brandes, Fabio Iemmi, Markus Klute, Benedikt Maier, Vinicius Mikuni, Thea Aarrestad
Knowledge distillation is a form of model compression that allows artificial neural networks of different sizes to learn from one another. Its main application is the compactification of large deep neural networks to free up computational resources, in particular on edge devices. In this article, we consider proton-proton collisions at the High-Luminosity LHC (HL-LHC) and demonstrate a successful knowledge transfer from an event-level graph neural network (GNN) to a particle-level small deep neural network (DNN). Our algorithm, DistillNet, is a DNN that is trained to learn about the provenance of particles, as provided by the soft labels that are the GNN outputs, to predict whether or not a particle originates from the primary interaction vertex. The results indicate that for this problem, which is one of the main challenges at the HL-LHC, there is minimal loss during the transfer of knowledge to the small student network, while the computational resource needs are significantly reduced compared to the teacher. This is demonstrated for the distilled student network on a CPU, as well as for a quantized and pruned student network deployed on a field-programmable gate array. Our study proves that knowledge transfer between networks of different complexity can be used for fast artificial intelligence (AI) in high-energy physics that improves the expressiveness of observables over non-AI-based reconstruction algorithms. Such an approach can become essential at the HL-LHC experiments, e.g., to comply with the resource budget of their trigger stages.
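A minimal sketch of the distillation idea, assuming per-particle input features and teacher GNN scores in [0, 1] as soft labels (the network sizes and the choice of binary cross-entropy are illustrative, not the paper's exact configuration):

```python
# Hedged sketch of per-particle knowledge distillation in the spirit of
# DistillNet: a small MLP student regresses the teacher GNN's soft labels.
import torch
import torch.nn as nn

student = nn.Sequential(            # small per-particle DNN
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid()  # probability of primary-vertex origin
)

def distillation_loss(student_out, teacher_soft_labels):
    # Soft labels from the event-level GNN act as regression targets;
    # binary cross-entropy on soft targets is one common choice.
    return nn.functional.binary_cross_entropy(student_out, teacher_soft_labels)

feats = torch.rand(1024, 16)    # per-particle input features (placeholder)
teacher = torch.rand(1024, 1)   # GNN output scores (soft labels)
loss = distillation_loss(student(feats), teacher)
loss.backward()
```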
Benedikt Maier
The associated production of a Higgs boson and a single top quark is of particular interest since it is sensitive to the relative sign of the Higgs boson coupling to gauge bosons and the Yukawa coupling $y$ to fermions. The presented analysis sets upper production limits on a model with $y_\text{t}=-1$, which has an enhanced cross section compared to the standard model expectation. For this it focuses on the Higgs boson decaying to a pair of b quarks and uses the full dataset of $pp$ collisions recorded with the CMS detector in 2012. It reports an observed upper limit of 7.57 times the predicted cross section, with an expected sensitivity of 5.14. This translates into the exclusion of associated tHq production with $y_\text{t}=-1$-like characteristics with a cross section larger than 1.77\,pb.
Philip Harris, Michael Kagan, Jeffrey Krupa, Benedikt Maier, Nathaniel Woodward
Self-Supervised Learning (SSL) is at the core of training modern large machine learning models, providing a scheme for learning powerful representations that can be used in a variety of downstream tasks. However, SSL strategies must be adapted to the type of training data and downstream tasks required. We propose RS3L ("Re-simulation-based self-supervised representation learning"), a novel simulation-based SSL strategy that employs a method of re-simulation to drive data augmentation for contrastive learning in the physical sciences, particularly in fields that rely on stochastic simulators. By intervening in the middle of the simulation process and re-running simulation components downstream of the intervention, we generate multiple realizations of an event, thus producing a set of augmentations covering all physics-driven variations available in the simulator. Using experiments from high-energy physics, we explore how this strategy may enable the development of a foundation model; we show how RS3L pre-training enables powerful performance in downstream tasks such as discrimination of a variety of objects and uncertainty mitigation. In addition to our results, we make the RS3L dataset publicly available for further studies on how to improve SSL strategies.
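The contrastive objective driven by re-simulated views can be illustrated with a standard NT-Xent loss. The RS3L-specific ingredient is the pairing of two re-simulations of the same event; the loss below is a generic choice and the shapes are hypothetical:

```python
# Minimal NT-Xent contrastive objective over pairs of re-simulated views
# (z1, z2) of the same events; a sketch, not the paper's exact setup.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of two re-simulated views per event
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    eye = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(eye, float('-inf'))     # drop self-similarity
    # positive of view i is its re-simulated partner i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = nt_xent(z1, z2)
```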
Loukas Gouskos, Fabio Iemmi, Sascha Liechti, Benedikt Maier, Vinicius Mikuni, Huilin Qu
We propose a novel strategy for disentangling proton collisions at hadron colliders such as the LHC that considerably improves over the current state of the art. Employing a metric inspired by optimal transport problems as the cost function of a graph neural network, our algorithm is able to compare two particle collections with different noise levels and learns to flag particles originating from the main interaction amidst products from up to 200 simultaneous pileup collisions. We thereby sidestep the critical task of obtaining a ground truth by labeling particles and avoid arduous human annotation in favor of labels derived in situ through a self-supervised process. We demonstrate how our approach, which unlike competing algorithms is trivial to implement, improves the resolution of key objects used in precision measurements and searches alike, and we present large sensitivity gains in searching for exotic Higgs boson decays at the High-Luminosity LHC.
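A self-contained sketch of an entropy-regularized optimal-transport (Sinkhorn) cost between two particle collections, shown here as a stand-in for the OT-inspired metric the paper uses as its training objective (the ground cost and hyperparameters are illustrative):

```python
# Sinkhorn-style optimal-transport cost between two particle clouds;
# differentiable in torch, so usable as a training objective. Sketch only.
import torch

def sinkhorn_cost(a_pts, b_pts, eps=0.1, n_iter=50):
    # a_pts, b_pts: (n, d) and (m, d) particle feature collections
    C = torch.cdist(a_pts, b_pts) ** 2           # pairwise ground cost
    K = torch.exp(-C / eps)
    mu = torch.full((a_pts.size(0),), 1.0 / a_pts.size(0))  # uniform marginals
    nu = torch.full((b_pts.size(0),), 1.0 / b_pts.size(0))
    u, v = mu.clone(), nu.clone()
    for _ in range(n_iter):                      # Sinkhorn iterations
        u = mu / (K @ v)
        v = nu / (K.t() @ u)
    P = u[:, None] * K * v[None, :]              # transport plan
    return (P * C).sum()

cost = sinkhorn_cost(torch.randn(50, 3), torch.randn(80, 3))
```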
Max Marriott-Clarke, Lazar Novakovic, Elizabeth Ratzer, Robert J. Bainbridge, Loukas Gouskos, Benedikt Maier
We propose a novel clustering approach for point-cloud segmentation based on supervised contrastive metric learning (CML). Rather than predicting cluster assignments or object-centric variables, the method learns a latent representation in which points belonging to the same object are embedded nearby while unrelated points are separated. Clusters are then reconstructed using a density-based readout in the learned metric space, decoupling representation learning from cluster formation and enabling flexible inference. The approach is evaluated on simulated data from a highly granular calorimeter, where the task is to separate highly overlapping particle showers represented as sets of calorimeter hits. A direct comparison with object condensation (OC) is performed using identical graph neural network backbones and equal latent dimensionality, isolating the effect of the learning objective. The CML method produces a more stable and separable embedding geometry for both electromagnetic and hadronic particle showers, leading to improved local neighbourhood consistency, a more reliable separation of overlapping showers, and better generalization when extrapolating to unseen multiplicities and energies. This translates directly into higher reconstruction efficiency and purity, particularly in high-multiplicity regimes, as well as improved energy resolution. In mixed-particle environments, CML maintains strong performance, suggesting robust learning of the shower topology, while OC exhibits significant degradation. These results demonstrate that similarity-based representation learning combined with density-based aggregation is a promising alternative to object-centric approaches for point cloud segmentation in highly granular detectors.
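A minimal sketch of the two-stage recipe, supervised contrastive learning of a metric space followed by a density-based readout, assuming generic point embeddings and DBSCAN as the clustering step (the loss variant and hyperparameters are illustrative, not the paper's exact choices):

```python
# Supervised contrastive metric learning on point embeddings, then a
# density-based readout in the learned space. Schematic stand-in only.
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

def supcon_loss(z, labels, temperature=0.1):
    # z: (n_points, dim) embeddings; labels: (n_points,) object ids
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    eye = torch.eye(z.size(0), dtype=torch.bool)
    sim = sim.masked_fill(eye, float('-inf'))
    pos = labels[:, None].eq(labels[None, :]) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_prob[pos].mean()   # pull same-object points together

z = torch.randn(200, 16, requires_grad=True)   # e.g. GNN output per hit
labels = torch.randint(0, 5, (200,))           # shower / object ids
loss = supcon_loss(z, labels)

# inference: cluster hits by density in the learned metric space
clusters = DBSCAN(eps=0.3, min_samples=3).fit_predict(z.detach().numpy())
```

Decoupling the two stages in this way is what allows the readout (here DBSCAN) to be swapped or retuned at inference time without retraining the embedding.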
Luca Anzalone, Simranjit Singh Chhibra, Benedikt Maier, Nadezda Chernyavskaya, Maurizio Pierini
We present a family of conditional dual auto-encoders (CoDAEs) for generic and model-independent new physics searches at colliders. New physics signals, which arise from new types of particles and interactions, are considered in our study as anomalies causing deviations in data with respect to expected background events. In this work, we perform normal-only anomaly detection, which employs only background samples, to search for manifestations of a dark version of the strong force, applying (variational) auto-encoders to raw detector images, which are large and highly sparse, without leveraging any physics-based pre-processing or strong assumptions on the signals. The proposed CoDAE has a dual-encoder design, which is general and can learn an auxiliary yet compact latent space through spatial conditioning, showing a neat improvement over competitive physics-based baselines and related approaches, therefore also reducing the gap with fully supervised models. It is the first time an unsupervised model is shown to exhibit excellent discrimination against multiple dark shower models, illustrating the suitability of this method as an accurate, fast, model-independent algorithm to deploy, e.g., in the real-time event triggering systems of Large Hadron Collider experiments such as ATLAS and CMS.
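To make the normal-only setup concrete, a toy convolutional auto-encoder trained on background images, with the reconstruction error serving as anomaly score, might look as below; this is a schematic stand-in, not the CoDAE dual-encoder architecture itself:

```python
# Image-based anomaly detection with a plain convolutional auto-encoder:
# train on background only, score events by reconstruction error. Sketch.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE()
imgs = torch.randn(4, 1, 64, 64)   # stand-in for sparse detector images
score = ((model(imgs) - imgs) ** 2).mean(dim=(1, 2, 3))  # per-event anomaly score
```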
Francesco Toschi, Benedikt Maier, Greta Heine, Torben Ferber, Sebastian Kempf, Markus Klute, Belina von Krosigk
Ultra-sensitive cryogenic calorimeters have become a favored technology with widespread application where eV-scale energy resolutions are needed. In this article, we characterize the performance of an X-ray magnetic microcalorimeter (MMC) using a Fe-55 source. Employing an optimum filter-based amplitude estimation and energy reconstruction, we demonstrate that an unprecedented FWHM resolution of $\Delta E_\mathrm{FWHM} = \left(1.25 \pm 0.17\,\text{(stat)}\,^{+0.05}_{-0.07}\,\text{(syst)}\right)\,\text{eV}$ can be achieved. We also derive the best possible resolution and discuss limiting factors affecting the measurement. The analysis pipeline for the MMC data developed in this paper is furthermore an important step for the realization of the proposed superfluid helium-based experiment DELight, which will search for direct interaction of dark matter with masses below 100 MeV/c$^2$.
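The optimum-filter amplitude estimation mentioned above can be sketched in a few lines: a frequency-domain template fit weighted by the inverse noise power spectral density (a toy pulse shape and white noise are assumed; the paper's pipeline involves further reconstruction steps):

```python
# Frequency-domain optimum-filter amplitude estimate for calorimeter pulses.
# Illustrative only; toy template and flat noise PSD.
import numpy as np

def optimum_filter_amplitude(trace, template, noise_psd):
    # trace, template: equal-length time-domain arrays; noise_psd: per-bin PSD
    v = np.fft.rfft(trace)
    s = np.fft.rfft(template)
    num = np.sum(np.conj(s) * v / noise_psd)
    den = np.sum(np.abs(s) ** 2 / noise_psd)
    return (num / den).real     # best-fit pulse amplitude

n = 4096
t = np.arange(n)
template = np.exp(-t / 400.0) - np.exp(-t / 40.0)   # toy two-exponential pulse
noise_psd = np.ones(n // 2 + 1)                     # flat (white) noise PSD
trace = 3.0 * template + np.random.normal(0, 0.1, n)
print(optimum_filter_amplitude(trace, template, noise_psd))  # ~3.0
```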
Ruoqing Zheng, Chang Sun, Qibin Liu, Lauri Laatu, Arianna Cox, Benedikt Maier, Alexander Tapper, Jose G. F. Coutinho, Wayne Luk, Zhiqiang Que
We present JetFormer, a versatile and scalable encoder-only Transformer architecture for particle jet tagging at the Large Hadron Collider (LHC). Unlike prior approaches that are often tailored to specific deployment regimes, JetFormer is designed to operate effectively across the full spectrum of jet tagging scenarios, from high-accuracy offline analysis to ultra-low-latency online triggering. The model processes variable-length sets of particle features without relying on explicit pairwise-interaction inputs, yet achieves competitive or superior performance compared to state-of-the-art methods. On the large-scale JetClass dataset, a large JetFormer variant matches the accuracy of the interaction-rich ParT model (within 0.7%) while using 37.4% fewer FLOPs, demonstrating its computational efficiency and strong generalization. On the benchmark HLS4ML 150P datasets, JetFormer consistently outperforms existing models such as MLPs, Deep Sets, and Interaction Networks by 3-4% in accuracy. To bridge the gap to hardware deployment, we further introduce a hardware-aware optimization pipeline based on multi-objective hyperparameter search, yielding compact variants like JetFormer-tiny suitable for FPGA-based trigger systems with sub-microsecond latency requirements. Through structured pruning and quantization, we show that JetFormer can be aggressively compressed with minimal accuracy loss. By unifying high-performance modeling and deployability within a single architectural framework, JetFormer provides a practical pathway for deploying Transformer-based jet taggers in both offline and online environments at the LHC. Code is available at https://github.com/walkieq/JetFormer.
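As a toy illustration of the compression step, symmetric fixed-point weight quantization of a linear layer could be sketched as follows; the paper's hardware-aware pipeline uses dedicated tooling, so this merely shows the principle:

```python
# Post-training symmetric uniform quantization of layer weights to n_bits,
# of the kind used to shrink taggers for FPGA deployment. Sketch only.
import torch

def quantize_weights(w, n_bits=8):
    # per-tensor scale chosen so the largest weight maps to the max level
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

layer = torch.nn.Linear(64, 64)
with torch.no_grad():
    layer.weight.copy_(quantize_weights(layer.weight))
```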
Francesco Toschi, Axel Brunold, Lea Burmeister, Klaus Eitel, Christian Enss, Eleanor Fascione, Torben Ferber, Rahel Gabriel, Lena Hauswald, Felix Kahlhoefer, Sebastian Kempf, Markus Klute, Belina von Krosigk, Sebastian Lindemann, Benedikt Maier, Marc Schumann, Melih Solmaz, Kathrin Valerius, Friedrich Carl Wagner
Superfluid ${}^4$He is an ideal candidate for the direct detection of light dark matter via nuclear recoils thanks to its low nuclear mass and the possibility to reach a low detection energy threshold by exploiting the generated quasiparticles. The design of future detectors based on this target, such as the DELight experiment, requires a proper understanding of the formation and partitioning of the signal for different energy depositions from various sources. This work presents an overview of the physical processes involved in the energy deposition of recoiling electrons and ions, and describes a Monte Carlo approach to the partitioning of the signal into different channels. Despite an overall good agreement with existing literature, differences in the region of interest for light dark matter searches below 200 eV are observed.
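A toy Monte Carlo of the partitioning step could look as below; the channel names follow the common superfluid-helium signal channels, while the mean fractions and the energy-per-quantum are pure placeholders, since their energy and particle-type dependence is precisely what the paper models:

```python
# Toy multinomial partitioning of a deposited energy into signal channels.
# FRACTIONS and quantum_ev are placeholders, not results from this work.
import numpy as np

rng = np.random.default_rng(1)

FRACTIONS = {"singlet": 0.35, "triplet": 0.15, "quasiparticles": 0.50}

def partition_deposit(e_dep_ev, quantum_ev=20e-3):
    # split a deposit of e_dep_ev (eV) into discrete quanta and distribute
    # them over the channels with multinomial fluctuations
    n_quanta = int(e_dep_ev / quantum_ev)
    counts = rng.multinomial(n_quanta, list(FRACTIONS.values()))
    return dict(zip(FRACTIONS, counts))

print(partition_deposit(100.0))   # toy 100 eV deposit
```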
Jorge de Blas, Monica Dunford, Emanuele Bagnaschi, Ayres Freitas, Pier Paolo Giardino, Christian Grefe, Michele Selvaggi, Angela Taliercio, Falk Bartels, Andrea Dainese, Cristinel Diaconu, Chiara Signorile-Signorile, Néstor Armesto, Roberta Arnaldi, Andy Buckley, David d'Enterria, Antoine Gérardin, Valentina Mantovani Sarti, Sven-Olaf Moch, Marco Pappagallo, Raimond Snellings, Urs Achim Wiedemann, Gino Isidori, Marie-Hélène Schune, Maria Laura Piscopo, Marta Calvi, Yuval Grossman, Thibaud Humair, Andreas Jüttner, Jernej F. Kamenik, Matthew Kenzie, Patrick Koppenburg, Radoslav Marchevski, Angela Papa, Guillaume Pignol, Justine Serrano, Pilar Hernandez, Sara Bolognesi, Ivan Esteban, Stephen Dolan, Valerie Domcke, Joseph Formaggio, M. C. Gonzalez-Garcia, Aart Heijboer, Aldo Ianni, Joachim Kopp, Elisa Resconi, Mark Scott, Viola Sordini, Fabio Maltoni, Rebeca Gonzalez Suarez, Benedikt Maier, Timothy Cohen, Annapaola de Cosa, Nathaniel Craig, Roberto Franceschini, Loukas Gouskos, Aurelio Juste, Sophie Renner, Lesya Shchutska, Jocelyn Monroe, Matthew McCullough, Yohei Ema, Paolo Agnes, Francesca Calore, Emanuele Castorina, Aaron Chou, Monica D'Onofrio, Maksym Ovchynnikov, Tina Pollman, Josef Pradler, Yotam Soreq, Julia Katharina Vogel, Gianluigi Arduini, Philip Burrows, Jacqueline Keintzel, Deepa Angal-Kalinin, Bernhard Auchmann, Massimo Ferrario, Angeles Faus Golfe, Roberto Losito, Anke-Susanne Mueller, Tor Raubenheimer, Marlene Turner, Pierre Vedrine, Hans Weise, Walter Wuensch, Chenghui Yu, Thomas Bergauer, Ulrich Husemann, Dorothea vom Bruch, Thea Aarrestad, Daniela Bortoletto, Shikma Bressler, Marcel Demarteau, Michael Doser, Gabriella Gaudio, Inés Gil-Botella, Andrea Giuliani, Fabrizio Palla, Rok Pestotnik, Felix Sefkow, Frank Simon, Maksym Titov, Tommaso Boccali, Borut Kersevan, Daniel Murnane, Gonzalo Merino Arevalo, John Derek Chapman, Frank-Dieter Gaede, Stefano Giagu, Maria Girone, Heather M. Gray, Giovanni Iadarola, Stephane Jezequel, Gregor Kasieczka, David Lange, Sinéad M. Ryan, Nicole Skidmore, Sofia Vallecorsa, Eric Laenen, Anadi Canepa, Xinchou Lou, Rogerio Rosenfeld, Yuji Yamazaki, Roger Forty, Karl Jakobs, Hugh Montgomery, Mike Seidel, Paris Sphicas
Benedikt Maier, Michael Spannowsky, Simon Williams
We study continuous-variable photonic quantum extreme learning machines as fast, low-overhead front-ends for collider data processing. Data is encoded in photonic modes through quadrature displacements and propagated through a fixed-time Gaussian quantum substrate. The final readout occurs through Gaussian-compatible measurements to produce a high-dimensional random feature map. Only a linear classifier is trained, using a single linear solve, so retraining is fast, and the optical path and detector response set the analytical and inference latency. We evaluate this architecture on two representative classification tasks, top-jet tagging and Higgs-boson identification, with parameter-matched multi-layer perceptron (MLP) baselines. Using standard public datasets and identical train, validation, and test splits, the photonic Quantum Extreme Learning Machine (QELM) outperforms an MLP with two hidden units for all considered training sizes, and matches or exceeds an MLP with ten hidden units at large sample sizes, while training only the linear readout. These results indicate that Gaussian photonic extreme-learning machines can provide compact and expressive random features at fixed latency. The combination of deterministic timing, rapid retraining, low optical power, and room temperature operation makes photonic QELMs a credible building block for online data selection and even first-stage trigger integration at future collider experiments.
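The extreme-learning-machine training procedure, a fixed random feature map followed by a single ridge-regression linear solve, can be sketched classically; here a random tanh projection stands in for the Gaussian photonic circuit:

```python
# Classical ELM sketch: fixed random features plus one linear solve for the
# readout. The photonic circuit of the paper is replaced by a tanh projection.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 16))                          # toy input features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)   # toy binary labels

W = rng.normal(size=(16, 256))    # fixed, untrained random projection
H = np.tanh(X @ W)                # high-dimensional random feature map

lam = 1e-3                        # ridge regularization
beta = np.linalg.solve(H.T @ H + lam * np.eye(256), H.T @ y)  # single solve
pred = (H @ beta > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```

Because only `beta` is trained, retraining on new labels is a single linear solve, which is the property the abstract highlights for fast trigger-level deployment.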
Aritra Bal, Markus Klute, Benedikt Maier, Melik Oughton, Eric Pezone, Michael Spannowsky
We introduce 1P1Q, a novel quantum data encoding scheme for high-energy physics (HEP), where each particle is assigned to an individual qubit, enabling direct representation of collision events without classical compression. We demonstrate the effectiveness of 1P1Q in quantum machine learning (QML) through two applications: a Quantum Autoencoder (QAE) for unsupervised anomaly detection and a Variational Quantum Circuit (VQC) for supervised classification of top quark jets. Our results show that the QAE successfully distinguishes signal jets from background QCD jets, achieving superior performance compared to a classical autoencoder while utilizing significantly fewer trainable parameters. Similarly, the VQC achieves competitive classification performance, approaching state-of-the-art classical models despite its minimal computational complexity. Furthermore, we validate the QAE on real experimental data from the CMS detector, establishing the robustness of quantum algorithms in practical HEP applications. These results demonstrate that 1P1Q provides an effective and scalable quantum encoding strategy, offering new opportunities for applying quantum computing algorithms in collider data analysis.
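A pure-NumPy toy of the one-particle-per-qubit idea: each particle's kinematics set rotation angles on its own qubit, and the event is the product state. The angle mapping below is illustrative, not the paper's exact scheme; a single pure qubit carries two angles, so further features would require additional gates:

```python
# Toy statevector construction for a one-particle-per-qubit encoding.
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def encode_event(particles):
    # particles: list of (pt, phi) pairs; one qubit per particle
    state = np.array([1.0 + 0j])
    for pt, phi in particles:
        theta = 2 * np.arctan(pt / 50.0)   # squash pt into [0, pi); placeholder scale
        qubit = rz(phi) @ ry(theta) @ np.array([1.0, 0.0])
        state = np.kron(state, qubit)      # product state over all qubits
    return state

psi = encode_event([(120.0, 1.2), (45.0, -2.0), (10.0, 0.4)])
print(psi.shape)   # (8,): statevector for three particles / three qubits
```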
Anubha Bal, Edward Curtis, Anne-Marie Magnan, Benedikt Maier, Tania Robens, Nicholas Wardle
We present a search for new scalar bosons predicted by the Inert Doublet Model at an $e^+e^-$ machine with centre-of-mass energies of 240 and 365 GeV. Within this model, four additional scalar bosons ($H,\, A,\, H^+$ and $H^-$) are predicted. Due to an additional symmetry, the lightest new scalar, here chosen to be $H$, is stable and provides an adequate dark matter candidate. The search for pair production of the new scalars is investigated in final states with two electrons or two muons, in the context of the future circular collider proposal, FCC-ee. Building on previous studies in the context of the CLIC proposal, this analysis extends the search to detector-level objects, using a parametric neural network to enhance the signal contributions over the Standard Model backgrounds, and sets projected exclusion and discovery contours in the $M_A-M_H$ vs. $M_H$ plane. With a total integrated luminosity of 10.8 (2.7) ab$^{-1}$ for $\sqrt{s}=240$ (365) GeV, the discovery reach for the model goes up to $M_H = 108$ (157) GeV for $M_A-M_H=15$ GeV. For exclusion, almost the entire phase-space available in the $M_A-M_H$ vs. $M_H$ plane is expected to be ruled out at 95\% CL, reaching up to $M_H=110$ (165) GeV.
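The parametric-neural-network idea, conditioning one classifier on the signal mass hypothesis so a single network interpolates across the $(M_A, M_H)$ grid, can be sketched as follows (feature counts and architecture are placeholders):

```python
# Sketch of a parametric NN classifier: the mass hypothesis enters as
# extra input features alongside the reconstructed event variables.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10 + 2, 64), nn.ReLU(),   # 10 event features + (M_A, M_H)
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

feats = torch.randn(256, 10)                           # reconstructed features
masses = torch.tensor([[100.0, 85.0]]).repeat(256, 1)  # one signal hypothesis
# During training, background events are assigned masses sampled from the
# signal grid so the network cannot separate on the parameters alone.
scores = net(torch.cat([feats, masses], dim=1))
```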
Tomohiro Abe, Yoav Afik, Andreas Albert, Christopher R. Anelli, Liron Barak, Martin Bauer, J. Katharina Behr, Nicole F. Bell, Antonio Boveia, Oleg Brandt, Giorgio Busoni, Linda M. Carpenter, Yu-Heng Chen, Caterina Doglioni, Alison Elliot, Motoko Fujiwara, Marie-Helene Genest, Raffaele Gerosa, Stefania Gori, Johanna Gramling, Alexander Grohsjean, Giuliano Gustavino, Kristian Hahn, Ulrich Haisch, Lars Henkelmann, Junji Hisano, Anders Huitfeldt, Valerio Ippolito, Felix Kahlhoefer, Greg Landsberg, Steven Lowette, Benedikt Maier, Fabio Maltoni, Margarete Muehlleitner, Jose M. No, Priscilla Pani, Giacomo Polesello, Darren D. Price, Tania Robens, Giulia Rovelli, Yoram Rozen, Isaac W. Sanderson, Rui Santos, Stanislava Sevova, David Sperka, Kevin Sung, Tim M. P. Tait, Koji Terashi, Francesca C. Ungaro, Eleni Vryonidou, Shin-Shan Yu, Sau Lan Wu, Chen Zhou
Dark matter (DM) simplified models are by now commonly used by the ATLAS and CMS Collaborations to interpret searches for missing transverse energy ($E_T^\mathrm{miss}$). The coherent use of these models sharpened the LHC DM search program, especially in the presentation of its results and their comparison to DM direct-detection (DD) and indirect-detection (ID) experiments. However, the community has been aware of the limitations of the DM simplified models, in particular the lack of theoretical consistency of some of them and their restricted phenomenology leading to the relevance of only a small subset of $E_T^\mathrm{miss}$ signatures. This document from the LHC Dark Matter Working Group identifies an example of a next-generation DM model, called $\textrm{2HDM+a}$, that provides the simplest theoretically consistent extension of the DM pseudoscalar simplified model. A comprehensive study of the phenomenology of the $\textrm{2HDM+a}$ model is presented, including a discussion of the rich and intricate pattern of mono-$X$ signatures and the relevance of other DM as well as non-DM experiments. Based on our discussions, a set of recommended scans are proposed to explore the parameter space of the $\textrm{2HDM+a}$ model through LHC searches. The exclusion limits obtained from the proposed scans can be consistently compared to the constraints on the $\textrm{2HDM+a}$ model that derive from DD, ID and the DM relic density.
Yutaro Iiyama, Benedikt Maier, Daniel Abercrombie, Maxim Goncharov, Christoph Paus
Dynamo is a full-stack software solution for scientific data management. Dynamo's architecture is modular, extensible, and customizable, making the software suitable for managing data in a wide range of installation scales, from a few terabytes stored at a single location to hundreds of petabytes distributed across a worldwide computing grid. This article documents the core system design of Dynamo and describes the applications that implement various data management tasks. A brief report is also given on the operational experiences of the system at the CMS experiment at the CERN Large Hadron Collider and at a small-scale analysis facility.
Guillaume Albouy, Jared Barron, Hugues Beauchesne, Elias Bernreuther, Marcella Bona, Cesare Cazzaniga, Cari Cesarotti, Timothy Cohen, Annapaola de Cosa, David Curtin, Zeynep Demiragli, Caterina Doglioni, Alison Elliot, Karri Folan DiPetrillo, Florian Eble, Carlos Erice, Chad Freer, Aran Garcia-Bellido, Caleb Gemmell, Marie-Hélène Genest, Giovanni Grilli di Cortona, Giuliano Gustavino, Nicoline Hemme, Tova Holmes, Deepak Kar, Simon Knapen, Suchita Kulkarni, Luca Lavezzo, Steven Lowette, Benedikt Maier, Seán Mee, Stephen Mrenna, Harikrishnan Nair, Jeremi Niedziela, Christos Papageorgakis, Nukulsinh Parmar, Christoph Paus, Kevin Pedro, Ana Peixoto, Alexx Perloff, Tilman Plehn, Christiane Scherb, Pedro Schwaller, Jessie Shelton, Akanksha Singh, Sukanya Sinha, Torbjörn Sjöstrand, Aris G. B. Spourdalakis, Daniel Stolarski, Matthew J. Strassler, Andrii Usachov, Carlos Vázquez Sierra, Christopher B. Verhaaren, Long Wang
In this work, we consider the case of a strongly coupled dark/hidden sector, which extends the Standard Model (SM) by adding an additional non-Abelian gauge group. These extensions generally contain matter fields, much like the SM quarks, and gauge fields similar to the SM gluons. We focus on the exploration of such sectors where the dark particles are produced at the LHC through a portal and undergo rapid hadronization within the dark sector before decaying back, at least in part and potentially with sizeable lifetimes, to SM particles, giving a range of possibly spectacular signatures such as emerging or semi-visible jets. Other, non-QCD-like scenarios leading to soft unclustered energy patterns or glueballs are also discussed. After a review of the theory, existing benchmarks and constraints, this work addresses how to build consistent benchmarks from the underlying physical parameters and presents new developments for the PYTHIA Hidden Valley module, along with jet substructure studies. Finally, a series of improved search strategies is presented in order to pave the way for a better exploration of the dark showers at the LHC.
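For orientation, a heavily simplified Hidden Valley setup through the Pythia 8 python bindings might look as below; the parameter values are placeholders chosen to show the mechanics, not a validated benchmark card from this work:

```python
# Illustrative Pythia 8 Hidden Valley configuration for a dark shower.
# Values are placeholders; consult the paper for consistent benchmarks.
import pythia8

pythia = pythia8.Pythia()
for setting in [
    "HiddenValley:ffbar2Zv = on",    # s-channel Z' portal production
    "HiddenValley:Ngauge = 3",       # SU(3) dark gauge group
    "HiddenValley:nFlav = 2",        # number of dark-quark flavours
    "HiddenValley:fragment = on",    # dark-sector hadronization
    "HiddenValley:FSR = on",         # dark-sector shower
    "HiddenValley:alphaOrder = 1",   # running dark coupling
    "HiddenValley:Lambda = 10.0",    # dark confinement scale [GeV]
    "4900023:m0 = 1000.0",           # Z' mediator mass [GeV]
]:
    pythia.readString(setting)
pythia.init()
pythia.next()   # generate one dark-shower event
```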