Alan J. Barr, Teng Jian Khoo, Partha Konar, Kyoungchul Kong, Christopher G. Lester, Konstantin T. Matchev, Myeonghun Park
We revisit the processes of transversification and agglomeration of particle momenta that are often performed in analyses at hadron colliders, and show that many of the existing mass-measurement variables proposed for hadron colliders are far more closely related to each other than is widely appreciated, and indeed can all be viewed as a common mass bound specialized for a variety of purposes.
Steven R. Jackson, Teng Jian Khoo, Frederick W. Strauch
Jun 14, 2012 · quant-ph
Quantum walks have been shown to have impressive transport properties compared to classical random walks. However, imperfections in the quantum walk algorithm can destroy any quantum mechanical speed-up due to Anderson localization. We numerically study the effect of static disorder on a quantum walk on the glued trees graph. For small disorder, we find that the dominant effect is a type of quantum decay, and not quantum localization. For intermediate disorder, there is a crossover to diffusive transport, while a localization transition is observed at large disorder, in agreement with Anderson localization on the Cayley tree.
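The core numerical experiment in the abstract above can be illustrated with a continuous-time quantum walk whose Hamiltonian is a graph adjacency matrix plus random on-site energies. This is a minimal sketch on a small line graph, not the glued-trees graph of the paper, and the disorder strength and evolution time are arbitrary toy values:

```python
import numpy as np

def ctqw_with_disorder(n_sites=8, disorder=0.5, t=1.0, seed=0):
    """Continuous-time quantum walk with static diagonal disorder.

    Toy stand-in for the paper's setup: the hopping part is the
    adjacency matrix of a path graph, and static disorder enters as
    random on-site energies on the diagonal.
    """
    rng = np.random.default_rng(seed)
    # Adjacency matrix of the path graph (nearest-neighbour hopping)
    H = np.diag(np.ones(n_sites - 1), 1) + np.diag(np.ones(n_sites - 1), -1)
    # Static disorder: random on-site energies
    H = H + np.diag(disorder * rng.uniform(-1.0, 1.0, n_sites))
    # Start the walker on the first site and evolve unitarily:
    # psi(t) = exp(-i H t) psi(0), via eigendecomposition of the
    # real symmetric Hamiltonian
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * t * evals)) @ evecs.T
    psi0 = np.zeros(n_sites, dtype=complex)
    psi0[0] = 1.0
    psi_t = U @ psi0
    return np.abs(psi_t) ** 2  # site occupation probabilities

probs = ctqw_with_disorder()
```

Sweeping `disorder` in such a toy model is how one would probe the ballistic-to-diffusive-to-localized crossover the abstract describes.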
Anthony Badea, William James Fawcett, John Huth, Teng Jian Khoo, Riccardo Poggi, Lawrence Lee
High-multiplicity signatures at particle colliders can arise in Standard Model processes and beyond. With such signatures, difficulties often arise from the large dimensionality of the kinematic space. For final states containing a single type of particle signature, this results in a combinatorial problem that hides underlying kinematic information. We explore using a neural network that includes a Lorentz Layer to extract high-dimensional correlations. We use the case of squark decays in $R$-Parity-violating Supersymmetry as a benchmark, comparing the performance to that of classical methods. With this approach, we demonstrate significant improvement over traditional methods.
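A "Lorentz Layer" of the kind mentioned above builds network features from Lorentz-invariant combinations of input four-vectors. The sketch below shows only the underlying idea, Minkowski inner products being frame-independent, with made-up four-momenta; it is not the paper's actual architecture:

```python
import numpy as np

METRIC = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def minkowski_dot(p, q):
    """Lorentz-invariant inner product of four-vectors (E, px, py, pz)."""
    return p @ METRIC @ q

def boost_z(p, beta):
    """Boost a four-vector along the z axis with velocity beta (c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    L = np.array([[gamma, 0.0, 0.0, -gamma * beta],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [-gamma * beta, 0.0, 0.0, gamma]])
    return L @ p

# Two toy four-momenta: their Minkowski dot is frame-independent,
# so it survives a boost unchanged (up to rounding)
p1 = np.array([5.0, 1.0, 2.0, 3.0])
p2 = np.array([4.0, 0.0, 1.0, 2.0])
invariant_before = minkowski_dot(p1, p2)
invariant_after = minkowski_dot(boost_z(p1, 0.6), boost_z(p2, 0.6))
```

Feeding such pairwise invariants into a network sidesteps the frame dependence of raw momentum components, which is one motivation for Lorentz-aware layers.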
Michael Gerbush, Teng Jian Khoo, Daniel Phalen, Aaron Pierce, David Tucker-Smith
Color-octet scalars, if present at the TeV scale, will be produced in abundance at the LHC. We discuss in some detail the phenomenology of scalars in the (8,2)_{1/2} representation, recently identified by Manohar and Wise as an addition to the standard-model Higgs sector consistent with the principle of minimal flavor violation. Couplings of this multiplet to the Higgs lift the mass degeneracy among its states, possibly allowing for two-body decays of a heavier colored scalar to a lighter one and a gauge boson. We perform a renormalization group analysis of these couplings and find that limits from Tevatron searches leave little room for these decays. This fact, and the assumption of minimal flavor violation, lead us to study the case where the octets decay to the heaviest kinematically accessible fermion pairs. Focusing on pair-production events leading to (t t-bar t t-bar), (b b-bar b b-bar), and (b b-bar t t-bar) final states, we find that discovery at the LHC should be possible up to masses exceeding 1 TeV.
Mohamed Aly, Jackson Burzynski, Bryan Cardwell, Daniel C. Craik, Tal van Daalen, Tomas Dado, Ayanabha Das, Antonio Delgado Peris, Caterina Doglioni, Peter Elmer, Engin Eren, Martin B. Eriksen, Jonas Eschle, Giulio Eulisse, Conor Fitzpatrick, José Flix Molina, Alessandra Forti, Ben Galewsky, Sean Gasiorowski, Aman Goel, Loukas Gouskos, Enrico Guiraud, Kanhaiya Gupta, Stephan Hageboeck, Allison Reinsvold Hall, Lukas Heinrich, Alexander Held, José M. Hernández, Michel Hernández Villanueva, Julius Hrivnac, Michel Jouvin, Teng Jian Khoo, Luke Kreczko, Nils Krumnack, Thomas Kuhr, Baidyanath Kundu, Eric Lancon, Johannes Lange, Paul Laycock, Kilian Lieret, Nicholas J. Manganelli, Pere Mato Villa, Andrzej Novak, Antonio Perez-Calero Yzquierdo, Jim Pivarski, Mason Proffitt, Jonas Rembser, Eduardo Rodrigues, Grigori Rybkin, Jana Schaarschmidt, Henry F. Schreiner, Markus Schulz, Andrea Sciabà, Sezen Sekmen, Elizabeth Sexton-Kennedy, Oksana Shadura, Tibor Simko, Nathan Simpson, Jaydip Singh, Nicola Skidmore, Nicholas Smith, Michael Sokoloff, Graeme A. Stewart, Giles C. Strong, Gokhan Unel, Vassil Vassilev, Mark Waterlaat, Gordon Watts, Efe Yazgan
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up HEP analysis to meet the needs of HL-LHC and DUNE, as well as the very pressing needs of LHC Run 3 analysis. The workshop was themed around six particular topics, which were felt to capture key questions, opportunities and challenges. Each topic arranged a plenary session introduction, often with speakers summarising the state of the art and the next steps for analysis. This was then followed by parallel sessions, which were much more discussion-focused, and where attendees could grapple with the challenges and propose solutions that could be tried. Where there was significant overlap between topics, a joint discussion between them was arranged. In the weeks following the workshop the session conveners wrote this document, which is a summary of the main discussions, the key points raised and the conclusions and outcomes. The document was circulated amongst the participants for comments before being finalised here.
William Balunas, Donatella Cavalli, Teng Jian Khoo, Matthew Klein, Peter Loch, Federica Piazza, Caterina Pizio, Silvia Resconi, Douglas Schaefer, Russell Smith, Sarah Williams
Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of "invisible" objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.
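The observable described above reduces, at its core, to the negative vector sum of the transverse momenta of the selected objects. The sketch below shows only that core idea with invented momenta; the full ATLAS computation described in the paper adds calibrated soft-term contributions, overlap resolution between object collections, and pileup suppression on top of this:

```python
import numpy as np

def missing_transverse_momentum(objects):
    """Magnitude of the negative vector sum of (px, py) pairs.

    Toy version of the MET computation: real reconstruction also
    handles soft terms, calibration and overlap between objects.
    """
    mpx = -sum(px for px, _ in objects)
    mpy = -sum(py for _, py in objects)
    return np.hypot(mpx, mpy)

# A perfectly balanced two-object event has (nearly) zero MET ...
met_balanced = missing_transverse_momentum([(50.0, 0.0), (-50.0, 0.0)])
# ... while a single unbalanced object implies MET recoiling against it
met_recoil = missing_transverse_momentum([(30.0, 40.0)])
```

The ambiguity the abstract refers to lies in deciding which reconstructed candidates enter the sum and with what calibration, which is exactly what the paper's encoding scheme addresses.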
HEP Software Foundation, :, Thea Aarrestad, Simone Amoroso, Markus Julian Atkinson, Joshua Bendavid, Tommaso Boccali, Andrea Bocci, Andy Buckley, Matteo Cacciari, Paolo Calafiura, Philippe Canal, Federico Carminati, Taylor Childers, Vitaliano Ciulli, Gloria Corti, Davide Costanzo, Justin Gage Dezoort, Caterina Doglioni, Javier Mauricio Duarte, Agnieszka Dziurda, Peter Elmer, Markus Elsing, V. Daniel Elvira, Giulio Eulisse, Javier Fernandez Menendez, Conor Fitzpatrick, Rikkert Frederix, Stefano Frixione, Krzysztof L Genser, Andrei Gheata, Francesco Giuli, Vladimir V. Gligorov, Hadrien Benjamin Grasland, Heather Gray, Lindsey Gray, Alexander Grohsjean, Christian Gütschow, Stephan Hageboeck, Philip Coleman Harris, Benedikt Hegner, Lukas Heinrich, Burt Holzman, Walter Hopkins, Shih-Chieh Hsu, Stefan Höche, Philip James Ilten, Vladimir Ivantchenko, Chris Jones, Michel Jouvin, Teng Jian Khoo, Ivan Kisel, Kyle Knoepfel, Dmitri Konstantinov, Attila Krasznahorkay, Frank Krauss, Benjamin Edward Krikler, David Lange, Paul Laycock, Qiang Li, Kilian Lieret, Miaoyuan Liu, Vladimir Loncar, Leif Lönnblad, Fabio Maltoni, Michelangelo Mangano, Zachary Louis Marshall, Pere Mato, Olivier Mattelaer, Joshua Angus McFayden, Samuel Meehan, Alaettin Serhan Mete, Ben Morgan, Stephen Mrenna, Servesh Muralidharan, Ben Nachman, Mark S. Neubauer, Tobias Neumann, Jennifer Ngadiuba, Isobel Ojalvo, Kevin Pedro, Maurizio Perini, Danilo Piparo, Jim Pivarski, Simon Plätzer, Witold Pokorski, Adrian Alan Pol, Stefan Prestel, Alberto Ribon, Martin Ritter, Andrea Rizzi, Eduardo Rodrigues, Stefan Roiser, Holger Schulz, Markus Schulz, Marek Schönherr, Elizabeth Sexton-Kennedy, Frank Siegert, Andrzej Siódmok, Graeme A Stewart, Malik Sudhir, Sioni Paris Summers, Savannah Jennifer Thais, Nhan Viet Tran, Andrea Valassi, Marc Verderi, Dorothea Vom Bruch, Gordon T. Watts, Torre Wenaus, Efe Yazgan
Common and community software packages, such as ROOT, Geant4 and event generators have been a key part of the LHC's success so far and continued development and optimisation will be critical in the future. The challenges are driven by an ambitious physics programme, notably the LHC accelerator upgrade to high-luminosity, HL-LHC, and the corresponding detector upgrades of ATLAS and CMS. In this document we address the issues for software that is used in multiple experiments (usually even more widely than ATLAS and CMS) and maintained by teams of developers who are either not linked to a particular experiment or who contribute to common software within the context of their experiment activity. We also give space to general considerations for future software and for projects that tackle upcoming challenges, regardless of who writes them; this is an area where community convergence on best practice is extremely useful.
Tim Schwägerl, Cigdem Issever, Karl Jansen, Teng Jian Khoo, Stefan Kühn, Cenk Tüysüz, Hannsjörg Weber
Mar 23, 2023 · quant-ph
The reconstruction of trajectories of charged particles is a key computational challenge for current and future collider experiments. Considering the rapid progress in quantum computing, it is crucial to explore its potential for this and other problems in high-energy physics. The problem can be formulated as a quadratic unconstrained binary optimization (QUBO) and solved using the variational quantum eigensolver (VQE) algorithm. In this work the effects of dividing the QUBO into smaller sub-QUBOs that fit on the hardware available currently or in the near term are assessed. Then, the performance of the VQE on small sub-QUBOs is studied in an ideal simulation, using a noise model mimicking a quantum device and on IBM quantum computers. This work serves as a proof of principle that the VQE could be used for particle tracking and investigates modifications of the VQE to make it more suitable for combinatorial optimization.
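The QUBO objective mentioned above is easy to state concretely: minimise x^T Q x over binary vectors x. The brute-force sketch below shows the objective the VQE approximates on small sub-QUBOs; the matrix is a made-up toy, not an actual tracking QUBO:

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Minimise x^T Q x over binary vectors x by exhaustive enumeration.

    Only feasible for tiny problems; this is exactly the regime of the
    'sub-QUBOs' that near-term quantum hardware can handle.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy QUBO: negative diagonal rewards selecting a variable,
# positive off-diagonal penalises selecting adjacent pairs together
Q = np.array([[-1.0, 2.0, 0.0],
              [2.0, -1.0, 2.0],
              [0.0, 2.0, -1.0]])
x_opt, e_opt = solve_qubo_brute_force(Q)
```

In tracking applications, the diagonal typically scores how track-like a hit triplet is, and the off-diagonal terms penalise incompatible combinations.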
B. C. Allanach, T. J. Khoo, C. G. Lester, S. L. Williams
Recent ATLAS data significantly extend the exclusion limits for supersymmetric particles. We examine the impact of such data on global fits of the constrained minimal supersymmetric standard model (CMSSM) to indirect and cosmological data. We calculate the likelihood map of the ATLAS search, taking into account systematic errors on the signal and on the background. We validate our calculation against the ATLAS determination of 95% confidence level exclusion contours. A previous CMSSM global fit is then re-weighted by the likelihood map, which takes a bite out of the high probability density region of the global fit, pushing scalar and gaugino masses up.
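The re-weighting step described above is standard Bayesian updating: each sample of the previous global fit gets its weight multiplied by the new search likelihood at that parameter point. A minimal sketch with invented weights and likelihoods:

```python
import numpy as np

def reweight(prior_weights, likelihoods):
    """Re-weight global-fit samples by an additional likelihood map.

    posterior_weight_i is proportional to prior_weight_i * L_i;
    the returned weights are normalised to sum to one.
    """
    w = np.asarray(prior_weights) * np.asarray(likelihoods)
    return w / w.sum()

# Points disfavoured by the new search (low L) lose posterior mass,
# which is how the search 'takes a bite out of' the fit
post = reweight([0.25, 0.25, 0.25, 0.25], [1.0, 1.0, 0.1, 0.1])
```

If the low-likelihood points sit at low scalar and gaugino masses, this re-weighting is what pushes the preferred masses up, as the abstract states.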
B. C. Allanach, T. J. Khoo, K. Sakurai
Recent LHC data significantly extend the exclusion limits for supersymmetric particles, particularly in the jets plus missing transverse momentum channels. The most recent such data have so far been interpreted by the experiment in only two different supersymmetry breaking models: the constrained minimal supersymmetric standard model (CMSSM) and a simplified model with only squarks and gluinos and massless neutralinos. We compare kinematical distributions of supersymmetric signal events predicted by the CMSSM and anomaly mediated supersymmetry breaking (mAMSB) before calculating exclusion limits in mAMSB. We obtain a lower limit of 900 GeV on equal squark and gluino masses at the 95% confidence level, for tan(beta)=10 and mu>0.
A. J. Barr, T. J. Khoo, P. Konar, K. Kong, C. G. Lester, K. T. Matchev, M. Park
This paper seeks to demonstrate that many of the existing mass-measurement variables proposed for hadron colliders (mT, mEff, mT2, missing pT, hT, rootsHatMin, etc.) are far more closely related to each other than is widely appreciated, and indeed can all be viewed as a common mass bound specialized for a variety of purposes. A consequence of this is that one may understand better the strengths and weaknesses of each variable, and the circumstances in which each can be used to best effect. In order to achieve this, we find it necessary first to revisit the seemingly empty and infertile wilderness populated by the subscript "T" (as in pT) in order to remind ourselves what this process of transversification actually means. We note that, far from being simple, transversification can mean quite different things to different people. Those readers who manage to battle through the barrage of transverse notation distinguishing mass-preserving projections from velocity-preserving projections, and `early projection' from `late projection', will find their efforts rewarded towards the end of the paper with (i) a better understanding of how collider mass variables fit together, (ii) an appreciation of how these variables could be generalized to search for things more complicated than supersymmetry, (iii) an aversion to thoughtless or naive use of the so-called `transverse' methods of any of the popular computer Lorentz-vector libraries, and (iv) the care to be explicit in their subsequent papers about which of the 61 identified variants of the `transverse mass' they are employing.
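As a concrete point of reference for the family of variables discussed above, the textbook transverse mass (the variable used in W-boson mass measurements, and one of the simplest members of this family) is the standard definition below; this is not any particular one of the 61 variants catalogued in the paper:

```latex
m_T^2 = m_1^2 + m_2^2
      + 2\left( E_{T,1} E_{T,2} - \vec{p}_{T,1} \cdot \vec{p}_{T,2} \right),
\qquad
E_{T,i} = \sqrt{m_i^2 + |\vec{p}_{T,i}|^2} .
```

Whether the masses and transverse energies here are evaluated before or after projecting the momenta onto the transverse plane is precisely the mass-preserving versus velocity-preserving distinction, and the early- versus late-projection distinction, that the abstract alludes to.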
T. J. Khoo, A. Reinsvold Hall, N. Skidmore, S. Alderweireldt, J. Anders, C. Burr, W. Buttinger, P. David, L. Gouskos, L. Gray, S. Hageboeck, A. Krasznahorkay, P. Laycock, A. Lister, Z. Marshall, A. B. Meyer, T. Novak, S. Rappoccio, M. Ritter, E. Rodrigues, J. Rumsevicius, L. Sexton-Kennedy, N. Smith, G. A. Stewart, S. Wertz
In High Energy Physics (HEP), analysis metadata comes in many forms -- from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.