Andrea Addazi, Andy Buckley, Jose Bellido, Zhen Cao, Ruben Conceição, Lorenzo Cazon, Armando di Matteo, Bruce Dawson, Kasumasa Kawata, Paolo Lipari, Analiza Mariazzi, Marco Muzio, Shoichi Ogio, Sergey Ostapchenko, Mário Pimenta, Tanguy Pierog, Andres Romero-Wolf, Felix Riehn, David Schmidt, Eva Santos, Frank Schroeder, Karen Caballero-Mora, Pat Scott, Takashi Sako, Carlos Todero Peixoto, Ralf Ulrich, Darko Veberic, Martin White
Air showers, produced by the interaction of energetic cosmic rays with the atmosphere, offer an excellent alternative for studying particle physics at energies beyond the reach of any human-made accelerator. This first requires identifying the mass composition and energy of the primary cosmic ray. However, none of the existing high-energy interaction models can coherently reproduce all air-shower observables over the full energy and zenith-angle phase space, even when every possible cosmic-ray mass composition is assumed. This proposal outlines a self-consistent strategy to study high-energy particle interactions and to identify the energy spectra and mass composition of cosmic rays. The strategy involves both particle accelerators and astroparticle experiments, which is essential to cover the entire cosmic-ray energy range and a larger phase space of shower observables with which to probe the high-energy interaction models.
Christian Bierlich, Andy Buckley, Christian Holm Christensen, Peter Harald Lindenov Christiansen, Cody B. Duncan, Jan Fiete Grosse-Oetringhaus, Przemyslaw Karczmarczyk, Patrick Kirchgaeßer, Jochen Klein, Leif Lönnblad, Roberto Preghenella, Christine O. Rasmussen, Maria Stefaniak, Vytautas Vislavicus
The Rivet library is an important toolkit in particle physics, serving as a repository for analysis data and code and enabling comparisons between data and theoretical calculations of the final state of collision events. This paper outlines several recent additions and improvements to the framework that add support for analysing simulated heavy-ion collision data, and presents examples of these developments applied to concrete physics analyses.
Andy Buckley, Anders Kvellestad, Are Raklev, Pat Scott, Jon Vegard Sparre, Jeriek Van den Abeele, Ingrid A. Vazquez-Holm
The evaluation of higher-order cross-sections is an important component in the search for new physics, both at hadron colliders and elsewhere. For most new physics processes of interest, total cross-sections are known at next-to-leading order (NLO) in the strong coupling $α_s$, and often beyond, via either higher-order terms at fixed powers of $α_s$, or multi-emission resummation. However, the computation of such higher-order cross-sections is prohibitively expensive, precluding efficient evaluation in parameter-space scans beyond two dimensions. Here we describe the software tool $\textsf{xsec}$, which allows for fast evaluation of cross-sections based on the use of machine-learning regression, using distributed Gaussian processes trained on a pre-generated sample of parameter points. This first version of the code provides all NLO Minimal Supersymmetric Standard Model strong-production cross-sections at the LHC, for individual flavour final states, evaluated in a fraction of a second. Moreover, it calculates regression errors, as well as estimates of errors from higher-order contributions, from uncertainties in the parton distribution functions, and from the value of $α_s$. While we focus on a specific phenomenological model of supersymmetry, the method readily generalises to any process where it is possible to generate a sufficient training sample.
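The Gaussian-process regression underlying this approach can be illustrated with a minimal one-dimensional sketch. Everything here is an invented toy (the input variable, the kernel length scale, and the training values are placeholders), not the actual xsec training data or API:

```python
import numpy as np

# Toy training set: a single parameter (e.g. a log mass) mapped to a
# log cross-section that falls roughly linearly, with small noise.
rng = np.random.default_rng(1)
X_train = np.linspace(2.0, 3.5, 8)
y_train = -4.0 * X_train + 6.0 + rng.normal(0.0, 0.05, X_train.size)

def rbf(a, b, length=0.5, amp=1.0):
    """Squared-exponential (RBF) kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

# Precompute the regularised kernel matrix and its solve against y.
noise = 0.05 ** 2
K = rbf(X_train, X_train) + noise * np.eye(X_train.size)
K_inv = np.linalg.inv(K)
K_inv_y = K_inv @ y_train

def predict(x_new):
    """GP posterior mean and standard deviation at new points."""
    x_new = np.atleast_1d(x_new)
    k_star = rbf(x_new, X_train)
    mean = k_star @ K_inv_y
    var = rbf(x_new, x_new).diagonal() \
          - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
    return mean, np.sqrt(np.maximum(var, 0.0))

mean, err = predict(2.75)
```

The regression error reported alongside the prediction comes from the posterior variance, which grows away from the training points; xsec's "distributed" aspect (splitting the training sample across several such experts) is omitted here for brevity.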
Johannes Bellm, Andy Buckley, Xuan Chen, Aude Gehrmann-De Ridder, Thomas Gehrmann, Nigel Glover, Alexander Huss, Joao Pires, Stefan Höche, Joey Huston, Silvan Kuttimalai, Simon Plätzer, Emanuele Re
We perform a phenomenological study of $Z$ plus jet, Higgs plus jet and di-jet production at the Large Hadron Collider. We investigate in particular the dependence of the leading jet cross section on the jet radius as a function of the jet transverse momentum. Theoretical predictions are obtained using perturbative QCD calculations at next-to- and next-to-next-to-leading order, using a range of renormalization and factorization scales. The fixed-order predictions are compared to results obtained from matching next-to-leading order calculations to parton showers. A study of the scale dependence as a function of the jet radius is used to provide a better estimate of the scale uncertainty for small jet sizes. The non-perturbative corrections as a function of jet radius are estimated from different generators.
The GAMBIT Collaboration, Peter Athron, Csaba Balázs, Torsten Bringmann, Andy Buckley, Marcin Chrząszcz, Jan Conrad, Jonathan M. Cornell, Lars A. Dal, Joakim Edsjö, Ben Farmer, Paul Jackson, Felix Kahlhoefer, Abram Krislock, Anders Kvellestad, James McKay, Farvah Mahmoudi, Gregory D. Martinez, Antje Putze, Are Raklev, Christopher Rogan, Aldo Saavedra, Christopher Savage, Pat Scott, Nicola Serra, Christoph Weniger, Martin White
One of the simplest viable models for dark matter is an additional neutral scalar, stabilised by a $\mathbb{Z}_2$ symmetry. Using the GAMBIT package and combining results from four independent samplers, we present Bayesian and frequentist global fits of this model. We vary the singlet mass and coupling along with 13 nuisance parameters, including nuclear uncertainties relevant for direct detection, the local dark matter density, and selected quark masses and couplings. We include the dark matter relic density measured by Planck, direct searches with LUX, PandaX, SuperCDMS and XENON100, limits on invisible Higgs decays from the Large Hadron Collider, searches for high-energy neutrinos from dark matter annihilation in the Sun with IceCube, and searches for gamma rays from annihilation in dwarf galaxies with the Fermi-LAT. Viable solutions remain at couplings of order unity, for singlet masses between the Higgs mass and about 300 GeV, and at masses above $\sim$1 TeV. Only in the latter case can the scalar singlet constitute all of dark matter. Frequentist analysis shows that the low-mass resonance region, where the singlet is about half the mass of the Higgs, can also account for all of dark matter, and remains viable. However, Bayesian considerations show this region to be rather fine-tuned.
Andy Buckley
I describe the work of the CEDAR collaboration in developing tools for tuning and validating Monte Carlo event generator programs. The core CEDAR task is to interface the Durham HepData database of experimental measurements to event generator validation tools such as the UCL JetWeb system - this has necessitated the migration of HepData to a new relational database system and a Java-based interaction model. The "number crunching" part of JetWeb is also being upgraded, from the Fortran HZTool library to the new C++ Rivet system and a generator interfacing layer named RivetGun. Finally, I describe how Rivet is already being used as a central part of a new generator tuning system, and summarise two other CEDAR activities, HepML and HepForge.
Andy Buckley, Mike Whalley
We describe the status of the HepData database system, following a major re-development in time for the advent of LHC data. The new HepData system benefits from use of modern database and programming language technologies, as well as a variety of high-quality tools for interfacing the data sources and their presentation, primarily via the Web. The new back-end provides much more flexible and semantic data representations than before, on which new external applications can be built to respond to the data demands of the LHC experimental era. The HepData re-development was largely motivated by a desire to have a single source of reference data for Monte Carlo validation and tuning tools, whose status and connection to HepData we also briefly review.
Andy Buckley, Hendrik Hoeth, Heiko Lacker, Holger Schulz, Jan Eike von Seggern
Data analyses in hadron collider physics depend on background simulations performed by Monte Carlo (MC) event generators. However, calculational limitations and non-perturbative effects require approximate models with adjustable parameters, and many phenomenological parameters must be tuned simultaneously in a high-dimensional parameter space to make the MC generator predictions fit the data. Ideally this should be achieved without expending excessive time and computing resources on iterating parameter settings and re-comparing the same set of plots. We present extensions and improvements to the MC tuning system Professor, which addresses these problems by constructing a fast analytic model of an MC generator which can then be easily fitted to data. Using this procedure it is possible for the first time to obtain a robust estimate of the uncertainty of generator tunings. Furthermore, these uncertainty estimates can be used to study the effect of new (pseudo-)data on the quality of tunings, and hence to decide whether a measurement is worthwhile from the perspective of generator tuning. The potential of the Professor method outside the MC tuning area is presented as well.
Andy Buckley, Jon Butterworth, Joseph Egan, Christian Gutschow, Sihyun Jeon, Martin Habedank, Tomasz Procter, Peng Wang, Yoran Yeh, Luzhan Yue
The CONTUR toolkit exploits RIVET and its library of more than a thousand energy-frontier differential cross-section measurements from the Large Hadron Collider to allow rapid limit-setting and consistency checks for new physics models. In this note we summarise the main changes in the new CONTUR 3 major release series. These include additional statistical treatments, efficiency improvements, new plotting utilities and many new measurements and Standard Model predictions.
Andy Buckley, Hendrik Hoeth, Heiko Lacker, Holger Schulz, Jan Eike von Seggern
In this article we describe Professor, a new program for tuning model parameters of Monte Carlo event generators to experimental data by parameterising the per-bin generator response to parameter variations and numerically optimising the parameterised behaviour. Simulated experimental analysis data is obtained using the Rivet analysis toolkit. This paper presents the Professor procedure and implementation, illustrated with the application of the method to tunes of the Pythia 6 event generator to data from the LEP/SLD and Tevatron experiments. These tunes are substantial improvements on existing standard choices, and are recommended as base tunes for LHC experiments, to be themselves systematically improved upon when early LHC data is available.
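The core of the Professor procedure described above (parameterising the per-bin generator response and then optimising the parameterisation) can be sketched in a few lines. This is an illustrative toy, assuming an invented two-parameter "generator" and a quadratic response surface, not the Professor implementation itself:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

# Stand-in "generator": one observable bin whose value depends
# quadratically on two tuning parameters, plus statistical noise.
def mc_bin(p):
    return 1.0 + 0.8 * p[0] - 0.5 * p[1] + 0.3 * p[0] * p[1] \
           + rng.normal(0.0, 0.01)

# Sample the parameter space and record the bin response at each point.
points = rng.uniform(-1.0, 1.0, size=(40, 2))
values = np.array([mc_bin(p) for p in points])

# Build a quadratic basis (1, p0, p1, p0^2, p0*p1, p1^2) and fit its
# coefficients per bin by least squares -- the parameterisation step.
def basis(p):
    terms = [1.0]
    for order in (1, 2):
        for idx in combinations_with_replacement(range(len(p)), order):
            terms.append(np.prod([p[i] for i in idx]))
    return np.array(terms)

A = np.array([basis(p) for p in points])
coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)

# The fitted surrogate can now replace the expensive generator inside a
# numerical chi^2 minimisation against data; here we just evaluate it.
approx = basis([0.3, -0.2]) @ coeffs
```

In the real tool this fit is performed independently for every bin of every observable, and the cheap analytic surrogates are then optimised jointly against the reference data.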
Andy Buckley, Jonathan Butterworth, Stefan Gieseke, David Grellscheid, Stefan Hoche, Hendrik Hoeth, Frank Krauss, Leif Lonnblad, Emily Nurse, Peter Richardson, Steffen Schumann, Michael H. Seymour, Torbjorn Sjostrand, Peter Skands, Bryan Webber
We review the physics basis, main features and use of general-purpose Monte Carlo event generators for the simulation of proton-proton collisions at the Large Hadron Collider. Topics included are: the generation of hard-scattering matrix elements for processes of interest, at both leading and next-to-leading QCD perturbative order; their matching to approximate treatments of higher orders based on the showering approximation; the parton and dipole shower formulations; parton distribution functions for event generators; non-perturbative aspects such as soft QCD collisions, the underlying event and diffractive processes; the string and cluster models for hadron formation; the treatment of hadron and tau decays; the inclusion of QED radiation and beyond-Standard-Model processes. We describe the principal features of the ARIADNE, Herwig++, PYTHIA 8 and SHERPA generators, together with the Rivet and Professor validation and tuning tools, and discuss the physics philosophy behind the proper use of these generators and tools. This review is aimed at phenomenologists wishing to understand better how parton-level predictions are translated into hadron-level events as well as experimentalists wanting a deeper insight into the tools available for signal and background simulation at the LHC.
Jack Y. Araz, Andy Buckley, Benjamin Fuks
High-momentum top quarks are a natural physical system in collider experiments for testing models of new physics, and jet substructure methods are key both to exploiting their largest decay mode and to assuaging resolution difficulties as the boosted system becomes increasingly collimated in the detector. To be used in new-physics interpretation studies, it is crucial that related methods get implemented in analysis frameworks allowing for the reinterpretation of the results of the LHC such as MadAnalysis 5 and Rivet. We describe the implementation of the HEPTopTagger algorithm in these two frameworks, and we exemplify the usage of the resulting functionalities to explore the sensitivity of boosted top reconstruction performance to new physics contributions from the Standard Model Effective Field Theory. The results of this study lead to important conclusions about the implicit assumption of Standard-Model-like top-quark decays in associated collider analyses, and for the prospects to constrain the Standard Model Effective Field Theory via kinematic observables built from boosted semileptonic top-antitop events selected using HEPTopTagger.
Andy Buckley, Deepak Kar, Sukanya Sinha
As no evidence for classic WIMP-based signatures of dark matter has been found at the LHC, several phenomenological studies have raised the possibility of accessing a strongly interacting dark sector through new collider-event topologies. If dark mesons exist, their evolution and hadronisation are currently little constrained. They could decay promptly and produce QCD-like jet structures, even though the original decaying particles belong to the dark sector; they could behave as semi-visible jets; or they could be completely detector-stable hadrons, in which case the final state is just missing transverse momentum. In this contribution we introduce a study exploring the use of jet-substructure methods to distinguish dark-sector jets from QCD jets in the first two scenarios, using observables in an IRC-safe linear basis, and discuss ways forward for this approach to dark matter at the LHC.
Jack Y. Araz, Andy Buckley, Benjamin Fuks, Humberto Reyes-Gonzalez, Wolfgang Waltenberger, Sophie L. Williamson, Jamie Yellen
To gain a comprehensive view of what the LHC tells us about physics beyond the Standard Model (BSM), it is crucial that different BSM-sensitive analyses can be combined. But in general, search analyses are not statistically orthogonal, so performing comprehensive combinations requires knowledge of the extent to which the same events co-populate multiple analyses' signal regions. We present a novel, stochastic method to determine this degree of overlap and a graph algorithm to efficiently find the combination of signal regions with no mutual overlap that optimises expected upper limits on BSM-model cross-sections. The gain in exclusion power relative to single-analysis limits is demonstrated with models with varying degrees of complexity, ranging from simplified models to a 19-dimensional supersymmetric model.
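The combination step described above can be illustrated with a brute-force toy. The signal-region names, sensitivities and overlap fractions below are invented placeholders, and the exhaustive subset search stands in for the paper's more efficient graph algorithm:

```python
from itertools import combinations

# Hypothetical signal regions with expected sensitivities, and pairwise
# overlap fractions estimated from shared simulated events.
sens = {"SR-A": 2.1, "SR-B": 1.7, "SR-C": 1.2, "SR-D": 0.9}
overlap = {("SR-A", "SR-B"): 0.4, ("SR-B", "SR-C"): 0.2}  # others ~ 0

def compatible(group, threshold=0.01):
    """A group is combinable if no pair shares non-negligible overlap."""
    return all(overlap.get(tuple(sorted(pair)), 0.0) <= threshold
               for pair in combinations(group, 2))

# Search all subsets for the best mutually exclusive combination, scoring
# by sensitivities added in quadrature (a common approximation for
# independent regions).
best, best_score = (), 0.0
regions = list(sens)
for r in range(1, len(regions) + 1):
    for group in combinations(regions, r):
        if compatible(group):
            score = sum(sens[s] ** 2 for s in group) ** 0.5
            if score > best_score:
                best, best_score = group, score
```

Here the overlapping pairs exclude each other, so the optimal combination drops SR-B in favour of the three mutually exclusive regions. For realistic numbers of signal regions the exhaustive loop is replaced by a weighted independent-set search on the overlap graph.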
Enrico Bothmann, Andy Buckley, Ilektra A. Christidi, Christian Gütschow, Stefan Höche, Max Knobbe, Tim Martin, Marek Schönherr
Poor computing efficiency of precision event generators for LHC physics has become a bottleneck for Monte-Carlo event simulation campaigns. We provide solutions to this problem by focusing on two major components of general-purpose event generators: The PDF evaluator and the matrix-element generator. For a typical production setup in the ATLAS experiment, we show that the two can consume about 80% of the total runtime. Using NLO simulations of $pp\to\ell^+\ell^-+\text{jets}$ and $pp\to t\bar{t}+\text{jets}$ as an example, we demonstrate that the computing footprint of LHAPDF and Sherpa can be reduced by factors of order 10, while maintaining the formal accuracy of the event sample. The improved codes are made publicly available.
Andy Buckley, Matthew Citron, Sylvain Fichet, Sabine Kraml, Wolfgang Waltenberger, Nicholas Wardle
We discuss the simplified likelihood framework as a systematic approximation scheme for experimental likelihoods such as those originating from LHC experiments. We develop the simplified likelihood from the Central Limit Theorem keeping the next-to-leading term in the large $N$ expansion to correctly account for asymmetries. Moreover, we present an efficient method to compute the parameters of the simplified likelihood from Monte Carlo simulations. The approach is validated using a realistic LHC-like analysis, and the limits of the approximation are explored. Finally, we discuss how the simplified likelihood data can be conveniently released in the HepData error source format and automatically built from it, making this framework a convenient tool to transmit realistic experimental likelihoods to the community.
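The idea of extracting simplified-likelihood parameters from Monte Carlo simulations can be sketched at leading (symmetric Gaussian) order; the asymmetry-correcting next-to-leading term in the large-$N$ expansion is omitted here. All yields, covariances and the signal template below are invented toy numbers, not taken from any real analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy MC: background yields in three correlated bins, drawn from a
# multivariate normal with an assumed "true" mean and covariance.
mean_true = np.array([50.0, 30.0, 10.0])
cov_true = np.array([[25.0, 10.0, 2.0],
                     [10.0, 16.0, 3.0],
                     [ 2.0,  3.0, 4.0]])
toys = rng.multivariate_normal(mean_true, cov_true, size=20000)

# Estimate the simplified-likelihood parameters from the toy sample:
# central values and covariance of the background expectation.
bkg = toys.mean(axis=0)
cov = np.cov(toys, rowvar=False)
cov_inv = np.linalg.inv(cov)

def nll(mu, signal, observed):
    """Gaussian negative log-likelihood for signal strength mu."""
    resid = observed - (bkg + mu * signal)
    return 0.5 * resid @ cov_inv @ resid

# Simple 1D scan over the signal strength for an invented signal template.
signal = np.array([5.0, 3.0, 1.0])
observed = np.array([52.0, 31.0, 10.0])
mus = np.linspace(-2.0, 2.0, 401)
best_mu = mus[np.argmin([nll(m, signal, observed) for m in mus])]
```

In the full framework the third moments of the toy distribution supply the asymmetry term, and the estimated parameters are what would be published, e.g. in the HepData error-source format.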
Andy Buckley, Frank Krauss, Simon Plätzer, Michael Seymour, Simone Alioli, Jeppe Andersen, Johannes Bellm, Jon Butterworth, Mrinal Dasgupta, Claude Duhr, Stefano Frixione, Stefan Gieseke, Keith Hamilton, Gavin Hesketh, Stefan Hoeche, Hannes Jung, Wolfgang Kilian, Leif Lönnblad, Fabio Maltoni, Michelangelo Mangano, Stephen Mrenna, Zoltán Nagy, Paolo Nason, Emily Nurse, Thorsten Ohl, Carlo Oleari, Andreas Papaefstathiou, Tilman Plehn, Stefan Prestel, Emanuele Ré, Juergen Reuter, Peter Richardson, Gavin Salam, Marek Schoenherr, Steffen Schumann, Frank Siegert, Andrzej Siódmok, Malin Sjödahl, Torbjörn Sjöstrand, Peter Skands, Davison Soper, Gregory Soyez, Bryan Webber
Monte Carlo event generators (MCEGs) are the indispensable workhorses of particle physics, bridging the gap between theoretical ideas and first-principles calculations on the one hand, and the complex detector signatures and data of the experimental community on the other. All collider physics experiments depend on simulated events from MCEG codes such as Herwig, Pythia, Sherpa, POWHEG, and MG5_aMC@NLO to design and tune their detectors and analysis strategies. The development of MCEGs is overwhelmingly driven by a vibrant community of academics at European universities, who also train the next generations of particle phenomenologists. The new challenges posed by possible future collider-based experiments, and the fact that the first analyses at Run II of the LHC are now frequently limited by theory uncertainties, urge the community to invest in further theoretical and technical improvements of these essential tools. In this short contribution to the European Strategy Update, we briefly review the state of the art, and the further developments that will be needed to meet the challenges of the next generation.
Andy Buckley, Holger Schulz
MC models of multiple partonic scattering inevitably introduce many free parameters, either fundamental to the models or from their integration with MC treatments of primary-scattering evolution. This non-perturbative and non-factorisable physics in particular cannot currently be constrained from theoretical principles, and hence parameter optimisation against experimental data is required. This process is commonly referred to as MC tuning. We summarise the principles, problems and history of MC tuning, and the still-evolving modern approach to both model optimisation and estimation of modelling uncertainties.
Stephen Brown, Andy Buckley, Christoph Englert, James Ferrando, Peter Galler, David J Miller, Liam Moore, Michael Russell, Chris White, Neil Warrack
We describe the latest TopFitter analysis, which uses top quark observables to fit the Wilson Coefficients of the SM augmented with dimension-6 operators. In particular, we discuss the inclusion of new LHC Run II data and the implementation of particle-level observables.
Andy Buckley, Deepak Kar, Karl Nordstrom
We describe the design and implementation of detector-bias emulation in the Rivet MC event analysis system. Implemented using C++ efficiency and kinematic smearing functors, it allows detector effects to be specified within an analysis routine, customised to the exact phase-space and reconstruction working points of the analysis. A set of standard detector functions for the physics objects of Runs 1 and 2 of the ATLAS and CMS experiments is also provided. Finally, as jet substructure is an important class of physics observable usually considered to require an explicit detector simulation, we demonstrate that a smearing approach, tuned to available substructure data and implemented in Rivet, can accurately reproduce jet-substructure biases observed by ATLAS.
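The efficiency-plus-smearing pattern described above can be sketched in a few lines. This is a Python toy mirroring the functor idea rather than the actual Rivet C++ API, and the efficiency curve and resolution below are invented placeholders, not the shipped ATLAS/CMS parameterisations:

```python
import random

random.seed(7)

def jet_efficiency(pt):
    """Toy reconstruction efficiency vs jet pT [GeV]: a hard turn-on at
    20 GeV rising to a 97% plateau.  Numbers are illustrative only."""
    return 0.0 if pt < 20.0 else min(0.97, 0.5 + 0.01 * pt)

def jet_smear(pt, resolution=0.10):
    """Gaussian pT smearing with a flat 10% fractional resolution."""
    return max(0.0, random.gauss(pt, resolution * pt))

def apply_detector(jet_pts):
    """Keep each jet with probability given by its efficiency functor,
    then smear the surviving jets' pT with the smearing functor."""
    reco = []
    for pt in jet_pts:
        if random.random() < jet_efficiency(pt):
            reco.append(jet_smear(pt))
    return reco

# Truth-level jets at 15, 45 and 120 GeV; the 15 GeV jet can never be
# reconstructed with this efficiency curve.
reco = apply_detector([15.0, 45.0, 120.0])
```

In Rivet itself the analogous functors are attached to smeared-object projections per analysis, so each routine can use efficiency maps and resolutions matched to its own phase space and working points.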