Guillaume Chalons, Mark D. Goodsell, Sabine Kraml, Humberto Reyes-González, Sophie L. Williamson
Dirac gauginos are a well-motivated extension of the MSSM, leading to interesting phenomenological consequences. At the LHC, gluino-pair production is enhanced while squark production is suppressed as compared to the MSSM, and the decay signatures are altered by a more complex chargino and neutralino spectrum. We investigate how this impacts current gluino and squark mass limits from Run~2 of the LHC. Concretely, we compare different assumptions about the electroweak-ino spectrum through four benchmark models, paying particular attention to the effect of the trilinear $\lambda_S$ coupling, which induces a mass splitting between the mostly bino/U(1) adjoint states. Among other results, we show that for large $\lambda_S$ the additional $\tilde\chi^0_2 \to f\bar{f}\,\tilde\chi^0_1$ decays somewhat weaken the limits on gluinos (squarks) in the case of heavy squarks (gluinos). Moreover, we compare the limits in the gluino vs. squark mass plane to those obtained in equivalent MSSM scenarios.
Mark D. Goodsell, Sabine Kraml, Humberto Reyes-González, Sophie L. Williamson
Supersymmetric models with Dirac instead of Majorana gaugino masses have distinct phenomenological consequences. In this paper, we investigate the electroweakino sector of the Minimal Dirac Gaugino Supersymmetric Standard Model (MDGSSM) with regard to dark matter (DM) and collider constraints. We delineate the parameter space where the lightest neutralino of the MDGSSM is a viable DM candidate, making up at least part of the observed relic abundance while evading constraints from DM direct detection, LEP and low-energy data, and LHC Higgs measurements. The collider phenomenology of the resulting scenarios is characterised by a richer electroweakino spectrum than in the Minimal Supersymmetric Standard Model (MSSM) -- 6 neutralinos and 3 charginos instead of 4 and 2 -- naturally small mass splittings, and the frequent presence of long-lived particles, charginos and/or neutralinos. Reinterpreting ATLAS and CMS analyses with the help of SModelS and MadAnalysis 5, we discuss the sensitivity of existing LHC searches for new physics to these scenarios and show which cases can be constrained and which escape detection. Finally, we propose a set of benchmark points which can be useful for further studies, for designing dedicated experimental analyses, and/or for investigating the potential of future experiments.
Gaël Alguero, Jan Heisig, Charanjit Khosa, Sabine Kraml, Suchita Kulkarni, Andre Lessa, Humberto Reyes-González, Wolfgang Waltenberger, Alicia Wongel
We present version 2 of SModelS, a program package for the fast reinterpretation of LHC searches for new physics on the basis of simplified model results. The major novelty of the SModelS v2 series is an extended topology description with a flexible number of particle attributes, such as spin, charge, decay width, etc. This enables, in particular, the treatment of a wide range of signatures with long-lived particles. Moreover, constraints from prompt and long-lived searches can be evaluated simultaneously in the same run. The current database includes results from searches for heavy stable charged particles, disappearing tracks, displaced jets and displaced leptons, in addition to a large number of prompt searches. The capabilities of the program are demonstrated by two physics applications: constraints on long-lived charged scalars in the scotogenic model, and constraints on the electroweak-ino sector in the Minimal Supersymmetric Standard Model.
Humberto Reyes-Gonzalez, Riccardo Torre
Normalizing Flows (NFs) are emerging as a powerful class of generative models: they not only allow for efficient sampling but also deliver, by construction, density estimation. This makes them highly attractive in High Energy Physics (HEP), where complex, high-dimensional data and probability distributions are ubiquitous. However, in order to fully leverage the potential of NFs, it is crucial to explore their robustness as the data dimensionality increases. Thus, in this contribution, we discuss the performance of some of the most popular types of NFs on toy data sets of increasing dimensionality.
Guillaume Chalons, Mark Goodsell, Sabine Kraml, Humberto Reyes-González, Sophie L. Williamson
Most SUSY searches at the LHC are optimised for the MSSM, where gauginos are Majorana particles. By introducing Dirac gauginos, we obtain an enriched phenomenology, from which considerable differences in the LHC signatures and limits are expected as compared to the MSSM. Concretely, in the minimal Dirac gaugino model (MDGSSM) we have six neutralino and three chargino states. Moreover, production cross sections are enhanced for gluinos, while for squarks they are suppressed. In this contribution, we explore the consequences of the current LHC limits on gluinos and squarks in this model.
Gaël Alguero, Jan Heisig, Charanjit K. Khosa, Sabine Kraml, Suchita Kulkarni, Andre Lessa, Philipp Neuhuber, Humberto Reyes-González, Wolfgang Waltenberger, Alicia Wongel
SModelS is an automated tool enabling the fast interpretation of simplified model results from the LHC within any model of new physics respecting a $\mathbb{Z}_2$ symmetry. In this contribution, we report on two important updates of SModelS during 2020: the extension of the SModelS database with 13 ATLAS and 10 CMS analyses, including 5 ATLAS and 1 CMS analyses at full Run~2 luminosity, and the ability to use full likelihoods now provided by ATLAS in the form of pyhf JSON files. Moreover, we briefly explain how to use SModelS and give an overview of ongoing developments.
Humberto Reyes-Gonzalez, Riccardo Torre
We propose the NFLikelihood, an unsupervised version, based on Normalizing Flows, of the DNNLikelihood proposed in Ref. [1]. We show, through realistic examples, how autoregressive flows, based on affine and rational quadratic spline bijectors, are able to learn complicated high-dimensional likelihoods arising in High Energy Physics (HEP) analyses. We focus on a toy LHC analysis example already considered in the literature and on two Effective Field Theory fits of flavor and electroweak observables, whose samples have been obtained through the HEPFit code. We discuss the advantages and disadvantages of the unsupervised approach with respect to the supervised one, as well as possible interplays between the two.
Oz Amram, Luca Anzalone, Joschka Birk, Darius A. Faroughy, Anna Hallin, Gregor Kasieczka, Michael Krämer, Ian Pang, Humberto Reyes-Gonzalez, David Shih
Foundation models are deep learning models pre-trained on large amounts of data which are capable of generalizing to multiple datasets and/or downstream tasks. This work demonstrates how data collected by the CMS experiment at the Large Hadron Collider can be useful in pre-training foundation models for HEP. Specifically, we introduce the AspenOpenJets dataset, consisting of approximately 178M high $p_T$ jets derived from CMS 2016 Open Data. We show how pre-training the OmniJet-$α$ foundation model on AspenOpenJets improves performance on generative tasks with significant domain shift: generating boosted top and QCD jets from the simulated JetClass dataset. In addition to demonstrating the power of pre-training a jet-based foundation model on actual proton-proton collision data, we provide the ML-ready AspenOpenJets dataset for further public use.
Pietro Cappelli, Gaia Grosso, Marco Letizia, Humberto Reyes-González, Marco Zanetti
Generative models are increasingly central to scientific workflows, yet their systematic use and interpretation require a proper understanding of their limitations through rigorous validation. Classic approaches struggle with scalability, statistical power, or interpretability when applied to high-dimensional data, making it difficult to certify the reliability of these models in realistic, high-dimensional scientific settings. Here, we propose the use of the New Physics Learning Machine (NPLM), a learning-based approach to goodness-of-fit testing inspired by the Neyman--Pearson construction, to test generative networks trained on high-dimensional scientific data. We demonstrate the performance of NPLM for validation in two benchmark cases: generative models trained on mixtures of Gaussian models with increasing dimensionality, and a public end-to-end model, known as FlowSim, developed to generate high-energy physics collision events. We find that NPLM can serve as a powerful validation method while also providing a means to diagnose sub-optimally modeled regions of the data.
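As an illustration of the Neyman--Pearson construction alluded to above, the following minimal NumPy sketch computes a log-likelihood-ratio goodness-of-fit statistic on a toy one-dimensional sample. It is not the NPLM implementation (which learns the alternative density with a neural network); here the alternative is fixed by hand, and all function names are hypothetical.

```python
import numpy as np

def np_test_statistic(sample, log_f_ref, log_f_alt):
    """Goodness-of-fit statistic in the spirit of the Neyman-Pearson
    construction: twice the log-likelihood ratio of an alternative density
    versus the reference, summed over the sample. Values near zero support
    the reference; large positive values flag mismodelling."""
    return 2.0 * float(np.sum(log_f_alt(sample) - log_f_ref(sample)))

def log_gauss(mu, sigma):
    """Log-density of a 1D Gaussian (helper for the toy example)."""
    return lambda x: -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# toy check: data drawn from the 'alternative' yields a large positive statistic
rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=1000)
t = np_test_statistic(data, log_gauss(0.0, 1.0), log_gauss(0.5, 1.0))
```

In NPLM the alternative density is parametrised by a network and fitted to the data before the statistic is evaluated; this sketch only shows the test statistic itself.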
Tommaso Dorigo, Pietro Vischia, Shahzaib Abbas, Tosin Adewumi, Lama Alkhaled, Lorenzo Arsini, Muhammad Awais, Maxim Borisyak, András Bóta, Florian Bury, Sascha Caron, James Carzon, Long Chen, Prakash C. Chhipa, Paul Christakopoulos, Jacopo De Piccoli, Andrea De Vita, Zlatan Dimitrov, Michele Doro, Luigi Favaro, Francesco Ferranti, Santiago Folgueras, Rihab Gargouri, Nicolas R. Gauger, Andrea Giammanco, Christian Glaser, Tobias Golling, João A. Gonçalves, Hui Han, Hamza Hanif, Lukas Heinrich, Yan Chai Hum, Florent Imbert, Andreas Ipp, Michael Kagan, Noor Kainat Syeda, Rukshak Kapoor, Aparup Khatua, Eduard J. Kerkhoven, Jan Kieseler, Tobias Kortus, Ashish Kumar Singh, Marius S. Köppel, Daniel Lanchares, Ann Lee, Pelayo Leguina, Christos Leonidopoulos, Giuseppe Levi, Boying Li, Chang Liu, Marcus Liwicki, Karl Lowenmark, Enrico Lupi, Carlo Mancini-Terracciano, Dominik Maršík, Leonidas Matsakas, Hamam Mokayed, Federico Nardi, Amirhossein Nayebiastaneh, Xuan T. Nguyen, Aitor Orio, Jingjing Pan, Jigar Patel, Carmelo Pellegrino, María Pereira Martínez, Karolos Potamianos, Shah Rukh Qasim, Martin Ravn, Luis Recabarren Vergara, Humberto Reyes-González, Hipolito A. Riveros Guevara, Ippocratis D. Saltas, Rajkumar Saini, Fredrik Sandin, Alexander Schilling, Kylian Schmidt, Nicola Serra, Saqib Shahzad, Foteini Simistira Liwicki, Giles C. Strong, Kristian Tchiorniy, Mia Tosi, Andrey Ustyuzhanin, Xabier Cid Vidal, Kinga A. Wozniak, Mengqing Wu, Zahraa Zaher
The optimization of large experiments in fundamental science, such as detectors for subnuclear physics at particle colliders, shares with the optimization of complex systems for industrial or societal applications the common issue of addressing the inter-relation between parameters describing the hardware used in data production and parameters used to analyse those data. While in many cases this coupling can be ignored -- when the problem can be successfully factored into simpler sub-tasks and the latter addressed serially -- there are situations in which that approach fails to converge to the absolute maximum of expected performance, as it results in a mis-alignment of the optimized hardware and software solutions. In this work we consider a few use cases of interest in fundamental science collected primarily from particle physics and related areas, and a pot-pourri of industrial and societal applications where the matter is similarly of relevance. We discuss the emergence of strong hardware-software coupling in some of those systems, as well as co-design procedures that may be deployed to identify the global maximum of their relevant utility functions. We observe how numerous opportunities exist to advance methods and tools for hardware-software co-design optimization, bridging fundamental science and industry through application- and challenge-driven projects, and shaping the future of scientific experiments and industrial systems.
Jack Y. Araz, Andy Buckley, Benjamin Fuks, Humberto Reyes-Gonzalez, Wolfgang Waltenberger, Sophie L. Williamson, Jamie Yellen
To gain a comprehensive view of what the LHC tells us about physics beyond the Standard Model (BSM), it is crucial that different BSM-sensitive analyses can be combined. But in general, search analyses are not statistically orthogonal, so performing comprehensive combinations requires knowledge of the extent to which the same events co-populate multiple analyses' signal regions. We present a novel, stochastic method to determine this degree of overlap and a graph algorithm to efficiently find the combination of signal regions with no mutual overlap that optimises expected upper limits on BSM-model cross-sections. The gain in exclusion power relative to single-analysis limits is demonstrated with models with varying degrees of complexity, ranging from simplified models to a 19-dimensional supersymmetric model.
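The combination step described above amounts to a maximum-weight independent-set problem on the overlap graph of signal regions. A hypothetical brute-force sketch in Python (the paper's graph algorithm is more efficient; the region names and the additive sensitivity proxy are illustrative only):

```python
from itertools import combinations

def best_orthogonal_combination(weights, overlaps):
    """Brute-force search for the set of signal regions with no mutual
    overlap that maximises the total expected sensitivity (here a simple
    additive proxy). `weights` maps region name -> sensitivity score;
    `overlaps` is a set of frozensets {a, b} marking overlapping pairs."""
    regions = list(weights)
    best, best_score = (), 0.0
    for r in range(1, len(regions) + 1):
        for combo in combinations(regions, r):
            # reject any combination containing an overlapping pair
            if any(frozenset(p) in overlaps for p in combinations(combo, 2)):
                continue
            score = sum(weights[c] for c in combo)
            if score > best_score:
                best, best_score = combo, score
    return set(best), best_score

# toy example: SR-A overlaps SR-B; SR-C is statistically independent of both
weights = {"SR-A": 3.0, "SR-B": 2.5, "SR-C": 1.0}
overlaps = {frozenset({"SR-A", "SR-B"})}
combo, score = best_orthogonal_combination(weights, overlaps)
# combo == {"SR-A", "SR-C"}, score == 4.0
```

In practice the overlap pairs would come from the stochastic event-sharing estimate described in the abstract, and exact enumeration is only feasible for small numbers of signal regions.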
Jesse C. Cresswell, Brendan Leigh Ross, Gabriel Loaiza-Ganem, Humberto Reyes-Gonzalez, Marco Letizia, Anthony L. Caterini
Precision measurements and new physics searches at the Large Hadron Collider require efficient simulations of particle propagation and interactions within the detectors. The most computationally expensive simulations involve calorimeter showers. Advances in deep generative modelling - particularly in the realm of high-dimensional data - have opened the possibility of generating realistic calorimeter showers orders of magnitude more quickly than physics-based simulation. However, the high-dimensional representation of showers belies the relative simplicity and structure of the underlying physical laws. This phenomenon is yet another example of the manifold hypothesis from machine learning, which states that high-dimensional data is supported on low-dimensional manifolds. We thus propose modelling calorimeter showers first by learning their manifold structure, and then estimating the density of data across this manifold. Learning manifold structure reduces the dimensionality of the data, which enables fast training and generation when compared with competing methods.
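The two-step recipe above -- learn the manifold, then estimate the density on it -- can be caricatured with linear ingredients: PCA as a stand-in for the learned manifold and a Gaussian as a stand-in for the density estimator. A hypothetical NumPy sketch (the paper uses nonlinear deep models, not PCA):

```python
import numpy as np

def fit_manifold_gaussian(x, k):
    """Two-step generative sketch: (1) learn a linear 'manifold' via PCA,
    (2) estimate a Gaussian density in the k-dimensional latent space.
    Returns a sampler that maps latent draws back to data space."""
    mu = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mu, full_matrices=False)
    basis = vt[:k]                       # top-k principal directions, shape (k, d)
    z = (x - mu) @ basis.T               # latent coordinates of the data
    z_mu, z_cov = z.mean(axis=0), np.atleast_2d(np.cov(z.T))

    def sample(n, seed=0):
        # draw in the latent space, then map back onto the manifold
        rng = np.random.default_rng(seed)
        zs = rng.multivariate_normal(z_mu, z_cov, size=n)
        return zs @ basis + mu

    return sample
```

Because the density lives in the k-dimensional latent space rather than the full data space, both training and generation are cheap, which is the core of the argument made in the abstract.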
Mark D. Goodsell, Sabine Kraml, Humberto Reyes-González, Sophie L. Williamson
Supersymmetric dark matter has been studied extensively in the context of the MSSM, where gauginos have Majorana masses. Introducing Dirac gaugino masses, we obtain an enriched phenomenology from which considerable differences in, e.g., LHC signatures can be expected. Concretely, in the Minimal Dirac Gaugino Model (MDGSSM) we have an electroweakino sector extended by two extra neutralinos and one extra chargino. The bino- and wino-like states exhibit small mass splittings, leading to the frequent appearance of long-lived particles (LLPs). In this contribution, we delineate the parameter space of the electroweakino sector of the MDGSSM where the lightest neutralino is a viable dark matter candidate that evades current direct detection constraints. We then focus on the allowed regions that contain LLPs and confront them with the corresponding LHC searches. Finally, we discuss the predominant case of long-lived neutralinos, to which no current search is sensitive.
Jack Y. Araz, Andy Buckley, Gregor Kasieczka, Jan Kieseler, Sabine Kraml, Anders Kvellestad, Andre Lessa, Tomasz Procter, Are Raklev, Humberto Reyes-Gonzalez, Krzysztof Rolbiecki, Sezen Sekmen, Gokhan Unel
With the increasing usage of machine learning in high-energy physics analyses, the publication of the trained models in a reusable form has become a crucial question for analysis preservation and reuse. The complexity of these models creates practical issues both for reporting them accurately and for ensuring the stability of their behaviours in different environments and over extended timescales. In this note we discuss the current state of affairs, highlighting specific practical issues and focusing on the most promising technical and strategic approaches to ensure trustworthy analysis preservation. This material originated from discussions in the LHC Reinterpretation Forum and the 2023 PhysTeV workshop at Les Houches.
Andrea Coccaro, Marco Letizia, Humberto Reyes-Gonzalez, Riccardo Torre
Normalizing flows have emerged as a powerful class of generative models, as they not only allow for efficient sampling of complicated target distributions but also deliver density estimation by construction. We propose here an in-depth comparison of coupling and autoregressive flows, both based on symmetric (affine) and non-symmetric (rational quadratic spline) bijectors, considering four different architectures: real-valued non-volume preserving (RealNVP), masked autoregressive flow (MAF), coupling rational quadratic spline (C-RQS), and autoregressive rational quadratic spline (A-RQS). We focus on a set of multimodal target distributions of increasing dimensionality, ranging from 4 to 400. Performance was compared by means of different test statistics for two-sample tests, built from known distance measures: the sliced Wasserstein distance, the dimension-averaged one-dimensional Kolmogorov--Smirnov test, and the Frobenius norm of the difference between correlation matrices. Furthermore, we include estimates of the variance of both the metrics and the trained models. Our results indicate that the A-RQS algorithm stands out both in terms of accuracy and training speed. Nonetheless, all the algorithms are generally able, without too much fine-tuning, to learn complicated distributions with limited training data and in a reasonable time of the order of hours on a Tesla A40 GPU. The only exception is the C-RQS, which takes significantly longer to train, does not always provide good accuracy, and becomes unstable for large dimensionalities. All algorithms were implemented using \textsc{TensorFlow2} and \textsc{TensorFlow Probability} and have been made available on \href{https://github.com/NF4HEP/NormalizingFlowsHD}{GitHub}.
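As an illustration of the first of the test statistics mentioned above, a minimal NumPy estimate of the sliced Wasserstein distance between two equal-size samples might look as follows (a sketch, not the paper's implementation):

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, seed=0):
    """Monte-Carlo estimate of the sliced 1-Wasserstein distance between
    two samples x, y of equal shape (n, d): project both onto random unit
    directions and average the 1D Wasserstein distances, which for equal
    sample sizes reduce to the mean absolute difference of sorted values."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    dirs = rng.normal(size=(n_projections, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for u in dirs:
        px, py = np.sort(x @ u), np.sort(y @ u)
        total += np.mean(np.abs(px - py))
    return total / n_projections
```

Turning this distance into a two-sample test statistic, as done in the paper, additionally requires calibrating its null distribution, e.g. by repeatedly splitting a reference sample in two.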
Joep Geuskens, Nishank Gite, Michael Krämer, Vinicius Mikuni, Alexander Mück, Benjamin Nachman, Humberto Reyes-González
Identifying the origin of high-energy hadronic jets ('jet tagging') has been a critical benchmark problem for machine learning in particle physics. Jets are ubiquitous at colliders and are complex objects that serve as prototypical examples of collections of particles to be categorized. Over the last decade, machine learning-based classifiers have replaced classical observables as the state of the art in jet tagging. Increasingly complex machine learning models are leading to increasingly effective tagger performance. Our goal is to address the question of convergence -- are we getting close to the fundamental limit on jet tagging, or is there still potential for computational, statistical, and physical insights for further improvements? We address this question using state-of-the-art generative models to create a realistic, synthetic dataset with a known jet tagging optimum. Various state-of-the-art taggers are deployed on this dataset, showing that there is a significant gap between their performance and the optimum. Our dataset and software are made public to provide a benchmark task for future developments in jet tagging and other areas of particle physics.
Federico Ambrogi, Juhi Dutta, Jan Heisig, Sabine Kraml, Suchita Kulkarni, Ursula Laa, Andre Lessa, Philipp Neuhuber, Humberto Reyes-González, Wolfgang Waltenberger, Matthias Wolf
SModelS is an automatised tool enabling the fast interpretation of simplified model results from the LHC within any model of new physics respecting a $\mathbb{Z}_2$ symmetry. With version 1.2 we announce several new features. First, previous versions were restricted to missing energy signatures and assumed prompt decays within each decay chain. SModelS v1.2 considers the lifetime of each $\mathbb{Z}_2$-odd particle and appropriately takes into account missing energy, heavy stable charged particle and R-hadron signatures. Second, the current version allows for a combination of signal regions in efficiency map results whenever a covariance matrix is available from the experiment. This is an important step towards fully exploiting the constraining power of efficiency map results. Several other improvements increase the user-friendliness, such as the use of wildcards in the selection of experimental results, and a faster database which can be given as a URL. Finally, smodelsTools provides an interactive plot maker to conveniently visualize the results of a model scan.
Claudius Krause, Michele Faucci Giannelli, Gregor Kasieczka, Benjamin Nachman, Dalila Salamani, David Shih, Anna Zaborowska, Oz Amram, Kerstin Borras, Matthew R. Buckley, Erik Buhmann, Thorsten Buss, Renato Paulo Da Costa Cardoso, Anthony L. Caterini, Nadezda Chernyavskaya, Federico A. G. Corchia, Jesse C. Cresswell, Sascha Diefenbacher, Etienne Dreyer, Vijay Ekambaram, Engin Eren, Florian Ernst, Luigi Favaro, Matteo Franchini, Frank Gaede, Eilam Gross, Shih-Chieh Hsu, Kristina Jaruskova, Benno Käch, Jayant Kalagnanam, Raghav Kansal, Taewoo Kim, Dmitrii Kobylianskii, Anatolii Korol, William Korcari, Dirk Krücker, Katja Krüger, Marco Letizia, Shu Li, Qibin Liu, Xiulong Liu, Gabriel Loaiza-Ganem, Thandikire Madula, Peter McKeown, Isabell-A. Melzer-Pellmann, Vinicius Mikuni, Nam Nguyen, Ayodele Ore, Sofia Palacios Schweitzer, Ian Pang, Kevin Pedro, Tilman Plehn, Witold Pokorski, Huilin Qu, Piyush Raikwar, John A. Raine, Humberto Reyes-Gonzalez, Lorenzo Rinaldi, Brendan Leigh Ross, Moritz A. W. Scham, Simon Schnake, Chase Shimmin, Eli Shlizerman, Nathalie Soybelman, Mudhakar Srivatsa, Kalliopi Tsolaki, Sofia Vallecorsa, Kyongmin Yeo, Rui Zhang
We present the results of the "Fast Calorimeter Simulation Challenge 2022" - the CaloChallenge. We study state-of-the-art generative models on four calorimeter shower datasets of increasing dimensionality, ranging from a few hundred voxels to a few tens of thousands of voxels. The 31 individual submissions span a wide range of current popular generative architectures, including Variational AutoEncoders (VAEs), Generative Adversarial Networks (GANs), Normalizing Flows, Diffusion models, and models based on Conditional Flow Matching. We compare all submissions in terms of the quality of generated calorimeter showers, as well as shower generation time and model size. To assess quality, we use a broad range of different metrics, including differences in 1-dimensional histograms of observables, KPD/FPD scores, AUCs of binary classifiers, and the log-posterior of a multiclass classifier. The results of the CaloChallenge provide the most complete and comprehensive survey of cutting-edge approaches to calorimeter fast simulation to date. In addition, our work provides a uniquely detailed perspective on the important problem of how to evaluate generative models. As such, the results presented here should be applicable for other domains that use generative AI and require fast and faithful generation of samples in a large phase space.
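One of the quality metrics mentioned above, the AUC of a binary classifier separating reference from generated showers, can be computed from the classifier scores via the Mann--Whitney U statistic. A minimal NumPy sketch, illustrative only (ties are not handled, and function names are hypothetical):

```python
import numpy as np

def auc_from_scores(scores_ref, scores_gen):
    """AUC of a classifier score separating reference from generated
    samples, computed via the Mann-Whitney U statistic. AUC close to 0.5
    means the score cannot tell the two samples apart (good generation);
    AUC near 1.0 means the generated sample is easily distinguished."""
    s = np.concatenate([scores_ref, scores_gen])
    ranks = np.argsort(np.argsort(s)) + 1       # 1-based ranks (no tie handling)
    r_gen = ranks[len(scores_ref):].sum()       # rank sum of the generated sample
    n0, n1 = len(scores_ref), len(scores_gen)
    u = r_gen - n1 * (n1 + 1) / 2
    return u / (n0 * n1)
```

In the challenge setting, the scores would come from a classifier trained to separate real and generated showers; here any one-dimensional score works.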
Jon Butterworth, Sabine Kraml, Harrison Prosper, Andy Buckley, Louie Corpe, Cristinel Diaconu, Mark Goodsell, Philippe Gras, Martin Habedank, Clemens Lange, Kati Lassila-Perini, André Lessa, Rakhi Mahbubani, Judita Mamužić, Zach Marshall, Thomas McCauley, Humberto Reyes-Gonzalez, Krzysztof Rolbiecki, Sezen Sekmen, Giordon Stark, Graeme Watt, Jonas Würzinger, Shehu AbdusSalam, Aytul Adiguzel, Amine Ahriche, Ben Allanach, Mohammad M. Altakach, Jack Y. Araz, Alexandre Arbey, Saiyad Ashanujjaman, Volker Austrup, Emanuele Bagnaschi, Sumit Banik, Csaba Balazs, Daniele Barducci, Philip Bechtle, Samuel Bein, Nicolas Berger, Tisa Biswas, Fawzi Boudjema, Jamie Boyd, Carsten Burgard, Jackson Burzynski, Jordan Byers, Giacomo Cacciapaglia, Cécile Caillol, Orhan Cakir, Christopher Chang, Gang Chen, Andrea Coccaro, Yara do Amaral Coutinho, Andreas Crivellin, Leo Constantin, Giovanna Cottin, Hridoy Debnath, Mehmet Demirci, Juhi Dutta, Joe Egan, Carlos Erice Cid, Farida Fassi, Matthew Feickert, Arnaud Ferrari, Pavel Fileviez Perez, Dillon S. Fitzgerald, Roberto Franceschini, Benjamin Fuks, Lorenz Gärtner, Kirtiman Ghosh, Andrea Giammanco, Alejandro Gomez Espinosa, Letícia M. Guedes, Giovanni Guerrieri, Christian Gütschow, Abdelhamid Haddad, Mahsana Haleem, Hassane Hamdaoui, Sven Heinemeyer, Lukas Heinrich, Ben Hodkinson, Gabriela Hoff, Cyril Hugonie, Sihyun Jeon, Adil Jueid, Deepak Kar, Anna Kaczmarska, Venus Keus, Michael Klasen, Kyoungchul Kong, Joachim Kopp, Michael Krämer, Manuel Kunkel, Bertrand Laforge, Theodota Lagouri, Eric Lancon, Peilian Li, Gabriela Lima Lichtenstein, Yang Liu, Steven Lowette, Jayita Lahiri, Siddharth Prasad Maharathy, Farvah Mahmoudi, Vasiliki A. Mitsou, Sanjoy Mandal, Michelangelo Mangano, Kentarou Mawatari, Peter Meinzinger, Manimala Mitra, Mojtaba Mohammadi Najafabadi, Sahana Narasimha, Siavash Neshatpour, Jacinto P. Neto, Mark Neubauer, Mohammad Nourbakhsh, Giacomo Ortona, Rojalin Padhan, Orlando Panella, Timothée Pascal, Brian Petersen, Werner Porod, Farinaldo S. Queiroz, Shakeel Ur Rahaman, Are Raklev, Hossein Rashidi, Patricia Rebello Teles, Federico Leo Redi, Jürgen Reuter, Tania Robens, Abhishek Roy, Subham Saha, Ahmetcan Sansar, Kadir Saygin, Nikita Schmal, Jeffrey Shahinian, Sukanya Sinha, Ricardo C. Silva, Tim Smith, Tibor Šimko, Andrzej Siodmok, Ana M. Teixeira, Tamara Vázquez Schröder, Carlos Vázquez Sierra, Yoxara Villamizar, Wolfgang Waltenberger, Peng Wang, Martin White, Kimiko Yamashita, Ekin Yoruk, Xuai Zhuang
Waleed Abdallah, Shehu AbdusSalam, Azar Ahmadov, Amine Ahriche, Gaël Alguero, Benjamin C. Allanach, Jack Y. Araz, Alexandre Arbey, Chiara Arina, Peter Athron, Emanuele Bagnaschi, Yang Bai, Michael J. Baker, Csaba Balazs, Daniele Barducci, Philip Bechtle, Aoife Bharucha, Andy Buckley, Jonathan Butterworth, Haiying Cai, Claudio Campagnari, Cari Cesarotti, Marcin Chrzaszcz, Andrea Coccaro, Eric Conte, Jonathan M. Cornell, Louie Dartmoor Corpe, Matthias Danninger, Luc Darmé, Aldo Deandrea, Nishita Desai, Barry Dillon, Caterina Doglioni, Juhi Dutta, John R. Ellis, Sebastian Ellis, Farida Fassi, Matthew Feickert, Nicolas Fernandez, Sylvain Fichet, Jernej F. Kamenik, Thomas Flacke, Benjamin Fuks, Achim Geiser, Marie-Hélène Genest, Akshay Ghalsasi, Tomas Gonzalo, Mark Goodsell, Stefania Gori, Philippe Gras, Admir Greljo, Diego Guadagnoli, Sven Heinemeyer, Lukas A. Heinrich, Jan Heisig, Deog Ki Hong, Tetiana Hryn'ova, Katri Huitu, Philip Ilten, Ahmed Ismail, Adil Jueid, Felix Kahlhoefer, Jan Kalinowski, Deepak Kar, Yevgeny Kats, Charanjit K. Khosa, Valeri Khoze, Tobias Klingl, Pyungwon Ko, Kyoungchul Kong, Wojciech Kotlarski, Michael Krämer, Sabine Kraml, Suchita Kulkarni, Anders Kvellestad, Clemens Lange, Kati Lassila-Perini, Seung J. Lee, Andre Lessa, Zhen Liu, Lara Lloret Iglesias, Jeanette M. Lorenz, Danika MacDonell, Farvah Mahmoudi, Judita Mamuzic, Andrea C. Marini, Pete Markowitz, Pablo Martinez Ruiz del Arbol, David Miller, Vasiliki Mitsou, Stefano Moretti, Marco Nardecchia, Siavash Neshatpour, Dao Thi Nhung, Per Osland, Patrick H. Owen, Orlando Panella, Alexander Pankov, Myeonghun Park, Werner Porod, Darren Price, Harrison Prosper, Are Raklev, Jürgen Reuter, Humberto Reyes-González, Thomas Rizzo, Tania Robens, Juan Rojo, Janusz A. Rosiek, Oleg Ruchayskiy, Veronica Sanz, Kai Schmidt-Hoberg, Pat Scott, Sezen Sekmen, Dipan Sengupta, Elizabeth Sexton-Kennedy, Hua-Sheng Shao, Seodong Shin, Luca Silvestrini, Ritesh Singh, Sukanya Sinha, Jory Sonneveld, Yotam Soreq, Giordon H. Stark, Tim Stefaniak, Jesse Thaler, Riccardo Torre, Emilio Torrente-Lujan, Gokhan Unel, Natascia Vignaroli, Wolfgang Waltenberger, Nicholas Wardle, Graeme Watt, Georg Weiglein, Martin J. White, Sophie L. Williamson, Jonas Wittbrodt, Lei Wu, Stefan Wunsch, Tevong You, Yang Zhang, José Zurita