Bingyi Liu, Jian Teng, Hongfei Xue, Enshu Wang, Chuanhui Zhu, Pu Wang, Libing Wu
Collaborative perception significantly enhances individual vehicle perception performance through the exchange of sensory information among agents. However, real-world deployment faces challenges due to bandwidth constraints and inevitable calibration errors during information exchange. To address these issues, we propose mmCooper, a novel multi-agent, multi-stage, communication-efficient, and collaboration-robust cooperative perception framework. Our framework leverages a multi-stage collaboration strategy that dynamically and adaptively balances intermediate- and late-stage information to share among agents, enhancing perceptual performance while maintaining communication efficiency. To support robust collaboration despite potential misalignments and calibration errors, our framework prevents misleading low-confidence sensing information from being transmitted and refines the detection results received from collaborators to improve accuracy. Extensive evaluation results on both real-world and simulated datasets demonstrate the effectiveness of the mmCooper framework and its components.
Jian Teng, Sungwon La, Jesse T. Ault
A parallel-plate rotational rheometer measures the viscosity of a fluid by rotating the top plate relative to the bottom plate in order to induce a shear on the fluid and measuring the torques and forces that result as a function of the induced rotation rate. Manufacturing imperfections can often lead to unintentional misalignment of the plates of the rheometer, where the top and bottom plates are not perfectly parallel, and this misalignment can affect the fluid dynamics inside the rheometer. This study examines the effect that misalignment has on the viscosity measurements of Newtonian fluids in the limit of small rheometer gap heights. A theoretical model for the behavior of a general Newtonian fluid in a misaligned rheometer with a small gap height is derived using perturbation expansions. The theoretical results show that at small gap heights, misalignment can produce additional secondary velocity components and pressures in the fluid, which affect the forces and moments in the rheometer. In such cases at small Reynolds numbers, the dominant forces and moments acting on the top plate of the rheometer are the viscous force in the direction parallel to the tilt axis, the pressure moment in the direction perpendicular to the tilt axis and in the cross-sectional plane, and the viscous moment in the direction along the height of the rheometer. These forces and moments on the top plate were found to increase as the misalignment tilt angle increases, leading to the rheometer underestimating the viscosity of the fluid by a greater magnitude with larger tilt angles. Three-dimensional numerical simulations validate the theoretical predictions.
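For reference, the ideal aligned-plate relations that such a measurement inverts are standard results (stated here for context, not taken from this paper): with plate radius $R$, gap height $h$, rotation rate $\Omega$, and Newtonian viscosity $\mu$, the shear rate and plate torque are

```latex
% Ideal (aligned) parallel-plate rheometry, Newtonian fluid:
\dot{\gamma}(r) = \frac{\Omega r}{h}, \qquad
M = \int_0^R \mu \,\frac{\Omega r}{h}\, 2\pi r^2 \, dr
  = \frac{\pi \mu \Omega R^4}{2h},
\qquad \mu = \frac{2 h M}{\pi \Omega R^4}.
```

Misalignment perturbs the flow away from this simple shear, which is why the viscosity inferred from $\mu = 2hM/(\pi\Omega R^4)$ is increasingly underestimated at larger tilt angles.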
Jian Teng, Bhargav Rallabandi, Jesse T. Ault
Solute-surface interactions have garnered considerable interest in recent years as a novel control mechanism for driving unique fluid dynamics and particle transport with potential applications in fields such as biomedicine, the development of microfluidic devices, and enhanced oil recovery. In this study, we discuss dispersion induced by diffusioosmotic motion near a charged wall in the presence of a solute concentration gradient. Here, we introduce a plug of salt with a Gaussian distribution at the center of a channel with no background flow. As the solute diffuses, the concentration gradient drives a diffusioosmotic slip flow at the walls, which results in a recirculating flow in the channel; this, in turn, drives an advective flux of the solute concentration. This effect leads to cross-stream diffusion of the solute, altering the effective diffusivity of the solute as it diffuses along the channel. We derive theoretical predictions for the solute dynamics using a multiple-timescale analysis to quantify the dispersion driven by the solute-surface interactions. Furthermore, we derive a cross-sectionally averaged concentration equation with an effective diffusivity analogous to that from Taylor dispersion. In addition, we use numerical simulations to validate our theoretical predictions.
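The Taylor dispersion analogy can be made concrete with the classical Taylor-Aris result (a standard textbook formula, quoted here for context rather than from this paper): for pressure-driven flow in a circular tube of radius $a$ with mean speed $U$ and molecular diffusivity $D$,

```latex
D_{\mathrm{eff}} = D\left(1 + \frac{\mathrm{Pe}^2}{48}\right),
\qquad \mathrm{Pe} = \frac{U a}{D}.
```

In the present problem, the diffusioosmotically driven recirculation plays the role of the imposed shear flow, with the effective diffusivity set by slip-driven cross-stream transport rather than a pressure gradient.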
Sheng Xu, Enshu Wang, Hongfei Xue, Jian Teng, Bingyi Liu, Yi Zhu, Pu Wang, Libing Wu, Chunming Qiao
Collaborative perception allows connected vehicles to overcome occlusions and limited viewpoints by sharing sensory information. However, existing approaches struggle to achieve high accuracy under strict bandwidth constraints and remain highly vulnerable to random transmission packet loss. We introduce QPoint2Comm, a quantized point-cloud communication framework that dramatically reduces bandwidth while preserving high-fidelity 3D information. Instead of transmitting intermediate features, QPoint2Comm directly communicates quantized point-cloud indices using a shared codebook, enabling efficient reconstruction with lower bandwidth than feature-based methods. To ensure robustness to possible communication packet loss, we employ a masked training strategy that simulates random packet loss, allowing the model to maintain strong performance even under severe transmission failures. In addition, a cascade attention fusion module is proposed to enhance multi-vehicle information integration. Extensive experiments on both simulated and real-world datasets demonstrate that QPoint2Comm sets a new state of the art in accuracy, communication efficiency, and resilience to packet loss.
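The shared-codebook idea can be illustrated with a minimal vector-quantization sketch (all names here are hypothetical; this is not the paper's actual model): the sender replaces each point by the index of its nearest codebook entry, and the receiver reconstructs by a table lookup, so only small integer indices cross the channel.

```python
import numpy as np

def vq_encode(points, codebook):
    """Map each 3D point to the index of its nearest codebook entry."""
    # (N, 1, 3) - (1, K, 3) -> (N, K) pairwise squared distances
    d2 = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)  # one small integer index per point

def vq_decode(indices, codebook):
    """Reconstruct approximate points from transmitted indices."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.uniform(-50.0, 50.0, size=(256, 3))  # shared by all agents
points = rng.uniform(-50.0, 50.0, size=(1000, 3))   # raw LiDAR-like points

idx = vq_encode(points, codebook)   # only these indices need to be sent
recon = vq_decode(idx, codebook)    # receiver-side lookup
```

With a 256-entry codebook each point costs one byte instead of three floats, which conveys the bandwidth intuition even though the actual framework operates on learned quantized representations.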
Steven R. Jackson, Teng Jian Khoo, Frederick W. Strauch
Jun 14, 2012 · quant-ph
Quantum walks have been shown to have impressive transport properties compared to classical random walks. However, imperfections in the quantum walk algorithm can destroy any quantum mechanical speed-up due to Anderson localization. We numerically study the effect of static disorder on a quantum walk on the glued trees graph. For small disorder, we find that the dominant effect is a type of quantum decay, and not quantum localization. For intermediate disorder, there is a crossover to diffusive transport, while a localization transition is observed at large disorder, in agreement with Anderson localization on the Cayley tree.
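As a rough illustration of the disorder regimes studied here, a continuous-time quantum walk with static diagonal disorder can be simulated on a simple line graph (a deliberate simplification of the glued-trees graph; the function name and parameters are illustrative only). Strong disorder suppresses the ballistic spread of the walker, the hallmark of Anderson localization.

```python
import numpy as np

def ctqw_probabilities(n_sites, t, disorder, seed=0):
    """Continuous-time quantum walk on a line with static diagonal disorder.

    H = adjacency matrix + disorder * diag(uniform(-1, 1)).
    Returns site occupation probabilities at time t, starting mid-chain.
    """
    rng = np.random.default_rng(seed)
    H = np.diag(np.ones(n_sites - 1), 1) + np.diag(np.ones(n_sites - 1), -1)
    H += disorder * np.diag(rng.uniform(-1.0, 1.0, n_sites))
    evals, evecs = np.linalg.eigh(H)            # H is real symmetric
    psi0 = np.zeros(n_sites, dtype=complex)
    psi0[n_sites // 2] = 1.0                    # walker starts at the centre
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
    return np.abs(psi_t) ** 2

# Compare the position spread with and without disorder:
sites = np.arange(101)
for W in (0.0, 8.0):
    p = ctqw_probabilities(101, t=20.0, disorder=W)
    mean = (sites * p).sum()
    spread = np.sqrt(((sites - mean) ** 2 * p).sum())
    print(f"disorder={W}: spread={spread:.1f}")
```

The clean walk spreads ballistically across tens of sites, while the strongly disordered walk remains pinned near its starting site.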
Alan J. Barr, Teng Jian Khoo, Partha Konar, Kyoungchul Kong, Christopher G. Lester, Konstantin T. Matchev, Myeonghun Park
We revisit the process of transversification and agglomeration of particle momenta that are often performed in analyses at hadron colliders, and show that many of the existing mass-measurement variables proposed for hadron colliders are far more closely related to each other than is widely appreciated, and indeed can all be viewed as a common mass bound specialized for a variety of purposes.
Anthony Badea, William James Fawcett, John Huth, Teng Jian Khoo, Riccardo Poggi, Lawrence Lee
High-multiplicity signatures at particle colliders can arise in Standard Model processes and beyond. With such signatures, difficulties often arise from the large dimensionality of the kinematic space. For final states containing a single type of particle signature, this results in a combinatorial problem that hides underlying kinematic information. We explore using a neural network that includes a Lorentz Layer to extract high-dimensional correlations. We use the case of squark decays in $R$-Parity-violating Supersymmetry as a benchmark, comparing the performance to that of classical methods. With this approach, we demonstrate significant improvement over traditional methods.
Jieru Ren, Bubo Ma, Lirong Liu, Wenqing Wei, Benzheng Chen, Shizheng Zhang, Hao Xu, Zhongmin Hu, Fangfang Li, Xing Wang, Shuai Yin, Jianhua Feng, Xianming Zhou, Yifang Gao, Yuan Li, Xiaohua Shi, Jianxing Li, Xueguang Ren, Zhongfeng Xu, Zhigang Deng, Wei Qi, Shaoyi Wang, Quanping Fan, Bo Cui, Weiwu Wang, Zongqiang Yuan, Jian Teng, Yuchi Wu, Zhurong Cao, Zongqing Zhao, Yuqiu Gu, Leifeng Cao, Shaoping Zhu, Rui Cheng, Yu Lei, Zhao Wang, Zexian Zhou, Guoqing Xiao, Hongwei Zhao, Dieter H. H. Hoffmann, Weimin Zhou, Yongtao Zhao
We report on charge state measurements of laser-accelerated carbon ions in the energy range of several MeV penetrating a dense partially ionized plasma. The plasma was generated by irradiation of a foam target with laser-induced hohlraum radiation in the soft X-ray regime. We used tri-cellulose acetate (C$_{9}$H$_{16}$O$_{8}$) foam of 2 mg/cm$^{3}$ density and 1-mm interaction length as the target material. This kind of plasma is advantageous for high-precision measurements, due to its good uniformity and long lifetime compared to the ion pulse length and the interaction duration. The plasma parameters were diagnosed to be T$_{e}$=17 eV and n$_{e}$=4 $\times$ 10$^{20}$ cm$^{-3}$. The average charge states passing through the plasma were observed to be higher than those predicted by the commonly used semiempirical formula. By solving the rate equations, we attribute the enhancement to target density effects, which increase the ionization rates on one hand and reduce the electron capture rates on the other. In previous measurements with partially ionized plasmas, from gas discharges and z-pinches to direct laser irradiation, no target density effects were demonstrated. For the first time, we were able to experimentally show that target density effects start to play a significant role in plasmas near the critical density of Nd-glass laser radiation. This finding is important for heavy-ion-beam-driven high energy density physics and fast ignition.
Bo Zhang, Zhi-meng Zhang, Zhi-gang Deng, Wei Hong, Jian Teng, Shu-kai He, Wei-min Zhou, Yu-qiu Gu
Nonlinear Compton scattering (NCS) and the nonlinear Breit-Wheeler (NBW) process are strongly multi-photon, highly nonlinear processes. In ultra-intense lasers (normalized field amplitude $a_0 \gg 1$), the radiation formation length is much shorter than a laser period, and a single NCS/NBW event cannot be described as the scattering of plane-wave-dressed electrons with $γ$ photons, because the particles effectively experience a locally constant crossed field. However, present theories in constant crossed fields struggle to capture some important quantum features due to divergence problems, such as the number of laser photons involved, the instantaneous angular distribution, and the detailed spectrum. As an alternative, the present understanding of single NCS/NBW in ultra-intense lasers relies on several classical and semi-quantum ideas such as forward emission, recoil reaction, and spectrum cutoff. We investigated multi-photon effects on NCS/NBW in ultra-intense lasers by extracting the number of laser photons involved in a single process from the formulae of existing theories. New features of single NCS in ultra-intense lasers are deduced, including a fixed emission angle relative to the instantaneous electron momentum, instantaneous deflection of the electron, and disappearance of the spectrum cutoff. Similar features of single NBW in ultra-intense lasers are also obtained, including non-vanishing emission angles relative to the instantaneous $γ$-photon momentum, disappearance of the spectrum cutoff, and appearance of a spectrum lower limit. Simulations show that the corresponding signals of multi-photon effects are significant for $10$ PW-scale and stronger lasers.
Bubo Ma, Jieru Ren, Lirong Liu, Wenqing Wei, Benzheng Chen, Shizheng Zhang, Hao Xu, Zhongmin Hu, Fangfang Li, Xing Wang, Shuai Yin, Jianhua Feng, Xianming Zhou, Yifang Gao, Yuan Li, Xiaohua Shi, Jianxing Li, Xueguang Ren, Zhongfeng Xu, Zhigang Deng, Wei Qi, Shaoyi Wang, Quanping Fan, Bo Cui, Weiwu Wang, Zongqiang Yuan, Jian Teng, Yuchi Wu, Zhurong Cao, Zongqing Zhao, Yuqiu Gu, Leifeng Cao, Shaoping Zhu, Rui Cheng, Yu Lei, Zhao Wang, Zexian Zhou, Guoqing Xiao, Hongwei Zhao, Dieter H. H. Hoffmann, Weimin Zhou, Yongtao Zhao
The charge equilibration of laser-accelerated carbon ion beams in a 2 mg/cm$^3$ foam target was investigated experimentally. The ions were generated through the target normal sheath acceleration mechanism in a laser-foil interaction scheme. This allows the equilibrium charge state to be obtained over a wide energy range near the Bragg peak within a single shot. By using foam, charge equilibration in the density regime between gas and solid state was measured experimentally for the first time. It was found that theoretical predictions with tabulated cross-section data for gas targets greatly underestimate the charge states. The experimental data are in close agreement with both semi-empirical formulae and rate-equation predictions based on ion-solid interactions. The important role of target density effects, which increase the ionization probability and decrease the electron capture probability through frequent multi-collisions in the foam, is demonstrated. Double-electron processes are shown to have little influence on the average charge states. The findings are essential for high energy density physics research, where foams are widely used, and have impacts on a broad range of applications in medical, biological, and material fields. The method also provides a new approach to investigating the interaction mechanisms of swift heavy ions in matter by taking advantage of the short-pulse, wide-energy-range ions from laser acceleration.
Zhiyao Zhang, Zhijie Li, Yunpeng Wang, Huiyu Yang, Wenhui Peng, Jian Teng, Jianchun Wang
The accurate and fast prediction of long-term dynamics of turbulence presents a significant challenge for both traditional numerical simulations and machine learning methods. In recent years, the emergence of neural operators has provided a promising approach to address this issue. The implicit U-Net enhanced Fourier neural operator (IU-FNO) has successfully demonstrated long-term stable predictions for three-dimensional incompressible turbulence. In this study, we extend this method to the three-dimensional chemically reacting compressible turbulence. Numerical results show that the IU-FNO model predicts flow dynamics significantly faster than the traditional dynamic Smagorinsky model (DSM) used in large eddy simulation (LES). In terms of prediction accuracy, the IU-FNO framework outperforms the traditional DSM in predicting the energy spectra of velocity, temperature, and density, the probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of temperature. Therefore, the IU-FNO represents a highly promising approach for predicting chemically reacting compressible turbulence.
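The core building block of Fourier neural operators such as IU-FNO is the spectral convolution: transform the field to Fourier space, apply learned weights to a truncated set of low-frequency modes, and transform back. A minimal 1D NumPy sketch follows (illustrative only, not the IU-FNO architecture; identity weights stand in for learned ones):

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One Fourier-layer core: reweight the lowest n_modes Fourier modes.

    u:       real signal of shape (n,)
    weights: complex array of shape (n_modes,), the (learned) multipliers
    """
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # act only on low modes
    return np.fft.irfft(out_hat, n=len(u))

# Low-pass a noisy signal by keeping only 8 low-frequency modes.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(3 * x) + 0.3 * rng.standard_normal(256)
w = np.ones(8, dtype=complex)   # identity weights for illustration
u_filtered = spectral_conv_1d(u, w, n_modes=8)
```

In a full FNO, many such layers (with trainable complex weights and a pointwise path) are stacked; the mode truncation is what gives the operator resolution independence and efficiency.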
Wen-Qing Wei, Shi-Zheng Zhang, Zhi-Gang Deng, Wei Qi, Hao Xu, Li-Rong Liu, Jia-Lin Zhang, Fang-Fang Li, Xing Xu, Zhong-Min Hu, Ben-Zheng Chen, Bu-Bo Ma, Jian-Xing Li, Xue-Guang Ren, Zhong-Feng Xu, Dieter H. H. Hoffmann, Quan-Ping Fan, Wei-Wu Wang, Shao-Yi Wang, Jian Teng, Bo Cui, Feng Lu, Lei Yang, Yu-Qiu Gu, Zong-Qing Zhao, Rui Cheng, Zhao Wang, Yu Lei, Guo-Qing Xiao, Hong-Wei Zhao, Bing Liu, Guan-Chao Zhao, Min-Sheng Liu, Hua-Sheng Xie, Lei-Feng Cao, Jie-Ru Ren, Wei-Min Zhou, Yong-Tao Zhao
A novel intense beam-driven scheme for high yield of the tri-alpha reaction 11B(p,α)2α was investigated. We used a foam target made of cellulose triacetate (TAC, C_9H_{16}O_8) doped with boron. It was then heated volumetrically by soft X-ray radiation from a laser heated hohlraum and turned into a homogenous, and long living plasma. We employed a picosecond laser pulse to generate a high-intensity energetic proton beam via the well-known Target Normal Sheath Acceleration (TNSA) mechanism. We observed up to 10^{10}/sr α particles per laser shot. This constitutes presently the highest yield value normalized to the laser energy on target. The measured fusion yield per proton exceeds the classical expectation of beam-target reactions by up to four orders of magnitude under high proton intensities. This enhancement is attributed to the strong electric fields and nonequilibrium thermonuclear fusion reactions as a result of the new method. Our approach shows opportunities to pursue ignition of aneutronic fusion.
T. J. Khoo, A. Reinsvold Hall, N. Skidmore, S. Alderweireldt, J. Anders, C. Burr, W. Buttinger, P. David, L. Gouskos, L. Gray, S. Hageboeck, A. Krasznahorkay, P. Laycock, A. Lister, Z. Marshall, A. B. Meyer, T. Novak, S. Rappoccio, M. Ritter, E. Rodrigues, J. Rumsevicius, L. Sexton-Kennedy, N. Smith, G. A. Stewart, S. Wertz
In High Energy Physics (HEP), analysis metadata comes in many forms -- from theoretical cross-sections, to calibration corrections, to details about file processing. Correctly applying metadata is a crucial and often time-consuming step in an analysis, but designing analysis metadata systems has historically received little direct attention. Among other considerations, an ideal metadata tool should be easy to use by new analysers, should scale to large data volumes and diverse processing paradigms, and should enable future analysis reinterpretation. This document, which is the product of community discussions organised by the HEP Software Foundation, categorises types of metadata by scope and format and gives examples of current metadata solutions. Important design considerations for metadata systems, including sociological factors, analysis preservation efforts, and technical factors, are discussed. A list of best practices and technical requirements for future analysis metadata systems is presented. These best practices could guide the development of a future cross-experimental effort for analysis metadata tools.
A. J. Barr, T. J. Khoo, P. Konar, K. Kong, C. G. Lester, K. T. Matchev, M. Park
This paper seeks to demonstrate that many of the existing mass-measurement variables proposed for hadron colliders (mT, mEff, mT2, missing pT, hT, rootsHatMin, etc.) are far more closely related to each other than is widely appreciated, and indeed can all be viewed as a common mass bound specialized for a variety of purposes. A consequence of this is that one may understand better the strengths and weaknesses of each variable, and the circumstances in which each can be used to best effect. In order to achieve this, we find it necessary first to revisit the seemingly empty and infertile wilderness populated by the subscript "T" (as in pT) in order to remind ourselves what this process of transversification actually means. We note that, far from being simple, transversification can mean quite different things to different people. Those readers who manage to battle through the barrage of transverse notation distinguishing mass-preserving projections from velocity-preserving projections, and `early projection' from `late projection', will find their efforts rewarded towards the end of the paper with (i) a better understanding of how collider mass variables fit together, (ii) an appreciation of how these variables could be generalized to search for things more complicated than supersymmetry, (iii) an aversion to thoughtless or naive use of the so-called `transverse' methods of any of the popular computer Lorentz-vector libraries, and (iv) the care to be explicit in their subsequent papers about which of the 61 identified variants of the `transverse mass' they are employing.
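Two of the simplest members of the family of variables compared here have compact textbook definitions, sketched below (a hedged illustration using the standard formulas, not code from the paper): the transverse mass of two massless objects and the effective mass.

```python
import math

def m_t(pt1, phi1, pt2, phi2):
    """Transverse mass of two massless objects (e.g. lepton + missing pT):
    mT^2 = 2 pT1 pT2 (1 - cos(dphi))."""
    return math.sqrt(2.0 * pt1 * pt2 * (1.0 - math.cos(phi1 - phi2)))

def m_eff(object_pts, met):
    """Effective mass: scalar sum of object pT plus missing pT."""
    return sum(object_pts) + met

# W -> l nu style example: 40 GeV lepton back-to-back with 40 GeV MET.
mt = m_t(40.0, 0.0, 40.0, math.pi)          # endpoint configuration
meff = m_eff([120.0, 80.0, 45.0], 100.0)    # three jets plus MET
```

In the unified picture advocated by the paper, both arise as special cases of a common mass bound under particular transversification and agglomeration choices.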
Michael Gerbush, Teng Jian Khoo, Daniel Phalen, Aaron Pierce, David Tucker-Smith
Color-octet scalars, if present at the TeV scale, will be produced in abundance at the LHC. We discuss in some detail the phenomenology of scalars in the (8,2)_{1/2} representation, recently identified by Manohar and Wise as an addition to the standard-model Higgs sector consistent with the principle of minimal flavor violation. Couplings of this multiplet to the Higgs lift the mass degeneracy among its states, possibly allowing for two-body decays of a heavier colored scalar to a lighter one and a gauge boson. We perform a renormalization group analysis of these couplings and find that limits from Tevatron searches leave little room for these decays. This fact, and the assumption of minimal flavor violation, lead us to study the case where the octets decay to the heaviest kinematically accessible fermion pairs. Focusing on pair-production events leading to (t t-bar t t-bar), (b b-bar b b-bar), and (b b-bar t t-bar) final states, we find that discovery at the LHC should be possible up to masses exceeding 1 TeV.
Mohamed Aly, Jackson Burzynski, Bryan Cardwell, Daniel C. Craik, Tal van Daalen, Tomas Dado, Ayanabha Das, Antonio Delgado Peris, Caterina Doglioni, Peter Elmer, Engin Eren, Martin B. Eriksen, Jonas Eschle, Giulio Eulisse, Conor Fitzpatrick, José Flix Molina, Alessandra Forti, Ben Galewsky, Sean Gasiorowski, Aman Goel, Loukas Gouskos, Enrico Guiraud, Kanhaiya Gupta, Stephan Hageboeck, Allison Reinsvold Hall, Lukas Heinrich, Alexander Held, José M. Hernández, Michel Hernández Villanueva, Julius Hrivnac, Michel Jouvin, Teng Jian Khoo, Luke Kreczko, Nils Krumnack, Thomas Kuhr, Baidyanath Kundu, Eric Lancon, Johannes Lange, Paul Laycock, Kilian Lieret, Nicholas J. Manganelli, Pere Mato Villa, Andrzej Novak, Antonio Perez-Calero Yzquierdo, Jim Pivarski, Mason Proffitt, Jonas Rembser, Eduardo Rodrigues, Grigori Rybkin, Jana Schaarschmidt, Henry F. Schreiner, Markus Schulz, Andrea Sciabà, Sezen Sekmen, Elizabeth Sexton-Kennedy, Oksana Shadura, Tibor Simko, Nathan Simpson, Jaydip Singh, Nicola Skidmore, Nicholas Smith, Michael Sokoloff, Graeme A. Stewart, Giles C. Strong, Gokhan Unel, Vassil Vassilev, Mark Waterlaat, Gordon Watts, Efe Yazgan
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up HEP analysis to meet the needs of HL-LHC and DUNE, as well as the very pressing needs of LHC Run 3 analysis. The workshop was themed around six particular topics, which were felt to capture key questions, opportunities and challenges. Each topic arranged a plenary session introduction, often with speakers summarising the state-of-the art and the next steps for analysis. This was then followed by parallel sessions, which were much more discussion focused, and where attendees could grapple with the challenges and propose solutions that could be tried. Where there was significant overlap between topics, a joint discussion between them was arranged. In the weeks following the workshop the session conveners wrote this document, which is a summary of the main discussions, the key points raised and the conclusions and outcomes. The document was circulated amongst the participants for comments before being finalised here.
B. C. Allanach, T. J. Khoo, C. G. Lester, S. L. Williams
Recent ATLAS data significantly extend the exclusion limits for supersymmetric particles. We examine the impact of such data on global fits of the constrained minimal supersymmetric standard model (CMSSM) to indirect and cosmological data. We calculate the likelihood map of the ATLAS search, taking into account systematic errors on the signal and on the background. We validate our calculation against the ATLAS determination of 95% confidence level exclusion contours. A previous CMSSM global fit is then re-weighted by the likelihood map, which takes a bite at the high probability density region of the global fit, pushing scalar and gaugino masses up.
B. C. Allanach, T. J. Khoo, K. Sakurai
Recent LHC data significantly extend the exclusion limits for supersymmetric particles, particularly in the jets plus missing transverse momentum channels. The most recent such data have so far been interpreted by the experiment in only two different supersymmetry breaking models: the constrained minimal supersymmetric standard model (CMSSM) and a simplified model with only squarks and gluinos and massless neutralinos. We compare kinematical distributions of supersymmetric signal events predicted by the CMSSM and anomaly mediated supersymmetry breaking (mAMSB) before calculating exclusion limits in mAMSB. We obtain a lower limit of 900 GeV on squark and gluino masses at the 95% confidence level for the equal mass limit, tan(beta)=10 and mu>0.
William Balunas, Donatella Cavalli, Teng Jian Khoo, Matthew Klein, Peter Loch, Federica Piazza, Caterina Pizio, Silvia Resconi, Douglas Schaefer, Russell Smith, Sarah Williams
Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of "invisible" objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.
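The underlying definition, missing transverse momentum as the negative vector sum of the visible objects' transverse momenta, can be sketched as follows (illustrative only; the reconstruction described in the paper involves object selection, calibration, and pileup suppression far beyond this):

```python
import math

def missing_transverse_momentum(objects):
    """MET as the negative vector sum of visible transverse momenta.

    objects: list of (pt, phi) for all selected reconstructed objects.
    Returns (met, met_phi).
    """
    px = -sum(pt * math.cos(phi) for pt, phi in objects)
    py = -sum(pt * math.sin(phi) for pt, phi in objects)
    return math.hypot(px, py), math.atan2(py, px)

# Two back-to-back objects of equal pT balance each other: MET is ~zero.
met, _ = missing_transverse_momentum([(50.0, 0.0), (50.0, math.pi)])
```

The ambiguity the paper addresses is precisely which candidates enter this sum and with what calibration, which is why an efficient encoding of per-object information matters.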
HEP Software Foundation, :, Thea Aarrestad, Simone Amoroso, Markus Julian Atkinson, Joshua Bendavid, Tommaso Boccali, Andrea Bocci, Andy Buckley, Matteo Cacciari, Paolo Calafiura, Philippe Canal, Federico Carminati, Taylor Childers, Vitaliano Ciulli, Gloria Corti, Davide Costanzo, Justin Gage Dezoort, Caterina Doglioni, Javier Mauricio Duarte, Agnieszka Dziurda, Peter Elmer, Markus Elsing, V. Daniel Elvira, Giulio Eulisse, Javier Fernandez Menendez, Conor Fitzpatrick, Rikkert Frederix, Stefano Frixione, Krzysztof L Genser, Andrei Gheata, Francesco Giuli, Vladimir V. Gligorov, Hadrien Benjamin Grasland, Heather Gray, Lindsey Gray, Alexander Grohsjean, Christian Gütschow, Stephan Hageboeck, Philip Coleman Harris, Benedikt Hegner, Lukas Heinrich, Burt Holzman, Walter Hopkins, Shih-Chieh Hsu, Stefan Höche, Philip James Ilten, Vladimir Ivantchenko, Chris Jones, Michel Jouvin, Teng Jian Khoo, Ivan Kisel, Kyle Knoepfel, Dmitri Konstantinov, Attila Krasznahorkay, Frank Krauss, Benjamin Edward Krikler, David Lange, Paul Laycock, Qiang Li, Kilian Lieret, Miaoyuan Liu, Vladimir Loncar, Leif Lönnblad, Fabio Maltoni, Michelangelo Mangano, Zachary Louis Marshall, Pere Mato, Olivier Mattelaer, Joshua Angus McFayden, Samuel Meehan, Alaettin Serhan Mete, Ben Morgan, Stephen Mrenna, Servesh Muralidharan, Ben Nachman, Mark S. Neubauer, Tobias Neumann, Jennifer Ngadiuba, Isobel Ojalvo, Kevin Pedro, Maurizio Perini, Danilo Piparo, Jim Pivarski, Simon Plätzer, Witold Pokorski, Adrian Alan Pol, Stefan Prestel, Alberto Ribon, Martin Ritter, Andrea Rizzi, Eduardo Rodrigues, Stefan Roiser, Holger Schulz, Markus Schulz, Marek Schönherr, Elizabeth Sexton-Kennedy, Frank Siegert, Andrzej Siódmok, Graeme A Stewart, Malik Sudhir, Sioni Paris Summers, Savannah Jennifer Thais, Nhan Viet Tran, Andrea Valassi, Marc Verderi, Dorothea Vom Bruch, Gordon T. Watts, Torre Wenaus, Efe Yazgan
Common and community software packages, such as ROOT, Geant4 and event generators have been a key part of the LHC's success so far and continued development and optimisation will be critical in the future. The challenges are driven by an ambitious physics programme, notably the LHC accelerator upgrade to high-luminosity, HL-LHC, and the corresponding detector upgrades of ATLAS and CMS. In this document we address the issues for software that is used in multiple experiments (usually even more widely than ATLAS and CMS) and maintained by teams of developers who are either not linked to a particular experiment or who contribute to common software within the context of their experiment activity. We also give space to general considerations for future software and projects that tackle upcoming challenges, no matter who writes it, which is an area where community convergence on best practice is extremely useful.