Disha Bhatia, Sabyasachi Chakraborty, Amol Dighe
We present a class of minimal $U(1)_X$ models as a plausible solution to the $R_K$ anomaly that can also help reproduce the neutrino mixing pattern. The symmetries and the corresponding $X$-charges of the fields are determined in a bottom-up approach, demanding both theoretical and experimental consistency. The breaking of the $U(1)_X$ symmetry results in a massive $Z^{\prime}$, whose couplings with leptons and quarks are necessarily non-universal in order to address the $R_K$ anomaly. In the process, an additional Higgs doublet is introduced to generate quark mixings. The mixings in the neutrino sector are generated through the Type-I seesaw mechanism by the addition of three right-handed neutrinos and a scalar singlet. The $Z^{\prime}$ can be probed with a few hundred fb$^{-1}$ of integrated luminosity at the 13 TeV LHC in the di-muon channel.
Shankha Banerjee, Geneviève Bélanger, Disha Bhatia, Benjamin Fuks, Sreerup Raychaudhuri
Non-minimal simplified extensions of the Standard Model have gained considerable currency in the context of dark matter searches at the LHC, since they predict enhanced mono-Higgs and mono-$W/Z$ signatures over large parts of the parameter space. However, these non-minimal models obviously lack the simplicity and directness of the original simplified models, and are more heavily dependent on model assumptions. We propose to classify these models generically on the basis of additional mediator(s) and dark matter particles. As an example, we consider a scenario involving multiple pseudoscalar mediators and a single Dirac dark matter particle, the former being a popular means of ensuring the ultraviolet completion of theories with pseudoscalar fields. In the chosen scenario, we discuss the viable channels and signatures of relevance at the future runs of the LHC. These are then compared with the minimal simplified scenarios, and distinguishing features are pinpointed.
Disha Bhatia, Ushoshi Maitra, Saurabh Niyogi
We present a comprehensive analysis of observing a light Higgs boson in the mass range $70$ -- $110$ GeV at the 13/14 TeV LHC, in the context of the type-I two-Higgs-doublet model. The decay of the light Higgs to a pair of bottom quarks is dominant in most parts of the parameter space, except in the fermiophobic limit, where its decay to bosons (mainly a pair of photons) becomes important. We perform an extensive collider analysis for the $b\bar{b}$ and $γγ$ final states. For the $b \bar{b}$ mode, the light scalar is tagged in the highly boosted regime to reduce the enormous QCD background. This decay can be observed with a few thousand fb$^{-1}$ of integrated luminosity at the LHC. Near the fermiophobic limit, the decay of the light Higgs to a pair of photons can be probed with as little as a few hundred fb$^{-1}$ of integrated luminosity at the LHC.
Disha Bhatia
We investigate a recently proposed non-thermal mechanism for dark matter production, in which a small initial dark matter ($χ$) number density undergoes exponential growth through scatterings with bath particles ($φ$) in the early universe ($χφ\to χχ$). The process ends when the scattering rate becomes Boltzmann suppressed. Analyses in the literature rely on the simplifying assumption that the dark matter phase-space distribution traces the equilibrium distribution of either the standard-model or a hidden-sector bath. Owing to the non-thermal nature of the production mechanism, this assumption may not necessarily hold. In this work, we test its validity by numerically solving the unintegrated Boltzmann equation for the dark matter distribution. Our results, independent of the initial conditions, show that after exponential growth ceases, the dark matter distribution exhibits equilibrium-like behaviour at low comoving momentum, especially for larger couplings. While full kinetic equilibrium-like behaviour is not reached across all momentum modes, the scaled equilibrium approximation provides reasonable estimates for the dark matter abundance. For more accurate results, however, the full unintegrated Boltzmann equation must be solved.
Disha Bhatia, Nishita Desai, Siddharth Dwivedi
Determining whether the SM-like Higgs is part of an extended Higgs sector is one of the most important questions to be asked after its discovery. A light charged Higgs boson with mass smaller than the sum of the top and bottom quark masses is naturally allowed in the Type-I two-Higgs-doublet model, and can be produced in association with neutral scalars over large parts of the parameter space at the LHC. Such low-mass charged scalars typically have dominant decays to the fermionic modes, viz. $τν$ and $c s$. However, in the presence of a light neutral scalar ($\varphi$), the charged Higgs boson has a substantial branching fraction into the bosonic decay modes $H^{\pm} \to W^{(*)} \varphi$. Identifying the heavier neutral Higgs ($H$) with the observed 125 GeV Higgs and working in the limit $M_{H^\pm} \approx M_A$, we examine charged Higgs production and decay in the bosonic mode $p p \to H^\pm h \to W^{(*)}h h$. The presence of two light Higgses ($h$) is then the key to identifying charged Higgs production. The light Higgs branching ratio is largely dominated by the $b\bar{b}$ mode, except close to the fermiophobic limit, where the rates into $b \bar b$ and $γγ$ can be comparable and we can use the $γγb\bar{b}$ signature. This signature is complementary to the $h h \to 4γ$ channel, which has been discussed previously in the literature. Using the lepton from the $W$ boson, we demonstrate with a cut-and-count analysis that both the new light neutral Higgs and the charged Higgs can be probed with reasonable significance at the 13.6 TeV LHC with 300--3000 fb$^{-1}$ of integrated luminosity.
Disha Bhatia, Nishita Desai, Amol Dighe
We analyze the class of models with an extra $U(1)_X$ gauge symmetry that can account for the $b \to s \ell \ell$ anomalies by modifying the Wilson coefficients $C_{9e}$ and $C_{9μ}$ from their standard model values. At the same time, these models generate appropriate quark mixing, and give rise to neutrino mixing via the Type-I seesaw mechanism. Apart from the gauge boson $Z'$, these frugal models only have three right-handed neutrinos for the seesaw mechanism, an additional $SU(2)_L$ scalar doublet for quark mixing, and a SM-singlet scalar that breaks the $U(1)_X$ symmetry. This set-up identifies a class of leptonic symmetries, and necessitates non-zero but equal charges for the first two quark generations. If the quark mixing beyond the standard model were CKM-like, all these symmetries would be ruled out by the latest flavor constraints on Wilson coefficients and collider constraints on $Z'$ parameters. However, we identify a single-parameter source of non-minimal flavor violation that allows a wider class of $U(1)_X$ symmetries to be compatible with all data. We show that the viable leptonic symmetries have to be of the form $L_e \pm 3 L_μ- L_τ$ or $L_e - 3 L_μ+ L_τ$, and determine the $(M_{Z^\prime}, g_{Z^\prime})$ parameter space that may be probed by the high-luminosity data at the LHC.
Disha Bhatia, Sabyasachi Chakraborty, Amol Dighe
We identify a class of $U(1)_X$ models which can explain the $R_K$ anomaly and the neutrino mixing pattern, using a bottom-up approach. The different $X$-charges of the lepton generations account for the lepton universality violation required to explain $R_K$. In addition to the three right-handed neutrinos needed for the Type-I seesaw mechanism, these minimal models only introduce an additional Higgs doublet and a singlet scalar. While the former helps in reproducing the quark mixing structure, the latter gives masses to the neutrinos and the new gauge boson $Z^\prime$. Our bottom-up approach determines the $X$-charges of all particles using theoretical consistency and experimental constraints. We find the parameter space allowed by the constraints from neutral meson mixing, rare $b\to s$ decays and direct collider searches for $Z^\prime$. Such a $Z^\prime$ may be observable at the ongoing run of the Large Hadron Collider with a few hundred fb$^{-1}$ of integrated luminosity.
Chiara Arina, Benjamin Fuks, Luca Panizzi, Michael J. Baker, Alan S. Cornell, Jan Heisig, Benedikt Maier, Rute Pedro, Dominique Trischuk, Diyar Agin, Alexandre Arbey, Giorgio Arcadi, Emanuele Bagnaschi, Kehang Bai, Disha Bhatia, Mathias Becker, Alexander Belyaev, Ferdinand Benoit, Monika Blanke, Jackson Burzynski, Jonathan M. Butterworth, Antimo Cagnotta, Lorenzo Calibbi, Linda M. Carpenter, Xabier Cid Vidal, Emanuele Copello, Louie Corpe, Francesco D'Eramo, Aldo Deandrea, Aman Desai, Caterina Doglioni, Sunil M. Dogra, Mathias Garny, Mark D. Goodsell, Sohaib Hassan, Philip Coleman Harris, Julia Harz, Alejandro Ibarra, Alberto Orso Maria Iorio, Felix Kahlhoefer, Deepak Kar, Shaaban Khalil, Valery Khoze, Pyungwon Ko, Sabine Kraml, Greg Landsberg, Andre Lessa, Laura Lopez-Honorez, Alberto Mariotti, Vasiliki A. Mitsou, Kirtimaan Mohan, Chang-Seong Moon, Alexander Moreno Briceño, María Moreno Llácer, Léandre Munoz-Aillaud, Taylor Murphy, Anele M. Ncube, Wandile Nzuza, Clarisse Prat, Lena Rathmann, Thobani Sangweni, Dipan Sengupta, William Shepherd, Sukanya Sinha, Tim M. P. Tait, Andrea Thamm, Michel H. G. Tytgat, Zirui Wang, David Yu, Shin-Shan Yu
This report, summarising work achieved in the context of the LHC Dark Matter Working Group, investigates the phenomenology of $t$-channel dark matter models, spanning minimal setups with a single dark matter candidate and mediator to more complex constructions closer to UV-complete models. For each considered class of models, we examine collider, cosmological and astrophysical implications. In addition, we explore scenarios with either promptly decaying or long-lived particles, as well as featuring diverse dark matter production mechanisms in the early universe. By providing a unified analysis framework, numerical tools and guidelines, this work aims to support future experimental and theoretical efforts in exploring $t$-channel dark matter models at colliders and in cosmology.
Disha Bhatia, Satyanarayan Mukhopadhyay
Using the upper bound on the inelastic reaction cross-section implied by S-matrix unitarity, we derive the thermally averaged maximum dark matter (DM) annihilation rate for general $k \rightarrow 2$ number-changing reactions, with $k \geq 2$, taking place either entirely within the dark sector, or involving standard model fields. This translates to a maximum mass of the particle saturating the observed DM abundance, which, for dominantly $s$-wave annihilations, is obtained to be around $130$ TeV, $1$ GeV, $7$ MeV and $110$ keV, for $k=2,3,4$ and $5$, respectively, in a radiation dominated Universe, for a real or complex scalar DM stabilized by a minimal symmetry. For modified thermal histories in the pre-big bang nucleosynthesis era, with an intermediate period of matter domination, values of reheating temperature higher than $\mathcal{O}(200)$ GeV for $k \geq 4$, $\mathcal{O}(1)$ TeV for $k=3$ and $\mathcal{O}(50)$ TeV for $k=2$ are strongly disfavoured by the combined requirements of unitarity and DM relic abundance, for DM freeze-out before reheating.
Debjyoti Bardhan, Disha Bhatia, Amit Chakraborty, Ushoshi Maitra, Sreerup Raychaudhuri, Tousik Samui
The recent observation of a modest excess in diphoton final states at the LHC, by both the ATLAS and CMS Collaborations, has sparked off the expected race among theorists to find the right explanation for this proto-resonance, assuming that the signal will survive and not prove to be yet another statistical fluctuation. We carry out a general analysis of this `signal' in the case of a scalar which couples only to pairs of gluons (for production) and photons (for diphoton decay modes), and establish that an explanation of the observed resonance, taken together with the null results of new physics searches in all the other channels, requires a scalar with rather exotic behaviour. We then demonstrate that a fairly simple-minded extension of the minimal Randall-Sundrum model can yield a radion candidate which might reproduce this exotic behaviour.
B. C. Allanach, D. Bhatia, A. M. Iyer
We examine the phenomenology of the production, at the 13 TeV Large Hadron Collider (LHC), of a heavy resonance $X$, which decays via other new on-shell particles $n$ into multi- (i.e.\ three or more) photon final states. In the limit that $n$ has a much smaller mass than $X$, the multi-photon final state may dominantly appear as a two-photon final state, because the $γ$s from the $n$ decay are highly collinear and remain unresolved. We discuss how to discriminate this scenario from $X \rightarrow γγ$: rather than discarding non-isolated photons, it is better to relax the isolation criterion and form photon jet substructure variables. The spins of $X$ and $n$ leave their imprint upon the distribution of the pseudorapidity gap $Δη$ between the apparent two-photon states. Depending on the total integrated luminosity, this can be used in many cases to claim discrimination between the possible spin choices of $X$ and $n$, although the case where $X$ and $n$ are both scalar particles cannot be discriminated from the direct $X \rightarrow γγ$ decay in this manner. Information on the mass of $n$ can be gained by considering the mass of each photon jet.
Disha Bhatia, Ushoshi Maitra, Sreerup Raychaudhuri
We make a careful analysis of $W^\pmγ$ production at the LHC, identifying the $W^\pm$ through its leptonic decays, with a view to exploring the sensitivity of the machine to anomalous $CP$-conserving $WWγ$ interactions. All the available kinematic variables are used, but we find that the most useful one is the opening angle in the transverse plane between the decay products of the $W^\pm$. It is shown that even a simple-minded analysis using this variable can lead to much greater sensitivity at the LHC than the current constraints on the relevant parameters.