C. J. Argue, Anupam Gupta, Guru Guruganesh, Ziye Tang
We study the problem of chasing convex bodies online: given a sequence of convex bodies $K_t\subseteq \mathbb{R}^d$, the algorithm must respond with points $x_t\in K_t$ in an online fashion (i.e., $x_t$ is chosen before $K_{t+1}$ is revealed). The objective is to minimize the sum of distances between successive points in this sequence. Bubeck et al. (STOC 2019) gave a $2^{O(d)}$-competitive algorithm for this problem. We give an algorithm that is $O(\min(d, \sqrt{d \log T}))$-competitive for any sequence of length $T$.
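To illustrate the problem setup, here is a naive baseline (not the paper's algorithm): greedily project the current point onto each arriving body. We restrict bodies to Euclidean balls so projection is a one-liner; the function names are our own, and this greedy strategy is known to have a poor competitive ratio in general.

```python
import math

def project_to_ball(x, center, radius):
    """Euclidean projection of point x onto the ball B(center, radius)."""
    d = math.dist(x, center)
    if d <= radius:
        return x
    # Move x toward the center until it lies on the boundary.
    t = radius / d
    return tuple(c + t * (xi - c) for xi, c in zip(x, center))

def greedy_chase(bodies, x0=(0.0, 0.0)):
    """Respond to each ball by projecting the current point onto it.

    Returns the chosen points x_t and the total movement cost."""
    x, cost, points = x0, 0.0, []
    for center, radius in bodies:
        nxt = project_to_ball(x, center, radius)
        cost += math.dist(x, nxt)
        x = nxt
        points.append(nxt)
    return points, cost
```

The projection step moves only as far as needed to satisfy the current request; the difficulty of the problem lies in the fact that such locally optimal moves can be forced to zig-zag, which is what a competitive algorithm must avoid.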
Domagoj Bradac, Anupam Gupta, Sahil Singla, Goran Zuzic
In classical secretary problems, a sequence of $n$ elements arrives in a uniformly random order, and we want to choose a single item, or a set of size $K$. The random order model allows us to escape from the strong lower bounds for the adversarial order setting, and excellent algorithms are known in this setting. However, one worrying aspect of these results is that the algorithms overfit to the model: they are not very robust. Indeed, if a few "outlier" arrivals are adversarially placed in the arrival sequence, the algorithms perform poorly. For example, Dynkin's popular $1/e$-secretary algorithm fails with even a single adversarial arrival. We investigate a robust version of the secretary problem. In the Byzantine Secretary model, we have two kinds of elements: green (good) and red (rogue). The values of all elements are chosen by the adversary. The green elements arrive at times uniformly randomly drawn from $[0,1]$. The red elements, however, arrive at adversarially chosen times. Naturally, the algorithm does not see these colors: how well can it solve secretary problems? We give algorithms that get value comparable to the value of the optimal green set minus the largest green item. Specifically, we give an algorithm to pick $K$ elements that gets within $(1-\varepsilon)$ factor of the above benchmark, as long as $K \geq \mathrm{poly}(\varepsilon^{-1} \log n)$. We extend this to the knapsack secretary problem, for large knapsack size $K$. For the single-item case, an analogous benchmark is the value of the second-largest green item. For value-maximization, we give a $\mathrm{poly} \log^* n$-competitive algorithm, using a multi-layered bucketing scheme that adaptively refines our estimates of second-max over time. For probability-maximization, we show the existence of a good randomized algorithm, using the minimax principle.
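The fragility mentioned above is easy to see concretely. Below is a sketch of Dynkin's classic $1/e$ rule (observe the first $n/e$ arrivals, then accept the first later arrival that beats everything seen so far); a single huge adversarial value placed in the observation phase makes the rule reject every subsequent item.

```python
import math

def dynkin_secretary(values):
    """Classic 1/e rule: observe the first n/e items, then accept the
    first later item that beats everything seen so far."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v           # hire the first record-breaker
    return values[-1]          # forced to take the last arrival

# A single adversarial "outlier" early in the sequence inflates
# best_seen, so the algorithm defaults to the (arbitrary) last arrival.
```

For instance, on the arrival order `[100, 1, 2, 3, ..., 9]` the rule observes the outlier `100` and then accepts nothing, ending up with the last item.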
Vincent Cohen-Addad, Anupam Gupta, Philip N. Klein, Jason Li
The (non-uniform) sparsest cut problem is the following graph-partitioning problem: given a "supply" graph, and demands on pairs of vertices, delete some subset of supply edges to minimize the ratio of the supply edges cut to the total demand of the pairs separated by this deletion. Despite much effort, there are only a handful of nontrivial classes of supply graphs for which constant-factor approximations are known. We consider the problem for planar graphs, and give a $(2+\varepsilon)$-approximation algorithm that runs in quasipolynomial time. Our approach defines a new structural decomposition of an optimal solution using a "patching" primitive. We combine this decomposition with a Sherali-Adams-style linear programming relaxation of the problem, which we then round. This should be compared with the approximation algorithm of Rao (1999), which uses the metric linear programming relaxation and $\ell_1$-embeddings, and achieves an $O(\sqrt{\log n})$-approximation in polynomial time.
Anupam Gupta, Pascale Magaud, Christine Lafforgue, Micheline Abbas
Finite-size neutrally buoyant particles in a channel flow are known to accumulate at specific equilibrium positions or spots in the channel cross-section if the flow inertia is finite at the particle scale. Experiments in different conduit geometries have shown that while reaching equilibrium locations, particles tend also to align regularly in the streamwise direction. In this paper, the Force Coupling Method was used to numerically investigate the inertia-induced particle alignment, using a square channel geometry. The method was first shown to be suitable to capture the quasi-steady lift force that leads to particle cross-streamline migration in channel flow. Then the particle alignment in the flow direction was investigated by calculating the particle relative trajectories as a function of flow inertia and of the ratio between the particle size and channel hydraulic diameter. The flow streamlines were examined around the freely rotating particles at equilibrium, revealing stable small-scale vortices between aligned particles. The streamwise inter-particle spacing between aligned particles at equilibrium was calculated and compared to available experimental data in square channel flow (Gao {\it et al.} Microfluidics and Nanofluidics {\bf 21}, 154 (2017)). The new result highlighted by our numerical simulations is that the inter-particle spacing is unconditionally stable only for a limited number of aligned particles in a single train, the threshold number being dependent on the confinement (particle-to-channel size ratio) and on the Reynolds number. For instance, when the particle Reynolds number is $\approx1$ and the particle-to-channel height size ratio is $\approx0.1$, the maximum number of stable aligned particles per train is equal to 3. This agrees with statistics from the experiments of Gao {\it et al.} (Microfluidics and Nanofluidics {\bf 21}, 154 (2017)).
Anupam Gupta, Tomer Koren, Kunal Talwar
We study the stochastic multi-armed bandits problem in the presence of adversarial corruption. We present a new algorithm for this problem whose regret is nearly optimal, substantially improving upon previous work. Our algorithm is agnostic to the level of adversarial contamination and can tolerate a significant amount of corruption with virtually no degradation in performance.
Anupam Gupta, Viswanath Nagarajan
We study a general stochastic probing problem defined on a universe $V$, where each element $e \in V$ is "active" independently with probability $p_e$. Elements have weights $\{w_e\}$ and the goal is to maximize the weight of a chosen subset $S$ of active elements. However, we are given only the $p_e$ values -- to determine whether or not an element $e$ is active, our algorithm must probe $e$. If element $e$ is probed and happens to be active, then $e$ must irrevocably be added to the chosen set $S$; if $e$ is not active then it is not included in $S$. Moreover, the following conditions must hold in every random instantiation: (1) the set $Q$ of probed elements satisfies an "outer" packing constraint, and (2) the set $S$ of chosen elements satisfies an "inner" packing constraint. The kinds of packing constraints we consider are intersections of matroids and knapsacks. Our results provide a simple and unified view of results in stochastic matching and Bayesian mechanism design, and can also handle more general constraints. As an application, we obtain the first polynomial-time $\Omega(1/k)$-approximate "Sequential Posted Price Mechanism" under $k$-matroid intersection feasibility constraints.
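A minimal sketch of the probing model, under strong simplifying assumptions: both packing constraints are cardinality constraints, and elements are probed in decreasing $w_e p_e$ order. This ordering heuristic is our own illustration, not the paper's algorithm; note how an active probe must be kept irrevocably.

```python
import random

def probe_greedy(weights, probs, outer_k, inner_k, rng=random):
    """Probe elements in decreasing w_e * p_e order (illustrative
    heuristic only).  Outer constraint: at most outer_k probes.
    Inner constraint: at most inner_k active elements kept.
    Every probed element that turns out active is kept irrevocably."""
    order = sorted(range(len(weights)),
                   key=lambda e: weights[e] * probs[e], reverse=True)
    probed = kept = 0
    value = 0.0
    for e in order:
        if probed == outer_k or kept == inner_k:
            break              # a packing constraint has become tight
        probed += 1
        if rng.random() < probs[e]:   # element e turns out active
            kept += 1
            value += weights[e]
    return value
```

With general matroid or knapsack constraints, the `probed == outer_k` and `kept == inner_k` checks would be replaced by independence tests in the respective set systems.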
Anupam Gupta, Amit Kumar
In the Steiner Forest problem, we are given terminal pairs $\{s_i, t_i\}$, and need to find the cheapest subgraph which connects each of the terminal pairs together. In 1991, Agrawal, Klein, and Ravi, and Goemans and Williamson gave primal-dual constant-factor approximation algorithms for this problem; until now, the only constant-factor approximations we know of are via linear programming relaxations. We consider the following greedy algorithm: Given terminal pairs in a metric space, call a terminal "active" if its distance to its partner is non-zero. Pick the two closest active terminals (say $s_i, t_j$), set the distance between them to zero, and buy a path connecting them. Recompute the metric, and repeat. Our main result is that this algorithm is a constant-factor approximation. We also use this algorithm to give new, simpler constructions of cost-sharing schemes for Steiner forest. In particular, we give the first "group-strict" cost-shares for this problem, which imply a very simple combinatorial sampling-based algorithm for stochastic Steiner forest.
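The greedy algorithm above is concrete enough to sketch directly. Here is a minimal rendering on a finite metric given as a distance matrix; the function name and input format are our own, and the metric recomputation is done naively by relaxing through the newly merged pair.

```python
def greedy_steiner_forest(dist, pairs):
    """Run the greedy on a finite metric.

    dist  -- dict-of-dicts distance matrix (symmetric, metrically closed)
    pairs -- list of (s, t) terminal pairs

    A terminal is "active" while its distance to its partner is nonzero.
    Repeatedly merge the two closest active terminals, paying the current
    distance between them, then recompute the metric.  Returns total cost.
    """
    pts = sorted(dist)
    partner = {}
    for s, t in pairs:
        partner[s], partner[t] = t, s
    cost = 0.0
    while True:
        active = [u for u in partner if dist[u][partner[u]] > 0]
        if not active:
            return cost
        # the two closest active terminals (not necessarily partners)
        u, v = min(((a, b) for a in active for b in active if a != b),
                   key=lambda e: dist[e[0]][e[1]])
        cost += dist[u][v]
        dist[u][v] = dist[v][u] = 0.0
        # recompute the metric: new shortest paths may use the zero edge
        for i in pts:
            for j in pts:
                dist[i][j] = min(dist[i][j],
                                 dist[i][u] + dist[v][j],
                                 dist[i][v] + dist[u][j])
```

For example, on four points `a, b, c, d` on a line at positions `0, 1, 10, 11` with pairs `(a, b)` and `(c, d)`, the greedy buys the two short paths and pays $2$, matching the optimum.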
Albert Gu, Anupam Gupta, Amit Kumar
In the online Steiner tree problem, a sequence of points is revealed one-by-one: when a point arrives, we only have time to add a single edge connecting this point to the previous ones, and we want to minimize the total length of edges added. For two decades, we have known that the greedy algorithm maintains a tree whose cost is O(log n) times the Steiner tree cost, and this is best possible. But suppose, in addition to the new edge we add, we can change a single edge from the previous set of edges: can we do much better? Can we maintain a tree that is constant-competitive? We answer this question in the affirmative. We give a primal-dual algorithm, and a novel dual-based analysis, that makes only a single swap per step (in addition to adding the edge connecting the new point to the previous ones), and such that the tree's cost is only a constant times the optimal cost. Previous results for this problem gave an algorithm that performed an amortized constant number of swaps: for each n, the number of swaps in the first n steps was O(n). We also give a simpler tight analysis for this amortized case.
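For reference, the O(log n)-competitive greedy baseline mentioned above is a few lines: connect each arriving point to its nearest already-present point. The paper's single-swap algorithm is substantially more involved; this sketch (our own naming, Euclidean distances) only shows the baseline it improves on.

```python
import math

def greedy_online_steiner(points):
    """Classic greedy: connect each arriving point to its nearest
    already-present point.  Returns the edge list (as index pairs)
    and the total length of edges added."""
    edges, total = [], 0.0
    for i, p in enumerate(points):
        if i == 0:
            continue  # the first point needs no edge
        j = min(range(i), key=lambda k: math.dist(p, points[k]))
        edges.append((j, i))
        total += math.dist(p, points[j])
    return edges, total
```

The swap-based algorithm additionally gets to replace one old edge per step, which is exactly the extra power needed to bring the competitive ratio from logarithmic down to constant.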
Vishwanath Shukla, Anupam Gupta, Rahul Pandit
We present the first direct-numerical-simulation study of the statistical properties of two-dimensional superfluid turbulence in the Hall-Vinen-Bekharevich-Khalatnikov two-fluid model. We show that both normal-fluid and superfluid energy spectra can exhibit two power-law regimes, the first associated with an inverse cascade of energy and the second with the forward cascade of enstrophy. We quantify the mutual-friction-induced alignment of normal and superfluid velocities by obtaining probability distribution functions of the angle between them and the ratio of their moduli. Our study leads to specific suggestions for experiments.
Anupam Gupta, Euiwoong Lee, Jason Li, Marcin Mucha, Heather Newman, Sherry Sarkar
We show how to round any half-integral solution to the subtour-elimination relaxation for the TSP, while losing a less-than-1.5 factor. Such a rounding algorithm was recently given by Karlin, Klein, and Oveis Gharan based on sampling from max-entropy distributions. We build on an approach of Haddadan and Newman to show how sampling from the matroid intersection polytope, and a new use of max-entropy sampling, can give better guarantees.
Soni D. Prajapati, Akshay Bhatnagar, Anupam Gupta
We simulate active Brownian particles (ABPs) with soft-repulsive interactions subjected to a four-roll-mill flow. In the absence of flow, this system exhibits motility-induced phase separation (MIPS). To investigate the interplay between MIPS and flow-induced mixing, we introduce dimensionless parameters: a scaled time, $\tau$, and a scaled velocity, ${\rm v}$, characterizing the ratio of ABP to fluid time and velocity scales, respectively. The parameter space defined by $\tau$ and ${\rm v}$ reveals three distinct ABP distribution regimes. At low velocities ${\rm v} \ll 1$, flow dominates, leading to a homogeneous mixture. Conversely, at high velocities ${\rm v} \gg 1$, motility prevails, resulting in MIPS. In the intermediate regime (${\rm v} \sim 1$), the system's behavior depends on $\tau$. For $\tau<1$, a moderately mixed homogeneous phase emerges, while for $\tau>1$, a novel phase, termed flow-induced phase separation (FIPS), arises due to the combined effects of flow topology and ABP motility and size. To characterize these phases, we analyze drift velocity, diffusivity, mean-squared displacement, giant number fluctuations, radial distribution function, and cluster-size distribution.
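The basic dynamics can be sketched as an Euler-Maruyama step for a single non-interacting ABP advected by a cellular flow. The velocity-field prefactors and the omission of the soft-repulsive interactions are our simplifications; they are not the paper's exact setup.

```python
import math, random

def four_roll_mill(x, y, U0=1.0):
    """Model four-roll-mill velocity field (a standard cellular-flow
    form; the exact amplitudes used in the paper are an assumption)."""
    return (U0 * math.sin(x) * math.cos(y),
            -U0 * math.cos(x) * math.sin(y))

def abp_step(x, y, theta, v0, Dr, dt, rng=random):
    """One Euler-Maruyama step for a single ABP: advection by the flow
    plus self-propulsion at speed v0 along orientation theta, which
    diffuses rotationally with coefficient Dr.  Interactions omitted."""
    ux, uy = four_roll_mill(x, y)
    x += (ux + v0 * math.cos(theta)) * dt
    y += (uy + v0 * math.sin(theta)) * dt
    theta += math.sqrt(2.0 * Dr * dt) * rng.gauss(0.0, 1.0)
    return x, y, theta
```

In these units the scaled velocity ${\rm v}$ compares `v0` to the flow amplitude `U0`, and the scaled time $\tau$ compares the persistence time `1/Dr` to the eddy turnover time.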
Sahil Islam, Mohd. Suhail Rizvi, Anupam Gupta
Embryonic tissues deform across broad spatial and temporal scales and relax stress through active rearrangements. A quantitative link between cell-scale activity, spatial forcing, and emergent tissue-scale mechanics remains incomplete. Here, we use a vertex-based tissue model with active force fluctuations to study how motility controls viscoelastic response. After validation against experimental presomitic mesoderm relaxation dynamics, we extract intrinsic mechanical timescales using stress relaxation and oscillatory shear. The model captures motility-dependent shifts between elastic and viscous behavior and the coexistence of fast relaxation with long-lived residual stress. When subjected to spatially patterned, temporally pulsed forcing, tissues behave as mechanical filters: long-wavelength inputs are accumulated, whereas short-wavelength, cell-scale perturbations are rapidly erased, largely independent of motility. Simulations with localized motility hotspots, motivated by spatially confined FGF signaling reported in vertebrate limb development, produce sustained protrusive tissue deformations consistent with experimentally observed early bud-like morphologies. Together, these results establish a minimal framework linking motility-driven activity to wavelength-selective mechanical memory and emergent tissue patterning.
Soni D. Prajapati, Kusum Seervi, Akshay Bhatnagar, Anupam Gupta
We investigate the collective dynamics of active Brownian particles (ABPs) subjected to a steady two-dimensional four-roll-mill flow using numerical simulations. By varying the packing fraction ($\phi$), we uncover a novel flow-induced phase separation (FIPS) that emerges beyond a critical density ($\phi\geq 0.6$). The mean-square displacement (MSD) exhibits an intermediate bump between ballistic and diffusive regimes, indicating transient trapping and flow-guided clustering. The effective diffusivity decreases quadratically with $\phi$, while the drift velocity remains nearly constant, demonstrating that large-scale transport is primarily dictated by the background flow. Number fluctuations show a crossover from normal to giant scaling, signaling the onset of long-range density inhomogeneities in the FIPS regime. Our findings provide new insights into the coupling between activity, crowding, and flow, offering a unified framework for understanding phase behavior in driven active matter systems.
C. J. Argue, Sébastien Bubeck, Michael B. Cohen, Anupam Gupta, Yin Tat Lee
Friedman and Linial introduced the convex body chasing problem to explore the interplay between geometry and competitive ratio in metrical task systems. In convex body chasing, at each time step $t \in \mathbb{N}$, the online algorithm receives a request in the form of a convex body $K_t \subseteq \mathbb{R}^d$ and must output a point $x_t \in K_t$. The goal is to minimize the total movement between consecutive output points, where the distance is measured in some given norm. This problem is still far from being understood, and recently Bansal et al. gave an algorithm for the nested version, where each convex body is contained within the previous one. We propose a different strategy, which yields an $O(d \log d)$-competitive algorithm for this nested convex body chasing problem, improving substantially over previous work. Our algorithm works for any norm. This result is almost tight, given an $\Omega(d)$ lower bound for the $\ell_{\infty}$ norm.
Rohan Ghuge, Anupam Gupta, Viswanath Nagarajan
In the stochastic submodular cover problem, the goal is to select a subset of stochastic items of minimum expected cost to cover a submodular function. Solutions in this setting correspond to sequential decision processes that select items one by one "adaptively" (depending on prior observations). While such adaptive solutions achieve the best objective, the inherently sequential nature makes them undesirable in many applications. We ask: how well can solutions with only a few adaptive rounds approximate fully-adaptive solutions? We give nearly tight answers for both independent and correlated settings, proving smooth tradeoffs between the number of adaptive rounds and the solution quality, relative to fully adaptive solutions. Experiments on synthetic and real datasets show qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions with a few rounds of adaptivity are nearly as good as fully adaptive solutions.
Anupam Gupta, David G. Harris, Euiwoong Lee, Jason Li
In the $k$-cut problem, we want to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into $k$ connected components. Algorithms of Karger \& Stein can solve this in roughly $O(n^{2k})$ time. On the other hand, lower bounds from conjectures about the $k$-clique problem imply that $\Omega(n^{(1-o(1))k})$ time is likely needed. Recent results of Gupta, Lee \& Li have given new algorithms for general $k$-cut in $n^{1.98k + O(1)}$ time, as well as specialized algorithms with better performance for certain classes of graphs (e.g., for small integer edge weights). In this work, we resolve the problem for general graphs. We show that the Contraction Algorithm of Karger outputs any fixed $k$-cut of weight $\alpha\lambda_k$ with probability $\Omega_k(n^{-\alpha k})$, where $\lambda_k$ denotes the minimum $k$-cut weight. This also gives an extremal bound of $O_k(n^k)$ on the number of minimum $k$-cuts and an algorithm to compute $\lambda_k$ with roughly $n^k \mathrm{polylog}(n)$ runtime. Both are tight up to lower-order factors, with the algorithmic lower bound assuming hardness of max-weight $k$-clique. The first main ingredient in our result is an extremal bound on the number of cuts of weight less than $2\lambda_k/k$, using the Sunflower lemma. The second ingredient is a fine-grained analysis of how the graph shrinks -- and how the average degree evolves -- in the Karger process.
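For intuition, one run of the Contraction Algorithm analyzed here can be sketched in a few lines. This is a simplified version (unit weights, and an edge-sampling rule that skips already-contracted edges rather than resampling uniformly among surviving edges); names are our own.

```python
import random

def karger_k_cut(edges, n, k, rng=random):
    """One run of the Contraction Algorithm for k-cut: contract random
    edges until k super-vertices remain, then return the number of
    edges crossing the induced k-way partition.

    edges -- list of (u, v) pairs on vertices 0..n-1, unit weights.
    """
    parent = list(range(n))

    def find(u):                      # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    components = n
    while components > k:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:                  # contract a non-self-loop edge
            parent[ru] = rv
            components -= 1
    # the surviving (crossing) edges form the candidate k-cut
    return sum(1 for u, v in edges if find(u) != find(v))
```

Per the bound stated above, a fixed minimum $k$-cut survives one run with probability roughly $n^{-k}$, so repeating about $n^k \mathrm{polylog}(n)$ times and keeping the best cut found computes $\lambda_k$ with high probability.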
Kolluru Venkata Kiran, Anupam Gupta, Akhilesh Kumar Verma, and Rahul Pandit
We use the mean-bacterial-velocity model to investigate the \textit{irreversibility} of two-dimensional (2D) \textit{bacterial turbulence} and to compare it with its 2D fluid-turbulence counterpart. We carry out extensive direct numerical simulations of Lagrangian tracer particles that are advected by the velocity field in this model. Our work uncovers an important, qualitative way in which irreversibility in bacterial turbulence is different from its fluid-turbulence counterpart: For large positive (or large but negative) values of the \textit{friction} (or \textit{activity}) parameter, the probability distribution functions of energy increments along tracer trajectories, and of the power, are \textit{positively} skewed; so irreversibility in bacterial turbulence can lead, on average, to \textit{particles gaining energy faster than they lose it}, which is the exact opposite of what is observed for tracers in 2D fluid turbulence.
Anupam Gupta, Mauro Sbragaglia
We investigate the break-up of Newtonian/viscoelastic droplets in a viscoelastic/Newtonian matrix under the hydrodynamic conditions of a confined shear flow. Our numerical approach is based on a combination of Lattice-Boltzmann models (LBM) and Finite Difference (FD) schemes. LBM are used to model two immiscible fluids with variable viscosity ratio (i.e. the ratio of the droplet to matrix viscosity); FD schemes are used to model viscoelasticity, and the kinetics of the polymers is introduced using constitutive equations for viscoelastic fluids with finitely extensible non-linear elastic dumbbells with Peterlin's closure (FENE-P). We study both strongly and weakly confined cases to highlight the role of matrix and droplet viscoelasticity in changing the droplet dynamics after the startup of a shear flow. Simulations provide easy access to quantities such as droplet deformation and orientation and are used to quantitatively predict the critical capillary number at which the droplet breaks, the latter being strongly correlated to the formation of multiple neckings at break-up. This study complements our previous investigation on the role of droplet viscoelasticity (A. Gupta \& M. Sbragaglia, {\it Phys. Rev. E} {\bf 90}, 023305 (2014)), and is here further extended to the case of matrix viscoelasticity.
K. V. Rajany, Anupam Gupta, Alexander V. Panfilov, Rahul Pandit
Disorganized electrical activity in the heart leads to sudden cardiac death. To what extent can this electrical turbulence be viewed as classical fluid turbulence? This is an important central problem in modern physics. We investigate, for the first time, via extensive direct numerical simulations (DNSs), the statistical properties of spiral- and scroll-wave turbulence in two- and three-dimensional excitable media by using approaches employed in studies of classical turbulence. We use the Panfilov and the Aliev-Panfilov mathematical models for cardiac tissue. We show that, once electrical-wave turbulence has been initiated, there is a forward cascade, in which spirals or scrolls form, interact, and break to yield a turbulent state that is statistically steady and, far away from boundaries, is statistically homogeneous and isotropic. For the transmembrane potential $V$ and the slow recovery variable $g$, which define our models, we define $E_V(k)$ and $E_g(k)$, the electrical-wave analogs of the fluid energy spectrum $E(k)$ in fluid turbulence. We show that $E_V(k)$ and $E_g(k)$ are spread out over several decades in $k$. Thus spiral- and scroll-wave turbulence involves a wide range of spatial scales. $E_V(k)$ and $E_g(k)$ show approximate power laws in some range of $k$; however, their exponents cannot be determined as accurately as their fluid-turbulence counterparts. The dimensionless ratio $L/\lambda$ is a convenient control parameter, like the Reynolds number for fluid turbulence, where $L$ is the linear size of the domain and $\lambda$ is the wavelength of a plane wave in the medium. By comparing several other statistical properties of spiral- and scroll-wave turbulence with their fluid-turbulence counterparts, we show that, although spiral- and scroll-wave turbulence has some statistical properties like those of fluid turbulence, overall these types of turbulence are special and differ in important ways from fluid turbulence.
Anupam Gupta, Euiwoong Lee, Jason Li
In the $k$-Cut problem, we are given an edge-weighted graph $G$ and an integer $k$, and have to remove a set of edges with minimum total weight so that $G$ has at least $k$ connected components. Prior work on this problem gives, for all $h \in [2,k]$, a $(2-h/k)$-approximation algorithm for $k$-Cut that runs in time $n^{O(h)}$. Hence to get a $(2 - \varepsilon)$-approximation algorithm for some absolute constant $\varepsilon$, the best runtime using prior techniques is $n^{O(k\varepsilon)}$. Moreover, it was recently shown that getting a $(2 - \varepsilon)$-approximation for general $k$ is NP-hard, assuming the Small Set Expansion Hypothesis. If we use the size of the cut as the parameter, an FPT algorithm to find the exact $k$-Cut is known, but solving the $k$-Cut problem exactly is $W[1]$-hard if we parameterize only by the natural parameter $k$. An immediate question is: \emph{can we approximate $k$-Cut better in FPT-time, using $k$ as the parameter?} We answer this question positively. We show that for some absolute constant $\varepsilon > 0$, there exists a $(2 - \varepsilon)$-approximation algorithm that runs in time $2^{O(k^6)} \cdot \widetilde{O} (n^4)$. This is the first FPT algorithm that is parameterized only by $k$ and strictly improves the $2$-approximation.