Amatya Sharma, Santhoshini Velusamy
We study the single-pass streaming complexity of deciding satisfiability of Constraint Satisfaction Problems (CSPs). A CSP is specified by a constraint language $Γ$, that is, a finite set of $k$-ary relations over the domain $[q] = \{0, \dots, q-1\}$. An instance of $\mathsf{CSP}(Γ)$ consists of $m$ constraints over $n$ variables $x_1, \ldots, x_n$ taking values in $[q]$. Each constraint $C_i$ is of the form $R_i(x_{i_1} + λ_{i_1}, \ldots, x_{i_k} + λ_{i_k})$, where $R_i \in Γ$ and $λ_{i_1}, \ldots, λ_{i_k} \in [q]$ are constants; it is satisfied if and only if $(x_{i_1} + λ_{i_1}, \ldots, x_{i_k} + λ_{i_k}) \in R_i$, where addition is modulo $q$. In the streaming model, constraints arrive one by one, and the goal is to determine, using as little memory as possible, whether there exists an assignment satisfying all constraints. For $k$-SAT, Vu (TCS 2024) proves an optimal $Ω(n^k)$ space lower bound, while for general CSPs, Chou, Golovnev, Sudan, and Velusamy (JACM 2024) establish an $Ω(n)$ lower bound; a complete characterization has remained open. We close this gap by showing that the single-pass streaming space complexity of $\mathsf{CSP}(Γ)$ is precisely governed by its non-redundancy, a structural parameter introduced by Bessiere, Carbonnel, and Katsirelos (AAAI 2020). The non-redundancy $\mathsf{NRD}_n(Γ)$ is the maximum number of constraints in an instance over $n$ variables in which every constraint $C$ is non-redundant, i.e., there exists an assignment satisfying all constraints except $C$. We prove that the single-pass streaming complexity of $\mathsf{CSP}(Γ)$ is characterized, up to a logarithmic factor, by $\mathsf{NRD}_n(Γ)$.
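To make the non-redundancy condition concrete, the following brute-force sketch (toy illustration code; the function names and the tiny disequality language are our own, not from the paper) checks whether every constraint in an instance has a witness assignment that satisfies all other constraints while violating it:

```python
from itertools import product

def satisfied(constraint, assignment, q):
    # constraint = (R, variables, shifts); R is a set of tuples over [q]
    R, vars_, shifts = constraint
    return tuple((assignment[v] + s) % q for v, s in zip(vars_, shifts)) in R

def is_non_redundant(instance, n, q=2):
    # every constraint C needs a witness: an assignment that violates C
    # while satisfying all the other constraints
    return all(
        any(
            not satisfied(C, a, q)
            and all(satisfied(D, a, q) for j, D in enumerate(instance) if j != i)
            for a in product(range(q), repeat=n)
        )
        for i, C in enumerate(instance)
    )

NEQ = {(0, 1), (1, 0)}  # binary disequality over [2]
inst = [(NEQ, (0, 1), (0, 0)), (NEQ, (1, 2), (0, 0))]  # x0 != x1, x1 != x2
print(is_non_redundant(inst, n=3))  # True: each constraint has a witness
```

$\mathsf{NRD}_n(Γ)$ is then the largest instance size for which such a check can succeed.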
Nicholas Kocurek, Shayan Oveis Gharan, Dante Tjowasi
We design an efficient sampling algorithm to generate samples from the hardcore model on random regular bipartite graphs as long as $λ \lesssim \frac{1}{\sqrt{Δ}}$, where $Δ$ is the degree. Combined with recent work of Jenssen, Keevash, and Perkins, this implies an FPRAS for the partition function of the hardcore model on random regular bipartite graphs at any fugacity. Our algorithm is based on two new Markov chains that work in complementary regimes. Our proof proceeds by showing that the corresponding simplicial complexes are top-link spectral expanders and appealing to the trickle-down theorem to prove fast mixing.
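For background, the standard single-site Glauber dynamics for the hardcore model (the generic textbook chain, not either of the two specialized chains analyzed in the paper) can be sketched as:

```python
import random

def glauber_hardcore(adj, lam, steps, rng):
    """Single-site Glauber dynamics for the hardcore model at fugacity lam.
    adj[v] lists the neighbors of v; the chain keeps the occupied set independent."""
    occ = [0] * len(adj)
    for _ in range(steps):
        v = rng.randrange(len(adj))
        if any(occ[u] for u in adj[v]):
            occ[v] = 0  # a blocked vertex must be empty
        else:
            occ[v] = 1 if rng.random() < lam / (1 + lam) else 0
    return occ

cycle4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
occ = glauber_hardcore(cycle4, 0.5, 2000, random.Random(0))
# invariant: the occupied vertices always form an independent set
assert all(not (occ[u] and occ[v]) for u in range(4) for v in cycle4[u])
```

This chain is known to mix slowly on bipartite graphs at large fugacity, which is precisely the obstruction motivating specialized chains for the bipartite setting.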
Piotr Kawałek, Jacek Krzaczkowski
Most classical results in circuit complexity theory concern circuits over the Boolean domain. Besides their simplicity and the ease of comparing different languages, the actual architecture of computers is also an important motivating factor. On the other hand, by restricting attention to Boolean circuits, we lose sight of the much richer landscape of circuits over larger domains. Our goal is to bridge these two worlds: to use deep algebraic tools to obtain results in computational complexity theory, including circuit complexity, and to apply results from computational complexity to gain a better understanding of the structure of finite algebras. In this paper, we propose a unifying algebraic framework which we believe will help achieve this goal. Our work is inspired by branching programs and nonuniform deterministic automata introduced by Barrington, as well as by their generalization proposed by Idziak et al. We begin our investigation by studying the languages recognized by natural classes of algebraic structures. In particular, we characterize language classes recognized by circuits over simple algebras and over algebras from congruence modular varieties.
Ishay Haviv
We study the kernel complexity of constraint satisfaction problems over a finite domain, parameterized by the number of variables, whose constraint language consists of two relations: the non-equality relation and an additional permutation-invariant relation $R$. We establish a conditional lower bound on the kernel size in terms of the largest arity of an OR relation definable from $R$. Building on this, we investigate the kernel complexity of uniformly rainbow-free coloring problems. In these problems, for fixed positive integers $d$, $\ell$, and $q \geq d$, we are given a graph $G$ on $n$ vertices and a collection $\cal F$ of $\ell$-tuples of $d$-subsets of its vertex set, and the goal is to decide whether there exists a proper coloring of $G$ with $q$ colors such that no $\ell$-tuple in $\cal F$ is uniformly rainbow, that is, such that no tuple has all its sets colored with the same $d$ distinct colors. We determine, for all admissible values of $d$, $\ell$, and $q$, the infimum over all values $η$ for which the problem admits a kernel of size $O(n^η)$, under the assumption $\mathsf{NP} \nsubseteq \mathsf{coNP/poly}$. As applications, we obtain nearly tight bounds on the kernel complexity of various coloring problems under diverse settings and parameterizations. This includes graph coloring problems parameterized by the vertex-deletion distance to a disjoint union of cliques, resolving a question of Schalken (2020), as well as uniform hypergraph coloring problems parameterized by the number of vertices, extending results of Jansen and Pieterse (2019) and Beukers (2021).
Pratik Shastri
We prove that every fan-in $2$ noncommutative arithmetic circuit computing the palindrome polynomial has size $Ω(n^2)$. The proof builds on and refines a previous work of the author. The new ingredients in the proof were generated by Gemini 3.1 Pro.
Chia-Yu Chang, Fulvio Gesmundo, Jeroen Zuiddam
We determine the border subrank of higher order structure tensors of several families of algebras, and in particular obtain the following results. (1) We determine tight bounds on the border subrank of $k$-fold matrix multiplication and $k$-fold upper triangular matrix multiplication for all $k$. (2) We determine the border subrank of the higher order structure tensors of truncated polynomial algebras, null algebras, and apolar algebras of a quadric. (3) We determine the border subrank of the higher order structure tensors of the Lie algebra $\mathfrak{sl}_2$ for all orders. (4) We prove that degeneration of structure tensors of algebras propagates from higher to lower order. Along the way, we investigate which upper bound methods (geometric rank, $G$-stable rank, socle degree) are effective in which settings, and how they relate. Our work extends the results of Strassen (J.~Reine Angew.~Math., 1987, 1991), who determined the asymptotic subrank of these algebras for tensors of order three, in two directions: we determine the border subrank itself rather than its asymptotic version, and we consider higher order structure tensors.
Arianne Meijer-van de Griend
Apr 21, 2026 · quant-ph · In this paper, we give a mathematical proof that the number of CNOT gates required to synthesize an $n$-qubit phase polynomial with $g$ terms is at least $Ω(\frac{gn}{\max (\log g, 1)})$ and at most $O(gn)$. However, when targeting restricted hardware, not all CNOTs are allowed. If we were to use SWAP-based methods to route the qubits on the architecture so that the previously synthesized gates are natively allowed, we would increase the number of CNOTs by a routing overhead factor of $O(\log n) \leq α \leq O(n \log^2 n)$. However, if we only synthesize allowed gates, we do not need to route any qubits. Moreover, in that case the routing overhead factor is $1 \leq α \leq 4$, i.e., $α = O(1)$. Additionally, since phase polynomials and Hadamard gates together form a universal gate set, we get qubit routing almost for free.
Nikita Guseynov, Zoë Holmes, Armando Angrisani
Apr 21, 2026 · quant-ph · We introduce coherent-state propagation, a computational framework for simulating bosonic systems. We focus on bosonic circuits composed of displaced linear optics augmented by Kerr nonlinearities, a universal model of bosonic quantum computation that is also physically motivated by driven Bose-Hubbard dynamics. The method works in the Schrödinger picture, representing the evolving state as a sparse superposition of coherent states. We develop approximation strategies that keep the simulation cost tractable in physically relevant regimes, notably when the number of Kerr gates is small or the Kerr nonlinearities are weak, and prove rigorous guarantees for both observable estimation and sampling. In particular, bosonic circuits with logarithmically many Kerr gates admit quasi-polynomial-time classical simulation at exponentially small error in trace distance. We further identify a weak-nonlinearity regime in which the runtime is polynomial for arbitrarily small constant precision. We complement these results with numerical benchmarks on the Bose-Hubbard model with all-to-all connectivity. The method reproduces Fock-basis and matrix-product-state reference data, suggesting that it offers a useful route to the classical simulation of bosonic systems.
Michael T. M. Emmerich, Ksenia Pereverdieva, André H. Deutz
We prove that, for every fixed $θ_0>0$, selecting a subset of prescribed cardinality that maximizes the Solow--Polasky diversity indicator is NP-hard for finite point sets in $\mathbb{R}^2$ with the Euclidean metric, and therefore also for finite point sets in $\mathbb{R}^d$ for every fixed dimension $d \ge 2$. This strictly strengthens our earlier NP-hardness result for general metric spaces by showing that hardness persists under the severe geometric restriction to the Euclidean plane. At the same time, the Euclidean proof technique is different from the conceptually easier earlier argument for arbitrary metric spaces, and that general metric-space construction does not directly translate to the Euclidean setting. In the earlier proof one can use an exact construction tailored to arbitrary metrics, essentially exploiting a two-distance structure. In contrast, such an exact realization is unavailable in fixed-dimensional Euclidean space, so the present reduction requires a genuinely geometric argument. Our Euclidean proof is based on two distance thresholds, which allow us to separate yes-instances from no-instances by robust inequalities rather than by the exact construction used in the general metric setting. The main technical ingredient is a bounded-box comparison lemma for the nonlinear objective $\mathbf{1}^{\top}Z^{-1}\mathbf{1}$, where $Z_{ij}=e^{-θ_0 d(x_i,x_j)}$. This lemma controls the effect of perturbations in the pairwise distances well enough to transfer the gap created by the reduction. The reduction is from \emph{Geometric Unit-Disk Independent Set}. We present the main argument in geometric form for finite subsets of $\mathbb{R}^2$, with an appendix supplying the bit-complexity details needed for polynomial-time reducibility.
Ron Kupfer
We extend the Ting--Yao randomized maximum-finding algorithm [TY94] to inputs that need not be pairwise distinct: each parity test $P(i,B)=\prod_{a\in B}(x_i-x_a) \gtrless 0$ on $B\subseteq[n]\setminus\{i\}$ is simulated by $O(\log |B|)$ ordinary polynomial tests, raising the depth from $O((\log n)^2)$ to $O((\log n)^3)$ while preserving the $O(n^{-c})$ failure probability for every fixed $c>0$.
Dohoon Kim, Eungyu Woo, Donghoon Shin
This paper investigates a special variant of a pursuit-evasion game called lions and contamination. In a graph where all vertices are initially contaminated, a set of lions traverses the graph, clearing the contamination from every vertex they visit. However, the contamination simultaneously spreads to any adjacent vertex not occupied by a lion. We analyze the relationships among the lion number $\mathcal{L}(G)$, monotone lion number $\mathcal{L}^m(G)$, and the graph's pathwidth $\operatorname{pw}(G)$. Our main results are as follows: (a) We prove a monotonicity property: for any graph $G$ and its isometric subgraph $H$, $\mathcal{L}(H)\le \mathcal{L}(G)$. (b) For trees $T$, we show that the lion number is tightly characterized by pathwidth, satisfying $\operatorname{pw}(T)\le \mathcal{L}(T)\le \operatorname{pw}(T)+1$. (c) We provide a counterexample showing that the monotonicity property fails for arbitrary subgraphs. (d) We show that, in contrast to the tree case, pathwidth does not yield a general lower bound on $\mathcal{L}(G)$ for arbitrary graphs. (e) For any connected graph $G$, we prove the general upper bound $\mathcal{L}(G)\le \operatorname{pw}(G)+1$. (f) For the monotone variant, we establish the general lower bound $\operatorname{pw}(G)\le \mathcal{L}^m(G)$. (g) Conversely, we show that $\mathcal{L}^m(G)\le 2\operatorname{pw}(G)+2$ holds for all connected graphs, which is best possible up to a small additive constant.
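As a concrete illustration of the dynamics (a minimal formalization we adopt only for this sketch; the precise move/spread ordering differs slightly across papers), a single lion sweeping a path clears it, consistent with result (b) since $\operatorname{pw}(P_n)=1$:

```python
def step(adj, contaminated, lion):
    """One round after the lion moves to vertex `lion`: the lion clears its
    vertex, then contamination spreads to unoccupied neighbors."""
    cont = set(contaminated) - {lion}
    spread = {u for v in cont for u in adj[v]}
    return cont | (spread - {lion})

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cont = {1, 2, 3, 4}  # every vertex except the lion's start vertex 0
for nxt in (1, 2, 3, 4):
    cont = step(path, cont, nxt)
print(cont)  # set(): one lion clears the path
```

The lion acts as a moving separator of size one, which is exactly the pathwidth-style intuition behind the bounds in (b) and (e).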
Bibhas Adhikari
Apr 20, 2026 · quant-ph · We develop a unified quantum framework for subgraph counting in graphs. We encode a graph on $N$ vertices into a quantum state on $2\lceil \log_2 N \rceil$ working qubits and $2$ ancilla qubits using its adjacency list, with worst-case gate complexity $O(N^2)$, which we refer to as the graph adjacency state. We design quantum measurement operators that capture the edge structure of a target subgraph, enabling estimation of its count via measurements on the $m$-fold tensor product of the adjacency state, where $m$ is the number of edges in the subgraph. We illustrate the framework for triangles, cycles, and cliques. This approach yields quantum logspace algorithms for motif counting, with no known classical counterpart.
Michael Lampis, Manolis Vasilakis
Capacitated Vertex Cover is the hard-capacitated variant of Vertex Cover: given a graph, a capacity for every vertex, and an integer $k$, the task is to select at most $k$ vertices that cover all edges and assign each edge to one of its chosen endpoints so that no chosen vertex receives more incident edges than its capacity. This problem is a classical benchmark in parameterized complexity, as it was among the first natural problems shown to be W[1]-hard when parameterized by treewidth. We revisit its exact complexity from a fine-grained parameterized perspective and obtain a much sharper picture for several standard parameters. For the natural parameter $k$, we prove under the Exponential Time Hypothesis (ETH) that no algorithm with running time $k^{o(k)} n^{\mathcal{O}(1)}$ exists. In particular, this shows that the known algorithms with running time $k^{\mathcal{O}(\mathrm{tw})} n^{\mathcal{O}(1)}$ are essentially optimal. We then turn to more general structural parameters. For vertex cover number $\mathrm{vc}$, we give evidence against a $2^{\mathcal{O}(\mathrm{vc}^{2-\varepsilon})} n^{\mathcal{O}(1)}$ algorithm, as such an improvement would imply corresponding progress for a broader class of integer-programming-type problems. We complement this barrier with a nearly matching upper bound for vertex integrity $\mathrm{vi}$, improving the previously known double-exponential dependence to an algorithm with running time $\mathrm{vi}^{\mathcal{O}(\mathrm{vi}^{2})} n^{\mathcal{O}(1)}$ using $N$-fold integer programming. For treewidth, we show that the standard dynamic programming algorithm with running time $n^{\mathcal{O}(\mathrm{tw})}$ is essentially optimal under the ETH, even if one parameterizes by tree-depth. Turning to clique-width, we prove that Capacitated Vertex Cover remains NP-hard already on graphs of linear clique-width $6$...
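To see how hard capacities change the problem relative to plain Vertex Cover, here is a toy brute-force feasibility check for a chosen vertex set (hypothetical illustration code, unrelated to the paper's algorithms; exponential, for tiny instances only):

```python
from itertools import product

def feasible(edges, cap, chosen):
    """Can every edge be assigned to a chosen endpoint without any
    vertex exceeding its capacity?"""
    opts = []
    for u, v in edges:
        o = [w for w in (u, v) if w in chosen]
        if not o:
            return False  # an edge is not covered at all
        opts.append(o)
    for assign in product(*opts):
        load = {}
        for w in assign:
            load[w] = load.get(w, 0) + 1
        if all(load.get(w, 0) <= cap[w] for w in chosen):
            return True
    return False

edges = [(0, 1), (0, 2), (0, 3)]          # a star with center 0
cap = {0: 2, 1: 1, 2: 1, 3: 1}
print(feasible(edges, cap, {0}))      # False: the center covers everything but exceeds its capacity
print(feasible(edges, cap, {0, 1}))   # True: edge (0,1) is charged to vertex 1
```

The star example shows why a size-1 vertex cover can fail under capacities, the extra structure that drives the hardness results above.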
Alonso Botero, Matthias Christandl, Thomas C. Fraser, Itai Leigh, Harold Nieuwboer
Upper and lower quantum functionals, introduced by Christandl, Vrana and Zuiddam (STOC 2018, J. Amer. Math. Soc. 2023), are families of monotone functions of tensors indexed by a weighting on the set of subsets of the tensor legs. Inspired by quantum information theory, they were crafted as obstructions to asymptotic tensor transformations, relevant in algebraic complexity theory. For tensors of order three, and more generally for weightings on singletons for higher-order tensors, the upper and lower quantum functionals coincide and are spectral points in Strassen's asymptotic spectrum. Moreover, the singleton quantum functionals characterize the asymptotic slice rank, whereas general weightings provide upper bounds on asymptotic partition rank. It has been an open question whether the upper and lower quantum functionals also coincide in other cases, or more generally, how to construct further spectral points, especially for higher-order tensors. In this work, we show that upper and lower quantum functionals generally do not coincide, but that they anchor new spectral points. By this we mean that there exist new spectral points which equal the quantum functionals on the set of tensors on which upper and lower coincide. This set is shown to include embedded three-tensors and W-like states, and covers all laminar weightings, significantly extending the singleton case.
Faruk Alpay, Levent Sarioglu
We study a list-constrained extension of modular equation deletion over powers of two, called Coset-List Min-2-Lin$^{\pm}$ over $\mathbb{Z}/2^d\mathbb{Z}$. Each variable is restricted to a dyadic coset $a+2^{\ell}(\mathbb{Z}/2^d\mathbb{Z})$, each binary constraint is of the form $x_u=x_v$, $x_u=-x_v$, or $x_u=2x_v$, and the goal is to delete a minimum number of constraints so that the remaining system is satisfiable. This problem lies between the no-list case and the poorly understood fully conservative list setting. Our main technical result is a coordinatewise balanced covering theorem for linear gain graphs labeled by vectors in $\mathbb{F}_2^r$. Given any balanced subgraph of cost at most $k$, a randomized procedure outputs a vertex set $S$ and an edge set $F$ such that $(G-F)[S]$ is balanced and, with probability $2^{-O(k^2r)}$, every hidden balanced subgraph of cost at most $k$ is contained in $S$ while all incident deletions are captured by $F$. The proof tensors the one-coordinate balanced-covering theorem of Dabrowski, Jonsson, Ordyniak, Osipov, and Wahlström across coordinates, and is combined with a rank-compression theorem replacing the ambient lifted dimension by the intrinsic cycle-label rank $ρ$. We also develop a cycle-space formulation, a cut-space/potential characterization of balancedness, a minimal-dimension statement for equivalent labelings, and an explicit bit-lifting analysis for dyadic coset systems. These yield a randomized one-sided-error algorithm running in \[ 2^{O(k^2ρ+k\log(kρ+2))}\cdot n^{O(1)}+\widetilde{O}(md+ρ^ω), \] and the same framework returns a minimum-weight feasible deletion set among all solutions of size at most $k$.
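The cut-space/potential characterization of balancedness mentioned above can be made concrete for gain graphs labeled by $\mathbb{F}_2^r$ (labels encoded as integer bitmasks; our own toy sketch, not the paper's algorithm): a labeled graph is balanced iff vertex potentials $p$ exist with $p_u \oplus p_v$ equal to the label of every edge $\{u,v\}$, which a single graph traversal decides.

```python
def is_balanced(n, edges):
    """edges: list of (u, v, label) with labels in F_2^r as bitmasks.
    Balanced iff potentials p exist with p[u] ^ p[v] == label on every edge,
    equivalently iff every cycle has zero total label. Labels are
    self-inverse over F_2^r, so edge direction is irrelevant."""
    pot = [None] * n
    for s in range(n):
        if pot[s] is not None:
            continue
        pot[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for a, b, lab in edges:
                for x, y in ((a, b), (b, a)):
                    if x != u:
                        continue
                    if pot[y] is None:
                        pot[y] = pot[u] ^ lab
                        stack.append(y)
                    elif pot[y] != pot[u] ^ lab:
                        return False  # an unbalanced cycle
    return True

print(is_balanced(3, [(0, 1, 1), (1, 2, 2), (0, 2, 3)]))  # True: 1 ^ 2 == 3
print(is_balanced(3, [(0, 1, 1), (1, 2, 2), (0, 2, 1)]))  # False
```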
Xiao Wang
The key-value (KV) cache is the dominant memory bottleneck during Transformer inference, yet little is known theoretically about how aggressively it can be compressed before multi-step reasoning degrades. We study this through $k$-hop pointer chasing on $n$ tokens under a shared KV cache of size $s$, attention dimension $m$, $H$ heads, $p$-bit precision, and a locality-respecting cache controller (satisfied by all standard KV-compression methods). We give three results. (1) Product depth lower bound (conjectured). We conjecture that any such Transformer ($n \geq 4k$, $s \leq \sqrt{n}/4$) requires depth $L = Ω(\lceil k/s \rceil \cdot \lceil \log_2 n/(Hmp) \rceil)$, and isolate the sole remaining gap as a probabilistic step on the joint distribution of cache trace and pointer chain. Unconditionally, we prove a matching upper bound $L = O(\min(k, \lceil k/s \rceil \log s) \cdot \log n/(mp))$ via windowed pointer doubling, and a max-bound $L = Ω(\max(\lceil k/s \rceil, \log n/(Hmp)))$. Closing the conjecture amounts to upgrading max to product. (2) Bandwidth barrier. The product bound binds only when $Hmp \lesssim \log n$. Any lower bound provable via per-window distinguishability counting -- including reachability, bandwidth, and combinations -- cannot exceed $\lceil k/s \rceil$ once $Hmp \geq \log_2 n$. Breaking this requires lifting unconditional communication-complexity bounds for pointer chasing to Cache-Transformer depth. (3) Adaptive vs oblivious error scaling. Under random cache over $T = \lceil \log_2 k \rceil$ doubling stages, oblivious caches give $\Pr[\mathcal{E}] \leq (s/(n-T))^T + 2T^3/n$ (exponential in $T$), while adaptive locality-respecting caches achieve $\Pr[\mathcal{E}] = s/n$ exactly, independent of $T$. The $Ω((n/s)^{T-1})$ separation explains why heavy-hitter eviction empirically dominates random eviction for multi-hop reasoning.
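The pointer-doubling primitive behind the upper bound builds on the classical doubling trick; here is a plain, unwindowed version for reference (our own sketch, not the paper's cache-respecting construction):

```python
def khop(nxt, start, k):
    """k-hop pointer chasing by repeated squaring of the pointer map:
    O(log k) rounds instead of k sequential hops."""
    jump, pos = list(nxt), start
    while k:
        if k & 1:
            pos = jump[pos]                            # take the current power-of-two hop
        jump = [jump[jump[i]] for i in range(len(jump))]  # square the pointer map
        k >>= 1
    return pos

nxt = [2, 0, 3, 1]  # a 4-cycle of pointers: 0 -> 2 -> 3 -> 1 -> 0
print(khop(nxt, 0, 5))  # 2, same as following 5 hops one by one
```

Windowing this doubling so that each stage touches only $s$ cached positions is what produces the $O(\lceil k/s \rceil \log s)$ factor in the upper bound.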
József Pintér, Regina Stangl
Network caching asks how to place contents in distributed caches so that future requests are served close to their users. Ganian, Mc Inerney and Tsigkari recently initiated the parameterized-complexity study of the problem and, for the homogeneous unit-size variant (HomNC), isolated an unresolved family of six parameterizations: by the number of caches $C$, the number of users $U$, $U+K$, $C+U$, $C+λ$, and the vertex-cover number $\text{vc}(G)$, where $K$ is the maximum cache capacity and $λ$ is the maximum number of contents requested with nonzero probability by any user. Their interreducibility theorem showed that these six cases stand or fall together under parameterized reductions, and they conjectured the family to be W[1]-hard. We resolve this conjecture in the opposite direction. We prove that HomNC is fixed-parameter tractable parameterized by $C$ alone, and therefore fixed-parameter tractable for all six parameterizations. Our algorithm is based on an exact $n$-fold integer programming formulation that reveals a nontrivial block structure in homogeneous network caching, with the repeated part depending only on $C$. Standard algorithms for $n$-fold integer programming then yield a running time of the form $f(C)\lvert I\rvert^{O(1)}$.
Divya Bajaj, Bin Fu, Ryan Knobel, Austin Luchsinger, Aiden Massie, Pablo Santos, Ramiro Santos, Robert Schweller, Evan Tomai, Tim Wylie
Chemical Reaction Networks (CRNs) are a well-established model of distributed computing characterized by quantities of molecular species that can transform or change through applications of reactions. A fundamental problem in CRNs is the reachability problem, which asks whether an initial configuration of species can transition to a target configuration through an applicable sequence of reactions. The reachability problem in general CRNs was recently proven to be Ackermann-complete. However, if the CRN's reactions are restricted in both power, such as only deleting species (deletion-only rules) or consuming and producing an equal number of species (volume-preserving rules), and size (unimolecular or bimolecular rules), then reachability falls below Ackermann-completeness, and is even solvable in polynomial time for deletion-only systems. In this paper, we investigate reachability under this set of restricted unimolecular and bimolecular reactions, but in the Priority-Inhibitory CRN and Inhibitory CRN models. These models extend a traditional CRN by allowing some reactions to be inhibited from firing in a configuration if certain species are present; the exact inhibition behavior varies between the models. We first show that reachability with Priority iCRNs mostly remains in P for deletion-only systems, but becomes NP-complete for one case. We then show that reachability with deletion-only reactions for iCRNs is mostly NP-complete, and PSPACE-complete even for (1,1)-size (general) reactions. We also provide FPT algorithms for solving most of the reachability problems for the iCRN model. Finally, we show reachability for CRNs with states is already NP-hard for the simplest deletion-only systems, and is PSPACE-complete even for (general) (1,1)-size reactions.
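For intuition, the reachability question itself is a search over configurations; the following generic breadth-first sketch (exponential in general, and emphatically not the polynomial-time algorithm for deletion-only systems) makes the problem concrete:

```python
from collections import deque

def reachable(init, target, reactions):
    """BFS over configurations (tuples of species counts). Each reaction is a
    pair (consumed, produced) of count vectors; a reaction is applicable when
    the configuration dominates its consumed vector."""
    seen, queue = {init}, deque([init])
    while queue:
        c = queue.popleft()
        if c == target:
            return True
        for consumed, produced in reactions:
            if all(c[i] >= consumed[i] for i in range(len(c))):
                nxt = tuple(c[i] - consumed[i] + produced[i] for i in range(len(c)))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

rxn = [((1, 1, 0), (0, 0, 1))]  # A + B -> C over species (A, B, C)
print(reachable((2, 2, 0), (0, 0, 2), rxn))  # True: apply the reaction twice
print(reachable((2, 2, 0), (1, 0, 1), rxn))  # False
```

Inhibitory models restrict which reactions are applicable in a configuration, shrinking the out-edges of this search graph in ways that, as the results above show, can either simplify or harden reachability.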
Johannes Bund, Amir Leshem, Moti Medina
Metastability is a spurious mode of operation in digital signals, where an electrical signal fails to settle into a stable state within a specified time, leading to uncertainty and potentially causing failures in downstream hardware. A system that computes the closure over all possibilities, given an uncertain input, is called a metastability-containing system. While prior work has addressed metastability-containing systems in the context of combinational and clocked circuits, state machines, and logic formulas, their implications for general-purpose computation remain largely unexplored. In this work, we study metastability-containing systems within an abstract computational model: the Turing Machine. This approach allows us to investigate the computational limits and capabilities of Turing Machines operating under uncertain inputs. Specifically, we prove that in general the metastable closure of a Turing Machine is non-computable. We then discuss cases where the metastable closure is computable: for EXPTIME problems, we prove that resolving even a single uncertain bit is EXPTIME-complete. In contrast, we prove that for polynomial-time problems, the metastable closure is polynomial-time computable for a logarithmic number of uncertain bits, but coNP-complete when the number of undefined inputs is arbitrary. Finally, we describe a hardware-realizable Universal Turing Machine that computes the metastable closure of any given bounded-time Turing Machine with at most an exponential blowup in time.
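The metastable closure has a simple finite analogue for Boolean functions, which conveys why many uncertain bits force reasoning about all completions (our own toy sketch, not the paper's Turing Machine construction):

```python
from itertools import product

def metastable_closure(f, bits):
    """Closure of a Boolean function over inputs with metastable bits 'M':
    output the common value of all resolutions, or 'M' if they disagree."""
    free = [i for i, b in enumerate(bits) if b == 'M']
    outs = set()
    for fill in product('01', repeat=len(free)):
        resolved = list(bits)
        for i, v in zip(free, fill):
            resolved[i] = v
        outs.add(f(resolved))
    return outs.pop() if len(outs) == 1 else 'M'

OR = lambda x: '1' if '1' in x else '0'
print(metastable_closure(OR, ['1', 'M']))  # '1': the M bit cannot flip the output
print(metastable_closure(OR, ['0', 'M']))  # 'M': the resolutions disagree
```

With arbitrarily many 'M' inputs, certifying that the output is determined requires agreement over exponentially many resolutions, mirroring the coNP-completeness result above.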
Enoch Hyunwook Kang
Iterative alignment methods based on purely greedy updates are remarkably effective in practice, yet existing theoretical guarantees of \(O(\log T)\) KL-regularized regret can seem pessimistic relative to their empirical performance. In this paper, we argue that this mismatch arises from the regret criterion itself: KL-regularized regret conflates the statistical cost of learning with the exploratory randomization induced by the softened training policy. To separate these effects, we study the traditional temperature-zero regret criterion, which evaluates only the top-ranked response at inference time. Under this decision-centric notion of performance, we prove that standard greedy online alignment methods, including online RLHF and online DPO, achieve constant \((O(1))\) cumulative regret. By isolating the cost of identifying the best response from the stochasticity induced by regularization, our results provide a sharper theoretical explanation for the remarkable practical efficiency of greedy alignment.