Ka Ho Lai, Hei Tung Tsang, Gary P. T. Choi, Lok Ming Lui
Origami structures, particularly Miura-ori patterns, offer unique capabilities for surface approximation and deployable designs. In this study, a constrained mapping optimization algorithm is developed for designing surface-aligned Miura-ori via a narrow band approximation of the input surface. The Miura fold, embedded in the narrow band, is parameterized to a planar domain, and a mapping is computed on the parameterized pattern by optimizing energy terms subject to constraints. Extensive experiments demonstrate the effectiveness and flexibility of our method.
Nikola Milićević
We develop new aspects of the homological algebra of persistence modules, in both the one-parameter and multi-parameter settings. For a poset $P$ and an order-preserving map $\varphi:P\times P\to P$, we introduce a novel tensor product of persistence modules indexed by $P$, $\otimes_{\varphi}$. We prove that each $\otimes_{\varphi}$ has a right adjoint, $\mathbf{Hom}^{\varphi}$, the internal hom of persistence modules, which also depends on $\varphi$. We prove that every $\otimes_{\varphi}$ yields a Künneth short exact sequence of chain complexes of persistence modules. Dually, the $\mathbf{Hom}^{\varphi}$ also has an associated Künneth short exact sequence in cohomology. As special cases, both of these short exact sequences yield Universal Coefficient Theorems. We show how to apply these to chain complexes of persistence modules arising from filtered CW complexes. For the special case of $P=\mathbb{R}_+$, the $p$-quasinorms for each $p\in (0,\infty]$ yield a distinct $\otimes_{\ell^p_c}$ and its adjoint $\mathbf{Hom}^{\ell^p_c}$. We compute their derived functors, $\mathbf{Tor}^{\ell^p_c}$ and $\mathbf{Ext}_{\ell^p_c}$, explicitly for interval modules. We show that the Universal Coefficient Theorem developed here can be used to compute the persistent Borel--Moore homology of a filtration of non-compact spaces. Finally, we show that for every $p\in [1,\infty]$, the associated Künneth short exact sequence can be used to significantly speed up and approximate persistent homology computations in a product metric space $(X\times Y,d^p)$ with the distance $d^p((x,y),(x',y'))=\|(d_X(x,x'),d_Y(y,y'))\|_p$.
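For orientation, the classical Künneth short exact sequence that these persistence-module versions generalize can be recalled; the statement below is for chain complexes $C$, $D$ of free modules over a PID $R$, not the authors' $\otimes_{\varphi}$ setting:

```latex
0 \longrightarrow \bigoplus_{i+j=n} H_i(C) \otimes_R H_j(D)
  \longrightarrow H_n(C \otimes_R D)
  \longrightarrow \bigoplus_{i+j=n-1} \operatorname{Tor}_1^R\!\bigl(H_i(C), H_j(D)\bigr)
  \longrightarrow 0.
```

The abstract's results replace $\otimes_R$ and $\operatorname{Tor}$ by their $\varphi$-dependent persistence-module analogues.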
Hugo Hiu Chak Cheng, Gary P. T. Choi
Kirigami, the art of paper cutting, has been widely used in the modern design of mechanical metamaterials. In recent years, many kirigami-based metamaterials have been designed based on different planar tiling patterns and applied to various science and engineering problems. However, it is natural to ask whether one can create deployable kirigami structures based on the simplest forms of tilings, namely monotile patterns. In this work, we answer this question affirmatively by proving the existence of periodic and aperiodic monotile kirigami structures via explicit constructions. In particular, we present a comprehensive collection of periodic monotile kirigami structures covering all 17 wallpaper groups, as well as aperiodic monotile kirigami structures covering various quasicrystal patterns and polykite tilings. We further perform theoretical and computational analyses of monotile kirigami patterns in terms of their shape and size changes under deployment. Altogether, our work paves the way for the design and analysis of a wider range of shape-morphing metamaterials.
Michael T. M. Emmerich, Ksenia Pereverdieva, André H. Deutz
We prove that, for every fixed $θ_0>0$, selecting a subset of prescribed cardinality that maximizes the Solow--Polasky diversity indicator is NP-hard for finite point sets in $\mathbb{R}^2$ with the Euclidean metric, and therefore also for finite point sets in $\mathbb{R}^d$ for every fixed dimension $d \ge 2$. This strictly strengthens our earlier NP-hardness result for general metric spaces by showing that hardness persists under the severe geometric restriction to the Euclidean plane. At the same time, the Euclidean proof technique is different from the conceptually easier earlier argument for arbitrary metric spaces, and that general metric-space construction does not directly translate to the Euclidean setting. In the earlier proof one can use an exact construction tailored to arbitrary metrics, essentially exploiting a two-distance structure. In contrast, such an exact realization is unavailable in fixed-dimensional Euclidean space, so the present reduction requires a genuinely geometric argument. Our Euclidean proof is based on two distance thresholds, which allow us to separate yes-instances from no-instances by robust inequalities rather than by the exact construction used in the general metric setting. The main technical ingredient is a bounded-box comparison lemma for the nonlinear objective $\mathbf{1}^{\top}Z^{-1}\mathbf{1}$, where $Z_{ij}=e^{-θ_0 d(x_i,x_j)}$. This lemma controls the effect of perturbations in the pairwise distances well enough to transfer the gap created by the reduction. The reduction is from \emph{Geometric Unit-Disk Independent Set}. We present the main argument in geometric form for finite subsets of $\mathbb{R}^2$, with an appendix supplying the bit-complexity details needed for polynomial-time reducibility.
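The objective of this selection problem is concrete enough to state as code. A minimal numpy-based sketch of evaluating the Solow--Polasky diversity $\mathbf{1}^{\top}Z^{-1}\mathbf{1}$ (the function name and interface are ours, not the authors'):

```python
import numpy as np

def solow_polasky(points, theta=1.0):
    """Solow-Polasky diversity 1^T Z^{-1} 1 with Z_ij = exp(-theta * d(x_i, x_j))."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances via broadcasting.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    Z = np.exp(-theta * dists)
    ones = np.ones(len(pts))
    # Solve Z y = 1 instead of forming the inverse explicitly.
    return float(ones @ np.linalg.solve(Z, ones))
```

The value is $1$ for a single point and approaches $n$ as the $n$ points spread far apart, matching the reading of this indicator as an "effective number of species"; the NP-hard problem of the abstract is maximizing it over subsets of prescribed cardinality.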
Yifan Zhang
We study a family of local depth-based corrections to maxmin landmark selection for lazy witness persistence. Starting from maxmin seeds, we partition the cloud into nearest-seed cells and replace or move each seed toward a deep representative of its cell. The principal implemented variant, \emph{support-weighted partial recentering}, scales the amount of movement by cell support. The contributions are both mathematical and algorithmic. On the mathematical side, we prove local geometric guarantees for these corrections: a convex-core robustness lemma derived from halfspace depth, a $2r$ cover bound for subset recentering, and projected cover bounds for the implemented partial-recentering rules. On the algorithmic side, we identify a practically effective variant through a layered empirical study consisting of planar synthetic benchmarks, a parameter-sensitivity study, and an MPEG-7 silhouette benchmark, together with a modest three-dimensional torus extension. The main planar experiments show that support-weighted partial recentering gives a consistent geometric improvement over maxmin while preserving the thresholded $H_1$ summary used in the study. The three-dimensional experiment shows the same geometric tendency but only mixed topological behavior. The paper should therefore be read as a controlled study of a local depth-based alternative to maxmin, rather than as a global witness-approximation theorem or a claim of uniform empirical superiority.
Michael T. M. Emmerich
We investigate \emph{magnitude} as a new unary and strictly Pareto-compliant quality indicator for finite approximation sets to the Pareto front in multiobjective optimization. Magnitude originates in enriched category theory and metric geometry, where it is a notion of size or point content for compact metric spaces and a generalization of cardinality. For dominated regions in the \(\ell_1\) box setting, magnitude is close to hypervolume but not identical: it contains the top-dimensional hypervolume term together with positive lower-dimensional projection and boundary contributions. This paper gives a first theoretical study of magnitude as an indicator. We consider multiobjective maximization with a common anchor point. For dominated sets generated by finite approximation sets, we derive an all-dimensional projection formula, prove weak and strict set monotonicity on finite unions of anchored boxes, and thereby obtain weak and strict Pareto compliance. Unlike hypervolume, magnitude assigns positive value to boundary points sharing one or more coordinates with the anchor point, even when their top-dimensional hypervolume contribution vanishes. We then formulate projected set-gradient methods and compare hypervolume and magnitude on biobjective and three-dimensional simplex examples. Numerically, magnitude favors boundary-including populations and, for suitable cardinalities, complete Das--Dennis grids, whereas hypervolume prefers more interior-filling configurations. Computationally, magnitude reduces to hypervolume on coordinate projections; for fixed dimension this yields the same asymptotic complexity up to a factor \(2^d-1\), and in dimensions two and three \(Θ(n\log n)\) time. These results identify magnitude as a mathematically natural and computationally viable alternative to hypervolume for finite Pareto front approximations.
Omrit Filtser, Tzalik Maimon, Ofir Yomtovyan
The minimum convex cover problem seeks to cover a polygon $P$ with the fewest convex polygons that lie within $P$. This problem is $\exists\mathbb R$-complete, and the best previously known algorithm, due to Eidenbenz and Widmayer (2001), achieves an $O(\log n)$-approximation in $O(n^{29} \log n)$ time, where $n$ is the complexity of $P$. In this work we present a novel approach that preserves the $O(\log n)$ approximation guarantee while significantly reducing the running time. By discretizing the problem and formulating it as a set cover problem, we focus on efficiently finding a convex polygon that covers the largest number of uncovered regions, in each iteration of the greedy algorithm. This core subproblem, which we call the rotten potato peeling problem, is a variant of the classic potato peeling problem. We solve it by finding maximum weighted paths in Directed Acyclic Graphs (DAGs) that correspond to visibility polygons, with the DAG construction carefully constrained to manage complexity. Our approach yields a substantial improvement in the overall running time and introduces techniques that may be of independent interest for other geometric covering problems.
Seongbin Park, Eunjin Oh
In this paper, we study the many-to-many matching problem on planar point sets with integer coordinates: Given two disjoint sets $R,B \subset [Δ]^2$ with $|R|+|B|=n$, the goal is to select a set of edges between $R$ and $B$ so that every point is incident to at least one edge and the total Euclidean length is minimized. In the general case that $R$ and $B$ are point sets in the plane, the best-known algorithm for the many-to-many matching problem takes $\tilde{O}(n^2)$ time. We present an exact $\tilde{O}(n^{1.5} \log Δ)$ time algorithm for point sets in $[Δ]^2$. To the best of our knowledge, this is the first subquadratic exact algorithm for planar many-to-many matching under bounded integer coordinates.
David Avis, Luc Devroye
In this paper, we investigate the relationships between the volumes of four convex bodies: the cut polytope, metric polytope, rooted metric polytope, and elliptope, defined on graphs with $n$ vertices. The cut polytope is contained in each of the other three, which, for optimization purposes, provide polynomial-time relaxations. It is therefore of interest to see how tight these relaxations are. Worst-case ratio bounds are well known, but these are limited to objective functions with non-negative coefficients. Volume ratios, pioneered by Jon Lee with several co-authors, give global bounds and are the subject of this paper. For the rooted metric polytope over the complete graph, we show that its volume is much greater than that of the elliptope. For the metric polytope, for small values of $n$, we show that its volume is smaller than that of the elliptope; however, for large values, volume estimates suggest the converse is true. We also give exact formulae for the volume of the cut polytope for some families of sparse graphs.
Takashi Yoshino, Supanut Chaidee
We consider a new treatment for constructing polyhedron nets, referred to as ``apple peel unfolding'': drawing the nets as if we were peeling off an apple's skin. We define apple peel unfolding rigorously and implement a program that, in accordance with the definition, derives a sequential selection of the faces of a target polyhedron. The program thereby determines whether the polyhedron is peelable (i.e., can be peeled completely). We classify the Archimedean solids and their duals (the Catalan solids) as perfect (always peelable), possible (peelable only in restricted cases), or impossible. The results show that three Archimedean and six Catalan solids are perfect, and three Archimedean and three Catalan solids are possible.
Stefan Huber, Dominik Kaaser
We study the patient zero problem in epidemic spreading processes in the independent cascade model and propose a geometric approach for source reconstruction. Using Johnson-Lindenstrauss projections, we embed the contact network into a low-dimensional Euclidean space and estimate the infection source as the node closest to the center of gravity of infected nodes. Simulations on Erdős-Rényi graphs demonstrate that our estimator achieves meaningful reconstruction accuracy despite operating on compressed observations.
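One plausible small-scale instantiation of this pipeline is sketched below; representing each node by its vector of BFS distances is our modeling assumption, and the function names are hypothetical, not the authors' implementation:

```python
import numpy as np
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an undirected graph given as {node: [neighbors]}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def estimate_source(adj, infected, k=8, seed=0):
    """Patient-zero estimate: JL-project node feature vectors, then return the
    infected node whose embedding is closest to the infected centroid."""
    nodes = sorted(adj)
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    # Feature vector of a node: its BFS distances to all nodes (our choice).
    D = np.zeros((n, n))
    for u in nodes:
        for v, d in bfs_distances(adj, u).items():
            D[idx[u], idx[v]] = d
    # Johnson-Lindenstrauss: random Gaussian projection to k dimensions.
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(n, k)) / np.sqrt(k)
    emb = D @ P
    center = emb[[idx[u] for u in infected]].mean(axis=0)
    return min(infected, key=lambda u: np.linalg.norm(emb[idx[u]] - center))
```

On a path graph with the middle of the infected set at its center, the centroid rule recovers that middle node once the projection dimension is large enough for the JL distortion to be small.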
Jaehoon Chung
We study a variant of a polygon partition problem, introduced by Chung, Iwama, Liao, and Ahn [ISAAC'25]. Given orthogonal unit vectors $\mathbf{u},\mathbf{v}\in \mathbb{R}^2$ and a polygon $P$ with $n$ vertices, we partition $P$ into connected pieces by cuts parallel to $\mathbf{v}$ such that each resulting subpolygon has width at most one in direction $\mathbf{u}$. We consider the value version, which asks for the minimum number of strips, and the reporting version, which outputs a compact encoding of the cuts in an optimal strip partition. We give efficient algorithms and lower bounds for both versions on three classes of polygons of increasing generality: convex, simple, and self-overlapping. For convex polygons, we solve the value version in $O(\log n)$ time and the reporting version in $O\!\left(h \log\left(1 + \frac{n}{h}\right)\right)$ time, where $h$ is the width of $P$ in direction $\mathbf{u}$. We prove matching lower bounds in the decision-tree model, showing that the reporting algorithm is input-sensitive optimal with respect to $h$. For simple polygons, we present $O(n \log n)$-time, $O(n)$-space algorithms for both versions and prove an $Ω(n)$ lower bound. For self-overlapping polygons, we extend the approach for simple polygons to obtain $O(n \log n)$-time, $O(n)$-space algorithms for both versions, and we prove a matching $Ω(n \log n)$ lower bound in the algebraic computation-tree model via a reduction from the $δ$-closeness problem. Our approach relies on a lattice-theoretic formulation of the problem. We represent strip partitions as antichains of intervals in the Clarke--Cormack--Burkowski lattice, originally developed for minimal-interval semantics in information retrieval. Within this lattice framework, we design a dynamic programming algorithm that uses the lattice operations of meet and join.
Minati De, Satyam Singh
In the classical online model, the maximum independent set problem admits an $Ω(n)$ lower bound on the competitive ratio even for interval graphs, motivating the study of the problem under additional assumptions. We first study the problem on graphs with a bounded independent kissing number $ζ$, defined as the size of the largest induced star in the graph minus one. We show that a simple greedy algorithm, requiring no geometric representation, achieves a competitive ratio of $ζ$. Moreover, this bound is optimal for deterministic online algorithms and asymptotically optimal for randomized ones. This extends previous results from specific geometric graph families to more general graph classes. Since this bound rules out further improvements through randomization alone, we investigate the power of randomization with access to geometric representation. When the geometric representation of the objects is known, we present randomized online algorithms with improved guarantees. For unit ball graphs in $\mathbb{R}^3$, we present an algorithm whose expected competitive ratio is strictly smaller than the deterministic lower bound implied by the independent kissing number. For $α$-fat objects and for axis-aligned hyper-rectangles in $\mathbb{R}^d$ with bounded diameters, we obtain algorithms with expected competitive ratios that depend polylogarithmically on the ratio between the maximum and minimum object diameters. In both cases, the randomized lower bound implied by the independent kissing number grows polynomially with the ratio between the maximum and minimum object diameters, implying substantial performance guarantees for our algorithms.
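The representation-free greedy referred to above can be sketched as follows; the online interface (each vertex arrives together with its edges to earlier vertices) is our framing:

```python
class GreedyOnlineIndependentSet:
    """Representation-free greedy: accept an arriving vertex iff it is not
    adjacent to any previously accepted vertex."""

    def __init__(self):
        self.accepted = set()

    def arrive(self, v, earlier_neighbors):
        # earlier_neighbors: previously arrived vertices adjacent to v
        if self.accepted.isdisjoint(earlier_neighbors):
            self.accepted.add(v)
            return True
        return False
```

For example, for intervals $[0,2]$, $[1,3]$, $[4,6]$ arriving in that order, the greedy accepts the first, rejects the second (it overlaps an accepted interval), and accepts the third.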
Mark de Berg, Prosenjit Bose, Leonidas Theocharous
Many algorithmic problems can be solved (almost) as efficiently in metric spaces of bounded doubling dimension as in Euclidean space. Unfortunately, the metric space defined by points in a simple polygon equipped with the geodesic distance does not necessarily have bounded doubling dimension. We therefore study the doubling dimension of fat polygons, for two well-known fatness definitions. We prove that locally-fat simple polygons do not always have bounded doubling dimension, while any $(α,β)$-covered polygon does have bounded doubling dimension (even if it has holes). We also study the perimeter of geodesically convex sets in $(α,β)$-covered polygons (possibly with holes), and show that this perimeter is at most a constant times the Euclidean diameter of the set. Using these two results, we obtain new results for several problems on $(α,β)$-covered polygons, including an algorithm that computes the closest pair of a set of $m$ points in an $(α,β)$-covered polygon with $n$ vertices that runs in $O(n + m\log{n})$ expected time.
Bingwei Zhang, Thomas Chen, Kai Hormann, Chee Yap
Range functions are a fundamental tool for certified computations in geometric modeling, computer graphics, and robotics, but traditional range functions have only quadratic convergence order ($m=2$). For ``superior'' convergence order (i.e., $m>2$), we exploit the Cornelius--Lohner framework in order to introduce new bivariate range functions based on Taylor, Lagrange, and Hermite interpolation. In particular, we focus on practical range functions with cubic and quartic convergence order. We implemented them in Julia and provide experimental validation of their performance in terms of efficiency and efficacy.
Nguyen Phan, Brian Kim, Adeel Zafar, Guoning Chen
Streamlines have been widely used to represent and analyze various steady vector fields. To sufficiently represent important features in complex vector fields (like flow), a large number of streamlines are required. Due to the lack of a rigorous definition of features or patterns in streamlines, user interaction and exploration are required to achieve effective interpretation. Existing approaches based on clustering or pattern search, while valuable for specific analysis tasks, often face challenges in supporting interactive and level-of-detail exploration of large-scale curve-based data, particularly when real-time parameter adjustment and iterative refinement are needed. To address this, we design and implement an interactive web-based system. Our system utilizes a Curve Segment Neighborhood Graph (CSNG) to encode the neighboring relationships between curve segments. CSNG enables us to adapt a fast community detection algorithm to identify coherent flow structures and spatial groupings in the streamlines interactively. CSNG also supports a multi-level exploration through an enhanced force-directed layout. Furthermore, our system integrates an adjacency matrix representation to reveal detailed inter-relations among segments. To achieve real-time performance within a web browser, our system employs matrix compression for memory-efficient CSNG storage and parallel processing. We have applied our system to analyze and interpret complex patterns in several streamline datasets. Our experiments show that we achieve real-time performance on datasets with hundreds of thousands of segments.
Vladimir Molchanov, Hennes Rave, Lars Linsen
Cartograms are a technique for visually representing geographically distributed statistical data, where values of a numerical attribute are mapped to the sizes of geographic regions. Contiguous cartograms preserve the adjacencies of the original regions during the mapping. To be useful, contiguous cartograms also require approximate preservation of shapes and relative positions. Due to these desirable properties, contiguous cartograms are among the most popular cartogram types. Most methods for constructing contiguous cartograms rely on a deformation of the original map. Aiming at the preservation of geographical properties, existing approaches are often algorithmically cumbersome and computationally intensive. We propose a novel deformation technique for computing time-varying contiguous cartograms based on integral images evaluated for a series of discrete density distributions. The density textures represent the given dynamic statistical data. The iterative application of the proposed mapping smoothly transforms the domain to gradually equalize the temporal density, i.e., region areas grow or shrink following their evolving statistical data. Global shape preservation at each time step is controlled by a single parameter that can be interactively adjusted by the user. Our efficient GPU implementation of the proposed algorithm is significantly faster than existing state-of-the-art methods while achieving comparable quality in terms of cartographic accuracy, shape preservation, and topological error. We investigate strategies for transitioning between adjacent time steps and discuss the choice of parameters. Our approach applies to morphing between comparative cartograms and to interactive cartogram exploration.
Samuel Weidemaier, Christoph Norden-Smoch, Martin Rumpf
We propose a novel variational method to compute a highly accurate global signed distance function (SDF) to a given point cloud. To this end, the jump set of the gradient of the SDF, which coincides with the medial axis of the surface, is explicitly taken into account through a higher-order variational formulation that enforces linear growth along the gradient direction away from this discontinuity set. The eikonal equation and the zero-level set of the SDF are enforced as constraints. To make this variational problem computationally tractable, a phase field approximation of Ambrosio-Tortorelli type is employed. The associated phase field function implicitly describes the medial axis. The method is implemented for surfaces represented by unoriented point clouds using neural network approximations of both the SDF and the phase field. Experiments demonstrate the method's accuracy both in the near field and globally. Quantitative and qualitative comparisons with other approaches show the advantages of the proposed method.
Chenming Gao, Hongwei Lin, Gengchen Li
In the realm of computer-aided design (CAD) software, the intersection of B-spline surfaces stands as a fundamental operation. Despite the extensive history of surface intersection algorithms, the challenge of handling complex intersection topologies persists. Subdivision algorithms have demonstrated strong robustness in computing surface/surface intersections and are capable of addressing singular cases; however, determining the topology of the intersection curves they produce is key to computing the intersection correctly and remains a difficult issue. To address this challenge, we propose a Mapper-based method for determining the topology of the intersection between two B-spline surfaces. Our algorithm is designed to efficiently handle various common and complex intersection topologies. Experimental results verify the robustness and topological correctness of this method.
Victor Maus, Vinicius Pozzobon Borin
Exact hierarchical agglomerative clustering (HAC) of large spatial datasets is limited in practice by the $\mathcal{O}(n^2)$ time and memory required for the full pairwise distance matrix. We present GSHAC (Geographically Sparse Hierarchical Agglomerative Clustering), a system that makes exact HAC feasible at scales of millions of geographic features on a commodity workstation. GSHAC replaces the distance matrix with a sparse geographic distance graph containing only pairs within a user-specified geodesic bound~$h_{\max}$, constructed in $\mathcal{O}(n \cdot k)$ time via spatial indexing, where~$k$ is the mean number of neighbors within~$h_{\max}$. Connected components of this graph define independent subproblems, and we prove that the resulting assignments are exact for all standard linkage methods at any cut height $h \le h_{\max}$. For single linkage, an MST-based approach keeps memory at $\mathcal{O}(n_k + m_k)$ per component. Applied to a global mining inventory ($n = 261{,}073$), the system completes in 12\,s (109\,MiB peak HAC memory) versus $\approx 545$\,GiB for the dense baseline. On a 2-million-point GeoNames sample, all tested thresholds completed in under 3\,minutes with peak memory under 3\,GiB. We provide a scikit-learn-compatible implementation for direct integration into GIS workflows.
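The component-decomposition idea can be illustrated for single linkage, where cutting at $h \le h_{\max}$ reduces exactly to connected components of the sparse graph. A stdlib-only stand-in (brute-force $O(n^2)$ pair scan in place of the spatial index, planar Euclidean distance in place of geodesic distance, so this is an illustration rather than GSHAC itself):

```python
import math

def sparse_single_linkage(points, h):
    """Single-linkage clusters at cut height h: exactly the connected components
    of the graph joining every pair at distance <= h (the observation that lets
    GSHAC solve components independently)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Union every pair within the distance bound (spatial index in GSHAC).
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= h:
                parent[find(i)] = find(j)

    # Relabel roots as consecutive cluster ids.
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]
```

For two well-separated pairs, `sparse_single_linkage([(0, 0), (0.5, 0), (10, 0), (10.5, 0)], 1.0)` returns `[0, 0, 1, 1]`.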