Fabian Baumann, Erol Akçay, Joshua B. Plotkin
Generative artificial intelligence (genAI) is rapidly reshaping how knowledge and culture are produced and consumed. Yet generative models are vulnerable to model collapse: when trained on data generated by earlier versions of themselves, their outputs can lose diversity and accuracy. This creates a social dilemma, because delegating tasks to genAI can be individually beneficial in the short term even as widespread adoption degrades future model performance. Here we develop a parsimonious model of behavior in collaborative interactions in which individuals can either exert human effort, rely on genAI, or refrain from work altogether. The welfare consequences of genAI are organized by a simple two-dimensional taxonomy: the strength of the incentive to perform the task without AI, and the severity of model collapse. Within this framework, the introduction of genAI -- while initially beneficial at the individual level -- will reduce social welfare for the most important types of tasks. In addition, habit formation around genAI use can couple otherwise separate domains, so that adoption in low-stakes tasks spills over into high-value tasks and amplifies welfare losses. Together, these results identify a general pathway by which, in the absence of intervention, individually rational adoption of genAI will assuredly and profoundly reduce collective welfare.
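For intuition, a minimal replicator-dynamics sketch of this dilemma in Python; the payoff structure, parameter values, and update rule below are illustrative assumptions for exposition, not the paper's model. Effort pays b - c, genAI pays b scaled by a model quality that erodes with the population share of genAI users (model collapse), and abstaining pays 0.

```python
# Hypothetical replicator-dynamics sketch of the genAI social dilemma.
# All payoffs and parameters are assumptions, not the paper's model.
import numpy as np

b, c, severity = 1.0, 0.6, 0.9      # task value, effort cost, collapse severity

def payoffs(x):
    """Payoffs for strategy shares x = [effort, genAI, abstain]."""
    q = max(0.0, 1.0 - severity * x[1])   # model quality erodes with genAI use
    return np.array([b - c, b * q, 0.0])

x = np.array([0.4, 0.4, 0.2])       # initial strategy shares
for _ in range(5000):               # discrete-time replicator update
    f = payoffs(x)
    x = x * np.exp(0.05 * (f - f @ x))
    x = x / x.sum()

print("long-run shares (effort, genAI, abstain):", np.round(x, 3))
print("average payoff at equilibrium:", round(float(payoffs(x) @ x), 3))
```

Under these assumed payoffs, genAI adoption grows until collapse drags its payoff down to the effort payoff, illustrating how individually rational adoption can expand even as model quality degrades.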
Tina Šfiligoj, Oded Cats
In this study, we take a systematic look at the unrealised part of public transport networks (PTNs): connections that would be functional yet are absent. We consider their complement graphs and study their structure. The complement graph $\bar G$ of an unweighted graph $G$ is a straightforward concept: a graph on the same set of nodes in which an edge exists if and only if it is not present in $G$. In contrast, a weighted complement graph cannot be uniquely determined. However, if we consider PTNs with travel times as edge weights, there are physical constraints on the possible weight ranges. We propose a method to construct weighted complement graphs of operational PTN graph representations, assigning weights to edges from the geographical distances between nodes (representing stops) combined with network-specific distributions of effective velocities and waiting times. We observe that the most central nodes in the weighted complement graph do not correspond to the least central nodes in the original network but are, remarkably, those in the geographical centre of the network that lack topological connectedness. Testing against null models on a dataset of 31 metro networks worldwide confirms that this is a fundamentally spatial effect.
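As a toy illustration of the construction (the stop coordinates, effective velocity, and waiting time below are invented stand-ins; the paper uses network-specific distributions), a minimal sketch with networkx:

```python
# Sketch: weighted complement graph of a toy stop network. Complement
# edges receive travel-time weights estimated from geographical distance,
# an assumed effective velocity, and an assumed waiting time.
import itertools
import math
import networkx as nx

pos = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (1, 1)}  # stops (km)
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")])          # operational links

v_eff = 30.0 / 60.0   # assumed effective velocity: 30 km/h, in km/min
t_wait = 5.0          # assumed mean waiting time in minutes

Gc = nx.Graph()
Gc.add_nodes_from(G)
for u, w in itertools.combinations(G.nodes, 2):
    if not G.has_edge(u, w):                       # complement edge
        d = math.dist(pos[u], pos[w])              # straight-line distance
        Gc.add_edge(u, w, weight=d / v_eff + t_wait)

# Centrality in the complement highlights nodes that lack connections.
print(nx.closeness_centrality(Gc, distance="weight"))
```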
Martin Hendrick, Maximilian Trique, Gabriele Manoli
Urban expansion fronts display a robust local roughness exponent together with strongly dispersed growth and nonuniversal dynamic exponents. We show that this coexistence can arise from a disorder-controlled crossover in projected-front growth. Introducing a minimal Eden model, in which geographic constraints act as quenched dilution and coalescence as quenched local acceleration, we demonstrate that the resulting front evolves through a long preasymptotic regime controlled by ordinary two-dimensional percolation before crossing over to asymptotic KPZ growth. In this regime, the local roughness remains close to $1/2$, while the large-scale exponents vary broadly with disorder and acceleration. These results provide a minimal explanation of urban-front roughening and suggest a more general mechanism for stochastic growth in heterogeneous media.
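A minimal sketch of the diluted-Eden ingredient (quenched local acceleration via coalescence is omitted here, and all parameters are illustrative choices):

```python
# Eden growth on a square lattice with quenched dilution: sites are
# blocked with probability p, standing in for geographic constraints.
import random

random.seed(0)
L, H, p = 256, 300, 0.30               # width, max height, dilution prob.
blocked = [[random.random() < p for _ in range(H)] for _ in range(L)]

occupied = {(x, 0) for x in range(L)}  # flat seed row
growth = [(x, 0) for x in range(L)]    # sites that may still grow
height = [0] * L

while growth:
    i = random.randrange(len(growth))
    x, y = growth[i]
    nbrs = [((x - 1) % L, y), ((x + 1) % L, y), (x, y + 1)]
    free = [(u, v) for u, v in nbrs
            if v < H and (u, v) not in occupied and not blocked[u][v]]
    if not free:
        growth[i] = growth[-1]         # site can no longer grow: swap-remove
        growth.pop()
        continue
    u, v = random.choice(free)
    occupied.add((u, v))
    growth.append((u, v))
    height[u] = max(height[u], v)
    if v >= H - 1:                     # front reached the top boundary
        break

mean_h = sum(height) / L
width = (sum((h - mean_h) ** 2 for h in height) / L) ** 0.5
print(f"mean front height {mean_h:.1f}, roughness w = {width:.1f}")
```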
Tamás Kiss
Theory and empirical research on management teams' influence on firm performance have witnessed continuous development and by now incorporate numerous details. Classic, experiment-based studies examining social systems collect vast amounts of data, but often investigate only the first one or two modes of the distributions of measured variables and have difficulty analyzing the effect of context. For example, in functional diversity research, management teams are described by measures incorporating complex distributions of the capabilities of individual managers and teams of managers. To investigate the effect of hidden distributions, and the effect of functional diversity composition on team communication and performance, we developed an agent-based model and conducted a series of simulation experiments. Modeling results show that, depending on the context, such as the communication scheme among interacting agents or their functional composition, intrapersonal functional diversity (IFD) and dominant function diversity (DFD) might enhance or reduce performance and communication among agents. Furthermore, simulation results also suggest that a third measure, capturing the aggregate expertise of the team, is required alongside IFD and DFD to comprehensively account for empirical findings.
Mariko I. Ito, Hiroyuki Hasada, Yudai Honma, Takaaki Ohnishi, Tsutomu Watanabe, Kazuyuki Aihara
Market instability has been extensively studied using mathematical approaches to characterize complex trading dynamics and detect structural change points. This study explores the potential for early warning of market instability by applying the Dynamical Network Marker (DNM) theory to order placement and execution data from the Tokyo Stock Exchange. DNM theory identifies indicators associated with critical slowing down -- a precursor to critical transitions -- in high-dimensional systems of many interacting elements. In this study, market participants are identified using virtual server IDs from the trading system, and multivariate time series representing their trading activities are constructed. This framework treats each participant as an interacting element, thereby enabling the application of DNM theory to the resulting time series. The results suggest that early warning signals of large price movements can be detected on a daily time scale. These findings highlight the potential to develop practical DNM-based early-warning systems for large price movements by further refining forecasting horizons and integrating multiple time series capturing different aspects of trading behavior.
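For concreteness, here is a sketch of one common form of such an indicator, the composite dynamical-network-biomarker score in the tradition of Chen et al.: near a transition, a dominant group of variables shows rising standard deviations and intra-group correlations alongside falling correlations with the rest of the system. The group choice, window, and synthetic data are illustrative assumptions, not this study's exact procedure.

```python
# Generic DNM-style index on a multivariate window X of shape (T, N).
import numpy as np

def dnm_index(X, group):
    """Composite score: within-group SD * within-group |corr| / cross |corr|."""
    other = [j for j in range(X.shape[1]) if j not in group]
    C = np.abs(np.corrcoef(X, rowvar=False))
    sd_in = X[:, group].std(axis=0).mean()
    r_in = C[np.ix_(group, group)][np.triu_indices(len(group), 1)].mean()
    r_out = C[np.ix_(group, other)].mean()
    return sd_in * r_in / (r_out + 1e-12)

rng = np.random.default_rng(0)
calm = rng.normal(size=(500, 8))                 # baseline window
near = rng.normal(size=(500, 8))
near[:, :3] += 2.0 * rng.normal(size=(500, 1))   # group 0-2 co-fluctuates
print(f"calm window: {dnm_index(calm, [0, 1, 2]):.2f}")
print(f"near-transition window: {dnm_index(near, [0, 1, 2]):.2f}")
```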
Sarvesh K. Upadhyay, Trifce Sandev, Sanjay Kumar, R. K. Singh
We study exploration properties of a random walk on a network. For a fully connected network, we find that the problem can be mapped to the well-known coupon collector problem, allowing us to estimate the form of $P(S,t)$: the distribution of the number of distinct nodes $S$ visited by the random walk up to time $t$. From a practical point of view, however, both the fully connected network and hops taking place at fixed intervals are an idealization. We address this by introducing the formalism of continuous-time random walks, wherein the walker spends a random amount of time at a node before hopping to a neighboring node. The formalism allows us to study the large deviation limit of $P(S,t)$ under the very mild condition that the distribution of waiting times $\psi(\tau)$ is analytic at small times. Furthermore, we find that at small times, the properties of $P(S,t)$ are largely independent of the network topology and are governed solely by the waiting time characteristics.
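The complete-graph case can be checked directly: under sampling with replacement, the coupon-collector picture gives $E[S(t)] \approx N\,(1 - (1 - 1/N)^t)$. A quick simulation (parameters arbitrary; the walk excludes the current node, so the match is approximate):

```python
# Coupon-collector check for a random walk on the complete graph K_N.
import random

random.seed(3)
N, t, runs = 100, 150, 2000
sims = []
for _ in range(runs):
    node, seen = 0, {0}
    for _ in range(t):
        node = (node + random.randrange(1, N)) % N   # uniform over other nodes
        seen.add(node)
    sims.append(len(seen))

theory = N * (1 - (1 - 1 / N) ** t)                  # coupon-collector estimate
print(f"simulated E[S(t)] = {sum(sims) / runs:.1f}, theory approx {theory:.1f}")
```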
Caesnan M. G. Leditto, Angus Southwell, Muhammad Usman, Kavan Modi
Higher-order networks with multiway interactions can exhibit collective dynamical phenomena that are absent in traditional pairwise network models. However, analyzing such dynamics becomes computationally prohibitive as the state space grows combinatorially in the multiway interaction order. Here we develop quantum algorithms for two central tasks -- synchronization estimation and certification of the no-phase-locking regime -- in the simplicial Kuramoto model. This model is a higher-order generalization of the celebrated Kuramoto model for coupled oscillators on graph-based networks. Under explicit assumptions on data access, data types, and simplicial structure, we derive end-to-end quantum gate complexities and identify regimes with polynomial quantum advantage for synchronization estimation and super-polynomial quantum advantage for no-phase-locking certification over classical methods. More broadly, these results extend quantum algorithms for higher-order networks from structural analysis to nonlinear dynamical diagnostics, easing a major computational bottleneck and opening a route to quantum methods for probing higher-order phenomena beyond the reach of direct classical approaches.
Alok Yadav, Saroj Yadav
Traditional macroeconomic growth models rely on general equilibrium and continuous, frictionless institutional transitions, failing to account for the catastrophic structural collapses observed in empirical economic history. We propose the Stochastic Networked Governance (SNG) model, a discrete-time, agent-based framework that bridges econophysics, network science, and institutional economics. By defining jurisdictions through a binary institutional genome, the model formalizes institutional complementarity, endogenous growth, and the non-linear macroeconomic penalties of structural reform (the "J-Curve"). Using the CEPII Gravity Database and the IMF Systemic Banking Crises dataset, we move beyond theoretical topologies to execute an empirical historical simulation from 1970 to 2017 across the top 100 global economies. Through Monte Carlo ensembles, we demonstrate how scale-invariant exogenous shocks and spatial capital flight drive global phase transitions, exposing the mathematical mechanics of the 1989-1991 Soviet collapse, the Hub-Risk Paradigm, and the emergent resilience of spatially firewalled market networks.
Lluís Torres-Hugas, Jordi Duch, Sergio Gómez, Alex Arenas
Higher-order interaction networks are typically modeled using hypergraphs or simplicial complexes, where interactions explicitly involve more than two nodes. Here we demonstrate that effective higher-order dynamical constraints emerge naturally on the 1-skeleton of a graph, provided the interaction carries nontrivial topological structure. We study phase-oscillator dynamics with edge phase lags modeled as a $U(1)$-valued connection. This structure induces a gradient Sakaguchi--Kuramoto-type flow and an associated twisted Laplacian whose spectrum depends on the cohomology class of the connection. We prove that the associated twisted Laplacian admits a zero mode if and only if the connection is cohomologically trivial, that is, when all cycle holonomies vanish. Consequently, synchronization is obstructed not by local pairwise mismatches, but by intrinsic topological frustration on cycles. We derive that the smallest eigenvalue of the twisted Laplacian scales with the magnitude of the holonomy, and its spectral transitions accurately predict the loss of stability of the phase-locked state as frustration is increased. For the specific case of constant phase lag, we analytically derive the critical transition point, $\alpha_c = \pi/3$ for a pentagonal cycle, which is in quantitative agreement with previously reported numerical thresholds. Our results establish a spectral framework linking dynamical frustration to network cohomology, and show that transitions in remote synchronization are shaped by cycle-level topological constraints.
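For the pentagonal example, the twisted Laplacian can be written down explicitly. The sketch below is a standard construction under assumed conventions (unnormalized Laplacian, constant lag on every oriented edge): the zero mode appears exactly when the cycle holonomy $n\alpha$ is a multiple of $2\pi$, and the smallest eigenvalue lifts as the holonomy grows.

```python
# Twisted Laplacian of an n-cycle with a constant U(1) edge phase lag.
import numpy as np

def twisted_laplacian(n, alpha):
    """Hermitian twisted Laplacian: degree matrix minus twisted adjacency."""
    A = np.zeros((n, n), dtype=complex)
    for j in range(n):
        A[j, (j + 1) % n] = np.exp(1j * alpha)    # connection along the edge
        A[(j + 1) % n, j] = np.exp(-1j * alpha)   # conjugate on the reverse
    return 2.0 * np.eye(n) - A                    # every node has degree 2

n = 5                                             # pentagonal cycle
for alpha in (0.0, 2 * np.pi / 5, np.pi / 6, np.pi / 3):
    lam_min = np.linalg.eigvalsh(twisted_laplacian(n, alpha))[0]
    print(f"alpha = {alpha:.4f}, holonomy = {n * alpha:.4f}, "
          f"smallest eigenvalue = {lam_min:.4f}")
```

Note the zero eigenvalue at both $\alpha = 0$ and $\alpha = 2\pi/5$ (holonomy $2\pi$), consistent with the cohomological-triviality criterion.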
I. David Elder, Juan Moreno-Cruz, Cameron Wade, Sylvia Sleep, Sara Hastings-Simon, Sean McCoy, Heather L. MacLean, I. Daniel Posen
Energy systems optimisation models are a leading tool for informing decisions in the energy transition. However, these models often remain opaque, and results are frequently presented without a clear discussion of their epistemic limitations. We propose Diagnostic Modelling as a framework wherein modellers critically interrogate their models and explore uncertainties to uncover mechanistic explanations that offer policy-relevant insights. Mechanistic explanations provide fundamental understanding that remains valid despite model uncertainty and does not depend on detailed knowledge of a specific model. By adopting a more open and transparent approach to engaging with energy systems models, Diagnostic Modelling encourages the participation of a broader range of decision-makers, thereby building consensus in support of the energy transition.
Xiaochen Wang
Heterogeneity in individual characteristics and behaviour is a fundamental property of complex dynamical systems. While previous studies on the evolutionary dynamics of strategies in various systems have predominantly focused on structural heterogeneity, dynamical heterogeneity in individuals' strategy updates has been largely neglected. Here, we introduce a novel dynamical update mechanism based on individuals' decision-making information, comprising personal and social components. This update rule allows each individual to vary in the weight given to personal information and the amount of social information accessed, capturing the general scenario of dynamically heterogeneous populations. We find that cooperation, as a collective prosocial outcome, is significantly enhanced when highly connected individuals in the interaction network rely more heavily on personal information and access more social information. This effect is notably absent in homogeneous networks, thereby overturning the prevailing consensus that structural heterogeneity inherently suppresses cooperation. This theoretical prediction is further validated by empirical evidence from GitHub collaboration networks. Furthermore, preferential linking to individuals who are well informed and possess greater personal information further promotes collective cooperation. We additionally reveal that cooperators gain a decisive advantage when they rely more heavily on personal information than defectors do, whereas social information affects cooperators and defectors equally. Our findings offer profound insights into how dynamical heterogeneity fundamentally shapes the evolution of collective cooperation in complex systems.
Konrad Szocik, Abraham Loeb
Recent work on the Loeb Scale has provided astronomy with a structured framework for assessing anomalous interstellar objects, including a quantitative mapping of a classification ranking, its evolution as data are added, and a broader observational strategy for firming up its verdict. What remains unclear is the epistemic and methodological meaning of the threshold built into that framework. Here we argue that the central philosophical issue is no longer whether astronomy can define such a threshold, but how a threshold already in place should regulate scientific inquiry under uncertainty. We suggest that candidate technosignature status, such as Level 4 on the Loeb Scale, should be understood as an intermediate epistemic status: stronger than permissive openness, weaker than confirmation, yet sufficient to justify methodological escalation. The argument proceeds in three steps. First, it reconstructs the recent philosophical debate through the work of Lomas, Lane, and Cowie. Second, it turns to historical cases discussed by Kaplan (2026) to show that important discoveries are often delayed not only by weak evidence, but also by paradigms, prestige, and institutional filtering. Third, it interprets candidate status as a form of structured scientific commitment under uncertainty, one that justifies intensified observation, broader hypothesis management, and more deliberate allocation of attention and resources without licensing belief in artificial origin. The paper concludes by arguing that AI should not be the arbiter in deducing an extraterrestrial origin, but can support the detection, comparison, and prioritization of anomalies once a candidate status has been formally recognized.
A. Schmaus, N. Marwan, N. Molkenthin
Trajectories of units moving on networks are relevant for nonlinear dynamical systems as diverse as polymers, ocean drifters, and human mobility. Although recurrence quantification analysis (RQA) is a well-researched tool with applications in many areas, it has rarely been used for spatial trajectories on networks. Here, we explore the use of RQA for paths on networks. We find that path dynamics on networks display recurrence patterns that are rarely described in other applications of recurrence analysis. In particular, the combination of diagonal lines and perpendicular diagonal lines indicates backtracking paths. We find that recurrence analysis for path dynamics on networks can be helpful to a) better understand the network structure if the dynamics and recurrence plots are known, b) better understand the dynamics if the network and recurrence plots are known, and c) understand the interaction between path dynamics and the underlying network.
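A minimal sketch of such a recurrence plot for a random walk on a small-world graph, where two time points recur if the visited nodes are within a shortest-path threshold (the graph, walk length, and threshold are illustrative choices, not the paper's setup):

```python
# Recurrence matrix of a network path, using graph distance as the metric.
import numpy as np
import networkx as nx

G = nx.watts_strogatz_graph(60, 4, 0.1, seed=2)
dist = dict(nx.all_pairs_shortest_path_length(G))

rng = np.random.default_rng(2)
path = [0]
for _ in range(120):                        # simple random walk on G
    path.append(int(rng.choice(list(G[path[-1]]))))

eps, T = 1, len(path)                       # recurrence threshold in hops
R = np.zeros((T, T), dtype=int)
for i in range(T):
    for j in range(T):
        R[i, j] = int(dist[path[i]].get(path[j], T) <= eps)

# Diagonal lines in R capture retraced routes; anti-diagonals flag backtracking.
print(f"recurrence rate: {R.mean():.2f}")
```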
Han-Yun Tu, Xiang Yang, Si-Yao Wei
Firms' positions in innovation networks determine their access to external knowledge, yet how these positions shape technological search behavior and influence productivity remains underexplored. We propose that central network positions systematically reconfigure firms' innovation strategies by promoting exploratory search across emerging technological domains while sustaining broader technological portfolios. This behavioral reorientation allows central firms to diversify their innovation efforts and leverage knowledge spillovers more effectively, translating network advantages into higher productivity. Using panel data on Chinese listed firms and patent-based measures of innovation networks, we construct a dynamic patent citation network to track changes in firms' network centrality and technological search patterns over time. Our findings show that firms with greater centrality are more likely to enter novel technological fields and expand their technological scope, leading to measurable gains in total factor productivity. We further demonstrate that the impact of network centrality on exploratory search is amplified by scientific embeddedness, whereas the productivity returns from exploration depend on technological distance. By connecting structural network positions with behavioral adaptations in technological search, this study uncovers a direct micro-level mechanism through which innovation networks drive firm performance. These results highlight the strategic value of network centrality in shaping not just access to knowledge, but also the direction and efficiency of innovation activities.
Alina Dubovskaya, David J. P. O'Sullivan, Michael Quayle
Understanding social polarization requires integrating insights from psychology, sociology, and complex systems science. Agent-based modeling provides a natural framework to combine perspectives from different fields and explore how individual cognition shapes collective outcomes. This study introduces a novel agent-based model that integrates two cognitive and social mechanisms: the desire to be unique within a group (optimal distinctiveness theory) and the tendency to simplify complex information (cognitive compression). In the model, virtual agents interact in pairs and decide whether to adopt each other's opinions by balancing two opposing drives: maximizing opinion diversity within their local social group while simplifying the overall opinion landscape, with both evaluated using Shannon entropy. We show that the combination of these mechanisms can reproduce real-world patterns, such as the emergence of distinct heterogeneous opinion clusters. Moreover, unlike many existing models where opinions become fixed once opinion groups form, individuals in our model continue to adjust their opinions after clusters emerge, leading to ongoing variation within and between opinion groups. Computational experiments reveal that polarization emerges when local group sizes are moderate (consistent with Dunbar's number), while smaller groups cause fragmentation and larger ones hinder distinct cluster formation. Higher cognitive compression increases unpredictability, while lower compression produces more consistent group structures. These results demonstrate how simple psychological rules can generate complex, realistic social behavior and advance understanding of polarization in human societies.
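A rough sketch of the two entropic drives (the acceptance rule's functional form, group construction, and parameters are assumptions for exposition, not the paper's exact model): an agent adopts a peer's opinion when the move raises Shannon entropy within its local group by more than it raises entropy of the global opinion landscape.

```python
# Illustrative entropy-balancing opinion dynamics on a well-mixed population.
import numpy as np

def H(opinions):
    """Shannon entropy (bits) of the empirical opinion distribution."""
    _, counts = np.unique(opinions, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(7)
N, K, group_size = 200, 6, 15               # agents, opinions, local group
ops = rng.integers(0, K, size=N)

for _ in range(5000):
    i, j = rng.choice(N, size=2, replace=False)
    group = np.append(rng.choice(N, size=group_size - 1, replace=False), i)
    trial = ops.copy()
    trial[i] = ops[j]                       # i tentatively adopts j's opinion
    d_local = H(trial[group]) - H(ops[group])   # distinctiveness drive
    d_global = H(trial) - H(ops)                # compression drive (minimize)
    if d_local - d_global > 0:
        ops = trial

print("opinion cluster sizes:", np.bincount(ops, minlength=K))
```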
Tommaso Giacometti, Paola Surcinelli, Mariachiara Stellato, Nico Curti
Background: The Rorschach inkblots are ambiguous stimuli developed to evoke subjective interpretations in humans, while modern artificial intelligence (AI) models are trained to recognize well-established patterns and classes. The comparison of these two opposite systems raises a simple and provocative question: what happens when we ask an AI model to interpret an inkblot that "is not supposed to represent anything predefined"? Methods: We submitted the complete set of ten Rorschach inkblots to 61 AI models pretrained on the ImageNet dataset, spanning multiple architectural families. Model predictions were analyzed at the level of top-ranked classes and were quantified using a selected set of psycho-semantic variables inspired by the Rorschach tradition. Statistical analyses examined the effects of model family, computational complexity, and image conditions, comparing model-generated responses with human reference profiles. Findings: Across all architectures, model responses were highly non-random and showed systematic semantic convergence and inter-model agreement. However, quantitative analyses revealed a clear and robust separation between human responses and all AI model families. Human profiles exhibited substantially higher affective load, semantic richness, projected agency, and variability, whereas AI models converged toward frequent, formally coherent, and perceptually stable interpretations. Interpretation: Vision models consistently project the semantic organization they have learned, favoring consensus and formal coherence over affective or symbolic elaboration. Applying the Rorschach test to AI systems does not assess human-like cognition but provides a principled framework for exposing perceptual and semantic biases embedded in contemporary computer vision models.
Simon D. Lindner, Elisabeth L. Zeilinger, Amelie Fuchs, Simone Lubowitzki, Peter Klimek, Alexander Gaiger
Treatment of cancer involves heterogeneous, complex care pathways. The relationship between these longitudinal trajectories, baseline mental health, and prognostic outcomes remains poorly understood. We introduce an interpretable temporal-analysis framework leveraging these dynamics, analyzing care patterns spanning up to 37 years for >8,000 patients. Using Dynamic Time Warping (DTW) and Hierarchical Clustering on sequence data of healthcare encounters, we identified nine distinct, robust trajectory phenotypes. We evaluated their prognostic utility by incorporating them into generalized linear models alongside conventional clinical, demographic, and socioeconomic covariates. The trajectory clusters significantly enhanced mortality prediction and maintained independent predictive significance. Compared to a low-utilization reference group (mortality 31.5%), all eight remaining clusters exhibited substantially higher mortality odds. We uncovered two primary high-risk trajectory patterns: long-term, complex care pathways reflecting chronic disease courses (up to 196 events; mortality OR up to 3.38, 95% CI 2.13-5.37), and shorter but intense trajectories indicating rapid progression (median 78 events; OR 2.32, 95% CI 1.82-2.97). Unexpectedly, the high-utilization complexity clusters were associated with significantly lower baseline anxiety scores, highlighting a divergent relationship between trajectory intensity, mortality risk, and initial psychological burden. These results demonstrate that incorporating temporal healthcare utilization data uncovers robust trajectory phenotypes capturing multidimensional prognostic information. This offers significant explanatory power beyond established static variables for refining risk stratification in precision oncology.
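A compact sketch of the DTW-plus-hierarchical-clustering step on synthetic stand-in sequences (the distance, linkage method, and cluster count are illustrative; the study's sequences encode healthcare encounters):

```python
# Pairwise DTW distances between event-count sequences, then clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

rng = np.random.default_rng(4)
seqs = [rng.poisson(lam, size=rng.integers(20, 40))  # synthetic encounter counts
        for lam in (1, 1, 5, 5, 10, 10)]

n = len(seqs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = dtw(seqs[i], seqs[j])

Z = linkage(squareform(D), method="average")         # condensed matrix input
print("cluster labels:", fcluster(Z, t=3, criterion="maxclust"))
```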
Atif Ansar, Bent Flyvbjerg, Alexander Budzier
Do projects learn across space and time? The Olympics, among the largest publicly funded programmes in the world, offer a unique empirical setting. Theoretically, the Games seem ideal for generating "positive learning curves," driving down costs from one iteration to the next. In practice, they do not. Drawing on the concept of "myopia of learning," we argue that spatiotemporality (geographic distance, temporal gaps, and the temporary organisational form of each host committee) combines to block higher-level learning. Our analysis of cost overruns from 1960 to 2024 reveals no sustained improvement over 64 years. Tactical learning abounds, but none aggregates into strategic improvement. We propose four strategies for overcoming the spatiotemporal barrier (incremental, centralising, decentralising, and real options), arguing that radical reform is required.
M. Levent Kurnaz
International commerce has long been seen as a key way to keep the global food system stable, allowing agricultural surpluses in some areas to compensate for shortages in others. This strategy has led to the rise of highly specialised processing hubs that combine significant industrial capacity with agricultural inputs sourced from throughout the world. Türkiye's flour sector -- currently the largest wheat flour exporter in the world -- represents one of the most prominent examples of this model. However, increasing climate variability and geopolitical fragmentation raise important questions regarding the long-term resilience of food systems that rely heavily on imported biological inputs. Recent research shows the growing probability of synchronised crop failures across multiple agricultural regions due to atmospheric circulation anomalies and climate-induced extreme weather events. The assumption that global markets can consistently rebalance supply disruptions through trade is challenged by such events. Using the flour industry of Türkiye as a case study, this paper investigates the susceptibility of globally integrated grain processing centres. In order to assess the correlation between the scope of industrial processing and the capacity of domestic agricultural production, we introduce the Biophysical Autonomy Ratio (BAR). The analysis demonstrates that Türkiye's BAR has declined consistently over time, suggesting that its processing sector has expanded beyond the domestic production base. The results suggest that in order to enhance the resilience of the food system in the future, it may be necessary to establish a more precise alignment between biological production systems and industrial food infrastructure. The paper concludes by addressing the policy implications for national food security governance in the context of escalating climate instability.
Shengjun Wu, Jeffery Wu
Analyzing correlation between variables is often both the tool and the goal of modern science. A crucial question is whether the correlation between two variables is a direct correlation or only an indirect correlation through a confounder. We review the existing measures of direct correlation and organize them into two families, each corresponding to a systematic construction: (i) removing the direct correlation from the original joint distribution and quantifying the resulting distributional shift, and (ii) intervening on one variable via do-calculus and quantifying how the distribution of the other variable responds. For every Kullback--Leibler-based measure in either family, we propose a Jensen--Shannon-based regularized analogue. Since the square root of the Jensen--Shannon divergence is a bounded metric, the regularized measures take values in $[0,1]$ and are free of the singularity of the Kullback--Leibler divergence. We further analyze the achievable upper bound of each regularized measure under the observed marginal $p(x,z)$, which depends on the alphabet size and is in general strictly below $1$; this sets the correct scale against which observed values should be read. The properties and the differences of the proposed measures are illustrated on a decision-making toy model and on three public real datasets: Titanic survival, UCI Adult (Census Income), and the UC Berkeley 1973 graduate admissions. Bootstrap $95\%$ confidence intervals are reported for every numerical value.
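As a worked toy example of the do-calculus family (the joint table below is invented for illustration): backdoor-adjust over the confounder to obtain $p(y \mid \mathrm{do}(x))$, then quantify the response with the Jensen--Shannon distance, which SciPy returns as the square root of the JS divergence (a bounded metric; base 2 keeps it in $[0,1]$).

```python
# Backdoor adjustment plus a bounded JS-based response measure.
import numpy as np
from scipy.spatial.distance import jensenshannon

p = np.array([[[0.15, 0.05], [0.05, 0.15]],    # toy joint p[x, y, z], binary
              [[0.05, 0.15], [0.15, 0.05]]])
p = p / p.sum()

p_z = p.sum(axis=(0, 1))                       # p(z)
p_y_xz = p / p.sum(axis=1, keepdims=True)      # p(y | x, z)
p_do = np.einsum("z,xyz->xy", p_z, p_y_xz)     # row x holds p(y | do(x))

d = jensenshannon(p_do[0], p_do[1], base=2)    # JS distance, in [0, 1]
print(f"regularized direct-effect measure: {d:.3f}")
```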