Zhanchuan Zhang, Jeth Arunseangroj, Wenchao Xu
Neutral-atom arrays are a leading platform for quantum technologies, offering a promising route toward large-scale, fault-tolerant quantum computing. We propose a novel quantum processing architecture based on dual-type, dual-element atom arrays, where individually trapped atoms serve as data qubits, and small atomic ensembles enable ancillary operations. By leveraging the selective initialization, coherent control, and collective optical response of atomic ensembles, we demonstrate ensemble-assisted quantum operations that enable reconfigurable, high-speed control of individual data qubits and rapid mid-circuit readout, including both projective single-qubit and joint multi-qubit measurements. The hybrid approach of this architecture combines the long coherence times of single-atom qubits with the enhanced controllability of atomic ensembles, achieving high-fidelity state manipulation and detection with minimal crosstalk. Numerical simulations indicate that our scheme supports individually addressable single- and multi-qubit operations with fidelities of 99.5% and 99.9%, respectively, as well as fast single- and multi-qubit state readout with fidelities exceeding 99% within tens of microseconds. These capabilities open new pathways toward scalable, fault-tolerant quantum computation, enabling repetitive error syndrome detection and efficient generation of long-range entangled many-body states, thereby expanding the quantum information toolbox beyond existing platforms.
Wenchao Xu, Xinyu Zhang
Asymptotic optimality is a key theoretical property in model averaging. Due to technical difficulties, existing studies rely on restricted weight sets or the assumption that there is no true model with fixed dimensions in the candidate set. The focus of this paper is to overcome these difficulties. Surprisingly, we discover that when the penalty factor in the weight selection criterion diverges with a certain order and the true model dimension is fixed, asymptotic loss optimality does not hold, but asymptotic risk optimality does. This result differs from the corresponding result of Fang et al. (2023, Econometric Theory 39, 412-441) and reveals that using the discrete weight set of Hansen (2007, Econometrica 75, 1175-1189) can yield opposite asymptotic properties compared to using the usual weight set. Simulation studies illustrate the theoretical findings in a variety of settings.
Wenchao Xu, Xinyu Zhang, Jeng-Min Chiou, Yuying Sun
Given the high volatility and susceptibility to extreme events in the cryptocurrency market, forecasting tail risk is of paramount importance. Value-at-Risk (VaR), a quantile-based risk measure, is widely used for assessing tail risk and is central to monitoring financial market stability. In data-rich environments, functional data from various domains are employed to forecast conditional quantiles. However, the infinite-dimensional nature of functional data introduces uncertainty. This paper addresses this uncertainty problem by proposing a novel data-driven conditional quantile model averaging (MA) approach. With a set of candidate models varying by the number of components, MA assigns weights to each model determined by a K-fold cross-validation criterion. We prove the asymptotic optimality of the selected weights in terms of minimizing the excess final prediction error when all candidate models are misspecified. Additionally, when the true regression relationship belongs to the set of candidate models, we provide consistency results for the averaged estimators. Numerical studies indicate that, in most cases, the proposed method outperforms other model selection and averaging methods, particularly for extreme quantiles in cryptocurrency markets.
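The weight-selection step described above can be sketched in a few lines. The toy Python version below (all names hypothetical) combines two candidate quantile forecasts with a convex weight chosen to minimize the check (pinball) loss; the paper's actual criterion evaluates this loss via K-fold cross-validation over candidates that differ in the number of functional components, so this is only a simplified in-sample illustration.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Check (pinball) loss for quantile level tau, averaged over observations."""
    u = y - pred
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def average_two_quantile_models(y, pred_a, pred_b, tau, step=0.01):
    """Grid-search the convex weight w so that w*pred_a + (1-w)*pred_b
    minimizes the pinball loss. The grid includes w=0 and w=1, so the
    averaged forecast is never worse than the better single model here."""
    grid = np.arange(0.0, 1.0 + step, step)
    losses = [pinball_loss(y, w * pred_a + (1 - w) * pred_b, tau) for w in grid]
    best = int(np.argmin(losses))
    return grid[best], losses[best]

# Toy data: two biased candidate forecasts of the 5% quantile.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
pred_a = np.full(500, np.quantile(y, 0.05) + 0.3)
pred_b = np.full(500, np.quantile(y, 0.05) - 0.2)
w, loss = average_two_quantile_models(y, pred_a, pred_b, tau=0.05)
print(w, loss)
```

Because the weight grid contains the degenerate weights 0 and 1, model averaging here subsumes model selection between the two candidates.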
Wenchao Xu, Holger Grosshans
Electrostatic charge on powders arises during pneumatic transport due to particle-particle and particle-surface interactions via triboelectrification. This poses a threat to the safety of industrial production and has been the source of numerous fires and dust explosions in the past. Triboelectric charges are affected by environmental conditions, such as ambient temperature and relative humidity. In this work, we experimentally investigated the influence of ambient humidity on the particle charge of gas-solid flows in a square-shaped duct. Monodisperse PMMA particles were fed into a fully developed airflow in a PMMA duct and then passed through a metallic duct section. The charge of the particles was measured at the outlet of the metallic duct via a Faraday cup. By measuring the electrostatic charge under various environmental conditions, we observed that the electrostatic charge first increases with humidity and then decreases at higher humidity levels.
Wenchao Xu, Holger Grosshans
During pneumatic conveying, powder electrifies rapidly due to the high flow velocities. In our experiments, the particles charge even if the conveying duct is made of the same material as the particles, which might be caused by triboelectrification between two asymmetric contact surfaces. Surprisingly, we found the airflow rate to determine the polarity of the overall powder charge. This study investigates the charging of microscale PMMA particles in turbulent flows passing through a square PMMA duct. The particles are spherical and monodisperse. A Faraday cup at the duct outlet measured the total charge of the particles. At low flow velocities, the particles charged negatively after passing through the duct. However, the powder's overall charge switched to a positive polarity as the flow velocity increased.
Wenchao Xu, Xinyu Zhang
Model selection (MS) and model averaging (MA) are two popular approaches when many candidate models are available. Theoretically, the estimation risk of an oracle MA is no larger than that of an oracle MS because the former is more flexible, but a foundational question remains: does MA offer a {\it substantial} improvement over MS? Recently, a seminal work, Peng and Yang (2021), answered this question under nested models with linear orthonormal series expansion. In the current paper, we revisit this question under linear nested regression models. In particular, we allow a more general nested framework, heteroscedastic and autocorrelated random errors, and sparse coefficients, which are more common in practice. In addition, we further compare MAs with different weight sets. Simulation studies support the theoretical findings in a variety of settings.
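The oracle comparison at issue can be made concrete with a toy Monte Carlo draw (a sketch under simplifying assumptions, not the paper's setting): fit the nested OLS models, record the loss of the best single model (oracle MS), and the loss of the best convex combination over a pairwise weight grid (a restricted oracle MA). Since degenerate weights lie in the MA feasible set, the MA loss can never exceed the MS loss; the paper's question is how large that gap is.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.standard_normal((n, p))
beta = 1.0 / (1 + np.arange(p)) ** 2           # decaying true coefficients
mu = X @ beta                                   # true regression mean
y = mu + rng.standard_normal(n)

# Fitted mean vectors of the nested models using the first k regressors.
fits = []
for k in range(1, p + 1):
    Xk = X[:, :k]
    fits.append(Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0])

losses = [np.sum((f - mu) ** 2) for f in fits]
ms_loss = min(losses)                           # oracle model selection

# Restricted oracle MA: best convex combination of any pair of models
# (the grid includes w=0 and w=1, hence also every single model).
grid = np.linspace(0.0, 1.0, 101)
ma_loss = min(
    np.sum((w * fits[j] + (1 - w) * fits[k] - mu) ** 2)
    for j in range(p) for k in range(p) for w in grid
)
print(ms_loss, ma_loss)
```

The inequality `ma_loss <= ms_loss` holds by construction in every draw; the papers cited above quantify whether the improvement is asymptotically substantial.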
Wenchao Xu, Aditya V. Venkatramani, Sergio H. Cantú, Tamara Šumarac, Valentin Klüsener, Mikhail D. Lukin, Vladan Vuletić
May 24, 2021 · quant-ph
We demonstrate a new approach for fast preparation, manipulation, and collective readout of an atomic Rydberg-state qubit. By making use of Rydberg blockade inside a small atomic ensemble, we prepare a single qubit within 3~$μ$s with a success probability of $F_p=0.93 \pm 0.02$, rotate it, and read out its state in 6~$μ$s with a single-shot fidelity of $F_d=0.92 \pm 0.04$. The ensemble-assisted detection is $10^3$ times faster than imaging of a single atom with the same optical resolution, and enables fast repeated non-destructive measurement. We observe qubit coherence times of 15~$μ$s, much longer than the $π$ rotation time of 90~ns. Potential applications ranging from faster quantum information processing in atom arrays to efficient implementation of quantum error correction are discussed.
Sergio H. Cantu, Aditya V. Venkatramani, Wenchao Xu, Leo Zhou, Brana Jelenković, Mikhail D. Lukin, Vladan Vuletić
The ability to control strongly interacting light quanta (photons) is of central importance in quantum science and engineering. Recently, it was shown that such strong interactions can be engineered in specially prepared quantum optical systems. Here, we demonstrate a method for coherent control of strongly interacting photons, extending quantum nonlinear optics into the domain of repulsive photons. This is achieved by coherently coupling photons to several atomic states, including strongly interacting Rydberg levels in a cold rubidium gas. Using this approach, we demonstrate both repulsive and attractive interactions between individual photons and characterize them by the measured two- and three-photon correlation functions. For the repulsive case, we demonstrate signatures of interference and self-ordering from three-photon measurements. These observations open a route to study strongly interacting dissipative systems and quantum matter composed of light, such as a crystal of individual photons.
Holger Grosshans, Wenchao Xu, Tatsushi Matsuyama
Thus far, simulations have failed to accurately predict electrostatic powder charging during pneumatic transport. We advanced the modeling of powder flow charging through a three-part study: first, we shot individual particles at a metal target and measured the exchanged charge. Second, based on these results, we formulated an empirical model and implemented it in our CFD tool. Using this tool, we performed large-eddy simulations of the powder flow through a square duct with a cylindrical obstacle inside. Finally, we compared the simulations to measurements in our pneumatic conveying test rig. The simulations successfully predicted the charging of a powder consisting of monodisperse particles 200 $μ$m in size. Contrary to the usual procedure for this type of simulation, the tool requires no tuning of any parameters. According to our simulations, the powder mostly charged when hitting the cylindrical obstacle. The contacts led to bipolar charge distributions.
Fushuo Huo, Wenchao Xu, Song Guo, Jingcai Guo, Haozhao Wang, Ziming Liu, Xiaocheng Lu
Open-World Compositional Zero-shot Learning (OW-CZSL) aims to recognize novel compositions of state and object primitives in images with no priors on the compositional space, which induces a tremendously large output space containing all possible state-object compositions. Existing works either learn a joint compositional state-object embedding or predict the simple primitives with separate classifiers. However, the former heavily relies on external word embedding methods, while the latter ignores the interactions of interdependent primitives. In this paper, we revisit the primitive prediction approach and propose a novel method, termed Progressive Cross-primitive Compatibility (ProCC), to mimic the human learning process for OW-CZSL tasks. Specifically, the cross-primitive compatibility module explicitly learns to model the interactions of state and object features with trainable memory units, which efficiently acquires cross-primitive visual attention to reason about high-feasibility compositions, without the aid of external knowledge. Moreover, considering the partial-supervision setting (pCZSL) as well as the imbalance issue of multiple task prediction, we design a progressive training paradigm to enable the primitive classifiers to interact and obtain discriminative information in an easy-to-hard manner. Extensive experiments on three widely used benchmark datasets demonstrate that our method outperforms other representative methods in both the OW-CZSL and pCZSL settings by large margins.
Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, Song Guo
Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations. However, existing MML methods often optimize a uniform objective for different modalities, leading to the notorious "modality imbalance" problem and counterproductive MML performance. To address the problem, some existing methods modulate the learning pace based on the fused modality, which is dominated by the better modality and eventually yields only a limited improvement on the worse modality. To better exploit the features of multimodal data, we propose Prototypical Modality Rebalance (PMR) to stimulate the particular slow-learning modality without interference from other modalities. Specifically, we introduce prototypes that represent the general features of each class to build non-parametric classifiers for uni-modal performance evaluation. Then, we accelerate the slow-learning modality by enhancing its clustering toward the prototypes. Furthermore, to alleviate suppression from the dominant modality, we introduce a prototype-based entropy regularization term during the early training stage to prevent premature convergence. Besides, our method relies only on the representations of each modality, without restrictions from model structures or fusion methods, giving it great application potential in various scenarios.
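The prototype-based classifier mentioned above is simple to sketch. In the toy NumPy version below (hypothetical names, random features standing in for learned uni-modal representations), a prototype is the mean feature vector of a class, and prediction assigns each sample to its nearest prototype, so per-modality performance can be monitored without any extra trainable parameters.

```python
import numpy as np

def class_prototypes(feats, labels, n_classes):
    """Prototype = mean feature vector of each class."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def prototype_predict(feats, protos):
    """Non-parametric nearest-prototype classification."""
    dists = np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy uni-modal features: two well-separated clusters of 4-d vectors.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (50, 4)),
                        rng.normal(3.0, 0.1, (50, 4))])
labels = np.repeat([0, 1], 50)

protos = class_prototypes(feats, labels, 2)
acc = (prototype_predict(feats, protos) == labels).mean()
print(acc)
```

In PMR-style training, such per-class prototypes would be maintained for each modality separately, so the slower modality's clustering toward its own prototypes can be strengthened without touching the others.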
Jie Zhang, Xiaosong Ma, Song Guo, Wenchao Xu
Federated Semi-supervised Learning (FedSSL) has emerged as a new paradigm allowing distributed clients to collaboratively train a machine learning model over scarce labeled data and abundant unlabeled data. However, existing works on FedSSL rely on a closed-world assumption that all local training data and global testing data are from seen classes observed in the labeled dataset. It is crucial to go one step further: adapting FL models to an open-world setting, where unseen classes exist in the unlabeled data. In this paper, we propose a novel Federated open-world Semi-Supervised Learning (FedoSSL) framework, which can solve the key challenge in distributed and open-world settings, i.e., the biased training process for heterogeneously distributed unseen classes. Specifically, since the presence of a certain unseen class varies from client to client, locally unseen classes (existing in multiple clients) are likely to receive stronger aggregation effects than globally unseen classes (existing in only one client). We adopt an uncertainty-aware suppressed loss to alleviate the biased training between locally unseen and globally unseen classes. Besides, we enable a calibration module supplementary to the global aggregation to avoid potentially conflicting knowledge transfer caused by inconsistent data distributions among different clients. The proposed FedoSSL can be easily adapted to state-of-the-art FL methods, which is validated via extensive experiments on benchmarks and real-world datasets (CIFAR-10, CIFAR-100 and CINIC-10).
Wenchao Xu, Simon Jantač, Tatsushi Matsuyama, Holger Grosshans
This article reports on measurements of the electrostatic charge of particles in a turbulent duct flow. In contrast to previous charge measurements, which do not apply to turbulent flows or give only the sum of all particles' charges, the new method resolves the charge of a turbulent powder flow spatially. The experiment consists of a Particle Tracking Velocimetry (PTV) system and electrode plates that generate an electric field. By comparing particle velocities and accelerations with and without the electric field, the time-averaged local particle charge profile is derived. Spatially resolving the charge profiles unveiled bipolar particle flow. The average of the charge profiles agreed well with a conventional Faraday pail measurement, demonstrating the accuracy of our measurements. However, the peak value of the charge profiles was 76 times higher than the average of the particles' charge.
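The per-particle charge extraction described above rests on nothing more than Newton's second law: with the field on, a particle of mass m gains an extra acceleration qE/m, so subtracting the field-off acceleration cancels drag and gravity to first order. A minimal numerical sketch with purely illustrative values (none taken from the paper):

```python
import math

# Illustrative values only, not measurements from the experiment.
rho = 1190.0          # PMMA density, kg/m^3
d = 200e-6            # particle diameter, m
m = rho * math.pi * d**3 / 6          # particle mass, kg

E = 5.0e4             # field between the electrode plates, V/m
a_on = 12.0           # time-averaged acceleration with field, m/s^2
a_off = 9.5           # acceleration without field (drag + gravity), m/s^2

# q E = m (a_on - a_off): forces common to both runs drop out.
q = m * (a_on - a_off) / E            # inferred particle charge, C
print(f"q = {q:.2e} C")               # a fraction of a picocoulomb here
```

Averaging such estimates over many tracked particles per wall-normal position is what yields the spatially resolved charge profile that the Faraday pail alone cannot provide.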
Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junhong Liu, Song Guo
Recently, Multimodal Learning (MML) has gained significant interest as it compensates for single-modality limitations through comprehensive complementary information within multimodal data. However, traditional MML methods generally use a joint learning framework with a uniform learning objective, which can lead to the modality competition issue, where feedback predominantly comes from certain modalities, limiting the full potential of others. In response to this challenge, this paper introduces DI-MML, a novel detached MML framework designed to learn complementary information across modalities while avoiding modality competition. Specifically, DI-MML addresses competition by separately training each modality encoder with isolated learning objectives. It further encourages cross-modal interaction via a shared classifier that defines a common feature space, and employs a dimension-decoupled unidirectional contrastive (DUC) loss to facilitate modality-level knowledge transfer. Additionally, to account for varying reliability in sample pairs, we devise a certainty-aware logit weighting strategy to effectively leverage complementary information at the instance level during inference. Extensive experiments conducted on audio-visual, flow-image, and front-rear view datasets show the superior performance of our proposed method. The code is released at https://github.com/fanyunfeng-bit/DI-MML.
Yuhao Pan, Xiucheng Wang, Zhiyao Xu, Nan Cheng, Wenchao Xu, Jun-jie Zhang
Unmanned Aerial Vehicles (UAVs), due to their low cost and high flexibility, have been widely used in various scenarios to enhance network performance. However, optimizing UAV trajectories in unknown areas, or in areas without sufficient prior information, still faces challenges of poor planning performance and limited distributed execution. These challenges arise when UAVs rely solely on their own observations and the information from other UAVs within their communication range, without access to global information. To address these challenges, this paper proposes the Qedgix framework, which combines graph neural networks (GNNs) and the QMIX algorithm to achieve distributed optimization of the Age of Information (AoI) for users in unknown scenarios. The framework utilizes GNNs to extract information from UAVs, users within the observable range, and other UAVs within the communication range, thereby enabling effective UAV trajectory planning. Due to the discrete and temporal features of AoI indicators, the Qedgix framework employs QMIX to optimize the decentralized partially observable Markov decision process (Dec-POMDP) with respect to the mean AoI of users, based on centralized training and distributed execution (CTDE). By modeling the UAV network optimization problem in terms of AoI and applying the Kolmogorov-Arnold representation theorem, the Qedgix framework achieves efficient neural network training through parameter sharing based on permutation invariance. Simulation results demonstrate that the proposed algorithm significantly improves convergence speed while reducing the mean AoI values of users. The code is available at https://github.com/UNIC-Lab/Qedgix.
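The quantity being minimized, the mean Age of Information, is easy to pin down with a toy update rule (one common convention; the paper's exact definition may differ): a user's age grows by one per slot and resets to one in any slot where a UAV collects a fresh update from that user.

```python
def mean_aoi(service_slots, horizon):
    """Time-averaged AoI of one user over `horizon` slots.
    Age increases by 1 each slot and resets to 1 when the user is
    served (illustrative convention)."""
    served = set(service_slots)
    age, total = 0, 0
    for t in range(1, horizon + 1):
        age = 1 if t in served else age + 1
        total += age
    return total / horizon

print(mean_aoi([3, 6, 9], 9))   # regular service keeps the average age low
print(mean_aoi([], 9))          # never served: age ramps up every slot
```

A trajectory planner like Qedgix effectively chooses, per slot, which users each UAV serves; the reward is built from the resulting per-user ages, which is why regular revisits beat occasional long visits.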
Nan Cheng, Wenchao Xu, Weisen Shi, Yi Zhou, Ning Lu, Haibo Zhou, Xuemin Shen
The ever-increasing mobile data demands have posed significant challenges in the current radio access networks, while the emerging computation-heavy Internet of things (IoT) applications with varied requirements demand more flexibility and resilience from the cloud/edge computing architecture. In this article, to address these issues, we propose a novel air-ground integrated mobile edge network (AGMEN), where UAVs are flexibly deployed and scheduled, and assist the communication, caching, and computing of the edge network. Specifically, we present the detailed architecture of AGMEN, and investigate the benefits and application scenarios of drone-cells, and UAV-assisted edge caching and computing. Furthermore, the challenging issues in AGMEN are discussed, and potential research directions are highlighted.
Yichen Li, Haozhao Wang, Wenchao Xu, Tianzhe Xiao, Hong Liu, Minzhu Tu, Yuying Wang, Xin Yang, Rui Zhang, Shui Yu, Song Guo, Ruixuan Li
Non-Centralized Continual Learning (NCCL) has become an emerging paradigm for enabling distributed devices such as vehicles and servers to handle streaming data from a joint non-stationary environment. To achieve high reliability and scalability in deploying this paradigm in distributed systems, it is essential to conquer challenges stemming from both spatial and temporal dimensions, manifesting as distribution shifts, catastrophic forgetting, heterogeneity, and privacy issues. This survey provides a comprehensive examination of the development of non-centralized continual learning algorithms and their real-world deployment across distributed devices. We begin with an introduction to the background and fundamentals of non-centralized learning and continual learning. Then, we review existing solutions at three levels to show how existing techniques alleviate catastrophic forgetting and distribution shift. Additionally, we delve into the various types of heterogeneity issues, security and privacy attributes, as well as real-world applications across three prevalent scenarios. Furthermore, we establish a large-scale benchmark to revisit this problem and analyze the performance of state-of-the-art NCCL approaches. Finally, we discuss the important challenges and future research directions in NCCL.
Yuhao Pan, Xiucheng Wang, Nan Cheng, Wenchao Xu
Radio frequency fingerprint identification (RFFI) is a critical technique for wireless network security, leveraging intrinsic hardware-level imperfections introduced during device manufacturing to enable precise transmitter identification. While deep neural networks have shown remarkable capability in extracting discriminative features, their real-world deployment is hindered by receiver-induced variability. In practice, RF fingerprint signals comprise transmitter-specific features as well as channel distortions and receiver-induced biases. Although channel equalization can mitigate channel noise, receiver-induced feature shifts remain largely unaddressed, causing the RFFI models to overfit to receiver-specific patterns. This limitation is particularly problematic when training and evaluation share the same receiver, as replacing the receiver in deployment can cause substantial performance degradation. To tackle this challenge, we propose an RFFI framework robust to cross-receiver variability, integrating adversarial training and style transfer to explicitly disentangle transmitter and receiver features. By enforcing domain-invariant representation learning, our method isolates genuine hardware signatures from receiver artifacts, ensuring robustness against receiver changes. Extensive experiments on multi-receiver datasets demonstrate that our approach consistently outperforms state-of-the-art baselines, achieving up to a 10% improvement in average accuracy across diverse receiver settings.
Minhui Zhu, Minyang Tian, Xiaocheng Yang, Tianci Zhou, Lifan Yuan, Penghao Zhu, Eli Chertkov, Shengyan Liu, Yufeng Du, Ziming Ji, Indranil Das, Junyi Cao, Yufeng Du, Jiabin Yu, Peixue Wu, Jinchen He, Yifan Su, Yikun Jiang, Yujie Zhang, Chang Liu, Ze-Min Huang, Weizhen Jia, Yunkai Wang, Farshid Jafarpour, Yong Zhao, Xinan Chen, Jessie Shelton, Aaron W. Young, John Bartolotta, Wenchao Xu, Yue Sun, Anjun Chu, Victor Colussi, Chris Akers, Nathan Brooks, Wenbo Fu, Jinchao Zhao, Marvin Qi, Anqi Mu, Yubo Yang, Allen Zang, Yang Lyu, Peizhi Mai, Christopher Wilson, Xuefei Guo, Juntai Zhou, Daniel Inafuku, Chi Xue, Luyu Gao, Ze Yang, Yaïr Hein, Yonatan Kahn, Kevin Zhou, Di Luo, John Drew Wilson, Jarrod T. Reilly, Dmytro Bandak, Ofir Press, Liang Yang, Xueying Wang, Hao Tong, Nicolas Chia, Eliu Huerta, Hao Peng
While large language models (LLMs) with reasoning capabilities are progressing rapidly on high-school math competitions and coding, can they reason effectively through complex, open-ended challenges found in frontier physics research? And crucially, what kinds of reasoning tasks do physicists want LLMs to assist with? To address these questions, we present CritPt (Complex Research using Integrated Thinking - Physics Test, pronounced "critical point"), the first benchmark designed to test LLMs on unpublished, research-level reasoning tasks that broadly cover modern physics research areas, including condensed matter, quantum physics, atomic, molecular & optical physics, astrophysics, high energy physics, mathematical physics, statistical physics, nuclear physics, nonlinear dynamics, fluid dynamics and biophysics. CritPt consists of 71 composite research challenges designed to simulate full-scale research projects at the entry level, which are also decomposed into 190 simpler checkpoint tasks for more fine-grained insights. All problems are newly created by 50+ active physics researchers based on their own research. Every problem is hand-curated to admit a guess-resistant and machine-verifiable answer and is evaluated by an automated grading pipeline heavily customized for advanced physics-specific output formats. We find that while current state-of-the-art LLMs show early promise on isolated checkpoints, they remain far from being able to reliably solve full research-scale challenges: the best average accuracy among base models is only 5.7%, achieved by GPT-5 (high), rising moderately to around 10% when equipped with coding tools. Through the realistic yet standardized evaluation offered by CritPt, we highlight a large disconnect between current model capabilities and realistic physics research demands, offering a foundation to guide the development of scientifically grounded AI tools.
Tamara Šumarac, Emily H. Qiu, Shai Tsesses, Peiran Niu, Adrian J. Menssen, Wenchao Xu, Valentin Walther, Uroš Delić, Soonwon Choi, Mikhail D. Lukin, Vladan Vuletić
Rydberg atoms represent a platform underpinning many recent developments in quantum computation, simulation, sensing, and metrology. They further facilitate optical nonlinearity at the single-photon level when coupled to photons propagating in atomic clouds, where the photons form collective atomic excitations, called Rydberg polaritons, that interact strongly with each other. Here, we experimentally explore interactions between a Rydberg polariton in an atomic ensemble and a single, adjacent Rydberg atom. We discover three different regimes of quantum dynamics, corresponding to polariton blockade, coherent exchange, and probabilistic hopping, which are defined by their distinct transmission characteristics, with a transition through an exceptional point occurring between blockade and coherent exchange. We investigate the applications of such interactions for fast, non-destructive detection of Rydberg atoms and present proof-of-principle demonstrations of their potential application in nonlinear photonic networks.