Qi Zhao, You Zhou
Bell inequalities with the self-testing property play an important role in quantum information, with both fundamental and practical applications. However, it is generally challenging to find self-testing Bell inequalities for multipartite states, and few candidates are known. In this work, we propose a systematic framework for constructing Bell inequalities from stabilizers, which are maximally violated by general stabilizer states, with two observables for each local party. We show that the constructed Bell inequalities can self-test any stabilizer state, which is essentially device-independent, if and only if these stabilizers can uniquely determine the state in a device-dependent manner. This bridges the gap between device-independent and device-dependent verification methods. Our framework provides a wealth of Bell inequalities for self-testing stabilizer states. Among them, we give two families with complementary advantages: (1) Bell inequalities with a constant ratio between the quantum and classical bounds using 2N correlations, and (2) single-pair inequalities improving on all previous robustness self-testing bounds using N+1 correlations. Both are efficient and suitable for realization in multipartite systems. Our framework can not only inspire more fruitful multipartite Bell inequalities from conventional verification methods, but also pave the way for their practical applications.
Qi Zhao, Zheng Zhao, Xiaoya Fan, Zhengwei Yuan, Qian Mao, Yudong Yao
Secondary structure plays an important role in determining the function of non-coding RNAs. Hence, identifying RNA secondary structures is of great value to research. Computational prediction is a mainstream approach for predicting RNA secondary structure. Unfortunately, even though new methods have been proposed over the past 40 years, the performance of computational prediction methods has stagnated in the last decade. Recently, with the increasing availability of RNA structure data, new methods based on machine-learning technologies, especially deep learning, have alleviated the issue. In this review, we provide a comprehensive overview of RNA secondary structure prediction methods based on machine-learning technologies and a tabularized summary of the most important methods in this field. The current pending issues in the field of RNA secondary structure prediction and future trends are also discussed.
Ming-Han Li, Xingjian Zhang, Wen-Zhao Liu, Si-Ran Zhao, Bing Bai, Yang Liu, Qi Zhao, Yuxiang Peng, Jun Zhang, Yanbao Zhang, William J. Munro, Xiongfeng Ma, Qiang Zhang, Jingyun Fan, Jian-Wei Pan
Feb 20, 2019 · quant-ph
Randomness expansion, where one generates a longer sequence of random numbers from a short one, is viable in quantum mechanics but not allowed classically. Device-independent quantum randomness expansion provides a randomness resource of the highest security level. Here, we report the first experimental realization of device-independent quantum randomness expansion secure against quantum side information, established through quantum probability estimation. We generate $5.47\times10^8$ quantum-proof random bits while consuming $4.39\times10^8$ bits of entropy, expanding our store of randomness by $1.08\times10^8$ bits at a latency of about $13.1$ h, with a total soundness error of $4.6\times10^{-10}$. Device-independent quantum randomness expansion not only enriches our understanding of randomness but also lays a solid foundation for bringing quantum-certifiable random bits into realistic applications.
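The expansion figures quoted above can be checked with a line of arithmetic; the net bit rate below is our own derived figure, not one stated in the abstract:

```python
# Sanity-check the net randomness expansion reported above.
output_bits = 5.47e8   # quantum-proof random bits generated
input_bits = 4.39e8    # bits of input entropy consumed
latency_h = 13.1       # total running time in hours

net_bits = output_bits - input_bits       # matches the quoted 1.08e8
net_rate = net_bits / (latency_h * 3600)  # derived: net bits per second

print(f"net expansion: {net_bits:.3g} bits")  # 1.08e+08 bits
print(f"net rate: {net_rate:.0f} bits/s")
```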
Senrui Chen, Xingjian Zhang, You Zhou, Qi Zhao
Jun 21, 2019 · quant-ph
The resource theory of quantum coherence is an important topic in quantum information science. Standard coherence distillation and dilution problems have been thoroughly studied. In this paper, we introduce and study the problem of one-shot coherence distillation with catalysts. To distill more coherence from a state of interest, a catalytic system can be involved and a jointly free operation is applied to both systems. The joint output state should be a maximally coherent state in tensor product with the unchanged catalysts, up to some allowable fidelity error. We consider several different definitions of this problem. First, allowing a small fidelity error in both systems, we show that, even via the smallest free operation class, physically incoherent operations (PIO), the distillable coherence of any state with no restriction on the catalysts is infinite, a "coherence embezzling phenomenon". We then define and calculate a lower bound for the distillable coherence when the dimension of the catalysts is restricted. Finally, in consideration of physical relevance, we define the "perfect catalysts" scenario, where the catalysts are required to be pure and precisely unchanged. Interestingly, we show that in this setting catalysts provide essentially no advantage in pure-state distillation via incoherent operations (IO) and strictly incoherent operations (SIO) under a certain smoothing restriction. Our work enhances the understanding of catalytic effects in quantum resource theory.
You Zhou, Qi Zhao, Xiao Yuan, Xiongfeng Ma
Apr 10, 2019 · quant-ph
Recently, there have been tremendous developments in the number of controllable qubits in several quantum computing systems. For these implementations, it is crucial to determine the entanglement structure of the prepared multipartite quantum state as a basis for further information processing tasks. In reality, evaluating a multipartite state is in general very challenging owing to the exponential growth of the Hilbert space with the number of system components. In this work, we propose a systematic method using very few local measurements to detect multipartite entanglement structures based on graph states, one of the most important classes of quantum states for quantum information processing. Thanks to the close connection between the Schmidt coefficients and quantum entropy in graph states, we develop a family of efficient witness operators that detect the entanglement between subsystems under any partition, and hence the entanglement intactness. We show that the number of local measurements equals the chromatic number of the underlying graph, which is a constant, independent of the number of qubits. In practice, the optimization problem involved in the witnesses can be challenging for large system sizes. For several widely used graph states, such as 1-D and 2-D cluster states and the Greenberger-Horne-Zeilinger state, by taking advantage of the area law of entanglement entropy, we derive analytical solutions for the witnesses, which employ only two local measurements. Our method offers a standard tool for entanglement structure detection to benchmark multipartite quantum systems.
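The chromatic-number claim can be illustrated with a simple greedy colouring, which upper-bounds the chromatic number and is exact for the bipartite path graph below. The helper function is our own sketch, not code from the paper:

```python
# Greedy graph colouring: upper-bounds the chromatic number, which per
# the abstract equals the number of local measurement settings needed
# for the graph-state entanglement witness.
def greedy_coloring(edges, n):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n):  # colour vertices in index order
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return color

# 1-D cluster state: a path graph on 6 qubits needs only 2 colours,
# matching the two local measurement settings mentioned above.
path_edges = [(i, i + 1) for i in range(5)]
print(len(set(greedy_coloring(path_edges, 6).values())))  # 2
```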
Qi Zhao, Huanhao Li, Zhipeng Yu, Chi Man Woo, Tianting Zhong, Shengfu Cheng, Yuanjin Zheng, Honglin Liu, Jie Tian, Puxiang Lai
Face recognition has recently become ubiquitous in many scenarios for authentication or security purposes. Meanwhile, there are increasing concerns about the privacy of face images, which are sensitive biometric data that should be carefully protected. Software-based cryptosystems are widely adopted to encrypt face images, but their security level is limited by insufficient digital secret-key length or computing power. Hardware-based optical cryptosystems can generate enormously longer secret keys and enable encryption at the speed of light, but most reported optical methods, such as double random phase encryption, are less compatible with other systems due to their complexity. In this study, a simple yet highly efficient speckle-based optical cryptosystem is proposed and implemented. A scattering ground glass is exploited to generate physical secret keys of gigabit length and to encrypt face images into seemingly random optical speckles at the speed of light. Face images can then be decrypted from the random speckles by a well-trained decryption neural network, such that face recognition can be realized with up to 98% accuracy. The proposed cryptosystem has wide applicability, and it may open a new avenue for high-security complex information encryption and decryption using optical speckles.
He-Liang Huang, Qi Zhao, Xiongfeng Ma, Chang Liu, Zu-En Su, Xi-Lin Wang, Li Li, Nai-Le Liu, Barry C. Sanders, Chao-Yang Lu, Jian-Wei Pan
To date, blind quantum computing demonstrations have required clients to possess weak quantum devices. Here we implement a proof-of-principle experiment for completely classical clients. By classically interacting with two quantum servers that share entanglement, the client accomplishes the task of having the number 15 factorized by servers who are denied information about the computation itself. This concealment is accompanied by a verification protocol that tests the servers' honesty and correctness. Our demonstration shows the feasibility of completely classical clients and is thus a key milestone towards secure cloud quantum computing.
Qi Zhao, Christian Wressnegger
Neural networks can be drastically shrunk in size by removing redundant parameters. While crucial for deployment on resource-constrained hardware, compression often comes with a severe drop in accuracy and a lack of adversarial robustness. Despite recent advances, counteracting both aspects has so far succeeded only for moderate compression rates. We propose a novel method, HARP, that copes with aggressive pruning significantly better than prior work. For this, we consider the network holistically: we learn a global compression strategy that optimizes how many parameters (compression rate) and which parameters (scoring connections) to prune, specific to each layer individually. Our method fine-tunes an existing model with a dynamic regularization that follows a step-wise incremental function balancing the different objectives: it starts by favoring robustness, then shifts focus to reaching the target compression rate, and only then handles both objectives equally. The learned compression strategies allow us to maintain the pre-trained model's natural accuracy and adversarial robustness while reducing the network to 1% of its original size. Moreover, we observe a crucial influence of non-uniform compression across layers.
Qi Zhao, You Zhou, Andrew M. Childs
Quantum entanglement is an essential feature of many-body systems that impacts both quantum information processing and fundamental physics. The growth of entanglement is a major challenge for classical simulation methods. In this work, we investigate the relationship between quantum entanglement and quantum simulation, showing that product-formula approximations can perform better for entangled systems. We establish a tighter upper bound for algorithmic error in terms of entanglement entropy and develop an adaptive simulation algorithm incorporating measurement gadgets to estimate the algorithmic error. This shows that entanglement is not only an obstacle to classical simulation, but also a feature that can accelerate quantum algorithms.
Qi Zhao, Gerui Wang, Xiao Yuan, Xiongfeng Ma
Feb 21, 2019 · quant-ph
Entanglement is a key resource for quantum information processing. A widely used tool for detecting entanglement is the entanglement witness, an operator whose measured expectation value is guaranteed to be positive for all separable states and can be negative for certain entangled states. In reality, due to the exponential increase of the Hilbert-space dimension with the system size, it is very challenging to construct an efficient entanglement witness for general multipartite entangled states. For $N$-partite Greenberger-Horne-Zeilinger (GHZ)-like states, the most robust witness scheme requires $N+1$ local measurement settings and can tolerate up to $1/2$ white noise. By comparison, the most efficient witness for GHZ-like states needs only two local measurement settings and can tolerate up to $1/3$ white noise. There is thus a trade-off between realization efficiency (the number of measurement settings) and detection robustness (the maximal tolerable white noise). In this work, we study this trade-off by proposing a family of entanglement witnesses with $k$ ($2\le k\le N+1$) local measurement settings. Considering symmetric local measurements, we calculate the maximal tolerable noise for any given number of measurement settings. Consequently, we design the optimal witness with a minimal number of settings for any given level of white noise. Our theoretical analysis can be applied to other multipartite entangled states with strong symmetry. Our witnesses can be easily implemented in experiments and applied to practical multipartite entanglement detection under different noise conditions.
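The quoted $1/2$ noise tolerance can be reproduced from the standard GHZ fidelity witness $W = \mathbb{1}/2 - |\mathrm{GHZ}\rangle\langle\mathrm{GHZ}|$; this is a sketch under the assumption that this fidelity witness is the $N+1$-setting scheme referenced above:

```python
# White-noise threshold of the GHZ fidelity witness W = I/2 - |GHZ><GHZ|.
# A state rho = (1-p)|GHZ><GHZ| + p*I/2^N is detected while its fidelity
# (1-p) + p/2^N exceeds 1/2, i.e. while p is below the value returned here.
def noise_threshold(n_qubits):
    d = 2 ** n_qubits  # Hilbert-space dimension
    return 0.5 / (1.0 - 1.0 / d)

for n in (2, 4, 8, 16):
    print(n, round(noise_threshold(n), 4))  # approaches 1/2 as N grows
```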
Mark Um, Qi Zhao, Junhua Zhang, Pengfei Wang, Ye Wang, Mu Qiao, Hongyi Zhou, Xiongfeng Ma, Kihwan Kim
The output randomness of a random number generator can be certified by observing the violation of quantum contextuality inequalities based on the Kochen-Specker theorem. Contextuality can be tested in a single quantum system, which significantly simplifies the experimental requirements for observing the violation compared to tests based on nonlocality. However, it has not yet been resolved how to ensure the compatibility of the sequential measurements required in contextuality tests. Here, we employ a modified Klyachko-Can-Binicioğlu-Shumovsky contextuality inequality, which eases the strict compatibility requirement on measurements. On a single trapped barium-ion system, we experimentally demonstrate violation of the contextuality inequality and realize self-testing quantum random number expansion by closing detection loopholes. We perform $1.29 \times 10^8$ trials and extract $8.06 \times 10^5$ bits of randomness at a speed of 270 bits s$^{-1}$. Our demonstration paves the way for practical high-speed spot-checking quantum random number expansion and other secure information processing applications.
Qi Zhao, Nicola Tisato, Aly Abdelaziz, Johnson Ha, Giovanni Grasselli
Understanding rock shear failure behavior is crucial to gaining insights into slip-related geohazards such as rock avalanches, landslides, and earthquakes. However, descriptions of the progressive damage on the shear surface are still incomplete or ambiguous. In this study, we use the hybrid finite-discrete element method (FDEM) to simulate a shear experiment and obtain a detailed understanding of shear-induced progressive damage and the associated seismic activity. We built a laboratory fault model from high-resolution surface scans and micro-CT imaging. Our results show that under quasi-static shear loading, the fault surface experiences local dynamic seismic activity. We found that the seismic activity is related to stress concentration on interlocking asperities. This interlocking behavior (i) causes stress concentration at the region of contact that can reach the compressive strength, and (ii) produces tensile stress up to the tensile strength in the region adjacent to the contact area. Thus, different failure mechanisms and damage patterns, including crushing and sub-vertical fracturing, are observed on the rough surface. Asperity failure creates rapid local slips, resulting in significant stress perturbations that alter the overall stress condition and may trigger the slip of adjacent critically stressed asperities. We found that the spatial distribution of the damaged asperities and the seismic activity is highly heterogeneous; regions with intense asperity interactions form gouge material, while others exhibit minimal to no damage. These results emphasize the important role of surface roughness in controlling the overall shear behavior and the local dynamic seismic activity on faults.
Qi Zhao, Xiao Yuan, Xiongfeng Ma
Jul 27, 2016 · quant-ph
Witnessing entanglement is crucial in quantum information processing. By properly preparing ancillary states, it has been shown previously that genuine entanglement can be witnessed without trusting the measurement devices. In this work, we generalize this scenario and show that generic multipartite entanglement structures, including entanglement of subsystems and entanglement depth, can be witnessed by measurement-device-independent means. As the original measurement-device-independent entanglement witness scheme exploits only one out of four Bell-measurement outcomes for each party, a direct generalization to multipartite quantum states inevitably causes inefficiency in entanglement detection once statistical fluctuations are taken into account. To resolve this problem, we also present a way to utilize all the measurement outcomes. The scheme is efficient for multipartite entanglement detection and can be realized with state-of-the-art technologies.
Xiao Yuan, Qi Zhao, Xiongfeng Ma
May 16, 2015 · quant-ph
The Bell test is one of the most important tools in quantum information science. On the one hand, it enables fundamental tests of the laws of nature; on the other hand, it can also be applied in a variety of device-independent tasks such as quantum key distribution and random number generation. In practice, loopholes in experimental demonstrations of Bell tests may affect the validity of the conclusions. In this work, we focus on the randomness (free-will) loophole and investigate the randomness requirement in a well-known Bell test, the Clauser-Horne test, under various conditions. With partially random inputs, we explicitly bound the Bell value for all local hidden variable models by optimizing the classical strategy. Our result thus quantifies the input-randomness requirement of the Clauser-Horne test under a variety of practical scenarios. The employed analysis technique can be generalized to other Bell inequalities.
Xiao Yuan, Qi Zhao, Davide Girolami, Xiongfeng Ma
May 25, 2016 · quant-ph
The peculiar uncertainty or randomness of quantum measurements stems from coherence, whose information-theoretic characterization is currently under investigation. Within the resource theory of coherence, it is interesting to investigate interpretations of coherence measures and their interplay with other quantum properties, such as quantum correlations and intrinsic randomness. Coherence can be viewed as the resource for the intrinsic randomness of the measurement outcomes of a state in the computational basis. We observed in our previous work that the coherence of formation, which measures the asymptotic coherence dilution rate, indeed quantifies the uncertainty of a (classically) correlated party about the system's measurement outcome. In this work, we re-derive this result from a quantum point of view and then connect the intrinsic randomness to the relative entropy of coherence, another important coherence measure, which quantifies the asymptotic distillable coherence. Even though bound coherent states do not exist, the two notions of intrinsic randomness quantified by the coherence of formation and the relative entropy of coherence are different. Interestingly, we show that this gap equals the quantum discord, a general form of quantum correlations, in the state of the system of interest and the correlated party after a local measurement on the former system.
Yan-Lin Tang, Hua-Lei Yin, Qi Zhao, Hui Liu, Xiang-Xiang Sun, Ming-Qi Huang, Wei-Jun Zhang, Si-Jing Chen, Lu Zhang, Li-Xing You, Zhen Wang, Yang Liu, Chao-Yang Lu, Xiao Jiang, Xiongfeng Ma, Qiang Zhang, Teng-Yun Chen, Jian-Wei Pan
Sep 28, 2015 · quant-ph
Quantum cryptography holds the promise of establishing an information-theoretically secure global network. All field tests of metropolitan-scale quantum networks to date have been based on trusted relays. Their security critically relies on the accountability of the trusted relays, which breaks down if a relay is dishonest or compromised. Here, we construct a measurement-device-independent quantum key distribution (MDI-QKD) network in a star topology over a 200-square-kilometer metropolitan area, which is secure against untrusted relays and against all detection attacks. In the field test, our system ran continuously for one week with a secure key rate ten times larger than previous results. Our results demonstrate that the MDI-QKD network, combining security and practicality, constitutes an appealing solution for securing metropolitan communications.
Qi Zhao, Xingyu Ni, Ziyu Wang, Feng Cheng, Ziyan Yang, Lu Jiang, Bohan Wang
We investigate how to enhance the physical fidelity of video generation models by leveraging synthetic videos derived from computer graphics pipelines. These rendered videos respect real-world physics, such as maintaining 3D consistency, and serve as a valuable resource that can potentially improve video generation models. To harness this potential, we propose a solution that curates and integrates synthetic data while introducing a method to transfer its physical realism to the model, significantly reducing unwanted artifacts. Through experiments on three representative tasks emphasizing physical consistency, we demonstrate its efficacy in enhancing physical fidelity. While our model still lacks a deep understanding of physics, our work offers one of the first empirical demonstrations that synthetic video enhances physical fidelity in video synthesis. Website: https://kevinz8866.github.io/simulation/
Giorgio Piras, Qi Zhao, Fabio Brau, Maura Pintor, Christian Wressnegger, Battista Biggio
Adversarial pruning methods have emerged as a powerful tool for compressing neural networks while preserving robustness against adversarial attacks. These methods typically follow a three-step pipeline: (i) pretrain a robust model, (ii) select a binary mask for weight pruning, and (iii) finetune the pruned model. To select the binary mask, these methods minimize a robust loss by assigning an importance score to each weight, and then keep the weights with the highest scores. However, this score-space optimization can lead to sharp local minima in the robust loss landscape and, in turn, to an unstable mask selection, reducing the robustness of adversarial pruning methods. To overcome this issue, we propose a novel plug-in method for adversarial pruning, termed Score-space Sharpness-aware Adversarial Pruning (S2AP). Through our method, we introduce the concept of score-space sharpness minimization, which operates during the mask search by perturbing importance scores and minimizing the corresponding robust loss. Extensive experiments across various datasets, models, and sparsity levels demonstrate that S2AP effectively minimizes sharpness in score space, stabilizing the mask selection, and ultimately improving the robustness of adversarial pruning methods.
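Step (ii) of the pipeline above can be sketched in a few lines; here the importance scores are simply weight magnitudes, a common stand-in for the learned, robust-loss-driven scores the papers discussed here optimize:

```python
import numpy as np

def topk_mask(weights, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of weights
    with the highest importance scores (here: |w|)."""
    scores = np.abs(weights)
    k = int(round((1.0 - sparsity) * weights.size))   # number of weights kept
    threshold = np.sort(scores.ravel())[::-1][k - 1]  # k-th largest score
    return (scores >= threshold).astype(weights.dtype)

w = np.array([0.1, -2.0, 0.03, 1.5, -0.4, 0.0])
mask = topk_mask(w, sparsity=0.5)  # prune half of the weights
print(w * mask)  # only the three largest-magnitude weights survive
```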
Zhuoyuan Ma, Qi Zhao, Jin Zhang, Bai Yan
Aerial simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) enable full-space coverage in dynamic wireless networks. However, most existing works assume fixed user grouping, overlooking the fact that STAR-RIS deployment inherently determines whether users are served via transmission or reflection. To address this, we propose a joint deployment and beamforming framework in which an aerial STAR-RIS dynamically adjusts its location and orientation to adaptively control user grouping and enhance hybrid beamforming. We formulate a Markov decision process (MDP) capturing the coupling among deployment, grouping, and signal design. To solve the resulting non-convex and time-varying problem, we develop a PPO-based reinforcement learning algorithm that adaptively balances user grouping and beamforming resources through online policy learning. Simulation results show 57.1\% and 285\% sum-rate gains over fixed-deployment and RIS-free baselines, respectively, demonstrating the benefit of user-grouping-aware control in STAR-RIS-aided systems.
Qi Zhao, M. Salman Asif, Zhan Ma
The primary focus of Neural Representation for Videos (NeRV) is to effectively model the spatiotemporal consistency of video. However, current NeRV systems often suffer from significant spatial inconsistency, leading to decreased perceptual quality. To address this issue, we introduce the Pyramidal Neural Representation for Videos (PNeRV), which is built on a multi-scale information connection and comprises a lightweight rescaling operator, the Kronecker Fully-connected layer (KFc), and a Benign Selective Memory (BSM) mechanism. The KFc, inspired by the tensor decomposition of the vanilla fully-connected layer, facilitates low-cost rescaling and global correlation modeling. BSM adaptively merges high-level features with granular ones. Furthermore, we provide an analysis based on the universal approximation theory of the NeRV system and validate the effectiveness of the proposed PNeRV. We conducted comprehensive experiments demonstrating that PNeRV surpasses contemporary NeRV models, achieving the best video-regression results on UVG and DAVIS under various metrics (PSNR, SSIM, LPIPS, and FVD). Compared to vanilla NeRV, PNeRV achieves a +4.49 dB gain in PSNR and a 231% improvement in FVD on UVG, along with a +3.28 dB PSNR gain and a 634% FVD improvement on DAVIS.
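The KFc layer is described as arising from a tensor decomposition of the vanilla fully-connected layer. Below is a minimal sketch of the Kronecker-product identity that makes such a layer cheap, $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\top})$; the exact KFc formulation in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, q, n = 3, 4, 2, 5
A = rng.normal(size=(p, m))   # small factor matrices replacing one
B = rng.normal(size=(q, n))   # big (p*q) x (m*n) dense weight matrix
X = rng.normal(size=(n, m))   # input, reshaped to a matrix

x = X.flatten(order="F")                   # vec(X), column-stacked
y_full = np.kron(A, B) @ x                 # naive: forms the full Kronecker matrix
y_fact = (B @ X @ A.T).flatten(order="F")  # factored: two small matmuls only

assert np.allclose(y_full, y_fact)         # identical outputs
```

The factored form stores p*m + q*n parameters instead of p*q*m*n, which is what enables the low-cost rescaling mentioned above.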