Ilia Rushkin, Isaac Chuang, Dustin Tingley
Each time a learner in a self-paced online course attempts an assessment question, some time elapses while the learner reads the question and arrives at an answer to submit. If multiple attempts are allowed and the first answer is incorrect, it takes additional time to provide a second answer. Here we study the distribution of such "response times." We find that the log-normal statistical model for such times, previously suggested in the literature, holds for online courses. Users who, according to this model, tend to take longer on submissions are more likely to complete the course, have a higher level of engagement, and achieve a higher grade. This finding can serve as the basis for designing interventions in online courses, such as MOOCs, that would encourage "fast" users to slow down.
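Under the log-normal model described in this abstract, the logarithms of a user's response times are normally distributed, so a per-user "slowness" parameter can be estimated from the sample mean of the log-times. A minimal sketch (the function name `fit_lognormal` and the synthetic data are illustrative, not from the paper):

```python
import math
import random

def fit_lognormal(times):
    """Fit a log-normal model to response times by taking the sample
    mean and standard deviation of the log-times (the maximum-likelihood
    estimates of the underlying normal's mu and sigma)."""
    logs = [math.log(t) for t in times]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / n
    return mu, math.sqrt(var)

random.seed(0)
# Synthetic "response times" for one user: log-normal with mu=3, sigma=0.8.
times = [random.lognormvariate(3.0, 0.8) for _ in range(10_000)]
mu_hat, sigma_hat = fit_lognormal(times)
```

The recovered `mu_hat` would serve as the user-level tendency the abstract correlates with completion, engagement, and grade.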
Isaac Chuang
May 22, 2000 · quant-ph
The clock synchronization problem is to determine the time difference $\Delta$ between two spatially separated clocks. When message delivery times between the two clocks are uncertain, $O(2^{2n})$ classical messages must be exchanged between the clocks to determine $n$ digits of $\Delta$. On the other hand, as we show, there exists a quantum algorithm that obtains $n$ digits of $\Delta$ while communicating only $O(n)$ quantum messages.
Robert McConnell, Guang Hao Low, Theodore J. Yoder, Colin D. Bruzewicz, Isaac L. Chuang, John Chiaverini, Jeremy M. Sage
Classical imaging works by scattering photons from an object to be imaged, and achieves resolution scaling as $1/\sqrt{t}$, with $t$ the imaging time. By contrast, the laws of quantum mechanics allow one to utilize quantum coherence to obtain imaging resolution that can scale as quickly as $1/t$ -- the so-called "Heisenberg limit." However, ambiguities in the obtained signal often preclude taking full advantage of this quantum enhancement, while imaging techniques designed to be unambiguous often lose this optimal Heisenberg scaling. Here, we demonstrate an imaging technique which combines unambiguous detection of the target with Heisenberg scaling of the resolution. We also demonstrate a binary search algorithm which can efficiently locate a coherent target using the technique, resolving a target trapped ion to within 0.3% of the $1/e^2$ diameter of the excitation beam.
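The binary search mentioned here halves the candidate interval on each unambiguous detection, locating the target in a number of queries logarithmic in the desired resolution. A classical skeleton of that search logic (the paper's detector is a quantum measurement on a trapped ion, not modeled here; `detect`, `locate`, and the tolerance are illustrative):

```python
def locate(detect, lo, hi, tol):
    """Binary-search for a target position in [lo, hi] using an
    unambiguous detector: detect(x) returns True iff the target lies
    at or below x. Each query halves the remaining interval."""
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        queries += 1
        if detect(mid):
            hi = mid      # target is in the lower half
        else:
            lo = mid      # target is in the upper half
    return (lo + hi) / 2, queries

target = 0.3217
pos, q = locate(lambda x: target <= x, 0.0, 1.0, 1e-4)
```

Resolving the target to within a fraction `tol` of the search range costs about `log2(1/tol)` detector queries, which is what makes the technique efficient.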
Isaac L. Chuang, Andrew W. Cross, Graeme Smith, John A. Smolin, Bei Zeng
Mar 21, 2008 · quant-ph
The codeword stabilized ("CWS") quantum codes formalism presents a unifying approach to both additive and nonadditive quantum error-correcting codes (arXiv:0708.1021). This formalism reduces the problem of constructing such quantum codes to finding a binary classical code that corrects an error pattern induced by a graph state. Finding such a classical code can be very difficult. Here, we consider an algorithm which maps the search for CWS codes to the problem of identifying maximum cliques in a graph. While this problem is in general very hard, we prove three structure theorems which reduce the search space, specifying certain admissible and optimal ((n,K,d)) additive codes. In particular, we find that no ((7,3,3)) CWS code exists, even though the linear programming bound does not rule it out. The complexity of the CWS search algorithm is compared with that of an alternative method introduced by Aggarwal and Calderbank (arXiv:cs/0610159).
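The reduction above turns code search into maximum-clique finding, an NP-hard problem. For intuition only, here is an exhaustive maximum-clique search on a toy graph (exponential time; the paper's structure theorems exist precisely to shrink this kind of search space — this sketch does not implement them):

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Exhaustive maximum-clique search: try subsets from largest to
    smallest and return the first one whose vertices are pairwise
    adjacent. edges is a set of frozenset pairs {u, v}."""
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                return list(subset)
    return []

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3.
V = [0, 1, 2, 3]
E = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (2, 3)]}
clique = max_clique(V, E)
```

For realistic code searches one would substitute a dedicated clique solver; the brute force here only illustrates what object the reduction asks for.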
Xie Chen, Bei Zeng, Zhengcheng Gu, Beni Yoshida, Isaac L. Chuang
Dec 21, 2008 · quant-ph
Many-body entangled quantum states studied in condensed matter physics can be primary resources for quantum information, allowing any quantum computation to be realized using measurements alone on the state. Such a universal state would be remarkably valuable, if only it were thermodynamically stable and experimentally accessible, by virtue of being the unique ground state of a physically reasonable Hamiltonian made of two-body, nearest-neighbor interactions. We introduce such a state, composed of six-state particles on a hexagonal lattice, and describe a general method for analyzing its properties based on its projected entangled pair state representation.
Ryuji Takagi, Theodore J. Yoder, Isaac L. Chuang
Jun 30, 2017 · quant-ph
A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (CCZ), are equivalent options that are also better suited for implementing some quantum algorithms, for instance those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical CCZ with pieceable fault tolerance, a non-transversal method for implementing logical gates. We provide a comparison with a non-local magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and, in certain regimes, over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.
John Blue, Harshil Avlani, Zhiyang He, Liu Ziyin, Isaac L. Chuang
Apr 17, 2025 · quant-ph
Fault-tolerant quantum computers will depend crucially on the performance of the classical decoding algorithm, which takes in measurement results and outputs corrections for the errors inferred to have occurred. Machine learning models have shown great promise as decoders for the surface code; however, this promise has not yet been substantiated for the more challenging task of decoding quantum low-density parity-check (QLDPC) codes. In this paper, we present a recurrent, transformer-based neural network designed to decode circuit-level noise on Bivariate Bicycle (BB) codes, introduced recently by Bravyi et al. (Nature 627, 778-782, 2024). For the $[[72,12,6]]$ BB code, at a physical error rate of $p=0.1\%$, our model achieves a logical error rate almost $5$ times lower than belief propagation with ordered statistics decoding (BP-OSD). Moreover, while BP-OSD has a wide distribution of runtimes with significant outliers, our model has a consistent runtime and is an order of magnitude faster than the worst-case times from a benchmark BP-OSD implementation. On the $[[144,12,12]]$ BB code, our model obtains worse logical error rates but maintains the speed advantage. These results demonstrate that machine learning decoders can outperform conventional decoders on QLDPC codes in regimes of current interest.
Christopher S. Wang, Jacob C. Curtis, Brian J. Lester, Yaxing Zhang, Yvonne Y. Gao, Jessica Freeze, Victor S. Batista, Patrick H. Vaccaro, Isaac L. Chuang, Luigi Frunzio, Liang Jiang, S. M. Girvin, Robert J. Schoelkopf
The efficient simulation of quantum systems is a primary motivating factor for developing controllable quantum machines. For addressing systems with underlying bosonic structure, it is advantageous to utilize a naturally bosonic platform. Optical photons passing through linear networks may be configured to perform quantum simulation tasks, but the efficient preparation and detection of multiphoton quantum states of light in linear optical systems are challenging. Here, we experimentally implement a boson sampling protocol for simulating molecular vibronic spectra [Nature Photonics $\textbf{9}$, 615 (2015)] in a two-mode superconducting device. In addition to enacting the requisite set of Gaussian operations across both modes, we fulfill the scalability requirement by demonstrating, for the first time in any platform, a high-fidelity single-shot photon number resolving detection scheme capable of resolving up to 15 photons per mode. Furthermore, we exercise the capability of synthesizing non-Gaussian input states to simulate spectra of molecular ensembles in vibrational excited states. We show the re-programmability of our implementation by extracting the spectra of photoelectron processes in H$_2$O, O$_3$, NO$_2$, and SO$_2$. The capabilities highlighted in this work establish the superconducting architecture as a promising platform for bosonic simulations, and by combining them with tools such as Kerr interactions and engineered dissipation, enable the simulation of a wider class of bosonic systems.
Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang
Noisy PN learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate rho1 for positive examples and rho0 for negative examples. We propose Rank Pruning (RP) to solve noisy PN learning and the open problem of estimating the noise rates, i.e. the fraction of wrong positive and negative labels. Unlike prior solutions, RP is time-efficient and general, requiring O(T) for any unrestricted choice of probabilistic classifier with T fitting time. We prove that RP achieves consistent noise estimation and equivalent expected risk to learning with uncorrupted labels in ideal conditions, and derive closed-form solutions when conditions are non-ideal. RP achieves state-of-the-art noise estimation and F1, error, and AUC-PR on both the MNIST and CIFAR datasets, regardless of the amount of noise, and performs similarly well when a large portion of training examples are noise drawn from a third distribution. As a highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not one" with only 0.25% error, and with 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.
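The core pruning idea can be sketched in a few lines: rank examples by a probabilistic classifier's score and drop the observed positives ranked least positive and the observed negatives ranked most positive, since those are the most likely to be flipped. This is a simplification, with the pruning fractions taken as given rather than estimated via the paper's thresholds, and `rank_prune` and the toy data are illustrative names:

```python
def rank_prune(probs, labels, rho1, rho0):
    """Drop the fraction rho1 of noisy positives the classifier ranks
    least positive, and the fraction rho0 of noisy negatives ranked
    most positive. Returns indices of examples to keep for retraining."""
    pos = sorted((i for i, s in enumerate(labels) if s == 1),
                 key=lambda i: probs[i])                  # ascending score
    neg = sorted((i for i, s in enumerate(labels) if s == 0),
                 key=lambda i: probs[i], reverse=True)    # descending score
    drop = set(pos[:int(rho1 * len(pos))]) | set(neg[:int(rho0 * len(neg))])
    return [i for i in range(len(labels)) if i not in drop]

# Four noisy positives and four noisy negatives; one of each is flipped
# (index 3 looks negative, index 7 looks positive).
probs  = [0.9, 0.8, 0.7, 0.1, 0.2, 0.3, 0.1, 0.95]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
keep = rank_prune(probs, labels, rho1=0.25, rho0=0.25)
```

A classifier retrained on the surviving indices then approximates learning with clean labels, which is the risk-equivalence result the abstract states.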
Thomas Monz, Daniel Nigg, Esteban A. Martinez, Matthias F. Brandl, Philipp Schindler, Richard Rines, Shannon X. Wang, Isaac L. Chuang, Rainer Blatt
Jul 31, 2015 · quant-ph
Certain algorithms for quantum computers are able to outperform their classical counterparts. This was long recognized by the visionary Richard Feynman, who pointed out in the 1980s that quantum mechanical problems were better solved with quantum machines. It was only in 1994 that Peter Shor came up with an algorithm able to calculate the prime factors of a large number vastly more efficiently than is known to be possible with a classical computer. This paradigmatic algorithm stimulated the flourishing research in quantum information processing and the quest for an actual implementation of a quantum computer. Over the last fifteen years, using skillful optimizations, several instances of a Shor algorithm have been implemented on various platforms and clearly proved the feasibility of quantum factoring. For general scalability, though, a different approach has to be pursued. Here, we report the realization of a fully scalable Shor algorithm as proposed by Kitaev. For this, we demonstrate factoring the number fifteen by effectively employing and controlling seven qubits and four "cache qubits", together with the implementation of generalized arithmetic operations known as modular multipliers. The scalable algorithm has been realized with an ion-trap quantum computer exhibiting success probabilities in excess of 90%.
Tailin Wu, John Peurifoy, Isaac L. Chuang, Max Tegmark
Compared to humans, machine learning models generally require significantly more training examples and fail to extrapolate from experience to solve previously unseen challenges. To help close this performance gap, we augment single-task neural networks with a meta-recognition model which learns a succinct model code via its autoencoder structure, using just a few informative examples. The model code is then employed by a meta-generative model to construct parameters for the task-specific model. We demonstrate that for previously unseen tasks, without additional training, this Meta-Learning Autoencoder (MeLA) framework can build models that closely match the true underlying models, with loss significantly lower than given by fine-tuned baseline networks, and performance that compares favorably with state-of-the-art meta-learning algorithms. MeLA also adds the ability to identify influential training examples and predict which additional data will be most valuable to acquire to improve model prediction.
Curtis G. Northcutt, Andrew D. Ho, Isaac L. Chuang
We describe a cheating strategy enabled by the features of massive open online courses (MOOCs) and detectable by virtue of the sophisticated data systems that MOOCs provide. The strategy, Copying Answers using Multiple Existences Online (CAMEO), involves a user who gathers solutions to assessment questions using a "harvester" account and then submits correct answers using a separate "master" account. We use "clickstream" learner data to detect CAMEO use among 1.9 million course participants in 115 MOOCs from two universities. Using conservative thresholds, we estimate CAMEO prevalence at 1,237 certificates, accounting for 1.3% of the certificates in the 69 MOOCs with CAMEO users. Among earners of 20 or more certificates, 25% have used the CAMEO strategy. CAMEO users are more likely to be young, male, and international than other MOOC certificate earners. We identify preventive strategies that can decrease CAMEO rates and show evidence of their effectiveness in science courses.
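The detection principle, pairing a "harvester" account's answer reveal with a "master" account's suspiciously fast correct submission on the same question, can be sketched as a simple clickstream heuristic. This is an illustrative toy only: the paper's actual detection uses richer features and conservative thresholds, and the event schema and `flag_cameo` name are assumptions:

```python
from datetime import datetime, timedelta

def flag_cameo(events, max_gap=timedelta(minutes=5)):
    """Flag (harvester, master) account pairs where one account submits
    a correct answer shortly after another account revealed the solution
    to the same question."""
    reveals = [e for e in events if e["action"] == "show_answer"]
    submits = [e for e in events if e["action"] == "correct_submit"]
    pairs = set()
    for r in reveals:
        for s in submits:
            if (s["question"] == r["question"]
                    and s["user"] != r["user"]
                    and timedelta(0) <= s["time"] - r["time"] <= max_gap):
                pairs.add((r["user"], s["user"]))
    return pairs

t0 = datetime(2016, 1, 1, 12, 0, 0)
events = [
    {"user": "harvester", "question": "q1", "action": "show_answer",
     "time": t0},
    {"user": "master", "question": "q1", "action": "correct_submit",
     "time": t0 + timedelta(seconds=40)},
    {"user": "honest", "question": "q1", "action": "correct_submit",
     "time": t0 + timedelta(hours=2)},
]
flagged = flag_cameo(events)
```

Tightening or loosening `max_gap` trades detection coverage against false accusations, which is why the paper emphasizes conservative thresholds.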
Gregory D. Kahanamoku-Meyer, John Blue, Thiago Bergamaschi, Craig Gidney, Isaac L. Chuang
When designing quantum circuits for a given unitary, it can be much cheaper to achieve a good approximation on most inputs than on all inputs. In this work we formalize this idea, and propose that such "optimistic quantum circuits" are often sufficient in the context of larger quantum algorithms. For the rare algorithm in which a subroutine needs to be a good approximation on all inputs, we provide a reduction which transforms optimistic circuits into general ones. Applying these ideas, we build an optimistic circuit for the in-place quantum Fourier transform (QFT). Our circuit has depth $O(\log (n / ε))$ for tunable error parameter $ε$, uses $n$ total qubits, i.e. no ancillas, is local for input qubits arranged in 1D, and is measurement-free. The circuit's error is bounded by $ε$ on all input states except an $O(ε)$-sized fraction of the Hilbert space. The circuit is also rather simple and thus may be practically useful. Combined with recent QFT-based fast arithmetic constructions [arXiv:2403.18006], the optimistic QFT yields factoring circuits of nearly linear depth using only $2n + O(n/\log n)$ total qubits. Additionally, we apply our reduction technique to yield an approximate QFT with well-controlled error on all inputs; it is the first to achieve the asymptotically optimal depth of $O(\log (n/ε))$ with a sublinear number of ancilla qubits. The reduction uses long-range gates but no measurements.
Richard R. Allen, Francisco Machado, Isaac L. Chuang, Hsin-Yuan Huang, Soonwon Choi
Jan 13, 2025 · quant-ph
Quantum computing and quantum sensing represent two distinct frontiers of quantum information science. In this work, we harness quantum computing to solve a fundamental and practically important sensing problem: the detection of weak oscillating fields with unknown strength and frequency. We present a quantum computing enhanced sensing protocol that outperforms all existing approaches. Furthermore, we prove our approach is optimal by establishing the Grover-Heisenberg limit -- a fundamental lower bound on the minimum sensing time. The key idea is to robustly digitize the continuous, analog signal into a discrete operation, which is then integrated into a quantum algorithm. Our metrological gain originates from quantum computation, distinguishing our protocol from conventional sensing approaches. Indeed, we prove that broad classes of protocols based on quantum Fisher information, finite-lifetime quantum memory, or classical signal processing are strictly less powerful. Our protocol is compatible with multiple experimental platforms. We propose and analyze a proof-of-principle experiment using nitrogen-vacancy centers, where meaningful improvements are achievable using current technology. This work establishes quantum computation as a powerful new resource for advancing sensing capabilities.
James Ang, Gabriella Carini, Yanzhu Chen, Isaac Chuang, Michael Austin DeMarco, Sophia E. Economou, Alec Eickbusch, Andrei Faraon, Kai-Mei Fu, Steven M. Girvin, Michael Hatridge, Andrew Houck, Paul Hilaire, Kevin Krsulich, Ang Li, Chenxu Liu, Yuan Liu, Margaret Martonosi, David C. McKay, James Misewich, Mark Ritter, Robert J. Schoelkopf, Samuel A. Stein, Sara Sussman, Hong X. Tang, Wei Tang, Teague Tomesh, Norm M. Tubman, Chen Wang, Nathan Wiebe, Yong-Xin Yao, Dillon C. Yost, Yiyu Zhou
Dec 12, 2022 · quant-ph
Many proposals to scale quantum technology rely on modular or distributed designs where individual quantum processors, called nodes, are linked together to form one large multinode quantum computer (MNQC). One scalable method to construct an MNQC is using superconducting quantum systems with optical interconnects. However, a limiting factor of these machines will be internode gates, which may be two to three orders of magnitude noisier and slower than local operations. Surmounting the limitations of internode gates will require a range of techniques, including improvements in entanglement generation, the use of entanglement distillation, and optimized software and compilers, and it remains unclear how improvements to these components interact to affect overall system performance, what performance from each is required, or even how to quantify the performance of each. In this paper, we employ a "co-design" inspired approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. In the case of superconducting MNQCs with microwave-to-optical links, we uncover a tradeoff between entanglement generation and distillation that threatens to degrade performance. We show how to navigate this tradeoff, lay out how compilers should optimize between local and internode gates, and discuss when noisy quantum links have an advantage over purely classical links. Using these results, we introduce a roadmap for the realization of early MNQCs which illustrates potential improvements to the hardware and software of MNQCs and outlines criteria for evaluating the landscape, from progress in entanglement generation and quantum memory to dedicated algorithms such as distributed quantum phase estimation. While we focus on superconducting devices with optical interconnects, our approach is general across MNQC implementations.
Rich Rines, Kevin Obenland, Isaac Chuang
May 26, 2019 · quant-ph
Experimentally realizable quantum computers are rapidly approaching the threshold of quantum supremacy. Quantum Hamiltonian simulation promises to be one of the first practical applications for which such a device could demonstrate an advantage over all classical systems. However, these early devices will inevitably remain both noisy and small, precluding the use of quantum error correction. We use high-performance classical tools to construct, optimize, and simulate quantum circuits subject to realistic error models in order to empirically determine the "simulation capacity" of near-term simulation experiments implemented via quantum signal processing (QSP), describing the relationship between simulation time, system size, and resolution of QSP circuits which are optimally configured to balance algorithmic precision and external noise. From simulation capacity models, we estimate the maximum tolerable error rate for meaningful simulation experiments on a near-term quantum computer. By exploiting symmetry inherent to the QSP circuit, we further demonstrate that its capacity for quantum simulation can be increased by at least two orders of magnitude if errors are systematic and unitary. We find that a device with $ε^2=10^{-5}$ systematic amplitude errors could meaningfully simulate systems up to $n\approx16$ with an expected failure rate below $10\%$, whereas the largest system a device with a stochastic error rate of $p_ε=10^{-5}$ could meaningfully simulate with the same rate of failure is between $n=3$ and $n=5$ (depending on the stochastic channel). Extrapolating from empirical results, we estimate that one would typically need a stochastic error rate below $p_ε=10^{-8}$ to perform a meaningful $n=50$ simulation experiment with a failure rate below $10\%$, while the same experiment could tolerate systematic unitary errors with strength $ε^2\approx10^{-6}$.
Dave Bacon, Isaac Chuang, Aram Harrow
Jul 12, 2004 · quant-ph
The Schur basis on n d-dimensional quantum systems is a generalization of the total angular momentum basis that is useful for exploiting symmetry under permutations or collective unitary rotations. We present efficient (size poly(n,d,log(1/ε)) for accuracy ε) quantum circuits for the Schur transform, which is the change of basis between the computational and the Schur bases. These circuits are based on efficient circuits for the Clebsch-Gordan transformation. We also present an efficient circuit for a limited version of the Schur transform in which one needs only to project onto different Schur subspaces. This second circuit is based on a generalization of phase estimation to any nonabelian finite group for which there exists a fast quantum Fourier transform.
Yongyi Yang, Tomaso Poggio, Isaac Chuang, Liu Ziyin
We prove that for a broad class of permutation-equivariant learning rules (including SGD, Adam, and others), the training process induces a bi-Lipschitz mapping between neurons and strongly constrains the topology of the neuron distribution during training. This result reveals a qualitative difference between small and large learning rates $η$. With a learning rate below a topological critical point $η^*$, training is constrained to preserve all topological structure of the neurons. In contrast, above $η^*$, the learning process allows for topological simplification, making the neuron manifold progressively coarser and thereby reducing the model's expressivity. Viewed in combination with the recent discovery of the edge-of-stability phenomenon, the learning dynamics of neural networks under gradient descent can be divided into two phases: first they undergo smooth optimization under topological constraints, and then they enter a second phase where they learn through drastic topological simplifications. A key feature of our theory is that it is independent of specific architectures or loss functions, enabling the universal application of topological methods to the study of deep learning.
Catherine Medlock, Alan Oppenheim, Isaac Chuang, Qi Ding
Dec 15, 2020 · quant-ph
Receiver operating characteristics (ROCs) are a well-established representation of the tradeoff between detection and false alarm probabilities in classical binary hypothesis testing. We use classical ROCs as motivation for two types of operating characteristics for binary hypothesis testing in quantum systems -- decision operating characteristics (QDOCs) and measurement operating characteristics (QMOCs). Both are described in the context of a framework we propose that encompasses the typical formulations of binary hypothesis testing in both the classical and quantum scenarios. We interpret Helstrom's well-known result regarding discrimination between two quantum density operators with minimum probability of error in this framework. We also present a generalization of previous results regarding the correspondence between classical Parseval frames and quantum measurements. The derivation naturally leads to a constructive procedure for generating many different measurements besides Helstrom's optimal measurement, some standard and others non-standard, that achieve minimum probability of error.
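The classical ROC that motivates this work traces, for each decision threshold, the detection probability $P_D$ against the false-alarm probability $P_F$. A minimal sketch of that threshold sweep (function name and toy scores are illustrative; the quantum QDOC/QMOC constructions are not reproduced here):

```python
def roc_points(scores_h1, scores_h0, thresholds):
    """Classical ROC: for each threshold t, return (P_F, P_D), where
    P_D is the fraction of H1 scores above t (detection) and P_F is
    the fraction of H0 scores above t (false alarm)."""
    pts = []
    for t in thresholds:
        pd = sum(s > t for s in scores_h1) / len(scores_h1)
        pf = sum(s > t for s in scores_h0) / len(scores_h0)
        pts.append((pf, pd))
    return pts

# Perfectly separated toy scores: the ROC passes through (P_F, P_D) = (0, 1).
h1 = [0.8, 0.9, 0.95]   # scores when hypothesis H1 is true
h0 = [0.1, 0.2, 0.3]    # scores when hypothesis H0 is true
pts = roc_points(h1, h0, thresholds=[0.0, 0.5, 1.0])
```

Sweeping the threshold from low to high moves the operating point from (1, 1) toward (0, 0); the quantum operating characteristics proposed in the paper generalize this picture to choices of measurement and decision rule.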
Dorian Gangloff, Molu Shi, Tailin Wu, Alexei Bylinskii, Boris Braverman, Michael Gutierrez, Rosanna Nichols, Junru Li, Kai Aichholz, Marko Cetina, Leon Karpa, Branislav Jelenković, Isaac Chuang, Vladan Vuletić
We study the vacuum-induced degradation of high-finesse optical cavities with mirror coatings composed of SiO$_2$-Ta$_{2}$O$_{5}$ dielectric stacks, and present methods to protect these coatings and to recover their initial quality factor. For separate coatings with reflectivities centered at 370 nm and 422 nm, a vacuum-induced continuous increase in optical loss occurs if the surface-layer coating is made of Ta$_{2}$O$_{5}$, while it does not occur if it is made of SiO$_2$. The incurred optical loss can be reversed by filling the vacuum chamber with oxygen at atmospheric pressure, and the recovery rate can be strongly accelerated by continuous laser illumination at 422 nm. Both the degradation and the recovery processes depend strongly on temperature. We find that a 1 nm-thick layer of SiO$_2$ passivating the Ta$_{2}$O$_{5}$ surface layer is sufficient to reduce the degradation rate by more than a factor of 10, strongly supporting surface oxygen depletion as the primary degradation mechanism.