Showing 1–20 of 74 results
Date | Name
Oct 13, 2022 | Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence
Oct 3, 2022 | Bayes-optimal limits in structured PCA, and how to reach them
Jul 11, 2018 | Decoding Reed-Muller and Polar Codes by Successive Factor Graph Permutations
Jan 22, 2016 | Comparing the Bit-MAP and Block-MAP Decoding Thresholds of Reed-Muller Codes on BMS Channels
Jan 18, 2016 | Reed-Muller Codes Achieve Capacity on Erasure Channels
May 22, 2024 | Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods
May 21, 2025 | Neural Collapse is Globally Optimal in Deep Regularized ResNets and Transformers
Feb 9, 2026 | Optimal Estimation in Orthogonally Invariant Generalized Linear Models: Spectral Initialization and Approximate Message Passing
Mar 5, 2026 | Improved Scaling Laws via Weak-to-Strong Generalization in Random Feature Ridge Regression
Dec 27, 2022 | Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
May 17, 2022 | Sharp asymptotics on the compression of two-layer neural networks
Feb 3, 2023 | Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels
Mar 13, 2023 | Concentration without Independence via Information Measures
Dec 20, 2019 | Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks
Dec 24, 2020 | Parallelism versus Latency in Simplified Successive-Cancellation Decoding of Polar Codes
Nov 3, 2021 | Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks
Jan 9, 2018 | A New Coding Paradigm for the Primitive Relay Channel
Nov 3, 2016 | Capacity-Achieving Rate-Compatible Polar Codes for General Channels
Feb 21, 2024 | Average gradient outer product as a mechanism for deep neural collapse
Oct 7, 2024 | Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse