Showing 1–20 of 25 results
Date | Name
Sep 20, 2018 | Stationary frequencies and mixing times for neutral drift processes with spatial structure
Apr 30, 2019 | AdaNet: A Scalable and Flexible Framework for Automatically Learning Ensembles
Jul 31, 2020 | Cold Posteriors and Aleatoric Uncertainty
Dec 2, 2019 | A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning
Oct 14, 2020 | Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit
Sep 11, 2015 | Spectral Statistics of Sparse Random Graphs with a General Degree Distribution
Oct 20, 2019 | Learning GANs and Ensembles Using Discrepancy
May 14, 2022 | Homogenization of SGD in high-dimensions: Exact dynamics and generalization properties
Aug 15, 2020 | The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Jul 31, 2020 | Finite Versus Infinite Neural Networks: an Empirical Study
Nov 4, 2020 | Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Mar 9, 2023 | Kernel Regression with Infinite-Width Neural Networks on Millions of Examples
Nov 16, 2021 | Covariate Shift in High-Dimensional Random Feature Regression
Jun 25, 2020 | The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks
Feb 8, 2022 | Understanding the bias-variance tradeoff of Bregman divergences
Oct 30, 2019 | Investigating Under and Overfitting in Wasserstein Generative Adversarial Networks
Jun 21, 2022 | Ensembling over Classifiers: a Bias-Variance Perspective
Jun 15, 2022 | Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions
Jul 9, 2014 | Universality of fixation probabilities in randomly structured populations
Nov 6, 2020 | Underspecification Presents Challenges for Credibility in Modern Machine Learning