Showing 1–20 of 63 results
Date · Name
May 31, 2019 · PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization
Jun 1, 2018 · Global linear convergence of Newton's method without strong-convexity or Lipschitz gradients
Jun 16, 2020 · Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing
Mar 26, 2018 · On Matching Pursuit and Coordinate Descent
Oct 16, 2018 · Efficient Greedy Coordinate Descent for Composite Problems
Jul 10, 2022 · Mechanisms that Incentivize Data Sharing in Federated Learning
Aug 8, 2020 · Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning
Feb 3, 2022 · Byzantine-Robust Decentralized Learning via ClippedGossip
Jun 8, 2020 · Secure Byzantine-Robust Machine Learning
Dec 18, 2020 · Learning from History for Byzantine Robust Optimization
Sep 11, 2019 · The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication
Oct 28, 2021 · Towards Model Agnostic Federated Learning Using Knowledge Distillation
Oct 14, 2019 · SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
Oct 8, 2021 · RelaySum for Decentralized Deep Learning on Heterogeneous Data
Aug 4, 2020 · PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning
Jan 28, 2019 · Error Feedback Fixes SignSGD and other Gradient Compression Schemes
Jul 11, 2019 · Amplifying Rényi Differential Privacy via Shuffling
May 27, 2023 · Federated Conformal Predictors for Distributed Uncertainty Quantification
Apr 16, 2024 · Privacy Can Arise Endogenously in an Economic System with Learning Agents
Apr 24, 2024 · Collaborative Heterogeneous Causal Inference Beyond Meta-analysis