Showing 1–20 of 36 results
Date / Name
Jan 1, 2023 / A Sequential Quadratic Programming Method with High Probability Complexity Bounds for Nonlinear Equality Constrained Stochastic Optimization
Jun 17, 2024 / Modified Line Search Sequential Quadratic Methods for Equality-Constrained Optimization with Unified Global and Local Convergence Guarantees
Mar 24, 2023 / Balancing Communication and Computation in Gradient Tracking Algorithms for Decentralized Optimization
May 26, 2025 / Retrospective Approximation Sequential Quadratic Programming for Stochastic Optimization with General Deterministic Nonlinear Constraints
May 7, 2022 / First- and Second-Order High Probability Complexity Bounds for Trust-Region Methods with Noisy Oracles
Apr 8, 2022 / Accelerating Stochastic Sequential Quadratic Programming for Equality Constrained Optimization using Predictive Variance Reduction
Jul 26, 2017 / A Robust Multi-Batch L-BFGS Method for Machine Learning
Jun 3, 2025 / A Line Search Framework with Restarting for Noisy Optimization Problems
Apr 23, 2024 / Second-Order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients
Nov 15, 2023 / Non-Uniform Smoothness for Gradient Descent
Mar 18, 2019 / Nested Distributed Gradient Methods with Adaptive Quantized Communication
Jun 6, 2020 / SONIA: A Symmetric Blockwise Truncated Optimization Algorithm
Jan 28, 2019 / Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample
Oct 8, 2019 / Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
Mar 9, 2025 / Optimistic Noise-Aware Sequential Quadratic Programming for Equality Constrained Optimization with Rank-Deficient Jacobians
May 29, 2019 / Linear Interpolation Gives Better Gradients than Gaussian Smoothing in Derivative-Free Optimization
May 30, 2019 / Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1
Jul 25, 2021 / Full-Low Evaluation Methods for Derivative-Free Optimization
Sep 6, 2023 / Adaptive Consensus: A Network Pruning Approach for Decentralized Optimization
May 19, 2016 / A Multi-Batch L-BFGS Method for Machine Learning