Heterogeneous Distributed Zeroth-Order Nonconvex Optimization with Communication Compression
/ Authors
/ Abstract
Distributed zeroth-order optimization is increasingly applied in heterogeneous scenarios where agents possess distinct data distributions and objectives. This heterogeneity poses fundamental challenges for convergence analysis: existing methods rely on relatively strong assumptions to ensure theoretical guarantees. Specifically, at least one of the following three assumptions is usually required: (i) data homogeneity across agents, (ii) $\mathcal{O}(pn)$ function evaluations per iteration, where $p$ denotes the problem dimension and $n$ the number of agents, or (iii) the Polyak--{\L}ojasiewicz (P--L) or strong convexity condition with the corresponding constant known. To overcome these limitations, we propose the Heterogeneous Distributed Zeroth-Order Compressed (HEDZOC) algorithm, built on a two-point zeroth-order gradient estimator and a general class of compressors. Without assuming data homogeneity, we develop a convergence analysis covering three settings: general nonconvex functions, functions satisfying the P--L condition with an unknown P--L constant, and functions satisfying it with a known constant. To the best of our knowledge, HEDZOC is the first distributed zeroth-order method whose convergence is established without relying on any of the above three assumptions. Moreover, it achieves a linear-speedup convergence rate comparable to state-of-the-art results obtained under data homogeneity and exact communication assumptions. Finally, experiments on heterogeneous adversarial example generation validate the theoretical results.
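To make the abstract's terminology concrete, the following is a minimal sketch of a standard two-point zeroth-order gradient estimator and a common compressor condition; the exact constructions used by HEDZOC may differ, and the symbols $\mu$ (smoothing radius) and $u$ (random direction) are illustrative assumptions rather than the paper's notation. For agent $i$ with local objective $f_i$, a standard two-point estimator takes the form
\[
\hat{g}_i(x) \;=\; \frac{p\,\bigl(f_i(x+\mu u) - f_i(x-\mu u)\bigr)}{2\mu}\, u, \qquad u \sim \mathrm{Unif}(\mathbb{S}^{p-1}),
\]
which requires only two function evaluations per agent per iteration, in contrast to the $\mathcal{O}(p)$ evaluations of coordinate-wise estimators. A typical contractive compressor $\mathcal{C}$ satisfies
\[
\mathbb{E}\,\bigl\|\mathcal{C}(x) - x\bigr\|^2 \;\le\; (1-\delta)\,\|x\|^2 \quad \text{for some } \delta \in (0,1],
\]
which covers, e.g., Top-$k$ sparsification and scaled quantization; the ``general class of compressors'' referenced in the abstract may be broader than this condition.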