Nonsmooth Nonconvex-Concave Minimax Optimization: Convergence Criteria and Algorithms
Abstract
This paper considers the constrained stochastic nonsmooth minimax optimization problem $\min_{\mathbf{x}\in\mathcal{X}}\max_{\mathbf{y}\in\mathcal{Y}}f\left(\mathbf{x},\mathbf{y}\right)=\mathbb{E}[F(\mathbf{x},\mathbf{y};\boldsymbol{\xi})]$, where the objective $f(\mathbf{x},\mathbf{y})$ is concave in $\mathbf{y}$ but possibly nonconvex in $\mathbf{x}$, the stochastic component $F(\mathbf{x},\mathbf{y};\boldsymbol{\xi})$ indexed by the random variable $\boldsymbol{\xi}$ is mean-squared Lipschitz continuous, and the feasible sets $\mathcal{X}$ and $\mathcal{Y}$ are convex and compact. We introduce the notion of an $(\eta_x,\eta_y,\delta,\varepsilon)$-Goldstein saddle stationary point (GSSP) to characterize convergence for constrained nonsmooth minimax problems. We then develop projected gradient-free descent ascent methods that find $(\eta_x,\eta_y,\delta,\varepsilon)$-GSSPs of the objective function $f(\mathbf{x},\mathbf{y})$ with non-asymptotic convergence rates. We further propose nested-loop projected gradient-free descent ascent methods and establish non-asymptotic convergence guarantees for finding $(\eta,\delta,\varepsilon)$-generalized Goldstein stationary points (GGSPs) [Liu et al., 2024] of the primal function $\Phi(\mathbf{x})\triangleq\max_{\mathbf{y}\in\mathcal{Y}}f\left(\mathbf{x},\mathbf{y}\right)$. Notably, our algorithm designs and theoretical analyses do not require additional assumptions, such as the weak convexity used in prior works on nonsmooth minimax optimization [Lin et al., 2025, Boţ and Böhm, 2023].
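As a minimal illustrative sketch (notation ours; not necessarily the paper's exact scheme), a projected gradient-free descent ascent iteration replaces the true partial gradients with zeroth-order estimates built from evaluations of $F$, e.g., a two-point randomized-smoothing estimator $\hat{\mathbf{g}}_{\mathbf{x}}=\frac{d_x}{2\mu}\left[F(\mathbf{x}+\mu\mathbf{w},\mathbf{y};\boldsymbol{\xi})-F(\mathbf{x}-\mu\mathbf{w},\mathbf{y};\boldsymbol{\xi})\right]\mathbf{w}$ with smoothing radius $\mu>0$ and $\mathbf{w}$ drawn uniformly from the unit sphere in $\mathbb{R}^{d_x}$ (with $\hat{\mathbf{g}}_{\mathbf{y}}$ defined analogously), and then projects each iterate back onto its feasible set:
$$\mathbf{x}_{t+1}=\mathcal{P}_{\mathcal{X}}\big(\mathbf{x}_t-\gamma_x\,\hat{\mathbf{g}}_{\mathbf{x}}(\mathbf{x}_t,\mathbf{y}_t)\big),\qquad \mathbf{y}_{t+1}=\mathcal{P}_{\mathcal{Y}}\big(\mathbf{y}_t+\gamma_y\,\hat{\mathbf{g}}_{\mathbf{y}}(\mathbf{x}_t,\mathbf{y}_t)\big),$$
where $\gamma_x,\gamma_y>0$ are step sizes and $\mathcal{P}_{\mathcal{X}},\mathcal{P}_{\mathcal{Y}}$ denote Euclidean projections onto $\mathcal{X}$ and $\mathcal{Y}$.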